What is Hallucination (AI)?
When an AI model generates information that sounds plausible but is factually incorrect or made up.
Why It Matters
Hallucinations mean AI outputs should be verified, especially for factual claims or important decisions.
Real-World Example
An AI confidently citing a research paper that does not actually exist.
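One way to see why verification matters: a fabricated citation can often be caught by checking the model's claimed sources against a list you already trust. This is only a toy sketch — `KNOWN_PAPERS`, `find_unverified_citations`, and the model answer are all invented for the illustration, not part of any real tool.

```python
# Toy illustration of why AI outputs need checking: flag any paper title
# the model cites that is not in a list of sources we already trust.
# The titles and the model answer below are invented for this sketch.
import re

KNOWN_PAPERS = {
    "Attention Is All You Need",
    "BERT: Pre-training of Deep Bidirectional Transformers",
}

def find_unverified_citations(model_output: str, known: set) -> list:
    """Return quoted paper titles we cannot verify against our list."""
    claimed = re.findall(r'"([^"]+)"', model_output)
    return [title for title in claimed if title not in known]

answer = 'See "Attention Is All You Need" and "Quantum Prompting at Scale".'
print(find_unverified_citations(answer, KNOWN_PAPERS))
# prints ['Quantum Prompting at Scale'] -- the made-up citation is flagged
```

Real verification is harder than a set lookup, of course — the point is simply that hallucinated specifics can and should be cross-checked against a source you control.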
“Understanding terms like Hallucination (AI) matters because it helps you have better conversations with developers and make smarter decisions about your software. You do not need to be technical. You just need to know enough to ask the right questions.”
Related Terms
Large Language Model (LLM)
An AI system trained on massive amounts of text that can understand and generate human language.
Retrieval-Augmented Generation (RAG)
A technique where an AI model retrieves relevant information from a knowledge base before generating a response.
AI Safety
The field of research focused on ensuring AI systems behave as intended and do not cause harm.
Prompt Engineering
The practice of crafting effective instructions for AI models to get the best possible responses.
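The RAG idea above can be sketched in a few lines: look up the most relevant document first, then hand it to the model alongside the question. Everything here (`KNOWLEDGE_BASE`, `retrieve`, `build_prompt`, the word-overlap scoring) is a made-up toy stand-in — real systems use embeddings and vector search — but the shape of the technique is the same.

```python
# Minimal sketch of the RAG idea: pick the document from a tiny knowledge
# base that best matches the question, then paste it into the prompt
# before the model sees it. Word-overlap scoring is a toy stand-in for
# the embedding-based retrieval real systems use.
import re

KNOWLEDGE_BASE = [
    "buildDay Melbourne is a one-day, hands-on web development workshop.",
    "Hallucination is when an AI model makes up plausible-sounding facts.",
]

def words(text: str) -> set:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list) -> str:
    """Return the document sharing the most words with the question."""
    return max(docs, key=lambda d: len(words(question) & words(d)))

def build_prompt(question: str) -> str:
    """Prepend the retrieved context so the model answers from it."""
    context = retrieve(question, KNOWLEDGE_BASE)
    return f"Using only this context: {context}\nQuestion: {question}"

print(build_prompt("What is hallucination?"))
```

Because the model is told to answer from retrieved context rather than from memory alone, RAG is one of the main practical defences against hallucination.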
Learn More at buildDay Melbourne
Want to understand these concepts hands-on? Join our one-day workshop and build a real web application from scratch.
Related Terms
Transformer
A type of AI architecture that processes text by paying attention to relationships between all words at once, rather than one word at a time.
Attention Mechanism
A technique that lets AI models focus on the most relevant parts of the input when generating output.