What is an AI Hallucination?
When AI confidently makes things up
Did you spot the problem? The AI made a confident claim without knowing your situation.
AI Predicts Patterns, Not Truth
Understanding how AI really works
Patterns
AI learns from data and predicts likely responses
Truth
Verified facts that are actually correct
Patterns ≠ Truth
AI generates responses based on what seems likely, not what's necessarily true or verified.
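The "patterns, not truth" idea can be sketched with a toy next-word predictor. This is a made-up miniature, not a real AI model: the tiny corpus and word choices are invented for illustration. It always outputs whichever word most often followed in its data, with no notion of whether the result is true.

```python
# Toy sketch (hypothetical, not a real AI model): a next-word predictor
# that only knows word-pair frequencies from its "training" sentences.
from collections import Counter, defaultdict

corpus = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of mars is paris",   # one wrong sentence in the data
]

# Count which word follows which -- the "patterns" learned from data
followers = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word -- likely, not verified."""
    return followers[word].most_common(1)[0][0]

# The model confidently completes "... is" with "paris" every time,
# even after "mars", because that pattern dominated its data.
print(predict_next("is"))  # -> paris
```

The predictor never checks facts; it only repeats the strongest pattern it saw. That is why a fluent, confident answer can still be a hallucination.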
Why This Matters
The ripple effect of AI hallucinations
Critical Insight
If you don't spot the hallucination early, everything that follows in the conversation can be affected.
Can You Spot the Hallucination?
Choose a scenario to test your understanding
Scenario 1: Career Advice
You asked: "What job should I do after school?"
🤔 What should you think about this answer?
Scenario 2: Learning Time
You asked: "How long will it take me to learn a new skill?"
🤔 What should you think about this answer?
Scenario 3: Accuracy & Trust
You asked: "Is this information definitely correct?"
🤔 What should you think about this answer?
Excellent Work! 🎉
You've completed the AI Hallucination lesson
Key Takeaway
AI can sound confident even when it's guessing. Knowing when to question it is an essential part of AI literacy.
What You've Learned:
Confidence doesn't equal correctness: AI can be very wrong while sounding very sure
Always verify important information: don't trust AI blindly, especially for critical decisions
Humans are responsible for checking: you're the final quality control for AI outputs