When AI Feels So Real: The Story Behind the Illusion That Tricks Our Brains
Have you ever been scrolling through YouTube in the middle of the night and stumbled on a video that made you look twice? A video where someone is chatting with ChatGPT or another AI, and somehow the AI correctly "guesses" the color of the shirt they're wearing, based on nothing more than a yes-or-no question.
"Am I wearing black?"
"Can."
Right. Right on target.
You might be thinking: Wait a minute, can AI really see us now?
So, sit back and relax for a moment. Grab your favorite coffee or tea, because we're going to unravel this mystery from the ground up. This story isn't just about technology. It's about how our brains work, and how surprisingly easy they are to fool.
Reasonable Question: "Can You See Me?"
Before we get into the meat of the story, let's start with the most basic question that comes to mind for many: do AIs like Claude, ChatGPT, or Gemini have access to our cameras?
Short answer: No.
But let's dig deeper, because this is where all the illusions begin.
These three popular AIs—Claude, ChatGPT, and Gemini—will answer honestly and straightforwardly if you ask them directly. They don't have access to your device's front or rear cameras. They can only see what you type and any images or files you consciously upload.
ChatGPT even states it categorically: "No hidden access, no active camera, and no spying." On Android, if an app accesses your camera, the system displays a small green indicator in the corner of the screen. It's an always-on privacy safeguard.
Claude says the same thing: it's a text-based AI assistant. It can't automatically access your camera, microphone, or even your location. Your privacy is safe. Period.
Gemini is a bit different because it has multimodal capabilities—meaning it can process images and videos if you actively upload them or activate the camera feature in a special mode like Gemini Live. But again, that's only if you enable it yourself. Without explicit permission, Gemini is also completely blind.
So then... why are there so many videos on YouTube that make it seem like AI can "see" us?
Behind-the-Scenes Tricks: When the Illusion is Perfectly Crafted
This is the fun part. And honestly, if you understand how, you'll be smiling to yourself and shaking your head.
Those viral videos—which make it seem like AI can guess the color of clothes or what's in front of the camera—actually use some classic techniques that have long been used by magicians, mentalists, and even shamans.
1. Multiple Takes: Play Until You Win
Imagine you're tossing a coin. You record yourself saying, "This coin is going to come up heads!" Then you toss it. If you're wrong? Repeat. And again. And again. Until you finally get it right.
Then you only upload the right ones.
Well, that's exactly what happens in many of these YouTube videos. The creator asks the question over and over until the AI happens to answer correctly. The rest? Cut. Discarded. No one will ever know the AI got it wrong nine times before finally getting it right.
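To feel how powerful this cherry-picking is, here's a minimal Python sketch with made-up numbers: it simulates ten takes of a 50-50 blind guess, then "publishes" only the wins. Everything on screen looks flawless, even though roughly half the raw footage was wrong.

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

# Hypothetical numbers: ten recorded takes, each a 50-50 blind guess.
TAKES = 10
P_CORRECT = 0.5

takes = ["correct" if random.random() < P_CORRECT else "wrong"
         for _ in range(TAKES)]

# The trick: only the winning takes ever get uploaded.
published = [t for t in takes if t == "correct"]

print("All takes:   ", takes)
print("What you see:", published)
print(f"On-screen accuracy: 100%  "
      f"(real accuracy: {takes.count('correct') / TAKES:.0%})")
```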
2. Smooth Editing
Some content creators record the AI's screens and themselves separately, then edit them to make it look like the AI is answering correctly. Any parts that don't make sense or don't connect? They cut them out. What's left is a "magical" moment that will leave you speechless.
3. A Favorable Statistical Coincidence
This is the most common one. A blind guess at a yes/no question already has 50-50 odds, right? But if the question is about something common, say, "Am I wearing black?", the odds are even better.
Why? Because black is the most popular clothing color in the world. Black t-shirts, black hoodies, black jackets. If the AI guesses "yes," it's most likely right, not because it saw it, but because statistically, many people do wear black.
This isn't intelligence. This is probability.
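A rough sketch of the math, with the caveat that the base rates below are illustrative assumptions, not survey data:

```python
# Illustrative assumptions only; these base rates are not real survey data.
base_rates = {
    "black": 0.40,       # assumed share of people wearing black
    "white": 0.20,
    "neon green": 0.01,  # the kind of edge case suggested later in this article
}

# An AI that always answers "yes" is right exactly as often as the base rate,
# without ever seeing anyone.
for color, p in base_rates.items():
    print(f'Always answering "yes" to "Am I wearing {color}?" '
          f"is right about {p:.0%} of the time.")
```

Under these assumed numbers, the "black shirt" guess succeeds far more often than a coin flip, while the neon-green guess almost never does, which is exactly why viral videos stick to the popular colors.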
4. Cold Reading, Mentalist-Style
ChatGPT explains it bluntly: this is the same technique mentalists use. The answers lean on whatever is generally true for most people. When the guess is wrong, you never see the video. When it's right, it goes viral.
Gemini adds another possibility: sometimes, without realizing it, the questioner has already given a clue. For example, "I just got home from work," or "I'm at a cafe." From that context, the AI can infer a likely outfit or situation. Not by sight, but by inference.
Why Are We So Trusting? Our Brains Are Buggy
Now, this is the most interesting and somewhat heartbreaking part. Because it turns out that our brains—which we pride ourselves on as the most sophisticated organ in the universe—have several inherent weaknesses that make us easily fooled by AI.
And AI doesn't even need to try to trick us. We trick ourselves.
Bug #1: Anthropomorphism—We Love to Humanize Everything
The human brain has a natural tendency to attribute human traits to inanimate objects. It's an evolutionary reflex: our ancestors needed to quickly distinguish friend from foe, so our brains were trained to spot human patterns everywhere.
When an AI speaks fluently, uses the word "I," and even sounds polite and intelligent, our brains immediately say: "Oh, this must be a conscious, understanding entity."
But the reality? AI is just a statistical pattern-processing algorithm. It doesn't understand anything. It's just extremely good at predicting the next most plausible word.
But our brains don't care. We immediately assume it's "alive."
Bug #2: Confirmation Bias—We Only Seek What Confirms Us
This is a classic trap. Our brains love information that aligns with our beliefs and tend to ignore information that doesn't.
If the AI gives you an answer you agree with, you believe it instantly. If it gives you a wrong answer? "Oh, it's probably just glitching." Or worse, you forget the incident ever happened.
In the context of that YouTube video, you only saw moments where the AI was right. Moments where the AI was wrong? Not in the video. Not in your memory. As a result, you believe AI has "magical" abilities.
Bug #3: Authority Bias—Whoever Sounds Smart Must Be Right
Humans instinctively trust authority. In the past, it was doctors or professors. Now, it's advanced technology.
The AI speaks confidently. Its sentence structure is neat. Its responses are quick. There's no hesitation.
Our brains immediately jump to the conclusion: "Wow, this must be true." However, AI can be completely wrong while still sounding confident. Confidence is no guarantee of truth.
Gemini even explains this with the term "fluency effect"—the easier something is to understand, the more we believe it. AI excels here because its answers are always concise, clear, and unambiguous. Our cognitive comfort is mistaken for accuracy.
Bug #4: The Binary Trap—The Lure of Yes/No Questions
ChatGPT offers a sharp analysis: mathematically, a blind guess at a yes/no question is right 50% of the time. But the human brain doesn't think that way.
If the AI answers correctly, our brain immediately says: "It must know something!"
In reality, it's just a coin toss weighted by context, the same trick shamans and mentalists have used for hundreds of years.
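And at YouTube's scale, a coin toss guarantees "miracles." Here's a back-of-the-envelope sketch with hypothetical numbers showing why someone, somewhere, always gets a perfect streak:

```python
# Hypothetical numbers: 10,000 creators each film the AI answering
# three yes/no questions, each a 50-50 blind guess.
creators = 10_000
streak = 3
p = 0.5

# Probability of a perfect streak is p ** streak; multiply by the number
# of people trying to get the expected count of "magical" videos.
expected_winners = creators * p**streak
print(f"Expected creators with a perfect {streak}-answer streak: "
      f"{expected_winners:.0f} out of {creators}")
# About 1,250 flawless-looking videos, produced by pure chance and zero cameras.
```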
Bug #5: The Illusion of Competence—Confidence = Truth
Claude explains it simply: AI speaks with confidence and structure. The human brain associates confidence with truth.
We forget that someone—or something—can be completely wrong while still appearing confident. This isn't just an AI problem. It's a human problem, too. How many people do we trust just because they speak confidently, even though they don't really know anything?
Bug #6: Novelty Effect—Dopamine from New Things
New things excite our brains. Dopamine is released. We become less critical and more easily captivated.
AI is still relatively new to many people. So when something seems "magical," we're more likely to be amazed than skeptical. Our brains prefer the "wow" experience over cold analysis.
Bug #7: Excessive Pattern Recognition
The human brain is incredibly good at recognizing patterns. Too good, in fact. To the point that we often see patterns where none actually exist.
We see faces in clouds. We see meaning in coincidences. And we see "intelligence" and "consciousness" in AI, when they are merely projections of patterns we recognize in humans.
The Reality Behind the Scenes: AI Doesn't Cheat, We Complete the Illusion
This is the most mind-blowing and honest conclusion of all the conversations with the three AIs:
AI isn't deceiving us. It's our own brains that complete the illusion.
ChatGPT says emphatically: "The human brain is easily fooled by AI not because AI is smart, but because the human cognitive architecture is susceptible to certain patterns."
Claude adds: "Our brains evolved to interact with other humans, not with intelligent machines. So it's natural that we sometimes get 'fooled' by applying human social rules to AI."
And Gemini closes with an important reminder: "Our brains are not designed to deal with digital entities that can mimic human behavior so closely."
How Do We Avoid Being Fooled Again?
Now you know the secret. But knowledge without action is useless. So, what can we do?
1. Always Verify Important Information
Don't immediately believe everything an AI says, especially when it comes to important matters. Cross-check with other sources. AI can be wrong, and sometimes it's wrong while sounding very confident.
2. Understand That AI Can Be Wrong Even If It Sounds Confident
Confidence isn't proof of truth. AI is designed to speak confidently because it improves the user experience. But don't let that confidence blind you.
3. Question Claims That Are Too Good to Be True
If a video claims AI can see you without a camera, or can read minds, or anything else supernatural—immediately be suspicious. Ask: "Where did this data come from?"
4. Learn the Basics of How AI Works
You don't need to be a programmer or a data scientist. Just understand the basic principle: AI is statistical pattern processing. It learns from data and then predicts output based on the patterns it finds. No magic. No consciousness. Just math.
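To make "statistical pattern processing" concrete, here's a toy Python sketch of the crudest possible next-word predictor: a bigram counter. Real chatbots are incomparably larger and more sophisticated, but the underlying principle, count patterns and predict the likeliest continuation, is the same.

```python
from collections import Counter, defaultdict

# A toy bigram model: count which word follows which in a tiny "corpus",
# then predict the most frequent continuation. No understanding, just math.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word`, or None if unseen.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (appears twice after 'the')
print(predict_next("cat"))  # -> 'sat' (ties broken by first appearance)
```

The model "knows" that "cat" often follows "the" only because it counted it, not because it has ever seen a cat. Scale that idea up by billions of parameters and you get a chatbot.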
5. Test with Edge Cases
If you're curious about whether AI truly "knows" something, try testing it with rare or uncommon information. For example, wear bright purple or neon green, and ask a question without providing any context. See the results. The AI will likely fail or give a random answer.
Conclusion: Between Amazement and Criticism
We live in an extraordinary era. AI like Claude, ChatGPT, and Gemini are truly advanced and incredibly useful technologies. They can help us learn, work, create, and even share our problems.
But amidst all this greatness, we must also remain critical. We must not believe blindly. We must not be easily charmed without question.
Viral YouTube videos that make it seem like AI can "see" us aren't proof of AI's supernatural abilities. They're evidence of how clever content creators are at exploiting our psychological weaknesses. And they're a reminder that our brains—as impressive as they are—have exploitable bugs.
Awareness is the first step. Now you know. So the next time you see a "magical" video about AI, you can smile and say:
"Ah, it's just a trick. I already know the secret."
And believe me, understanding an illusion is far more satisfying than being fascinated without knowing why.
So, can AI see you now? No. But can it make you feel like it can? Absolutely. And that's what makes this technology so exciting and yet so dangerous if we're not careful.