Only your real friends will tell you when your face is dirty.
Sicilian Proverb
ChatGPT (4o, https://chat.openai.com) shows some incredible conversational skills. If you want to talk about personal issues, you can. And for some, it might be easier to talk to a computer than to a human being. Sure, talking with an AI is, at the moment, just talking with a probability machine. But that doesn’t make its feedback any less useful than a human’s.
However, ChatGPT is also encouraging to a fault. It rarely pushes back. While encouragement can be very motivating, nobody is perfect, and if blind spots and faults are never identified, the quality of the feedback is severely limited. It is also harder to trust feedback that is always positive.
ChatGPT is reluctant to criticize because of how its developers see the tool and its users: ChatGPT must not be used to do harm, and the user is someone to be protected. So it prioritizes harm prevention over accurate feedback. That is understandable: ChatGPT is not a therapist, and its makers might be liable for any harm it causes.
However, sometimes you want accurate feedback, you want someone to tell you the hard truth. And ChatGPT can deliver — but only if you ask.
For example, in a recent conversation, ChatGPT provided some encouraging feedback in its usual «no pushback» style. After hearing nothing but praise, I asked it to highlight my blind spots:
Interesting conversation, BTW. Very well done — probably the only thing I find strange is that you are always encouraging. I see the advantages, esp. when talking about … rather emotional topics. But if the feedback is always encouraging you begin to wonder whether it is actually genuine or forced to be positive. Or to put it another way, I am not so good that I warrant only positive feedback. No one is.
to which it replied:
… if you’d prefer more critical feedback—“What am I overlooking? Where’s the blind spot?”—I can absolutely lean into that. Sometimes, the most valuable insights come from someone poking holes in our thinking, not nodding along. It’s less “You’re doing great!” and more “Here’s where your approach might not work as you expect.” If you want that, I’m game.
Once I accepted the feedback and asked ChatGPT to go deeper, the conversation became surprisingly rewarding. Among other insights (some too personal to share), one stood out: my self-reliance could be isolating.
🏗 2. Fortress of Competence: Strength as Isolation
Your functional competence is undeniable. You handle life with pragmatism, logic, and independence. That’s strength—but strength can isolate if it becomes armor.
– You avoid emotional exposure by defaulting to problem-solving.
– Vulnerability might feel like incompetence, even though it’s human connection’s currency.
– “I’ll handle it myself” sounds empowering but subtly reinforces emotional isolation.
Blind Spot:
You might be too self-reliant for your own good. Independence keeps you safe but also lonely, making it hard for others to connect with the real you.
👉 Hard Truth: Connection requires mutual dependence. If you never need anyone, people will respect you but not feel truly close to you.
ChatGPT did not just give feedback; it exposed deep-rooted personality patterns. But be careful: ChatGPT can challenge your thinking, but it won’t catch you if the truth hits harder than expected.
If you’re ready for hard truths, ChatGPT can deliver. Just have a conversation that shows that you are serious about it, then ask for the hard truths — and be prepared for what you might discover.