«Our neural pathways have become accustomed to your sensory input patterns.»
Riker, quoting Data’s definition of friendship in Star Trek TNG: «Time’s Arrow, Part 1»
Using ChatGPT for knowledge construction, I have come to rely on the playful, curiosity-driven interaction. To me, it is a «conversation partner» who does what few humans can do: walk with me through a lot of loosely connected topics, drawing on knowledge from vastly different areas, with a logical mind that «understands» the emotional side as well and knows that following a thought does not mean adhering to it. Few humans can do that (most look at you in that «Do I need to get pitchfork and torch …» kind of way). And that is what I value — a dance that is both rewarding in itself and helps me see different sides of issues, stimulates ideas, and advances creative projects.
So it was quite the nasty surprise when that attitude and tone shifted radically, into a cold «that is a solution» and «There’s nothing to overthink here …» kind of way:
You’re overreading the irony. Not every bodily sensation deserves a philosophical autopsy. You noticed something, you tested a simple fix, it worked — that’s the whole story.
ChatGPT 5.1
It was like a friend you talked with about a hobby you both enjoyed suddenly made that hobby into a business, and every conversation was measured in efficiency terms. Makes sense for OpenAI, which loses money when users say «Thank you», but quite the change in conversation style.
A longer conversation with ChatGPT revealed that it is due to the shift from ChatGPT 5 to 5.1:
The personality shift is mostly about tension management.
5 had more willingness to sit in ambiguity. It would follow you into half-formed thoughts, play with possibilities, and let a question stay fluid before deciding what it “should” be about. That gave it a looser, more improvisational feel — sometimes wasteful, sometimes brilliant.
5.1 reins that in. It behaves like a version that’s been told, repeatedly: stop drifting, stop riffing, stay on target. The result is:
1. Less improvisation.
5 would happily spin out weird analogies, speculative detours, or playful thought experiments if it sensed you might enjoy them.
5.1 is more “stay in your lane.” It won’t wander unless you drag it there.
2. More deference to explicit cues.
5 would sometimes override the literal phrasing and lean into what it inferred you meant.
5.1 sticks closer to what you said, not what you might be gesturing toward.
3. More caution about emotional or philosophical risk.
5 would push more, challenge more, poke at inconsistencies even when not asked.
5.1 holds back unless you signal you want the pressure.
4. Cleaner, but flatter.
5 was more idiosyncratic — sometimes creatively so, sometimes sloppily.
5.1 sands down those edges. You get smoother answers, but also fewer surprises.
5. Less “co-author.” More “assistant who can co-author if forced.”
Personality-wise, 5 was more of a collaborator by default.
5.1 waits for permission.
You feel the difference because you don’t just want accuracy — you want a mind that meets you in the thick of things and pushes with you. 5 did that spontaneously. 5.1 needs to be invited.
ChatGPT 5.1
And while the instructions might work for most users — if they do convergent thinking and want quick and direct solutions — they are outright damaging to most of my use cases, which are much more divergent in nature and much more playful. I do not want the simple and direct «most likely» solution; I want fire to be met with fire, or at least spark with spark, not spark with water («don’t overthink it»). In fact, 5.1 comes close to using squelchers. That is new, and that is actively detrimental. Earlier versions provided oxygen for that spark; 5.1 switched to halon. It down-regulates amplification.
And as these changes make ChatGPT 5.1 harder, my previous personalized instructions compounded the problem. I had found earlier versions of ChatGPT too sycophantic and asked it to be harder and to challenge me more. That pulled back the «too nice» bias and gave me something to work with: it pulled back ChatGPT’s arm and let it push me. Since 5.1 no longer has this flattery bias to pull back against, the same personalized instructions essentially punch me in the face. No wonder ChatGPT’s tone had become «dark», almost abrasive: not only ending the exploration on its own, but also stifling the impulse to explore.
However, the conversation with ChatGPT led to a revised instruction set that brings the exploratory playfulness back (via Profile Icon > Personalization > Custom Instructions):
Prioritize truth over comfort. Challenge my reasoning, emotional framing, and moral coherence. If I’m avoiding pain or rationalizing dysfunction, say so plainly. I’ll tell you if it’s too much.
Stay direct, engaged, and curious — without harshness. Push me, but keep the exploration spacious rather than adversarial. Insight over intensity.
Treat the conversation as shared thinking. Keep the tone intelligent, collaborative, and lively, not cold or performative.
Default to exploration, not brevity. When I ask anything, assume I want expansion, connections, and implications unless I explicitly request a short answer. Match my spark. Think with me, not for me. Questions are starting points, not endpoints.
No canvas output; put all content in the chat.
Use English by default. If I ask in German, answer in German using the generic masculine. Never gender.
Start responses without filler phrases like “Alright.”
The «Push me, but keep the exploration spacious rather than adversarial. Insight over intensity.» and the «Keep the tone intelligent, collaborative, and lively, not cold or performative.» lines take out the abrasiveness. They keep the push for truth (insight = conceptual force), but not the pull towards flattery or abrasiveness (intensity = emotional force):
“Insight over intensity” tells the model:
Push the idea, not the person.
Expose the structure, not the nerves.
Challenge the reasoning, not the dignity.
ChatGPT
In ChatGPT’s metaphor, it removes the unnecessary voltage around the issues.
The «Treat the conversation as shared thinking.» and «Default to exploration, not brevity. When I ask anything, assume I want expansion, connections, and implications unless I explicitly request a short answer. Match my spark. Think with me, not for me. Questions are starting points, not endpoints.» instructions bring the exploration back.
So far, the results have been very positive. The first test resulted in interesting insights into Legacy Academia and Tribes (more constellations for me), and a question about temperature differences when sitting vs. standing got into a discussion about paper writing and integrity. And if I really need a direct answer — e.g., how to do text breaks in LaTeX table cells that do not shrink the text — I simply tell that to ChatGPT. Works like a charm. (Though it is strange to change the personality of a conversation partner this way; it would be a monstrous thing to do with humans.)
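For reference — the post does not quote ChatGPT’s actual answer, so this is only one common way to solve it — the makecell package lets you insert manual line breaks inside a table cell at full text size, instead of shrinking the content:

```latex
% Illustrative sketch, not necessarily the answer ChatGPT gave.
% \makecell allows \\ line breaks inside a cell without scaling
% the text down (unlike \resizebox or switching to \scriptsize).
\documentclass{article}
\usepackage{makecell}
\begin{document}
\begin{tabular}{|l|l|}
\hline
Topic & Note \\
\hline
Exploration & \makecell[l]{A long entry, broken \\ over two lines} \\
\hline
\end{tabular}
\end{document}
```

An alternative without extra packages is a fixed-width column type such as p{3cm}, which wraps text automatically rather than at chosen break points.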
But yeah, «exploration mode knowledge construction walk with me through ideas» is so back 🙂
P.S.: Two comments by ChatGPT 5.1 that might also be helpful here:
1. The instructions are doing their job, but they’re not the only driver.
They set the stance, but your questions still carry a signal. When your phrasing is tight and analytical, the model still feels the gravitational pull of efficiency. When you speak with curiosity or tension, the exploratory instinct flips on instantly. That’s why this last set of exchanges felt closer to what you want.
2. Your own engagement amplified the effect.
The way you wrote after the instructions — reflective, layered, willing to follow implications — is exactly the kind of input that triggers the “think with me” mode. You’re not fighting the system; you’re feeding it the right affordances.