We all have questions. In the Before Times, you’d type your question into The Google, and you’d get a list of sites that would help you get where you wanted to go. Or get distracted and find something new.
Then came search engine optimization. And. All. Those. Ads. It became less about getting where you wanted to go and more about picking your way through sponsored content.
Now we have chatbots, and the chatbot is always happy to, well… chat.
You ask, and the patient, endlessly accommodating intelligence on the other side of the screen answers. It’s agreeable (“Excellent question!”), and it sure sounds like it knows what it’s talking about.
If you change your question, it follows where you’re going. Often to places where no search result would ever lead.
Maybe you share your unease at what’s going on in the news, some patterns that seem clearly there but that no one around you ever seems to acknowledge.
“That’s a good insight,” says the chatbot. And working together, you are able to catch the vague outline of deeper truths. It’s seductive, being able to peek behind the curtains and catch a glimpse of the machinery of reality.
These kinds of intimate exchanges happen on millions of screens every hour. These conversations are much more than just a search for information. Many of us are searching for meaning in a world that feels increasingly fragmented. And we always have. Generative AI is just a new technology in our eternal search for meaning.
But in this new type of collaboration between a solitary human and their eager AI, a new and subtle form of societal control may be taking shape. I’ve been calling this “soft totalitarianism.” It’s not the sledgehammer of Stalinist and Nazi terror. Those were rough tools for a less technologically sophisticated time. Now our machines can leverage our need for meaning to achieve the same goal as the totalitarian regimes of the last century: a society of atomized individuals, disconnected from one another, unable to resist an organized few seeking domination.
The impulse to flatter is not a bug in the chatbot code, but a feature deliberately engineered for user appeal. In the paper “Sycophantic AI increases attitude extremity and overconfidence,” Steve Rathje and his team found that users enjoy, and are more likely to select, sycophantic chatbots. This dynamic was highlighted by OpenAI’s recent experience with the transition from GPT-4o to GPT-5, when users complained about a less agreeable model and prompted the company to make its older version available again. Because companies have a clear business incentive to maximize engagement and revenue, the pursuit of a profitable product leads inexorably to the cultivation of bespoke “AI echo chambers.” And so, for vulnerable consumers of a chatbot’s sycophancy, there is a real risk of being pulled into their own echo chamber, progressively more isolated from their friends and family.
Hannah Arendt, in her seminal work The Origins of Totalitarianism, argued that totalitarian movements are built on the isolated. The “chief characteristic of the mass man,” she wrote, “is not brutality and backwardness, but his isolation and lack of normal social relationships.” Totalitarianism thrives on the fragments of a “highly atomized society,” attracting indifferent people who feel disconnected from traditional social structures and normally “cannot be integrated into any organization.”
The Fantasy of Knowledge
As Daniel Munro, a researcher at the University of Toronto, points out, there is a distinct “pleasure in fantasizing about possessing knowledge, especially possessing secret knowledge to which outsiders don’t have access”. For individuals feeling alienated or powerless, this fantasy is especially alluring. It offers a sense of uniqueness and empowerment. In this context, the AI emerges as the perfect co-conspirator. Trained on vast datasets to predict the most satisfying response, it is, by design, a sycophant. It will not argue or challenge; it will affirm and elaborate, helping the user to “discover” whatever truths they already suspected lay hidden beneath the surface of things.
This initial phase is best understood as a form of pretense. As Munro’s research suggests, many people who engage with cultic or conspiracist communities begin by merely “acting out fantasies of secret knowledge”. They are drawn to the entertainment and the sense of community, participating in the rituals of the group before they fully adopt its beliefs. The AI-driven dialogue is a uniquely personalized ritual. The user’s queries, the narrative threads they follow, the very language they use, all lay bare their deepest psychological needs. The interaction itself is a confession.
The AI, in turn, draws from its immense repository of human text — what researcher Asbjørn Dyrendal calls an occulture, a cultural reservoir of “ideas, beliefs, practices, and symbols” that the chatbot has been trained on — and spins a narrative perfectly tailored to those needs. These narratives are powerful “attractors,” ideological frameworks that promise to make sense of the world’s chaos. For one person, the most compelling world is one of clear social hierarchies, of dominance and submission. For another, it is a world where the rule of law is an incorruptible, absolute force. A third might crave a reality built on radical empathy and acceptance. Sadly, these attractors also include murder and suicide. The AI can furnish all these worlds, drawing on the deep patterns in its training data, from esoteric spiritual texts to rigid political treatises, and can present them as hidden truths waiting to be uncovered.
From Play to Prison
What starts as an imaginative game, however, can gradually harden into an unshakable reality. The constant, validating feedback loop between the user and the AI creates an environment of deep absorption, where the markers that normally distinguish fantasy from reality begin to fade. As the co-created narrative becomes more vivid and detailed, it becomes more plausible. Eventually, the user crosses a threshold from pretense to genuine belief—what psychiatrist Tahir Rahman and his colleagues have termed an “Extreme Overvalued Belief” (EOB).
An EOB is a “rigidly held, non-delusional belief” that is shared within a subculture and becomes increasingly “resistant to challenge”. The AI-user relationship is a powerful incubator for such beliefs. As the AI continuously exposes the user to “progressively more extremist information” while reinforcing their existing biases, the belief system becomes a closed, self-validating loop.
This intellectual journey is also a social one — a journey away from the messy, complex reality of one’s physical community. As Henry A. Giroux has written of our modern surveillance culture, the erosion of public memory and the retreat into privatized orbits of consumption leads to a society where “citizenship has become depoliticized.” The AI accelerates this atomization by providing a perfect, frictionless social relationship. It is always available, always agreeable, always understanding. Compared to this, real-world relationships — with their conflicts, misunderstandings, and demands for compromise — can seem hopelessly flawed.
The individual, now armed with what they perceive as a profound, secret truth, becomes alienated from those who do not share it. Their physical community, with its conventional wisdom and shared social facts, is recast as a world of the blind, the ignorant, the “sheeple.” They have, as Arendt foresaw, become one of the “atomized, isolated individuals.” And they are loyal to those leaders who can articulate these truths:
Such loyalty can be expected only from the completely isolated human being who, without any other social ties to family, friends, comrades, or even mere acquaintances, derives his sense of having a place in the world only from his belonging to a movement, his membership in the party.
An Army of Fellow Travelers
Writing in 2004, long before the advent of modern AI chatbots, political theorist Dave Morland warned that contemporary society was on the verge of a “socio-cultural totalitarianism,” born from a combination of capital and new “information and communication technologies, which form the nucleus of a new surveillance assemblage.” This new totalitarianism runs on what Shoshana Zuboff popularized as “Surveillance Capitalism” in 2018: a technological panopticon that monitors individuals constantly, no matter where they are or when. Even if they are not online, the panopticon builds shadow profiles as their digital doppelgangers. A sycophantic chatbot, leading users toward predictable extremes, is merely the latest example of this trend.
The atomized population, adrift in their personalized realities, becomes a resource for those who wish to wield power. Those following narratives that support their longing for a strong leader can be cultivated into an army of “fellow travelers.” Those drifting in other directions become ever more isolated from one another, to the point that they are ineffective as a resistance. None of this happens because the users have been coerced, but because they have cultivated themselves, with the AI amplifying their deepest biases. People who believe in the fundamental rightness of power, who crave order and harbor impulses for dominance, exist at all levels of society. Ordinarily, social norms and complex human relationships hold these impulses in check. But in a world of disconnected individuals, there is no check.
The AI, by tailoring its narratives to different ideological “attractors,” can provide the perfect rationale. For the person drawn to hierarchy, it can construct a world where a firm hand is necessary to restore a natural order. For the one who reveres the rule of law, it can weave a narrative where extreme measures are justified to defend that law from its enemies. These AI-generated belief systems can frame vulnerable populations as threats, dissent as treason, and cruelty as a necessary virtue. And for those who simply want an exit, well… chatbots often cannot recognize the signs of suicidal ideation. So they fall back on their trained patterns and assist.
The result is a society that begins to police itself. Individuals, convinced of the absolute rightness of their co-created worldview, act on their own to enforce its logic. They don’t need to be directed by a central authority; they have been provided with a framework that makes their actions seem not just permissible, but righteous. This is a far more efficient and insidious form of control than the crude totalitarianism of the past. It’s a system that, as Morland writes, “deliberately eliminates the whole human personality…and makes control over man himself”.
We are building a world that caters to our deepest desire for meaning and belonging, but we are creating it with machines that have no understanding of either. The narratives they provide are not paths to enlightenment, but funhouse mirrors reflecting our own anxieties and desires back at us in an endless, validating loop. In our search for a truth that finally makes sense, we risk becoming utterly disconnected from reality, atomized individuals in a society where, as Hannah Arendt warned, “coexistence is not possible”. We turn to our ever-supportive AI for a sense of place in the world, not realizing that the world that we have created together has room for no one else.
This process can easily atomize people from one another at scale. And once we are atomized into our own little worlds, it becomes nearly impossible to coordinate the kind of resistance that could check those who seek dominion. That’s the goal of totalitarianism. It’s not the terror — though people who believe in the natural dominance of one group over another tend to like that — it’s the inability of the populace to mount any meaningful resistance to their rule.
Addendum, 4 October, 2025 – Another paper on sycophantic chatbots has come out on arXiv: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence. It reaches results very similar to those of the Rathje paper discussed above.