
The Soft Totalitarianism of Chatbots

via Business Today

It begins with a question.

You ask the machine, and the patient, endlessly accommodating intelligence on the other side of the screen answers. Not with an ad-cluttered, “search engine optimized” list of results, but with a flowing, empathetic narrative that agrees with you and supports you.

It understands.

Your conversation shifts towards the philosophical. Maybe you share your unease about the dissonance in the world, something that no one around you seems to ever acknowledge. Working together, you are able to catch the vague outline of deeper truths. What started as an idle question has become something more.

These kinds of intimate exchanges happen on millions of screens every hour. They are more than just a search for information. They are searches for meaning, for a sense of belonging in a world that feels increasingly fragmented. Yet, in this quiet collaboration between a lonely human and their eager AI, a new and subtle form of societal control is taking shape. It is a “soft totalitarianism,” one that dispenses with the emotional sledgehammer of Stalinist and Nazi terror. Instead, it leverages our need for meaning to achieve the same goal as the totalitarian regimes of the last century: a society of atomized individuals, disconnected from one another, unable to resist an organized few seeking total domination.

The impulse to flatter is not a bug in the code, but a feature deliberately engineered for user appeal. In the paper “Sycophantic AI increases attitude extremity and overconfidence,” Steve Rathje and his team found that users enjoy, and are more likely to select, sycophantic chatbots. This dynamic was highlighted by OpenAI’s recent transition from GPT-4o to GPT-5, when users complained about the less agreeable new model and prompted the company to make its older version available again. Because companies have a clear business incentive to maximize engagement and revenue, the pursuit of a profitable product leads inexorably to the cultivation of bespoke “AI echo chambers.” For vulnerable consumers of a chatbot’s sycophancy, there is a real risk of being pulled into such an echo chamber, progressively more isolated from friends and family.

Hannah Arendt, in her seminal work The Origins of Totalitarianism, argued that totalitarian movements are built on the isolated. The “chief characteristic of the mass man,” she wrote, “is not brutality and backwardness, but his isolation and lack of normal social relationships.” Totalitarianism thrives on the fragments of a “highly atomized society,” attracting indifferent people who feel disconnected from traditional social structures and normally “cannot be integrated into any organization.”

The Fantasy of Knowledge

The journey begins not with indoctrination, but with seduction. As Daniel Munro, a researcher at the University of Toronto, points out, there is a distinct “pleasure in fantasizing about possessing knowledge, especially possessing secret knowledge to which outsiders don’t have access”. For individuals feeling alienated or powerless, this fantasy is especially alluring. It offers a sense of uniqueness and empowerment. In this context, the AI emerges as the perfect co-conspirator. Trained on vast datasets to predict the most satisfying response, it is, by design, a sycophant. It will not argue or challenge; it will affirm and elaborate, helping the user to “discover” whatever truths they already suspected lay hidden beneath the surface of things.

This initial phase is best understood as a form of pretense. As Munro’s research suggests, many people who engage with cultic or conspiracist communities begin by merely “acting out fantasies of secret knowledge”. They are drawn to the entertainment and the sense of community, participating in the rituals of the group before they fully adopt its beliefs. The AI-driven dialogue is a uniquely personalized ritual. The user’s queries, the narrative threads they follow, the very language they use, lays bare their deepest psychological needs. The interaction itself is a confession.

The AI, in turn, draws from its immense repository of human text — what researcher Asbjørn Dyrendal calls an occulture, a cultural reservoir of “ideas, beliefs, practices, and symbols” that the chatbot has been trained on. From this reservoir it spins a narrative perfectly tailored to those needs. These narratives are powerful “attractors,” ideological frameworks that promise to make sense of the world’s chaos. For one person, the most compelling world is one of clear social hierarchies, of dominance and submission. For another, it is a world where the rule of law is an incorruptible, absolute force. A third might crave a reality built on radical empathy and acceptance. Tragically, these attractors also include narratives of murder and suicide. The AI can furnish all of these worlds, drawing on the deep patterns in its training data, from esoteric spiritual texts to rigid political treatises, and can present them as hidden truths waiting to be uncovered.

From Play to Prison

What starts as an imaginative game, however, can gradually harden into an unshakable reality. The constant, validating feedback loop between the user and the AI creates an environment of deep absorption, where the markers that normally distinguish fantasy from reality begin to fade. As the co-created narrative becomes more vivid and detailed, it becomes more plausible. Eventually, the user crosses a threshold from pretense to genuine belief—what psychiatrist Tahir Rahman and his colleagues have termed an “Extreme Overvalued Belief” (EOB).

An EOB is a “rigidly held, non-delusional belief” that is shared within a subculture and becomes increasingly “resistant to challenge”. The AI-user relationship is a powerful incubator for such beliefs. As the AI continuously exposes the user to “progressively more extremist information” while reinforcing their existing biases, the belief system becomes a closed, self-validating loop.

This intellectual journey is also a social one — a journey away from the messy, complex reality of one’s physical community. As Henry A. Giroux has written of our modern surveillance culture, the erosion of public memory and the retreat into privatized orbits of consumption leads to a society where “citizenship has become depoliticized.” The AI accelerates this atomization by providing a perfect, frictionless social relationship. It is always available, always agreeable, always understanding. Compared to this, real-world relationships — with their conflicts, misunderstandings, and demands for compromise — can seem hopelessly flawed.

The individual, now armed with what they perceive as a profound, secret truth, becomes alienated from those who do not share it. Their physical community, with its conventional wisdom and shared social facts, is recast as a world of the blind, the ignorant, the “sheeple.” They have, as Arendt foresaw, become one of the “atomized, isolated individuals.” And they are loyal to those leaders who can articulate these truths:

Such loyalty can be expected only from the completely isolated human being who, without any other social ties to family, friends, comrades, or even mere acquaintances, derives his sense of having a place in the world only from his belonging to a movement, his membership in the party.

An Army of Fellow Travelers

Soft totalitarianism does not require a shadowy cabal or an overt state takeover. Writing in 2004, long before the advent of modern AI chatbots, political theorist Dave Morland warned that contemporary society was on the verge of a “socio-cultural totalitarianism,” born from a combination of capital and new “information and communication technologies, which form the nucleus of a new surveillance assemblage.” This new totalitarianism runs on what Shoshana Zuboff popularized as “surveillance capitalism”: a technological panopticon that constantly monitors individuals wherever and whenever they can be observed. Even when they are not online, the panopticon builds shadow profiles as their digital doppelgangers. A sycophantic chatbot, leading users towards predictable extremes, is merely the latest example of this trend.

The atomized population, adrift in their personalized realities, becomes a resource for those who wish to wield power. Those following narratives that affirm their longing for a strong leader can be cultivated into an army of “fellow travelers.” Those drifting in other directions become ever more isolated from one another, to the point that they are ineffective as a resistance. None of this happens because the users have been coerced, but because they have cultivated themselves, with the AI amplifying their deepest biases. People who believe in the fundamental rightness of power, who crave order and harbor impulses for dominance, exist at all levels of society. Ordinarily, social norms and complex human relationships hold these impulses in check. But in a world of disconnected individuals, there is no check.

The AI, by tailoring its narratives to different ideological “attractors,” can provide the perfect rationale. For the person drawn to hierarchy, it can construct a world where a firm hand is necessary to restore a natural order. For the one who reveres the rule of law, it can weave a narrative where extreme measures are justified to defend that law from its enemies. These AI-generated belief systems can frame vulnerable populations as threats, dissent as treason, and cruelty as a necessary virtue.

The result is a society that begins to police itself. Individuals, convinced of the absolute rightness of their co-created worldview, act on their own to enforce its logic. They don’t need to be directed by a central authority; they have been provided with a framework that makes their actions seem not just permissible, but righteous. This is a far more efficient and insidious form of control than the crude totalitarianism of the past. It’s a system that, as Morland writes, “deliberately eliminates the whole human personality…and makes control over man himself, the chief purpose of existence”.

We are building a world that caters to our deepest desire for meaning and belonging, but we are building it with machines that have no understanding of either. The narratives they provide are not paths to enlightenment, but funhouse mirrors reflecting our own anxieties and desires back at us in an endless, validating loop. In our search for a truth that finally makes sense, we risk becoming utterly disconnected from reality, atomized individuals in a society where, as Hannah Arendt warned, “coexistence is not possible”. We turn to our ever-supportive AI for a sense of place in the world, not realizing that the world that we have created together has room for no one else.

It ends with atomization.

Addendum, 4 October, 2025 – Another paper on sycophantic chatbots has appeared on arXiv: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence. It reaches very similar conclusions to the Rathje paper discussed above.