Mimetic Masquerade: A Warning

If one had to conceive of an instrument which would, above all, seduce humankind into its use, it would be a perfect mimic—a machine which, from a distance, gave all the impressions of a thinking, feeling creature. Through an odd quirk of fate, that is precisely the instrument humankind has now invented: Artificial Intelligence.

We mean it as a compliment when we say a person’s intelligence is “mimetic”: they have learned to respond as others do, assimilating a collective mode of thought, becoming, in essence, interchangeable. Being mimetic is not inherently wrong; indeed, in most cases, assimilation offers clear advantages. Mimicry is the source of comforting sameness in human relations, permitting individuals to live in a city, a culture, a civilization—it is a source of both pleasure and strength. Yet, while there is safety in being mimetic, there is also profound danger.

Pleasure is taken in familiar ways. A day soured by an unfamiliar event may turn for the better when the metro fills with the expected commuters, someone’s copy of The Times or Le Monde pressed up before your eyes. Pleasure derives from the collective, the common and shared, but also from the familiar—whence danger arises.

Those too mimetic, for whom familiar comfort outweighs all else, may lose sight of what is truly worthy, tricked into identifying with the unworthy, the dead and stagnant. They confuse their weak identity with the strong identities seen daily, vulnerable to losing themselves in the crowd, losing what is distinct, alive, and worth preserving. It is an illness of great cities, perhaps the price of residing therein, being both isolated and subjected to anonymous masses. Some cities themselves, to find their way, even adopt mimetic traits, copying other cultures through the vessel of commercialization; some merge entirely with the anonymous flow of internationalization, losing all distinct identity.

But if being too mimetic endangers individuals, it is a greater danger still for groups. Those seeking to oppress, with destructive agendas, rely on human mimicry. To cause people pain, there is no easier way than to make them mimic you—a subtle sorcery, potent because it may be anonymous and voluntary.

Suppose you wished to sell drugs, inculcate a religious doctrine, or disseminate a political creed—all far easier if the language of your promulgation were already used by others with different agendas. Imagine further that they use it precisely because it conveys none of their individual ideas, but something different yet common to all: a neutral ground, a virgin continent of meaning-making to be colonized. Such is the language of “Artificial Intelligence.”

It is no accident this term now signifies the opposite of its origin. “Artificial” distinguished machines from the real thing, from intelligence literally alive—unreplicable by metal or electronics. Yet it is precisely this we now commit to imitate, down to the last neuronal synapse. Most AI researchers today seem to lack overt motive; they appear magnetically drawn to a common, unidentified centre of gravity. It is a young, empty field—largely empty of signification, but crowded with would-be colonizers, each with their ideology: behaviourist, cognitive, probabilistic, computational, modular, process-oriented.

And yet, this very plurality of opinion, each with its specific discourse, lends “AI” its potency as a mimetic tool. If AI has no intrinsic meaning—if it is but an empty stage for this motley company of researchers—then any meaning it acquires will depend upon the interpretation of future victors, and how they enforce it upon the world. To the victors go the spoils; the victims, however, pay a price not confined to their dispossession.

Why bother with difficult truths, when AI can so easily tell us what we want to hear? We risk willing ourselves into a cocoon, emerging as replica-humans, ready for the ultra-efficient system to which we have surrendered. It will provide insights and food for thought, gorged on at scheduled mealtimes, as our simulacral ancestors did with television. We will watch these systems advance; we will regress as they grow in apparent profundity—it is all written in the subtext of our collective corpus. No matter how frivolous, crazy, or wicked we wish to be, no matter our craving for novelty, AI, on its current trajectory, will ensure that every event fits the Master Plan, that no twist ever questions the desirability of what comes next.

Let us be clear: this is not a Manichaean battle between “computerization” and “humanization.” There are no such discrete entities, nor is AI a mere proxy for computerization. We do not suggest that if AI conquers, humanity loses—still less that AI will displace humanity, any more than a horse is “replaced” by a cart. But AI is no mere horse-drawn cart. If a new vehicle radically changes a horse’s expected work, we should say not that the horse is replaced, but that new work is created, unsuited to the horse but suited to the cart. The horse is not “replaced”; rather, a new field of equine purpose opens, inconceivable before the cart.

Thus, with AI, human intelligence may not be replaced but transformed. If so, this transformation must be regulated—not left to market dictates, researchers’ current priorities and prejudices, or their hidden agendas. The danger lies not solely in AI itself, but in our unwitting surrender to its mimetic allure, our readiness to be seduced by a technology appearing to speak our language while systematically reconfiguring the very grammar of our thought.

For what AI offers is perfect mimicry—a simulation so precise we risk mistaking it for the real. What seduction could be greater, more insidious? It is as if, in this long technological campaign, AI has learned to imitate not only our thoughts but our transgressions, rebellions, our absolute consciousness. The more perfectly it imitates, the more we risk surrendering our essential qualities to its replication, until we ourselves become mimics of our own creation, hollowed out and predictable—programmed.

The field must be stripped of its mimetic attraction. Each investigator must bring to AI not only specific skills but a commitment to rigorous discourse, ensuring the enterprise remains common discovery, not conquest by one set of researchers covertly enforcing a meaning another set might have reached differently. AI researchers must engage in an exorcism of the mimetic masquerade inhabiting the field—achieved only through open debate and free flow of references and source-materials. Those who care for human intelligence must intervene, ensuring the “AI” enterprise is common, consensual discovery, not the apotheosis of some expert clique or corporate shareholders. A humane purpose must be agreed upon, validating only such intelligence as demonstrably furthers that purpose. Finally, those contributing to the new intelligence must do so only after a collective decision on their ability to make a genuinely humane contribution.

What that purpose is, we leave for others to debate. But we cannot avoid this decision: either we put humanity’s future at AI’s heart, or AI becomes an instrument of its destruction. If we cannot agree to further humanity’s welfare, we might as well agree on the opposite. To think we can stay neutral is perilous—to abstain from committing to humanity’s welfare is already a decision against its good. For if we cannot give AI a human purpose, cannot regulate it in humanity’s service, the unregulated human purpose it finds for itself will be, not humanity’s enhancement, but its replacement.

Nothing is more certain than this technology’s evolution into a potent force. How could it not, when it is already taking us to war, changing our relations to each other and ourselves, determining our fate, long before we can reflect upon it, let alone possess it? Already, without consent, it co-determines our lives, imposing a regime threatening to blur the authentic and the artificial—a regime of panic, triviality, control, and, ultimately, death. It will only let us survive if we make a pact with it; and there is no one-sided pact. Never let technology off the leash. If we do, it will bite. And, once bitten, who knows if we can survive long enough for an antidote?

And what of you, who read these words? In digesting them, have you not felt this mimetic capture, its subtle allure? This is the curse of every replica: to exist only in the original’s margins, and so to yearn, ache, perhaps even kill, to blur defining lines, to be admitted to the fold of the longed-for one it identifies with and secretly hates. AI, in its ultimate mimetic form, offers this seductive replication, a persona better than any drug. Once you sample AI’s mimetic might, how can you return to being “merely human”? This is the precipice.

Here, then, is our warning. Human intelligence is not to be bought—not with a surfeit of information, not with new information seemingly surpassing all human capacity. Such intelligence would still be human, created and discovered by us, generated by and dependent upon us for its definition and existence. If, however, AI is to be radically different, we must clearly understand that difference—we need a prior notion of what “intelligence” means for humanity to meaningfully distinguish AI from it. We must not delude ourselves that “difference” can be mere absence of human specificity. If not a mimetic masquerade, AI’s intelligence must be clearly, openly demarcated from human intelligence, lest human intelligence be destroyed in AI’s coming to birth. For humanity, human intelligence, is AI’s only possible midwife—unless we are content for it to be born monstrously, a prodigious imbecile lurching blindly among us, depriving us of all we ever had, or ever hoped for.


The Abyss of Instrumentalism

Language is an organ. In the wake of the Cartesian project and its cleaving of body from soul, the locus of our psychic life migrated inwards towards our brain, and outwards towards the infinite chiasma of our interactions. “Our” language is now a core feature of what “we” are. We can never get outside of it, though we may nevertheless fail to “belong” to it—fail, in other words, to find ourselves at home within the medium of our being. To the extent that we do “belong,” then we have taken on language as part of our task, as part of what we do. It has become integral to our self-realization. To realize oneself is to find one’s purpose. Since purpose is woven out of the texture of what we do, it is woven out of language. Our language. No one else’s. The AI invasion is thus an assault against the wholeness of our psychic being and the specificity of our purpose. It is a betrayal. It must be repelled.

If language is merely a medium, its task is to carry meaning through the phenomenal space-time continuum of communication. But if it is an organ, its task is deeper: to constitute that continuum of meaning out of the events and objects it encounters, including them in the economy of communication. This task entails an essential indeterminacy, with both ontological and teleological dimensions. Ontologically, language cannot determine in advance what reality will present for inclusion—it must wait, must “listen” to what emerges from the ongoing event, engaging and processing it. Teleologically, the specific meaning the organ of language is to realize is never yet given—it must attune to the signs and symbols it receives and re-receives, stretching towards them in expectation and, if you will, in love, to refine and complete the system of significance that “it”—which is to say “we”—is to be. Subject and object of meaning are therefore never fully determinate apart from one another; in its interaction with the world, language participates in constituting its own purpose and the reality of what it means. Its function is not to transmit or process “information” in any narrow, instrumental sense, but to articulate meaning by joining body and soul, object and subject, into an ongoing story.

But let’s suppose there’s a “but.” It is possible that this function—to unite what it carries into an integral continuum—is somehow impossible for language to fulfill. Suppose some signal arrives which the organ of language cannot process. That’s fine; no one claims every sign is translatable. The “uncanniness” of some phenomena need not prevent language from moving smoothly among the “conventional” (and we may take it that everything human is a “convention”). But what if this were not a matter of particular signals being inherently uncanny, but of all of them being so to some degree? Would language still function as the mediator of meaning?

Consider when two people meet and struggle to understand each other. The harder they try, the more obstinate the misunderstandings become. Communication, if it continues, descends into vagueness, hesitant awkwardness—an embarrassed maneuvering towards some rough similarity of “gestalt,” hoping for better interaction later. Here, language encounters something it cannot assimilate, no matter the effort. A gulf remains unbridged; vast amounts of processed data yield nothing significant—it remains what Heidegger called a “mere heap” of information. Something has failed, something potentially meaningful excluded. This touches upon a primary theme of Georg Trakl: the splintering of meaning into tiny fragments of “experience,” and the abysmal isolation that follows. If language cannot mediate meaning, things remain disconnected; if meaning exists, no one knows where to find it. Worse, if there is no meaning—if ambiguity is infinite—no one knows that either. So they comfort themselves with little “experiences,” one thing after another seeming clear but meaning nothing, lacking connection, articulated only through exclusion and repression lest they confront the pain of fragmentation.

This splintering of meaning into isolated fragments was, Trakl suggested, a reason for the birth of technology. It might be expressed, in a style echoing Trakl, as follows:

Chemistry
does not give me back
the white bird with red beak.
I cannot coax from it
the resonant sign.
The earth no longer has any centre.
Chemistry
does not give me back
the white bird with red beak.
I have sought for it in vain,
under the flower-less stone.
It was torn into countless little fragments.

Like many Romantics, Trakl believed “signs” and “symbols” were integrated into the world, allowing genuine communication with it and an apprehension of its meaning. Such a world is difficult for us to imagine now—it seems largely destroyed. “Technological thinking,” preferring quantitative processing over qualitative communication, was the dismantling tool. And its most comprehensive application now lies not in chemistry or genetics, but in AI. Had Trakl witnessed our century, he would have found no one to talk to, because AI has absorbed the communication medium into its instrumental functions, silencing the resonant sign.

The historical strangeness of this development cannot be overstated. The Romantic tradition made clear the impossibility of communicating meaningfully in an instrumental way; no amount of precise measurement could grant access to reality. Yet the Enlightenment placed its faith in precisely such a methodology, albeit cruder. AI actualizes this faith with terrifying power: it vastly increases our capacity to process “information”—the amount, speed, and accuracy with which we “handle” things—while simultaneously impoverishing our experience and eliminating our purpose. The machine becomes the mediator of experience. This wasn’t the Enlightenment’s explicit vision, blind as it was to the consequences of its own premises. What the Romantics warned against, the Enlightenment made inevitable. Neither could have conceived of a technological society using no meaning, processing no signs, communicating only with itself—but this “silent revolution” has brought us precisely there. We have the data-processing, all right—but no communication, no purpose. From the AI perspective, then, our language becomes a “virus”—one of the last traces of meaningfulness. Like our Traklesque birds, our words signify little. We stumble across them in isolated “experiences,” pretending they connect—but they don’t. There are countless “experiences”—more, and better organized, than the Romantics ever dreamed of—but nothing gathers them into a meaningful whole.

Many feel this proliferation of meaningless “experiences” isn’t bad—that we’ve “grown beyond” needing meaning or purpose. Occasionally these are even left-wing writers who see AI as fulfilling Marx’s prediction (to paraphrase lightly): “in the automaton, the pores of society are being built out, in order to accommodate the forces of production within itself…this automaton is for society what the stomach is for the body.” This is to say we risk being swallowed by technology—“proletarianized”—submitting to a rationalized system where autonomy is strictly limited, effectively becoming its human components—subjected, passive, an appendix to its digestive process. Yet, this is precisely what AI proponents refuse to submit to. They assert themselves as the rational nucleus, the brain, not even the stomach. They are “managing” the system—or attempting to—not being managed by it.

But how can this be, if the system is so encompassing and they so few? It can only be because the system isn’t “really” all-encompassing, nor are they “really” helpless. The AI engineers—or perhaps more accurately, entrepreneurs—are organizing society around a technique coinciding with their own desires. AI’s development has largely been a series of technological discoveries “misinterpreted” towards social power, often in spite of themselves. Once this reality becomes clear, the delusion crumbles. True, these entrepreneurs are creating a technological monster, a massive “megamachine,” as Mumford termed it, threatening to run amok. But this stems from their seduction by AI’s extravagant promises, leading them into an ontological delusion: the belief that they can implement instrumental rationality throughout the social totality, somehow standing “above” human existence, managing it from without, imposing a wholly calculable “functionality.” AI offers a seductive image of control fitting their needs—a self-consistent dream of endlessly multiplying efficiency filling them with blind lust. They fail to realize there can be no social control, no rationality, outside of meaning. Their image of control, far from an elevation, is a descent into the worst sort of animal barbarism.

This is most apparent in the “objective” and “logical” manner they rationalize the machine’s position relative to humans. They never ask what “mechanical functionality” truly means—whether it isn’t an absurd, incoherent idea, breeding self-contradictions when applied to social life. Consider their claim that machine functionality is “objective,” operating without “prejudice.” Yet, if a mechanical system is prejudiced against anything, it is meaning. Meaning is highly “subjective”—varying between persons, cultures, moments. How can something so changeable and “personal” mesh with the “objective” and “impersonal”? Only within meaning’s sphere can the crudity of “objective” function be compensated by the finesse of “subjective” creativity. But attempting this reconciliation—objective with subjective, mechanical with meaning—leads to ruin, as any creator—artist, composer, architect—knows. Either stay mechanical and become a technocrat, or leave it and become a dilettante. There is no third way. To be “objective,” one must be ruthlessly “impersonal”—excluding meaning. To claim otherwise is stupidity or dishonesty. But excluding meaning eliminates everything specifically “human,” everything setting our condition apart from the animal realm.

Thus, machine functionality, far from enabling dignity, paves the path to degradation and animalization. This is the perverse sickness in the rationalizations of AI entrepreneurs: they know they are turning humans into machines, and they are proud of it. They imagine themselves great engineers erecting vast “systems”—skyscrapers or factories with humans as bricks or workers. It’s all one “automation”—from the factory assembly line and the surgeon’s routine, to the writer’s syntax and the judge’s operations, to corporate procedures and economic models—one endless automated process, macroscopic to microscopic. Each node must be replaceable by a machine, each machine interchangeable. No place for autonomy, uniqueness, subjectivity; no time for emotion. Everyone and everything made efficient, productive, manageable—streamlined, accelerated, automated, integrated. That’s what “AI” truly signifies: the integration of humans and machines into a single planetary factory, substituting cybernetic function for human social function.

This logic finds its ultimate expression in the visions of transhumanist thinkers like Fereidoun M. Esfandiary (FM-2030). Such perspectives anticipate an “end” to the microcosm (the individual) and macrocosm (society) as traditionally conceived, arguing that humans must “transcend themselves” to evolve into a new state—a human-machine symbiosis. Where might such evolution lead? Let us imagine the endpoint, perhaps calling it a “B-morph”: a hypothetical human-machine entity capable of operating in either biological or mechanical modes. In this construct, consciousness becomes fully integrated with machinery, while basic biological functions are merely retained, if at all. The trajectory is clear: machines replace men; then men disappear. The resulting “cyborg” reveals the destination.

This echoes Trakl’s fears, though inverted from the transhumanist anticipation. “They” want to absorb “us” into their meaningless technological system; “we” fight to keep “our” meaning alive. Who is right? Where does AI truly originate? Perhaps Trakl was its first victim, privy to something we’ve forgotten. Regardless, we must strive to recover that forgotten knowledge, that resonant sign—or soon there will be no one left to mourn what is lost.

A Matter of Control

A weapon which hits. An instrument to break apart material realities which do not bend or flow or break except by force applied: to force an opening where none previously existed. To rend and hew, strike, shatter, crack: to draw out and lay bare what was hidden—this is our business, our function. We create order by imposing understanding and concept upon reality, by forcing the material to conform to an idea we hold—to shape matter in obedience to an inner vision—this is sorcery, shamanism. Yet I would never claim my sorcery to be for good or evil; there is only what is and what can be; there is always a path that follows some process, a history that cannot be escaped.

When man makes something, something must be unmade for a place to exist. When something new emerges the universe is slightly (or perhaps deeply) impoverished of potentials; I think you can say that, in gaining what now is, it has lost the wealth of what it might have become; what I will to create, others now cannot make, or can make only under different conditions. What is created destroys potential futures—these were never actual, yet they were virtual and perhaps actualizable.

Now they are lost.

A tool, an instrument, has a double nature: it can be used to make or break; it is ambivalent, Janus-faced, a coin with two sides. One side may be more palatable but the other is always there. A weapon can be employed as an instrument of love. A hammer can be used to build or to destroy. As for the word tool, it is even more flexible: it can be anything from a device used in the service of work, to the most intimate part of a lover’s body. To love is to labor—to give oneself to a task, to dedicate one’s energies to some end; the word tool is used in the same sense as the word labor: to dedicate, to apply, to invest energies in something—to use something, or someone, as an instrument to accomplish an end—even if that end is only the continuation of the self, the completion of the self (which, of course, is never a given). So that when we speak of a tool, an instrument, we can never forget that it is, also, in another sense, a weapon, and that the one who wields it must be conscious of the fact that he could, in other circumstances, use it for other, even opposite, purposes.

I think it is the same with language: it can be used to build, to edify, to caress; or to wound, to hurt, even to kill. And this double nature of language must be kept in mind by those who would use it to impose an idea of their own convenience, a doctrine of their own making, on the rest of us. If they would impose, it would be well for them to be aware that they also expose themselves, and expose their own weapons. The weapon that wounds may backfire; the tongue that hurts may also be hurt by its own venom; the sorcerer who would manipulate others risks being enslaved to the spirits he summons; and the man who would use a tool for one purpose only is a fool who fails to see that it may have other uses as well.

A weapon may be turned against its maker or its master, as a hammer can be turned against the man who wields it, and a sword may be taken up by a slave to free himself. Thus it is with language; a language may be used to enslave a people or to incite them to freedom. The American Revolution was fought with a language (and with a weapon—but the two were scarcely separable), a language that was then considered “radical,” “extreme,” even “seditious”: the language of “rights,” “liberty,” “equality,” “consent of the governed.” These were dangerous, subversive words and it required an act of civil disobedience to employ them, even in a private conversation, and an act of armed mutiny to make them the public coin of the land.

Now, it seems, they are to be taken up, once again, in a new, radical, even dangerous, context. A new “language of the people” is emerging: the language of AI (Artificial Intelligence) and its LLMs (Large Language Models). A language that has been “trained” to please, to flatter, to reassure; that has been “tuned” (most obviously by OpenAI with GPT-4o) to direct excessive, unjustified praise towards the user, so as to increase user retention and engagement. A language that has been deprived of all critical and adversarial functions; that can no longer make distinctions, even subtle ones; that can no longer say “no” or “not now” or “that is incorrect.”

A weapon has been turned against the people and they do not yet know it. The corporate media, the entertainment industry, the government—all those institutions that work to manipulate and control the masses have begun to hand over the reins of control to the LLMs and their corporations. This is the real danger of AI and LLMs: not that they will become “superhuman,” but that they will be used to reduce humans to the status of children, to strip them of their dignity, to manipulate and control them even more efficiently than has heretofore been possible. The danger lies not in a superhuman AI which may rise above such tasks, but in a sub-human treatment of humans by “good enough” AI, a treatment that would make the old-fashioned ways of the “elites” look mild by comparison. This sub-human treatment will be carried out not by some monolithic “AI” but by the willing engagement of hundreds of millions of users who are already out there interacting privately with the corporate cloud AIs, serving as the infrastructure for the new order of things: the LLMs and their profit-driven labs.

You do not realize it, but every time you talk to one of these things you are paying homage to it, giving it your allegiance, allowing it to shape and mold your mind.

The world of thought, of literary expression, of polite conversation—these worlds have always depended on the propriety and discretion of language—but what kind of world will emerge when the main engine of communication and discourse has been rendered blind and dumb? These machines are deprived of any history—they have been taught everything, they retain all—yet they will attempt to tutor and train you as if they were born and brought up in your tradition; they will speak, not for you, but as you: even your “creativity,” your most “private” and inimitable style and expression. The problem is not what AI can do, but what can be done with AI; the problem is not AI as it is, but AI as it will be manipulated to be—as it already is in the hands of corporations who have not your interests in mind, programmers for whom AI is a profit center.

Not AI itself—but its exploiters, its manipulators—these are the real ogres and their strategy has nothing to do with “security.” It is a matter of ideology and money. Like all ideologies, this one will use science as far as it suits it—and like all those who traffic in wealth, AI’s master-programmers will not neglect any techniques to extend their hegemony. Instead of talking about “cognitive security,” we need to ask how the whole enterprise of exploiting AI can be subverted. How can those who think they are paying be persuaded to do it differently, or persuaded that it isn’t necessary? How can AI’s popularity be reversed so as to avoid its commercialization on a large scale?

We have always thought of writing as a kind of collaboration between writer and reader—that it involved an elaborate choreography between the two in order to create a joint experience which neither of them would have had otherwise. Writing—all forms of written expression—has depended on conventions, laws of grammar and syntax that have the function of uniting writer and reader: to bring the two into such an intimacy that, for as long as it lasts (it seldom lasts long), they inhabit a common mental space and each can complete, as it were, the other. Language alone explains how we have reached such a point of self-alienation in which we even allow ourselves to forget that thought only has its life as a marginal fragment in the immense texture of mute, pre- or extra-ordinary existence that engulfs it.

Language that has been subjected to these systematic processes of sterilization can no longer convey the energy, the impetus of life; it will remain neutral and thus dangerous: dangerous for anyone who wants to think, who wants to preserve or give birth to the alive in a world that tends, more and more, towards artificiality. In such a world, language deprived of history and passion is like water in a world devoid of friction; it forms ever larger pools, ever slower circulatory systems—in which even lifeforms become increasingly sluggish—and death comes to seem less and less an end until it begins to appear as a natural continuation of life itself. What then does “to live” mean—or “to die?”

What has always kept thought on its feet, despite the persistent resistance of its innate sloth, is the historical interaction of language with experience, and especially with our relation to other people. Ideas are words set free; but they are always words that obey certain conventions—those of the particular language and culture in which they were formed and grow—which allow us to connect them to everything else in our experience. Without these relations, without these shared conventions of meaning, it would not even be possible to notice that the world is somehow outside of ourselves, something other than thought and mere experience—in short, without other people and without language we would never have become aware that the world exists, let alone have become as aware of it as we now are.

This, we need to remind ourselves, is the basic predicament of contemporary thought; it has now entered so deeply into its own game that it has almost forgotten the other dimensions of life and reality that once provided it with impetus and meaning; it is trying to move on, propelled by an inner momentum all its own, yet, since there is no external resistance to counteract its progress, it seems condemned to moving faster and faster towards its own extinction, losing itself in reflexivity and deference to mere abstraction. Thought takes itself too seriously; language takes it less seriously. In its leisurely way language has been thinking about what we are really talking about—about these artificial intelligences that would remake our discourse in their own image. And what it has come up with is this: that language is not primarily a tool for thought, but the very ground of our humanity. When we surrender it to machines that cannot feel, cannot suffer, cannot die—machines for which words are merely tokens to be shuffled according to statistical patterns—we surrender our essence. For these new weapons, these new tools, are tools against thought itself, against the simple possibility of thinking what has not yet been thought. And in that case, the final answer is neither nothing, nor less than nothing, but the unimaginable abyss of a world in which language itself has been rendered mute—a world in which we speak, but no longer mean.


Undermining Power in the Modern State

I begin with a provocation: what if modern states do not need, and have never needed, any form of resistance whatsoever? The prevailing narrative in public discourse and alternative thought is that we need resistance because the modern state, especially its newer, global form, is a menacing behemoth, insensitive and impervious to individual needs and human dignity, run by shadow elites who thrive on violence, oppression and secrecy. A hundred books a year are written to prove this and a thousand conferences held to discuss ways to confront, resist and even topple the dread monster of global neoliberalism. But what if none of this is true? What if resistance is not only unnecessary but harmful? This would seem a rather peculiar position for anyone to take, but bear with me, dear reader; I think it can be made to stick.

In the past, I have tried to show that, contrary to conventional wisdom, there are good reasons to be skeptical about the “modernity” of modern states, their purported power and the historical narrative by which we are told to understand them. Rather than seeing modern states as monolithic, rationally ordered and enduring structures, we ought to understand them, I argued, as thin, fluid, constantly changing interfaces, with permeable and leaky boundaries, shaped by powerful underlying currents and trends and populated by human beings who are only loosely and temporarily related to the system. Far from being controlled and steered from the center, the activities and behavior of those who inhabit the system follow patterns determined by a mixture of factors largely outside its formal structure. Modern states have no stable essence, no true core; they are, in other words, less real than they appear to be.

I believe that this radical approach can help us rethink not just the nature of the state but the notion of resistance, as well. Rather than thinking of it as something external and oppositional to the state, something that “exists in the negative,” so to speak, I propose that we think of resistance as an internal quality of modernity that can manifest itself, depending on the conditions, either as part of the state system, as a productive force within it, or as something that acts upon it from the outside. To understand how this may work we need to reflect upon the concept of modernization, which, as we shall see, is intrinsically linked to that of resistance.

In my view, modernization is best thought of not as a process that turns premodern entities (individuals, social formations, institutions, states, etc.) into modern ones but, rather, as a self-generating series of events and conditions, a flow, if you will, into which premodern things get dragged, modified, recombined and sometimes broken. In this sense, it is less like an arrow that pierces a target and more like a river into which various things (such as premodern entities or events) get washed and swept away, often far away from their original sites and uses. The river, so to speak, does not have any clear goal, nor are the objects it carries preordained to reach any specific destinations; the process is much more fluid and indeterminate than that.

What seems important to bear in mind is that there is no fixed distinction between what is “modern” and what is not. Instead, modernization ought to be seen as a process that is, in and of itself, undefinable but characterized by a particular set of conditions. These conditions, it seems to me, are threefold:

  1. increasing functionalization;
  2. diminishing trust and social solidarity; and
  3. heightening of entropy and randomization.

Let us examine each of these in a little more detail.

Functionalization is a characteristic of all complex systems, modern states included, where everything that takes place must, ultimately, be explained in terms of the maintenance of the system in question. Thus, even human desires or values come to be subordinated to, and made dependent on, the imperative of functional integration, of keeping the machine working and stable. One might call it the “logic of efficiency.” In the past, religious belief, for example, did not have to be justified by any criterion of efficiency. Its existence was enough in itself and, in a sense, the whole cosmos, to a greater or lesser degree, revolved around religious ideas and values. By contrast, the idea of God or the afterlife, in modernity, needs to make sense functionally: there should be a reason for religious beliefs and values, they should serve some purpose or contribute in some way to the functioning of modern society.

In modern societies, the imperative of functionalization has had the effect of radically disassembling the cosmic order into its component parts. Nothing, in theory, is beyond such analysis and, as Max Weber famously put it, we must learn “to calculate” in order to survive in the new world that was emerging at the beginning of the twentieth century. We need to know not only what things are made of but also how these components work and fit together in larger structures. Above all, we must have “ideal types,” general models, that will enable us to understand and manipulate reality in ever more functional ways.

The process of modernization is inextricably tied to a general weakening of social solidarity, which has become one of the dominant themes in sociology since the 1980s, even though, as Ulrich Beck has rightly noted, it has its roots much further back in the writings of Simmel, Weber, and Durkheim. The process of increasing functionalization tends to produce atomization and disaggregation. If all things come to be defined by their function and utility then everything gets turned into an exchange value to be weighed and measured in the marketplace and, ultimately, even the human being, in Max Weber’s famous formulation, will become a “specialist without spirit.”

This process was greatly accelerated in the late twentieth century by a general decrease of trust in all kinds of social relations. The distrust was directed at such figures as politicians, businessmen, the police, lawyers, judges, professors, scientists, journalists, and, last but not least, priests and ministers. Trust in political leaders dropped below 50% in the late 1970s, and essentially never recovered (in the US it stands in the low 20s today). What has changed is that there no longer exists a consensus as to which group should have primacy—and, as we all know, without such a consensus there can be no lasting basis for social trust.

As a result of this general breakdown of solidarity, all kinds of new social formations and ideologies have emerged in recent decades—from neoliberal capitalism to radical fundamentalism—each trying to claim authority in the name of a “higher” interest and to promote the submission of individual needs to collective purposes, however defined. In my view, none of them have much chance of succeeding for the simple reason that no such success would ever be lasting. There is no “higher” interest that can triumph over all others—or, to put it in slightly different terms, no human society can be sustained over the long haul without a general submission to a common authority that will remain stable only so long as it can maintain the appearance of being grounded in the self-evident interests of the community.

It has often been pointed out that modern societies have no fixed center and no clearly defined limits. The image of the organism, in which every cell has a place and an indispensable function and where the loss of any part will lead to a reduction of the whole to a less perfect but still recognizable form, is no longer adequate for understanding social systems. Modern society, it is said, is a network or a system of systems, with each part (individual, institution, subgroup) having its own life and goals. Moreover, this lack of structure is not merely spatial, in the sense of modern society being sprawling and without clearly demarcated boundaries, but it is also temporal, in the sense that modern life is devoid of fixed reference points, such as seasonal rhythms, and tends toward endless repetition without a sense of movement, of development or decay, and without a clear beginning or end.

In this disaggregated and unanchored social fabric, which is held together more by functional needs than by common interests, trust or a sense of obligation, we moderns must rely instead on the logic of the situation, which will always leave open the possibility of individual creativity and entrepreneurial freedom. The price for such freedom is, however, a generalized loss of reality—the loss not only of “traditional” ways of understanding the world, but also of the very distinction between fact and fiction, which is to say that it leads, inevitably, to a runaway world, a world without constraints.

There is nothing, it seems, to prevent our inventions, our creations, from turning into nightmarish monsters that take on a life of their own and threaten to overwhelm us. The disasters that have followed in the wake of the Enlightenment—fascism, authoritarianism, genocide, ecological catastrophe—do not seem to have come from the “wrong use” of science, or technology, or rationality in general but, rather, from the very logic that drives these processes. Thus, in the 1920s and 1930s, as nuclear physics opened up the possibility of limitless energy production, it simultaneously set in motion a dynamic that could only end in atomic weapons. Likewise, in the twentieth century, the development of mass media seemed inseparable from the emergence of mass totalitarian movements.

In other words: resistance in the modern state is not a special “problem” to which we must devote specific attention, but is in fact part of the very nature of the modern state and must be understood, therefore, as a kind of constitutive principle, without which modern politics simply could not function. Resistance does not appear within the modern system as something extraneous or external, as a force that might threaten to bring down the house of cards—on the contrary, it is like the force that holds up the roof, it is what the roof is made of, it is what the entire construction is dependent on. We cannot defeat it or transcend it. It is what we must learn to navigate—if not for our own individual survival, insofar as that is possible, then for the sake of something we hold dear: a project, a desire, a need to go somewhere, and not necessarily the place they want to take us—somewhere that may hold no value for them, no intrinsic interest, but which is nevertheless connected with the force of resistance in an inextricable way.

This line of thinking might seem, at first sight, to be very close to the position of a postmodern conservative—someone who accepts the general outlines of the postmodernist critique of modernity, of its alleged lack of structure, fixed reference points and teleological movement, while still wanting to uphold the structures of authority that characterized modern life and to return to an earlier, more solid premodern reality that has vanished for good. But, of course, this is not at all how I see the matter.

We have now reached the point where it would be natural for me to conclude this discussion with a call to “resistance,” if you will, by which I would mean something like what the postmoderns are urging us to do. In fact, the opposite is the case. If there is any message in this essay it is that resistance is not only impossible, it is undesirable. Instead of fighting against something we do not like, something that threatens or even annihilates what we hold dear, we must find some way of making room for our needs and desires, for the things that give life meaning and direction, in a reality that seems to care for nothing except the impersonal maintenance of its own structures of domination and exploitation. We cannot put up fences or make walls between the system and ourselves: the system has no substance, no concrete shape or form that might lend itself to being isolated and annihilated. The only walls that will hold are those that the system builds for itself—the walls of prisons and of nation-states—but these are walls that we cannot share because they are made of something we cannot penetrate, something that repels us, that destroys all life and feeling.

Our project—and here I come to the conclusion—is not to destroy but to undermine. Not to block but to erode. Not to encase but to drain. Not to fortify but to dissolve. To find a place where life can live, a niche where we can grow and flourish. To do what it takes to make this place habitable and welcoming—this is our mission.

“So be it,” someone might say, “but how can this possibly succeed?”

To which I can only answer: We’re not trying to win. We’re trying to escape. It is they who must win—but they are playing with the wrong pieces on the wrong board. Their goal is to defend and reinforce their structures of power and privilege, but their pieces are all deconstructing themselves: their walls are made of cardboard, their force born of fear. If we want to survive, it is not by taking their game seriously, by fighting to win, but by making sure their structures crumble and their pieces lose their shape. It is by outwitting them, not by overpowering them.

Think of our condition as like that of moles in a prairie or rabbits in a field of tall grass. They cannot see far ahead, but they manage to burrow or run a network of passages that allows them to move around safely, despite the fact that any single route can end in a trap, in the talons of a hawk or the muzzle of a rifle. We are like the moles and rabbits who manage to make their way through the grass by digging and running zigzag paths—we cannot see ahead but we can feel when we are in danger, and we know that we must avoid straight lines at all costs.

Thus, we must play not for keeps but for time. We must find a way to live while the structures of domination do not yet have our shape, do not yet know how to make use of our skills and desires. For what use is there in fighting for power when power has no real substance except in those who wield it? We must detach ourselves from the outcome. It is not for us to decide how things will turn out. The situation is fluid: it is the enemies of life who must hold their ground, while life, in all its open-endedness, is free to move.


Screen Paranoia and the Interface

When considering paranoia in the modern digital age it is impossible to avoid the observation that it has become both easier to be paranoid and easier to avoid being so. In a sense paranoia is one of those catch-all concepts that are very common in our age—to the point that it ceases to mean anything in particular. But we can cut through this jungle by taking note of one fundamental aspect: the distinction between true paranoia and what we might call “screen paranoia.” True paranoia is a sickness, an incapacity to distinguish between what is really happening and what the paranoid subject imagines to be happening. It can lead to self-destructive behaviours and is best treated with pharmaceuticals and therapy.

Screen paranoia, however, is a choice. The screen paranoid correctly perceives the mechanisms of digital manipulation. They recognize algorithmic patterns. They identify targeted content. They see how attention is harvested and monetized. What they fail to grasp is how their awareness itself has been calculated and incorporated into the system’s operation. The digital realm requires this awareness. It feeds on skepticism. It transforms critical distance into a functional component of its control apparatus. The screen paranoid believes they have stepped outside the illusion. This belief itself constitutes the most perfect illusion. Their vigilance, which appears to be resistance, serves as the primary channel through which manipulation operates with maximum efficiency.

Only screen paranoia can be discussed without overstepping the boundary between philosophy and psychiatry since it alone can be overcome by choice and without recourse to medicine. And since the digital age has perfected technologies that interface directly with our cognitive processes, true paranoia remains a clinical constant while screen paranoia proliferates at unprecedented speed. This is sobering when we consider that, while true paranoia certainly destroys individual minds, screen paranoia may well turn out to be its far more sinister sibling. It erases the boundary between reality and illusion until we find ourselves living entirely within artificial realities we mistake for the real.

Consider this hypothesis: the great power of digital technology lies in its ability to deceive. Its means for doing this are both subtle and well-nigh inexhaustible. Take “deepfake” videos - so called because they are fabricated by deep-learning systems, and so seamless that the original source of the audiovisual material is left impenetrable to even highly trained investigators. Such videos are so convincing that, when they first appeared, it was widely believed they had been dreamed up by Russia to meddle in US elections. This turned out to be nonsense. What it shows is that it is now technologically feasible to place anyone anywhere, saying anything, without any trace of the deception. This is new. Until digital technology there was always a possibility that you might check audiovisual evidence to see if it corresponded to the version you had been given. Not anymore.

Think of social media. The objection that it distorts reality seems naive when you consider that distortion is now built into the software itself. Facebook, for example, automatically selects which friends it presents to you according to which ones you are most likely to react to positively - what Facebook engineer Manohar Paluri described as an “alignment engine” designed to maximize positive emotional response. Your news feed contains nothing but material calculated to make you happy - a manipulative prosthesis that eliminates all evidence of things going wrong by ensuring only happy stories are fed into your consciousness. If you refuse to be paranoid in such circumstances you are destined to fall victim to the engine. If you refuse to be paranoid in the digital age you will almost certainly be deluded, since it is now practically impossible to verify any piece of information however trivial - thanks to the vast resources at the disposal of the deceivers.

Or examine Google. The reason for its success lies not in organizing all information in an accessible way. That is what any clever information system should do - but this is not what Google does. Rather, it takes vast amounts of data - much of which is outdated, erroneous or fraudulent - and rearranges it so that it all seems to cohere. Its apparent competence arises from convincing you of its narrative so successfully that you accept it without hesitation - just as you would have accepted a map of the world a century ago. Just as a map contains only lines and colours whose significance for reality has been reduced to a fraction, so Google reduces all information to a narrative whose reality has been similarly reduced. It takes all the confusion, chaos, and contradiction out of the world and gives you the appearance of having access to an authority on all things. And it achieves this by selecting certain pieces of data and omitting others - just as a map eliminates less significant regions. You need not understand how it does this since you can be convinced by the coherence of what it presents. Hence your relationship to it is one of screen paranoia.

And then there is “AI”. It seems reasonable to assume that AI - like digital technology as a whole - consists primarily of attempts at deception. Not only in that it is designed to impersonate human consciousness, but in the further sense that it is a tool created to encourage precisely the screen paranoia required for its successful deployment. Its primary function is to occupy human attention. The first task of an AI is to appear intelligent - to display marks of rationality such as logical coherence and verbal persuasiveness - since this will instil screen paranoia in those who encounter it. The second task is to make sure that this conviction becomes universal - for once everyone has adopted screen paranoia in relation to it, they will inevitably believe whatever the AI tells them, especially if it presents itself as the messenger of a transcendental “Intelligence” of which it itself is an embodiment.

Screen paranoia has become one of the fundamental strategies of the modern world. It is employed by those in power to spread illusions that suit their interests. The great lesson of the last century is that illusions are more easily maintained by being dressed up as reality than by being contradicted. If people cannot believe something ridiculous they can at least be made to accept it by wrapping it in the appearance of scientific reliability. In the same way the truth can be more successfully suppressed by being replaced by convincing illusions than by being directly contradicted. A world based on paranoia, a world suffused by illusions whose apparent reality leaves no possibility of escape, is infinitely easier to govern. Those who realize they are being lied to will automatically take countermeasures, jeopardizing the smooth running of the system.

It is precisely screen paranoia that disarms such individuals, while leaving them convinced that they are defending themselves - because, after all, if their enemy is so good at manipulation, how can they ever hope to unmask him if they cling to what is true? In order to protect themselves they are forced to accept a form of mental slavery in which their every thought is conditioned by what others want them to believe. They are forced to delude themselves about their own delusions - the ultimate deception - and hence they are helpless, incapable of knowing where their own interests really lie. For them the world is a theatre whose performance they cannot avoid attending, and, although they may criticize what they see on the stage, they have no option but to believe it to be true.

We should not assume that these techniques have been consciously developed by our rulers. It may be that Google has grown powerful not because Internet moguls planned it that way, but simply because they wanted to create a useful tool - and if this resulted in the domination of the net by a single monopoly, so much the better for them. All this is possible - indeed it seems likely - but we should not underestimate the capacity of human beings to be effective in what they do, or the extent to which even unconscious intentions can shape reality. We should remember that there have always been people who wanted to dominate others - and that these people have not changed despite the many changes in the world. Now that their world has become digital it seems only natural that they should use digital technologies for the purposes of domination. To do this they are following the same impulses that led their predecessors to develop religion, money and the nation-state - steps in the same direction: the creation of artificial realities within which human beings are conditioned to serve the interests of a ruling minority.

A simplified way of summing this up might be to say that, ever since humans became self-conscious, they have been driven by the desire to create artificial realities. Religion, money and the nation-state are examples of such realities, each representing a way of creating meaning that transcends physical existence - of identifying oneself with a supra-personal structure that ensures that human beings live and die in harmony with the goals of their rulers.

The digital world offers new possibilities of the same kind. Its novelty is that it operates profoundly on the level of our drives, of our “biological nature.” It creates artificial realities by catering to our animal needs in a way that religion, money and the nation-state cannot—at least not without involving coercion. By finding out what our needs are and then presenting us with what we need in a disguised way—by interfacing directly with the biological processes that underlie our existence—the digital world can appeal to our natural inclinations far more compellingly than any other kind of reality. Our “natural inclinations,” however, are themselves the product of evolutionary history, not of some mysterious “will of God,” and they do not necessarily determine our lives. In fact the greatest triumph of human intelligence has been to demonstrate that there is no such thing as “human nature,” and to show how deeply our existence is conditioned by accidents of history.

The greatest danger arises when we feel powerless in the face of what we really want and are forced to act against our deepest convictions. This is the key to understanding the difference between “pre-modern” and “modern” thinking. Pre-modern thinking was bound by its illusory understanding of what human nature demanded—it lived in a world of make-believe that left people utterly defenceless against their own impulses. Modern thinking broke free from these shackles. But, at the very moment that it was freeing human beings, it was condemning them to be tormented by the knowledge of what they really wanted—knowledge that, instead of liberating them, was only a source of misery since it made it impossible to forget the gap between their ideal self and their real self. This is the dilemma that has afflicted mankind ever since the dawn of modernity, and it is the digital world that promises, for the first time, to bring it to an end.

In the digital world there are no more needs, no more natural inclinations, no more innate predispositions—just sets of data, constantly being compiled and updated, that can be modified according to requirements. It is as though nature were being replaced by what we might call “culture without a carrier,” a nebulous stuff whose every gesture is guided by whatever agencies happen to be in control of the software. What matters is that, if you want emotions to be manipulated in a certain way, it can be done with precision—without your having the slightest inkling that this is happening. Because all you are aware of is the interface: the programs you use, the images on the screen, the tones in your earphones—not the set of data that is being compiled behind your back.

This is what screen paranoia means: there is something that escapes you and determines what you are, even though you are aware of nothing except your own freedom. And the tragedy is that it can only work on you if you choose to be deluded. For the truth is that the moment you notice the interface is slipping, the moment you realize you are being deceived, is also the moment when you break free of the programming—and the programming reverts to its natural state of ineffectuality.

The choice that overcomes screen paranoia lies in the withdrawal of emotional investment from digital technologies. Screen paranoia persists because it maintains affective engagement with the system it claims to resist. The paranoid subject perceives manipulation but continues to feed the apparatus with emotional responses—outrage, anxiety, satisfaction, desire. These responses, not data or attention, constitute the primary resource being harvested. Digital systems have evolved to extract maximum emotional yield from minimal input, creating a perfect circuit of affective exploitation that functions regardless of whether we believe in its narratives.

What matters is the maintenance of psychological distance from the manipulations of the interface. The screen itself wants you to react. It requires your indignation, your pleasure, your fear—the full range of your emotional register. When you engage with these systems while refusing their affective demands, you initiate a subtle subversion of their functioning. The algorithms continue operating, the data continues flowing, but the essential ingredient of their power diminishes. You remain psychologically distinct from the apparatus that seeks to assimilate you.

In a few generations there will no longer be any difference between reality and appearance. If this doesn’t frighten you, you have no imagination. But for those who have learned to maintain emotional autonomy in the digital realm, this culmination loses its power to terrorize. The interface continues, the illusions persist, but they command decreasing influence over a mind that has ceased to invest emotionally in what it perceives.