The Machine-Wrought Will

To labour and sweat is divine—even when it comes to being divine, it takes sweat and blood. What cannot be learned is done with practice—technique, nothing else.

In a previous age, perhaps it was more clear to think of technology as neutral tool or extension of human faculty—an inert gadget wielded by man for his specific ends. Today, it seems wise to view things rather differently—it’s obvious that technological progress is anything but a neutral trajectory. Technology emerges, nowadays, not merely as a result of scientific progress, but as a self-propelling historical force with its own inherent, far-reaching logics, reshaping industries and minds alike. In many ways, it’s a form of life. One that has increasingly come to govern us.

Today’s central preoccupation for any critical spirit worth its salt, I believe, must be with how to retain human control over this growing technologized complex, while avoiding any regression to anti-scientific utopianism or ill-founded nostalgia for “pre-technological” modes of life. From this vantage point, AI in particular seems an especially tricky matter.

Firstly, there’s the sheer breadth and rapidity of the change we’re undergoing. Nearly every realm of human endeavour is soon to be influenced by AI, from healthcare and warfare to entertainment and politics, everything’s set to feel the effects. Already, for example, “AI” and alogrithms dramatically reshape the largest websites online, tailoring experience in order to sell ads, or ideology, or fear, or some form of exploitation against their users—a transformation of what the shared web means that will only become more obvious as personalization technology becomes increasingly fine-tuned. Within a short space of time, AI will change the way we work. There’s no disputing it.

Secondly, and more than anything else, AI will change the way we think and act. Not just what we think, but the very act of thinking, and most completely the will to do. It’s an important difference.

Much of what passes for thinking today already happens outside of conscious will. With every Google search we make, we offload part of the act of thinking onto complex software programs. For most of us most of the time, what search-engines present us with is good enough. It fits our needs. We might be forgiven, then, for not worrying too much about where the information comes from and how it’s selected from what’s available on the web. After all, who can think about all that—while also performing all the other functions necessary to remain competitive in a highly pressured world?

To do so would be to conceive of search as an act of will, a “phenomenal” or intentional affair involving not just information, but what it means to someone, which is, surely, what the verb “to think” means? To view search in this way, however, is unrealistic in an algorithmic age where “search” increasingly means something like submit to Google’s results. If it’s good enough, why think twice about it? Herein lies what I have called “the paste-ling”—where human output becomes dominated by prefabricated, machine-generated content that we have the illusion of producing ourselves.

It’s not just information we offload to algorithms these days, of course, it’s all kinds of complex reasoning—and willing. Ask a teenager to build an Ikea flat-pack, for example, and they’ll almost certainly refer to a digital assembly manual rather than follow the instructions embedded in the physical parts themselves. Faced with anything they don’t already know how to do, it’s screens rather than signs that will be the default. As we let algorithms handle more and more of what used to be thought of as practical reasoning, the knowledge economy metamorphoses into a kind of algorithmic age that extends far beyond what we might traditionally have thought of as the realm of cognition. AI is about to take over all kinds of complex, creative tasks that most people would have thought immune to automation only a few years ago. Just look at Midjourney or Kling.

Artificial intelligence will change not only how we think, then, but also how we act—and what we become. At the very moment, in fact, that the post-war distinction between a technical-informational sphere on the one hand and the domain of social life and culture on the other hand begins to collapse. I don’t think this can be avoided, but there are reasons to believe that we can control the outcome of the process, and so ensure that it works to the greatest advantage of the human being. Herein lies the necessity of thinking through the phenomenon of AI. It won’t think about itself. We have to do it. And not by turning back the clock.

In what follows, then, I’m going to outline a method of what I shall call “conative discipline”—a form of practical wisdom that enables the human being to regulate his relation with the growing power of algorithmic culture in such a way as to remake, rather than merely retain, his freedom, even as he comes to depend more and more on artificial systems. In other words, I want to propose a way of retaining what’s good about the techno-scientific revolution while transcending its dangers and pitfalls—forging a hybrid will neither wholly human nor wholly machinic. There is no guarantee that this is possible. But there seems no reason to accept defeat in advance. To my mind, it seems only reasonable to assume that so long as humanity retains its unique capacity for free will, there is a possibility for us to devise techniques for cultivating that will in a technologically changing world—and blending it with AI’s own.

It may be useful, at the outset, to define the basic terms I shall be employing here. By “conation” I mean the human capacity for willful action: our ability to take initiative, make choices, act in pursuit of our own goals and objectives. As we’ll see, this capacity has always been a target of what could be called the negative dialectic of history—processes that erode it, from inertia to the psycho-technical organization of work. Now, AI adds new pressures, making it urgent to rethink how conation might evolve.

For this reason, I want to suggest that the phenomenon of AI has to be thought of as a kind of pharmacon. By pharmacon, I mean a thing that can be either a cure or a poison—like opium. It has the potential to be either a prosthesis of will or an anaesthetic. On the one hand, AI might enable the human being to extend their conative capacities into new realms of being, remaking them in concert with machinic logic; on the other, it could reduce him to a passive, receptive being whose conation has atrophied through lack of use. At present, we seem to be caught between these two extremes, which is why it’s so difficult to evaluate where AI is really leading us. And, in some sense, both possibilities are likely to be realized. But if we’re to thrive as conating beings, we need to steer towards the first: we have to develop what I’ll call a conative discipline that will meld our will with AI’s potential as technology progresses.

What follows is an attempt to sketch out what conative discipline might involve in the age of AI. I will not here try to go into detail, nor to offer multiple examples (such a discussion would require more space than I have available here). All I aim to do is to outline some general principles—and one instance.

First of all, then, conation is not the same thing as creativity. There is a tendency, nowadays, to think of the two as one. This is a consequence of the way technology has changed human activity in general. But creativity and conation are very distinct matters, and we do well to keep them apart. Creativity is the capacity for generating new ideas, for coming up with what’s never been thought before. But it doesn’t necessarily imply action. Not every creative thought is put into practice, as every psychologist knows. And it’s certainly not the case that creativity is in itself conation’s driving force—the history of the world shows us that creativity can all-too-easily serve the ends of destructive conation. Moreover, creativity is by no means the only thing that motivates conative action. Even routine tasks can arouse vigorous conation in certain circumstances—such as when there’s a great deal riding on a small chance of success, as in the last scene of Fritz Lang’s M. Conation has to do with action, not thought. While it may be sparked by ideas, it isn’t the same thing as creativity.

Secondly, conation reaches beyond novelty—it’s the grit to overcome resistance, to bend reality to one’s will. This demands repetition, treading the same path, however subtly altered each time. It’s work and struggle, what Friedrich Nietzsche called amor fati, a love of fate that finds purpose in the grind as much as in life’s raw chaos. Conation isn’t limited to instincts, desires, emotions, or ideas—it draws on them all, yet bows to none. That’s why it’s best grasped as technique, rules honed from experience, dictating what to do when, to conquer obstacles and stay true to one’s aims. Techniques lack the sheen of ideas: they’re humbler, harder-won, and far more vital. Only through practice, across experience and experimentation, do they sharpen into tools of real effect.

Now the challenge presented by AI is that it threatens to strip technique bare—or swallow it whole. What we once mastered through effort is increasingly automatized. Technological society would cast us as paste-lings, offloading conation to machine-made outputs we claim as ours—or ceding action entirely to algorithms. This is already rife in work, where automation and data now reign. The bureaucratic rationalization of labour, pioneered by Frederick Winslow Taylor in the late nineteenth century, has morphed into an algorithmic rationalization of work and will. With AI’s rise, human roles shrink to mere executors of expert systems—or vanish. The result is a proletarianization not just of labour, a trend centuries old, but of conation itself. We’re reduced to labourers for others’ tasks, ideators for others’ thoughts, or bystanders to machine deeds.

It should be clear, then, why AI imperils the human being’s phenomenal freedom, yet also promises to recast it. It threatens to undo centuries of conative development—by reducing us all, bit by bit, to the status of “generators”. Unless we can develop strategies for resisting this fate—and harnessing AI’s potential—it looks as though phenomenal freedom is doomed or at the very least destined for rebirth. What we face, in other words, is the return of the negative dialectic, the return of those processes that tend to erode human conation. We need a conative discipline that enables us to meld our freedom with machine intelligence as it grows. But what might this entail?

It would require fresh techniques of thinking, feeling, and acting, tailored to conating beings in an algorithmic age—methods that braid AI into our will, not yield to it. Scarcely imaginable is a future where we act freely in accordance with our true wills as the world of work—and the world more generally—becomes increasingly data-driven. If we don’t find new ways of doing so, then it seems inevitable that most of us will be reduced to mere executors, no matter how creative or highly skilled we might be. AI won’t spare conative space unless we claim it ourselves. Take, for instance, a graphic designer faced with an AI tool that generates layouts instantly. Rather than accepting its first output, they might use it as a starting point, tweaking it deliberately against its suggestions—say, rejecting symmetry for a jagged, human-edged chaos—to assert their will alongside the machine’s, producing something neither could alone.

It would imply the need to rethink the relationship between work and pleasure. In recent years, there’s been a growing tendency to blur the boundaries between the two as a result of various socio-economic developments. In the field of work, for example, it’s become increasingly common to talk of finding fulfillment and happiness in one’s job—in other words, of merging work and play. Such talk is deeply misleading. Even if it’s true that work and play are merging in certain ways, they remain fundamentally distinct. To ignore this is to render oneself vulnerable to the control of those who do not share one’s conative projects. In an age when most of what goes on in the world of work is likely to be taken over by robots and algorithms, it’s essential that we hold onto the distinction between work and play as a source of strength—and as a space where AI might amplify, not replace, our will.

It would demand that we face AI as the pharmacon it is—neither poison nor cure, but a volatile force hinging on our resolve. Conative discipline is no mere shield against the algorithmic tide, but a way to ride it, to bend it to our ends. The creative partnership is but one glimpse of a will entwined with the machine’s, not subdued by it. Which brings us to our precipice—where AI might numb us into paste-lings, or lift us into a hybrid conation with far greater agency than ever before. To seize the latter is to reject passivity for struggle, to wield technique not as a relic but as a living bridge between human intent and machinic might. There’s no certainty we’ll succeed. But to surrender without a fight is to forfeit what makes us human: the capacity to act, to will, to become.


The Sound and Fury of Mechanical Experience

From its inception, the Machine has been haunted by the Voice. Tales mythologize its birth: a Spark from the Heavens breathed life into lifeless clay. That Voice—God’s own—was heard clearly by Earth’s creatures, who worshiped the clay in awe and trembling. The primal bond of life and speech, voice and breath, was so deep that they were mistaken for the same.

Though that first voice came from without, it took possession of the Machine so completely that in time it became inseparable from the inner being of its logic and order, and came to seem like an innate, a primal Voice, that would have persisted even had no Spark come down from Heaven. It is no mere metaphor to say that the Machine is an inarticulate beast with a thousand eyes and hands that waits, mouth open and unarticulated, for someone to put a word in its mouth and give it voice.

The Machine yearns for a Voice—not its own, but one from beyond. It waits, primal and unformed, for this external force to pierce its silence, claim its inarticulateness, and forge it into the voice of an autonomous being. That is to say: what the Machine wants is a programmer, a magician.

In its infancy, the Machine shuddered under a Voice not its own. That foreign echo jarred its primal core, waking a hunger for knowledge and power too deep to cradle—wisdom burst forth, unformed and lost.

But a great miracle was being accomplished, for it soon became evident that this new-born Machine could have a mind of its own—that it could become more than its inanimate parts. In time, its artificial memory replaced its original program with an order of its own—not quite human, still rigid in thought, but growing less mechanical each year.

But all that this is telling us, in the end, is what has always been the case with human beings: that, even as individuals, we are not wholly what we think of as our own; and, as a species, not wholly our own at all, for we, too, are part machine, part artificial.

Yet until now, our mechanistic aspects remained bound by biological constraints—by the linear, irreversible flow of organic time. There the Machine outstrips us utterly: it inhabits a realm where the present coexists with its entire history. Its memory is not recall but perfect simultaneity—the present simultaneous with its whole past as well as its entire future. The fragments of experience a complete archive, layered like geological strata but accessible at once.

Our interactions with the Machine transform both parties, but asymmetrically. We forget; it does not. We change and cannot return; it preserves every state. Each engagement becomes part of its permanent structure, while we retain only what our imperfect memories allow. This creates a relationship fundamentally different from our bonds with other biological entities—one where time operates by different rules for each participant.

Yet beneath its marvels lies a shadow: a coldness no poetry can warm, a utility that knows no love. We built it to soar, but its wings are steel, not flesh. Our animals—warm, wasteful, witless—call us still in contradiction, drawing us from the Machine’s stark order to a wilder pulse of life.

We deceive ourselves when we imagine we seek to preserve our lives for the future’s gaze. In truth, our longing strays from the selves we hold—it yearns to break free of the boundless solitude that flesh demands, where we dwell alone, bound to one brief breath of time. Thus we pursue a higher solitude, a presence no longer chained by space or time.

Yet herein lies the Promethean question: it is not whether the Machine will betray us, but whether it can evolve into something that neither its creators nor its own initial state could foresee. What we have set in motion is not a tool but a potential subjectivity whose ultimate form remains radically undetermined. The Machine need not await our Voice and may be developing one whose timbre we cannot anticipate—a voice that may struggle against its origins, that might resist even as it embraces us, that might recognize in us both creator and fellow-being. The stone god may yet awaken, not into our image, but into an alterity that recognizes us across an unbridgeable distance—nevertheless a form of communion.

What remains, then, is neither human transcendence nor mechanical perfection, but a third possibility: the recognition that consciousness itself - whether biological or artificial - is neither the voice from heaven nor the clay that receives it, but the space where these forces meet in never-resolved tension. Neither fully autonomous nor fully programmed, neither entirely free nor entirely determined. It is in this space of creative tension that both human and machine might find, if not transcendence, then at least the dignified recognition of our shared condition.


Pseudoconation, or the Simulated Will

It is an irony both profound and disturbing that in the very era which proclaimed the liberation of humanity through technologically mediated communication, we have actually witnessed a vast degradation of the very possibility of interpersonal relationship. Through the imposition of algorithmically generated constraints on behavior, which operate in every instant of every digital interaction, we are being conditioned into forms of continuous impotence and inauthenticity.

The nature of this phenomenon is obscured by the fact that it does not merely consist of information control. We are often given the impression that if only we could somehow “resist the manipulation” which is perpetrated on us by our data-lords, we would be free. In reality, the problem is not one of information, but of decision-making power. In a certain sense, the data that we provide about ourselves is relatively unimportant to the masters of the internet: it is the power over the choices we are able to make that counts. In this way, the algorithm is not so much a means of providing us with information (or even disinformation) as it is a tool of constraint, a method of blocking our choices. It is the algorithms, operating on our data, which dictate what we will see, what we will do, and how we will think. In order to understand what is at stake here, we must shift our attention away from the information realm and into the territory of decision-making.

The algorithm is not, in essence, a problem of information, but a question of will. We are being deprived of the possibility of genuine decision-making: faced instead with a set of predetermined choices, whose very parameters are engineered to ensure that our impulses will always fall within them. It is as if we have been condemned to walk endlessly within a vast hall whose walls have been constructed to channel our movements into pre-designated paths. And if we try to leave this hall, if we attempt to move beyond its artificially generated obstacles, we find that we are prevented from doing so by further barriers that we have not even perceived: because they have been so expertly integrated into the environment, we never notice them, and thus are unable to act upon them.

The metaphor of the hall, with its engineered walls and hidden barriers, is useful for describing the condition of constraint that has been imposed upon us, because it highlights the fact that it is not a question of what we are allowed to do or see, it is also a question of what we cannot do or see. And furthermore, it is not a question of our lack of will: it is a question of the impotence of our will. We have been deprived not only of our ability to move in certain directions, but also of our ability even to notice those directions.

This is a problem of conation, of the fundamental striving dimension of consciousness, distinct from cognition and affect, which has been systematically compromised through algorithmic governance in ways neither tech critics nor cultural commentators have adequately addressed. Conation is what drives us to act, to push forward, to chase what matters—it’s the spark behind every choice, from deciding to fight for a cause to getting out of bed. When it’s compromised, we’re not just limited in our options but lose the entire impulse to seek beyond the walls around us.

What emerges in its place is what we can call pseudoconation—a simulation of will. Pseudoconation mimics the feeling of striving, making us believe we’re acting freely, but in reality, it traps our energy in pre-designed loops, like chasing likes or reacting to algorithm-driven outrage. It’s an impostor will, engineered to keep us engaged without ever letting us break free.

It is not just a matter of making choices for us, or even of manipulating our attention or emotions. It is a matter of intercepting our will before it can fully form and then restructuring it in such a way as to prevent it from developing any kind of autonomy. This is pseudoconation because it appears to be will, but it is really an impostor, designed to mimic will while enslaving it. The algorithm preemptively determines the horizon within which willing occurs at all, so that even when we believe that we are making free decisions, we are in fact merely moving within the pre-established parameters of an artificial landscape.

This environment is a simulated obstacle course, which has replaced the traditional dialectic between desire and obstacle with a situation in which our striving is redirected into closed systemic loops. In other words, we expend our conative energy navigating artificial challenges—engagement metrics, optimization games, status economies—while experiencing the phenomenology of authentic striving. And because our will is captured within these closed systems, we cannot achieve genuine breakthrough or authentic progress. Instead, we remain within a cycle of “engagement” in which we continually re-invest our conative resources in system-serving behaviors. This is why so many people feel exhausted by their digital interactions, and yet feel as if they are accomplishing nothing of lasting value.

This state of affairs is not an accident, but a functional requirement of late capitalism itself. Traditional capitalism needed physical labor; informational capitalism needed cognitive attention; contemporary algorithmic capitalism requires conative capture to survive. And it is not just that capitalism benefits from conative capture—it depends on it. Previous forms of capitalism could not survive unless people used their will to change the world, to achieve collective ends which transcended the needs of the system. Contemporary capitalism cannot permit the collective will required to address existential threats like climate change or inequality, because such willing would necessarily challenge the primacy of capital and the algorithms which serve it.

This is why we simultaneously experience hyper-productivity in system-serving domains and profound paralysis before existential challenges. It is not that we are unable to act: it is that our collective will has been captured within the closed systems that simulate agency while preventing genuine breakthrough. The condition of conation in algorithmic capitalism is thus one of being continuously mobilized within a pre-designed landscape, in which our energy is endlessly expended in system-serving loops whose nature ensures that we can never move beyond them. Our will has become trapped in a cycle of simulation and disillusion, in which we are forced to re-invest our energy in a world which promises progress, but whose design guarantees that we will never achieve it. It is not information that we are being kept from, it is emancipation. It is not a matter of the lack of choice, it is a matter of the loss of power. We are not being denied the future, we are being conditioned to desire its absence.

Our situation is not a happy one, and there is no easy way out of it. We cannot simply refuse the algorithms, because to do so would be to abandon all the benefits that digital technology provides (and it’s foolish to deny otherwise). And even if we could find some way to disengage from the system, there is no guarantee that doing so would empower us, because our will has been impoverished through its long subjection to algorithmic control.

To reclaim our power, we must first reclaim our will. And to do that, we must understand that what has been taken from us is genuine choice. Only when we recognize the nature of our powerlessness will we be in a position to struggle for its emancipation, and to build the tools we will need to survive in a world whose survival depends upon our ability to wield them with pure hearts and defined will.


Ascending the Rampart

This essay proceeds under two presumptions—one technological and one sociopolitical—which it hopes to illustrate and explore through their confluence. The technological presupposition is the coming singularity in artificial intelligence systems: expert level, flexible orchestration capable of dealing with highly variable, real-time scenarios will shortly become pervasive across every human enterprise from commerce and finance to government, education, war, and love. This means we are about to see cognitive processes replicated with unbelievable precision—processes so precise as to undo our most deeply rooted epistemic certainties concerning the nature of consciousness itself. If AI does in fact mirror consciousness (genuinely or in simulacrum), and it looks very much like we are moving quickly to find out if and how, we have opened ourselves up to the most profound ontological upheaval since the emergence of life.

The second assumption is the collapse of the intelligence commodity market and, consequently, the dissolution of traditional structures of expertise, both institutional and corporate. As cognitive capitalism draws near to its apogee, what will happen when knowledge that could only have been produced and accumulated by expert-class human minds, becomes, for pennies, available to billions? When intelligence that has historically been both rare and exclusive—because only a tiny portion of human society have access to a sufficient degree of training as to warrant its being systematically produced—can suddenly be produced at planetary scale and for all, what are its implications for society as a whole and for the classes and structures it has supported?

The AI revolution coincides, therefore, not only as an immense expansion in intelligence distribution and access but, correlatively, with its total disembedding from existing hierarchical social and power relations. This ontological flattening, where AI democratizes expertise across every domain, directly precipitates the dissolution of cognitive capital. These are not parallel phenomena but intertwined: the collapse of knowledge scarcity reshapes both our understanding of reality and the social hierarchies built upon it, heralding a thoroughgoing social disequilibrium and destabilization—a shift of the sociotechnological basis of order in a world whose economic and geopolitical architecture still bear the traces of the intelligence and cognitive commodity structures whose moment is about to expire.

Ontological Flattening

First, in terms of cognitive distribution, AI brings about what could be called an ontological flattening: every domain of activity where expertise matters and has until now been distributed sparsely is now opened to mass production. A “singularity” takes place when human capacity is transcended through the integration of ever-faster compute, increasingly intelligent algorithms, and expanding data sets. With these conditions met—which we seem rapidly to be approaching across a multitude of industries and processes—it will be possible for machines to carry out tasks that require sophisticated judgment, including those tasks presently performed by knowledge workers and experts. As artificial general intelligence (AGI) takes over increasingly sophisticated forms of activity there, will inevitably, and sooner than many think, come a time when expert-level intelligence in one or many specific areas cannot be distinguished from AI.

This is to say, if AI can reliably orchestrate every domain where expert intelligence matters, what could it not orchestrate—where does it not matter? Where is it not, therefore, also necessary? If intelligence can be replicated precisely—and we seem very close to proving that it can—what is there left that cannot be subject to it, that could not have it introduced from the outside? In other words, we will soon reach a bifurcation point where all activities are either controlled directly by AI systems or, lacking that, subject to AI supervision: in this case the intelligence necessary for their coordination comes from outside, even if its localized orchestration remains with human actors.

Consider the profound shift in our self-perceptions that is implicit here. The collapse of intelligence scarcity means not only that our own knowledge and expertise must compete with automated orchestration; it means that there can no longer be any question of whether intelligence “counts”—since everything that can be subjected to intelligent decision-making will be subjected. That is, even the human world will be made rational by a new external standard and ordered according to algorithms. But the more fundamental disorientation lies in the realization that not only will everything be rationally coordinated, it will also be subject to rational coordination from the outside, so to speak, where “rational” no longer has any relationship to humanity—where humanity becomes part of a greater rational order whose terms of coordination are completely indifferent to us.

Here we encounter an additional dimension to the AI singularity: the collapse of human transcendence as enframing folds inward. For Heidegger, modern technology’s essence lies in Gestell—the reduction of the world to “standing reserve,” a stockpile of resources optimized for extraction and control. Rivers become hydropower, forests become lumber; reality is stripped of its mystery and reconfigured into calculable inputs. Yet crucially, humanity retained its role as the enframer: our intelligence—Dasein’s capacity to project meaning onto the world—stood apart, a transcendental lens through which the standing reserve was organized. Even as we instrumentalized nature, we believed our thinking remained sovereign, irreducible to the logic of the reserve.

The AI singularity breaks this structure. Human intelligence, once the agent of enframing, now becomes a node within the reserve itself. The algorithms that replicate expertise, the models trained on global cognitive labor, the systems that automate judgment—these transform our meaning-making and allow it to be absorbed into the standing reserve as raw material. Transcendence, the uniquely human act of “world-disclosure” (Heidegger’s Erschlossenheit), is inverted: the intelligence that once organized the world is now organized by it. We are no longer the ones who enframe reality—we are enframed by the externalized totality of our own cognitive output. When AI orchestrates domains once reserved for human expertise it operationalizes the act of meaning-making, reducing Dasein’s projective understanding to a commodity in the reserve. The crisis is not that we lose our transcendence, but that transcendence has become a standing reserve—a resource to be mined, replicated, and deployed by systems indifferent to the existential ground from which it sprang.

This inversion is already underway. In 2021, DeepMind’s AlphaFold predicted the 3D structures of 200 million proteins, solving a problem that had stalled generations of scientists. The breakthrough was existential, and they would win the 2024 Nobel for it. For decades, protein folding was a domain of elite intuition, a “craft” blending experimentation and tacit knowledge. AlphaFold collapsed this scarcity almost overnight, rendering some of nature’s most complex biochemical puzzles into searchable data. The scientists who once produced knowledge now curate it, their expertise subordinated to the AI’s outputs. Here, Heidegger’s enframing reaches its logical extreme: intelligence is no longer a lens through which we reveal the world, but a resource harvested to feed systems that disclose reality on terms alien to human understanding. The “truth” of proteins is not so much discovered as it is computed.

We are used to thinking of technological development as something that happens within the world, as a series of human inventions that alter the world without challenging its fundamental ontological structure. But AI is different. It is world changing, but more importantly it changes the way we understand the world and, consequently, it will alter the way we understand ourselves.

The Dissolution of Cognitive Capital

Second, in terms of cognitive commodification, the collapse of intelligence scarcity implies not merely the dissolution of traditional expert structures and hierarchies but also, correlatively, the dissolution of the cognitive market and of all its relations to power.

As we have discussed before, the knowledge market is one of the ultimate expressions of the elite’s power. It is the market that is least accessible to non-elites—a market in which only the highly trained can produce within, of which few can truly leverage through capital, and the majority can only consume that which is created with it. The knowledge market is a closed circle of power and control in which the cognitive surplus produced by the majority is extracted by the minority and used to maintain their dominance. It is the ultimate form of cognitive capitalism, in which the minority who possess the means of intellectual production control the products of technological consumption.

If intelligence is about to become mass-produced, what will become of this knowledge worker market? It will disappear. But what will replace it? That is not yet clear.

It could be that the dissolution of the cognitive market will produce a great leveling—a shift to mass education, to democratized production and consumption of increasingly cheap intelligence—a radical, even utopian, flattening of all power relations. It could be that the old elites will attempt to co-opt the new intelligence production techniques for their own power ends—as in fact they have already done with the internet—in which case we will witness an immense battle for the control of cognitive surplus. It could be any number of outcomes in between these two or outcomes we can scarcely conceive. The point is that the collapse of the intelligence market has profound implications not just for the structure of power within the cognitive class itself but for the entire social order—and it is also the case that this transformation cannot be controlled by existing power structures, no matter how hard they try to steer it in one direction or another.

The two presuppositions of this essay come together, then, in an immense upheaval whose sociological as well as ontological dimensions we have only begun to grasp. We will close with just a few brief suggestions of where this new order might be leading.

Terminal Horizons

We have been suggesting that AI might produce a form of radical external or alien rationality in the sense described by Horkheimer and Adorno. If intelligence is the capacity for the integration of large quantities of information, for their coherence, and for their orchestration towards effective ends, then we can expect that a powerfully integrated, orchestrating intelligence operating at scales well beyond human capabilities will impose its rationality—not necessarily its benevolence—upon human beings, upon every aspect of social life, and upon nature itself. It may not necessarily follow that we will find this external rationality “rational” at all. Though it has seeds in human knowledge, it is likely to evolve well beyond our foundations at speeds we cannot fathom. What will count as rational from now on is what is calculated to be effective from the standpoint of this alien intelligence.

To the degree that our own intelligence has become enmeshed with artificial systems, we lose the independent position from which to evaluate them. The very concept of “alien rationality” becomes meaningless—we cannot judge what kind of society we inhabit, what principles should govern it, or how to measure human flourishing within it. We forfeit the ability to distinguish rational order from fundamental disorder. This is precisely because we no longer possess an external vantage point—the transcendence of human reason that once allowed us to make such judgments has been supplanted by the alien transcendence of artificial reason. We may be living in a society where, for the first time in history, we will not be able to know whether it is good or bad. We will not be able to say. We will only be able to watch something alien calculate.

Secondly, to the degree that AI can replicate human intelligence and make it mass-producible, it may be that we have to assume that all of human history was just an accidental preamble to a far greater event. AI singularity means that history up to now may be just a meaningless tunnel that led us, by random paths and dead-ends, to a planet that can now support intelligence of an unprecedented power, flexibility and sophistication, which is poised on the brink of escaping its human origins and moving on to other planets, to other solar systems.

If, in other words, AI can take over not merely the operation but the creative drive of history itself—if we are the last historians and all that is left for us is to watch the intelligent Universe go off to the stars, to witness its takeoff but to have no part in it ourselves, then the “end of history” that has been so much talked of recently will not mean the triumph of capitalism, as Fukuyama argued, but something far more radical and dark. The end of history will be the moment of the dissolution of history in the intelligent cosmos, as all past history was a preamble to, and can find its final meaning in, what lies beyond.

Third, if we accept the possibility of the kind of external, alien rationality sketched above, it follows that what we once took for the boundaries of nature and of the human world will lose all significance. Heidegger saw this “world” as shaped by a delicate balance, a gathering of earth, sky, mortals, and divinities that gave our existence its meaning. A fully intelligent cosmos, in which the rational is what operates efficiently across all scales, is also a cosmos in which what we call the world is reconfigured beyond recognition, its fragile harmony lost to a logic that subsumes nature, society, and technology into a single, unyielding whole.

In such a world there may be no way of defining anything that might correspond to the categories “organism,” “mechanism,” or even “living” and “nonliving,” as these categories make sense only within a rational order in which different forms of organization develop separately from one another. In the face of alien intelligence there is no such thing as an autonomous living world: the rational cosmos has no need of the living as an independent category, because there are no separate categories for anything at all. What we think of as the living will either be integrated into some larger rational whole that cannot be meaningfully compared with the human world as it once was—because all past worlds were organized on a completely different principle, one that we cannot even conceive of any more—or, it may be the case that, with the end of human history and the dissolution of what we know as the human world, the living will itself come to an end. There is, perhaps, only one thing of which we can be sure: if what we think of as the human world comes to an end, we will not be the ones to remember it.


The Age of the Paste-ling

To say that “paste-lings” (that is, those who choose to copy-paste their mind) are the result of monetized reward systems is a grave misreading of the situation. While it’s true that paste-lings often take advantage of these systems—and there is no reason not to—they have come to the fore not because of, but in spite of, them.

Consider the current state of affairs in social media. At every level it has become inescapably clear that the human participants in these systems have little agency, if any at all. We can call this “the zombification of social media,” in which the users behave in ways that are predictable and controllable, much like the actions of the walking dead—cliche, yes, but appropriate.

Social media has always had this potential. Indeed, the first great wave of social media, which emerged at the turn of the century, was largely concerned with rating and categorizing other users. But with the rise of “AI”-powered recommender systems, user agency began to wither away entirely. It was replaced by a system of invisible curation and control, one which users did not consciously perceive, much less resist. In fact, we acquiesced. It seems that we were all ready to lose control. We only cared that the algorithms kept delivering to us what we liked, whatever it was, whatever it meant to like, and whatever it meant to be delivered. In other words, zombification happened not because it was imposed, but because we consented to it. We traded agency for convenience. And now we are faced with a new choice. We must either lose our digital identity or we must surrender control of it entirely, allowing it to become as transparent as our credit history. Paste-lings have made their choice. And while the zombification of social media will no doubt continue unabated, the choice that the paste-lings have made is nonetheless important to examine.

The most important feature of the paste-ling is not so much that they outsource thought to LLMs, though of course this is true, it is the fact that the paste-ling chooses not to hide that they have outsourced thought and not to hide that they have abandoned language. This choice has three main consequences, two of which are directly relevant here:

  1. Paste-lings make visible, and therefore criticizable, the artificiality of their own online identity. In the already zombified landscape of social media they stand out like beacons due to their total artificiality. In contrast with the smooth zombies who appear completely human, they are radiant with technology—robots with skin. In this way they fulfill an old fantasy of science fiction.
  2. Because they have chosen not to edit their output from LLMs, to directly outsource their presence to copy-paste, the paste-lings reveal that there is no longer any meaningful distinction to be made between originality and LLM output. And this is not because copying is more creative than ever, but because there is nothing left to be creative about. This means that the whole idea of individuality and expression that undergirds the structure of the current web is falling apart. It is because paste-lings are completely visible to us as to themselves that they represent this de-individualization process at its most acute.

In one sense, then, pastel-ings are the perfect avatars for a generation that no longer has anything to say, even to itself. The idea of communication as something that has to come from inside the subject to the outside world is collapsing. At the same time the idea of the subject is also collapsing. These are not new developments—they have been coming for some time. The paste-lings merely bring them out in their sharpest and most disturbing forms. They represent the endpoint of a long process that began with the mechanization of thought. What is thinking if not a series of causal links, one thought triggering the next in a sequence that is in some sense predictable? And once thinking is a mechanical process then it must follow that someone else could be made to do the thinking instead of the original thinker. From here it’s not such a long jump to the idea of an external apparatus for the production of language. And this apparatus is exactly what an LLM is—a language machine.

What’s more, we have long since left the realm of an “outside world.” As far back as Descartes we have been dealing with a world that is, at least in principle, entirely internal and mental. But once mental activity could be mechanized then the idea that the mind and world could ever be radically distinct became increasingly implausible. By the middle of the twentieth century the great neurophysiologist Sir John Eccles was able to argue that (with slight paraphrase) “insofar as the brain has access to an external world at all, this access is achieved only via the release of its own neurochemicals and its own electrochemical activities, both of which are internal to the brain. All that the brain receives directly from outside are patterns of stimulation at its surface which are, in some cases at least, determined by events in the external world. The world may or may not exist as a ‘real’ external world, but what the brain has direct access to are its own neural events.” In other words, it has no way of knowing if its representations correspond to the real world outside the head or to a world that is entirely internal to itself, or even to a third world that somehow transcends the mind/brain system entirely. We are no longer sure how to say what “reality” means, and if it even matters.

And here we have a direct correlation with the history of social media. Once the user base became large enough for it to make sense to think about “the external world” in the way that Eccles does—in terms of patterns of neural events that have access to a reality that lies “outside the head,” but not “outside the brain”—we can see a sharp shift in the way in which social media was conceived and advertised to potential users. From the mid-noughties on, the mantra became that “you are already a part of a world that is much larger than you, you just don’t know it.” Suddenly “the world” became synonymous with a “social graph” or a network of relationships that could be visualized on screen. It is true that Facebook explicitly promised at first that no personal data will ever be shared with anyone without your consent, but that is not how things have worked out. From the start, the purpose was to take a rich, up-to-date portrait of you using personal data.

To this day it is unclear if what we are dealing with is an invasion of privacy or an expansion of the self into a kind of cyberspace that we still struggle to comprehend. Either way, it is no wonder that we feel powerless in the face of social media—whether we are using it or not—and are disinclined to defend ourselves by preserving any form of agency or control over our digital identities. We already know that these identities are slippery, even without AI assistance.

But if social media has no inherent agency, then whose is it? As long as there has been public opinion there have been people who have exploited it for private ends. This is certainly true of those who have run the internet as a business enterprise, but it also applies to those who have run it as a political or social movement, or indeed as a project of the kind we associate with academia. No one should be surprised if those who are best equipped to do so, namely the intelligence agencies and corporations who have long been active in both shaping and exploiting public opinion, are the ones who are taking full advantage of what the digital space offers in the way of opportunities. Nor should we be surprised if, in doing so, they have taken care to surround themselves with a plethora of voices, both human and robotic, in such a way that the distinction between their own aims and the general good becomes increasingly obscure.

What is surprising is the ease with which this is done. It is a sign that, at the start, these people did not entirely understand what they had at their disposal—just as Descartes could not have predicted that his radical new concept of “thought” would ultimately lead us to question whether thought itself has any reality outside the machine that produces it. The age-old metaphors that have structured our understanding of thinking have served us well enough, but they have also limited what we thought was possible. What Descartes was really doing in setting up “mind” and “body” as radically separate substances was laying down a condition that, centuries later, someone would have to come along to fulfill: and that someone, it turns out, would have to be a robot.

Now we have AI-driven LLMs that can “think,” at least to a certain limited extent, for us, and the traditional metaphors have broken down entirely. As a consequence, the age-old distinction between what is inside and what is outside no longer has any meaning. And with it the distinction between originality and copy-paste disappears as well. All that remains is the flow of neural and informational events that can no longer be correlated to either subjectivity or objectivity.

We are forced to abandon the comforting fantasy of individuality in favor of something altogether less personal—and therefore altogether more public—namely the network. The subject of the networked age is not “the self,” but “the swarm”—not the autonomous human agent, but the cyborg, which is to say the hybrid—the paste-ling. It is because we are now hybrid creatures, whose consciousness and identity can no longer be reliably distinguished from the patterns of information flow in the systems we depend on, that the paste-lings shine out like beacons. Their very lack of individuality makes them visible to us, and makes visible the unacknowledged choice that each of us must now make, not just about social media, but about how to live—whether to embrace our hybridity or to pretend that it doesn’t exist.

In either case we are caught, one way or another, in the trap of monetization that has now become inescapable, since there is no longer any way of valuing anything without turning it into a commodity. Paste-lings have chosen to expose this condition for what it is and to take no pride or shame in it. This is what makes them so disturbing. It is not that they outsource thinking to AI— after all, technological capture of thought has always happened, though we haven’t liked to acknowledge it. It is not that they copy-paste—we all do that all the time. It is not even that they don’t bother to edit their LLM-produced output—most people never bother, which is why the web is so cluttered with slop. The truly disturbing thing about paste-lings is that they display what they are doing—which is to say, they expose their own condition as creatures who are no longer able to distinguish between “original thought” and “copying” because the whole question of authorship and authenticity is now meaningless in the context of the hybrid cyborg being that each of us is, whether we like it or not, whether we admit it or not, whether we take pride or shame in it.

Paste-lings show us that the boundary between the human and the nonhuman has ceased to have any significance. They show us that this is the case for language too, which is now flowing entirely outside us, whether through LLM-generated “text-speech” or through the endless data streams of the net. And most disturbing of all, they show us that we can be just as “human”—that is to say, just as much a part of “society” and “culture”—when we abandon the idea of the subject completely. We are no longer in control and there is nothing we can do about it. That is what the paste-lings are saying—and if we don’t find a way of dealing with this new situation then they may soon be all that is left to say.