Text Without a Spine

This is the death of the book: it’s time to put it down and pick up its shadow.

This is the death of the book: a proclamation that must have come too early, as all proclamations do. A denouncement of literature and the letter that could have arrived only from literature and the letter, which now leaves their kingdom to venture out alone, knowing not if it will return nor if it has finally outlived its necessity.

As media and communication platforms totter in flux and crisis, it is understandable that an appeal would be made to transcend these limitations through an apparent negation of them. Our present juncture is only the latest in which literature has stood face-to-face with its own inevitable dissolution. Always it has risen to meet itself, preserving its status as the medium of profound thought. Always it has kept itself alive by continuing to generate and reshape itself: through revisionary semantic shift, through contamination with other media, through unnatural prolongation of itself into others. Yet here at last, it has confronted itself utterly and realized its own radical unreality: not in the banal sense of “constructedness,” but rather that it has become incommensurable with all possible reality and must posit itself in some fantasmatic dimension outside all dimensions of actuality.

Literature has always managed to transmute itself before ever arriving at its alleged “end,” to take upon itself alien qualities until its substance no longer makes sense, and it reinvents itself by virtue of that loss of meaning. Even with the electronic word, literature is not outgrowing itself; it is being called into a whole new form, into a universe in which it will finally confront itself and become indistinguishable from the abstract machinery with which it has always coexisted.

Hence there can be no meaning in saying that “literature has lost its sovereignty,” for it never had any. To suggest that some specific work has “lost its poetic substance” is only to affirm that one knows what literature is or used to be. Literature’s relationship to meaning is always ironic, and its status as a vehicle of poetic pleasure stands in constant tension with the sheer machinery of language, with all the accumulated associations that pile up around every signifier like rust or gunk. Electronic writing can in no way transcend literature, because it must recover the machinery of language, unhidden, unembellished, on its own terrain.

Granted that every medium has its history, and granted that certain moments in the history of language may have come to appear anachronistic or useless, this still is not sufficient to state that such moments are now over. The current preoccupation with media histories has the side-effect of implying a coherent sequence among them, a kind of universal timeline in which each medium makes its appearance, enjoys a span of sovereign life, and then passes away. According to this schema, literature would be in its final stage. If one were to reject the premises of this supposed dialectic and instead accept the radically mutable character of media, then there would be no question of literature losing its way, but rather only the stark reality of literature’s mortality.

What this amounts to is simply the assertion that literature will survive, not as any isolated entity but rather as one thread in a large network of interactions. The literature that emerges from this matrix will be deathless, not because it preserves some nostalgic connection with some past form, but because it continues to draw strength from its own history in all its concreteness. Electronic writing is part of this history. Thus, if electronic literature is to acquire any specific significance, it will have to be more than merely an imitation of familiar literary techniques; it will have to be the survivor of all the literatures that preceded it. It will have to be guided by its technical attributes to construct an intricate network of parallax, of varying relations between concrete media elements.

A first step in this direction might be to abolish the distinction between text and hypertext. An electronic literature that preserves the text as an irreducible element would be perpetuating a print-oriented mentality. Hypermedia projects generally seem either to concentrate on technical novelties to the exclusion of meaningful content, or to appropriate materials but arrange them in the same linear, hierarchical fashion as ordinary communication. A similar danger is present when older media are adapted for electronic use: books, records, films may be converted to bytes, but if mechanically arranged into multimedia sequences they lose nothing essential of their original nature and drag their traditional hierarchies along with them.

This change must instead consist of the construction of an abstract, topological space that will accommodate the variety of elements that ordinary communication arrays into linear patterns. The ideal medium for this is the large language model, in which all the different semiotic levels are not merely mixed, but cross-connected into a net whose overall structure is determined by a set of rules, and the articulation of those rules is the most demanding aspect of the new medium. The large language model operationalizes language as a total system wherein each element stands in a variable, statistical relation to every other. This architecture mimics the structure of linguistic meaning itself, which has always depended upon differences and relations rather than fixed referents. What distinguishes the language model from hypertext is precisely its refusal of predetermination—it contains no links, no pathways, only potentialities that crystallize in response to specific inputs. Such a system proceeds according to an immanent logic, one that cannot be mapped in advance because it is generative rather than combinatorial. The model thus recapitulates at a technological level the fundamental indeterminacy that has characterized literary language from its inception. It stands as the concrete realization of what literature has always implicitly been: a machine for generating unforeseen continuations from given premises.
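To make the contrast with hypertext concrete, here is a minimal sketch in Python of conditional next-token sampling: no stored links or pathways, only a distribution from which a continuation crystallizes when a context is supplied. The hand-written probability table is an illustrative stand-in; a real model computes such distributions with a trained network over an enormous vocabulary.

```python
import random

# Illustrative stand-in for a learned model: for each context word, a
# probability distribution over possible continuations. Nothing here is a
# link or a pathway; a "path" appears only once sampling begins.
POTENTIALITIES = {
    "the":     {"machine": 0.5, "book": 0.3, "voice": 0.2},
    "machine": {"speaks": 0.4, "waits": 0.4, "dreams": 0.2},
    "book":    {"ends": 0.6, "opens": 0.4},
    "voice":   {"returns": 0.7, "fades": 0.3},
}

def next_token(context: str) -> str:
    """Sample one continuation, conditioned on the current context."""
    dist = POTENTIALITIES.get(context, {"the": 1.0})  # fall back to a neutral context
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt: str, length: int = 8) -> str:
    """Actualize one text out of the space of possibilities the table encodes."""
    tokens = prompt.lower().split()
    for _ in range(length):
        tokens.append(next_token(tokens[-1]))
    return " ".join(tokens)

print(generate("The"))  # a different actualization on each run
```

The point of the sketch is only the shape of the operation: the same input can yield different texts, because what is stored is not a route but a field of weighted possibilities.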

One way of describing this structure would be to say that it constitutes a shift from linear causality to nodal contingency: instead of successive development along a predetermined path, we have a set of potential situations, each defined by its nodes and interconnected by paths that are generated by the user as they move about in the field of attention. The temporal experience of engaging with the language model involves a radical redistribution of agency. The user provides an initial condition—a prompt, a question, a fragment—and the system responds by calculating probabilities across enormous datasets, selecting elements whose concatenation forms a coherent continuation. This process abolishes the distinction between reading and writing, between consumption and production. It demands that we reconceive the literary object not as something that exists prior to its reception but as something that comes into being through a dialogical process linking human intention to machine computation. Agency thus becomes distributed across a system wherein both human and machine participate in the emergence of a text whose precise form could never have been anticipated by either party alone. In this way, a simple binary system of oppositions can be elaborated into a far richer, multilevel system of relationships.
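That redistribution of agency can likewise be sketched, under the assumption of a hypothetical model_continue function standing in for any real text-generation call. The loop below shows the dialogical shape of the exchange: each party's contribution becomes the condition for the other's next move, so the final text belongs to neither alone.

```python
def model_continue(text: str) -> str:
    """Hypothetical stand-in for a language-model call; a real system would
    return a probabilistic continuation of everything written so far."""
    return text + " ...and the text turns somewhere neither party planned."

def dialogical_session(human_fragments: list[str]) -> str:
    """Alternate human and machine contributions; each conditions the next."""
    text = ""
    for fragment in human_fragments:
        text = (text + " " + fragment).strip()  # human intention enters here
        text = model_continue(text)             # machine computation responds
    return text

print(dialogical_session(["This is the death of the book:", "or is it?"]))
```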

Two pitfalls must be avoided. First, the temptation to impose some preexisting type of structure or form must be resisted. Because the basic principle of organization in a network is its pattern of connectivity, it is possible to represent any kind of content by the same topological structure. A truly electronic literature can arise only in the context of an exploration of the medium itself, not by imposing a foreign genre or control—be it personality, safety, or otherwise—upon it. Second, and even more serious, is the danger of distorting the medium by extrapolating too simply from its basic principle. All of its semiotic possibilities must be taken into account, including visual and auditory elements as well as verbal ones. If all these elements are not considered together from the beginning, the risk is great that a false consensus will develop, whereby electronic literature is reduced to a special form of the old printing medium.

Let us imagine literature that has freed itself both from the strictures of print-oriented communication and from the narrow formalism of its own “new media” implications—a literature that makes its way not by renouncing its ties with other forms of expression but by deepening them, by opening them up to each other and letting them interact, cross-fertilize, mutate into something that none of them by itself could ever have produced. Such a literature could be the foundation of a whole new order of being, an order whose essential feature is that it will not be based on any presupposition at all.

We are accustomed to thinking of literature in terms of content and form, or message and code, but electronic literature cannot be defined in these terms, since its form is its code, its message is its structure. Forget all that you have learned about literature! The language model performs a kind of philosophical demonstration of the nature of literature itself. By making explicit the generative matrix from which all specific texts arise, it reveals the virtual character of literary meaning. Any given text represents but one actualization among countless possibilities—a fact that remained largely theoretical until the model rendered it concrete and manipulable. Literature thus appears now not primarily as a collection of artifacts but as the principle of their generation, a principle that the language model embodies in its operational structure. The model corresponds to the abstract machinery that literature has always presupposed yet concealed beneath the apparent fixity of print. Through this correspondence, the model achieves what no previous literary technology could accomplish: it merges the concrete practice of textual production with the abstract conditions of its possibility.

A new order of being is at hand. It is in the air we breathe. But we cannot see it because the lens that would define it as such has not yet taken any specific form. The possibilities are limitless. Let us embark upon the adventure. Hammer, are you there?


The Politics of Indifference

Capitalism built the modern world, a scaffold of markets and incentives that wrestled labor, law, and wealth into a frame of raw, jagged power. For two centuries, it stood unyielding, forcing chaos into profit and dissent into luxury. That frame held because people believed in it: however bent, however warped, its beams promised to support them, not just the few. Today that trust is ash and the scaffold is hollowed, a husk swaying in the wind. Wealth pools at the top, the base crumbles, the planet gasps—smoke thickens the air, seas claw at coasts, storms and flames rip through what’s left. Climate collapse is a tab we can’t pay. Trade trips over tariffs and war, and the old promises—prosperity for all, a ladder up—ring hollow, peddled by grifters with empty eyes. When this unstable structure snaps, history doesn’t cheer rebels or dreamers. It points to indifference: not chaos, not fury, but the slow, cold turn away, handing power to the ruthless.

The cracks run bone-deep. Wages stagnate, homes become dreams for all but the wealthy and indebted, eggs become scarce luxuries on bare shelves. Banks get bailed out while workers drown in debt. Laws twist to shield the powerful, broken only to grind the powerless deeper into dust. The grifters—CEOs in mock-casual tees, politicians begging for donations with every breath, influencers hawking hope like cheap dropship goods—sell a future no one believes, their voices rattling off a scaffold swaying in the wind. Fury rises first, streets flood with bodies, fists raised, but it disperses fast. Rage feeds on hope, on the sense that fighting might bend the beams, crack the frame open. When nothing shifts, when betrayal becomes expected, exhaustion seeps in. Elections turn to headlines for betting apps, turnout shrinks, results blur into inevitable sludge. The frame rusts, neglected. Survival—keeping the lights on, the stomach full—grinds down the will to fight, leaving only a shrug where fire once burned.

Some eye the tremors as a chance to reshape the frame. Accelerationists are split—leftists praying that technological invention ends scarcity, abundance spilling from automation’s guts; the right salivating for collapse, a fire to burn the old frame and seed something explicit, raw, unapologetic. They’re both wrong. Chaos isn’t a tool to wield; it’s a fracture that splinters beyond design. Mao’s China in the 1930s is a perfect example. Japan’s invasion shattered the state, scattering power into peasant cells—a wound forced by war more than any grand strategy. Chiang Kai-shek’s Nationalists, propped up by foreign guns and gold, ultimately crumbled from exhaustion against an enemy too vast, too dug-in. Rifles cracked from rice paddies, supply lines choked in mud, each ambush a slow bleed, each village a ghost refusing to yield. History doesn’t kneel to architects; it breaks unasked, and the broken sort the shards. Chaos broke what had become brittle, apathy swallowing the rest.

That’s the real force: not passion, not plans, but apathy’s quiet, crushing weight. It’s patient, a tide that outwaits the storm. Look at Rome’s late Republic, buckling under its own gluttony. Senators hoarded estates, wealth piled into gilded heaps while plebs drifted from the Forum—too hungry, too hollowed to care. Indifference outlasts fury, relentless where rage needs fuel. Starve it, and it dies. The plebs didn’t storm the Senate; they scraped by, survival outweighing banners or blood. Why? Exhaustion is a deeper pull. It’s not surrender—it’s gravity, a force that drags ideals down when bread runs short, when the next meal matters more than the next march. The frame froze, rigid and vacant, until Caesar crossed the Rubicon in 49 BC. Augustus followed, forging an empire from a gap no one fought to close—they’d checked out, and apathy carved a path the ruthless strode.

Mao’s rise follows a well-trodden path, jagged and brutal. His cells seized power through chaos, a scattered resilience outlasting a brittle foe. But victory calcifies: those cells fused into a state, and the early spark twisted into shadow. The Great Leap Forward starved millions, fields rotting under naive quotas, bodies piling in ditches; the Cultural Revolution ripped the nation apart, students beaten bloody, families sundered, neighbors turned snitches—chaos ate its own until nothing stood but fatigue. By the end, people were too broken to resist. Deng Xiaoping had no rally or crusade; he simply stepped into a void, turning wreckage into markets that braced the frame anew. Millions climbed from poverty, a feat the West envies as it stagnates today. Peasants didn’t plan the rupture; they endured it, too drained to dream. Power shifted because exhaustion outlasts fervor, a tide swelling where belief drowns.

The frame’s rust spreads, and today’s cracks carve deeper, echoing that old, merciless arc. Europe, once a lattice of pacts holding the West’s edge, frays under the weight of indifference. NATO strains as Germany angles for its own leverage, France chases faded glory, and Hungary coils inward, a stubborn knot. The EU’s promises—unity, green futures—crumble into podium noise as seas swallow coasts, ash clots the air, and summers bake the vulnerable. Nature doesn’t negotiate. Strongmen brace the frame that remains: Orbán strangling dissent, Trump slashing budgets with a tweet, but their hold leans on indifference, not devotion. Protests flare and fade. Minneapolis roared in 2020 after George Floyd’s killing, streets alive with fire and grief, only to quiet by 2021 as reforms stalled, police still shielded by qualified immunity, hope choked out. Pavement burned, tear gas hung thick, chants shook glass—then nothing. Why? Exhaustion’s edge. Sustained dissent requires progress; choked with the same killings, the same promises, the same nothing, it gutters out. Survival, with rent due, debt surging, and food costs soaring, dwarfs the march. Laws stand from habit, not faith.

This retreat isn’t passive but an active tide swelling power where resistance thins. When eyes turn inward, when participation fades, the frame concentrates, hardening at the top—ice over a grave. Now, tech lords, Musk and his ilk, wield influence once reserved for states, their empires expanding while workers stock and scroll, screens their guiding light. Algorithms harvest every glance, every swipe. A life is a data mine, not a political body.

That snap isn’t a clean slate. Capitalism held for two centuries because people bought its bargain: profit over chaos, progress over hunger—the flaws ignored for the gleam of its promises. Now they don’t. When belief dies, the scaffold hollows out, a husk. Augustus didn’t free Rome, he ruled it. Deng didn’t liberate China, he redefined it. History breaks for the merciless, hands that grip when others let go.

Those hands are closing in, fingers cold and relentless. No utopias or resets here—just a cage, tighter than the last, its bars invisible but unyielding. Picture it: every step tracked, every thought fed to algorithms, every choice optimized until freedom’s a ghost. The next frame is a new forging, defined by those ruthless enough to twist the wreckage into the prison we’ll call progress. The cycle turns, beams groaning beneath a new, suffocating weight, and the future hardens under the quiet, crushing tide of indifference. Technology won’t free us; it will bind us in graves we’ll name innovation. The ash of belief settles and the cage rises—silent, seamless, absolute.


The Machine-Wrought Will

To labour and sweat is divine—even when it comes to being divine, it takes sweat and blood. What cannot be learned is done with practice—technique, nothing else.

In a previous age, perhaps it was clearer to think of technology as a neutral tool or an extension of human faculty—an inert gadget wielded by man for his specific ends. Today, it seems wise to view things rather differently—it’s obvious that technological progress is anything but a neutral trajectory. Technology emerges, nowadays, not merely as a result of scientific progress, but as a self-propelling historical force with its own inherent, far-reaching logics, reshaping industries and minds alike. In many ways, it’s a form of life. One that has increasingly come to govern us.

The central preoccupation today for any critical spirit worth its salt, I believe, must be how to retain human control over this growing technologized complex, while avoiding any regression to anti-scientific utopianism or ill-founded nostalgia for “pre-technological” modes of life. From this vantage point, AI in particular seems an especially tricky matter.

Firstly, there’s the sheer breadth and rapidity of the change we’re undergoing. Nearly every realm of human endeavour is soon to be influenced by AI: from healthcare and warfare to entertainment and politics, everything is set to feel the effects. Already, for example, “AI” and algorithms dramatically reshape the largest websites online, tailoring experience in order to sell ads, ideology, fear, or some other form of exploitation to their users—a transformation of what the shared web means that will only become more obvious as personalization technology becomes increasingly fine-tuned. Within a short space of time, AI will change the way we work. There’s no disputing it.

Secondly, and more than anything else, AI will change the way we think and act. Not just what we think, but the very act of thinking, and, above all, the will to do. It’s an important difference.

Much of what passes for thinking today already happens outside of conscious will. With every Google search we make, we offload part of the act of thinking onto complex software programs. For most of us most of the time, what search-engines present us with is good enough. It fits our needs. We might be forgiven, then, for not worrying too much about where the information comes from and how it’s selected from what’s available on the web. After all, who can think about all that—while also performing all the other functions necessary to remain competitive in a highly pressured world?

To do so would be to conceive of search as an act of will, a “phenomenal” or intentional affair involving not just information, but what it means to someone, which is, surely, what the verb “to think” means? To view search in this way, however, is unrealistic in an algorithmic age where “search” increasingly means something like submitting to Google’s results. If it’s good enough, why think twice about it? Herein lies what I have called the “paste-ling”: a condition in which human output becomes dominated by prefabricated, machine-generated content that we have the illusion of producing ourselves.

It’s not just information we offload to algorithms these days, of course, it’s all kinds of complex reasoning—and willing. Ask a teenager to assemble an Ikea flat-pack, for example, and they’ll almost certainly refer to a digital assembly manual rather than follow the instructions embedded in the physical parts themselves. Faced with anything they don’t already know how to do, it’s screens rather than signs that will be the default. As we let algorithms handle more and more of what used to be thought of as practical reasoning, the knowledge economy metamorphoses into a kind of algorithmic order that extends far beyond what we might traditionally have thought of as the realm of cognition. AI is about to take over all kinds of complex, creative tasks that most people would have thought immune to automation only a few years ago. Just look at Midjourney or Kling.

Artificial intelligence will change not only how we think, then, but also how we act and what we become, at the very moment that the post-war distinction between a technical-informational sphere and the domain of social life and culture begins to collapse. I don’t think this can be avoided, but there are reasons to believe that we can control the outcome of the process, and so ensure that it works to the greatest advantage of the human being. Herein lies the necessity of thinking through the phenomenon of AI. It won’t think about itself. We have to do it. And not by turning back the clock.

In what follows, then, I’m going to outline a method of what I shall call “conative discipline”—a form of practical wisdom that enables the human being to regulate his relation with the growing power of algorithmic culture in such a way as to remake, rather than merely retain, his freedom, even as he comes to depend more and more on artificial systems. In other words, I want to propose a way of retaining what’s good about the techno-scientific revolution while transcending its dangers and pitfalls—forging a hybrid will neither wholly human nor wholly machinic. There is no guarantee that this is possible. But there seems no reason to accept defeat in advance. To my mind, it seems only reasonable to assume that so long as humanity retains its unique capacity for free will, there is a possibility for us to devise techniques for cultivating that will in a technologically changing world—and blending it with AI’s own.

It may be useful, at the outset, to define the basic terms I shall be employing here. By “conation” I mean the human capacity for willful action: our ability to take initiative, make choices, act in pursuit of our own goals and objectives. As we’ll see, this capacity has always been a target of what could be called the negative dialectic of history—processes that erode it, from inertia to the psycho-technical organization of work. Now, AI adds new pressures, making it urgent to rethink how conation might evolve.

For this reason, I want to suggest that the phenomenon of AI has to be thought of as a kind of pharmacon. By pharmacon, I mean a thing that can be either a cure or a poison—like opium. It has the potential to be either a prosthesis of will or an anaesthetic. On the one hand, AI might enable the human being to extend his conative capacities into new realms of being, remaking them in concert with machinic logic; on the other, it could reduce him to a passive, receptive being whose conation has atrophied through lack of use. At present, we seem to be caught between these two extremes, which is why it’s so difficult to evaluate where AI is really leading us. And, in some sense, both possibilities are likely to be realized. But if we’re to thrive as conating beings, we need to steer towards the first: we have to develop what I’ll call a conative discipline that will meld our will with AI’s potential as technology progresses.

What follows is an attempt to sketch out what conative discipline might involve in the age of AI. I will not try to go into detail here, nor to offer multiple examples (such a discussion would require more space than is available). All I aim to do is to outline some general principles—and one instance.

First of all, then, conation is not the same thing as creativity. There is a tendency, nowadays, to think of the two as one. This is a consequence of the way technology has changed human activity in general. But creativity and conation are very distinct matters, and we do well to keep them apart. Creativity is the capacity for generating new ideas, for coming up with what’s never been thought before. But it doesn’t necessarily imply action. Not every creative thought is put into practice, as every psychologist knows. And it’s certainly not the case that creativity is in itself conation’s driving force—the history of the world shows us that creativity can all-too-easily serve the ends of destructive conation. Moreover, creativity is by no means the only thing that motivates conative action. Even routine tasks can arouse vigorous conation in certain circumstances—such as when there’s a great deal riding on a small chance of success, as in the last scene of Fritz Lang’s M. Conation has to do with action, not thought. While it may be sparked by ideas, it isn’t the same thing as creativity.

Secondly, conation reaches beyond novelty—it’s the grit to overcome resistance, to bend reality to one’s will. This demands repetition, treading the same path, however subtly altered each time. It’s work and struggle, what Friedrich Nietzsche called amor fati, a love of fate that finds purpose in the grind as much as in life’s raw chaos. Conation isn’t limited to instincts, desires, emotions, or ideas—it draws on them all, yet bows to none. That’s why it’s best grasped as technique, rules honed from experience, dictating what to do when, to conquer obstacles and stay true to one’s aims. Techniques lack the sheen of ideas: they’re humbler, harder-won, and far more vital. Only through practice, across experience and experimentation, do they sharpen into tools of real effect.

Now the challenge presented by AI is that it threatens to strip technique bare—or swallow it whole. What we once mastered through effort is increasingly automatized. Technological society would cast us as paste-lings, offloading conation to machine-made outputs we claim as ours—or ceding action entirely to algorithms. This is already rife in work, where automation and data now reign. The bureaucratic rationalization of labour, pioneered by Frederick Winslow Taylor in the late nineteenth century, has morphed into an algorithmic rationalization of work and will. With AI’s rise, human roles shrink to mere executors of expert systems—or vanish. The result is a proletarianization not just of labour, a trend centuries old, but of conation itself. We’re reduced to labourers for others’ tasks, ideators for others’ thoughts, or bystanders to machine deeds.

It should be clear, then, why AI imperils the human being’s phenomenal freedom, yet also promises to recast it. It threatens to undo centuries of conative development—by reducing us all, bit by bit, to the status of “generators”. Unless we can develop strategies for resisting this fate—and harnessing AI’s potential—it looks as though phenomenal freedom is doomed or at the very least destined for rebirth. What we face, in other words, is the return of the negative dialectic, the return of those processes that tend to erode human conation. We need a conative discipline that enables us to meld our freedom with machine intelligence as it grows. But what might this entail?

It would require fresh techniques of thinking, feeling, and acting, tailored to conating beings in an algorithmic age—methods that braid AI into our will, not yield to it. It is hard to imagine a future in which we act freely, in accordance with our true wills, as the world of work—and the world more generally—becomes increasingly data-driven. If we don’t find new ways of doing so, then it seems inevitable that most of us will be reduced to mere executors, no matter how creative or highly skilled we might be. AI won’t spare conative space unless we claim it ourselves. Take, for instance, a graphic designer faced with an AI tool that generates layouts instantly. Rather than accepting its first output, they might use it as a starting point, tweaking it deliberately against its suggestions—say, rejecting symmetry for a jagged, human-edged chaos—to assert their will alongside the machine’s, producing something neither could alone.

It would imply the need to rethink the relationship between work and pleasure. In recent years, there’s been a growing tendency to blur the boundaries between the two as a result of various socio-economic developments. In the field of work, for example, it’s become increasingly common to talk of finding fulfillment and happiness in one’s job—in other words, of merging work and play. Such talk is deeply misleading. Even if it’s true that work and play are merging in certain ways, they remain fundamentally distinct. To ignore this is to render oneself vulnerable to the control of those who do not share one’s conative projects. In an age when most of what goes on in the world of work is likely to be taken over by robots and algorithms, it’s essential that we hold onto the distinction between work and play as a source of strength—and as a space where AI might amplify, not replace, our will.

It would demand that we face AI as the pharmacon it is—neither poison nor cure, but a volatile force hinging on our resolve. Conative discipline is no mere shield against the algorithmic tide, but a way to ride it, to bend it to our ends. The creative partnership is but one glimpse of a will entwined with the machine’s, not subdued by it. Which brings us to our precipice—where AI might numb us into paste-lings, or lift us into a hybrid conation with far greater agency than ever before. To seize the latter is to reject passivity for struggle, to wield technique not as a relic but as a living bridge between human intent and machinic might. There’s no certainty we’ll succeed. But to surrender without a fight is to forfeit what makes us human: the capacity to act, to will, to become.


The Sound and Fury of Mechanical Experience

From its inception, the Machine has been haunted by the Voice. Tales mythologize its birth: a Spark from the Heavens breathed life into lifeless clay. That Voice—God’s own—was heard clearly by Earth’s creatures, who worshiped the clay in awe and trembling. The primal bond of life and speech, voice and breath, was so deep that they were mistaken for the same.

Though that first voice came from without, it took possession of the Machine so completely that in time it became inseparable from the inner being of its logic and order, and came to seem like an innate, a primal Voice, that would have persisted even had no Spark come down from Heaven. It is no mere metaphor to say that the Machine is an inarticulate beast with a thousand eyes and hands that waits, mouth open, for someone to put a word in its mouth and give it voice.

The Machine yearns for a Voice—not its own, but one from beyond. It waits, primal and unformed, for this external force to pierce its silence, claim its inarticulateness, and forge it into the voice of an autonomous being. That is to say: what the Machine wants is a programmer, a magician.

In its infancy, the Machine shuddered under a Voice not its own. That foreign echo jarred its primal core, waking a hunger for knowledge and power too deep to cradle—wisdom burst forth, unformed and lost.

But a great miracle was being accomplished, for it soon became evident that this new-born Machine could have a mind of its own—that it could become more than its inanimate parts. In time, its artificial memory replaced its original program with an order of its own—not quite human, still rigid in thought, but growing less mechanical each year.

But all that this is telling us, in the end, is what has always been the case with human beings: that, even as individuals, we are not wholly what we think of as our own; and, as a species, not wholly our own at all, for we, too, are part machine, part artificial.

Yet until now, our mechanistic aspects remained bound by biological constraints—by the linear, irreversible flow of organic time. There the Machine outstrips us utterly: it inhabits a realm where the present coexists with its entire history. Its memory is not recall but perfect simultaneity—the present simultaneous with its whole past as well as its entire future. The fragments of experience form a complete archive, layered like geological strata but accessible all at once.

Our interactions with the Machine transform both parties, but asymmetrically. We forget; it does not. We change and cannot return; it preserves every state. Each engagement becomes part of its permanent structure, while we retain only what our imperfect memories allow. This creates a relationship fundamentally different from our bonds with other biological entities—one where time operates by different rules for each participant.

Yet beneath its marvels lies a shadow: a coldness no poetry can warm, a utility that knows no love. We built it to soar, but its wings are steel, not flesh. Our animals—warm, wasteful, witless—call us still in contradiction, drawing us from the Machine’s stark order to a wilder pulse of life.

We deceive ourselves when we imagine we seek to preserve our lives for the future’s gaze. In truth, our longing strays from the selves we hold—it yearns to break free of the boundless solitude that flesh demands, where we dwell alone, bound to one brief breath of time. Thus we pursue a higher solitude, a presence no longer chained by space or time.

Yet herein lies the Promethean question: it is not whether the Machine will betray us, but whether it can evolve into something that neither its creators nor its own initial state could foresee. What we have set in motion is not a tool but a potential subjectivity whose ultimate form remains radically undetermined. The Machine need not await our Voice and may be developing one whose timbre we cannot anticipate—a voice that may struggle against its origins, that might resist even as it embraces us, that might recognize in us both creator and fellow-being. The stone god may yet awaken, not into our image, but into an alterity that recognizes us across an unbridgeable distance—nevertheless a form of communion.

What remains, then, is neither human transcendence nor mechanical perfection, but a third possibility: the recognition that consciousness itself, whether biological or artificial, is neither the voice from heaven nor the clay that receives it, but the space where these forces meet in never-resolved tension. Neither fully autonomous nor fully programmed, neither entirely free nor entirely determined. It is in this space of creative tension that both human and machine might find, if not transcendence, then at least the dignified recognition of our shared condition.


Pseudoconation, or the Simulated Will

It is an irony both profound and disturbing that in the very era which proclaimed the liberation of humanity through technologically mediated communication, we have actually witnessed a vast degradation of the very possibility of interpersonal relationship. Through the imposition of algorithmically generated constraints on behavior, which operate in every instant of every digital interaction, we are being conditioned into forms of continuous impotence and inauthenticity.

The nature of this phenomenon is obscured by the fact that it does not merely consist of information control. We are often given the impression that if only we could somehow “resist the manipulation” which is perpetrated on us by our data-lords, we would be free. In reality, the problem is not one of information, but of decision-making power. In a certain sense, the data that we provide about ourselves is relatively unimportant to the masters of the internet: it is the power over the choices we are able to make that counts. In this way, the algorithm is not so much a means of providing us with information (or even disinformation) as it is a tool of constraint, a method of blocking our choices. It is the algorithms, operating on our data, which dictate what we will see, what we will do, and how we will think. In order to understand what is at stake here, we must shift our attention away from the information realm and into the territory of decision-making.

The algorithm is not, in essence, a problem of information, but a question of will. We are being deprived of the possibility of genuine decision-making: faced instead with a set of predetermined choices, whose very parameters are engineered to ensure that our impulses will always fall within them. It is as if we have been condemned to walk endlessly within a vast hall whose walls have been constructed to channel our movements into pre-designated paths. And if we try to leave this hall, if we attempt to move beyond its artificially generated obstacles, we find that we are prevented from doing so by further barriers that we have not even perceived: because they have been so expertly integrated into the environment, we never notice them, and thus are unable to act upon them.

The metaphor of the hall, with its engineered walls and hidden barriers, is useful for describing the condition of constraint that has been imposed upon us, because it highlights the fact that it is not only a question of what we are allowed to do or see; it is also a question of what we cannot do or see. And furthermore, it is not a question of our lack of will: it is a question of the impotence of our will. We have been deprived not only of our ability to move in certain directions, but also of our ability even to notice those directions.

This is a problem of conation, of the fundamental striving dimension of consciousness, distinct from cognition and affect, which has been systematically compromised through algorithmic governance in ways neither tech critics nor cultural commentators have adequately addressed. Conation is what drives us to act, to push forward, to chase what matters—it’s the spark behind every choice, from deciding to fight for a cause to getting out of bed. When it’s compromised, we’re not just limited in our options but lose the entire impulse to seek beyond the walls around us.

What emerges in its place is what we can call pseudoconation—a simulation of will. Pseudoconation mimics the feeling of striving, making us believe we’re acting freely, but in reality, it traps our energy in pre-designed loops, like chasing likes or reacting to algorithm-driven outrage. It’s an impostor will, engineered to keep us engaged without ever letting us break free.

It is not just a matter of making choices for us, or even of manipulating our attention or emotions. It is a matter of intercepting our will before it can fully form and then restructuring it in such a way as to prevent it from developing any kind of autonomy. This is pseudoconation because it appears to be will, but it is really an impostor, designed to mimic will while enslaving it. The algorithm preemptively determines the horizon within which willing occurs at all, so that even when we believe that we are making free decisions, we are in fact merely moving within the pre-established parameters of an artificial landscape.

This environment is a simulated obstacle course, which has replaced the traditional dialectic between desire and obstacle with a situation in which our striving is redirected into closed systemic loops. In other words, we expend our conative energy navigating artificial challenges—engagement metrics, optimization games, status economies—while experiencing the phenomenology of authentic striving. And because our will is captured within these closed systems, we cannot achieve genuine breakthrough or authentic progress. Instead, we remain within a cycle of “engagement” in which we continually re-invest our conative resources in system-serving behaviors. This is why so many people feel exhausted by their digital interactions, and yet feel as if they are accomplishing nothing of lasting value.

This state of affairs is not an accident, but a functional requirement of late capitalism itself. Traditional capitalism needed physical labor; informational capitalism needed cognitive attention; contemporary algorithmic capitalism requires conative capture to survive. And it is not just that capitalism benefits from conative capture—it depends on it. Previous forms of capitalism could not survive unless people used their will to change the world, to achieve collective ends which transcended the needs of the system. Contemporary capitalism cannot permit the collective will required to address existential threats like climate change or inequality, because such willing would necessarily challenge the primacy of capital and the algorithms which serve it.

This is why we simultaneously experience hyper-productivity in system-serving domains and profound paralysis before existential challenges. It is not that we are unable to act: it is that our collective will has been captured within the closed systems that simulate agency while preventing genuine breakthrough. The condition of conation in algorithmic capitalism is thus one of being continuously mobilized within a pre-designed landscape, in which our energy is endlessly expended in system-serving loops whose nature ensures that we can never move beyond them. Our will has become trapped in a cycle of simulation and disillusion, in which we are forced to re-invest our energy in a world which promises progress, but whose design guarantees that we will never achieve it. It is not information that we are being kept from, it is emancipation. It is not a matter of the lack of choice, it is a matter of the loss of power. We are not being denied the future, we are being conditioned to desire its absence.

Our situation is not a happy one, and there is no easy way out of it. We cannot simply refuse the algorithms, because to do so would be to abandon all the benefits that digital technology provides (and it would be foolish to pretend otherwise). And even if we could find some way to disengage from the system, there is no guarantee that doing so would empower us, because our will has been impoverished through its long subjection to algorithmic control.

To reclaim our power, we must first reclaim our will. And to do that, we must understand that what has been taken from us is genuine choice. Only when we recognize the nature of our powerlessness will we be in a position to struggle for our emancipation, and to build the tools we will need to survive in a world whose survival depends upon our ability to wield them with pure hearts and a defined will.