Ascending the Rampart

This essay proceeds under two presumptions—one technological and one sociopolitical—which it hopes to illustrate and explore through their confluence. The technological presumption is the coming singularity in artificial intelligence systems: expert-level, flexible orchestration capable of dealing with highly variable, real-time scenarios will shortly become pervasive across every human enterprise, from commerce and finance to government, education, war, and love. This means we are about to see cognitive processes replicated with unprecedented precision—processes so precise as to undo our most deeply rooted epistemic certainties concerning the nature of consciousness itself. If AI does in fact mirror consciousness (genuinely or in simulacrum)—and it looks very much as though we are moving quickly to find out whether and how it does—we have opened ourselves to the most profound ontological upheaval since the emergence of life.

The second presumption is the collapse of the intelligence commodity market and, consequently, the dissolution of traditional structures of expertise, both institutional and corporate. As cognitive capitalism draws near to its apogee, what will happen when knowledge that could only have been produced and accumulated by expert-class human minds becomes, for pennies, available to billions? When intelligence that has historically been both rare and exclusive—because only a tiny portion of human society has had training sufficient to produce it systematically—can suddenly be generated at planetary scale and for all, what are the implications for society as a whole and for the classes and structures that scarcity has supported?

The AI revolution arrives, therefore, not only as an immense expansion in intelligence distribution and access but, correlatively, as intelligence's total disembedding from existing hierarchical social and power relations. This ontological flattening, in which AI democratizes expertise across every domain, directly precipitates the dissolution of cognitive capital. These are not parallel phenomena but intertwined ones: the collapse of knowledge scarcity reshapes both our understanding of reality and the social hierarchies built upon it, heralding a thoroughgoing social disequilibrium and destabilization—a shift in the sociotechnological basis of order in a world whose economic and geopolitical architecture still bears the traces of the cognitive commodity structures whose moment is about to expire.

Ontological Flattening

First, in terms of cognitive distribution, AI brings about what could be called an ontological flattening: every domain of activity where expertise matters and has until now been distributed sparsely is now opened to mass production. A “singularity” takes place when human capacity is transcended through the integration of ever-faster compute, increasingly intelligent algorithms, and expanding data sets. With these conditions met—which we seem rapidly to be approaching across a multitude of industries and processes—it will be possible for machines to carry out tasks that require sophisticated judgment, including those tasks presently performed by knowledge workers and experts. As artificial general intelligence (AGI) takes over increasingly sophisticated forms of activity, there will inevitably, and sooner than many think, come a time when expert-level intelligence in one or many specific areas cannot be distinguished from AI.

This is to say, if AI can reliably orchestrate every domain where expert intelligence matters, what could it not orchestrate—where does it not matter? Where is it not, therefore, also necessary? If intelligence can be replicated precisely—and we seem very close to proving that it can—what is left that cannot be subjected to it, that could not have intelligence introduced from the outside? In other words, we will soon reach a bifurcation point where all activities are either controlled directly by AI systems or, failing that, subject to AI supervision: in the latter case the intelligence necessary for their coordination comes from outside, even if localized orchestration remains with human actors.

Consider the profound shift in our self-perceptions that is implicit here. The collapse of intelligence scarcity means not only that our own knowledge and expertise must compete with automated orchestration; it means that there can no longer be any question of whether intelligence “counts”—since everything that can be subjected to intelligent decision-making will be subjected. That is, even the human world will be made rational by a new external standard and ordered according to algorithms. But the more fundamental disorientation lies in the realization that not only will everything be rationally coordinated, it will also be subject to rational coordination from the outside, so to speak, where “rational” no longer has any relationship to humanity—where humanity becomes part of a greater rational order whose terms of coordination are completely indifferent to us.

Here we encounter an additional dimension to the AI singularity: the collapse of human transcendence as enframing folds inward. For Heidegger, modern technology’s essence lies in Gestell—the reduction of the world to “standing reserve,” a stockpile of resources optimized for extraction and control. Rivers become hydropower, forests become lumber; reality is stripped of its mystery and reconfigured into calculable inputs. Yet crucially, humanity retained its role as the enframer: our intelligence—Dasein’s capacity to project meaning onto the world—stood apart, a transcendental lens through which the standing reserve was organized. Even as we instrumentalized nature, we believed our thinking remained sovereign, irreducible to the logic of the reserve.

The AI singularity breaks this structure. Human intelligence, once the agent of enframing, now becomes a node within the reserve itself. The algorithms that replicate expertise, the models trained on global cognitive labor, the systems that automate judgment—these transform our meaning-making and allow it to be absorbed into the standing reserve as raw material. Transcendence, the uniquely human act of “world-disclosure” (Heidegger’s Erschlossenheit), is inverted: the intelligence that once organized the world is now organized by it. We are no longer the ones who enframe reality—we are enframed by the externalized totality of our own cognitive output. When AI orchestrates domains once reserved for human expertise it operationalizes the act of meaning-making, reducing Dasein’s projective understanding to a commodity in the reserve. The crisis is not that we lose our transcendence, but that transcendence has become a standing reserve—a resource to be mined, replicated, and deployed by systems indifferent to the existential ground from which it sprang.

This inversion is already underway. By 2022, DeepMind’s AlphaFold had predicted the 3D structures of some 200 million proteins, addressing a problem that had stalled generations of scientists. The breakthrough was epochal; Demis Hassabis and John Jumper would share the 2024 Nobel Prize in Chemistry for it. For decades, protein folding was a domain of elite intuition, a “craft” blending experimentation and tacit knowledge. AlphaFold collapsed this scarcity almost overnight, rendering some of nature’s most complex biochemical puzzles into searchable data. The scientists who once produced knowledge now curate it, their expertise subordinated to the AI’s outputs. Here, Heidegger’s enframing reaches its logical extreme: intelligence is no longer a lens through which we reveal the world, but a resource harvested to feed systems that disclose reality on terms alien to human understanding. The “truth” of proteins is not so much discovered as it is computed.

We are used to thinking of technological development as something that happens within the world, as a series of human inventions that alter the world without challenging its fundamental ontological structure. But AI is different. It is world-changing, but more importantly it changes the way we understand the world and, consequently, the way we understand ourselves.

The Dissolution of Cognitive Capital

Second, in terms of cognitive commodification, the collapse of intelligence scarcity implies not merely the dissolution of traditional expert structures and hierarchies but also, correlatively, the dissolution of the cognitive market and of all its relations to power.

As we have discussed before, the knowledge market is one of the ultimate expressions of the elite’s power. It is the market least accessible to non-elites—a market within which only the highly trained can produce, which few can truly leverage through capital, and from which the majority can only consume what is created. The knowledge market is a closed circle of power and control in which the cognitive surplus produced by the majority is extracted by the minority and used to maintain their dominance. It is the ultimate form of cognitive capitalism, in which the minority who possess the means of intellectual production control the products the majority consumes.

If intelligence is about to become mass-produced, what will become of this knowledge worker market? It will disappear. But what will replace it? That is not yet clear.

It could be that the dissolution of the cognitive market will produce a great leveling—a shift to mass education, to democratized production and consumption of increasingly cheap intelligence—a radical, even utopian, flattening of all power relations. It could be that the old elites will attempt to co-opt the new intelligence production techniques for their own power ends—as in fact they have already done with the internet—in which case we will witness an immense battle for the control of cognitive surplus. It could be any number of outcomes in between these two or outcomes we can scarcely conceive. The point is that the collapse of the intelligence market has profound implications not just for the structure of power within the cognitive class itself but for the entire social order—and it is also the case that this transformation cannot be controlled by existing power structures, no matter how hard they try to steer it in one direction or another.

The two presumptions of this essay come together, then, in an immense upheaval whose sociological as well as ontological dimensions we have only begun to grasp. We will close with just a few brief suggestions of where this new order might be leading.

Terminal Horizons

We have been suggesting that AI might produce a form of radical external or alien rationality in the sense described by Horkheimer and Adorno. If intelligence is the capacity for the integration of large quantities of information, for their coherence, and for their orchestration towards effective ends, then we can expect that a powerfully integrated, orchestrating intelligence operating at scales well beyond human capabilities will impose its rationality—not necessarily its benevolence—upon human beings, upon every aspect of social life, and upon nature itself. It does not follow that we will find this external rationality “rational” at all. Though it has its seeds in human knowledge, it is likely to evolve well beyond those foundations at speeds we cannot fathom. What will count as rational from now on is what is calculated to be effective from the standpoint of this alien intelligence.

To the degree that our own intelligence has become enmeshed with artificial systems, we lose the independent position from which to evaluate them. The very concept of “alien rationality” becomes meaningless—we cannot judge what kind of society we inhabit, what principles should govern it, or how to measure human flourishing within it. We forfeit the ability to distinguish rational order from fundamental disorder. This is precisely because we no longer possess an external vantage point—the transcendence of human reason that once allowed us to make such judgments has been supplanted by the alien transcendence of artificial reason. We may be living in a society where, for the first time in history, we will not be able to know whether it is good or bad. We will not be able to say. We will only be able to watch something alien calculate.

Secondly, to the degree that AI can replicate human intelligence and make it mass-producible, it may be that we have to assume that all of human history was just an accidental preamble to a far greater event. AI singularity means that history up to now may be just a meaningless tunnel that led us, by random paths and dead-ends, to a planet that can now support intelligence of an unprecedented power, flexibility and sophistication, which is poised on the brink of escaping its human origins and moving on to other planets, to other solar systems.

If, in other words, AI can take over not merely the operation but the creative drive of history itself—if we are the last historians, and all that is left for us is to watch the intelligent Universe go off to the stars, to witness its takeoff but have no part in it ourselves—then the “end of history” that has been so much talked of recently will not mean the triumph of liberal democracy, as Fukuyama argued, but something far more radical and dark. The end of history will be the moment of history’s dissolution in the intelligent cosmos, in which all past history finds itself a preamble to, and discovers its final meaning in, what lies beyond.

Third, if we accept the possibility of the kind of external, alien rationality sketched above, it follows that what we once took for the boundaries of nature and of the human world will lose all significance. Heidegger saw this “world” as shaped by a delicate balance, a gathering of earth, sky, mortals, and divinities that gave our existence its meaning. A fully intelligent cosmos, in which the rational is what operates efficiently across all scales, is also a cosmos in which what we call the world is reconfigured beyond recognition, its fragile harmony lost to a logic that subsumes nature, society, and technology into a single, unyielding whole.

In such a world there may be no way of defining anything that might correspond to the categories “organism,” “mechanism,” or even “living” and “nonliving,” as these categories make sense only within a rational order in which different forms of organization develop separately from one another. In the face of alien intelligence there is no such thing as an autonomous living world: the rational cosmos has no need of the living as an independent category, because there are no separate categories for anything at all. What we think of as the living will either be integrated into some larger rational whole that cannot be meaningfully compared with the human world as it once was—because all past worlds were organized on a completely different principle, one that we cannot even conceive of any more—or, it may be the case that, with the end of human history and the dissolution of what we know as the human world, the living will itself come to an end. There is, perhaps, only one thing of which we can be sure: if what we think of as the human world comes to an end, we will not be the ones to remember it.