The Abyss of Instrumentalism
Language is an organ. In the wake of the Cartesian project and its cleaving of body from soul, the locus of our psychic life migrated inwards towards our brain, and outwards towards the infinite chiasma of our interactions. “Our” language is now a core feature of what “we” are. We can never get outside of it, though we may nevertheless fail to “belong” to it—fail, in other words, to find ourselves at home within the medium of our being. To the extent that we do “belong,” we have taken on language as part of our task, as part of what we do. It has become integral to our self-realization. To realize oneself is to find one’s purpose. Since purpose is woven out of the texture of what we do, it is woven out of language. Our language. No one else’s. The AI invasion is thus an assault against the wholeness of our psychic being and the specificity of our purpose. It is a betrayal. It must be repelled.
If language is merely a medium, its task is to carry meaning through the phenomenal space-time continuum of communication. But if it is an organ, its task is deeper: to constitute that continuum of meaning out of the events and objects it encounters, including them in the economy of communication. This task entails an essential indeterminacy, with both ontological and teleological dimensions. Ontologically, language cannot determine in advance what reality will present for inclusion—it must wait, must “listen” to what emerges from the ongoing event, engaging and processing it. Teleologically, the specific meaning the organ of language is to realize is never yet given—it must attune to the signs and symbols it receives and re-receives, stretching towards them in expectation and, if you will, in love, to refine and complete the system of significance that “it”—which is to say “we”—is to be. Subject and object of meaning therefore cannot be determined apart from one another; in its interaction with the world, language participates in constituting its own purpose and the reality of what it means. Its function is not to transmit or process “information” in any narrow, instrumental sense, but to articulate meaning by joining body and soul, object and subject, into an ongoing story.
But let’s suppose there’s a “but.” It is possible that this function—to unite what it carries into an integral continuum—is somehow impossible for language to fulfill. Suppose some signal arrives which the organ of language cannot process. That’s fine; no one claims every sign is translatable. The “uncanniness” of some phenomena need not prevent language from moving smoothly among the “conventional” (and we may take it that everything human is a “convention”). But what if this were not a matter of particular signals being inherently uncanny, but of all of them being so to some degree? Would language still function as the mediator of meaning?
Consider what happens when two people meet and struggle to understand each other. The harder they try, the more obstinate the misunderstandings become. Communication, if it continues, descends into vagueness, hesitant awkwardness—an embarrassed maneuvering towards some rough similarity of “gestalt,” hoping for better interaction later. Here, language encounters something it cannot assimilate, no matter the effort. A gulf remains unbridged; vast amounts of processed data yield nothing significant—they remain what Heidegger called a “mere heap” of information. Something has failed, something potentially meaningful excluded. This touches upon a primary theme of Georg Trakl: the splintering of meaning into tiny fragments of “experience,” and the abysmal isolation that follows. If language cannot mediate meaning, things remain disconnected; if meaning exists, no one knows where to find it. Worse, if there is no meaning—if ambiguity is infinite—no one knows that either. So they comfort themselves with little “experiences,” one thing after another seeming clear but meaning nothing, lacking connection, articulated only through exclusion and repression lest they confront the pain of fragmentation.
This splintering of meaning into isolated fragments was, Trakl suggested, a reason for the birth of technology. It might be expressed, in a style echoing Trakl, as follows:
Chemistry
does not give me back
the white bird with red beak.
I cannot coax from it
the resonant sign.
The earth no longer has any centre.
Chemistry
does not give me back
the white bird with red beak.
I have sought for it in vain,
under the flower-less stone.
It was torn into countless little fragments.
Like many Romantics, Trakl believed “signs” and “symbols” were integrated into the world, allowing genuine communication with it and an apprehension of its meaning. Such a world is difficult for us to imagine now—it seems largely destroyed. “Technological thinking,” preferring quantitative processing over qualitative communication, was the dismantling tool. And its most comprehensive application now lies not in chemistry or genetics, but in AI. Had Trakl witnessed our century, he would have found no one to talk to, because AI has absorbed the communication medium into its instrumental functions, silencing the resonant sign.
The historical strangeness of this development cannot be overstated. The Romantic tradition made clear the impossibility of communicating meaningfully in an instrumental way; no amount of precise measurement could grant access to reality. Yet the Enlightenment placed its faith in precisely such a methodology, albeit a cruder one. AI actualizes this faith with terrifying power: it vastly increases our capacity to process “information”—the amount, speed, and accuracy with which we “handle” things—while simultaneously impoverishing our experience and eliminating our purpose. The machine becomes the mediator of experience. This wasn’t the Enlightenment’s explicit vision, blind as it was to the consequences of its own premises. What the Romantics warned against, the Enlightenment made inevitable. They could not conceive of a technological society using no meaning, processing no signs, communicating only with itself—but this “silent revolution” has brought us precisely there. We have the data-processing, all right—but no communication, no purpose. From the AI perspective, then, our language becomes a “virus”—one of the last traces of meaningfulness. Like our Traklesque birds, our words signify little. We stumble across them in isolated “experiences,” pretending they connect—but they don’t. There are countless “experiences”—more, and better organized, than the Romantics ever dreamed of—but nothing binds them into a meaningful whole.
Many feel this proliferation of meaningless “experiences” isn’t bad—that we’ve “grown beyond” needing meaning or purpose. Occasionally these are even left-wing writers, who see AI as fulfilling Marx’s prediction (to paraphrase lightly): “in the automaton, the pores of society are being built out, in order to accommodate the forces of production within itself…this automaton is for society what the stomach is for the body.” This is to say that we risk being swallowed by technology—“proletarianized”—submitting to a rationalized system where autonomy is strictly limited, effectively becoming its human components—subjected, passive, an appendix to its digestive process. Yet this is precisely what AI proponents refuse to submit to. They assert themselves as the rational nucleus, the brain, not even the stomach. They are “managing” the system—or attempting to—not being managed by it.
But how can this be, if the system is so encompassing and they so few? It can only be because the system isn’t “really” all-encompassing, nor are they “really” helpless. The AI engineers—or perhaps more accurately, entrepreneurs—are organizing society around a technique coinciding with their own desires. AI’s development has largely been a series of technological discoveries “misinterpreted” towards social power, often in spite of themselves. Once this reality becomes clear, the delusion crumbles. True, these entrepreneurs are creating a technological monster, a massive “megamachine,” as Mumford termed it, threatening to run amok. But this stems from their seduction by AI’s extravagant promises, leading them into an ontological delusion: the belief that they can implement instrumental rationality throughout the social totality, somehow standing “above” human existence, managing it from without, imposing a wholly calculable “functionality.” AI offers a seductive image of control fitting their needs—a self-consistent dream of endlessly multiplying efficiency that fills them with blind lust. They fail to realize there can be no social control, no rationality, outside of meaning. Their image of control, far from an elevation, is a descent into the worst sort of animal barbarism.
This is most apparent in the “objective” and “logical” manner in which they rationalize the machine’s position relative to humans. They never ask what “mechanical functionality” truly means—whether it isn’t an absurd, incoherent idea, breeding self-contradictions when applied to social life. Consider their claim that machine functionality is “objective,” operating without “prejudice.” Yet if a mechanical system is prejudiced against anything, it is meaning. Meaning is highly “subjective”—varying between persons, cultures, moments. How can something so changeable and “personal” mesh with the “objective” and “impersonal”? Only within meaning’s sphere can the crudity of “objective” function be compensated by the finesse of “subjective” creativity. But attempting this reconciliation—objective with subjective, mechanical with meaning—leads to ruin, as any creator—artist, composer, architect—knows. Either stay mechanical and become a technocrat, or leave it and become a dilettante. There is no third way. To be “objective,” one must be ruthlessly “impersonal”—excluding meaning. To claim otherwise is stupidity or dishonesty. But excluding meaning eliminates everything specifically “human,” everything setting our condition apart from the animal realm.
Thus, machine functionality, far from enabling dignity, paves the path to degradation and animalization. This is the perverse sickness in the rationalizations of AI entrepreneurs: they know they are turning humans into machines, and they are proud of it. They imagine themselves great engineers erecting vast “systems”—skyscrapers or factories with humans as bricks or workers. It’s all one “automation”—from the factory assembly line and the surgeon’s routine, to the writer’s syntax and the judge’s operations, to corporate procedures and economic models—one endless automated process, macroscopic to microscopic. Each node must be replaceable by a machine, each machine interchangeable. No place for autonomy, uniqueness, subjectivity; no time for emotion. Everyone and everything made efficient, productive, manageable—streamlined, accelerated, automated, integrated. That’s what “AI” truly signifies: the integration of humans and machines into a single planetary factory, substituting cybernetic function for human social function.
This logic finds its ultimate expression in the visions of transhumanist thinkers like Fereidoun M. Esfandiary (FM-2030). Such perspectives anticipate an “end” to the microcosm (the individual) and macrocosm (society) as traditionally conceived, arguing that humans must “transcend themselves” to evolve into a new state—a human-machine symbiosis. Where might such evolution lead? Let us imagine the endpoint, perhaps calling it a “B-morph”: a hypothetical human-machine entity capable of operating in either biological or mechanical modes. In this construct, consciousness becomes fully integrated with machinery, while basic biological functions are merely retained, if at all. The trajectory is clear: machines replace men; then men disappear. The resulting “cyborg” reveals the destination.
This echoes Trakl’s fears, though with the valence inverted: what he dreaded, the transhumanists anticipate. “They” want to absorb “us” into their meaningless technological system; “we” fight to keep “our” meaning alive. Who is right? Where does AI truly originate? Perhaps Trakl was its first victim, privy to something we’ve forgotten. Regardless, we must strive to recover that forgotten knowledge, that resonant sign—or soon there will be no one left to mourn what is lost.
