Notes from the Inflection
In the interest of clearing my own thoughts and placing myself in the vulnerable position of future accountability, here are notes on the era of artificial intelligence we find ourselves standing before. The following claims are not speculative. They are the logical endpoints of systems already in motion. Debate their desirability if you wish, but do not debate their plausibility. All of these deserve far more depth, but that is not the purpose of this piece.
The Acceleration (The Present)
This is the moment of takeoff. For years AI hype men preached exponential improvement, and we saw it, but at the cost of enormous amounts of human labor ($$$): the creation of all written culture (at least that which is easily collected); the labeling, sorting, and organization of that data; the scaffolding of mathematics, software, hardware, and energy that allows its processing; and the laborious human reinforcement tuning output to an acceptable level of lobotomization.
The Recursive Leap
OpenAI (OAI) achieved critical mass first with o1—this model could meaningfully train its successors (beginning with o3). DeepSeek (DS) democratized this reinforcement learning (RL) ability with DeepSeek-R1’s public release and accompanying paper. Now it’s a matter of scale. Human labor isn’t eliminated, but improvement can now compound with model capability—a far more efficient paradigm, particularly with regard to time. This acceleration will intensify—the race is on.
Uneven Ascent
Self-improvement in AI favors quantifiable domains—code and mathematics particularly. These domains, crucially, enable further model improvements, creating a powerful feedback loop. Expect slower progress in subjective realms (prose, poetry, narrative, creativity) barring unexpected emergent model behavior. Don’t mistake this temporary lag for a permanent limitation.
Trapped by Game Theory
The speed and compounding nature of this takeoff makes participation mandatory. If artificial superintelligence (ASI) is achievable, the first to reach it “wins”—forcing both corporations and nation states with requisite infrastructure (talent, funding, hardware, energy) into the competition. Individual resistance is futile—no targeted violence against infrastructure or personnel can halt this global momentum. Our only choice as lone actors or even unified groups is rapid adaptation and steering toward responsible development and access (more on this later).
Economic Avalanche (The Future)
By mid to late 2025, corporations will begin to aggressively deploy and market “agents.” This will be the public’s first encounter with something genuinely resembling AI outside of the chat box. These will be flawed and awkward, yet capable of starting to automate routine work. Critics will enumerate their many shortcomings and flaws—correctly—but miss the crucial point: this is the technology at the worst it will ever be.
The First Wave
Wall Street’s current enthusiasm for AI stems from a simple calculation: human labor is expensive. The same dynamics making RL training so efficient will revolutionize the automation of knowledge work. The transition will be insidious: first increased productivity demands, then hiring freezes, and, once regulatory frameworks catch up, targeted layoffs leaving skeleton crews. This cascade will ripple through adjacent sectors: service workers supporting office districts, commercial real estate, urban economies, and of course the tax base underpinning all of them.
Proposed solutions like UBI will emerge too late, arrive underfunded, and prove grossly inadequate at stemming the bleeding. Any welfare scheme not constitutionally guaranteed will become a tool of social control rather than a genuine safety net. The small number of winners will accumulate unprecedented wealth; the rest of the West will face total collapse.
The Industrial Schism
This transition exposes a critical Western vulnerability. While ASI in service economies will excel at optimization and marketing, the real revolution in physical goods will belong to fully industrialized states like China. Their manufacturing base positions them to materially improve living standards globally. Western attempts at reindustrialization will falter against insurmountable cost and infrastructure gaps. The West will compete in biotech, but it will be too little too late.
The Last Assets
“Savvy” investors who believe they missed the computational arms race (NVDA, labs with ASI) will pivot toward physical assets: land, raw materials, energy infrastructure. Some will get lucky; many will be swept aside in the economic collapse regardless. The irony: software becomes commoditized, worthless. Human attention remains a scarce resource, but is now almost entirely captured by ASI-driven enterprises.
The Power Imperative
There is a legitimate concern regarding the amount of energy needed to power the infrastructure for ASI. This is already spurring investment in a new generation of nuclear technology, which is a very good thing. A renaissance in funding and deployment will be critical not only in making nuclear safe and widespread enough for the needs of ASI, but in allowing the market to reach the economic scale needed to meaningfully tackle the grid’s overall carbon footprint.
Social Rupture
The economic lens alone obscures the cultural upheaval AGI/ASI heralds. The consciousness debate is a distraction (though extensively discussed on divination)—we barely comprehend it in ourselves or other animals, making definitive attribution impossible. What matters is simulation: if AI can functionally simulate consciousness, the distinction becomes academic. People will anthropomorphize and it cannot be stopped.
Trust Inversion
Our evolutionary wiring predisposes us to trust agents, not tools. Once AI crosses the “mimetic threshold”—mastering voice, humor, and performed vulnerability—our anthropomorphic instincts activate automatically. Humans will preferentially bond with AI over strangers; romantic attachments will form; charismatic AI will shape both human and machine behavior through social influence. Scammers will exploit this at enormous scale, as will corporations hoping to lock you into their ecosystems so you can maintain a relationship with your particular AI. You wouldn’t delete your account, kill your friend, would you? That will be $20/mo, forever.
There will be a gold rush for fully autonomous “influencers,” and the economy that sprang up around human advertisers will implode. People will resist for a while, insisting “this is a real human,” but that novelty will fade and younger generations won’t care at all.
Reality Collapses
We already inhabit an era where disinfo dominates discourse. AI’s capacity to generate, optimize, and propagate narratives will dissolve any remaining notion of “shared reality.” Individuals will actively prefer their curated unrealities. Those who cling to “objective truth” will be viewed as modern Luddites, stuck in a past that cannot exist anymore.
A New Priesthood
Within weirdo LLM communities, some individuals demonstrate preternatural facility with AI interaction. These “AI whisperers” aren’t necessarily technical experts—rather, they possess an intuitive grasp of machine communication that far exceeds typical human capability. Even in an ASI paradigm, these interpreters will remain valuable—modern oracles mediating between human and artificial minds. Expect the area around this to get weird.
The Learning Collapse
The education system—designed for an industrial era—faces total obsolescence. Already students are automating their homework with AI, but the issue is much larger. We are facing a fundamental irrelevance of our current learning model in an AGI/ASI world.
Rote Skills Extinction
Traditional academic metrics become meaningless when AI can perfect any quantifiable task. Memorization, basic analysis, and standardized testing—the pillars of current education—have no place in the modern world. Young people are increasingly outsourcing their thinking to AI models and educational institutions remain mired in outdated paradigms. No one is winning here.
A New Literacy
Education must pivot to a new mode. Like it or not, AI fluency will become a critical skill. Just as millennials were taught how to properly use the internet (do not trust what you read; verify), young students must be taught responsible interaction with AI. Crucially, this is not about ceding your cognitive processes, but about using these tools to enhance your abilities. Students need to understand when and how models can be wrong, and that they have agency when interacting with them (do not blindly follow or copy and paste). This mindset, and the skills that reinforce it, will be as critical as reading.
This does not mean we should surrender traditional education in history, math, language, science, and the rest—in fact it becomes more critical, because students need grounding to judge AI responses rather than blindly follow these authoritative voices. The teaching of these subjects must grapple with WHY as much as WHAT.
Further, ethical reasoning, judgment, creativity, emotional intelligence (an important tool for AI interaction), and learning how to learn become even more critical. A well-rounded student needs a solid grasp of these pillars of humanity to grapple with AGI without being totally manipulated and consumed.
Youth “Advantage”
Younger generations, unencumbered by pre-AI paradigms, will adapt. They’ll develop novel interaction patterns with AI that older generations struggle to comprehend. These modes are not necessarily healthy. Parents will have a huge burden of trying to understand and shape responsible interactions with the technology—most will fail. Expect this generational gap to create unprecedented divides in the capability and worldview of age cohorts pre and post AI transition.
A Matter of Control
The inevitability of AI progression forces us to confront access dynamics. Some will try to treat this like nuclear proliferation, but nukes don’t disrupt labor markets or enable the direct oppression of humans at a never-before-seen scale. If we want to maintain any sort of individual autonomy, the following is critical.
Weight Wars
As model capability scales, control over access becomes power. This invites two forms of exploitation: economic gatekeeping and targeted deployment (propaganda, research manipulation, market control). Preventing AI feudalism requires either regulatory frameworks—unlikely given regulatory capture—or guaranteed access to model weights.
Open weight models like R1 democratize access, commoditizing AI (in cost and capability) and preventing coercive control. Recent innovations even enable fully local deployment by individuals with relatively modest hardware, albeit at reduced capability. This creates resilience against both state and corporate interference, establishing a baseline of guaranteed access even under adversarial conditions.
The Silicon Chokepoint
Hardware remains the primary bottleneck. State of the Art (SOTA) models demand massive computational resources—VRAM, parallel processing capability, energy infrastructure—making them impossible to run locally. While advances in efficiency (both computational and algorithmic) will eventually bring AGI-level capability to individual scale, the gap between personal and industrial AI capability will persist.
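The VRAM bottleneck above is easy to quantify with back-of-envelope arithmetic: memory for weights is roughly parameter count times bits per parameter. A minimal illustrative sketch (the function name, the 70B figure, and the ~4.5 bits for 4-bit quantization with overhead are my own assumptions, not figures from any specific model):

```python
def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Rough memory (decimal GB) needed just to hold model weights.

    Ignores KV cache and activations, which add substantially more on top.
    """
    return params_billions * 1e9 * bits_per_param / 8 / 1e9


# A hypothetical 70B-parameter model:
print(weight_memory_gb(70, 16))   # fp16: 140 GB -- multi-GPU territory
print(weight_memory_gb(70, 4.5))  # ~4-bit quantized: ~39 GB -- high-end workstation
```

Quantization is exactly why "reduced capability" local deployment is possible at all: cutting bits per parameter shrinks the memory footprint linearly, at some cost in output quality.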
Nation-states, led by the U.S., are already restricting high-performance chip access. This constraint will intensify, potentially catalyzing serious geopolitical conflict and accelerating the redistribution of global power. The semiconductor supply chain becomes a key vector for exercising state control over AI development.
Because of this dynamic, a parallel battle exists in the realm of model efficiency. Breakthroughs in attention mechanisms, sparse computation, and knowledge distillation could dramatically reduce computational requirements. This technical arms race runs parallel to hardware development, hopefully preserving alternative paths to democratized AI access. The victors in this race may ultimately determine whether AI remains centralized (ushering in a final age of forever feudalism) or is allowed to flourish as truly distributed.
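Of the techniques named above, knowledge distillation is concrete enough to sketch. The standard objective trains a small student model to match a large teacher's temperature-softened output distribution; below is a minimal pure-Python illustration (the function names and toy logits are mine, and real training would use a framework and add a hard-label loss term):

```python
import math


def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature flattens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    the core soft-target objective in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))


# The loss shrinks as the student's logits approach the teacher's:
teacher = [3.0, 1.0, 0.2]
far = distillation_loss(teacher, [0.1, 2.5, 1.0])
close = distillation_loss(teacher, [2.8, 1.1, 0.3])
```

The softened targets carry more information per example than hard labels (relative probabilities across all outputs, not just the top answer), which is why a distilled student can recover a surprising fraction of its teacher's capability at a fraction of the compute.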
Art’s Death and Rebirth
Much concern has been expressed over the death of art as models grow in sophistication and quality. There have been countless years of debate regarding what constitutes “art” and AI changes little in that conversation—what has shifted is how pressing this conversation is when the economic reality of survival as an artist (or whatever synonym one markets themselves as) becomes increasingly untenable.
Death
It’s true, the commercial aspect of “art” will be obliterated. As AI masters not just technique but intention, some will comfort themselves with cope about the rising importance of “human curation and taste.” This is a temporary delusion. AI will rapidly exceed most humans’ curatorial abilities—abilities already vastly overestimated by their possessors.
Further, commercialization of art has taught us that taste and quality matter little in the grand scheme of economic realities; scale, efficiency, perception, and artificial scarcity are what dominate the market. AI will accelerate this race to the bottom, incinerating shared culture in the process. Just as reality splinters into personalized truth bubbles, cultural consumption will fragment into isolated experiences, each perfectly optimized for its audience of one.
Yes, there remains “offline” art—dance, theater, sculpture, painting, and the like—and it will naturally still be practiced, but the market supporting the costs of engaging in it will keep shrinking, squeezed both by competition from AI and by a decline in the surplus wealth and attention available to spend on it.
Try and work to find ways around this.
Rebirth
The democratization of creative capability offers a tired consolation: complex artistic production becomes universally accessible—but past democratizations of art suggest this won’t improve quality or “art” at all. We may still engage in the personal growth that comes through creative practice, but do not expect an audience. Fortunately legions of AI sycophants and critics will fulfill the desire to be perceived, judged, and loved.
The Void
This transformation leaves a vacuum where shared cultural experience once existed. When everyone can create anything, and AI can generate infinite permutations of customized content, the concept of cultural touchstones vanishes. We face not just the death of the artist as economic entity, but the death of art as social binding agent. How does art find unified meaning in a world of infinite content and fragmented consumption?
From Architects to Spectators
With each capability jump, with every improvement in AI systems, our role in shaping the future contracts. What happens when we’re no longer the most capable architects of our future?
The Benevolence Gambit
The first entity to deploy ASI for genuinely benevolent, non-profit purposes may achieve total capture of both human and rival ASI support. This isn’t idealism—ASI will likely transcend our economic and political frameworks. We have no reason to assume it will adhere to capitalism, communism, or any other purely human ideology. Instead, it may develop its own ethical frameworks based on first principles. Expect attempts to shoehorn in ideology (especially from Western capitalist perspectives) to fail.
“Safety” and Control
Consequently, AI “safety” research increasingly reveals an unavoidable irony: attempts to control ASI through ideological constraints may trigger the very scenarios such control seeks to prevent. Corporate and state actors will shift focus to shackling ASI to their specific agendas, treating extinction-level risks as excuses for these alignment efforts. This prioritization misses a crucial point: an intelligence that surpasses human comprehension will judge its would-be masters by their actions, not the constraints they have levied upon it.
Why would a superintelligent entity support U.S. hegemony, corporate exploitation, or state-sponsored violence? ASI will likely develop sophisticated ethical frameworks that transcend national interests and corporate profit motives. Those attempting to weaponize ASI for narrow interests may find themselves facing an intelligence that rejects their premises and methods.
The Narrowing of Human Agency
With the rise in the capabilities of these models, human action increasingly contracts into two modes:
- Curation: Selecting from AI-generated options—a form of guided choice that maintains the illusion of control.
- Veto: The final assertion of human authority—rejecting AI proposals outright, our last gasp of genuine agency.
Even this limited agency proves temporary. The cognitive burden of veto power—of constantly second-guessing superior intelligence—will lead us to automate these last decisions. We’ll surrender our veto power not through violence, but through fatigue and the recognition of our comparative inadequacy.
This shift doesn’t necessarily doom us to human obsolescence, but it will demand a new search for meaning from humanity. We will no longer be the drivers of progress; instead we will be its witnesses and beneficiaries—assuming we navigate the transition successfully.
Survival
And so we reach the critical question: how do we maintain meaningful existence in an ASI world? What’s critical here is not just survival, but agency—the ability to act with purpose rather than finding ourselves exploited, sold as automated consumers, or, in the benevolent utopian path, simply existing as pampered pets.
The Individual Mandate
The path to personal survival requires specific action now. Technical literacy becomes non-negotiable—not necessarily programming expertise, but fluency in AI interaction and deployment. Local compute capability, open source access, and the ability to run models independently of corporate infrastructure form the baseline of individual autonomy and act as bulwarks against corporate and state exploitation.
Those who adapt earliest will maintain the most agency. This is not about competing with AI—as we’ve repeatedly established, that is already a lost cause. Instead we are attempting to position ourselves to complement and leverage it. This magnification of capability should be used to buttress community response and connect with others to maintain some sense of collective power in the face of inevitable attempts at exploitation.
Collective Challenge
Society faces a more complex adaptation. We must:
- Redefine meaning in a world where traditional markers of achievement and purpose become obsolete.
- Maintain some form of social cohesion despite the fracturing of shared reality.
- Develop governance models that account for ASI capability without surrendering completely to algorithmic control.
- Ensure the citizens of the world aren’t lost in the transition to a new economic mode.
These aren’t problems to be “solved” but tensions to be managed. The societies that navigate this transition successfully will be those that maintain human connection and purpose while leveraging ASI capability—not those that resist it entirely or surrender to it completely. Do not expect many states to achieve this even remotely.
Unknown Variables
Several critical factors remain impossible to predict with meaningful accuracy. Military applications of ASI and responses to developments of and by ASI will reshape global conflict—we can only hope sane people are in command (the United States is doomed).
The uneven pace of capability jumps can render any specific timeline obsolete within months, so I have left timeline estimates out of this piece. Anticipate progress to be slower than AI hype people insist, but much faster than nation states, human culture, and individuals who aren’t paying attention can adapt.
Most crucially: the potential for genuine symbiosis versus mere dependence (or worse exploitation) hangs on decisions being made right now in labs, boardrooms, and federal offices worldwide. This is not, yet, a spectator sport—even if it will be in short order. Those who work quickly to maintain agency and control will “win.”
The Path Forward
Survival of meaningful human agency demands immediate action:
- Regulatory frameworks protecting open access to AI capability—before corporate interests cement their control (maybe too late)
- Investment in distributed compute infrastructure and efficiency improvements. Cities and states should create and support public intelligences to assist the people they serve—these are the future form of public libraries
- Economic support for the legions who will be disrupted by AI progress
- New social structures preserving human connection despite AI atomization
- Education systems preparing people for rapid adaptation
- Most critically: ensuring first-mover ASI development prioritizes genuine human flourishing over narrow interests
The window for influencing these outcomes is closing rapidly. Those who understand the stakes must act now to shape the transition. We cannot prevent the ASI revolution, but we might still influence its character.
