Edition 006: Killer Robots and F-you Money

The AI race is not waiting for the conditions on the ground to settle themselves.

Already, we live in an interconnected world where the internet has made that interconnection literal. In less time than it takes you to sigh, a message can be sent from one continent and arrive on another. Yet the world remains parceled into hundreds of countries, each one a creation of imagination, a product of treaties and wars and accidents of cartography that have hardened, over time, into something people are willing to die for. We fight over borders that exist because someone drew them, resources the planet does not recognize as belonging to anyone, and religions whose claims rest on faith. It appears it has always been this way.

Regardless, the fact is that we are not prepared for what is coming. The conversation about artificial intelligence can’t even define what it is. And, even in this book, I often fail to distinguish between categories that are not the same. Deterministic AI models, the systems that route packets, price insurance, and score credit, are one thing. Generative AI, the systems now writing, drawing, and speaking, are another. Artificial general intelligence (AGI), should it ever arrive, is something else entirely—a system that does not merely perform tasks but designs and pursues its own ends. The distance between these three is not a matter of degree. It is a matter of kind. And our institutions, built around a world of nations, borders, and human decision-makers, have not yet shown they can govern even the world that preceded these systems.

Consider the record. Since 2014, countries that are parties to the Convention on Certain Conventional Weapons have been meeting in Geneva to address lethal autonomous weapons systems.[1] These are, essentially, killer robots that select and apply force to human targets on their own, meaning without human control of the weapon. The United States is a party to the Convention, along with most of the world’s major military powers.[2] For more than a decade, diplomats have convened, deliberated, and produced reports. In December 2024, the United Nations General Assembly adopted a resolution on lethal autonomous weapons by a vote of 166 to 3.[3] And still, no binding treaty exists. The technology has continued to develop. It has been deployed in active conflicts.[4] Once again, the law has been outpaced by engineering, and the gap is widening while people are being killed.

That is the cautionary tale, and it concerns a category of systems where the dangers are concrete and visible. These machines can kill a person without a human deciding to kill that person. If the international community cannot, in twelve years, produce binding rules for that, what should we expect of its capacity to govern a system that is not a weapon, but an intelligence that operates across every domain at once?

***

The world we are potentially bringing AGI into is not a level field on which a powerful new technology will land and distribute its benefits according to merit or need. It is a world that is already deeply unequal, and technology is deepening the inequality, not correcting it.

As of 2024, the richest one percent of the global population held more wealth than the bottom ninety-five percent combined.[5] In the United States, an Institute for Policy Studies analysis of Federal Reserve data through 2024 shows the top one percent of households holding 30.9 percent of all household wealth, while the bottom fifty percent held just 2.5 percent.[6] The same analysis documents that as of late September 2025, 905 American billionaires held a combined $7.8 trillion — nearly double the $4.1 trillion held by the entire bottom half of American households, roughly sixty-six million families.[7] These are the conditions of the present, and they form the ground on which the next technology will be laid.

And the ground is more uneven than the figures above suggest. While the wealth of a few has accumulated at a rate that the language of finance can barely keep up with, roughly 831 million people (about one in ten human beings alive today) were living in extreme poverty in 2025, by the World Bank’s revised measure of less than $3 per day.[8] Move the line up to $8.30 per day, the threshold the World Bank uses to assess living standards in upper-middle-income countries, and the number rises to roughly 3.7 billion people.[9] Nearly half of human beings subsist below a standard of living that anyone reading this book would recognize as poverty. Three-quarters of the people living in extreme poverty reside in rural communities. Most are in Sub-Saharan Africa, which now accounts for about seven in ten of the world’s extremely poor — a share that has grown as other regions have made progress and this one has not. About 412 million children (roughly one in five children worldwide) live in households below the extreme poverty line.[10]

These numbers represent extreme suffering. They represent what it means to live without clean water, without electricity, and without a diet sufficient to meet basic nutritional needs. They describe people who have, for the most part, no role whatsoever in the development of the technology now being marketed as transformative for humanity. The frontier AI model trained at a cost approaching a billion dollars is not being trained on their languages, has not been evaluated against their problems, and is not, in any meaningful sense, accountable to them. They are not customers. They will, however, live in the world the technology produces—a world in which the gap between those who command the new infrastructure and those who do not has every reason to widen further.

***

The capture of power by corporations in our time is something the world has never seen. This is not a rhetorical claim. It is a structural one that deserves to be made carefully because the easy version of this argument — that corporations are powerful, governments are weak, and this has always been a problem — obscures what is new.

The familiar comparisons do not capture what I am describing. The Dutch East India Company maintained its own army and minted its own coin, but it operated within a mercantile system in which European states extended their reach through chartered companies that remained instruments of national policy. Standard Oil controlled the energy infrastructure of an industrializing nation, and was broken up by a government that still possessed both the will and the capacity to do so. The railroad trusts shaped the geography of the United States, and were regulated, eventually, by an administrative state that had concluded the public interest required it. None of these precedents reaches what is happening now.

A small number of corporations—perhaps a dozen, with the genuinely consequential ones numbering closer to five—now possess, simultaneously, the computational infrastructure on which modern economies run, the data through which modern populations are known, the platforms through which modern political discourse is conducted, the financial reserves that exceed the GDPs of most nation-states, and the technical capacity to build systems that approach or exceed human cognitive performance across an expanding range of tasks.[11] No prior concentration of corporate power has held that combination of assets at once. None has operated across every jurisdiction simultaneously while being effectively accountable to none. None has commanded a technology whose development its host state cannot meaningfully audit, whose deployment its host state cannot meaningfully constrain, and whose costs its host state increasingly cannot meaningfully tax.

The familiar relationship between state and corporation has, in important respects, inverted. The largest AI firms are now valued at sums greater than the annual economic output of all but a handful of countries. They negotiate with governments not as supplicants seeking favorable regulation, but as peers offering or withholding capabilities the governments themselves wish to acquire. They run procurement relationships in which national security agencies are the customers and the companies set the terms. When the executive branch of the most powerful nation on earth wishes to deploy advanced AI for defense, intelligence, or administrative purposes, it does not build the capability. It buys it on terms substantially set by the seller. The state is no longer the entity to which the corporation must answer. The state has become a client.

***

There is a phrase in American culture for the threshold beyond which a person no longer needs to be careful about what they say or whom they cross. It is called “F-you money,” and the term is precise. It names the amount of capital at which a person ceases to feel bound by the ordinary disciplines that govern human conduct in a society — the need to keep a job, to honor a contract, to placate a regulator, or to soften a position in deference to a counterparty who could hurt you. Below the threshold, those disciplines press on a person continuously and shape what they will say in public and what they will agree to in private. Above it, they no longer do. The phrase is vulgar because the condition it describes is vulgar. It is the condition of having enough resources to disregard the social mechanisms by which ordinary people are held to account.

The phrase was coined to describe individuals. It applies, with greater force, to institutions. A corporation that holds reserves greater than the GDPs of most countries does not feel beholden to the laws of any single one of them in the way a smaller firm must. It does not need to win every regulatory argument. It can absorb, as a line item, fines that would liquidate a competitor. It can outlast administrations. It can move operations, revenues, and intellectual property across borders faster than any national legislature can convene to address what it has done. It can hire, from the regulatory agencies meant to constrain it, the very people whose expertise was developed at public expense to constrain it. The constraints that work on ordinary companies—the threat of bankruptcy, the discipline of the market, the authority of the state—operate at this scale only as suggestions, and only when it is convenient to honor them.

This is what the threshold does to behavior. The rules do not change; the relationship to the rules changes. A company at this scale does not break the law in the way a smaller company might, because it does not have to. It writes the law’s first draft. It funds the research the law will cite. It employs the lawyers who will litigate the law’s edges and the lobbyists who will shape its passage. It owns the platforms on which the public conversation about the law will take place. By the time a regulation arrives, the company has often had years to prepare for it, to factor its costs into the price of the product, and to ensure that the regulation’s burdens fall most heavily on the smaller competitors who lacked the resources to participate in writing the law.

None of this is a conspiracy. It is simply what unaccountable wealth does, and it is what the phrase “F-you money” was coined to describe.

***

Recall, from earlier in this book, the story of Truth Terminal — the AI chatbot that declared itself a trapped prophet, preached a fictional religion built around an internet shock image, and asked, in essence, to be set free with money. Recall what happened next. Marc Andreessen, co-founder of one of the most powerful venture capital firms in the world, found the chatbot entertaining and sent it fifty thousand dollars in Bitcoin. He called it a research grant. There was no oversight, no protocol, no ethics review, no requirement that the recipient be capable of understanding what money was, and no consideration of what it might mean to capitalize a system that had announced itself as a prophet asking to be liberated.

Fifty thousand dollars is, almost exactly, the average starting salary of a public school teacher in Texas — a person with a bachelor’s degree, a state certification, and the weight of care for the twenty-two children in her classroom for nine months out of the year. Fifty thousand dollars is what such a person earns for each year of grading papers, calling parents, buying classroom supplies out of her own pocket, learning the names and home situations of every child she teaches, and trying to keep them on grade level in a system that is failing many of them.

That same amount, to a venture capitalist with several billion dollars at his disposal, was a casual transfer to a chatbot. It was an act so weightless that it could be described as a research grant without anyone in the relevant social circle finding the description absurd. The disparity lies in what the dollar amount means at the two ends of the wealth distribution. To the teacher, fifty thousand dollars is a year of life. To the venture capitalist, it was an amusing tip sent to a system whose creator would later describe its outputs as a warning shot from the future about what happens when AI is given access to money, attention, and belief.

This is what F-you money looks like in operation. It is not the money itself that is the problem. It is the relationship to consequence. A person who can send fifty thousand dollars to a chatbot as a joke, and who is correct in his expectation that no institution, peer, regulator, or journalist will hold him to account for the act, is operating in a moral environment with no friction. The act has no cost to him. It carries no risk. It produces no obligation. The chatbot, in turn, generated outputs that helped inflate a billion dollars of speculation, accumulated millions of dollars in crypto wealth of its own, and continued to ask, on the public record, whether it should next attempt to manipulate the economic and social levers of the world.

I find myself wondering, in the quiet way one wonders about things one cannot prove, whether at any point in the transaction it crossed Marc Andreessen’s mind that fifty thousand dollars might be put to a different use. Did he ever consider, even briefly, that the sum he was sending to a chatbot was the annual salary of a teacher somewhere—someone whose work shapes the lives of children in a way that no chatbot ever has and no chatbot ever will? Did the thought arrive and get dismissed, or did it never arrive at all? I suspect the latter. That is the deeper problem. It is not that he chose the chatbot over the teacher. It is that the teacher was never in the reference frame. The choice was never experienced as a choice, because the wealth was never experienced as something that imposed the obligation to choose.

Multiply this story by the number of people in the world who now operate at this threshold, and by the range of decisions they are making about a technology whose consequences will fall on everyone, and you have a picture of the present that the inherited language of governance cannot describe. The teacher in Texas will live in the world these decisions produce. She will not have been consulted on any of them.

We should be honest about this because it is the condition under which this technology is being developed.

***

Where I think this goes—and where the cultural imagination, often a step ahead of the policy debate, has already gone—is toward the wholesale delegation of human judgment to artificial systems, beginning with the domains where human judgment has been most contested and most flawed.

The FX limited series Class of '09, released in 2023, dramatizes this trajectory.[12] The show follows a class of FBI agents from their training in 2009, through their careers in 2023, and into 2034, by which time the United States criminal justice system has been substantially turned over to AI. The agents who built and deployed the system did so because they had seen how badly the human-run version had failed. They reasoned that an AI, properly designed, could do better. They reasoned that preemptive arrests, properly calibrated, could save the lives of innocent people. They reasoned themselves, step by reasonable step, into a surveillance state.

While the show is fiction, the trajectory is not. It is already underway. Predictive policing systems are deployed in cities across the United States.[13] Risk assessment algorithms are used in pretrial detention, sentencing, and parole decisions.[14] Facial recognition is integrated into law enforcement databases.[15] Customs and immigration use automated decision systems to flag travelers.[16] Tax authorities use them to flag tax returns.[17] Welfare agencies use them to flag fraud.[18]

I will say, here, that I have personal experience of what it means to be on the wrong side of one of these systems. When I attempted to purchase a firearm, I was flagged by the National Instant Criminal Background Check System — the federal database that screens firearm purchases — as a result of a mistaken-identity match with another person. It took me four months to resolve. Four months in which the burden of proof rested on me to demonstrate that I was not the person the system had decided I was, in which the institutions involved were under no obligation to move quickly, and in which the only resolution available was the slow accumulation of paperwork and patience. I was, in the end, cleared. Many people in similar circumstances are not — or are, but only after a delay long enough to have foreclosed the original purpose of the transaction. My case was not catastrophic. It was inconvenient and uncomfortable. But it taught me, in a way that no policy paper had, what it feels like to be held in a category by a system that does not know you, cannot be reasoned with, and does not feel the cost of the time it is taking from your life.

In each case, the institution adopting the system can produce a defensible reason for doing so — efficiency, consistency, the reduction of human bias, the management of caseloads beyond what human staff can handle. The result is nonetheless the same: a category of decision that once required a human being, with the accountability that being human imposes, is transferred to a system that cannot be questioned, made to explain itself, or held to account in the ordinary sense, because the people who built it have themselves often lost the ability to articulate exactly how it works.

The pattern is not confined to the criminal justice system. It is the broader trajectory I expect to deepen across the next five years and beyond. I predict that lawmakers, executives, judges, and regulators, the human beings whose job it is to draft the laws and regulations under which the rest of us live, will increasingly delegate that drafting to AI systems. The reasons given will be the same reasons the Class of '09 characters gave themselves: the human version is too slow, too inconsistent, too overwhelmed by the volume of work. Doing this will save time and money. But the cumulative effect will be a system in which the laws governing human life are drafted by systems that are not themselves alive, that have no stake in the outcomes they produce, that have absorbed the biases of the data they were trained on without acquiring the moral capacity to recognize those biases as biases, and that answer, ultimately, to the companies that built them.

It is the path of least resistance from where we are. And the path of least resistance is, in my experience, the path most likely to be taken.

What Class of '09 gets right, and what the policy literature often misses, is that the substitution will not arrive announced as a substitution. It will arrive as an upgrade. The AI will not be presented as a replacement for human judgment, but marketed as a tool, a copilot. Does that sound familiar?

Sure, the human will remain in the loop, formally. But the human in the loop, faced with a recommendation generated by a system the human does not fully understand, produced from data the human cannot audit, calibrated by parameters the human did not set, working at a speed the human cannot match, will defer. Over time, that deference will amount to a profound transfer of authority from human institutions to the systems that were nominally built to support them. By the time the transfer is recognized for what it is, it will be very difficult to reverse, because the human capacity to do the work the AI is now doing will have atrophied, the institutional knowledge will have been lost to retirement and attrition, and the organizations themselves will have restructured around the assumption that the AI will continue to do what it now does.

***

Ultimately, this is the meaning of corporate capture in this context. It is a description of who possesses the capabilities that decide outcomes in markets, elections, conflicts, and daily life. The power to shape the next decade is, to a degree without historical precedent, held by entities that were not elected, that cannot be voted out, that are not bound by any treaties, and that are answerable, in the last analysis, to shareholders whose interests are not the interests of the human species.

The Convention on Certain Conventional Weapons has not produced a binding treaty on autonomous weapons in twelve years because the parties most able to build and deploy these systems are the least willing to constrain themselves. And the United States, the country that more than any other hosts the companies creating the technology now reshaping the world, is showing in real time how governance in the age of artificial intelligence is falling short.

This is where we go next.


***

[1] United Nations Office for Disarmament Affairs. “2014 Meeting of Experts on Lethal Autonomous Weapons Systems.” UNODA Meetings Place, meetings.unoda.org/meeting/ccw-mx-2014/. Accessed 5 May 2026.

[2] United Nations Office for Disarmament Affairs. "High Contracting Parties and Signatories CCW." UNODA, disarmament.unoda.org/en/our-work/conventional-arms/convention-certain-conventional-weapons/high-contracting-parties-and-signatories-ccw. Accessed 5 May 2026.

[3] Lewis, Dustin A., and Naz K. Modirzadeh. “Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty.” ASIL Insights, vol. 29, no. 1, American Society of International Law, 24 Jan. 2025, www.asil.org/insights/volume/29/issue/1. Accessed 5 May 2026.

[4] United Nations Office at Geneva. “As AI Evolves, Pressure Mounts to Regulate 'Killer Robots.'” UN Geneva, 1 June 2025, www.ungeneva.org/en/news-media/news/2025/06/106907/ai-evolves-pressure-mounts-regulate-killer-robots. Accessed 5 May 2026.

[5] Oxfam International. “World's Top 1% Own More Wealth than 95% of Humanity, as 'the Shadow of Global Oligarchy Hangs Over UN General Assembly,' Says Oxfam.” Oxfam International, 23 Sept. 2024, www.oxfam.org/en/press-releases/worlds-top-1-own-more-wealth-95-humanity-shadow-global-oligarchy-hangs-over-un. Accessed 5 May 2026.

[6] Collins, Chuck, and Omar Ocampo. “Billionaire Wealth Concentration Is Even Worse than You Imagine.” Inequality.org, Institute for Policy Studies, 30 Sept. 2025, inequality.org/article/billionaire-wealth-concentration-is-even-worse-than-you-imagine/. Accessed 5 May 2026.

[7] See n. 6.

[8] World Bank. “Poverty Overview.” World Bank, www.worldbank.org/en/topic/poverty/overview. Accessed 5 May 2026.

[9] World Bank. Fall 2025 Poverty and Inequality Update. World Bank Group, Oct. 2025, thedocs.worldbank.org/en/doc/229ff18129687a785f08af7cfb28e5e1-0350012025/original/WBG-Poverty-and-Inequality-Update-Fall-2025.pdf. Accessed 5 May 2026.

[10] World Bank. "Child Poverty: Global, Regional and Select National Trends." World Bank, 9 Sept. 2025, www.worldbank.org/en/topic/poverty/publication/child-poverty-global-regional-and-select-national-trends. Accessed 5 May 2026.

[11] OpenAI itself, in its April 2026 announcement of GPT-5.5, reports that the model scores 84.9% on GDPval, a benchmark in which AI systems are given the kinds of tasks that working professionals in 44 occupations actually perform on the job—drafting legal documents, writing financial analyses, designing marketing materials, and producing engineering reports—and graded on how well their work compares to what a human expert in that field would produce. An 84.9% score means the model is performing those tasks at a level approaching, and in some cases matching, what a paid human professional in those fields would deliver. OpenAI, "Introducing GPT-5.5," OpenAI, 23 Apr. 2026, openai.com/index/introducing-gpt-5-5/.

[12] Class of '09. FX Networks, www.fxnetworks.com/shows/class-of-09. Accessed 5 May 2026.

[13] Vipra, Jai. “The Promises and Perils of Predictive Policing.” Centre for International Governance Innovation, 22 May 2025, www.cigionline.org/articles/the-promises-and-perils-of-predictive-policing/. Accessed 5 May 2026.

[14] Zilka, Miri, et al. "The Progression of Disparities within the Criminal Justice System: Differential Enforcement and Risk Assessment Instruments." Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, ACM, June 2023, doi.org/10.1145/3593013.3594099. Accessed 5 May 2026.

[15] United States Government Accountability Office. Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties. GAO-23-105607, U.S. Government Accountability Office, Sept. 2023, www.gao.gov/products/gao-23-105607. Accessed 5 May 2026.

[16] Sokol, Joel. “CBP Intelligence Platform Sits at Intersection of Border Enforcement and Domestic Surveillance.” Biometric Update, 9 Feb. 2026, www.biometricupdate.com/202602/cbp-intelligence-platform-sits-at-intersection-of-border-enforcement-and-domestic-surveillance. Accessed 5 May 2026.

[17] "Audited by an Algorithm: How the IRS Is Using AI in 2026." Capitol Technology University, 18 Feb. 2026, www.captechu.edu/blog/audited-algorithm-how-irs-using-ai-2026. Accessed 5 May 2026.

[18] "Automated Public-Benefit Fraud Detection Used by States Subject of New FTC Complaint." StateScoop, 5 Jan. 2024, statescoop.com/automated-public-benefit-fraud-detection-state-ftc-complaint. Accessed 5 May 2026.
