Edition 008: The Patchwork is Missing the Person

In the absence of a federal framework, and now in the face of federal hostility to one, the states have acted. The result is a patchwork of state laws scattered across more than a dozen regulatory domains—employment, healthcare, privacy, consumer protection, cybersecurity, election integrity, antitrust, frontier-model safety, and others. Each treats AI as a feature of a problem that the legal system already knew how to handle. Each sees the person shielded by the law only in the market role its domain happens to recognize—consumer, candidate, applicant, patient. None sees the entire person.

The scale is not small. In 2025 alone, more than twelve hundred AI-related bills were introduced across the fifty state legislatures.[1] One hundred and forty-five became law.[2] As of March 2026, lawmakers in forty-five states have introduced more than fifteen hundred more.[3]

The laws vary widely. New York City requires bias audits of automated hiring tools.[4] Illinois treats discriminatory employment AI as a civil rights violation and bars ZIP codes as a proxy for protected classes.[5] Colorado passed the most ambitious AI law in the country in 2024, focused on high-risk systems making consequential decisions in employment, housing, healthcare, education, and financial services.[6] California has built the most layered framework—privacy rules on automated decisionmaking,[7] a frontier-model transparency law,[8] content-provenance[9] and training-data acts,[10] a companion-chatbot statute,[11] and employment-discrimination regulations governing automated-decision systems.[12] Utah moved earliest and has been narrowing its core consumer-protection rules ever since.[13] Texas signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law in June 2025; it took effect this past January.[14]

But a patchwork of laws is not a legal framework. And already, the most ambitious of them is under attack.

***

On April 9, 2026, xAI, Elon Musk’s artificial intelligence company, sued Colorado in federal court to stop the law.[15] On April 24, the Department of Justice (DOJ) intervened on xAI’s side.[16] It was the first time the federal government had joined a private suit to invalidate a state AI law.[17] According to the DOJ’s own press release, “Laws that require AI companies to infect their products with woke DEI ideology are illegal.”[18]

The word “woke” signals a cultural disagreement, but Colorado’s AI law does not require any company to adopt an ideology. Instead, it requires companies deploying high-risk AI systems to take reasonable care that those systems do not discriminate on the basis of race, sex, religion, age, disability, or national origin—the same protected categories that federal civil rights law has recognized and enforced for sixty years. What the DOJ is dismantling is not ideology; it is accountability.

Consider what this looks like for one person. She pays taxes to a country whose government has chosen the side of the companies deciding whether or not she gets hired. If she lives in New York City, the company screening her résumé must have run a bias audit and given her notice. If she lives in Illinois, she is owed notice that AI is being used. If she lives in Colorado, her state tried to do more, but the federal government has gone to court to stop it. If she lives in Texas, her legislature considered protecting her, then stripped the protections out of the bill before passing it, leaving her with rules that mostly govern what the government does and largely let private employers do as they please. And the companies running the algorithms that decide whether she is considered, scored, and screened out answer to no one except their investors. That is where the power resides.

***

Even where the laws do reach, what they require is mostly notice. Telling a candidate that AI is being used is not the same as ensuring it does not discriminate. Requiring bias audits is not the same as requiring that the results of those audits produce accountability when bias is found. Requiring transparency is not the same as requiring that the information be accurate, comprehensive, or actionable.

This is not a new mistake. Data privacy law in the United States has been built on a notice regime for a generation: tell the consumer what you are about to do to her, and as long as you have told her, you are free to do it. And scholars have spent decades pointing out the obvious: this is not protection (or privacy). A privacy notice that announces the harm is not a safeguard against the harm. The same flawed logic is now being written into AI law. To call this a failure of regulation is to misunderstand what is happening. These laws are not failing to do what they were designed to do. They are doing exactly what notice-based regimes have always done. They are telling the person what is about to happen to her. They are not stopping it.

***

Texas is a case in point. The original draft of TRAIGA was ambitious.[19] It would have required developers to exercise reasonable care to protect consumers from foreseeable harm, mandated impact assessments for high-risk systems, and imposed transparency obligations on the companies deploying AI to make decisions about people’s lives.

What was enacted is something else. TRAIGA now reaches only a handful of intentional misuses of AI—inciting self-harm or crime, producing child sexual abuse material, discriminating unlawfully, or infringing constitutional rights—and requires government agencies to provide notice to people when AI is on the other end of the conversation. A sandbox program and an advisory council fill out the rest. Yet the trade press reports that Texas has struck a reasonable balance.[20]

Read carefully, TRAIGA is a near-perfect demonstration of how the system decides whom to protect, whom to leave out, and whom to shelter. The law protects consumers, defining a consumer as a Texas resident “acting only in an individual or household context,” and expressly excludes “an individual acting in a commercial or employment context.” The protected category is the person buying things and running a household. The unprotected person is that same person while at work, which is, incidentally, where many people encounter AI making consequential decisions about their lives.

The law also requires intent. An AI system cannot be intentionally developed or deployed to discriminate, manipulate, or infringe constitutional rights. But the hiring algorithms that rejected D.K. and Derek Mobley were not designed to discriminate. Those algorithms were designed to score interviews and applications. No one intended the harm. Under TRAIGA, that would be a defense, not an admission.

TRAIGA also draws a line about transparency. It obligates government agencies and healthcare providers using AI to disclose it, but private companies using AI to screen applications, extend credit, rent housing, hire or fire a worker, set a price, or recommend content are not required to disclose anything. There is also no private right of action—only the Texas Attorney General (AG) can bring a lawsuit. So whether anyone is held to account under this law depends on whether the AG decides to investigate. And a safe harbor sits on top of all of it. A defendant can establish an affirmative defense by showing substantial compliance with the NIST AI Risk Management Framework—a voluntary document that demands the kind of paperwork only a well-funded compliance department can produce.

TRAIGA sees consumers, but not workers. It sees intentional misconduct, but not structural harm. It requires notice from the state and healthcare providers, but that’s it. It doesn’t address the companies that have absorbed the state’s functions. It denies the harmed person any direct legal recourse. And it shelters, by design, the actors who can afford to produce the paperwork.

***

Now, let’s take a step back and look at the patchwork of state AI laws as a whole. Each of these laws tells someone what they cannot do, within a narrow band of conduct the law has decided to address. None of them tells a person what she is owed.

Look also at who the laws see when they look at a person. New York City sees a candidate screened by an automated employment decision tool. Illinois sees an employee and an applicant. Colorado divides the world into developers, deployers, and the consumers whose interactions with high-risk systems trigger its protections. California’s rules turn on whether AI is making a significant decision about a consumer—defined to include employees and applicants, but to exclude advertising.[21] Utah speaks of consumers and suppliers.[22] Texas sees a consumer and an individual acting in a commercial or employment context, who is not protected at all.

And this is the pattern in American AI law: The patchwork of state statutes does not see the consumer, the worker, the renter, the patient, the parent, and the citizen as a whole. It sees a series of market roles, each given a small and partial set of protections, none of them adding up to shield the full human being. A person moves through her day, but the law’s recognition of her flickers on and off depending on which role she occupies when the algorithm reaches her.

Every person has a stake in this — not equally, but everyone has one. The corporate executive is also a patient. The investor is also someone whose health insurance company is using AI to deny claims. The technologist building these systems goes home and applies for a mortgage like everyone else. No one is protected as a whole person, because the framework has no place for what a whole person is.

This is not a peculiarity of AI law. It is the basic move of American corporate law. Our laws recognize people only as the market does. This pattern is so deep in American legal thinking that most of us do not notice it. And it sits atop something more systemic. We have built a constitutional order around the right to be left alone by the government. We have not built one around the right to be protected by it.

***

Step back and look. The pattern runs back to the structure of the rights this country has chosen to recognize, and the rights it has chosen not to. That’s where we go next.

***

[1] “Artificial Intelligence (AI) Legislation Tracker 2026: All 50 States.” MultiState, www.multistate.ai/artificial-intelligence-ai-legislation. Accessed 7 May 2026.

[2] See n. 1.

[3] See n. 1.

[4] New York City. Council. Local Law No. 144 of 2021: A Local Law to Amend the Administrative Code of the City of New York, in Relation to Automated Employment Decision Tools. 2021. Law and the Workplace, www.lawandtheworkplace.com/wp-content/uploads/sites/29/2023/03/Local-Law-144.pdf. Accessed 4 May 2026.

[5] Illinois. General Assembly. House Bill 3773: An Act Concerning Civil Rights (Public Act 103-0804). 103rd General Assembly, 9 Aug. 2024. Illinois General Assembly, www.ilga.gov/ftp/legislation/103/BillStatus/HTML/10300HB3773.html. Accessed 4 May 2026.

[6] Colorado. General Assembly. Senate Bill 24-205: Concerning Consumer Protections in Interactions with Artificial Intelligence Systems. 74th General Assembly, 17 May 2024. Colorado General Assembly, leg.colorado.gov/bills/sb24-205. Accessed 4 May 2026.

[7] California Privacy Protection Agency. "CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Regulations." California Privacy Protection Agency, cppa.ca.gov/regulations/ccpa_updates.html. Accessed 6 May 2026.

[8] California Legislature. "SB-53 Artificial Intelligence Models: Large Developers." California Legislative Information, leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53. Accessed 6 May 2026.

[9] California Legislature. "AB-853 California AI Transparency Act." California Legislative Information, leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB853. Accessed 6 May 2026.

[10] California Legislature. "AB-2013 Generative Artificial Intelligence: Training Data Transparency." California Legislative Information, leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2013. Accessed 6 May 2026.

[11] California Legislature. "SB-243 Companion Chatbots." California Legislative Information, leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243. Accessed 6 May 2026.

[12] California Civil Rights Department. "Final Text of Regulations on Automated-Decision Systems." Civil Rights Department, calcivilrights.ca.gov/wp-content/uploads/sites/32/2025/03/Attachment-A-Final-Text-of-Regs.pdf. Accessed 6 May 2026.

[13] Davis Wright Tremaine LLP. "Utah Enacts Multiple Laws Amending and Expanding the State's Regulation of the Deployment and Use of Artificial Intelligence." Davis Wright Tremaine, 23 Apr. 2025, www.dwt.com/blogs/artificial-intelligence-law-advisor/2025/04/utah-regulation-ai-policy-mental-health-chatbots. Accessed 6 May 2026.

[14] Tex. Bus. & Com. Code Ann. § 552.101, https://tcss.legis.texas.gov/resources/bc/htm/bc.552.htm. Accessed 6 May 2026.

[15] X. AI LLC v. Weiser, No. 1:26-cv-01515 (D. Colo. filed Apr. 9, 2026), CourtListener (Free Law Project), https://www.courtlistener.com/docket/73171074/x-ai-llc-v-weiser/. The complaint is also available at https://www.courthousenews.com/wp-content/uploads/2026/04/grok-ai-bill-colorado-complaint.pdf.

[16] U.S. Department of Justice, Office of Public Affairs, “Justice Department Intervenes in xAI Lawsuit Challenging Colorado’s Algorithmic Discrimination Law,” press release, April 24, 2026, https://www.justice.gov/opa/pr/justice-department-intervenes-xai-lawsuit-challenging-colorados-algorithmic-discrimination.

[17] Ashley Gold, “Justice Department joins xAI challenge to Colorado AI law,” Axios, Apr. 24, 2026, https://www.axios.com/2026/04/24/justice-department-joins-xai-challenge-colorado-ai-law (“It’s the first time the DOJ has intervened in a case challenging state regulations on AI.”).

[18] See n. 16.

[19] See K&L Gates LLP. "Pared Back Version of the Texas Responsible Artificial Intelligence Governance Act Signed into Law." K&L Gates, 24 June 2025, www.klgates.com/Pared-Back-Version-of-the-Texas-Responsible-Artificial-Intelligence-Governance-Act-Signed-Into-Law-6-24-2025. Accessed 6 May 2026.

[20] See, e.g., Tene, Omer. “With TRAIGA, Lone Star State Leans Into AI Governance Regulation.” Goodwin, 9 July 2025, www.goodwinlaw.com/en/insights/publications/2025/07/alerts-practices-dpc-with-traiga-lone-star-state-leans. Accessed 6 May 2026.

[21] Cal. Code Regs. tit. 11, § 7001(ddd) (2026) (defining "significant decision" to include employment and independent contracting opportunities, but to exclude advertising to a consumer), https://cppa.ca.gov/regulations/pdf/ccpa_updates_cyber_risk_admt_appr_text.pdf. Accessed 5 May 2026.

[22] Utah Code §§ 13-75-101 to 13-75-106 (2025), enacted by S.B. 226 (defining the AI consumer-protection scheme around the supplier-consumer relationship and incorporating the consumer transaction definition from the Utah Consumer Sales Practices Act, § 13-11-3), https://le.utah.gov/Session/2025/bills/introduced/SB0226S02.pdf. Accessed 5 May 2026.
