Edition 004: When Laws Don’t Fully Protect

The system was built to look away. Let’s start with the structural problem in the law’s response to AI hiring discrimination.

The cases described in the previous edition — D.K.’s and Derek Mobley’s — are possible only because D.K. and Mobley are members of protected classes. D.K. is Deaf and Indigenous. Mobley is African American, over forty, and disabled. The civil rights framework — the Americans with Disabilities Act, Title VII of the Civil Rights Act, and the Age Discrimination in Employment Act — exists specifically to protect people in those categories from discriminatory treatment. Without that framework, their cases would have no legal home.

But the civil rights framework was not built to provide general accountability for algorithmic decision-making. It was built to protect specific groups from specific forms of discrimination. And that distinction creates a structural gap that has received far less attention than it deserves.

Consider the person who is also harmed by an AI hiring system, but who does not fall into a recognized protected class. The white, non-disabled applicant in their thirties whose application was still rejected in the middle of the night by the same algorithm. The harm and the process are the same. The person is still processed rather than considered, still discarded without explanation, and still denied any meaningful opportunity to understand what happened or why. But the civil rights framework offers that person nothing. They are not a member of a protected class. They cannot allege that they were treated differently because of a characteristic the law recognizes as protected.

Their options are limited and largely inadequate. They could pursue a privacy theory — arguing that the company collected and processed their data in ways that violated applicable privacy law. They could pursue a contract theory — arguing that the company’s terms of service created an obligation it failed to fulfill. They could pursue a consumer protection theory — arguing that the company’s description of its hiring process was misleading in ways that caused them harm. But all of those theories are harder to bring and even harder to prove. They require demonstrating specific violations of specific legal standards that were not designed with AI hiring discrimination in mind. And they are less likely to attract sophisticated advocacy organizations like the ACLU, which take on discrimination cases precisely because those cases present a clear civil rights claim.

The result is an accountability structure organized around the categories the law can see — race, age, disability, sex, national origin — rather than around the actual distribution of harm.

Algorithms do not discriminate only along protected class lines. They optimize for patterns in historical data. Those patterns can exclude people in ways that have nothing to do with any characteristic the civil rights framework recognizes.

Let’s pause on this. I asked the AI model I am working with, Claude Opus 4.7, to extend a list I had started — three sentences about characteristics that could be used by algorithmic hiring systems to reject applicants, each one naming a different mechanism by which the rejection occurred. I wanted to see how far the logic would travel if pushed. Here is what came back:

· The applicant is rejected because their resume formatting deviates from templates the model associates with strong candidates.

· The applicant is rejected because gaps in their employment history pattern-match to profiles the training data labeled as higher risk.

· The applicant is rejected because their previous job titles do not follow the linear progression the model has learned to prefer.

· The applicant is rejected because their prior employers are not in the cluster of companies the model treats as feeders for this role.

· The applicant is rejected because the vocabulary in their cover letter scores below a threshold derived from past hires, regardless of what they actually said.

· The applicant is rejected because they took longer than the median time to complete the online assessment, which the model treats as a proxy for capability.

· The applicant is rejected because their LinkedIn activity patterns do not match those of employees who stayed past the two-year mark.

· The applicant is rejected because their commute distance exceeds a value the model learned correlates with attrition, even though they relocated last month.

· The applicant is rejected because a video interview algorithm rated their facial expressions as insufficiently enthusiastic by standards calibrated on a reference set that did not include people who look like them.

· The applicant is rejected because their references’ writing styles fall outside the distribution the model associates with credible endorsements.

None of these people have a civil rights claim. All of them were harmed. The law offers them theories rather than remedies — expensive, uncertain, and ill-fitted claims designed for different problems, applied to a harm the law was not built to see.
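
To make the mechanism concrete, here is a minimal sketch of how a screen like the ones above can operate. Every feature, weight, and threshold in it is hypothetical, invented for illustration rather than taken from any real vendor’s system. The point is only that a model can reject someone on features like these, and that none of them maps to a category the law recognizes.

```python
# A hypothetical screening rule. All features, weights, and thresholds
# are invented for illustration; this describes no real vendor's model.

from dataclasses import dataclass


@dataclass
class Applicant:
    resume_template_similarity: float  # 0..1, closeness to "preferred" resume formats
    employment_gap_months: int         # total months of gaps in work history
    assessment_minutes: float          # time taken on the online assessment
    commute_miles: float               # distance from listed address to the office


def screen(applicant: Applicant) -> bool:
    """Return True to advance the applicant, False to reject.

    None of these features is a protected characteristic, yet each one
    can exclude someone for reasons unrelated to ability.
    """
    score = 0.0
    score += 2.0 * applicant.resume_template_similarity          # formatting proxy
    score -= 0.1 * applicant.employment_gap_months               # gap penalty
    score -= 0.05 * max(0.0, applicant.assessment_minutes - 30)  # slower than median
    score -= 0.02 * applicant.commute_miles                      # attrition proxy
    return score >= 1.0  # arbitrary cut line derived from past hires


# A capable candidate who relocated last month and took their time on the test:
candidate = Applicant(
    resume_template_similarity=0.6,
    employment_gap_months=8,
    assessment_minutes=55,
    commute_miles=40,
)
print(screen(candidate))  # False: rejected, with no civil rights claim to bring
```

Run the sketch and the candidate is rejected. Ask why, and the honest answer is a weighted sum of proxies, none of which the civil rights framework was built to see.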

This is not an argument against the civil rights framework. I stand firmly behind it. It is one of the most important legal achievements in American history, and D.K. and Mobley are right to use it.

Our argument here is that the civil rights framework cannot bear the full weight of AI accountability — that organizing AI governance primarily around protected class discrimination leaves an enormous portion of the harm unaddressed, and leaves the people harmed by it without adequate legal recourse.

The gap is not just between what AI systems do and what the law requires. The gap is inside the law itself — between the people the law was built to protect and the full range of people the system was built to look away from.

***

Next, we’re going to zoom out.
