"Ethnonationalism by Algorithm" - Spencer Overton speaks

Spencer Overton, in a new paper (linked below), argues that artificial intelligence (AI) has become a central instrument of modern political power, capable of embedding ideology into public infrastructure. The paper shows how US federal AI policy, framed as neutral and innovation-driven, can systematically encode ethnonationalist values, weaken civil rights protections, and reshape democratic institutions through algorithmic governance rather than overt legislation.

Here are 10 key takeaways:

1. AI governance is no longer technical; it is deeply political

The paper shows that decisions about AI design, procurement, and regulation shape social outcomes as much as traditional lawmaking. Choices framed as “technical optimization” determine whose values, identities, and interests are encoded into public systems. AI governance has become a new arena for political power, not a neutral engineering exercise.

2. Federal AI policy is used to advance ethnonationalist ideology indirectly

Rather than explicit racial language, ethnonationalist goals are pursued through race-neutral policy framing. By redefining equity and fairness as ideological interference, AI policy becomes a mechanism to privilege dominant cultural identities while claiming objectivity. This allows ideology to be embedded without overt discrimination. 

3. Removing bias safeguards allows dominant cultural norms to be reproduced by default

When protections against algorithmic bias are dismantled, AI systems fall back on majority patterns present in historical data. These defaults silently reinforce existing hierarchies in employment, immigration, policing, and public services. The absence of safeguards is itself a political choice.

4. “Neutral” algorithms amplify historical inequalities embedded in data

The paper emphasizes that neutrality in AI is illusory. Training data reflects past discrimination and structural exclusion, and algorithms trained on such data will reproduce those patterns at scale. Claims of neutrality mask the continuation of inequality through automated means. 
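To make this mechanism concrete, here is a minimal, hypothetical sketch (not from Overton's paper; all variable names, coefficients, and data are illustrative assumptions). A classifier is trained on synthetic "historical" decisions that penalized one group; even when the group label is withheld, the model reproduces the penalty because a correlated, facially neutral proxy feature carries the same information.

```python
# Illustrative sketch only: synthetic data showing how "neutral" training on biased
# historical outcomes reproduces those outcomes through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                          # 0 = majority, 1 = minority (synthetic)
merit = rng.normal(0, 1, n)                            # the factor decisions should rest on
proxy = merit + 1.5 * group + rng.normal(0, 0.5, n)    # a correlated, facially neutral feature

# Historical approvals: identical merit, but group 1 faced an extra penalty.
approved = (merit - 0.8 * group + rng.normal(0, 0.3, n)) > 0

# Train only on "neutral" features -- the protected attribute is never shown to the model.
X = np.column_stack([merit, proxy])
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.1%}")
# Despite never seeing `group`, the model recovers the historical penalty through `proxy`.
```

The point of the sketch is narrow: excluding a protected attribute does not make a system neutral when the training data already encodes past discrimination.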

5. Executive orders can reshape AI governance without legislative debate

AI policy is increasingly driven through executive action rather than democratic lawmaking. This enables rapid, sweeping changes to governance frameworks without public scrutiny, hearings, or accountability. As a result, core democratic values can be altered administratively rather than legislatively. 

6. Public-sector AI decisions affect citizenship, rights, and access at scale

Government use of AI influences immigration screening, benefits allocation, surveillance, taxation, and criminal justice. Errors or bias in these systems do not affect isolated users but entire populations. Public AI therefore directly shapes who belongs, who is trusted, and who is excluded.

7. Deregulation disproportionately harms marginalized communities

The rollback of civil rights-oriented AI oversight removes mechanisms designed to detect disparate impact. Marginalized groups bear the cost of these decisions through higher error rates, reduced recourse, and opaque decision-making. Deregulation, presented as innovation-friendly, shifts risk onto the most vulnerable.
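One widely used screen for disparate impact, offered here purely as an illustrative sketch rather than anything the paper prescribes, is the EEOC "four-fifths rule" from US employment practice: a system is flagged when any group's selection rate falls below 80% of the best-off group's rate. The group labels and counts below are hypothetical.

```python
# Illustrative four-fifths-rule check on hypothetical selection outcomes.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> {group: selection rate}."""
    totals, hits = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Return, per group, (rate ratio vs. the best-off group, passes threshold?)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate / best, rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical screening outcomes: group A selected 60/100, group B selected 35/100.
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 35 + [("B", False)] * 65)
print(four_fifths_check(outcomes))
# Group B's ratio is 0.35 / 0.60, about 0.58 -- below 0.8, so it would be flagged for review.
```

Removing oversight does not remove the disparity; it only removes the requirement that anyone run a check like this before deployment.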

8. AI homogenizes culture by privileging majority patterns

Algorithmic systems optimize for dominant linguistic, cultural, and behavioral norms. Minority perspectives, languages, and lived experiences are flattened or excluded because they deviate from statistical averages. Over time, this homogenization narrows public discourse and suppresses pluralism.
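A toy illustration of the underlying statistical dynamic (numbers are invented, not drawn from the paper): when a system is optimized only for aggregate accuracy over an imbalanced population, the single output that minimizes overall error is the majority pattern, and the minority pattern is never produced at all, even though overall metrics look strong.

```python
# Illustrative sketch: aggregate optimization over an imbalanced population
# defaults to the majority pattern and erases the minority one.
population = ["pattern_A"] * 900 + ["pattern_B"] * 100

def error_rate(prediction, data):
    return sum(x != prediction for x in data) / len(data)

best = min(set(population), key=lambda p: error_rate(p, population))
print(best)                                    # -> "pattern_A"
print(error_rate(best, population))            # -> 0.10 overall: looks "accurate"
minority = [x for x in population if x == "pattern_B"]
print(error_rate(best, minority))              # -> 1.00 for the minority pattern
```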

9. Democratic accountability weakens when algorithms replace human judgment

As automated systems increasingly guide or replace human decision-making, responsibility becomes diffused. Agencies can deflect accountability onto models, vendors, or “the algorithm.” This weakens transparency, due process, and the ability of citizens to challenge state power.

10. Government AI sets norms that influence private-sector deployment globally

Because the US government is a major AI procurer and standard-setter, its policies shape global norms. When public-sector AI deprioritizes fairness and inclusion, those values are weakened across markets and borders. State AI governance thus has international democratic consequences. 

Summary

Spencer Overton's paper reframes AI law as a cornerstone of democratic governance, warning that unregulated public-sector AI can entrench exclusion while claiming neutrality. It argues for embedding fairness, pluralism, and accountability into government AI systems, positioning equitable AI governance not as a barrier to innovation, but as essential to sustaining an inclusive democracy. 

Download the full paper here

