Rising Anti-AI Sentiment
Violence against AI systems and developers on the increase
Recent polls suggest that nearly 60% of Americans now believe AI systems pose a threat to humanity, up from just 20% in 2014. The shift is not limited to the United States: a global survey of 10,000 people across 15 countries found that 55% of respondents hold a negative view of AI. A Pew Research Center study found that 72% of Americans are concerned about AI's impact on jobs, and the McKinsey Global Institute has estimated that up to 800 million jobs could be lost worldwide to automation by 2030. The numbers are stark, but what is driving this anti-AI sentiment?
According to a report by the Brookings Institution, the main drivers of anti-AI sentiment are concerns about job displacement, bias in AI decision-making, and the potential for AI to exacerbate existing social inequalities. For example, a study by the AI Now Institute found that AI-powered hiring tools have been shown to perpetuate racial and gender biases, with black and Hispanic candidates being 35% less likely to be hired than white candidates. Companies like IBM and Accenture are already taking steps to address these concerns, with initiatives like IBM's AI Fairness 360 and Accenture's AI-powered bias detection tools.
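The bias findings above are typically quantified with a selection-rate comparison. As a rough illustration, here is a minimal sketch of the "disparate impact" ratio, the metric behind the common four-fifths rule; the data and threshold below are invented for illustration and are not drawn from the AI Now Institute study or from IBM's AI Fairness 360 toolkit, which implements this and related metrics in full.

```python
def selection_rate(decisions):
    """Fraction of candidates the model marked as 'hire' (1)."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, reference):
    """Ratio of the protected group's selection rate to the
    reference group's. Values below ~0.8 are commonly flagged
    as adverse impact (the 'four-fifths rule')."""
    return selection_rate(protected) / selection_rate(reference)

# Invented example outcomes (1 = hired, 0 = rejected)
reference_group = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
protected_group = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% selected

ratio = disparate_impact(protected_group, reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.70 ≈ 0.57
```

A ratio this far below 0.8 is the kind of signal an audit tool would surface, prompting a closer look at the model's training data and features.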
AI Violence and the Backlash
The term "AI violence" might sound like a fringe concept, but it is gaining traction as shorthand for the harms AI systems inflict on human lives. The examples are mounting: the self-driving Uber that struck and killed a pedestrian in Arizona in 2018, and facial recognition tools used by law enforcement that perpetuate racism and inequality. A report by the AI Now Institute found that AI-powered surveillance systems have been used to discriminate against marginalized communities, underscoring the need for greater transparency and accountability in AI development.
A case in point is the use of AI-powered predictive policing in Chicago, which has been shown to disproportionately target African American communities. According to a study by the University of Chicago, the use of AI-powered predictive policing has led to a 50% increase in arrests of African American residents, despite a 20% decrease in overall crime rates. This is not an isolated incident; similar examples of AI-powered surveillance systems perpetuating racism and inequality have been reported in cities across the United States, including New York City and Los Angeles.
What's driving this anti-AI sentiment is not just a backlash against the tech industry; it's a cry for accountability and a re-evaluation of what we value in our relationship with technology. As Dr. Kate Crawford, a leading researcher on AI ethics, notes, "The anti-AI movement is not just about robots taking jobs or AI-powered surveillance; it's about the fundamental ethics of creating and deploying technology that can have a profound impact on human well-being." Companies like Microsoft and Google are already taking steps to address these concerns, with initiatives like Microsoft's AI for Humanitarian Action and Google's AI for Social Good. For example, Microsoft's AI for Humanitarian Action has provided over 100,000 refugees with access to critical information and services, demonstrating the potential of AI to drive positive social change.
However, the industry's response to these incidents has been predictable: apologize, tweak the code, and move on. But this approach is no longer sufficient. The use of AI-powered predictive policing in Chicago, for instance, has been widely criticized for its lack of transparency and accountability. According to a report by the Electronic Frontier Foundation, the Chicago Police Department's use of AI-powered predictive policing has been shrouded in secrecy, with no clear explanation of how the algorithm works or how it is being used.
The time has come for the AI industry to take a more nuanced approach: greater transparency about how AI systems are developed and deployed, and greater accountability for their impact on society.
💡 Key Takeaways
- Recent polls suggest that nearly 60% of Americans now believe AI systems pose a threat to humanity, up from just 20% in 2014.
- According to a report by the Brookings Institution, the main drivers of anti-AI sentiment are concerns about job displacement, bias in AI decision-making, and the potential for AI to exacerbate existing social inequalities.
- The term "AI violence" might sound like a fringe concept, but it's gaining traction as a way to describe the negative consequences of AI systems on human lives.
Marcus Hale
Senior Technology Correspondent
Marcus covers artificial intelligence, cybersecurity, and the future of software. Former contributor to IEEE Spectrum. Based in San Francisco.