The AI-Generated Text Arms Race: How Institutions Are Fighting Back Against AI Slop
In early 2023, the science fiction magazine Clarkesworld made headlines when it was forced to close its submissions portal, overwhelmed by a flood of AI-generated short stories. It was one of the first visible signs of what security researcher Bruce Schneier and co-author Nathan E. Sanders now describe as an arms race between AI-generated content and the institutions trying to cope with it.
Three years later, that flood has become a tsunami, and it's hitting virtually every institution that accepts written submissions from the public.

The Deluge Is Everywhere
The pattern is remarkably consistent across domains. A legacy system that relied on the natural difficulty of writing to limit volume suddenly faces an explosion of submissions, and the humans on the receiving end simply can’t keep up.
Here’s where AI-generated content is overwhelming existing systems:
- Courts: Legal systems around the world are being flooded with AI-generated filings, particularly from self-represented litigants who can now produce voluminous (if not always coherent) legal documents.
- Academic journals: Editors are being inundated with AI-generated research papers and letters to the editor. AI conferences themselves are drowning in AI-generated submissions.
- Job applications: Employers report a massive surge in AI-crafted applications, with some candidates using AI to fabricate entire identities and experience.
- Social media: Platforms are awash in AI-generated posts, images, and videos — what the internet has collectively started calling “AI slop.”
- Democracy: Lawmakers are struggling to distinguish genuine constituent feedback from AI-generated messages, while newspapers face a wave of AI-written letters to the editor.
- Open source: Even software projects are seeing AI-generated contributions that create more work for maintainers than they save.

Fighting AI With AI
Faced with this onslaught, institutions are increasingly turning to the same technology that created the problem. It’s a classic arms race — and the defensive measures mirror the offensive ones:
- Academic peer reviewers now use AI to evaluate papers that may have been generated by AI.
- Social media platforms deploy AI moderators to handle content volumes that human moderators can’t match.
- Court systems in countries like Brazil are using AI to triage and process litigation supercharged by AI (the sketch after this list shows the basic pattern).
- Employers turn to AI-powered screening tools to catch fraudulent applications.
- Educators use AI not just to grade papers but to administer oral exams and provide feedback.
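To make "AI triage" concrete, here is a minimal sketch of the pattern these defensive systems share: score each incoming submission, then use the score to set review priority. Everything here is illustrative, and the crude phrase-matching scorer is a deliberately simple stand-in for a real trained classifier or an LLM call.

```python
from dataclasses import dataclass

# Deliberately crude stand-in for a real model: production systems would
# score submissions with a trained classifier or an LLM, not phrase matching.
BOILERPLATE_PHRASES = ("as an ai language model", "in conclusion, it is clear")

@dataclass
class Submission:
    submission_id: str
    text: str

def slop_score(text: str) -> float:
    """Rough 0-1 estimate that a submission is low-effort generated text."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in BOILERPLATE_PHRASES)
    return min(1.0, hits / len(BOILERPLATE_PHRASES))

def triage(queue: list[Submission], threshold: float = 0.5):
    """Split a queue into a human-first lane and a slow lane.

    Nothing is auto-rejected: the score only sets review priority, so a
    false positive delays a legitimate submission instead of discarding it.
    """
    priority = [s for s in queue if slop_score(s.text) < threshold]
    slow_lane = [s for s in queue if slop_score(s.text) >= threshold]
    return priority, slow_lane
```

The design choice worth noting is that the score feeds a queue, not a verdict. Given how unreliable detection is (more on that below), auto-rejection would punish legitimate submitters; prioritization degrades gracefully when the classifier is wrong.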
The problem? This defensive AI will likely never achieve permanent supremacy. Each improvement in detection spurs improvements in generation, and vice versa. It’s an adversarial game with no stable equilibrium.
The Nuance: AI as Equalizer vs. AI as Fraud Engine
This is where the conversation gets genuinely complicated — and where Schneier and Sanders make their most important point.
Not all AI-assisted writing is fraud. Consider:
- A researcher who isn't fluent in English was previously at a massive disadvantage when writing papers for English-language journals. Well-funded researchers could hire human editors; everyone else struggled. AI levels that playing field.
- A job seeker using AI to polish a resume or write a better cover letter is doing exactly what wealthy applicants have always done — hiring professional help. AI just makes that help universally accessible.
- A citizen using AI to articulate their views to a legislator is exercising the same capability that lobbyists and the wealthy have always had — professional writing assistance.
The key distinction isn’t whether AI was used — it’s whether AI enables fraud or democratizes access.
Using AI to polish your genuine thoughts into clear prose? That’s democratization. Using AI to generate hundreds of fake constituent letters for an astroturf campaign? That’s fraud. Using AI to help express your real work experience in a cover letter? Legitimate. Using AI to fabricate credentials and cheat on job interviews? Clearly over the line.
As Schneier and Sanders put it: “What differentiates the positive from the negative here is not any inherent aspect of the technology, it’s the power dynamic.”
The Uncomfortable Reality
There’s no putting this genie back in the bottle. Highly capable AI models are widely available and can run on a laptop. The technology exists, it’s accessible, and it’s only getting better.
This means every institution needs to adapt. Some key principles for navigating this landscape:
- Focus on fraud, not tool use. Policies that ban “AI-generated content” entirely are both unenforceable and counterproductive. Better to focus on whether the content is fraudulent or deceptive.
- Embrace transparency. Requiring disclosure of AI assistance (as many academic journals now do) is more realistic and fairer than trying to detect and ban it.
- Build better systems, not just better detectors. If your institution can be overwhelmed by volume alone, the problem is the system, not the AI. Courts, journals, and hiring processes all need structural adaptation.
- Protect the equalizing benefits. Any response to AI slop needs to be careful not to eliminate the genuine benefits AI provides to people who previously lacked access to professional writing assistance.
What Developers Should Watch
For those of us building AI tools and applications, this arms race has direct implications:
- Watermarking and provenance technologies are becoming increasingly important. If your tools generate text, consider building in provenance signals; the sketch after this list shows one minimal approach.
- Detection APIs are a growing market, but they’re fundamentally limited — expect false positives and an ongoing cat-and-mouse game.
- Authentication and identity may become more important than content analysis. Proving who wrote something may matter more than proving how it was written.
- Responsible AI design means thinking about how your tools might be used at scale for fraud, not just how individual users interact with them.
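The provenance point is the most immediately actionable. Here is a minimal sketch, using only the Python standard library, of attaching a signed manifest to generated text so downstream systems can verify its origin and integrity. The record format and the HMAC shared-secret scheme are illustrative assumptions; a production system would use asymmetric signatures or a standard such as C2PA so that verifiers don't hold the signing key.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative only: a shared secret stands in for a real signing key.
SIGNING_KEY = b"replace-with-a-real-key"

def make_provenance_record(text: str, model_id: str) -> dict:
    """Build a signed manifest tying a piece of generated text to its origin."""
    payload = {
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_provenance(text: str, record: dict) -> bool:
    """Check that the signature is authentic and the text is unchanged."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    body = json.dumps(claimed, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["sha256"] == hashlib.sha256(text.encode("utf-8")).hexdigest()
    )

record = make_provenance_record("Draft written with AI assistance.", "example-model-v1")
print(verify_provenance("Draft written with AI assistance.", record))  # True
print(verify_provenance("Draft, lightly edited.", record))             # False
```

Note what this does and doesn't buy you: a signature proves the text hasn't changed since it was signed, which is an authentication property rather than a detection one. It says nothing about text produced by tools that never participate, which is exactly why identity and authentication may end up mattering more than content analysis.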
The Bottom Line
The AI text arms race isn’t a problem that gets “solved.” It’s a new permanent feature of the information landscape. Institutions that adapt — by focusing on fraud rather than tool use, by embracing transparency, and by redesigning systems for a world of abundant generated text — will come out stronger. Those that try to simply detect and ban AI content are fighting a losing battle.
As Schneier and Sanders conclude: “There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance.”
The question isn’t whether people will use AI to write. They will. The question is whether we build systems that harness the democratizing potential while limiting the fraud. That’s the real challenge — and it’s one that requires thoughtful policy, not just better technology.
This article discusses themes from Bruce Schneier and Nathan E. Sanders’ essay “AI-Generated Text and the Detection Arms Race,” originally published in The Conversation.
