The AI-Generated Text Arms Race: How Institutions Are Fighting Back Against AI Slop

In early 2023, the science fiction magazine Clarkesworld made headlines when it was forced to close its submissions portal — overwhelmed by a flood of AI-generated short stories. It was one of the first visible signs of a phenomenon that security researcher Bruce Schneier and co-author Nathan E. Sanders now describe as an arms race between AI-generated content and the institutions trying to cope with it.

Three years later, that flood has become a tsunami — and it’s hitting virtually every institution that accepts written submissions from the public.


The Deluge Is Everywhere

The pattern is remarkably consistent across domains. A legacy system that relied on the natural difficulty of writing to limit volume suddenly faces an explosion of submissions, and the humans on the receiving end simply can’t keep up.

Scientific journals, court filings, hiring pipelines, and legislators’ inboxes are all seeing the same surge of AI-generated content.


Fighting AI With AI

Faced with this onslaught, institutions are increasingly turning to the same technology that created the problem: AI-powered detection and filtering to screen AI-generated submissions. It’s a classic arms race in which the defensive measures mirror the offensive ones.

The problem? This defensive AI will likely never achieve permanent supremacy. Each improvement in detection spurs improvements in generation, and vice versa. It’s an adversarial game with no stable equilibrium.

The Nuance: AI as Equalizer vs. AI as Fraud Engine

This is where the conversation gets genuinely complicated — and where Schneier and Sanders make their most important point.

Not all AI-assisted writing is fraud. Consider:

  • A non-English-speaking researcher using AI to write a paper in English was previously at a massive disadvantage. Well-funded researchers could hire human editors; everyone else struggled. AI levels that playing field.
  • A job seeker using AI to polish a resume or write a better cover letter is doing exactly what wealthy applicants have always done — hiring professional help. AI just makes that help universally accessible.
  • A citizen using AI to articulate their views to a legislator is exercising the same capability that lobbyists and the wealthy have always had — professional writing assistance.

The key distinction isn’t whether AI was used — it’s whether AI enables fraud or democratizes access.

Using AI to polish your genuine thoughts into clear prose? That’s democratization. Using AI to generate hundreds of fake constituent letters for an astroturf campaign? That’s fraud. Using AI to help express your real work experience in a cover letter? Legitimate. Using AI to fabricate credentials and cheat on job interviews? Clearly over the line.

As Schneier and Sanders put it: “What differentiates the positive from the negative here is not any inherent aspect of the technology, it’s the power dynamic.”

The Uncomfortable Reality

There’s no putting this genie back in the bottle. Highly capable AI models are widely available and can run on a laptop. The technology exists, it’s accessible, and it’s only getting better.

This means every institution needs to adapt. Some key principles for navigating this landscape:

  1. Focus on fraud, not tool use. Policies that ban “AI-generated content” entirely are both unenforceable and counterproductive. Better to focus on whether the content is fraudulent or deceptive.
  2. Embrace transparency. Requiring disclosure of AI assistance (as many academic journals now do) is more realistic and more fair than trying to detect and ban it.
  3. Build better systems, not just better detectors. If your institution can be overwhelmed by volume alone, the problem is the system, not the AI. Courts, journals, and hiring processes all need structural adaptation.
  4. Protect the equalizing benefits. Any response to AI slop needs to be careful not to eliminate the genuine benefits AI provides to people who previously lacked access to professional writing assistance.

What Developers Should Watch

For those of us building AI tools and applications, this arms race has direct implications:

  • Watermarking and provenance technologies are becoming increasingly important. If your tools generate text, consider building in provenance signals.
  • Detection APIs are a growing market, but they’re fundamentally limited — expect false positives and an ongoing cat-and-mouse game.
  • Authentication and identity may become more important than content analysis. Proving who wrote something may matter more than proving how it was written.
  • Responsible AI design means thinking about how your tools might be used at scale for fraud, not just how individual users interact with them.
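The first and third bullets above can be illustrated with a minimal provenance sketch. This is not how any particular vendor or standard (such as C2PA, which uses public-key signatures over media) actually works; it is a toy example, assuming a hypothetical vendor-held `SIGNING_KEY`, showing the core idea that cryptographic attestation of *who produced* a text is more robust than statistical detection of *how* it was produced.

```python
import hashlib
import hmac

# Hypothetical signing key held by the tool vendor. In practice this
# would live in a secrets manager, and a public-key scheme would let
# third parties verify tags without holding the key.
SIGNING_KEY = b"example-provenance-key"

def sign_output(text: str) -> str:
    """Attach a provenance tag: an HMAC-SHA256 over the generated text."""
    return hmac.new(SIGNING_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_output(text: str, tag: str) -> bool:
    """Check a claimed tag against the text, using a constant-time compare."""
    return hmac.compare_digest(sign_output(text), tag)

generated = "This paragraph was produced by an AI writing assistant."
tag = sign_output(generated)
print(verify_output(generated, tag))              # True
print(verify_output(generated + " edited", tag))  # False
```

Note the asymmetry with detection APIs: a signature verifies or it doesn’t, with no false-positive rate to argue about, but it only proves provenance for text whose producer chose to sign it. Unsigned text proves nothing, which is why provenance complements rather than replaces the institutional adaptations above.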

The Bottom Line

The AI text arms race isn’t a problem that gets “solved.” It’s a new permanent feature of the information landscape. Institutions that adapt — by focusing on fraud rather than tool use, by embracing transparency, and by redesigning systems for a world of abundant generated text — will come out stronger. Those that try to simply detect and ban AI content are fighting a losing battle.

As Schneier and Sanders conclude: “There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance.”

The question isn’t whether people will use AI to write. They will. The question is whether we build systems that harness the democratizing potential while limiting the fraud. That’s the real challenge — and it’s one that requires thoughtful policy, not just better technology.


This article discusses themes from Bruce Schneier and Nathan E. Sanders’ essay “AI-Generated Text and the Detection Arms Race,” originally published in The Conversation.
