AI Is Breaking the Zeitgeist (and Why We Need AI to Survive It)
Generative AI is collapsing the friction costs that once limited how much content could be produced and pushed into public life. The result is an information environment that scales beyond human attention: more claims, more commentary, more persuasion attempts, and more social obligation to respond than any individual can evaluate. This post argues that the emergent problem is not only misinformation but overload, verification failure, and attention depletion. It also argues that AI assistants are becoming necessary as “attention governors” - but only if designed to preserve critical thinking rather than quietly replace it.
For most of modern history, the public conversation had a shape. Not a neat one, but a workable one. There were limited publishers, limited channels, and a shared sense of what mattered today.
That shape was already stressed by algorithmic feeds and the attention economy. Generative AI pushes it into a new regime: the cost of producing plausible content has fallen so sharply that volume can expand until the limiting reagent is human attention.
Herbert Simon described the invariant decades ago: when information becomes abundant, attention becomes scarce (Simon, 1971). AI doesn’t change the scarcity. It multiplies the abundance.
The new regime: infinite output meets finite attention
AI systems can generate a continuous stream of posts, replies, summaries, hot takes, pitches, and propaganda - and do so in endless variations. This matters because saturation is not just “more content.” It changes incentives and outcomes:
Low-effort production increases background noise.
Engagement-optimized platforms amplify what triggers reaction.
Verification gets more expensive relative to generation.
Evidence supports key parts of this. Visible engagement cues (likes/shares) can increase susceptibility to low-credibility information (Avram et al., 2020). In a world where attention is scarce, anything that “looks popular” becomes cognitively easier to accept, even when it shouldn’t.
Persuasion at industrial scale
The internet has always contained persuasion campaigns. AI changes their economics.
Research suggests that LLM-generated messages can influence attitudes on policy issues in controlled settings (Bai et al., 2025). Another line of work models how AI-enabled paraphrasing can scale repetition-based persuasion tactics: instead of spammy copy-paste, it becomes endless “fresh” variants that bypass fatigue and reactance (Dash et al., 2025). That is not magic mind control; it is cheaper, more scalable messaging in a world where attention is already strained.
Once persuasion scales, the environment rewards whoever can produce the most emotionally effective variants per unit time. The public conversation stops being a dialogue and starts behaving like a competitive optimization problem.
“AI slop” is a symptom, not a punchline
One signal that the ecosystem is shifting is cultural: Merriam-Webster’s 2025 Word of the Year was “slop,” defined in the specific sense of low-quality digital content produced in large volume by generative AI (Merriam-Webster, 2025). This is not merely about annoyance. Slop imposes a real cognitive tax: it consumes attention, degrades search and discovery, and increases baseline distrust.
Verification is losing the race
If generation capacity grows faster than verification capacity, rational people default to doubt.
Content provenance standards like C2PA exist to attach tamper-evident “content credentials” to media (C2PA, 2024). But practical reporting highlights a stubborn gap: provenance metadata may not be displayed consistently by major platforms, and even when present it can be stripped during reposting workflows (Koebler, 2024). In short, the verification layer exists but is not yet frictionless enough to counter a frictionless generation layer.
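To see why the mismatch is structural, here is a minimal Python sketch - a toy model, not real C2PA tooling; `MediaAsset`, `signature_is_valid`, and the `signed:` scheme are all invented for illustration - of how credential checks fail open when reposting strips metadata:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Provenance(Enum):
    VERIFIED = "verified"   # intact, tamper-evident credentials
    TAMPERED = "tampered"   # credentials present but signature check fails
    UNKNOWN = "unknown"     # no credentials attached at all

@dataclass
class MediaAsset:
    content: bytes
    credentials: Optional[bytes]  # C2PA-style manifest; often stripped on repost

def signature_is_valid(asset: MediaAsset) -> bool:
    # Toy stand-in for real cryptographic validation against a trust list.
    return asset.credentials == b"signed:" + asset.content

def check_provenance(asset: MediaAsset) -> Provenance:
    # The asymmetry: absent credentials prove nothing, because repost
    # pipelines routinely strip metadata. Most circulating media lands
    # in UNKNOWN, not in TAMPERED.
    if asset.credentials is None:
        return Provenance.UNKNOWN
    if not signature_is_valid(asset):
        return Provenance.TAMPERED
    return Provenance.VERIFIED

original = MediaAsset(b"photo", b"signed:photo")
reposted = MediaAsset(b"photo", None)  # metadata stripped in transit
print(check_provenance(original).value)  # verified
print(check_provenance(reposted).value)  # unknown - not "fake", just unprovable
```

The design point: absence of credentials can never prove fakery, so as long as stripping is common, "unknown" stays the dominant verdict and doubt stays rational.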
That mismatch - cheap content, expensive verification - accelerates fragmentation of shared reality.
The quieter failure mode: obligation overload
There is another form of overload that is less political and more personal: the obligation to keep up, and to respond.
Work-email research links high email load to disrupted work and reduced well-being, and analyzes how specific email classes and stressors drive overload (Kern et al., 2024). Social media research describes “social overload,” where perceived demand to provide support and respond becomes exhausting and drives withdrawal (Maier et al., 2015). Generative AI intensifies both sides: it increases capacity to send and respond, which can inflate expectations and inbound volume.
That creates a ratchet. More output becomes possible, so more output becomes expected.
The adaptive response: AI as an attention governor
In an environment whose throughput has outgrown human capacity, the adaptation has to be prosthetic: not "try harder," but "insert a governor."
AI assistants can, in principle, do three critical jobs:
Triage (what matters now, what can wait, what can be ignored),
Compression (summary and synthesis with sources),
Obligation management (track commitments and prevent the response-ratchet).
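To make "governor" concrete, here is a minimal sketch of the triage job in Python. Everything in it is assumed for illustration - the fields, the categories, and the deliberately boring policy - but it shows the essential move: a hard, inspectable cap on what reaches real-time attention:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    NOW = "respond now"
    LATER = "batch for a daily review"
    IGNORE = "drop silently"

@dataclass
class InboundItem:
    sender_known: bool      # a person you have a real relationship with?
    asks_for_action: bool   # does it create an obligation if read?
    time_sensitive: bool

def triage(item: InboundItem) -> Verdict:
    # A deliberately simple policy: the point of a governor is not
    # cleverness but an explicit rule the user can audit and override.
    if item.sender_known and item.time_sensitive:
        return Verdict.NOW
    if item.sender_known or item.asks_for_action:
        return Verdict.LATER
    return Verdict.IGNORE

inbox = [
    InboundItem(sender_known=True,  asks_for_action=True,  time_sensitive=True),
    InboundItem(sender_known=False, asks_for_action=True,  time_sensitive=False),
    InboundItem(sender_known=False, asks_for_action=False, time_sensitive=False),
]
for item in inbox:
    print(triage(item).value)
```

The interesting design questions - who counts as known, what counts as urgent - are exactly the ones a real assistant would have to get right, and exactly the ones a user should be able to inspect.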
But there is a serious caveat: offloading can weaken critical thinking if the assistant becomes a substitute for judgment rather than a scaffold for it. Survey work on knowledge workers finds that higher confidence in generative AI correlates with reduced critical-thinking effort, shifting effort toward lightweight verification and oversight (Lee et al., 2025). Related work warns that AI use can mediate cognitive offloading and reductions in critical thinking (Gerlich, 2025). These findings argue for deliberately designing assistants to keep humans in the loop: citations, uncertainty, adversarial counterpoints, and explicit “verify mode” for high-stakes claims.
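One way to build that scaffolding is to make it a contract in the interface: an answer object that cannot be rendered without citations, stated confidence, and a counterpoint. The sketch below is an assumption-laden illustration (invented field names, an arbitrary 0.8 confidence threshold), not a description of any shipping assistant:

```python
from dataclasses import dataclass

@dataclass
class AssistantAnswer:
    claim: str
    citations: list[str]   # sources the user can check directly
    confidence: float      # the model's uncertainty, surfaced rather than hidden
    counterpoint: str      # strongest adversarial reading of the claim
    high_stakes: bool      # health, money, legal, reputational...

def render(answer: AssistantAnswer) -> str:
    # Scaffolding, not substitution: high-stakes or low-confidence claims
    # come back in "verify mode," which refuses to present a bare answer.
    if answer.high_stakes or answer.confidence < 0.8:
        sources = "\n".join(f"  - {c}" for c in answer.citations)
        return (
            f"[VERIFY BEFORE ACTING] {answer.claim}\n"
            f"Confidence: {answer.confidence:.0%}\n"
            f"Counterpoint: {answer.counterpoint}\n"
            f"Check these sources yourself:\n{sources}"
        )
    return f"{answer.claim} (sources: {', '.join(answer.citations)})"
```

The hard gate is the point: "verify mode" is not an optional courtesy, because for high-stakes claims the interface never offers a bare answer the user could passively accept.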
What “broken zeitgeist” really means
"Breaking" is a metaphor, but the underlying claim is concrete:
Attention is finite (Simon, 1971).
Content production is now near-infinite.
Engagement systems reward what spreads (Avram et al., 2020).
Persuasion can be scaled and varied cheaply (Bai et al., 2025; Dash et al., 2025).
Provenance and verification are lagging (C2PA, 2024; Koebler, 2024).
Overload dynamics push people to withdrawal (Kern et al., 2024; Maier et al., 2015).
In that environment, the old shared sense of “what’s going on” becomes harder to sustain. The repair is not more shouting. It is building personal and collective filtering that restores human-scale intake and preserves judgment.
AI assistants are becoming necessary - not as firehoses, but as governors.
References
Avram, M., Micallef, N., Patil, S., & Menczer, F. (2020). Exposure to social engagement metrics increases vulnerability to misinformation. Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/exposure-to-social-engagement-metrics-increases-vulnerability-to-misinformation/
Bai, H., et al. (2025). LLM-generated messages can persuade humans on policy issues. Nature Communications. https://www.nature.com/articles/s41467-025-61345-5
Coalition for Content Provenance and Authenticity (C2PA). (2024). C2PA Technical Specification (v2.3). https://spec.c2pa.org/specifications/specifications/2.3/specs/C2PA_Specification.html
Dash, S., et al. (2025). The persuasive potential of AI-paraphrased information at scale. PNAS Nexus. https://pmc.ncbi.nlm.nih.gov/articles/PMC12281505/
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
Kern, M., Ohly, S., Duranova, L., & Friedrichs, J. (2024). Drowning in emails: Investigating email classes and work stressors as antecedents of high email load and implications for well-being. Frontiers in Psychology, 15. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1439070/full
Koebler, J. (2024, August 21). This system can sort real pictures from AI fakes - why aren’t platforms using it? The Verge. https://www.theverge.com/2024/8/21/24223932/c2pa-standard-verify-ai-generated-images-content-credentials
Lee, H. P. H., et al. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. Microsoft Research (CHI 2025). https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
Maier, C., Laumer, S., Eckhardt, A., & Weitzel, T. (2015). Giving too much social support: Social overload on social networking sites. European Journal of Information Systems, 24(5), 447-464. https://www.tandfonline.com/doi/full/10.1057/ejis.2014.3
Merriam-Webster. (2025, December 14). 2025 Word of the Year: Slop. https://www.merriam-webster.com/wordplay/word-of-the-year
Simon, H. A. (1971). Designing organizations for an information-rich world. In M. Greenberger (Ed.), Computers, communications, and the public interest (pp. 37-72). Johns Hopkins Press. https://gwern.net/doc/design/1971-simon.pdf