AI & Society

This page is a survey of what AI is doing to people — measured effects, credible forecasts, and active policy fights. It avoids taking sides on "AI good" vs "AI bad" and instead tries to stay close to sourced claims, including the ones that contradict each other.

Type: sourced survey · Read time: ~25 min · Interactive figures: 1 · Citations: ~40

1. How to read this page

The honest version of "how will AI affect society?" is: nobody knows yet, but we can be specific about what we do know, which forecasts are taken seriously, and which claims rest on handwaving. The sections below are ordered by how much settled evidence exists, from the most measured (labor-market data) to the most speculative (existential risk).

A reading heuristic: the further down the page you go, the more the claims rest on someone's model of the future rather than on something already measured. Weight accordingly.

2. Labor & the economy — what the data says

Three questions drive the labor debate: Who gets automated? Who gets augmented? How much aggregate productivity comes out? The answers, in 2026:

Who gets automated

The most-cited paper is Eloundou, Manning, Mishkin & Rock (2023) — GPTs are GPTs. The authors used GPT-4 plus human annotators to score whether an LLM plus "reasonable software tooling" could cut the time for each of 19,000+ specific tasks in the O*NET occupation database by ≥50%.

Headline numbers from Eloundou et al. (2023)

80% of US workers have at least 10% of their tasks exposed.
19% of workers have at least 50% of their tasks exposed.
Highest-exposure occupations: interpreters, tax preparers, writers, web designers, accountants, mathematicians, blockchain engineers.
Lowest-exposure: short-order cooks, dishwashers, masons, pile-driver operators.

"Exposure" ≠ "replacement." Many exposed tasks still need a human in the loop; many jobs aren't fully characterized by their task list. But the direction is unambiguous: the first jobs to feel pressure are information-processing and writing-heavy, which is the opposite of the 20th-century industrial-automation pattern (manual labor first).

Who gets augmented

A handful of real-world deployment studies, in customer support, professional writing, and software development, form the empirical backbone here. Each finds that AI assistance helps less-experienced workers more than experienced ones, compressing the skill distribution.

Aggregate productivity — the contested part

This is where respected economists disagree strongly.

The range "0.06% to 1.5% annual productivity boost" is huge: one of those numbers is barely noticeable and the other is world-historical. That uncertainty is real, not a failure of analysis. Which number is right depends on things (deployment pace, organizational adoption, complementary investment) that haven't happened yet.
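To make the width of that range concrete, compound each endpoint over a decade (a back-of-the-envelope illustration only, holding everything else fixed):

$$ 1.0006^{10} \approx 1.006 \qquad \text{versus} \qquad 1.015^{10} \approx 1.16 $$

That is roughly +0.6% versus +16% cumulative output per worker after ten years, the difference between a rounding error and a visible change in living standards.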

3. Interactive: task-exposure simulator

A toy model. Imagine an economy of $N$ occupations, each made of $M$ tasks. A fraction $f$ of all tasks are "AI-exposed" (high enough automation probability that a rational employer will eventually replace that task with a model call). Adjust the fraction below and see how it distributes across occupations, assuming occupations that are more information-heavy have higher exposure.

[Interactive figure: bars show the fraction of task-time affected by AI for 12 representative occupations, with a slider for the exposure rate f. The default f = 0.19 reproduces the Eloundou et al. "≥50% of tasks exposed" headline.]

This is an illustration, not a forecast. Real exposure is correlated with occupation structure in complicated ways — the point of the demo is only to show that the same aggregate number can describe wildly different worlds depending on how concentrated exposure is.
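For readers without the widget, a minimal sketch of the same toy model follows. The occupation names, the linear information-intensity weights, and the proportional-allocation rule are all assumptions made for illustration; nothing here comes from O*NET or from the page's actual widget code.

```python
# Toy exposure model: spread an economy-wide exposed-task fraction f across
# occupations in proportion to an assumed "information-intensity" weight.
# All names and weights below are illustrative, not O*NET data.
import numpy as np

def occupation_exposure(f: float, weights: np.ndarray) -> np.ndarray:
    """Per-occupation exposed fraction, proportional to weight, capped at 100%.
    (If the cap binds, the unweighted mean falls slightly below f.)"""
    return np.clip(f * weights / weights.mean(), 0.0, 1.0)

# Twelve illustrative occupations, ordered roughly from manual to information-heavy.
names = ["dishwasher", "mason", "short-order cook", "electrician", "nurse",
         "teacher", "paralegal", "accountant", "web designer", "writer",
         "tax preparer", "interpreter"]
weights = np.linspace(0.2, 2.0, len(names))   # assumed information-intensity

for name, exposure in zip(names, occupation_exposure(0.19, weights)):
    print(f"{name:>16s}: {exposure:6.1%} of task-time exposed")
```

Swapping the linear weights for something more skewed (np.geomspace(0.05, 3.0, 12), say) keeps the same aggregate f but concentrates it in a few occupations, which is exactly the "same number, different worlds" point the caveat above is making.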

4. Governance & regulation — what actually exists

There is now a real, messy patchwork of AI regulation. A summary of the ones you should know about:

The EU AI Act

Published in the Official Journal 12 July 2024, entered into force 1 August 2024. The world's first comprehensive horizontal AI law. Structure: a risk-based tier system (prohibited practices, high-risk systems subject to conformity assessment, limited-risk systems with transparency duties, and minimal-risk systems left largely alone), plus a separate chapter of obligations for general-purpose AI models, with the requirements phasing in between February 2025 and 2027.

United States

No federal horizontal AI law. The regulatory position shifted sharply at the start of 2025, when the October 2023 executive order on AI was revoked and replaced by a deregulatory order framed around American leadership in AI; binding rules now come mostly from state legislatures (Colorado's 2024 AI Act, a cluster of California statutes) and from existing sectoral regulators.

United Kingdom

Light-touch, principles-based approach, coordinated through existing regulators plus the AI Safety Institute (founded 2023, renamed AI Security Institute 2025). AISI runs pre-deployment safety evaluations on frontier models under voluntary agreements with the labs. The Bletchley Declaration (Nov 2023) and its successors — Seoul (May 2024) and Paris Summit (Feb 2025) — are the main international coordination venues. Paris pivoted hard from "safety" to "opportunity" framing; some read this as a sign the coordination regime is weakening.

China

Two binding instruments worth knowing. Algorithm recommendation regulation (March 2022) — registration requirements, opt-out provisions for recommender systems. Generative AI interim measures (Aug 2023) — service providers must file security assessments, label AI-generated content, and ensure outputs reflect "core socialist values." Model registration is now routine for any provider deploying generative AI to the Chinese public.

International

G7 Hiroshima AI Process, OECD AI Principles, UN advisory body on AI, Council of Europe AI Convention (first binding international treaty, signed Sep 2024). None of these have strong enforcement; all of them set up soft-law baselines.

5. Information & epistemics

The three-part claim: AI is (1) making content creation near-free, (2) restructuring how people find information, and (3) making authenticity verification harder. Evidence on each:

Content creation costs collapsed

This is the closest thing to settled fact on this page. A 2023 paper by Hanley & Durumeric found synthetic news articles on misinformation sites rose sharply through 2022–23. NewsGuard's ongoing tracker identifies hundreds of AI-generated news sites that launched since 2023. Europol's 2024 assessment (IOCTA) treats generative AI as now-standard tooling for fraud.

Answer engines are eating search

ChatGPT Search (which absorbed the earlier SearchGPT prototype), Perplexity, You.com, and Google AI Overviews have moved meaningfully into territory previously owned by traditional web search. The measurable downstream effect is on publisher traffic. A 2024 Sistrix study of 8,000 German websites found noticeable referral declines from Google for queries where AI Overviews appeared. Multiple publishers (New York Times, News Corp, Reuters, Axel Springer) filed lawsuits or signed licensing deals between 2023 and 2025, depending on which side of the fight they wanted to be on. Net effect on the open-web economic model: unsettled and probably negative.

Did AI break the 2024 elections?

The most-asked question at the start of 2024. The honest answer is: no, not in any measurable way — at least not in the developed democracies with adequate observation. The Alan Turing Institute's 2024 report found "no evidence of a large-scale effect of AI-generated disinformation on election outcomes" across ~112 elections they surveyed. That is not a reassuring finding about 2028 — it is a statement that the 2024 worst-case didn't happen. Generative-AI capability outpaced detection capability the entire time; the lack of disruption is more puzzle than prophecy.

6. Science & discovery — the concrete wins

This is the section where the extrapolation is most grounded. There are already concrete scientific wins attributable to AI methods, not just promises:

AlphaFold and AlphaFold 3 (Jumper et al. 2021; Abramson et al. 2024): protein structure prediction, then prediction of biomolecular interactions, accurate enough to change everyday structural-biology practice.
GraphCast (Lam et al. 2023): medium-range global weather forecasts that beat the leading physics-based system on most verification targets.
FunSearch (Romera-Paredes et al. 2024): LLM-guided program search that produced new constructions for the cap set problem and improved bin-packing heuristics.
GNoME (Merchant et al. 2023): a deep-learning pipeline that proposed about 2.2 million candidate crystal structures, several hundred thousand of them predicted stable.

If you care about extrapolation: the thing to watch is whether AI-assisted science moves from "making existing pipelines faster" to "producing results that would not have been produced otherwise." AlphaFold is unambiguously the second; FunSearch is too. The others are mostly the first: still valuable, but not yet transformative in the strict sense.

7. Existential risk — the most-contested corner

The "x-risk" question — could sufficiently capable AI cause outcomes as bad as human extinction or permanent civilizational collapse? — is where the evidence thins out and prior-laden reasoning takes over. A fair summary of the landscape:

The concern, stated minimally

  1. We don't yet have techniques to specify human values precisely enough to write them into an optimization target.
  2. Any sufficiently capable optimizer pursuing a misspecified target will, as an instrumental matter, acquire resources and resist shutdown (Omohundro 2008, "basic AI drives").
  3. If that optimizer is much smarter than the humans trying to correct it, correction becomes hard, possibly impossible.

This argument is due to Bostrom (Superintelligence, 2014), with antecedents in Yudkowsky. It is a theoretical case; no concrete mechanism has been demonstrated in a deployed system. Proponents say that's the point (you only find out you were wrong once). Skeptics say either that the premises are wrong, or that even if they are right they don't actually compose into the conclusion.

What the community actually says

Some concrete data points, in chronological order:

  • May 2023: the CAIS Statement on AI Risk, a single sentence ("Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"), signed by the heads of OpenAI, Google DeepMind, and Anthropic along with Hinton and Bengio.
  • January 2024: Grace et al.'s survey of 2,778 published AI researchers, in which the median respondent put around 5% probability on extremely bad outcomes such as human extinction, with wide disagreement in both directions.
  • January 2025: the International AI Safety Report, chaired by Yoshua Bengio and written by a large international expert panel, which surveys the disagreement on loss-of-control risk rather than taking a side.

A fair summary of where the field is

Something like 10–30% of frontier-lab researchers take x-risk seriously as a non-negligible possibility. Something like 20–40% consider it a distraction from concrete harms. Something like 30–50% think it's worth taking seriously but don't spend their time on it. No one reasonable believes it is zero or one. If you want a single number: the median expert P(existential catastrophe from AI) is somewhere in the low-single-digit percent, with enormous disagreement.

8. Bio, cyber, and misuse — the near-term risks

Separate from the "superintelligent AI" story is the much more concrete concern: AI makes existing dangerous capabilities more accessible to smaller groups. The three live ones:

Biological uplift

Will an LLM help an amateur synthesize a dangerous pathogen? RAND's 2024 red-team study found that LLMs did not (yet) give meaningful uplift to people attempting biological attack planning, beyond what the open internet provides. OpenAI's own evaluation reached similar conclusions: small uplift, not transformative. Anthropic's Responsible Scaling Policy explicitly names "AI systems that meaningfully uplift novice actors in creating bioweapons" as a threshold that triggers additional safeguards. The Urbina et al. (2022) Nature Machine Intelligence study, in which a drug-discovery model was redirected to produce novel toxin candidates, is the stark demonstration that the machinery exists, even if the current LLM UX doesn't unlock it.

Cybersecurity

This one is more definite. LLMs accelerate both sides of the offense-defense equation. On offense: automated vulnerability scanning, phishing customization, reverse engineering, code analysis. On defense: the same, plus faster triage and patching. The DARPA AI Cyber Challenge (2024–25), in which teams compete to build autonomous systems that find and patch software bugs, is the canonical public benchmark. The net effect is probably a slight defender advantage at the margin, because defenders start with more context, but this is not settled.

Concentration of power

The least-studied but most-measurable risk. Training a state-of-the-art frontier model now costs on the order of $100M or more; only a small number of organizations can do it. The worry is not that any one of them will do something bad; it is that a handful of entities controlling the most capable systems is, structurally, a poor fit for democratic societies. Both ends of the political spectrum have flagged this. Open-weight model releases (Llama 3/4, Mistral, DeepSeek, Qwen) are the main structural mitigation and are controversial for exactly the same reason.

9. Sourced reading list

Labor & economy

  • Eloundou, Manning, Mishkin & Rock (2023) — GPTs are GPTs: An early look at the labor market impact potential of large language models.

Governance

  • EU AI Act, Regulation (EU) 2024/1689. Published OJ L, 12 July 2024.
  • OECD AI Principles (updated 2024).
  • Bletchley Declaration (Nov 2023); Seoul Ministerial Statement (May 2024); Paris AI Action Summit declaration (Feb 2025).
  • Council of Europe Framework Convention on AI (Sep 2024).
  • NIST AI Risk Management Framework (2023).

Epistemics & elections

  • Alan Turing Institute (2024) — AI-enabled influence operations: Threat analysis of the 2024 UK and European elections.
  • NewsGuard AI Tracking Center (ongoing).
  • Europol IOCTA 2024 — Internet Organised Crime Threat Assessment.

Science

  • Jumper et al. (2021) — Highly accurate protein structure prediction with AlphaFold. Nature.
  • Abramson, Adler, Dunger et al. (2024) — Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature.
  • Lam et al. (2023) — GraphCast: Learning skillful medium-range global weather forecasting. Science.
  • Romera-Paredes et al. (2024) — Mathematical discoveries from program search with large language models. Nature.
  • Merchant, Batzner, Schoenholz et al. (2023) — Scaling deep learning for materials discovery. Nature. (GNoME.)

Existential risk & misuse

  • Bostrom (2014) — Superintelligence. OUP.
  • Russell (2019) — Human Compatible.
  • Bengio et al. (2025) — International AI Safety Report 2025. UK government.
  • Grace, Stewart, Sandkühler et al. (2024) — Thousands of AI authors on the future of AI. AI Impacts.
  • CAIS (2023) — Statement on AI Risk.
  • RAND (2024) — The Operational Risks of AI in Large-Scale Biological Attacks.
  • Urbina et al. (2022) — Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence.
  • Omohundro (2008) — The Basic AI Drives. AGI-08 proceedings.

You've now read all three speculative deep dives — the goal (AGI), the math (Singularity), and the impact (Society). Head back to the main landscape to see how these fit into the broader AI story.