Somewhere in a quarterly headcount report, a line item is shrinking and nobody has quite noticed. Software developers between the ages of 22 and 25. Fewer than there used to be. Nearly 20 per cent fewer than there were in late 2022, at the same companies, doing the same work. Their older colleagues at those same companies grew headcount by 6 to 12 per cent over the same period.

That is chart one of the 2026 Stanford HAI Artificial Intelligence Index, released this month. The report runs to several hundred pages. It tracks technical performance, investment, compute, adoption, public sentiment, and safety. It is the closest thing the AI field has to an annual general meeting.

This is the first edition where the workforce numbers are not a forecast. They are a measurement. Five charts in the report should be on the desk of every workforce planner in the country. Not because they are alarming, although they are. Because each one is a fact that invalidates at least one assumption inside most workforce plans currently being written.

1. The age cliff

The primary source is a working paper from Stanford's Digital Economy Lab called, with admirable economy, Canaries in the Coal Mine? Six Facts about Recent Generative AI Exposure in the Labour Market. Authors Brynjolfsson, Chandar, and Chen track employment by age and occupation from US payroll data, comparing the period before ChatGPT's late-2022 launch to early 2026.

~20%
decline in employment for US software developers aged 22-25 since late 2022

The effect is not a general hiring freeze. At the same firms, developers aged 30 and above grew employment by 6 to 12 per cent. The pattern repeats in customer service, accounting, and marketing. Call centre hiring for young workers fell by 15 per cent. It does not appear in low-AI-exposure occupations: health aides, production supervisors, and manual labourers saw steady or growing employment across all age groups.

The interpretation the data forces is narrow and specific. AI has not replaced software developers. It has replaced the work that junior software developers were hired to do.

For workforce planners this is the chart to print and pin above the desk. It says your entry-level pipeline is where the disruption is arriving first, in the occupations you have been describing as "AI-augmented" for three years. The cohort that was supposed to be most adaptable is the cohort being adapted around.

2. The capability saturation

GPQA Diamond is 198 graduate-level physics, chemistry, and biology questions written by PhD-level domain experts. It was introduced in late 2023 as a benchmark frontier models could not approach. The questions are designed to be unsolvable through web search. PhD experts in the relevant field average 65 per cent.

In April 2026, the top model on GPQA Diamond scores 94.1 per cent.

Every benchmark built to be "the one models cannot solve" has been cleared inside 18 months.

It is the third frontier benchmark in two years to be effectively saturated. MMLU got cleared. MMLU-Pro was introduced specifically as a harder replacement. Gemini 3 Pro now sits at around 90 per cent on that one too. The benchmarks are not being gamed. They are being passed.

The implication for workforce planning is not about whether your organisation should buy more AI licences. It is about which jobs still have a defensible skill moat at the graduate-level-expert layer, and which have quietly lost theirs. If the benchmark of "what a competent domain expert knows" can be cleared by something that fits in a browser tab, then the planning question is no longer "when will AI be capable enough?" It is "which roles did we assume were safe because they required PhD-level cognition, and why did we assume that?"

3. The compute doubling

Moore's Law, as commonly understood, describes transistor density doubling every 18 to 24 months. This is the reference most executives default to when they reason about how fast compute grows. It is how most long-range capability plans are calibrated.

The 2026 Index notes, with a straight face, that the compute used to train frontier AI models doubles every five months. Datasets double every eight months. Power consumption doubles annually. In 2025, AI data-centre capacity in the United States reached 29.6 gigawatts, which is roughly what the entire state of New York draws at peak demand.

5 months
frontier-model training compute doubling time (Stanford AI Index 2026)

Moore's Law is off by roughly a factor of four in the wrong direction.

This matters for workforce planning because almost every capability forecast a large organisation produces is implicitly calibrated to Moore's Law timelines. Five-year plans assume a technology trajectory that takes five years. A doubling every five months compresses a five-year outlook into something closer to a twelve-month forecast with wide confidence intervals at the end.
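The compression is easy to verify with back-of-the-envelope arithmetic. The sketch below is illustrative only; it assumes clean exponential growth at the two doubling times quoted above (24 months for a Moore's Law-style baseline, 5 months for frontier training compute) and ignores everything that would bend the curve in practice:

```python
# Compare growth over a five-year (60-month) planning horizon
# under two different doubling times.

def growth_factor(horizon_months: float, doubling_months: float) -> float:
    """Multiplicative growth after `horizon_months` of doubling
    every `doubling_months`: 2 ** (horizon / doubling time)."""
    return 2 ** (horizon_months / doubling_months)

moore = growth_factor(60, 24)     # doubling every 24 months
frontier = growth_factor(60, 5)   # doubling every 5 months

print(f"24-month doubling over 5 years: {moore:.1f}x")     # ~5.7x
print(f"5-month doubling over 5 years:  {frontier:.0f}x")  # 4096x
```

Twelve doublings instead of two and a half: a five-year plan written against the Moore's Law baseline is not mildly optimistic about stability, it is off by three orders of magnitude.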

The practical test: take any workforce roadmap your organisation has published with an AI-capability assumption in it. Check the date. If it is more than six months old, the assumption is already an artefact.

4. The adoption curve

Generative AI reached 53 per cent US population adoption in three years. The personal computer took roughly 15 years to reach the same level. The commercial internet took eight.

Organisational adoption follows the same curve: 55 per cent in 2023, 78 per cent in 2024, 88 per cent in 2025. Consumer value is compounding in parallel. Stanford's economists estimate US consumers drew $172 billion in annual value from generative AI tools by early 2026. The median value per user tripled between 2025 and 2026.

There is no prior general-purpose technology with an adoption curve this steep. Electricity took forty years to reach a comparable level of workplace penetration. Telephony took thirty. The PC took fifteen. Generative AI is on course to do it in five.

The workforce-planning consequence of this is structural. Diffusion-of-innovations models assume an S-curve long enough for organisations to retrain, hire, and restructure across its slope. A three-year S-curve does not leave time for any of those things. It leaves time to react.

Every internal capability plan built around the assumption that "it will take a few years for the workforce to adjust" is calibrated to a timeline the technology has already refused to honour.

5. The 50-point gap

The Index surveys public opinion on AI annually. Experts and members of the public answer the same question set. For seven years the gap between the two has been growing. In 2026 it hit a new record.

50 points
gap between AI experts (73%) and the general public (23%) on whether AI will have a positive impact on how people do their jobs

The public is more pessimistic than experts on job impact, on personal safety, and on economic outcomes. Seventy-three per cent of AI experts surveyed believe AI will have a positive impact on how people do their jobs. Twenty-three per cent of the general public agrees. It is the largest expert-public divergence the Index has recorded on a workforce question.

The standard response inside corporate change-management circles is to treat this as a communication problem. If the experts are right and the public is wrong, the argument goes, the remedy is clearer messaging, more town halls, more carefully framed transition plans.

This reads the chart backwards. The public is not failing to receive the message. The public has received a different message, directly, through three years of news coverage that includes the charts preceding this one. The age cliff is on their LinkedIn feeds. The capability saturation is in the product updates. The adoption curve is in their own pockets. The 50-point gap is not a failure of comms. It is a verdict.

The operational implication for a planner: every change-management strategy that depends on employee trust in expert guidance is running into a 50-point headwind. Your workforce has already made up its mind about what AI will do to their job. They did not make it up the same way the executive team did.

What the five charts say together

Taken individually, each chart is a data point. Taken together, they describe a coherent shape.

A technology that has cleared graduate-expert benchmarks is diffusing through the population faster than any prior general-purpose technology, backed by compute growth four times faster than the benchmark most plans are calibrated to, already measurably displacing the entry-level cohort in the occupations most exposed to it, and watched by a workforce that does not believe the expert consensus about what happens next.

No single chart requires a radical response. Any three of them, taken together, require the workforce plan to be rewritten.

The canaries in the coal mine were the 22-year-old developers. The 2026 Index is the report that says the canaries have stopped singing. The question for every workforce planner reading this in mid-2026 is not whether to respond. It is whether the response will be retrospective or anticipatory.

One of those options is already foreclosed for at least some of the cohorts affected. The other is on a countdown.

Sources

  1. Stanford HAI (2026). The 2026 AI Index Report. hai.stanford.edu/ai-index/2026-ai-index-report
  2. Stanford HAI (April 2026). Inside the AI Index: 12 Takeaways from the 2026 Report. hai.stanford.edu/news/inside-the-ai-index-12-takeaways-from-the-2026-report
  3. Brynjolfsson, E., Chandar, B., and Chen, R. (2025). Canaries in the Coal Mine? Six Facts about Recent Generative AI Exposure in the Labour Market. Stanford Digital Economy Lab.
  4. Federal Reserve Bank of Dallas (January 2026). Young workers' employment drops in occupations with high AI exposure. dallasfed.org/research/economics/2026/0106
  5. MIT Technology Review (April 2026). Want to understand the current state of AI? Check out these charts.
  6. IEEE Spectrum (April 2026). Stanford's AI Index for 2026 Shows the State of AI.
  7. Lightcast and Stanford HAI (2026). Annual AI Index 2026 co-publication.
  8. Bick, A., Blandin, A., and Deming, D. (2024). The Rapid Adoption of Generative AI. NBER Working Paper 32966.