What AI Does to Human Thinking: Cognitive Sovereignty, the Median Pull, and Why It Matters for Product Teams

Helen and Dave Edwards have spent a decade studying how AI changes cognition. Their findings should reshape how product leaders measure AI adoption.

Brittany Hobbs · 8 min read

Photo: Generated via Flux 1.1 Pro
Overview
  • Cognitive sovereignty is the ability to remain the author of your own thinking when using AI — a concept introduced by Helen Edwards of the Artificiality Institute.
  • The median pull, documented in a Nature study, shows that AI-using scientists published 26% more papers while their work became measurably less diverse.
  • Three capacities determine whether AI strengthens or erodes human thinking: awareness, agency, and accountability.
  • Product teams measuring AI adoption by usage volume are missing the metric that matters — whether AI is making their people better thinkers or just faster typists.

What does AI do to human thinking?

Most conversations about AI's impact focus on what AI can do for us — tasks completed, time saved, outputs generated. Helen Edwards and Dave Edwards, co-founders of the Artificiality Institute, have spent a decade asking a different question: what does AI do to us?

Their research, discussed at length on Episode 5 of the Product Impact Podcast, examines how AI changes human cognition, identity, and meaning-making — not in theory, but in measurable ways. The findings challenge the assumption that more AI usage equals more value.

"We study what moderates these different outcomes. What is it about AI design or AI use, or the particular mindset that you bring to AI, that drives you in one direction or the other?"

Helen Edwards, Product Impact Podcast S02E05

The overarching question: does AI make us smarter, or does it make us faster while quietly eroding the capacities that matter most?

What is cognitive sovereignty?

Cognitive sovereignty is the ability to remain the author of your own thinking when using AI. The term, introduced by Helen Edwards on the Product Impact Podcast, draws a parallel to the EU human rights framework's "right to a future tense" — the principle that your future remains open and your agency is preserved.

Applied to AI, cognitive sovereignty asks: is the technology making you more capable of independent thought, or is it quietly outsourcing that capacity?

Edwards's framework identifies three capacities that AI either strengthens or erodes:

Awareness. Do people understand how AI is changing their thinking patterns? Most users cannot identify how their reasoning shifts when AI mediates it. They know they're faster. They don't notice they're narrower.

Agency. Are people still making deliberate choices, or defaulting to AI outputs? The difference between "I reviewed the AI's suggestion and chose to use it" and "I pasted the AI's output without reading it" is the difference between augmented thinking and outsourced thinking.

Accountability. Can people stand behind their work, or has authorship become ambiguous? When a team produces AI-assisted analysis, the question "who actually thought this through?" becomes harder to answer — and the answer matters for trust, credibility, and professional identity.

If any of these three capacities is weakening, AI is reducing cognitive sovereignty even when adoption dashboards look healthy.

What is the median pull?

The median pull is the observable effect where AI-using groups become more productive at the cost of becoming less distinctive from each other.

The evidence comes from a paper by James Evans at the University of Chicago, published in Nature. The study examined scientists using AI in their research workflows:

  • AI-using scientists published 26% more papers, and their citation rates increased
  • But their work began to converge — methodological choices narrowed, research questions contracted, writing homogenized
  • The measurable diversity of their thinking dropped even as their measurable productivity rose

Edwards explains why this happens: AI tools are trained on the aggregate of existing human knowledge. When you use AI to assist your thinking, the AI pulls your output toward the statistical center of what everyone else has already thought. You become more productive and more average simultaneously.
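
The mechanism can be made concrete with a toy model (my own illustration, not from the podcast or the Nature paper): treat each researcher's position as a number, and AI assistance as a blend of that position toward the group mean. The group's average is untouched, but its spread shrinks with every assisted pass.

```python
import statistics

def ai_assist(outputs, pull=0.5):
    """Blend each individual position toward the group mean --
    a stand-in for a model trained on the aggregate pulling
    every user toward the statistical center."""
    center = statistics.mean(outputs)
    return [o + pull * (center - o) for o in outputs]

# Four researchers holding genuinely different positions.
ideas = [1.0, 3.0, 8.0, 12.0]
assisted = ai_assist(ideas)

# The mean is unchanged, but the spread (diversity) halves.
assert statistics.mean(assisted) == statistics.mean(ideas)
assert statistics.pstdev(assisted) < statistics.pstdev(ideas)
```

Aggregate productivity metrics see only the mean, which never moves; the loss shows up only in the variance, which is exactly what adoption dashboards don't track.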

For product teams, the median pull means that AI adoption metrics (workflows augmented, tools deployed, prompt volume) are measuring the wrong thing. They measure throughput. They cannot detect the erosion of the distinctive thinking that makes a team's work different from its competitors'.

How should product teams measure cognitive sovereignty?

Edwards's research suggests two metrics that standard AI dashboards miss:

Override rate. When a team receives an AI-assisted work product, how often do they push back, reframe it, or reject it? A team captured by the median pull has a high use-verbatim rate and a low override rate. That pattern looks like efficiency. It is actually convergence.

Divergence. When multiple people analyze the same data, do they arrive at different conclusions or versions of the same conclusion? If the range of interpretations is narrowing over time, the median pull is operating — and no adoption metric will catch it.
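
Neither metric requires special tooling. As a minimal sketch (the function names, the word-overlap distance, and the sample data are my own illustrative choices, not Edwards's), both can be computed from a log of how AI-assisted outputs were handled and what conclusions analysts actually wrote:

```python
from itertools import combinations

def override_rate(decisions):
    """Fraction of AI-assisted outputs that were pushed back on,
    reframed, or rejected rather than used verbatim."""
    overridden = sum(1 for d in decisions if d != "use_verbatim")
    return overridden / len(decisions)

def divergence(interpretations):
    """Mean pairwise Jaccard distance between conclusions, treated
    as bags of words. 0.0 means everyone wrote the same thing;
    higher means a wider range of interpretations."""
    def distance(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return 1 - len(wa & wb) / len(wa | wb)
    pairs = list(combinations(interpretations, 2))
    return sum(distance(a, b) for a, b in pairs) / len(pairs)

# A review log: half the AI outputs went through untouched.
log = ["use_verbatim", "reframed", "use_verbatim", "rejected"]
assert override_rate(log) == 0.5

# Three analysts, same data, two points in time.
week_1 = ["churn is driven by onboarding friction",
          "pricing confusion explains most churn",
          "churn clusters in the enterprise segment"]
week_8 = ["churn is driven by onboarding friction",
          "churn is driven by onboarding friction and pricing",
          "onboarding friction is driving churn"]

# The range of interpretations is narrowing over time.
assert divergence(week_8) < divergence(week_1)
```

A real implementation would use embeddings rather than word overlap, but the trend line is the point: if divergence falls while usage rises, the median pull is operating.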

Edwards recommends intentional spaces for unmediated thinking — not blanket "no AI" policies, but specific blocks where the hardest problems are worked without AI assistance. The point is not that AI is bad at those problems. The point is that human-only thinking produces the divergent outputs that AI-mediated thinking systematically eliminates.

What is the adjacent possible?

The counterpart to the median pull is what Edwards calls the adjacent possible — the kind of innovation that only biological intelligence can produce.

"We don't have any breakthroughs about this true innovative capacity that biological intelligence has for what we call the adjacent possible — a very local, very precise, very specific effect that will only happen when humans have AIs they're operating themselves, as opposed to a centralized top-down AI."

Helen Edwards, Product Impact Podcast S02E05

The adjacent possible matters because it draws a sharp line between what AI can produce on its own (by pulling toward a median position from training data) and what humans-with-AI can produce (by exploring further from any starting point). Centralized AI converges. Distributed human intelligence with AI in the loop diverges — but only when the humans remain the authors of the thinking.

For product teams, this is the strategic argument against the "AI does it for you" pattern and in favor of the "AI helps you do more" pattern. If your AI product replaces human judgment with model output, you're optimizing for what frontier models already know. If your AI product amplifies human judgment with model assistance, you unlock the adjacent possible — the innovation space your competitors literally cannot reach with the same models.

Why this matters for enterprise AI deployment

The enterprise implications are direct. Most organizations deploying AI are measuring adoption: how many people are using the tools, how many workflows are augmented, how much time is saved. Edwards's research suggests these metrics, taken alone, can mask the opposite of their intended outcome.

An organization where 90% of employees use AI daily and produce 30% more output sounds like a success. An organization where that same 90% produces outputs 30% less distinct from its competitors' is a strategic failure — and no current enterprise AI dashboard can distinguish between the two.

This is one of the questions I'm tracking closely in my ongoing research into AI value in enterprise deployments. The organizations that are starting to measure cognitive sovereignty — awareness, agency, accountability — are the ones asking the right question. Not "are people using AI?" but "is AI making our people better?" The answer is not always the same, and the difference determines whether AI adoption creates lasting value or quietly erodes the capacity that makes an organization worth paying for.

PH1 Research works with product teams measuring the real impact of AI — not just adoption curves, but the behavioral layer that determines whether AI creates or destroys value. AI Value Acceleration diagnoses where enterprise AI value creation stalls at exactly this layer — the gap between people using AI and AI making people better.


Listen: Product Impact Podcast S02E05 — The Human Impact of AI We Need to Measure

Related:
- Cognitive Sovereignty — Concept page
- The Median Pull — Concept page
- The Adjacent Possible — Concept page
- Helen Edwards — Person page
- Artificiality Institute — Organization page

Sources:
- Nature: AI and the convergence of scientific research (James Evans)
- Artificiality Institute
- Product Impact Podcast S02E05 — primary source for all Edwards quotes

Brittany Hobbs, Co-host, Product Impact Podcast

