The Cognitive Shift Every UX Researcher Needs to Make
And why having the tools is the easy part
- MIT EEG research found LLM-assisted knowledge workers showed measurably lower brain engagement and produced homogenised output, and the cognitive deficit persisted after participants stopped using the tool.
- Four shifts separate researchers who compound value with AI from those producing more mediocre work at speed: from questions to context, responses to workflows, outputs to outcomes, and interactions to orchestration.
- Five practice paths (Methodology-First, Workflow-Library, Conversation-Protected, Skeptic's, and Orchestration), each anchored to a credible practitioner, suit distinct professional identities and setup budgets.
- The work that transfers to AI (transcription, tagging, coding) was rarely the most valuable part of the craft. The work that doesn't transfer (skeptical interpretation, detecting what's absent) is becoming the central output of the role.
In June 2025, MIT Media Lab researchers ran an EEG study on knowledge workers writing essays with and without ChatGPT. The brains of the LLM-assisted group lit up substantially less than those of the unassisted group across every measured network. Their essays converged on the same expressions and ideas, were rated "soulless" by independent reviewers, and the writers themselves struggled to accurately quote work they had produced minutes earlier. When the LLM group was later asked to write without ChatGPT, their brain connectivity did not recover to the unassisted baseline. The authors named what they observed "cognitive debt": a measurable, accumulating cost paid by people who let the model do the thinking they used to do themselves.
In the same period, sociologists at Stanford and NYU published Generative AI Meets Open-Ended Survey Responses in Sociological Methods & Research. Surveying 1,500 participants, they found that 34% admitted to using LLMs when answering open-ended survey questions, and that the AI-mediated responses were measurably more homogeneous and uniformly positive than the human-only ones, particularly on sensitive topics. The variation that qualitative research depends on, the variation that lets us see what is actually going on in someone's head, was being smoothed out before it ever reached the researcher.
For UX research, those two findings sit at the centre of the problem. Researchers using AI badly are paying cognitive debt against the very judgment they are paid for, and the participants we study are increasingly answering us through the same models, narrowing what we can hear from them. The practice survives this only if researchers treat AI-assisted work as a craft to relearn under new conditions rather than a productivity tool bolted onto the old one. At PH1, I now spend a meaningful share of my time helping research leaders and senior practitioners make exactly that transition. The four shifts and five frameworks that follow are the ones we work through with teams most often, and they are designed to be picked up by any researcher who already knows what good qualitative work looks like. Part One of this series covers the practical setup (Claude.ai, Claude Cowork, Claude Code, prompt caching, the Batch API, and the data-privacy ground rules). This piece is the part that decides whether any of that setup turns into work you can be proud of.
How Default AI Use Quietly Fails Research
The instinct most of us bring to Claude or ChatGPT (ask, evaluate, refine) works for search engines and conversation, but for research analysis it is one of the most reliable ways to produce output that looks like insight without being insight. The MIT cognitive-debt finding and the Stanford/NYU survey-homogenisation finding both point at the same underlying mechanism: the model returns confident, fluent output that does not push back when something is missing, and any reviewer who is not actively looking for what is absent will not see it. In qualitative UX research, what is absent is usually the finding that matters.
A 2025 systematic review and meta-analysis of human–AI co-creation studies identified the same homogenisation effect across 19 separate empirical studies, with the strongest convergence in semantically constrained tasks (the kind that look most like a thematic analysis or coding pass). A separate empirical comparison of human and ChatGPT writing found that diversity gaps widen at scale: the more output the model produces, the more it converges on a narrow band of expression. For UX researchers, that is not an abstract methodological concern; it is the mechanism by which a 40-transcript synthesis can look polished and end up flattening exactly the variation we were paid to find.
Erika Hall makes the counterweight argument cleanly in a 2026 Medium piece: human conversation is still the most powerful design tool we have, and AI cannot follow an unexpected thread, notice hesitation, or ask the question that wasn't in the guide. Holding her argument and the homogenisation studies together is the working frame for this piece. The model is genuinely capable at certain kinds of work; there is a different set of work it cannot replace; and a research practice that thrives across this transition has to be precise about which is which.
The Four Shifts
The four shifts below all run against instincts that good qualitative researchers have built up over careers. Researchers are trained to ask open questions and let answers emerge, to follow people rather than process them, to read every transcript before claiming to understand a population, and to treat each engagement as its own intellectual problem rather than a process to systematise. Those instincts are exactly what makes senior researchers worth listening to. They are also the reason these tools tend to fail in researcher hands: a chat interface rewards the curiosity-led pattern, and the model returns confident, fluent output that does not push back when something is missing. Working well with AI requires switching, on demand, between the practitioner stance researchers spent years cultivating and a systems-design stance most have never had a reason to learn. The four shifts are the ones we see senior practitioners get caught on most often.
Shift 1: From Questions to Context
The chatbot pattern is to arrive, ask, and evaluate. The research pattern is to arrive with a brief, contextualise, then request.
Before any prompt in a research session, the model needs three things: what the project is, what your research question actually is, and what you want it to do differently from default. The last piece is the one almost no one writes down. The default behaviour of a language model is to produce coherent, confident, readable output, which in research contexts smooths ambiguity, absorbs outliers, and presents interpretation as fact.
Nikki Anderson, who runs The User Research Strategist and has published one of the most detailed AI-for-UXR frameworks available, argues on Dscout that the move from prompt to brief is the single most important change in how researchers need to approach these tools. Her framework specifies what the model should do and how to handle uncertainty, ambiguity, and contradiction (the exact places defaults fail). A brief and a long prompt are different documents performing different functions, and the move from one to the other is most of the cognitive shift in this section.
In Claude Cowork, the brief lives in your Project Knowledge file and system prompt. In Claude Code, the CLAUDE.md file plays the same role. Both create the same forcing function: before the model serves you, you have to have thought clearly about your project, your methodology, and your output expectations. Researchers who get the most from these tools have not become better prompters; they have become better at briefing, which is a research skill you already have. Most of our consulting work with research teams starts here, because skipping this shift makes the next three shifts impossible.
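To make the forcing function concrete, here is a minimal sketch of a brief riding on every request via the Anthropic Python SDK. The project details, behavioural instructions, and model name are illustrative placeholders, not a recommended template:

```python
import anthropic

# The brief: what the project is, what the research question is, and
# what the model should do differently from its defaults (the part
# almost no one writes down). Contents here are purely illustrative.
RESEARCH_BRIEF = """\
Project: discovery study on invoice reconciliation in a B2B payments tool.
Research question: where do finance admins lose confidence, and why?

Behavioural instructions (override your defaults):
- Preserve ambiguity: if a statement reads two ways, say so.
- Keep outliers visible: never fold a contradicting participant into a theme.
- Separate observation from interpretation, and label which is which.
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",   # illustrative model name
    max_tokens=2000,
    system=RESEARCH_BRIEF,       # the brief precedes every request
    messages=[{"role": "user", "content": "First transcript: ..."}],
)
print(response.content[0].text)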
Shift 2: From Responses to Workflows
The second shift is from asking for an output to specifying a process.
"Can you analyse these interview transcripts?" and "Apply this coding taxonomy to each transcript, ground every code in a direct quote, flag ambiguity rather than resolving it, and output a structured summary with a confidence rating against each interpretation" are technically the same request, but they produce completely different work. The second version takes longer to write, produces output you do not have to redo, and means the methodology you have spent years developing is now running at scale across a dataset that would have taken your team three weeks.
Sam Ladner is the clearest voice on what this requires. Her background is unusual: founding researcher on Amazon's Echo Look, formerly Microsoft (Cortana, Windows 10), and the author of Mixed Methods and Practical Ethnography. She has built AI products and set methodological standards for qualitative work. In her UXRConf 2024 talk she frames the requirement as strategic foresight: knowing where your expertise will matter most as tools evolve. The workflow brief is the artefact that makes your methodology executable, and writing it functions as a methodology audit. Most researchers attempting one for the first time discover that what they have been calling a method is actually a habit, which is exactly the moment teams we work with start to see compounding returns from these tools.
The 2023 Nature study on AI-assisted scientific research found that AI-supported researchers produced more output but converged toward the median position in their field. Those who did not converge were running specific, constrained prompts, which is the same pattern as moving from responses to workflows. Convergence is what happens when the briefing is generic.
Shift 3: From Outputs to Outcomes
The third shift is about evaluation, and it has two layers most researchers conflate.
Output calibration is defining, in advance and explicitly, what good looks like for a given deliverable. Which parts are load-bearing and require deep review? Which are structural and can be accepted at a glance? Before running the brief, write down the three to five attributes that distinguish a finding you would stand behind from a finding that merely reads well.
Outcome calibration is being honest about what the deliverable is supposed to drive. A discovery synthesis informs a decision, an evaluative report changes a design, and a debrief shifts a stakeholder's mental model. AI accelerates the production of artefacts without, on its own, accelerating the production of outcomes, so a review focused only on whether the artefact looks right is calibrating the wrong layer.
The 2025 academic literature on AI in qualitative research has converged on a consistent finding that informs both layers: AI is reliable at identifying concrete, descriptive themes and unreliable at interpretive analysis (recognising what a response means rather than what it says). Friedman (2025) found that ChatGPT consistently missed interpretive themes human coders identified, while human coders missed concrete themes AI handled reliably. Naeem et al. (2025) reach the same conclusion for thematic analysis: capable for structural pattern detection, requires human validation for interpretive synthesis. Barrera et al. (2025) provide a practical framework for where to insert validation gates.
The practical implication is that concrete-theme outputs (taxonomy, coding, frequency counts) can be reviewed at the level of "did the model apply the rule correctly?", while interpretive outputs (theme naming, finding articulation, strategic implication) must be reviewed at the level of "does this match what the participant actually meant?". These are two different review modes. Most researchers default to the lighter mode for both, which is exactly how interpretive work degrades over time.
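One way to make the two modes operational is to tag every stage of a workflow with the review depth it requires, so the lighter mode can never silently spread to interpretive work. A hypothetical sketch, with stage names invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class ReviewMode(Enum):
    RULE_CHECK = "did the model apply the rule correctly?"         # concrete
    MEANING_CHECK = "does this match what the participant meant?"  # interpretive

@dataclass
class StageOutput:
    stage: str
    content: str
    review: ReviewMode

# Concrete outputs may take the lighter mode; interpretive outputs never do.
REVIEW_POLICY: dict[str, ReviewMode] = {
    "taxonomy_coding": ReviewMode.RULE_CHECK,
    "frequency_counts": ReviewMode.RULE_CHECK,
    "theme_naming": ReviewMode.MEANING_CHECK,
    "finding_articulation": ReviewMode.MEANING_CHECK,
    "strategic_implication": ReviewMode.MEANING_CHECK,
}
```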
Helen and Dave Edwards introduced cognitive sovereignty on the Product Impact podcast as the capacity to maintain independent critical judgment while integrating AI into workflows. At the practitioner level it translates to calibrated output evaluation and a working refusal to let polished presentation collapse the distinction between work that needs deep review and work that doesn't.
Shift 4: From Single Interactions to Orchestration
The fourth shift turns a useful tool into a research-operations capability, and most researchers never make it.
At the single-interaction level, every prompt is a fresh transaction. At the orchestration level, interactions become components in a compound workflow: a coding pass feeds a thematic synthesis pass, which feeds a recommendations pass, which feeds a stakeholder summary. Each stage is constrained by the brief above it, and the whole system runs end-to-end with you reviewing at the gates rather than at every step.
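A compressed sketch of what that compound workflow can look like in code, assuming the Anthropic Python SDK; the stage briefs are one-line placeholders standing in for full written briefs, and the gate is deliberately a human checkpoint rather than an automated heuristic:

```python
import anthropic

client = anthropic.Anthropic()

# Placeholder briefs: in practice each is a full written brief (Shift 1).
CODING_BRIEF = "Apply the coding taxonomy; ground every code in a quote."
SYNTHESIS_BRIEF = "Group codes into themes; flag single-source themes."
RECS_BRIEF = "Derive recommendations; mark each as evidenced or speculative."
STAKEHOLDER_BRIEF = "Summarise for a product audience; preserve caveats."

def run_stage(brief: str, payload: str) -> str:
    """One component of the compound workflow: one constrained pass."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=4000,
        system=brief,
        messages=[{"role": "user", "content": payload}],
    )
    return response.content[0].text

def gate(label: str, output: str) -> str:
    """Review gate: the researcher inspects output before the next stage
    runs. The calibrated review modes from Shift 3 apply here."""
    print(f"--- review gate: {label} ---\n{output[:500]}\n")
    input("Approve and continue? (Enter to proceed, Ctrl+C to stop) ")
    return output

transcripts = open("transcripts.txt").read()  # illustrative input
codes = gate("coding", run_stage(CODING_BRIEF, transcripts))
themes = gate("synthesis", run_stage(SYNTHESIS_BRIEF, codes))
recs = gate("recommendations", run_stage(RECS_BRIEF, themes))
summary = run_stage(STAKEHOLDER_BRIEF, recs)
```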
Claude Code and the broader agentic-tool category change what an individual researcher can produce here. The setup investment is real (the Anthropic Claude Code documentation is the canonical starting point, and combining it with prompt caching and the Batch API as covered in Part One makes the economics work for high-volume research workloads), and the payoff is structural. Once the workflow is encoded, every transcript run through it gets the same disconfirmation discipline and evidence grounding, and you stop rebuilding methodology each session and start applying it at scale.
Sam Ladner's strategic foresight framing applies directly: researchers who treat AI as a faster way to do single tasks accumulate marginal gains, and researchers who treat it as the substrate for orchestrated research operations accumulate compounding ones. The User Interviews 2025 State of User Research report shows the size of the gap, with 58% of researchers reporting improved efficiency and 63% reporting faster turnaround, while the spread between the top and bottom quintile is almost entirely about whether the workflow has been orchestrated.
Orchestration also changes how you review. Instead of evaluating every output, you evaluate the workflow itself: stress-test it on a known-good dataset, identify failure modes, tighten constraints, then trust it, with sample-based review, for the next ten, fifty, or two hundred runs. The trust is engineered through the testing rather than granted on faith. Gregg Bernstein's 2024 reminder that AI is neither the product nor the solution is the right counterweight here, because orchestration scales whatever methodology is already in place; it does not substitute for it.
Five Paths Forward
The shifts above are universal, but the path through them is not. The five frameworks below are each anchored to the public work of a credible practitioner, and each suits a different methodological identity, working context, and set of values. Most research teams we advise eventually combine two of them, with one as a philosophical anchor and the other as an operational scaffold.
Path 1: The Methodology-First Path
Best for: senior researchers with strong methodological commitments who want AI to scale their existing rigour rather than reshape their approach.
Anchor practitioner: Sam Ladner, author of Mixed Methods and Practical Ethnography, founding researcher on Amazon's Echo Look, formerly Microsoft (Cortana, Windows 10).
Core idea: document your method first, in plain English that another researcher could execute, and then encode it in a Project Knowledge file or CLAUDE.md so the model is constrained by your standards rather than its defaults. The starting move is a methodology audit: write down, in three to five paragraphs, exactly how you analyse a transcript. Most researchers attempting it for the first time discover that what they have been calling a method is actually a habit, which is the central piece of work this path produces.
Where to start:
- Sam Ladner's Brave UX interview on mixed methods (90 minutes; foundational)
- Her UXRConf 2024 talk on Strategic Foresight
- Read or re-read Practical Ethnography before writing your first brief
- Ethan Mollick, Co-Intelligence: Living and Working with AI, and his One Useful Thing Substack — the clearest public framework for thinking about AI as a co-worker rather than a replacement, written by the Wharton professor who runs the Generative AI Lab. Methodology-first researchers benefit from his rules of engagement (be the human in the loop, treat AI like a person but define its role) before encoding their method into a system prompt.
Path 2: The Workflow-Library Path
Best for: researchers who want to integrate AI systematically across the entire research lifecycle (planning → recruitment → fieldwork → analysis → reporting), and who think in templates and reusable components.
Anchor practitioner: Nikki Anderson, founder of Drop In Research and author of The User Research Strategist.
Core idea: build a structured prompt library, with one brief per phase of your workflow, each tested and refined against your own quality bar. The library is the asset that compounds across studies, so it needs to be treated as a living document rather than a set-and-forget collection. Anderson's published framework covers planning, analysis, and stakeholder communication, and is unusually disciplined about marking the phases where AI should not be used.
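One lightweight way to keep the library living rather than set-and-forget is to carry version and test metadata with every brief. A hypothetical structure, not Anderson's published format; names, versions, and dates are placeholders:

```python
from dataclasses import dataclass

@dataclass
class Brief:
    phase: str          # planning, recruitment, fieldwork, analysis, reporting
    version: str
    last_tested: str    # date of the last run against your own quality bar
    ai_permitted: bool  # some phases are marked off-limits for AI entirely
    text: str

LIBRARY: dict[str, Brief] = {
    "analysis": Brief("analysis", "v3", "2025-11-02", True, "..."),
    "fieldwork": Brief("fieldwork", "v1", "2025-09-14", False,
                       "AI not used in live sessions; transcription only."),
}
```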
Where to start:
- AI Frameworks + Prompts to Optimize Your Entire Research Process on Dscout — the most detailed public starting point
- Nikki Anderson's Substack: The User Research Strategist
- Dear Nikki podcast for ongoing practical guidance
- Caitlin Sullivan (formerly Head of User Research at Spotify Business; 2,000+ hours testing AI for research) runs AI Customer Research Analysis on Maven, which builds exactly this kind of multi-step insights workflow. Her Great Question article on UX research tasks AI should and should not do and Where to Use AI in the UX Research Process talk are the strongest practitioner references on workflow placement that I know of.
Path 3: The Conversation-Protected Path
Best for: researchers whose primary value comes from time in the field (interviews, ethnography, contextual inquiry), and who want to use AI for the work around the conversation while protecting the conversation itself.
Anchor practitioner: Erika Hall, principal at Mule Design, author of Just Enough Research and Conversational Design.
Core idea: human conversation is the most powerful design tool you have. AI accelerates everything adjacent to the conversation (transcription, formatting, taxonomy application, first-draft synthesis, search across your back catalogue), but the moment you let AI mediate the conversation itself (auto-summarised interviews you didn't watch, AI-moderated sessions, machine-generated insights from data you never sat with), the value compounds in the wrong direction. Hall's recent writing names this distinction explicitly.
Where to start:
- Talk to Each Other: Why Human Conversation Is Still the Most Powerful Design Tool (2026)
- Erika Hall Knows How to Fix Your Design Process, her interview on Dscout
- Just Enough Research (2nd edition) — read alongside any AI integration work as a check on first principles
Path 4: The Skeptic's Path
Best for: researchers whose discipline depends on resisting industry hype cycles, who are deeply suspicious of AI vendor marketing, and who want to integrate the technology only at points where it demonstrably helps without becoming the centre of their practice.
Anchor practitioner: Gregg Bernstein, author of Research Practice, longtime documenter of how research actually gets done.
Core idea: AI is neither the product nor the solution; it is one tool among many, and centring your practice on it is how you guarantee disappointment. Use it where the evidence is clear (transcription, taxonomy application, structural summarisation, search) and refuse to use it where the evidence is weak (interpretive synthesis without validation, generative finding articulation, anything that requires sitting with ambiguity). Disciplined refusal is the central craft this path is built around.
Where to start:
- Gregg Bernstein's blog: blog.gregg.io (2024 essay I don't care about AI, because AI is neither the product nor the solution)
- Research Practice — the book that anchors the worldview
- Pair with Should ChatGPT help with my research? (Friedman, 2025) for empirical grounding
Path 5: The Orchestration Path
Best for: researchers building research operations infrastructure, running at scale (high volume of studies, high transcript count), or working in teams where consistency across many engagements is the central challenge.
Anchor practice: Claude Code and the broader agentic-tool category, used to chain compound workflows.
Core idea: stop thinking in single prompts and start thinking in orchestrated workflows. Every brief becomes a component, every output becomes the input to the next stage, and the entire pipeline runs end-to-end with you reviewing at the gates. The set-up investment is genuine (two to eight weeks for the first serious workflow) and the payoff is structural: once your methodology is encoded, every dataset run through it gets the same quality of treatment, and the academic guidance on validation gates (Barrera et al., 2025) becomes implementable in the pipeline rather than improvised at review. Stacking with prompt caching and the Batch API (covered in Part One) is what makes this economically viable for high-volume research workloads.
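A minimal sketch of the economics lever, assuming the Anthropic Python SDK's Message Batches API with prompt caching applied to the shared brief; the model name and brief contents are placeholders:

```python
import anthropic

client = anthropic.Anthropic()

WORKFLOW_BRIEF = "..."          # the encoded methodology (see the Shift 2 sketch)
transcripts = ["...", "..."]    # one string per transcript

# The brief is identical across every request, so it is marked cacheable;
# each transcript is the uncached, per-request payload.
cached_system = [{
    "type": "text",
    "text": WORKFLOW_BRIEF,
    "cache_control": {"type": "ephemeral"},   # prompt caching
}]

batch = client.messages.batches.create(
    requests=[
        {
            "custom_id": f"transcript-{i:03d}",
            "params": {
                "model": "claude-sonnet-4-5",  # illustrative model name
                "max_tokens": 4000,
                "system": cached_system,
                "messages": [{"role": "user", "content": text}],
            },
        }
        for i, text in enumerate(transcripts)
    ],
)
print(batch.id, batch.processing_status)  # poll until "ended", then fetch results
```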
Where to start:
- Anthropic Claude Code documentation — the canonical starting point
- Bringing AI into UX Research: Frameworks, Tools & Tactics — clearer than most YouTube treatments of compound workflows
- How Our UX Studio Uses AI for UX Research (Lessons Learned) — a working team showing real integration patterns and what failed first
- For the broader tool landscape, Maze's 2025 User Research Trends Report, Looppanel's How AI is Transforming UX Research in 2025, and Great Question's Practical Guide to AI for UX Research in 2025
How to Choose Your Path
Three diagnostic questions. Answer them honestly, because the wrong path produces friction without payoff and the right one compounds.
Where does your professional identity sit?
- Rigorous methodology → Path 1
- Systematic process across the workflow → Path 2
- Time spent with participants → Path 3
- Disciplined refusal of hype → Path 4
- Research operations and scale → Path 5
What is your biggest constraint right now?
- Quality consistency across many studies → Path 1 or 5
- Time on repeatable analytical work → Path 2 or 5
- Loss of depth from compressed timelines → Path 3
- Vendor noise and unclear ROI → Path 4
- Inability to scale your team → Path 5
What setup capacity do you have?
- 2–4 hours per week → Path 1, 3, or 4
- 1–2 weeks dedicated → Path 2
- 4–8 weeks dedicated → Path 5
The paths are not mutually exclusive, and most researchers end up combining a philosophical anchor (Path 1 or 3) with an operational scaffold (Path 2 or 5). Order matters: start with the philosophy, because retrofitting it onto operations almost always produces more volume of lower-quality work.
One principle holds across every path. Build explicit disconfirmation into every analysis prompt as a structural requirement: "Before summarising any pattern, identify evidence that contradicts or complicates it." In our experience helping research teams refine their briefs, this single instruction does more for output quality than any other intervention, regardless of which path is in use.
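In code, the principle is one constant appended to every analysis brief, so no individual prompt can quietly drop it; a trivial sketch:

```python
DISCONFIRMATION = (
    "Before summarising any pattern, identify evidence that "
    "contradicts or complicates it."
)

def with_disconfirmation(brief: str) -> str:
    """Append the disconfirmation requirement to every analysis brief,
    structurally, rather than trusting each prompt to include it."""
    return f"{brief}\n\nStructural requirement:\n{DISCONFIRMATION}"
```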
Why This Feels So Hard Right Now
UX research has been hit harder over the last three years than at any point in my twenty years of doing this work. Indeed reported that UX research job postings dropped 73% from 2022 to 2023; the User Interviews 2025 State of User Research found that 49% of researchers feel negative about the future of UXR (a 26-point swing in a single year) and that 21% of organisations laid off UX researchers in 2025; and the roles that survived were quietly redefined, with researcher and designer collapsed into a single title, salary, and person.
The aggregate context is unusual. The Stanford AI Index 2025 documents AI capability advancing faster than any previous professional technology, while the lived experience for most researchers is "the tools got better, my team got smaller, my budget got cut, and my CEO read a McKinsey report on a plane."
If you feel behind, you are reading the situation correctly. The OpenAI productivity analysis covered by VentureBeat documented a 6x productivity gap between AI power users and everyone else with identical tools available, and the gap between researchers who have made these cognitive shifts and researchers who haven't is showing up in deliverable quality, client retention, and who keeps their job through the next round of cuts.
What is shifting is the value distribution inside the discipline. The work around the judgment (transcribing, tagging, formatting, cross-referencing, first-draft synthesis) is the work AI does well, and most of it was never the most interesting part of our craft to begin with. The work that does not transfer to a model (treating a finding skeptically, checking a pattern against a contradicting case, knowing when a participant is performing rather than reporting) is moving from a background skill to the central output of the role. Researchers who can do that work in the new conditions, with disciplined briefs and calibrated review, are seeing their per-engagement value rise rather than fall.
The painful part is that the researchers losing ground fastest tend to be among the most careful, because their professional identity was built on rigour and the prospect of trusting a system they could not fully inspect was reasonably unattractive. That instinct is correct as an initial response, and it becomes a strategic risk only when it stops the practitioner from learning to use these tools well enough to defend the rigour they actually care about. The cognitive shift the new role is built around is from undifferentiated skepticism to precise skepticism: knowing which parts of any AI-assisted output require full critical engagement, and bringing it specifically there.
Where Researcher Judgment Compounds
The practitioners seeing the largest gains from these tools share one trait: they have built a precise map of where the model is reliable and where it isn't, and they spend their critical attention specifically at the boundary. That mapping skill is a research skill, not a technology skill, and senior researchers already have most of it. It is the same discipline behind pushing back on a finding that is too clean, returning to the transcript when a pattern doesn't feel right, and flagging any synthesis whose evidence base cannot be inspected. The cognitive shift is not asking for a new instinct; it is asking the existing instinct to be applied at a different layer of the work.
Where to Start This Week
The narrative that everyone else is already using these tools brilliantly is mostly manufactured by people with a commercial interest in selling something. The senior researchers who are getting compounding value from AI are working it out in real time, with the same care they apply to any new method, and they are doing it inside studies that already matter rather than against synthetic exercises.
A practical starting move for the week ahead is to pick one of the four shifts and practise it on a live engagement. Write a brief that meets your own standard, run a small batch of real work through it, and note specifically where the model's output diverged from what you would have produced. The compounding craft beneath these tools comes from that comparison, run repeatedly, against work you already understand well enough to evaluate.
Three resources worth your time, distinct from the ones in Part One:
- Indi Young — Practical Empathy and the thinking-styles work — Indi has been articulating mental-models research and listening craft for two decades. Her work is the most durable counterweight to AI-mediated synthesis I know of, because it is built around the parts of human cognition the model genuinely cannot replicate.
- Steve Portigal — Interviewing Users and portigal.com — Interviewing Users is the canonical reference for the conversational craft AI cannot do for you. Read it (or re-read it) alongside any AI integration work as a check on what is actually load-bearing in primary research.
- Gregg Bernstein — Research Practice — A working researcher's account of how rigorous practice actually gets done day to day. Pair it with the disconfirmation principle for a useful counterweight to vendor narratives about what AI is and is not for.
For the tooling, cost-saving levers, and data-privacy side of this conversation, Part One — The UX Researcher's Guide to Claude, Claude Cowork, and Claude Code is the companion piece.
If your team would benefit from advisory or training support in making these shifts (working through a methodology audit, building a brief library, calibrating output review, or designing an orchestrated research-ops workflow), that is the work we do at PH1 and AI Value Acceleration.
Sources and Further Reading
Peer-reviewed academic sources on AI in qualitative research
- Friedman, C. et al. (2025), Should ChatGPT help with my research? A caution against artificial intelligence in qualitative analysis, Qualitative Research
- Naeem, M. et al. (2025), Thematic Analysis and Artificial Intelligence: A Step-by-Step Process for Using ChatGPT in Thematic Analysis, International Journal of Qualitative Methods
- Barrera, B. et al. (2025), Leveraging AI to Enhance Qualitative Research: Experiences and Recommendations From Case Studies, International Journal of Qualitative Methods
- Chatzichristos, G. (2025), Qualitative Research in the Era of AI: A Return to Positivism or a New Paradigm?, International Journal of Qualitative Methods
- Jones, K.M.L. (2025), Generative AI in Qualitative Research and Related Transparency Problems: A Novel Heuristic for Disclosing Uses of AI, International Journal of Qualitative Methods
Cognitive impact and homogenisation studies
- MIT Media Lab (2025), Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
- Zhang, S., Xu, J., Alvero, A.J. (2025), Generative AI Meets Open-Ended Survey Responses: Research Participant Use of AI and Homogenization, Sociological Methods & Research
- Tilburg University (2025), Does generative AI make us think alike? A systematic review and meta-analysis of homogenization effects in human–AI co-creation
- ScienceDirect (2025), Homogenizing effect of large language models on creative diversity: An empirical comparison of human and ChatGPT writing
Industry context
- Indeed Design (2023), UX Design and UX Research Job Listings Plunged in 2023
- User Interviews (2025), State of User Research Report
- Stanford HAI (2025), AI Index Report
- VentureBeat (2025), OpenAI report reveals a 6x productivity gap between AI power users and everyone else
- Maze (2025), 2025 User Research Trends & Insights
- Looppanel (2025), How AI is Transforming UX Research in 2025
- Great Question (2025), A Practical Guide to AI for UX Research in 2025
- Miro (2025), How to Use AI for User Research Tools & Methods
UXR thought leaders referenced
- Sam Ladner — UXRConf 2024 talk on Strategic Foresight; Brave UX interview on mixed methods; samladner.com
- Erika Hall — Talk to Each Other: Why Human Conversation Is Still the Most Powerful Design Tool; Erika Hall on Dscout
- Nikki Anderson — AI Frameworks + Prompts to Optimize Your Entire Research Process on Dscout; The User Research Strategist; Dear Nikki podcast
- Caitlin Sullivan — AI Customer Research Analysis on Maven; aicustomerresearch.com; Great Question articles; Where to Use AI in the UX Research Process talk
- Ethan Mollick — Co-Intelligence: Living and Working with AI; One Useful Thing Substack; Wharton Generative AI Lab
- Gregg Bernstein — blog.gregg.io (I don't care about AI, because AI is neither the product nor the solution, 2024); gregg.io
- Indi Young — indiyoung.com; Interview on User Research, Empathy and Thinking Styles
- Steve Portigal — portigal.com (canonical reference for interview craft alongside any AI integration)
- Helen and Dave Edwards — cognitive sovereignty discussed on the Product Impact podcast, S02E05
Tool and workflow video resources
- Bringing AI into UX Research: Frameworks, Tools & Tactics
- How Our UX Studio Uses AI for UX Research (Lessons Learned)
- Where to use AI in the UX research process
- Using AI in UX Research Analysis, Synthesis, and Reporting
- AI & UX Research (feat. Savina Hawkins & Caleb Sponheim)
- How To Use AI to Increase Efficiency in Your User Research
Tool documentation
- Anthropic Claude Code documentation
- Claude Cowork / Teams overview
This is part two of a two-part series. Part one, The UX Researcher's Guide to Claude, Claude Cowork, and Claude Code, covers practical setup for each tool, the self-assessment framework, and the data privacy risks you need to understand before processing any participant data.
Brittany Hobbs is COO and VP Research at PH1, CEO of AI Value Acceleration, and co-host of the Product Impact podcast. She has led research at Mozilla, Spotify, Google, BBVA, TELUS Health, and Schneider Electric across more than 300 engagements.