The UX Researcher's Guide to Claude, Claude Cowork, and Claude Code
Which tool you need, how to set it up, and what the risks actually are
- Claude, Claude Cowork, and Claude Code are not version increments of the same product — each represents a different relationship with AI that fits a different stage of the UX research workflow.
- A 6x productivity gap between AI power users and everyone else has been documented with identical tools available to both groups — the differentiator is how, not whether, you use them.
- Prompt caching and the Batch API can reduce AI processing costs by 50–90% for researchers running high-volume transcript workflows, making Claude competitive per seat with specialty UXR AI tools.
- Data privacy varies significantly by tier — Free users may have inputs used for training; Teams and Enterprise users are covered by no-training-on-inputs policies by default.
A 2025 Microsoft and Carnegie Mellon study found that knowledge workers applied zero critical thinking to roughly 40% of their AI-assisted tasks. A separate METR study of experienced developers found something stranger: practitioners using AI tools believed they were 24% faster, but measured against a control they were actually 19% slower. Felt productivity and measured productivity pointed in opposite directions.
Both findings land hard in UX research, where we are paid for the rigour of our judgment rather than the speed of our output. An AI tool that makes you feel more productive while quietly degrading your analytical standard is a liability you cannot see by looking at the artefact.
I have spent more than 20 years in UX research and led 300+ client engagements, and it still took me a long stretch of experimentation, false starts, and rounds of analysis that just weren't good enough before I arrived at a method I trust. The shift that finally worked was treating every research engagement like a product, with a written PRD, explicit guardrails, a defined quality bar, and constraints on what the AI is allowed to do at each stage. At PH1 and AI Value Acceleration, we now help research teams set up exactly that kind of working method, and most of the questions in this article are the ones we are asked most often. I am writing it so you can shortcut past the failures I went through, and so the rigour and security you are paid for stay intact while you fold these tools into your practice.
The tools are everywhere and the expectations are building, but almost all of the guidance available online assumes you are either a developer building software or an executive buying it. It rarely assumes you are someone who designs studies, conducts participant interviews, synthesises qualitative data, and owns the integrity of the findings. This article is written for that person. It covers the three Anthropic tools you actually have access to, why each exists, how each fits into a different stage of your workflow, and the privacy, security, and rigour risks most coverage skips when promoting AI as a productivity miracle.
The harder question, rewiring how you think to use these tools well, is the subject of the companion piece: The Cognitive Shift Every UX Researcher Needs to Make. Read this one first.
The Job Market You're Reading This In
The pressure to learn these tools is not arriving in a vacuum. The conditions matter.
Indeed reports UX research job postings fell 73% from 2022 to 2023, one of the steepest single-year declines the discipline has seen, and postings have not recovered since. The User Interviews State of User Research 2025 found that 49% of researchers now feel negative about the future of UXR, a 26-point increase from the prior year, and 21% of surveyed organisations reported laying off UX researchers in 2025. A 2025 Measuring U analysis found that 35% of organisations reported reducing UX staff.
The stories behind those numbers are familiar. Research teams at major tech companies reduced from eight people to two. Companies that commissioned dedicated discovery research are now asking for "just the key headlines." Designers are being told to handle research as a side responsibility. Consultants and agency researchers who built stable client pipelines in 2022 are finding that market considerably harder to sustain.
An analysis by UX Army found AI tools already automating entry-level research tasks: basic transcript coding, usability pattern identification, survey analysis, the work that once served as career on-ramps. The Nielsen Norman Group State of UX 2026 report describes the researcher role evolving toward strategy and synthesis while the execution layer compresses, and senior practitioners are surviving while entry-level and mid-market roles are not recovering at the same rate.
The discipline is consolidating into a different shape, recognisable to anyone who has spent time at the senior end of this craft.
Product teams in 2026 can ship faster than they ever could. Vibe coding, design AI, automated A/B testing, and AI-driven analytics all compress the time between idea and live product. That changes the demand profile for research dramatically. The constant trickle of small evaluation studies and mid-funnel usability checks that occupied most UX teams from 2018 to 2022 is being absorbed into faster product cycles, increasingly handled by AI-assisted tools or by designers themselves.
What remains, and what is becoming significantly more valuable per engagement, is foundation and generative research: the work that decides what gets built in the first place. Discovery interviews that surface what an audience actually needs. Behavioural research that reveals what people do, not what they say. Strategic synthesis that connects evidence across studies into a defensible direction. This work will be lower incidence than it used to be (fewer studies, longer cycles between them) and higher value per study, because the cost of shipping the wrong product fast is now greater than the cost of shipping the right product slow.
The research function that survives works at both ends of this barbell: the strategic end (generative, foundation, and behavioural research that justifies a dedicated function) and the execution end, where AI tools handle the coding, tagging, formatting, and first-draft synthesis that used to consume most of a researcher's week. The middle layer is the one being compressed. The researchers who can credibly do both ends, and who use these tools well enough that the execution layer doesn't burn their week, will be in a fundamentally different position from those who can do only one.
What Claude Code Is Actually For
The claims about Claude Code circulating right now range from accurate to absurd, and the gap between them is where UX researchers are getting hurt.
The accurate version, in research terms: imagine you have a folder on your computer with forty interview transcripts, a coding taxonomy document, and a research question. You install Claude Code, write a paragraph telling it what your project is and what your methodology requires, and ask it to apply your taxonomy to all forty transcripts and output structured CSVs with the verbatim quote supporting each code, a confidence rating on each interpretation, and an explicit flag on any code where the application is ambiguous. That work would take a careful researcher most of a week. Claude Code can produce a competent first pass in an afternoon. You still review and correct it, and you are still the analyst, but the execution layer that used to eat the week is now compressed into something you can do alongside actual analytical work.
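To make that deliverable concrete, here is a sketch of the kind of output schema you might specify in the brief. The column names and codes are illustrative, not a standard:

```
transcript_id,code,verbatim_quote,confidence,ambiguous,notes
P07,invoice-workaround,"I just export everything to Excel because the tool times out",high,,
P12,invoice-workaround,"my manager told me to skip that step",low,yes,could also read as policy-friction
```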
That capability is genuinely new, which is why Claude Code reached $1 billion in run-rate revenue inside six months of launch and why "vibe coding", describing what you want in plain English and watching software get built, has entered mainstream tech vocabulary.
The absurd version is uglier. LinkedIn influencers, course sellers, and self-styled "AI research consultants" are now claiming you can replace your entire research function with Claude Code prompts, run a complete discovery study in 90 minutes, or generate research insights from no actual data. These claims are predatory, aimed at researchers who are scared, leaders who are credulous, and budgets looking for an excuse to cut. They produce work that looks like research without being it, and the people selling this version are profiting from your industry's anxiety rather than serving your craft.
When research teams we work with are wrestling with the gap between those two stories, what they are actually wrestling with is leadership pressure on one side and unclear capability on the other. Leaders read the absurd version and judge their internal researchers against capabilities that don't exist. Researchers read the absurd version, feel inadequate against an imaginary benchmark, and either freeze or chase the wrong workflows. The defence against both is a precise picture of what each Anthropic tool actually does and what it does not, which is what the rest of this article lays out.
The Case for Learning These Tools Now
An OpenAI productivity analysis covered by VentureBeat found a 6x productivity gap between AI power users and everyone else, with identical tools available to both groups. The State of User Research 2025 found that 88% of researchers identified AI-assisted analysis as the top trend shaping UXR in 2026.
Not every piece of AI hype is justified, particularly for research work. The case for learning is narrower and harder to argue with: the gap between practitioners using these tools with genuine rigour and those not using them at all is now large enough to show up in what teams can take on, what clients pay for, and in some organisations, who makes it through the next round of headcount decisions. Experienced researchers should not have to justify their continued employment by mastering tools designed for a different discipline, but the conditions are what they are.
Three Tools. Three Different Relationships with AI
A common mistake is treating Claude, Claude Cowork, and Claude Code as version increments of the same product. They're not. Each represents a different relationship between you and the model, and each fits a different part of the UX research workflow.
Claude (claude.ai, Free or Pro)
A web-based chat interface. Each conversation starts fresh by default, but Claude Pro now includes Projects, which let you attach documents (briefs, taxonomies, prior reports) that the model references on demand inside that Project's conversations. So persistent context exists at the personal-account level, but only inside Projects you set up. Outside of Projects, you are the memory.
Best fit in your workflow: desk research and source synthesis when you need to think alongside a fast reader, refining your own writing (executive summaries, stakeholder messages, recommendation language), drafting interview guides, sense-checking a research question before you brief the team, and exploratory analysis where you want to talk through a pattern rather than commit it to a deliverable. Less suited to high-volume processing or shared team methodology.
Claude Cowork (Teams or Enterprise)
Claude Cowork is Anthropic's agentic product for knowledge work. It runs inside the Claude desktop app on macOS and Windows and, when you give it permission, can read, edit, and create files inside specific folders on your local machine. That moves it past Claude.ai's chat-with-attachments paradigm: the model can deliver finished documents into your working folders rather than producing text you have to copy out.
Cowork keeps everything Projects give you on Claude.ai: shared knowledge bases, system prompts, admin controls, and team collaboration. The Teams and Enterprise plans add data handling policies that are meaningfully different from consumer tiers (no training on your inputs by default), role-based access controls, and connectors to services like Google Drive and Gmail.
On security: filesystem access is scoped to the directories you explicitly grant. The model runs with your user-account permissions, so the directories you authorise are the directories it can read and write. Treat the folder you point Cowork at the way you would treat a folder shared with a contractor: structure it deliberately, keep raw participant data out of it unless pseudonymised, and audit what the model has touched at the end of each session. For sensitive research, run it against a dedicated working folder rather than your home directory.
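One illustrative layout for that dedicated folder (the folder names are ours, not a product convention):

```
cowork-study/
├── brief/         methodology, taxonomy, discussion guide
├── transcripts/   pseudonymised copies only; raw recordings stay elsewhere
├── drafts/        where the model writes synthesis for your review
└── final/         deliverables you have reviewed and signed off
```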
This is the right tool for most working UX researchers running active studies.
Claude Code (agentic CLI, installed locally)
A command-line tool installed on your computer that reads and writes files in your project folder, runs scripts, and executes multi-step tasks autonomously rather than waiting on chat turns. You describe a workflow and it carries it out. For research, this means: process all 47 transcripts in this folder using this taxonomy and output a structured CSV with grounded quotes and confidence ratings. That is an automated research pipeline, not a chatbot session.
The privacy posture is meaningfully different from the web tools. Claude Code is a CLI that calls Anthropic's API for inference, so your files are not uploaded to a remote project. Only the relevant excerpts are sent in API calls, and on Pro, Teams, and Enterprise tiers your inputs are not used for training. For research with sensitive participant data, that is a real upgrade over web-uploaded approaches.
The security responsibility is also higher, because the tool can read and write files. You have to scope it. Anthropic's Claude Code sandboxing documentation describes OS-level enforcement (sandbox-exec on macOS, bubblewrap on Linux) and proxy-based network filtering, with permissions that control which tools (Bash, Read, Edit, WebFetch, MCP) Claude Code can use. Defence in depth is the working model we recommend to teams: keep Claude Code working in a project-specific folder rather than your home directory, lock down permissions to the minimum the workflow needs, and audit the changelog at the end of each run. For regulated data, treat the sandbox as a requirement rather than an option.
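As a sketch, a scoped permissions file for a transcript-coding project might look like the following, assuming the allow/deny rule syntax described in Anthropic's settings documentation. Verify the current syntax before relying on any specific rule:

```json
{
  "permissions": {
    "allow": [
      "Read(./transcripts/**)",
      "Edit(./analysis/**)"
    ],
    "deny": [
      "WebFetch",
      "Bash(curl:*)"
    ]
  }
}
```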
Setup takes real time, and the ceiling on what is possible is significantly higher than on the chat surfaces.
Match the Tool to the Stage of Your Workflow
These three tools are not stepped versions of the same product, and the question is not which one to commit to. The question is which tool fits which part of a research engagement. Most working researchers will use all three across the lifecycle of a study, just for different stages.
If you're doing desk research, refining your writing, or thinking alongside a fast reader: use Claude.ai. This is the right surface for early-stage exploration of a problem space, sourcing and summarising published research, sense-checking a research question, drafting an interview guide, or sharpening the language in an executive summary or stakeholder communication. The Pro Projects feature lets you attach a brief or style guide that the model references on demand. The strength is fluency and speed against well-defined inputs you keep close to hand.
If you're running active studies and want a shared team methodology baked into every session: use Claude Cowork. This is where the work compounds. A Cowork Project holds your research framework, analysis taxonomy, screener, discussion guide, and quality bar in one place, and every conversation inside the Project starts informed. The desktop app's local file access lets the model deliver finished synthesis directly into your working folders rather than producing text you copy out. Teams and Enterprise tiers add admin controls, role-based access, no-training-on-inputs by default, and audit visibility your IT and Legal teams will care about. For most working UX researchers running studies with real participant data, this is the day-to-day surface.
If you're processing volume or building a repeatable research-ops workflow: use Claude Code. Forty transcripts that need consistent coding. Six studies you want to synthesise across. A quarterly research operation you currently dread. A multi-stage analysis pipeline (coding pass → thematic synthesis → recommendations → stakeholder summary) where each stage is constrained by the brief above it. The setup investment is real (two to eight weeks for the first serious workflow) and the security surface is real, but for high-volume work the upside is structural. Source data stays on your machine, only excerpts go to the API, and the workflow runs the same way every time.
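For teams ready to encode that kind of pipeline, a first run can be a handful of headless invocations. This sketch assumes Claude Code's non-interactive print mode (claude -p); the prompts, stages, and file names are placeholders:

```bash
# Illustrative pipeline sketch. Each stage is constrained by the brief in
# CLAUDE.md and reads the output of the stage above it.
claude -p "Apply the taxonomy in brief/taxonomy.md to every transcript in ./transcripts. Write analysis/coded.csv with quote, confidence, and ambiguity flag per code."
claude -p "Synthesise themes from analysis/coded.csv. Write analysis/themes.md, listing disconfirming evidence under every theme."
claude -p "Draft analysis/recommendations.md from analysis/themes.md, citing transcript IDs for every claim."
```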
A few honest questions to ask before any of them touch participant data:
- Have you ever opened a terminal? If never, Claude Cowork is your operational surface. Claude Code is reachable when you're ready, not on day one.
- What kind of data will you be feeding it? If it includes named participants or PII, only Cowork or Enterprise are acceptable starting points, and pseudonymisation comes first. Read the risks section before you do anything.
- Are you delivering next week or building a workflow that compounds? Next week sits in Claude.ai or Cowork. Compounding capacity sits in Cowork plus Code.
The cleanest starting point is to match the tool to the stage of work, keep the rigour and security the work demands, and add the next tool only when the current one stops being enough. Most teams we advise start with Cowork, layer in Claude.ai for desk research, and add Claude Code once they have a repeatable analytical workflow worth encoding.
Getting More Out of Each Tool
Once a research team has the three tools mapped to their workflow, the question we get asked next is the practical one: how do we get the most out of these without paying for a stack of specialty UXR analysis tools on top? Two API features change the economics of running research-grade workflows at scale, and both apply to anything you build with Claude Code or via the API.
Prompt caching for repeated context. When a multi-step workflow keeps sending the same reference material into the model (your methodology brief, analysis taxonomy, code book, discussion guide, prior synthesis from earlier in the engagement), Anthropic's prompt caching feature lets you mark that material as cached. A cache read costs roughly 10% of the standard input price, which is a 90% discount on the repeated portion of every subsequent call. The cache write costs 1.25x base for a 5-minute lifetime or 2x base for a 1-hour lifetime, so caching pays off after one read at the short lifetime and after two reads at the long one. For a transcript-coding pipeline that sends the same 8-page methodology brief into the model 40 times, the cost difference is the difference between a meaningful research-ops budget and a rounding error.
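A minimal sketch of what that looks like through the Anthropic Python SDK. The model ID and file path are placeholders, and cached blocks must also meet a minimum token length, so check the current prompt-caching docs before budgeting around it:

```python
# Minimal prompt-caching sketch using the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The repeated context: your methodology brief, taxonomy, code book, etc.
methodology_brief = open("brief/methodology.md").read()

def code_transcript(transcript_text: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model ID
        max_tokens=2048,
        system=[
            {
                "type": "text",
                "text": methodology_brief,
                # Marks the brief as cacheable: the first call pays the cache
                # write premium; later calls inside the cache lifetime pay
                # roughly 10% of the normal input price for this block.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[
            {
                "role": "user",
                "content": "Code this transcript per the brief:\n\n" + transcript_text,
            }
        ],
    )
    return response.content[0].text
```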
Batch processing for non-urgent work. A lot of UX research processing genuinely does not need to happen in real time. Coding 60 transcripts overnight, generating draft synthesis across six prior studies, refreshing a research repository, or running an asynchronous quality-pass over a stakeholder summary are all jobs you can submit to Anthropic's Message Batches API and accept a 24-hour turnaround for. The Batch API gives you a flat 50% discount on both input and output tokens for that latency tradeoff. Stack it with prompt caching on the methodology context and you can run a serious volume of research-grade processing for a fraction of what specialty UXR analysis platforms charge per seat.
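And a matching sketch for the overnight submission, with the same caveats on model ID and paths:

```python
# Sketch of an overnight batch run via the Message Batches API. Results
# arrive within 24 hours at a flat 50% discount on input and output tokens.
import pathlib

import anthropic

client = anthropic.Anthropic()

requests = [
    {
        "custom_id": path.stem,  # e.g. "P07", so results map back to transcripts
        "params": {
            "model": "claude-sonnet-4-5",  # placeholder model ID
            "max_tokens": 2048,
            "messages": [
                {
                    "role": "user",
                    "content": "Code this transcript against the taxonomy below. "
                    "Flag ambiguous codes rather than forcing them.\n\n"
                    + path.read_text(),
                }
            ],
        },
    }
    for path in sorted(pathlib.Path("transcripts").glob("*.txt"))
]

batch = client.messages.batches.create(requests=requests)
print(batch.id)  # store this, then poll for results once the batch completes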
The strategic implication. Most of the specialty AI-for-UXR tools on the market today are wrappers around the same underlying models, marked up substantially per seat, with features that are useful but not load-bearing for teams who already have a strong methodology. If your team has the discipline to write a clear PRD, define the analysis taxonomy, and run a controlled iteration loop on a workflow, Claude Cowork plus a prompt-cached, batch-processed pipeline in Claude Code will replicate most of what those tools do, at materially lower cost, with your data inside your sandbox and your methodology as the source of truth. This is the working pattern we help research teams set up at PH1 and AI Value Acceleration when the answer to "should we buy [specialty tool X]?" is "let's see what your existing methodology can do first."
The technical features above are necessary, not sufficient. The harder part is the methodology audit and the brief discipline that make the workflow worth caching at all, which is exactly what the companion piece covers in depth.
Claude Cowork: Setup and What to Expect
Setup: 15–30 minutes
- Go to claude.ai → sign in or create an account → upgrade to the Teams plan under Settings → Plans
- Navigate to Projects (left sidebar) → New Project
- Name the Project for the engagement or methodology
- Under Project Knowledge, upload your core context: research framework, discussion guide, analysis taxonomy, relevant client brief
- Add a system prompt. Even one sentence changes the quality: "You are assisting a senior UX researcher. Before summarising any pattern, identify and explicitly surface disconfirming evidence."
- Invite team members from Settings → Team Management
Every conversation inside that Project now inherits all of that context. You stop re-explaining your methodology every session.
What it does well for UX research: applying a consistent analysis taxonomy across multiple interview sessions, generating discussion guide variants from a master template, structuring debrief notes into a standardized format, drafting synthesis with your methodology already loaded, keeping shared team methodology visible and consistently applied.
What it won't do: catch contradictions you haven't asked it to find. Default LLM behaviour produces coherent, pattern-aligned summaries, and in qualitative UX research the most important finding is frequently the one that doesn't fit. Build the disconfirmation ask into every analysis prompt as a structural requirement, not a polite gesture.
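One structural form that ask can take, appended to every analysis prompt. The wording below is ours; adapt it to your methodology:

```
Before presenting any theme:
1. List every excerpt that contradicts or complicates the theme.
2. If no disconfirming evidence exists, say so explicitly and state
   where you looked.
3. Rate your confidence in the theme (high / medium / low) and give
   the single strongest counter-reading of the data.
```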
A practical evaluation criterion we borrow from Robert Brunner (founder of Apple's Industrial Design Group, designer of Beats by Dre) is to apply his test for any new technology after two weeks of real use: count the steps in your workflow before and after. If Claude Cowork has added steps rather than removed them, your brief or your project context is doing too little.
What Claude Code Looks Like in Practice
Almost every description you'll find online assumes you're a developer, and that framing is exactly what's intimidating most researchers off this tool unnecessarily. Here is what it actually feels like in research practice.
You install Claude Code on your computer. It opens in a terminal window — yes, the black box with the blinking cursor that probably gives you flashbacks to the IT helpdesk. Don't let the access point stop you, because the actual product is what happens after you type your instruction and press enter.
You navigate Claude Code into the folder where your research project lives. You write a plain-text file called CLAUDE.md that explains what this project is, what methodology you use, what rules you want it to follow. ("Always surface disconfirming evidence before summarising a pattern. Never invent quotes. If a code is ambiguous, flag it; do not force it.") Then you give it a task: analyse these transcripts using this taxonomy, output a structured CSV, flag low-confidence interpretations. You watch it work through the steps you would have done by hand.
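Here is a sketch of what that file can look like. The study details are invented placeholders; the rules are the ones quoted above:

```markdown
# CLAUDE.md

## Project
Discovery study on invoice-workflow friction. Forty pseudonymised
transcripts live in ./transcripts; the coding taxonomy is in
brief/taxonomy.md.

## Rules
- Always surface disconfirming evidence before summarising a pattern.
- Never invent quotes; every code must cite a verbatim excerpt.
- If a code is ambiguous, flag it; do not force it.
- Write outputs to ./analysis only. Never modify ./transcripts.
```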
The pattern across research leaders I have spoken to who have actually integrated this into their workflow is consistent. The first thing they did was write a careful brief. The second was test it on five transcripts, not fifty, to find where it failed. The third was tighten the brief based on those failures. They iterated for two to eight weeks before the output became consistent enough to trust without heavy correction. After that, every one of them describes a transformation in what their team can take on.
Equally consistent is what's missing from those accounts: a researcher solving a real research problem on a first prompt, a researcher trusting Claude Code's analytical judgment without expert review, a researcher succeeding by skipping the brief and hoping the model would figure out what they needed.
If you're trying to decide whether Claude Code is for you right now, the honest version of that question is: do I have the time, the kind of work, and the data volume to make a setup investment worthwhile in the next two months? If yes, the Anthropic Claude Code documentation will get you started. If no, Claude Cowork will serve you well in the meantime, and the option to expand later is always there.
The Risks You Cannot Ignore
The five risks below are the ones we walk every research team through before any participant data goes near these tools, in roughly this order.
PII and participant data
Your research participants consented to being interviewed. They did not consent to having their verbatims processed by a third-party AI system. This is a meaningful ethical obligation and, in many jurisdictions, a legal one. Pseudonymise before you paste anything: replace names, job titles, company names, locations, any identifier. This applies to every product at every tier. It is non-negotiable regardless of how much you trust the platform.
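A minimal sketch of the mechanical part, assuming a lookup table built from your participant tracker. A list-based pass only catches identifiers you have named, so keep a human spot-check in the loop before anything leaves your machine:

```python
# Minimal pseudonymisation sketch: replaces known identifiers with stable
# participant codes. Treat this as the mechanical pass, not the whole job.
import re

# Built from your participant tracker; entries here are examples only.
REPLACEMENTS = {
    "Jane Doe": "P07",
    "Acme Corp": "COMPANY_A",
    "Head of Billing": "ROLE_1",
    "Manchester": "CITY_1",
}

def pseudonymise(text: str) -> str:
    for identifier, token in REPLACEMENTS.items():
        # Whole-word, case-insensitive replacement keeps partial matches intact.
        text = re.sub(rf"\b{re.escape(identifier)}\b", token, text, flags=re.IGNORECASE)
    return text
```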
Platform data handling differs significantly by tier
| Tier | Used for model training? |
|---|---|
| Claude.ai Free | May be used |
| Claude.ai Pro | Not by default |
| Claude Cowork (Teams) | No |
| Claude Enterprise | No (per enterprise agreement) |
Check Anthropic's current privacy policy and commercial terms before you process any participant data. These policies are updated. Verify the current version, not what you remember reading months ago.
GDPR and IRB implications
If you work with EU participants, sending data, even pseudonymised, to a US-based AI provider has GDPR jurisdictional implications. Your IRB or ethics review protocol almost certainly was not written to cover LLM processing of participant data. If you're unsure whether your setup is compliant, treat it as non-compliant until you've verified with your ethics board or legal counsel.
Hallucination in qualitative synthesis
LLMs produce outputs that are statistically coherent given the input, and statistical coherence is not the same thing as research validity. A synthesis that smooths over an edge case is a finding-level failure, not a minor formatting issue. A 2023 Nature study found that AI-assisted researchers produced more output while measurably converging toward the median, and the most important UX findings are usually the ones that don't fit the dominant pattern. Build disconfirmation explicitly into every prompt.
Third-party AI tools are a real attack surface
In April 2026, Vercel was breached via a compromised third-party AI tool, with an employee's access to Context.ai used to reach Vercel's internal systems and customer credentials. That same month, Lovable's security crisis came to light: an API flaw had exposed source code, credentials, and AI chat histories for every project created before November 2025, for 48 days, and the company initially responded by calling it "intentional behavior." Every AI integration in your research operations stack carries permissions. Treat each one as a potential breach vector.
Deciding Is the Easy Part. Getting Value Is the Hard Part.
This article has helped you decide which tool fits which part of your workflow, how to set each one up, where the cost-saving levers (caching, batch) actually move the needle, and what privacy and security risks to manage on the way. That is the easy part. The harder part, the part that determines whether these tools compound your capability or quietly produce more mediocre work at higher speed, happens after the setup is finished.
The aggregate data is brutal. MIT's Project NANDA found in mid-2025 that 95% of enterprise generative AI pilots produce no measurable business return, despite $30–40 billion in collective investment. Boston Consulting Group's September 2025 study of 1,250+ companies found only 5% achieving AI value at scale, while 60% reported essentially no value at all, and only 37% of executives could demonstrate clear ROI from AI initiatives even as 85% increased AI investment year over year.
These are behavioural-layer failures rather than technology failures. The models work. What breaks down is the moment in the workflow where someone has to decide to use the tool well, abandon it, or fake using it. Organisations treated "deciding to adopt AI" as the difficult decision and "actually getting value from AI" as a problem that solves itself once the licences are bought. The same pattern that is wasting tens of billions at the enterprise level shows up at the individual researcher level: a researcher who installs Claude Code, runs a few prompts, gets mediocre output, and concludes "the tool doesn't work" is repeating the same mistake those companies are making at scale. The skill of using these tools well is a separate craft from the documentation, and it is the craft we work on with research teams who want to get there faster than trial and error allows.
The companion piece, Part Two: The Cognitive Shift Every UX Researcher Needs to Make, is the next thing to read.
A Few Words Before You Go
The discipline you trained for is being reshaped under conditions that are not fair and on a timeline that is not humane. Most senior researchers are working this out in real time, including the ones whose LinkedIn posts make it sound otherwise. The ones who come through this period with their standards intact are treating it the way they would treat learning any new method: deliberately, in paced iterations, and grounded in the same rigour they apply to everything else.
If you only do one thing this week, set up a single Cowork Project on a study you are actively running, attach your methodology brief and analysis taxonomy as Project Knowledge, and use it on two weeks of real research. Pay attention to where the output needs correction and where it doesn't. That observation is the first piece of data you have about how your methodology actually translates into a model context, and it is the same starting point we use when we're brought in to help research teams shift their working practice.
Then read Part Two for the cognitive shift that determines whether the setup compounds.
Three further resources worth your time:
- User Interviews — AI tools for UX research: a practical overview of the broader UXR AI tooling landscape.
- John Whalen on Maven: structured courses for practitioners building durable AI skills in research and design. He also joined the Product Impact Podcast (Episode 28: AI will transform product research) for an extended conversation on where AI-enhanced research excels and where human judgment remains irreplaceable.
- Quant UX Blog — Four areas of UXR thinking about AI/LLMs: a grounded framework across study design, analysis, application, and evaluation.
If your team wants advisory or training support setting this up internally, that is the work we do at PH1 and AI Value Acceleration.
Sources and Further Reading
- Microsoft Research & Carnegie Mellon (2025), The Impact of Generative AI on Critical Thinking
- METR (2025), Measuring the Impact of Early-2025 AI on Experienced Developer Productivity
- Indeed Design (2023), UX Design and UX Research Job Listings Plunged in 2023
- Measuring U (2025), How Does the UX Job Market Look for 2025?
- User Interviews (2025), State of User Research Report
- UX Army (2025), 7 Alarming Truths About AI-Powered User Research Platforms
- Nielsen Norman Group (2026), State of UX 2026
- VentureBeat (2025), OpenAI report reveals a 6x productivity gap between AI power users and everyone else
- MIT Project NANDA (2025), via Fortune, 95% of generative AI pilots at companies are failing
- Boston Consulting Group (2025), Are You Generating Value from AI? The Widening Gap
- Scientific American (2026), How Claude Code is bringing vibe coding to everyone
- Nature (2023), AI and scientific creativity
- Axios (2026), Anthropic's Claude Code transforms vibe coding
- Bleeping Computer (2026), Vercel confirms breach as hackers claim to be selling stolen data
- The Next Web (2026), Lovable security crisis: 48 days of exposed projects
- Stanford HAI (2025), AI Index Report
- Anthropic, Privacy Policy | Commercial Terms | Claude Cowork product page | Claude Code Documentation | Claude Code Sandboxing | Prompt caching | Message Batches API
Brittany Hobbs is COO and VP Research at PH1, CEO of AI Value Acceleration, and co-host of the Product Impact podcast. She has led research at Mozilla, Spotify, Google, BBVA, TELUS Health, and Schneider Electric across more than 300 engagements.