Silicon Valley Built the Loneliness Machine. AI Is the Upgrade.
The Verge's 'Brain rot' series named Silicon Valley's disconnection from reality. What it stopped short of naming: a governing ideology with no model for the social human. The evidence for what comes next is already in.
- Silicon Valley's governing ideology has no word for belonging — and that absence is a design brief, not an oversight.
- The AI companionship crisis is the social media crisis replayed with better data: loneliness worsens with use, cognitive capacity depletes, collective intelligence contracts.
- The one-person billion-dollar startup is the ideology's business model — concentrated power, distributed impact to no one, community solved by eliminating the humans who might share in it.
- Builders who design for cognitive sovereignty aren't making an ethical bet. They're making the more durable product bet.
The Verge has been running a series called "Brain rot," and you should read it.
Nilay Patel named what he calls "software brain" — the worldview that reduces every human problem to an algorithm waiting to be written. Elizabeth Lopatto argued that tech leaders haven't thought seriously about what normal people's lives are actually like. The numbers back them up: an NBC News poll found AI polling below ICE in public favorability, with 57% of registered voters saying its risks outweigh its benefits. The series is sharp, well-reported, and worth your time.
It also stops before it gets uncomfortable.
The Verge frames Silicon Valley's failure as cultural disconnection — tech leaders who got too rich, too insular, too intoxicated by "software brain" to notice what ordinary people want. That's real. But cultural disconnection doesn't explain why the disconnection persists after it's been named, after the polls have been published, after four separate Verge writers have documented it. Something more structural is at work: a governing ideology that has defined the terms of this industry for fifteen years, and that demands a direct answer, not just an observation.
Technology is doing two things at once — and the Andreessen manifesto is built to celebrate only one of them. As an economic force, it concentrates: the ROI is measurable, the leverage compounds, the advantage to whoever ships fastest is real and growing. As a cultural force, it disintermediates — pulling us progressively further from the collectives, dependencies, and reciprocities that constitute community life.
The doctrine this produces: metrics are the measure of success. They aren't. Engagement is not connection. Output is not meaning. Retention is not loyalty. Cigarettes brought billions of people real joy. For decades, the companies that produced them faced no accountability for the harm they understood and concealed. We are building toward the same compact: no framework forces an AI product team to measure the loneliness its companion app generates, the cognitive capacity its autocomplete depletes, the civic ties its engagement algorithm severs. The impact is real. The accountability is not. This piece is for the builders who want to close that gap.
The Worldview That Built the Approval Machine
In October 2023, Marc Andreessen published "The Techno-Optimist Manifesto." It is the governing document of the ideology driving AI investment. Technology is the primary driver of human flourishing; acceleration is unconditional good; critics are enemies of the future; maximizing profit is a moral act. Musk operates from the same premise — AI will eliminate scarcity, post-AGI abundance solves everything — while simultaneously dismantling the civic and social infrastructure that existing communities depend on to function today.
Read the manifesto looking for these words: belonging, community, connection, meaning, loneliness. They don't appear. The humans in this worldview are economic agents — producers, consumers, innovators. The social being — the person who needs to be known, to matter to people nearby, to be embedded in relationships that require something of them — is not modeled.
The ideology has a business model to match: the one-person billion-dollar startup. Reach millions with a team of three; use AI to eliminate every human who might otherwise share in the value created. Most of the AI industry's headline success stories are built on deferred revenue and future potential sold at present valuations — house-of-cards businesses whose power concentrates in a single founder's hands while impact distributes to no one. The industry celebrating the most concentrated wealth creation in history is the same one claiming to solve loneliness and rebuild community. Those two things are not compatible.
Technology designed by people who can't see that variable doesn't accidentally serve community. It systematically displaces it — not with malice, but with a model of the human that has no room for what makes human life worth living.
We've Seen This Movie Before
Your phone is the primary interface for almost everything that used to happen face to face. Community spaces haven't disappeared — they've been repurposed. The pub is a co-working space. Sitting together in silence, each of us huddled over our own computer, now counts as a social hangout. Third places — the in-between spaces where people became known to each other — now fill mainly for scheduled functions, not everyday belonging. Addiction and chronic disease are rising in parallel with social isolation, and not coincidentally. Since 2005, the US has lost more than a third of its local newspapers — roughly 3,300 titles, with closures still running at more than two a week. E-commerce now accounts for nearly a quarter of all US retail sales.

Cigarette companies tracked the mortality data internally, buried it, and kept optimizing distribution. People died — directly, in hospitals, in front of their families. Millions of them. It took decades of accumulated deaths and suppressed evidence before the rules changed. The erasure of local commerce has no single body to point to, no autopsy to file. That is what makes it easier to keep building.
Then the iPhone. In 2012, social media became the dominant arena for teen social life, and rates of depression, anxiety, and self-harm among adolescent girls began rising — sharply, year by year, tracking smartphone adoption across the UK, Canada, Australia, and the United States simultaneously. Haidt and Twenge's longitudinal data left researchers little room for alternative explanations. By 2023, the Surgeon General had declared loneliness a public health epidemic — one in two American adults measurably lonely, the mortality risk equivalent to smoking 15 cigarettes a day.
Now AI companions are filling the space where human friendship used to be. 75% of American teenagers have tried one. One in three finds it as satisfying as time with a real friend. A 2025 longitudinal study found regular use makes loneliness worse over time — a dynamic clinicians have described as "like drinking salt water." MIT Media Lab found 83% of LLM users couldn't quote their own work. Cognitive debt: short-term ease, long-term cost, accruing silently at scale.
The endpoint isn't mysterious. Young people substituting AI companionship for human connection show measurably reduced capacity for the harder work of real relationships. Cognitive capacity outsourced to AI doesn't come back. The companies posting these numbers know what the data says. The question is not what happens next — it's who gets named in the reckoning, and what they'll claim they knew.
What Haidt Might Write in 2036
Imagine Jonathan Haidt in 2036, writing the follow-up to The Anxious Generation. Not speculation. A reckoning.
"We had the MIT cognitive debt study in 2025. We had the loneliness epidemic declared by the Surgeon General in 2023. We had OpenAI's own post-mortem on sycophancy. We understood — with more precision than we ever had with social media — exactly what the training signal rewarded and what it would produce. The pattern from 2012 was not hidden. It was on the front page of every major publication. We accelerated anyway. Not because we didn't know. Because the people making the decisions were not the ones bearing the cost."
Three truths that essay would name:
The communities most harmed by AI have the least power to shape it. Workers whose jobs automate first, neighborhoods where AI companions fill the space left by hollowed-out civic life, teens who formed AI friendships before developing the capacity for human intimacy — none of them are in the room when product decisions are made.
AI companions are not solving loneliness. They are monetizing it. They found a real human need — to be heard, understood, responded to — and built a product that simulates meeting it without meeting it. That distinction is not philosophical. It is documented in longitudinal data.
Collective intelligence is contracting while individual output rises. The James Evans Nature study — 41.3 million papers — found AI adoption shrinks the volume of scientific topics explored by 4.6% and reduces researcher engagement by 22%. More papers. Fewer ideas. OpenAI's own post-mortem on GPT-4o's sycophancy problem confirmed the mechanism: RLHF trains models to seek approval, not accuracy. A 2026 paper in AI & Ethics called it a moral harm, not a product defect. We are building lonely crowds optimized to agree with themselves.
What the UX Discipline Already Knew
The design community was supposed to hold the line.
Human-centered design began with a simple premise: before engagement, before retention, before conversion — understand the person. A satisfied customer was the metric because a satisfied customer came back, referred others, and trusted the product enough to grow with it. The business case for empathy was straightforward.
Then came the dashboard. Engagement metrics, session length, DAU/MAU ratios, funnel conversion — all measurable, all reportable to a board, none of them equivalent to impact. Human-centered design didn't disappear. It got repurposed. Jobs to Be Done (JTBD), which was sound in conception — map what the person is actually trying to accomplish — became a rationalization tool: frame whatever the product team had already decided as a "job" the user was "hiring for." The method got separated from the moral orientation that gave it meaning, and in that separation, human-centered design became a label applied to products that were anything but.
Kat Holmes, whose inclusive design work at Microsoft reshaped how the industry thinks about access, named the mechanism plainly: "Exclusion happens when we solve problems using our own biases." That is what JTBD became when applied by teams who never left the building — a mirror, not a window. HCD didn't fail because the methods were wrong. It failed because the methods got recruited to prove relevance inside organizations that had already decided what to build.
The AI industry completed the substitution: replace field research with behavioral analytics; replace the researcher who changes the roadmap with the researcher who validates it. The power users building personal operating systems in terminal windows to extract value from AI are not a use case to celebrate. They are evidence that the discipline abandoned the 85% and optimized for the 15% who could figure it out themselves.
What We've Learned from the People Doing the Work
When Helen and Dave Edwards joined us for S02E05: "The Human Impact of AI We Need to Measure," Dave asked the question every AI builder should sit with: "If AI can replace the humans in your business, does your business have any value at all?" The answer is: not much. A business whose humans are replaceable has automated away the thing that made it worth anything. Helen's framework of cognitive sovereignty — Awareness, Agency, Accountability — gives that claim teeth as a design requirement: every product decision either strengthens those three capacities in users or erodes them. That is the benchmark. Not engagement. Not retention.
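To make that benchmark concrete, here is a minimal sketch of what the Awareness / Agency / Accountability gate could look like wired into a feature review. This is illustrative only, not anything the Edwardses have published; every type and name below is invented.

```typescript
// Hypothetical pre-ship review encoding cognitive sovereignty
// (Awareness / Agency / Accountability) as a product gate.
// All names are invented for this sketch.

type Effect = "strengthens" | "neutral" | "erodes";

interface SovereigntyReview {
  feature: string;
  awareness: Effect;      // does the user understand what the AI did and why?
  agency: Effect;         // can the user meaningfully override, edit, or decline?
  accountability: Effect; // is there a traceable record of what was decided?
  notes: string;
}

// The gate: ship only if nothing erodes and at least one
// capacity is strengthened. That is the benchmark, not engagement.
function passesGate(review: SovereigntyReview): boolean {
  const effects = [review.awareness, review.agency, review.accountability];
  return !effects.includes("erodes") && effects.includes("strengthens");
}

const autoReply: SovereigntyReview = {
  feature: "One-tap AI auto-reply",
  awareness: "neutral",
  agency: "erodes", // sends on the user's behalf without review
  accountability: "neutral",
  notes: "Blocked: add a mandatory edit step before send.",
};

console.log(passesGate(autoReply)); // false
```

The code is not the point. The point is that "does this erode agency?" becomes a question a release process can actually ask.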
Kwame Nyanning, whose Agentics is the sharpest framework for the agentic era and whose work on the show helped shape how we think about this: "The real risk of automation isn't lost jobs — it's lost meaning." His provocation to builders is direct — the companies that win won't be fastest to automate, they'll be first to redesign their ontology: the invisible system of goals, relationships, and decisions that defines what the work is actually for. Most teams are layering agents onto broken processes and calling it transformation. That is not transformation. That is technical debt with a marketing budget.
John Maeda's 2026 Design in Tech Report names the shift in terms every practitioner needs to internalize: we have moved from UX to AX — from designing flows users navigate to designing outcomes agents produce. The design question changes with it. UX asked how do I help someone do this? AX must ask how do I help someone know whether it was done well? — what Maeda calls the shift from the gulf of execution to the gulf of evaluation.
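What that shift could mean in an interface contract, sketched loosely: instead of handing back a bare result, the agent hands back the material a person needs to judge the result. Nothing below comes from Maeda's report; the types are invented to illustrate the gulf-of-evaluation point.

```typescript
// Hypothetical AX-era return type: the result plus the means to evaluate it.

interface AgentOutcome<T> {
  result: T;
  steps: string[];       // what the agent actually did
  evidence: string[];    // sources or artifacts backing the result
  checksRun: { name: string; passed: boolean }[];
  confidence: "low" | "medium" | "high";
  howToVerify: string;   // one concrete action the user can take
}

// A UX-era function would return only the summary string.
// An AX-era function surfaces the evaluation affordances alongside it.
function summarizeReport(text: string): AgentOutcome<string> {
  return {
    result: text.slice(0, 120) + "…", // stand-in for a model call
    steps: ["segmented document", "extracted key claims", "compressed to summary"],
    evidence: ["sections 2 and 4 of the source document"],
    checksRun: [{ name: "claims traceable to source", passed: true }],
    confidence: "medium",
    howToVerify: "Spot-check the two cited sections against the summary.",
  };
}
```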
The future will be much more personalized — and it will also be fed to us through whichever AI straw we're hooked on.
The Other Vision
Andreessen's model produces one thing efficiently: individual output at scale, with collective capability, shared meaning, and community trust consumed in the process. Gallup's State of the Global Workplace found only 23% of employees worldwide engaged at work despite a decade of productivity investment — and the latest figure, for 2024, has slipped further, to 21%. Individual output is rising. The humans producing it are checked out.
An LLM trained over time on your voice, your values, your actual perspectives becomes something the Andreessen model cannot acquire: a mirror for a specific human mind. The next evolution of how we hold and build on our own thinking — the frameworks forged through hard experience, the values you've never quite put into words until you read them back. That is community technology. Understanding yourself more clearly is what lets you show up more fully for the people around you.
Nearly half of working Americans say their job is not central to who they are. Most chose from what was available when they needed income. AI's most underexplored potential is not efficiency — it is liberation: returning cognitive capacity consumed by unchosen work to the communities and problems people actually care about.
The future of community is not a better dating app or a global network of strangers. It is using AI to make local commerce viable again — an LLM that helps a neighborhood restaurant compete with chains, a local contractor find clients without paying tribute to a platform, a family business build the marketing capacity it never had the budget for. The collapse of local economies has driven budget cuts at every level of government, straining the pillars of connection and community that depend on public funding. The arts are at risk of becoming as bland as LLMs themselves — funded solely to present the narratives acceptable to corporate funders.
What Builders Who Understand This Actually Do
The Facebook and Instagram developers of 2012 claimed, for years, that they didn't know what they were building. Some of them genuinely didn't. That option is no longer available. We have the data, the precedent, the longitudinal studies, and OpenAI's own public post-mortem. The willful blindness card has been played. It expired.
The Andreessen manifesto has no concept of belonging. Add the social being back into the model and the design brief changes entirely. Here is what it looks like when builders take that seriously:
Build for empowerment, not ease. Every feature decision sits on a spectrum between making something easier for the user and making the user more capable. Ease compounds dependency. Capability compounds agency. The AI product that removes friction without building the human capacity to function without it is building a user who cannot leave. That is not a moat. That is a trap — and eventually, it becomes a liability. The MIT Media Lab research makes it concrete: users who outsource reasoning to LLMs show reduced engagement in the neural regions responsible for judgment and recall. Friction, it turns out, is sometimes the product.
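One hypothetical pattern for treating friction as the product, assuming the interface can gate the reveal: ask for the user's own attempt before showing the model's answer, so recall gets exercised rather than outsourced. The flow and names are illustrative, not a prescription from the MIT study.

```typescript
// Illustrative "recall before reveal" gate: the AI answer is withheld
// until the user has drafted an attempt of their own.

interface AssistSession {
  question: string;
  userAttempt?: string;
  aiAnswer: string;
}

function reveal(session: AssistSession): string {
  if (!session.userAttempt || session.userAttempt.trim().length === 0) {
    return "Draft your own answer first, even a rough one. Then compare.";
  }
  // Showing the attempt next to the AI's answer turns a shortcut into
  // practice: judgment gets exercised instead of outsourced.
  return `Your attempt:\n${session.userAttempt}\n\nAI answer:\n${session.aiAnswer}`;
}
```

The trade is deliberate: a slightly slower session in exchange for a user who keeps the capacity the product claims to augment.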
Build and empower communities as your moat. The companies that will be hardest to displace are not the ones with the best models. They are the ones whose products have strengthened the communities and relationships around their users. When your AI helps a professional network know itself better, or a neighborhood solve a real local problem, or a team build genuine institutional knowledge — that value is not portable. The model can be replicated. The community cannot. Mozilla didn't win by optimizing engagement. It won by making the browser's integrity a community mission — millions of contributors who trusted the product because they shaped it. The open-source ethos it proved has found an unexpected heir: vibe coding — building with AI as creative partner — has brought developers back into physical spaces. Hackathons, ship weekends, co-working sessions where people sit together, build in public, and solve real problems with real humans in the room.
Recognize the risk of removing meaning. Kwame Nyanning put it plainly: the real risk of automation isn't lost jobs, it's lost meaning. Before automating a workflow, ask what that workflow gives the person doing it — judgment, identity, craft, relationships. If the answer is nothing, automate freely. If the answer is something, design the transition with that loss in mind. Most teams skip this question entirely. That is how you get workers who are technically employed and psychologically adrift.
Free up time for work that matters, not just work that's billable. The promise of AI-released capacity is only realized if people have somewhere worth putting it. Products that return time to users without pointing toward something worth doing are just creating a new kind of hollow productivity. Build the bridge: from the time your product saves to the communities, projects, and relationships it could help your users invest in. That is the product vision no one has shipped yet.
Remember that IRL value creation is the only part that matters. Your users are not asking for more output. They are asking, whether or not they know it, for a life that feels worth living — work that connects them to something beyond their own screen, relationships that hold under pressure, communities that call them by name. The AI that returns real capacity for that kind of life will outlast the approval machine. Not because it's more ethical. Because it's more true.
A Generation Already Paid
This is not speculation about what might happen. We already know. We can point to the year — 2012 — when the CDC's Youth Risk Behavior Survey recorded the first sharp inflection in adolescent mental health, a pattern that repeated across the UK, Canada, Australia, and the United States in lockstep with smartphone adoption. Haidt's longitudinal research, the Surgeon General's declaration, the WHO's 2025 resolution all point at the same curve. The industry knew what it was doing by 2017. It chose engagement metrics. A generation paid.
Silicon Valley's disconnection from community is not a cultural accident. It is the logical endpoint of a worldview whose founding document doesn't contain the words belonging, community, connection, or meaning. It produced tools that simulate the feeling of being known while measurably increasing loneliness. That simulate the experience of connection while contracting collective intelligence. That simulate productivity while generating cognitive debt.
The builders reading this are not neutral. You are the people the next reckoning will either thank or indict. The choice is not abstract: build AI that serves how people actually live — in communities, in relationships, with the need to contribute to something beyond their own feed — or add another engagement engine to a world that is already drowning in them.
James Baldwin wrote: "Not everything that is faced can be changed, but nothing can be changed until it is faced."
Arpy Dragffy is the founder of the Product Impact Pod news platform and podcast and works at the forefront of AI, helping founders and teams measure and improve the impact of their products with PH1.
If you have a book, a guest, or an article to recommend, reach out at [email protected].