Silicon Valley's AI Is Repeating the Social Media Mistake

The Verge's 'Brain rot' series named Silicon Valley's disconnection from reality. The deeper problem is a governing ideology that has no working model of the social human, and the data on what that produces is already in.

Arpy Dragffy · 14 min read
Photo: Generated via Flux 1.1 Pro
Overview
  • Neither Andreessen's Techno-Optimist Manifesto nor the broader accelerationist literature uses words like belonging, community, or meaning, and AI products designed without those variables predictably fail to produce them.
  • The longitudinal data on AI companions now shows the same loneliness curve smartphones produced after 2012, with similar drops in collective scientific exploration and reasoning capacity.
  • The one-person billion-dollar startup is the ideology's preferred business model: power concentrates in a single founder while the humans who would have shared in the value are designed out.
  • Products built around cognitive sovereignty (awareness, agency, accountability) are harder to displace than products built around engagement, because the value lives in the user and the community, not just the model.

The Verge has been running a series called "Brain rot." It is worth reading, but it stops short of the harder argument.

Nilay Patel names what he calls "software brain": the worldview that reduces every human problem to an algorithm waiting to be written. Elizabeth Lopatto argues that tech leaders have not thought seriously about what normal people's lives are actually like. The polling backs them up: AI now polls below ICE in public favorability, and 57% of registered voters say its risks outweigh its benefits.

The Verge frames this as cultural disconnection: tech leaders too rich, too insular, too intoxicated by software brain to notice what ordinary people want. That is true, but it does not explain why the disconnection persists after it has been named, after the polls have been published, after four separate Verge writers have documented it. Cultural disconnection is downstream of something more durable: a governing ideology that has shaped this industry for fifteen years and that nobody is interrogating directly.

Technology is doing two things at once, and the Andreessen manifesto is built to celebrate only one of them. As an economic force, it concentrates: the ROI is measurable, the leverage compounds, and shipping fastest is a real and growing advantage. As a cultural force, it disintermediates, pulling people away from the collectives, dependencies, and reciprocities that make community life work. The doctrine that follows treats engagement as connection, output as meaning, and retention as loyalty, and mistakes the metric for the thing itself.

For decades the cigarette companies faced no accountability for harm they understood and concealed. AI product teams now operate in a similar vacuum: nothing forces them to measure the loneliness their companion app generates, the cognitive capacity their autocomplete depletes, or the civic ties their engagement algorithm severs. The impact is real and the accountability is not, and the rest of this piece is about what builders who want that gap closed should actually do.


The Founding Document Has No Word for Belonging

In October 2023, Marc Andreessen published "The Techno-Optimist Manifesto", the governing document of the ideology driving most AI investment. Its premises: technology is the primary driver of human flourishing, acceleration is an unconditional good, critics are enemies of the future, and profit-maximization is a moral act. Musk writes from the same premise — AI eliminates scarcity, post-AGI abundance solves everything — while his own actions dismantle civic and social infrastructure that existing communities depend on now.

Search the manifesto, or Musk's abundance rhetoric, for the words belonging, community, connection, meaning, or loneliness. They are not there. The humans these texts model are economic agents: producers, consumers, innovators. The social being — the person who needs to be known by name, to matter to people nearby, to live inside relationships that require something of them — is absent from the model. Technology designed without that variable does not accidentally serve community. It systematically displaces it, not from malice but from a working model of the human that leaves no room for what makes human life worth living.

The ideology has a business model that fits its anthropology. The "one-person billion-dollar startup" reaches millions with a team of three by using AI to eliminate every human who would otherwise share in the value created. Many of the industry's headline success stories are deferred revenue and future potential sold at present valuations, with power concentrating in a single founder and the value accruing to almost no one else. An industry celebrating the most concentrated wealth creation in history is not also going to solve loneliness or rebuild community; the goals are mechanically incompatible.


We've Seen This Movie Before

The phone has become the primary interface for almost everything that used to happen face to face. Community spaces have not disappeared; they have been repurposed: the pub is now a co-working space, and silently sharing a room while everyone stares at their own laptop now counts as a social hangout. The third places where people used to become known to each other have shifted toward scheduled functions and away from everyday belonging. Addiction and chronic disease are rising alongside social isolation, not coincidentally. Since 2005 the US has lost more than a third of its local newspapers, roughly 3,300 titles, with closures still running at more than two a week. E-commerce now accounts for nearly a quarter of all US retail sales. The cigarette industry took decades of suppressed evidence and accumulated deaths before regulators moved; the erasure of local commerce has no single body to point to and no autopsy to file, which is part of what has made it easier to keep building.

Then came the iPhone. In 2012 social media became the dominant arena for teen social life, and rates of depression, anxiety, and self-harm among adolescent girls began rising sharply, year by year, tracking smartphone adoption across the UK, Canada, Australia, and the United States simultaneously. Haidt and Twenge's longitudinal data left researchers little room for alternative explanations. By 2023 the Surgeon General had declared loneliness a public health epidemic, with one in two American adults measurably lonely and a mortality risk equivalent to smoking 15 cigarettes a day.

AI companions are now filling the space where human friendship used to live. A Stanford research summary reports that 75% of American teenagers have tried one, and one in three finds it as satisfying as time with a real friend. A 2025 longitudinal study of nearly a thousand participants found that heavier daily use correlated with greater loneliness, dependence, and problematic use across all interaction modes, a pattern clinicians describe as "drinking salt water." The MIT Media Lab's brain study of LLM-assisted writing found that 83% of users could not quote their own work afterward, with reduced engagement in the neural regions responsible for judgment and recall. The young people substituting AI companionship for human connection are measurably losing the capacity for the harder work of real relationships, and the cognitive capacity outsourced to AI does not come back. The companies posting these numbers see the data. What is unresolved is who will be named when the reckoning is written and what they will claim they did not know.


What the 2036 Follow-up Will Say

It is worth imagining Jonathan Haidt writing the 2036 follow-up to The Anxious Generation. The MIT cognitive debt study landed in 2025. The Surgeon General's loneliness advisory landed in 2023. OpenAI's own sycophancy post-mortem landed shortly after. The training signal and what it would produce were understood with more precision than the industry ever had with social media, and the 2012 pattern had been on the front page of every major publication for a decade. The follow-up's central claim will be that the people making the decisions were not the people bearing the cost, and that the industry accelerated anyway because there was nothing in the metrics, the manifestos, or the financial incentives requiring them to do otherwise.

Three claims that book will likely make are already supportable from current data.

The first is that the communities most harmed by AI have the least power to shape it. Workers whose jobs automate first, neighborhoods where AI companions fill the space left by hollowed-out civic life, and teens who formed AI friendships before developing the capacity for human intimacy are not in the room when product decisions are made.

The second is that AI companions are not addressing loneliness; they are monetizing it. They identified a real human need — to be heard, understood, and responded to — and built a product that simulates meeting it without actually meeting it, a distinction that is now documented in longitudinal data rather than just philosophy.

The third is that collective intelligence is contracting while individual output rises. The James Evans Nature study of 41.3 million papers found AI adoption shrinks the volume of scientific topics explored by 4.6% and reduces researcher engagement by 22%, producing more papers about fewer ideas. OpenAI's own post-mortem on GPT-4o confirmed the mechanism in language models: RLHF trains for user approval, not accuracy. A 2026 paper in AI and Ethics characterizes this as a moral and epistemic harm rooted in the training method itself rather than a product defect that can be patched. The industry is producing lonely crowds whose models are optimized to agree with them.


What the UX Discipline Already Knew, and Then Forgot

Human-centered design began with a straightforward premise: before engagement, retention, or conversion, understand the person. A satisfied customer was the metric because a satisfied customer came back, referred others, and trusted the product enough to grow with it. The business case for empathy was uncomplicated.

The dashboard era changed what got measured. Engagement metrics, session length, DAU/MAU ratios, funnel conversion: all measurable, all reportable to a board, none of them equivalent to impact. Human-centered design did not disappear; it was repurposed. Jobs-to-be-done was sound in conception — map what the person is actually trying to accomplish — but in many teams it became a rationalization tool, used to frame whatever the product team had already decided as a "job" the user was "hiring for." Once the method was separated from the moral orientation that gave it meaning, "human-centered design" survived as a label on products that were anything but.
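
To make the substitution concrete, here is a minimal Python sketch of the dashboard's favorite number, the DAU/MAU stickiness ratio. The event data is hypothetical and the computation is the standard one; the point is what the inputs can and cannot carry.

    from collections import defaultdict
    from datetime import date
    import calendar

    # Hypothetical event log: (user_id, activity_date) pairs.
    events = [
        ("u1", date(2026, 3, 1)), ("u1", date(2026, 3, 2)),
        ("u2", date(2026, 3, 1)), ("u3", date(2026, 3, 15)),
    ]

    def stickiness(events, year=2026, month=3):
        """DAU/MAU: average daily actives across the month divided by monthly actives."""
        daily = defaultdict(set)
        monthly = set()
        for user, d in events:
            if (d.year, d.month) == (year, month):
                daily[d].add(user)
                monthly.add(user)
        if not monthly:
            return 0.0
        days = calendar.monthrange(year, month)[1]
        avg_dau = sum(len(users) for users in daily.values()) / days
        return avg_dau / len(monthly)

    print(f"stickiness: {stickiness(events):.2f}")
    # The ratio moves the same way whether a user is building capability or
    # caught in a compulsion loop; nothing in these inputs can tell the board
    # which story it is reporting.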

Kat Holmes, whose inclusive design work at Microsoft reshaped how the industry thinks about access, names the mechanism plainly: "exclusion happens when we solve problems using our own biases." That is what JTBD often became when applied by teams who never left the building. The discipline did not fail because its methods were wrong; it failed because the methods were recruited to prove relevance inside organizations that had already decided what to build.

The AI industry completed the substitution. Field research has been replaced with behavioral analytics, and the researcher who used to change the roadmap has been replaced with the researcher who validates it. The power users now building personal operating systems in terminal windows to extract value from AI are not the success story they are presented as; they are evidence that the discipline has optimized for the 15% who can figure it out themselves and walked away from the 85% who cannot.


What the People Doing the Work Are Saying

When Helen and Dave Edwards joined us for S02E05, "The Human Impact of AI We Need to Measure," Dave asked a question every AI builder should sit with: if AI can replace the humans in your business, does your business have any value at all? The honest answer is usually no. A business whose humans are replaceable has automated away the thing that made it worth anything in the first place. Helen's framework of cognitive sovereignty — awareness, agency, accountability — operationalizes that argument as a design requirement: every product decision either strengthens those three capacities in users or erodes them, and that, rather than engagement or retention, is the benchmark.
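
As an illustration only (the field names and scoring scale below are my sketch, not a published Edwards or PH1 specification), the benchmark is small enough to state as a review gate in a few lines of Python:

    from dataclasses import dataclass

    # Hypothetical scale: -1 erodes the capacity, 0 leaves it alone, +1 strengthens it.
    @dataclass
    class SovereigntyReview:
        feature: str
        awareness: int       # does the user see what the system did, and why?
        agency: int          # can the user redirect, override, or walk away?
        accountability: int  # is it clear who answers when the output is wrong?

        def ships(self) -> bool:
            # The benchmark as a gate: nothing ships that erodes any capacity,
            # whatever it does for engagement or retention.
            return min(self.awareness, self.agency, self.accountability) >= 0

    review = SovereigntyReview("auto-drafted replies", awareness=1, agency=0, accountability=-1)
    print(review.feature, "ships" if review.ships() else "needs redesign")

The shape of the test is the point: a floor on human capacity rather than a ceiling on engagement.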

Kwame Nyanning, whose Agentics is one of the sharpest practical frameworks for the agentic era, argues that the real risk of automation is not lost jobs but lost meaning, and that the companies that win the agentic decade will be the first to redesign their ontology — the invisible system of goals, relationships, and decisions that defines what the work is actually for — rather than the fastest to automate. Most teams are layering agents onto broken processes and describing the result as transformation. It is closer to technical debt with a launch announcement.

John Maeda's 2026 Design in Tech Report frames the same shift in language practitioners need to internalize: the discipline is moving from UX to AX, from designing flows users navigate to designing outcomes agents produce. The design question becomes "how do I help someone know whether the work was done well?" rather than "how do I help someone do the work?" — a move Maeda describes as the shift from the gulf of execution to the gulf of evaluation. A more personalized future is coming whether we plan for it or not, and so far it is being fed to people through whichever AI straw they happen to be hooked to.


A Different Product Vision

Andreessen's model produces individual output at scale and consumes collective capability, shared meaning, and community trust to do it. Gallup's State of the Global Workplace found that only 23% of employees worldwide were actively engaged at work after a decade of productivity investment, and the latest 2024 figure has slipped further to 21%. Individual output is rising while the humans producing it are checked out; that is the Andreessen model working efficiently.

There is a different product vision available to anyone building with AI. An LLM trained over time on your voice, your values, and your actual perspectives becomes something an aggregator cannot acquire: a mirror for a specific human mind, the next evolution of how a person holds and builds on their own thinking — the frameworks forged through hard experience and the values that never quite resolved into words until you read them back. Tools that help a person understand themselves more clearly are also tools that help them show up more fully for the people around them, and that is closer to community technology than anything currently being marketed under the term.

Nearly half of working Americans say their job is not central to who they are; most chose what was available when they needed income. The most underexplored use of AI is not efficiency but liberation: returning cognitive capacity consumed by unchosen work to the communities and problems people actually care about. Similarly, the future of community is unlikely to be a better dating app or a larger global network of strangers. It is more likely to be AI that makes local commerce viable again — models that help a neighborhood restaurant compete with chains, a local contractor find clients without paying tribute to a platform, a family business build the marketing capacity it never had the budget for. The collapse of local economies has driven budget cuts at every level of government, weakening the public infrastructure that connection and community rely on, and the arts now face a similar flattening pressure when their funding depends on narratives acceptable to corporate sponsors.


Five Things Builders Who Take This Seriously Do Differently

The Facebook and Instagram developers of 2012 spent years claiming they did not know what they were building, and for some of them that was genuinely true. The same defense is no longer available. The data, the precedent, the longitudinal studies, and OpenAI's public post-mortem are all in the open, and willful blindness now requires more effort than seeing clearly does. With the social being added back into the model, the design brief changes in five specific ways.

The first is that the work is for empowerment rather than ease. Every feature decision sits on a spectrum between making something easier for the user and making the user more capable, and ease compounded over time produces dependency where capability would have produced agency. An AI product that strips friction without building the human capacity to function without it is building a user who cannot leave, which looks like a moat in the short term and behaves like a liability in the long term. The MIT Media Lab brain study makes the cost concrete: users who outsource reasoning to LLMs show measurably reduced engagement in the neural regions responsible for judgment and recall. Friction is sometimes the product.

The second is that community is the durable moat, not the model. The companies that will be hardest to displace are the ones whose products have strengthened the communities and relationships around their users. When an AI tool helps a professional network know itself better, or a neighborhood solve a real local problem, or a team build genuine institutional knowledge, the value lives outside the model and cannot be ported away when the model is replicated. Mozilla did not win by optimizing engagement; it won by making the browser's integrity a community mission, with millions of contributors who trusted the product because they shaped it. The open-source ethos it proved has found an unexpected heir in vibe coding, which has brought developers back into hackathons, ship weekends, and co-working sessions where people build in public and solve real problems with other humans in the room.

The third is that meaning is part of the workflow you are automating. Before automating a process, the practical question is what the process gives the person doing it — judgment, identity, craft, relationships, ownership — and what their work and life will look like once those have been pulled out. If the answer is nothing meaningful, automate freely. If the answer is something meaningful, the transition has to be designed with that loss in mind, which most teams currently skip and which is the mechanism that produces workers who are technically employed and psychologically adrift.

The fourth is that returned time has to point somewhere. The promise of AI-released capacity is only realized when people have somewhere worth putting the time, and products that hand back hours without helping users find something worth doing produce a new kind of hollow productivity. Building the bridge from saved time to invested time — into communities, projects, relationships, civic life — is the product vision very few teams have actually shipped, and the team that ships it well will own a category that does not yet have a winner.

The fifth is that the work that creates real value is offline. Users are not asking for more output. What they are asking for, whether they articulate it or not, is a life that feels worth living: work that connects them to something beyond their own screen, relationships that hold under pressure, communities that call them by name. AI that returns the capacity for that kind of life will outlast tools optimized for in-app engagement, primarily because the value is more durable, not because the builders were more virtuous.


A Generation Already Paid

This is no longer speculation about what might happen. The 2012 inflection in adolescent mental health is documented in the CDC's Youth Risk Behavior Survey and repeated across the UK, Canada, Australia, and the United States in lockstep with smartphone adoption. Haidt's longitudinal research, the Surgeon General's loneliness advisory, and the WHO's 2025 resolution describe the same curve. The industry understood what it was doing by 2017, chose engagement metrics anyway, and an entire cohort of teenagers paid the cost of that decision.

The pattern Silicon Valley is repeating with AI is not a cultural accident; it is the predictable output of a worldview whose founding document does not include words like belonging, community, connection, or meaning. The tools that worldview produced once already simulated the feeling of being known while measurably increasing loneliness, simulated the experience of connection while contracting collective intelligence, and simulated productivity while generating cognitive debt. The same dynamic is now visible in AI companion adoption, sycophancy in language models, and the narrowing of scientific exploration as researchers concentrate where the data is rich.

Builders working on AI products in 2026 are operating with more evidence about what the second-order effects look like than social-media product teams had in 2012. The decisions made at this stage will determine whether the next reckoning reads as "they did not know" or "they had the data and built it that way anyway," and the work of designing for awareness, agency, accountability, community, and meaning is closer to product strategy than ethics. It is also where the more durable businesses will be built.


Arpy Dragffy is the founder of the Product Impact Pod news platform and podcast and works at the forefront of AI, helping founders and teams measure and improve the impact of their products with PH1.

If you have a book, a guest, or an article to recommend, reach out at [email protected].
