AI Governance Repeats Social Media’s Costly Mistakes, Warns Former Meta Policy Chief
As artificial intelligence rapidly transforms global economies and governance systems, industry veterans are warning of a sense of déjà vu from the social media era. Sean Evins, former Head of Public Policy at Meta and current Partner at Kekst CNC, argues that the technology sector is repeating with AI the same regulatory missteps that plagued social media platforms.
In Evins’s analysis, the “move fast and break things” mentality that drove social media innovation is now being applied to AI development with the same disregard for long-term consequences. Having had a front-row seat to the social media revolution during his tenure at Twitter and Meta, he observes that the same rapid-scaling approach is creating parallel governance challenges.
The pace of AI deployment mirrors the early days of social media, when platforms prioritized growth over responsible implementation. Regulatory frameworks are struggling to keep up with the technology, creating gaps that could lead to significant societal harms, a pattern of delayed governance last seen during social media’s global expansion.
Evins emphasizes that the “building the plane while flying it” approach, while effective for rapid innovation, creates systemic vulnerabilities. The unintended consequences of social media, including misinformation ecosystems and privacy erosion, are now being replicated in AI systems, and without proactive governance, AI could amplify those existing harms while introducing new regulatory complexities.
The parallels extend to corporate responsibility, where technology companies continue to prioritize product development over comprehensive impact assessment. Technology policy analysts note that the same optimistic assumptions that guided social media’s expansion, particularly faith in self-regulation and presumed positive societal impact, are now being applied to AI without sufficient evidence.
Current AI governance discussions echo early social media policy debates in their focus on theoretical risks rather than concrete guardrails. Recent industry reports suggest that the window for implementing effective AI governance is narrowing rapidly, much as it did during social media’s formative years. The concentration of AI development within a few major technology companies compounds these concerns, creating power imbalances similar to those observed in social media markets.
As AI systems become increasingly embedded in critical infrastructure and decision-making, the stakes for getting governance right have never been higher. The social media experience offers hard lessons about the costs of delayed action and the importance of multidisciplinary oversight; addressing these challenges will require collaboration among technologists, policymakers, and civil society from the earliest stages of development.
The time for course correction is now, before AI systems become too entrenched to regulate effectively. Learning from social media’s governance failures could help prevent similar—or potentially greater—consequences in the AI era.