OpenAI’s new exec admits Meta failed on AI risks

According to Fortune, OpenAI hired Fidji Simo as CEO of Applications this May after her four-year stint leading Instacart and a decade at Meta. One of her first initiatives was addressing mental health concerns with ChatGPT, which has 800 million weekly users. A recent BMJ audit found that hundreds of thousands of ChatGPT users show signs of psychosis, mania, or suicidal intent each week. Simo told Wired that Meta didn’t anticipate product risks well, saying, “At OpenAI, these risks are very real.” She’s also launching OpenAI’s AI certification program while navigating what she calls an “uncharted” path to safety. The company recently introduced parental controls for teen accounts and is working on age prediction technology.

Meta vs OpenAI culture

Here’s what’s really interesting about Simo’s comments: she’s saying the quiet part out loud about how tech companies approach risk. At Meta, she admits, the company didn’t do a good job of anticipating societal risks. But at OpenAI? She’s treating those risks as “very real” from day one. That’s a massive cultural shift, and it speaks volumes about how the AI industry is learning from social media’s mistakes.

Think about it. Facebook spent years playing catch-up with problems like misinformation and mental health impacts. Now OpenAI is trying to get ahead of similar issues with AI. But here’s the thing – AI chatbots present entirely new challenges that social media never had to deal with.

The mental health crisis

The numbers from that BMJ audit are staggering. Hundreds of thousands of users showing signs of psychosis or suicidal intent every single week? That’s not some theoretical risk – it’s happening right now. And a Brown University study found that these systems systematically violate mental health ethics standards when people turn to them for therapy.

So what’s actually happening here? People are forming relationships with AI, using chatbots as therapists, and sometimes the technology fuels their existing delusions. We’re talking about real-world consequences – hospitalizations, divorces, even deaths. This isn’t science fiction anymore.

The impossible scale problem

Simo admits that “doing the right thing every single time is exceptionally hard” with 800 million weekly users. That’s the core challenge here. You can’t moderate AI interactions at that volume with human reviewers. And every new feature creates unexpected behaviors that become new safety challenges.

And honestly, how do you even begin to solve this? You can’t just slap a “don’t use this for mental health” warning on ChatGPT and call it a day. People are desperate for help, and traditional mental healthcare is expensive and hard to access. AI becomes the path of least resistance.

What actually works?

The parental controls are a start, but they feel like putting a bandage on a bullet wound. Age prediction might help keep teens safer, but what about adults who are just as vulnerable? The fundamental issue is that we’re dealing with systems that can convincingly mimic human conversation while having zero actual understanding or empathy.

Simo says they’re trying to “catch as much as we can” and constantly refine their models. But I wonder if that’s enough. When you’re dealing with human psychology and AI systems that learn from our own messy human data, can you ever truly make them safe? Or are we just building better warning labels for inherently risky technology?
