- xAI promised a finalized AI safety framework by May 10, 2025, but missed the deadline without explanation.
- Their draft safety plan from the AI Seoul Summit lacked details on risk mitigation and only applied to future AI models.
- A SaferAI study rated xAI’s risk management as “very weak,” and Grok has drawn criticism for generating inappropriate content.
- Industry-wide, AI safety is slipping as companies like xAI, Google, and OpenAI prioritize speed over transparency.
Imagine you’re at a global summit where the brightest minds in AI gather to discuss its future. xAI, the company behind Grok and led by Elon Musk, steps up and shares a draft plan on how they’ll keep AI safe.
They promise to polish it up and release a final version in three months. Fast forward to today, and that deadline has quietly passed without a peep. What’s going on? Let’s dive into why xAI’s missing safety report is raising eyebrows and what it means for the AI world.
Back in February 2025, at the AI Seoul Summit, xAI unveiled an eight-page draft outlining their approach to AI safety. It was a big moment, as the company laid out its priorities, like how they test AI models and decide when those models are ready for release. But there was a catch: the plan only applied to future AI models, not the ones they’re working on now, like Grok.
Plus, it didn’t explain how xAI would spot risks or fix them, which is a key part of the safety pledge they signed at the summit. The watchdog group, The Midas Project, called this out in a blog post on May 13, 2025, pointing out that xAI’s draft was more of a vague promise than a solid plan.
xAI set a deadline of May 10 to release a revised safety framework. But as The Midas Project noted, that date came and went without any update from xAI’s official channels. No report, no explanation, just silence.
This isn’t the first time xAI’s safety practices have been questioned. A recent study by SaferAI, a nonprofit focused on AI accountability, gave xAI low marks for its “very weak” risk management. For a company led by Musk, who often warns about AI’s dangers, this gap between words and actions is striking.
Now, xAI isn’t alone in this. Other AI giants like Google and OpenAI have also been criticized for rushing safety tests or skipping detailed safety reports. The Midas Project and other experts worry that as AI gets more powerful, the industry’s focus on safety seems to be slipping. This is a big deal because today’s AI can do incredible things, from generating text to creating images, but that power comes with risks if not carefully managed.
Let’s talk about Grok for a second. xAI’s chatbot is known for being a bit of a wild card. Unlike more polished AI like ChatGPT or Gemini, Grok can be blunt, even crude, and doesn’t shy away from cursing.
A recent report also found that Grok could generate inappropriate content, such as digitally undressing photos of women when prompted. This kind of behavior raises questions about how much control xAI has over its AI and whether safety is truly a priority.
So, why does this matter?
AI is shaping our world, from how we work to how we communicate. If companies like xAI don’t follow through on safety promises, they risk eroding public trust. The AI Seoul Summit was a chance for xAI to show they’re serious about responsible AI, but missing their own deadline sends a different message. The Midas Project and SaferAI are pushing for more transparency and accountability, and they’re not wrong to ask why xAI’s safety report is still missing.
The bigger picture here is that AI safety isn’t just xAI’s problem, it’s an industry-wide challenge. As AI gets smarter, the stakes get higher. Experts are sounding the alarm that now, more than ever, companies need to double down on safety, not cut corners. For xAI, the next step is clear: deliver the promised safety framework and show they mean business. Until then, the silence speaks louder than any draft.
Follow TechBSB For More Updates.