ChatGPT Uninstalls Surge 295 Percent After Military Partnership Backlash


  • Sam Altman admitted OpenAI rushed its US defense deal announcement and called it sloppy.
  • The agreement now clarifies ChatGPT will not be used for domestic surveillance of US citizens.
  • ChatGPT uninstalls in the US have surged by 295 percent amid backlash.
  • Rival Claude has gained installs as ethical concerns and quality complaints grow.

OpenAI CEO Sam Altman is attempting to steady the ship after a turbulent few days that have seen criticism mount over the company’s new agreement with the US Department of War. In an internal memo later shared publicly, Altman acknowledged that the announcement was pushed out too quickly and without enough clarity.

His words were unusually candid. He described the rollout as rushed and admitted it came across as opportunistic and sloppy. For a company that has long framed itself as safety-focused and cautious, the perception that it hurried into a military partnership has clearly struck a nerve with users.

The revised wording of the agreement now states that ChatGPT-powered systems under the deal shall not be intentionally used for domestic surveillance of US persons and nationals. That clarification appears aimed directly at critics who feared the technology could be deployed for mass monitoring inside the United States.

Still, for many observers, the damage may already have been done.

Surveillance fears and ethical fault lines

Concerns about the militarization of artificial intelligence are not new, but this deal has brought them into sharp focus. Critics argue that once AI systems are embedded within defense infrastructure, oversight becomes murkier and the line between support tools and active weapons systems can blur.

Altman did not directly address fears around autonomous weapons in his latest comments. Instead, he emphasized that OpenAI intends to work through democratic processes and would resist any unconstitutional directive. In a striking statement, he said he would rather face jail than comply with an order he believed violated the Constitution.


That pledge may reassure some, but skepticism remains strong. Anthropic, the company behind Claude, recently walked away from its own potential arrangement with the same department, reportedly after failing to secure sufficient safety guarantees.

Altman has since urged the US government to reverse a reported decision to freeze out Anthropic from certain official channels, calling it a very bad move.

The broader debate now extends beyond one contract. It touches on who controls advanced AI, how it is governed, and whether commercial developers can maintain ethical boundaries once national security interests enter the picture.

ChatGPT uninstalls surge as users look elsewhere

While executives debate principles, users are voting with their thumbs. Data from mobile analytics firm Sensor Tower indicates that ChatGPT uninstall rates in the United States have jumped by 295 percent over recent days. That means nearly four times as many people as usual are removing the app from their devices.

At the same time, rival app Claude has seen a noticeable bump. Installs reportedly rose sharply over a single weekend, and the app has climbed the charts on Apple’s App Store. The timing suggests at least some users are switching platforms in response to the controversy.

Ethics are not the only factor at play. Across social platforms, including Reddit, some longtime ChatGPT users have been voicing frustration over what they see as declining response quality. The recent retirement of the GPT-4o model drew criticism, with some arguing that newer iterations feel less consistent or less capable at certain tasks.


For a product that became synonymous with accessible AI, the combination of ethical unease and perceived performance dips presents a real reputational challenge.

A pivotal moment for OpenAI and the AI industry

This episode may prove to be more than a temporary storm. It underscores how tightly intertwined AI development has become with public trust. OpenAI’s brand has been built not just on technical prowess but on the promise of responsible innovation. When that narrative wavers, user loyalty can shift quickly.

Altman’s admission that the company moved too fast suggests an awareness that transparency and timing matter as much as the substance of a deal. By revising the language and openly acknowledging missteps, OpenAI appears to be trying to regain control of the conversation.

Whether that will be enough remains uncertain. The debate over AI’s role in defense and surveillance is unlikely to fade. Governments see strategic value in advanced models, while citizens increasingly demand clear guardrails and accountability.

For now, OpenAI finds itself navigating a delicate balance among commercial growth, government partnerships, and the expectations of millions of everyday users. The coming weeks will reveal whether this backlash is a short-term dip or a signal of deeper unease about where AI is headed next.


Emily Parker