- Nudify apps remain available on major app stores despite policy bans
- Hundreds of millions of downloads show the scale of the issue
- Apps often disguise features to bypass moderation
- Regulators are increasing pressure on tech companies
Apple and Google are once again facing scrutiny over the presence of so-called “nudify” apps on their platforms, despite having clear policies that prohibit such content. A new report by the Tech Transparency Project claims that both companies continue to host and even promote apps capable of generating nonconsensual sexualized images.
The issue is not just about availability. Researchers say search results within app stores often surface these tools directly. Even autocomplete suggestions can guide users toward similar apps, raising concerns about how actively these platforms are curating content.
According to the report, dozens of such apps have collectively reached hundreds of millions of downloads and generated significant revenue. That scale suggests the problem is not isolated, but systemic.
A cycle of removal and return
Both Apple and Google have taken action when specific apps are flagged. Apple confirmed it removed several apps highlighted in the report, while Google said it has suspended others and is continuing its investigation.
However, enforcement appears inconsistent. Researchers observed that after removals, similar apps quickly reappear under different names or with slight modifications. Some developers rebrand their tools as general image editors or AI generators, making it harder to detect misuse during review.
This creates a recurring cycle in which problematic apps are taken down, only to be replaced by near-identical versions. Critics argue that this reflects gaps in review processes rather than one-off enforcement failures.
Hidden capabilities and misleading presentation
One of the more concerning findings is how these apps present themselves. Not all explicitly advertise nudification features. Instead, some position themselves as harmless face-swap or AI editing tools, while quietly enabling sexualized outputs once installed.
In certain cases, apps included preloaded templates or categories that encouraged inappropriate use. Others relied on user-generated content, making moderation even more complex.
Experts say this ambiguity allows apps to bypass initial screening while still enabling harmful behavior in practice. It also blurs the line between legitimate creative tools and those designed for exploitation.
Growing regulatory pressure
The controversy comes at a time when governments are tightening oversight on digital platforms. Lawmakers in multiple countries are pushing for stronger accountability around nonconsensual imagery and deepfake technology.
Recent legislation in the United States criminalizes the distribution of such content and requires platforms to act quickly when violations are reported. The United Kingdom is also preparing laws that could hold tech executives personally accountable if harmful material is not removed.
Industry observers believe this growing pressure could force companies to rethink how they detect and prevent misuse, especially as AI tools become more powerful and accessible.
The bigger challenge for tech platforms
At the heart of the issue is a broader challenge. App stores are designed to promote engagement and visibility, often rewarding apps that attract attention. Controversial or sensational features can sometimes boost discoverability, even if they violate policies.
Researchers argue that without more transparent and consistent enforcement, harmful apps will continue to find ways onto these platforms. The combination of financial incentives, algorithmic promotion, and evolving AI capabilities makes this a difficult problem to solve.
For Apple and Google, the question is no longer whether these apps exist, but how effectively they can prevent them from resurfacing.
