- Gemini AI blocked over 8.3 billion malicious ads in 2025
- More than 99 percent of harmful ads stopped before going live
- Nearly 25 million advertiser accounts were suspended
- AI analyzes behavior and intent to detect advanced scams
Google says it is gaining ground in its ongoing battle against online ad fraud, and it is doing so with the help of its own artificial intelligence. The company revealed that its Gemini platform played a central role in identifying and blocking harmful advertising activity across its network in 2025, stopping billions of bad ads before they ever reached users.
According to the company, its AI-driven systems are now capable of detecting more than 99 percent of policy-violating ads at the submission stage. That means most malicious campaigns are shut down before they even go live, a significant shift from reactive moderation to proactive enforcement.
Billions of bad ads stopped before reaching users
The scale of the problem Google faces is enormous. As one of the largest advertising ecosystems in the world, its platform is a constant target for cybercriminals looking to exploit its reach. Fraudsters often hijack legitimate accounts or create new ones, then use generative AI tools to produce highly convincing fake ads that mimic real brands.
These deceptive campaigns are designed to trick users into clicking through to fraudulent websites, where scams or malware may follow. Google says that in 2025 alone, it removed or blocked more than 8.3 billion ads and suspended nearly 25 million advertiser accounts.
A significant portion of this activity was tied directly to scams. The company reports that hundreds of millions of scam-related ads were eliminated, along with millions of accounts connected to fraudulent operations.
Fighting AI-generated threats with smarter AI
What makes this wave of fraud particularly challenging is the growing use of generative AI by bad actors. These tools allow scammers to produce realistic ad copy, imagery, and branding at scale, making detection more difficult than ever.
Google’s response has been to double down on its own AI capabilities. Gemini analyzes vast amounts of data in real time, including account behavior, campaign patterns, and historical signals. This allows it to identify suspicious intent rather than just obvious rule violations.
The company says this shift is crucial. Instead of relying solely on static rules, Gemini can interpret context and detect subtle indicators of fraud, even when attackers attempt to disguise their activity.
This approach also enables earlier intervention. Harmful ads can now be flagged and blocked at the point of creation, rather than after they begin circulating.
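The contrast the article draws, static keyword rules versus scoring behavioral signals to infer intent, can be illustrated with a toy sketch. This is purely a conceptual example: the function names, signals, and thresholds below are hypothetical and do not describe Google's actual systems.

```python
# Hypothetical sketch: a static rule check vs. a simple behavior-based
# risk score. All signals, weights, and thresholds are illustrative only.

BLOCKED_TERMS = {"guaranteed returns", "free crypto"}

def static_rule_flag(ad_text: str) -> bool:
    """Flag ads containing an explicitly banned phrase (rule-based)."""
    text = ad_text.lower()
    return any(term in text for term in BLOCKED_TERMS)

def behavior_score(account: dict) -> float:
    """Combine weak behavioral signals into a single risk score in [0, 1]."""
    score = 0.0
    if account["account_age_days"] < 7:
        score += 0.4  # brand-new accounts carry more risk
    if account["campaigns_last_hour"] > 20:
        score += 0.3  # a burst of campaigns suggests automation
    if account["impersonates_known_brand"]:
        score += 0.3  # creative mimics a real brand
    return min(score, 1.0)

def review(ad_text: str, account: dict, threshold: float = 0.6) -> str:
    """Decide at submission time, before the ad ever serves."""
    if static_rule_flag(ad_text):
        return "block"            # obvious rule violation
    if behavior_score(account) >= threshold:
        return "hold_for_review"  # suspicious intent, no explicit rule hit
    return "approve"
```

The point of the sketch is the second branch: an ad whose text passes every static rule can still be held at creation time because the account's behavior looks like a scam operation, which is the shift from reactive to proactive enforcement the article describes.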
A broader push toward proactive ad safety
Google’s efforts extend beyond detection alone. The company emphasized that its advertiser verification programs continue to serve as an additional safeguard, helping ensure that those placing ads are legitimate businesses.
At the same time, AI-powered moderation is being expanded across more ad formats. Google noted that many responsive search ads are already reviewed instantly, with plans to bring similar real-time protections to other types of campaigns.
The broader goal is clear. As cybercriminals adopt more advanced tools, platforms like Google must evolve just as quickly to maintain trust and safety across their ecosystems.
While the numbers shared by Google highlight progress, they also underscore the scale of the threat. Billions of malicious ads in a single year point to a rapidly growing challenge. Whether AI can continue to stay ahead of increasingly sophisticated attackers remains an open question, but for now, Google is betting heavily on Gemini to lead that fight.
