
AI Bot Traffic Is Now 40% of the Web. Publishers, Pay Attention.

May 11, 2026



Key Points

  • Bad bots now account for 40% of all web traffic, up from previous years, according to Thales' 2026 Bad Bot Report.
  • AI-enabled bot attacks rose from 2 million to 25 million per day in a single year, a more than twelvefold increase.
  • For publishers, this isn't just a cybersecurity story. It's an invalid traffic (IVT) problem with direct revenue consequences.
  • Bot inflation distorts your analytics, corrupts your audience data, and puts your advertiser relationships at risk.
  • The right response is cleaner traffic, better verification infrastructure, and demand partners who take IVT seriously.

What the Bot Numbers Say

The 2026 Bad Bot Report from Thales puts hard numbers on a problem publishers have been quietly absorbing for years. Daily AI-enabled bot attacks jumped from 2 million to 25 million in a single year. Bots now make up more than 53% of all web traffic, and roughly 40% is classified as "bad bots," covering everything from data scrapers to botnets built to take sites offline.

The most heavily targeted industries include retail, education, and government. Publishers aren't called out by name in the report, but content-heavy sites are exactly the kind of infrastructure bad bots love: crawlable, high-traffic, and often less hardened than typical enterprise software targets.

Tim Chang, general manager of applications and security at Thales, put it plainly: "The challenge is no longer identifying bots. It's understanding what the bot, agent, or automation is doing, whether it aligns with business intent, and how it interacts with critical systems."

That framing applies directly to ad-supported publishing.


What Bot Inflation Does to Publisher Revenue

Most publishers think about IVT as an ad fraud problem. It's bigger than that.

When bots inflate your session and pageview counts, your audience data lies to you. Advertisers buying against your audience segments are reaching fewer real humans than the numbers suggest. That gap between reported traffic and verified human traffic is exactly what surfaces during a brand safety audit or a post-campaign review, and when it does, CPMs and direct deal renewals take the hit.

The damage shows up across several dimensions:

  • Inflated impressions: Bot traffic generates ad impressions that don't deliver any real advertiser value. DSPs and verification vendors catch this eventually, and the recalibration lands on your effective CPMs.
  • Corrupted audience data: If your analytics platform can't distinguish human sessions from bot sessions, your first-party data segments are polluted. That matters a lot in a cookieless environment where your own data is a core monetization asset.
  • Floor price miscalibration: Price floor strategies built on bad traffic data are optimizing against phantom demand signals. You're tuning floors to win auctions for sessions that don't convert for advertisers.
  • Verification friction: If a brand safety vendor or DSP flags elevated IVT on your domain, you get suppressed in buying algorithms. That suppression is quiet, and it compounds.

The revenue impact isn't always immediate. It erodes steadily until a direct sales conversation surfaces it, or until a programmatic CPM trend that should be rising starts falling instead.
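The mechanics of that erosion can be sketched with a toy billing model. The figures below (10 million impressions, a $2.00 CPM, a 15% IVT rate, and the `billable_revenue` helper itself) are hypothetical illustrations, not Playwire data: once a verification vendor filters flagged impressions, only the human share is billable.

```python
def billable_revenue(reported_impressions: int, cpm: float, ivt_rate: float) -> float:
    """Revenue after IVT-flagged impressions are removed from billing."""
    valid_impressions = reported_impressions * (1 - ivt_rate)
    return valid_impressions / 1000 * cpm

# Hypothetical site: 10M reported monthly impressions at a $2.00 CPM.
gross = billable_revenue(10_000_000, 2.00, 0.0)   # no filtering: $20,000
net = billable_revenue(10_000_000, 2.00, 0.15)    # 15% flagged: $17,000
print(f"Revenue lost to IVT clawback: ${gross - net:,.0f}")  # $3,000
```

The same gap is what surfaces in a post-campaign review: the publisher reported the gross number, but the buyer only got value from the net one.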


The IVT Problem Is Getting Harder to Solve

Bot detection used to rely on pattern matching: identify a known bad actor's IP range or user agent string, block it, move on. AI-powered bots broke that model.

Modern bad bots mimic human behavior well enough to fool basic detection. They rotate user agents, simulate mouse movements, and distribute traffic across residential proxy networks to avoid IP-based blocking. The Thales report notes that the larger shift in 2025 was "the normalisation of AI and automation within internet infrastructure itself." That's not a temporary spike. It's a structural change in what the web looks like.
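A toy illustration of why static rules fail where behavioral signals don't. Everything here is an assumption for demonstration (the blocklist contents, the 0.1 threshold, and both helper functions are invented, not any vendor's actual detection logic): a rotated user agent defeats a blocklist, but machine-regular request timing is harder to hide.

```python
import statistics

KNOWN_BAD_AGENTS = {"BadBot/1.0"}  # static blocklist: trivially evaded by rotation

def blocklist_flags(user_agent: str) -> bool:
    """Old model: flag only known-bad user agent strings."""
    return user_agent in KNOWN_BAD_AGENTS

def behavioral_flags(inter_request_secs: list[float]) -> bool:
    """Flag sessions whose request timing is machine-regular.

    Humans browse with bursty, variable gaps; scripted bots often fire
    at near-constant intervals. A tiny standard deviation relative to
    the mean gap (low coefficient of variation) is a behavioral signal
    that user-agent rotation can't mask.
    """
    if len(inter_request_secs) < 5:
        return False  # not enough evidence to judge
    mean = statistics.mean(inter_request_secs)
    stdev = statistics.stdev(inter_request_secs)
    return mean > 0 and (stdev / mean) < 0.1  # threshold is illustrative

# A bot rotating its user agent slips past the blocklist...
print(blocklist_flags("Mozilla/5.0 (Windows NT 10.0)"))  # False
# ...but its clockwork request timing still gives it away.
print(behavioral_flags([2.0, 2.01, 1.99, 2.0, 2.02]))    # True
print(behavioral_flags([0.8, 6.2, 1.5, 12.0, 3.3]))      # False
```

Real ML-based classifiers weigh dozens of signals like this at once, which is what makes them resilient to bots that defeat any single rule.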

For publishers, the table below captures the key differences between the old IVT problem and the current one:

| Dimension | Traditional Bot Traffic | AI-Powered Bot Traffic |
| --- | --- | --- |
| Detection method | IP blocklists, user agent filtering | Behavioral analysis, ML classification |
| Mimicry capability | Low, easily flagged | High, simulates human interaction |
| Traffic distribution | Centralized IP ranges | Residential proxies, distributed |
| Attack volume | Measured in thousands/day | Millions per day, automated at scale |
| Publisher revenue risk | Ad fraud exposure | Audience data corruption + ad fraud |

This is why publisher-side verification infrastructure matters more now than it did three years ago. Rules-based filtering isn't enough when AI bots are learning faster than the rules update.


What Publishers Should Do Now

The response to a more than tenfold increase in AI bot attacks isn't panic. It's tightening your infrastructure and making sure your monetization partners are doing the same.

A practical checklist for publishers:

  • Audit your analytics: Compare server-side traffic data to client-side analytics. Significant gaps often indicate bot inflation that your tag-based tools aren't catching.
  • Review your verification vendors: Make sure you're running IVT detection at the impression level, not just the session level. Session-level filtering misses a meaningful share of sophisticated bot traffic.
  • Pressure-test your first-party data: If your audience segments are built on pageview data, run them through a verification layer before using them in direct deal pitches or PMP deals.
  • Ask your monetization partner hard questions: What IVT detection do they run? What happens when impressions are flagged? Do they pass through clean traffic only, or is there reconciliation happening after the fact?
  • Watch your viewability and completion rates by segment: Unusual drops in viewability or video completion for specific audience segments or content categories can be early signals of bot contamination.

According to the Lunio 2026 Global IVT Report, an estimated 8.51% of all global digital ad traffic is invalid, translating into $63 billion in wasted ad spend in 2025 alone, out of a $740B+ global digital ad market.
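Those report figures are internally consistent, as a quick sanity check shows (the market-size figure is the report's, rounded):

```python
market = 740e9      # ~$740B global digital ad market (per the report)
ivt_rate = 0.0851   # 8.51% invalid traffic share
wasted = market * ivt_rate
print(f"${wasted / 1e9:.1f}B")  # ~$63.0B, matching the report's estimate
```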

Publishers navigating the real cost of blocking AI traffic also need to weigh the tradeoffs carefully: not all bot-adjacent traffic carries equal revenue risk, and blunt blocking strategies can suppress legitimate demand signals alongside the bad ones. The more precise question is how to allow beneficial bots while blocking harmful ones.


How We Think About IVT at Playwire

We're explicit about this in how we position our publisher network to advertisers: no IVT, no MFA. That's not a tagline. It's a filtering standard we apply to maintain CPMs and protect the value of publisher inventory across our network.

When advertisers buy through our ecosystem, they're buying verified human impressions. That verification matters for publishers because it protects your CPMs, your direct relationships, and your audience data from the kind of erosion that bot inflation causes over time.

The Thales numbers are a useful reminder that this problem isn't going away. AI-powered bots are getting better, more distributed, and harder to catch with legacy tools. Publishers who treat IVT as a background concern rather than an active revenue risk are the ones who get surprised when the CPM trends stop making sense.

Clean traffic is a revenue asset. Protect it like one.
