Reuters' AI Licensing Strategy: What Publishers Can Learn From It
May 12, 2026
Editorial Policy
All of our content is generated by subject matter experts with years of ad tech experience and structured by writers and educators for ease of use and digestibility. Learn more about our rigorous interview, content production and review process here.
Key Points
- Thomson Reuters CEO Steve Hasker confirmed that Reuters' AI licensing deals cover only text archives, not live news feeds, images, video, or audio.
- Hasker's three-part framework: archive-only scope, maximum pricing, and short contract terms designed to force renegotiation.
- Reuters is still seeing its breaking news content surfaced in major chatbots without compensation, and Hasker called it out directly with a specific example.
- The distinction between licensing your archive and allowing your live content to be scraped and surfaced is one every publisher needs to understand.
- Whatever you decide about AI licensing, the traffic you still control needs to be working as hard as possible.
What Happened
Speaking at the Truth Tellers Summit in London, Thomson Reuters president and CEO Steve Hasker laid out his company's approach to AI licensing deals in unusually specific terms. According to Press Gazette's coverage of the summit, Hasker said Reuters has signed "a select number" of licensing deals that cover the text archive only. No live news feed. No images. No video. No audio.
His framework is deliberately constrained: archive access at the highest achievable price, structured as short-term contracts that force AI companies back to the table before the landscape settles.
Hasker also described confronting an unnamed AI executive about Reuters breaking news appearing verbatim in chatbot outputs. He searched for coverage of a Russian missile strike on Ukraine, then for India-Pakistan conflict updates. Both times, the chatbot returned what was effectively the full Reuters article. The AI executive called the first instance a "one-off." The second search made that claim hard to sustain.
Essential Background Reading:
- AI and Publishers Resource Center: Playwire's full hub on how AI is reshaping publisher strategy, revenue, and content rights.
- AI Crawler Resource Center for Publishers: Everything publishers need to understand about AI crawlers, what they take, and what you can do about it.
- Generative AI and Publishers: How generative AI models are trained, how they surface content, and what it means for publisher economics.
- News Publisher Guide: A dedicated resource for news publishers navigating monetization, audience, and industry disruption.
See It In Action:
- Playwire's AI Approach: How Playwire uses AI to drive publisher yield without compromising quality or transparency.
- Publisher Earnings Index: Real earnings data across publisher verticals so you can benchmark your own revenue performance.
- News Publisher Monetization Guide: Practical monetization guidance for news publishers dealing with traffic disruption and ad revenue pressure.
Why This Matters for Publishers
Hasker's candor is notable. He didn't claim Reuters had figured this out. He said explicitly: "I can't look anyone in the eye and tell you we've nailed it." That's a more useful signal than most publishers get from industry summits.
The practical gap he identified is the one that should concern publishers across every vertical. Two separate things are happening simultaneously:
- Formal licensing deals for archive access
- Unauthorized scraping and surfacing of current content in AI outputs
A licensing deal for your archive does not protect your live content. Those are different products, different legal questions, and different revenue problems. If you're in a deal or considering one, that distinction is worth getting very clear on.
The "scrape and fair use" philosophy Hasker described is widely held in Silicon Valley. His direct account of showing a frontier model CEO the evidence and receiving denials is the most concrete public description of that dynamic from a major publisher CEO.
Hasker also pointed to something structural. AI companies now control the user experience, which means they control audience aggregation. Once a frontier model has ingested your content and built that into its product, your negotiating position in the next contract cycle is materially different from what it is today. Short contract terms are Reuters' hedge against that. It's a reasonable one.
Related Content:
- Block AI Crawlers: Practical guidance on which AI crawlers to block and how to configure your defenses correctly.
- AI Content and Publisher Rights: How AI companies use publisher content and what your options are for protecting it.
- AI Crawler Protection Grader: See how well your current crawler blocking setup is actually working with this free assessment tool.
- Big Tech's AI Licensing Report Card: How the major tech players stack up on AI licensing commitments and what publishers should do in response.
What Publishers Should Do
Reuters' approach isn't necessarily right for every publisher, but its three-part framework is worth stress-testing against your own situation.
| Decision Area | Reuters' Approach | What to Consider |
|---|---|---|
| Scope | Archive text only, no live feed | What content actually drives AI training value vs. what drives your direct audience? |
| Pricing | Highest achievable price | Are you pricing to fund future journalism/content, or just to cover current costs? |
| Contract length | Short-term, renegotiation-ready | The AI landscape in 12 months will look different than today. Lock-in has real costs. |
| Live content | Not licensed** | Monitor chatbots for unauthorized surfacing of your breaking or time-sensitive content. |
**Reuters has separate commercial arrangements for some AI search contexts (e.g., Microsoft Copilot). "Not licensed" here refers specifically to live news feed access for training purposes.
Beyond the licensing question, publishers need to monitor what's actually happening with their content in AI outputs. Hasker ran searches and documented what he found. You can do the same. Search for your most recent, distinctive content across major chatbots. If it's appearing verbatim or near-verbatim without attribution or traffic return, you have a factual basis for a conversation.
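That monitoring can be made systematic. The sketch below is a minimal, hypothetical approach, not a standard tool: it assumes you've saved your published article text and a manually captured chatbot response as plain strings, and it uses Python's standard-library `difflib` to measure how much of the article appears contiguously in the output. The 0.5 threshold and the `source` label are illustrative choices.

```python
from difflib import SequenceMatcher
from datetime import datetime, timezone
from typing import Optional

def verbatim_overlap(article: str, ai_output: str) -> float:
    """Length of the longest contiguous shared run, as a fraction
    of the article's length (1.0 = the whole article appears verbatim)."""
    matcher = SequenceMatcher(None, article, ai_output, autojunk=False)
    match = matcher.find_longest_match(0, len(article), 0, len(ai_output))
    return match.size / max(len(article), 1)

def log_instance(article: str, ai_output: str,
                 source: str, threshold: float = 0.5) -> Optional[dict]:
    """If a large contiguous chunk of the article appears in the AI
    output, return a date-stamped record for an evidence log."""
    score = verbatim_overlap(article, ai_output)
    if score < threshold:
        return None
    return {
        "source": source,
        "overlap_fraction": round(score, 3),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

# Illustrative strings, not real Reuters copy or real chatbot output.
article = "Reuters reported that the missile strike damaged port infrastructure overnight."
response = ("According to reports, Reuters reported that the missile strike damaged "
            "port infrastructure overnight, officials said.")
record = log_instance(article, response, source="example-chatbot")
print(record)
```

A record like this (screenshot alongside it) is exactly the kind of date-stamped evidence the checklist below calls for; the capture step itself still has to be manual or built against whatever interface each chatbot actually exposes.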
A few things worth doing now:
- Audit your robots.txt: Make sure your crawler blocking configuration reflects your current intent. If you haven't reviewed it since 2023, it's probably out of date.
- Document AI surfacing of your content: Screenshot, date-stamp, and record instances of your content appearing in AI outputs without compensation or attribution. This is evidence, not just frustration.
- Separate the archive from the live feed: If you're in licensing discussions, understand that these are different products with different values. Don't let a single deal cover both without pricing them separately.
- Track referral traffic sources: AI-driven referral traffic is still small for most publishers, but patterns are emerging. Know your baseline now so changes are visible.
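For the robots.txt audit, a baseline like the following is one common starting point. Treat it as a sketch: the user-agent tokens shown (GPTBot, ClaudeBot, CCBot, Google-Extended) are widely cited for AI training crawlers, but verify the current token against each crawler operator's own documentation before relying on it, and remember that robots.txt is advisory, not enforcement.

```
# Disallow common AI training crawlers site-wide.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Google's token for AI training, separate from normal search indexing.
User-agent: Google-Extended
Disallow: /

# Everyone else, including standard search crawlers, keeps normal access.
User-agent: *
Disallow:
```

Note that blocking a training crawler does not by itself stop your content from being surfaced through AI search or retrieval features, which is part of the archive-versus-live distinction this article describes.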
Next Steps:
- Publisher Ad Revenue Maturity Model: Benchmark where your monetization setup stands and identify the highest-impact areas to improve.
- Publisher Ad Revenue Maturity Model Assessment: Take the self-assessment to get a personalized read on your ad revenue maturity and what to prioritize next.
- Yield Experiment Playbook: A structured approach to testing your ad stack so you're maximizing every impression, not guessing.
- News Publishers Ad Revenue Resource Center: Revenue strategy and monetization guidance built specifically for news and editorial publishers.
Maximize the Traffic You Still Control
The licensing debate is important, but it runs in parallel with a more immediate problem. AI-driven search is compressing referral traffic for many publishers right now, regardless of whether they've signed deals or blocked crawlers.
Whatever your AI licensing decision, the traffic arriving on your site today needs to be monetized as efficiently as possible. Your ad setup needs to be working, your viewability needs to be optimized, and your demand stack needs to be pulling its weight.
We built our platform to give publishers exactly that kind of operational leverage. We handle yield optimization, demand relationships, and technical infrastructure so you're not leaving revenue on the table while you're navigating this. To see what that looks like in practice, explore the RAMP platform or talk to our team about where your setup stands.
The AI licensing landscape will keep shifting. What you earn from the audience you still have doesn't have to.
