If you're still optimizing purely for page view RPM, you're looking at the scoreboard while the game is being played on the field. Page view RPM tells you how well a single page monetized. Revenue per session (RPS) tells you how well your entire user experience monetized. That's a fundamentally different question, and it's the one that actually matters.
Think about it this way: you could have a page that earns a fantastic RPM but drives users away so fast they never view a second page. Your RPM looks great. Your total revenue? Not so much. RPS forces you to account for the relationship between monetization intensity and user behavior, which is exactly the tension yield teams need to be managing.
RPS is calculated as your total ad revenue divided by total sessions over a given period. It captures everything: ad density, CPMs, session length, pages per session, viewability, refresh behavior, and ad unit mix. When RPS goes up, you're genuinely making more money per user visit. When it goes down, something in that complex equation broke.
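The arithmetic itself is simple enough to pin down in a few lines (the revenue and session figures below are purely illustrative, not benchmarks):

```python
def revenue_per_session(total_ad_revenue: float, total_sessions: int) -> float:
    """Total ad revenue divided by total sessions over the same period."""
    if total_sessions <= 0:
        raise ValueError("need at least one session to compute RPS")
    return total_ad_revenue / total_sessions

# Illustrative: $4,200 in ad revenue across 120,000 sessions.
rps = revenue_per_session(4200.00, 120_000)
print(f"RPS: ${rps:.4f}")  # RPS: $0.0350
```

The complexity isn't in the formula; it's in the fact that every lever in this playbook moves one of its two terms.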
Every experiment in this playbook is designed to move that number. Some do it by increasing the value of individual impressions. Others do it by extending sessions so you get more impressions per visit. The best ones do both.
Need a Primer? Read these first:
- What is Ad Yield Management? Understand the fundamentals of yield optimization, team roles, and the hypothesis-test-measure workflow.
- How to Manage and Monitor Your Website Ad Revenue Metrics: Learn which metrics to track daily and how to benchmark performance before running experiments.
Running yield experiments without a framework is like throwing darts blindfolded. You might hit something, but you won't know why. Before diving into specific experiments, let's establish how to run them properly.
Every experiment starts with a hypothesis. Not "let's try something and see what happens" (that's not an experiment, that's gambling). A proper hypothesis looks like this: "If I do X for traffic segment Y, it will increase RPS because of Z." The "because of Z" part is critical. It forces you to articulate your reasoning, which makes it much easier to diagnose results.
Split your traffic between a control group (your current setup) and a test group (your experimental configuration). The exact split depends on your traffic volume and risk tolerance, but 50/50 is the gold standard for reaching statistical significance fastest. If you're nervous about a test, start with 80/20 (control/test) and scale up once you see directionally positive results.
Let tests run for at least one full week (ideally two) to account for day-of-week patterns in both traffic and advertiser demand. Ending a test on a Tuesday because it "looks good" is a great way to fool yourself.
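To check whether an observed RPS gap is more than day-to-day noise, one rough approach is Welch's t statistic on per-session revenue, sketched here with only the standard library (a real analysis would also compute degrees of freedom and a p-value, and would use thousands of sessions per arm, not six):

```python
import statistics

def welch_t(control: list[float], test: list[float]) -> float:
    """Welch's t statistic for the difference in mean per-session revenue."""
    m_c, m_t = statistics.mean(control), statistics.mean(test)
    v_c, v_t = statistics.variance(control), statistics.variance(test)
    return (m_t - m_c) / ((v_c / len(control) + v_t / len(test)) ** 0.5)

# Tiny synthetic per-session revenue samples (USD), for illustration only.
control = [0.02, 0.03, 0.04, 0.03, 0.02, 0.04]
test = [0.04, 0.05, 0.06, 0.05, 0.04, 0.06]
t = welch_t(control, test)
print(f"t = {t:.2f}")  # |t| above roughly 2 hints the lift is real
```

This is a sanity check, not a substitute for your A/B testing platform's significance reporting.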
Regardless of which specific experiment you're running, every test should track these core metrics. Changes to any one of them can affect RPS, and you need to understand the interplay.
| Metric | Why It Matters | Watch For |
| --- | --- | --- |
| Revenue Per Session | Your north star. Total revenue / total sessions. | Ensure both test and control have comparable traffic quality. |
| Session Duration | Longer sessions generally mean more ad impressions and revenue. | A spike in duration with flat RPS could mean users are idle, not engaged. |
| Pages Per Session | More pageviews = more opportunities to serve ads. | Ensure navigation isn't broken in the test variant. |
| Page View RPM | Revenue per 1,000 page views. Shows per-page monetization intensity. | RPM can increase while RPS decreases if session length drops enough. |
| Viewability Rate | Directly impacts CPMs. Advertisers pay more for viewable impressions. | Below 50% viewability may trigger SSP blocks or reduced demand. |
| Ad Impressions Per Session | Total impressions across a full session. The volume component of RPS. | More isn't always better. Watch for CPM depression at high volumes. |
| Bounce Rate | Users who leave after one page. High bounce = low RPS. | Changes in bounce rate are often the first signal of a UX issue. |
Not all experiments require the same level of effort or tooling. We've categorized each experiment in this playbook using the following tiers so you can prioritize based on your current capabilities.
| Tier | Effort Level | Tooling Required | Best For |
| --- | --- | --- | --- |
| Tier 1 | Low. Config changes, minimal dev. | Ad server + basic analytics | Any publisher. Start here. |
| Tier 2 | Medium. Requires segmentation and conditional logic. | A/B testing platform, traffic segmentation tools | Publishers with some ad ops capability. |
| Tier 3 | High. Custom development or advanced platform features. | Programmatic platform with ML, custom dev resources | Sophisticated teams or managed service partners. |
You can use any of the 30+ sample experiments below for inspiration, or start building your own.
Related Content:
- 10 Ad Revenue Optimization Ideas That Actually Move the Needle: Actionable optimization strategies covering price floors, demand sources, and ad unit performance.
- Ad Revenue Optimization: Session-Focused Strategies That Actually Work: Deep dive into session-level monetization including progressive layouts and engagement-based strategies.
- How to Build a Winning Ad Monetization Strategy for Your Website: The complete framework for ad unit selection, A/B testing methodology, and stack architecture.
Your organic search traffic is not the same as your social traffic. Your direct visitors are not the same as your email newsletter clicks. They arrive with different intent, different attention spans, and different value to advertisers. Treating them all identically is the yield management equivalent of a buffet restaurant that serves the same food at breakfast, lunch, and dinner. Technically functional, but a massive missed opportunity.
Traffic source segmentation experiments are some of the highest-ROI tests you can run because they unlock a fundamental truth: the optimal ad experience depends on who's visiting and why.
Hypothesis: Organic search users stay longer and view more pages. Serving them a lighter ad layout will extend session duration and increase total impressions per session, resulting in higher RPS compared to a one-size-fits-all layout. Conversely, social referral traffic that bounces quickly should see a denser layout that monetizes aggressively in the first pageview.
How to set it up:
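As a sketch of the routing logic (the segment names, referrer domains, and layout configs below are hypothetical placeholders, not recommendations): classify each session by referrer, then serve the layout variant mapped to that segment.

```python
from urllib.parse import urlparse

# Hypothetical referrer-to-segment mapping; extend with your real sources.
SEGMENTS = {
    "google.com": "organic_search",
    "bing.com": "organic_search",
    "facebook.com": "social",
    "reddit.com": "social",
    "t.co": "social",
}

# Hypothetical layout variants: lighter for organic, denser for social.
LAYOUTS = {
    "organic_search": {"inline_units": 2, "adhesive": True},
    "social": {"inline_units": 4, "adhesive": True},
    "default": {"inline_units": 3, "adhesive": True},
}

def layout_for(referrer: str) -> dict:
    """Pick a layout config from the session's referrer domain."""
    host = urlparse(referrer).netloc.removeprefix("www.")
    return LAYOUTS[SEGMENTS.get(host, "default")]

print(layout_for("https://www.google.com/search?q=rps"))  # lighter layout
```

In production this decision usually lives in your ad wrapper or tag manager, keyed off `document.referrer` and UTM parameters rather than a server-side lookup.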
What to measure:
What good looks like: RPS increases for the traffic sources where you customized the layout without a meaningful drop in session engagement metrics. Even a 5-10% RPS lift on your highest-volume traffic source can be significant at scale.
Watch out: Don't over-index on one metric. If your dense layout for social traffic pushes viewability below 50%, you may trigger SSP blocks that negate any gains.
Hypothesis: Organic search traffic carries stronger intent signals, making it more valuable to advertisers. Setting higher price floors on organic sessions will increase CPMs without tanking fill rate, because demand is strong enough to support it. Lower-value traffic sources (social, certain referrals) should run with lower or no floors to protect fill rate where demand is softer.
How to set it up:
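One way to judge the outcome is to compare revenue per 1,000 ad requests before and after the floor change; the fill rates, CPMs, and floor value below are purely illustrative:

```python
def revenue_per_1k_requests(fill_rate: float, avg_cpm: float) -> float:
    """Revenue per 1,000 ad requests: filled impressions (1000 * fill_rate)
    times revenue per impression (avg_cpm / 1000)."""
    return fill_rate * avg_cpm

# Illustrative organic-search segment: a floor lifts CPM, costs a little fill.
control = revenue_per_1k_requests(0.95, 1.80)  # no floor
test = revenue_per_1k_requests(0.92, 2.05)     # hypothetical $1.20 floor
print(f"lift: {(test - control) / control:+.1%}")  # lift: +10.3%
```

This framing makes the trade-off explicit: a floor "wins" only when the CPM gain outweighs the fill it gives up.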
What to measure:
What good looks like: CPMs increase on high-value sources without fill rate dropping more than 2-3 percentage points. The net revenue effect should be positive. If fill rate craters, your floors are too aggressive.
Watch out: Price floor experiments can take longer to show results because the auction ecosystem needs time to adjust. Don't pull the plug too early.
Hypothesis: Different SSPs value different traffic sources differently. An SSP that performs well on gaming content from organic search may underperform on social referral traffic. Testing different bidder configurations per traffic source will reveal which SSP combinations maximize revenue for each segment.
How to set it up:
What to measure:
What good looks like: Fewer bidders result in faster pages (lower latency) and comparable or higher CPMs. The latency reduction alone can improve session metrics enough to boost RPS even if per-impression CPMs stay flat.
Watch out: Removing an SSP means losing their demand entirely for that segment. Make sure you're removing genuinely low-performing partners, not ones that win infrequently but at high CPMs.
Hypothesis: Traffic from Reddit behaves differently than traffic from Google Discover, which behaves differently than a newsletter click. Each referral source produces users with distinct scroll patterns, session lengths, and engagement levels. Tailoring ad layouts to the expected behavior of each referral source will increase RPS.
How to set it up:
What to measure:
What good looks like: RPS improvements on at least 2-3 of your top referral sources. You won't win everywhere, and that's fine. Focus on the sources that account for the most sessions.
Watch out: Referral traffic volume for individual sources can be volatile. Ensure your sample sizes are large enough for statistical confidence before drawing conclusions.
These experiments are built on a simple premise: the deeper a user goes into a session, the more valuable that session becomes, and the more aggressively you can monetize it. A user who has scrolled 80% down a page or clicked to their third article has demonstrated intent. They're not leaving. That's the moment to introduce your highest-value ad units, not on the first page load when you have no idea if they'll stick around.
Hypothesis: Loading all ad units at page load wastes impressions on users who bounce immediately and adds unnecessary page weight. Loading ad units progressively as the user scrolls deeper will concentrate impressions on engaged users, boost viewability scores, and improve CPMs.
How to set it up:
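In the browser this is typically wired up with IntersectionObserver or your ad manager's lazy-load settings; the decision logic itself reduces to something like the sketch below (slot offsets and trigger margin are hypothetical):

```python
# Hypothetical slot positions (px from the top of the page).
SLOT_OFFSETS = {"inline_1": 800, "inline_2": 2400, "inline_3": 4200}
TRIGGER_MARGIN = 600  # request the ad this many px before the slot is visible

def slots_to_load(scroll_y: int, viewport_h: int, loaded: set[str]) -> list[str]:
    """Slots within the trigger window that haven't been requested yet."""
    horizon = scroll_y + viewport_h + TRIGGER_MARGIN
    return [slot for slot, top in SLOT_OFFSETS.items()
            if top <= horizon and slot not in loaded]

loaded: set[str] = set()
first = slots_to_load(0, 900, loaded)   # at page load, only inline_1 is near
loaded.update(first)
print(first, slots_to_load(1600, 900, loaded))  # ['inline_1'] ['inline_2']
```

The trigger margin is the tuning knob: too small and ads render late (lost impressions), too large and you're back to eager loading.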
What to measure:
What good looks like: Viewability increases significantly (5-15+ percentage points). CPMs follow. Total impressions per session may drop slightly, but the CPM increase more than compensates. Net RPS should be positive.
Watch out: Make sure your lazy loading implementation doesn't cause layout shift when ads inject. Test the user experience visually, not just the numbers.
Hypothesis: Returning visitors have already demonstrated they like your site enough to come back. They're more tolerant of ads and more valuable to advertisers (repeat visitors signal quality). Serving returning visitors a slightly higher ad density, and first-time visitors a lighter experience, will increase RPS on your most engaged segment while protecting acquisition of new users.
How to set it up:
What to measure:
What good looks like: RPS increases for returning users without a drop in the rate of return visits. If your return visit rate drops, you've pushed too hard and need to dial back.
Watch out: This test requires longer run times because you need returning visitors to actually return during the test window. Two weeks minimum, three is better.
Hypothesis: Users who navigate to a second or third page within a session have signaled engagement. Introducing higher-value ad units (video, interactive, rewarded) starting on the second pageview will capture more revenue from engaged sessions without risking bounce on the landing page.
How to set it up:
What to measure:
What good looks like: RPS increases because you're concentrating high-CPM units on users who stick around to see them, improving viewability and completion rates. Pages per session should remain stable or improve.
Watch out: If you currently run video units on page one and they perform well, this test may actually decrease performance. It works best when your page-one video viewability is already low.
Hypothesis: Holding back overlay-style ad units (floating video, adhesive banners, interstitials) until a user demonstrates engagement (X seconds on page, Y% scroll depth) will increase their viewability and completion rates, driving up CPMs on those units.
How to set it up:
What to measure:
What good looks like: Viewability and completion rates on overlay units increase significantly. CPMs follow. Users who never hit the engagement threshold never see the ad, which saves you wasted impressions and keeps your supply quality high.
Watch out: This is exclusively for overlay-style units. Attempting to delay-load units that affect page layout (like site skins, rail units, or in-content display) will destroy your CLS scores, damage Core Web Vitals, and potentially hurt your search rankings. Don't do it.
Your ad unit mix is arguably the lever with the most direct impact on both user experience and revenue. Every unit you add increases supply. Every unit you remove decreases it. The art is in finding the combination that maximizes the total revenue you extract from a session without tipping the balance into ad clutter territory, where CPMs drop, users leave, and SSPs start sending you uncomfortable emails.
Hypothesis: Replacing two or three inline display units with a single refreshing adhesive unit will simplify the page experience, improve session duration, and generate comparable or greater revenue through consistent viewability and regular refresh cycles.
How to set it up:
What to measure:
What good looks like: Session duration increases because the page is less cluttered. The adhesive unit's consistent viewability and refresh cycles generate comparable or better revenue than the inline units it replaced. RPS goes up.
Watch out: Make sure the adhesive unit doesn't obscure important navigational elements or content on mobile. A bad implementation drives users away faster than too many inline ads.
Hypothesis: Long-form article pages keep users engaged for minutes at a time, making them ideal for longer refresh intervals that boost viewability per impression and increase CPMs. Gallery pages and shorter content can support faster refresh intervals since users scan them quickly. Matching refresh intervals to content consumption patterns will increase RPS.
How to set it up:
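The routing can start as a per-template lookup with a policy floor; the intervals below are hypothetical, and the minimum should come from your demand partners' actual refresh policies:

```python
# Hypothetical refresh intervals (seconds) per content template.
REFRESH_SECONDS = {"article_long": 60, "article_short": 35, "gallery": 20}
POLICY_MIN = 30  # hypothetical demand-partner minimum; check your SSP terms

def refresh_interval(template: str) -> int:
    """Interval for the template, clamped to the partner policy minimum."""
    return max(REFRESH_SECONDS.get(template, POLICY_MIN), POLICY_MIN)

print(refresh_interval("gallery"))  # 30: the 20s target is clamped to policy
```

The clamp is the point: even where fast refresh looks profitable, the policy floor protects you from the flagging risk described below.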
What to measure:
What good looks like: Longer refresh intervals on long-form content result in higher viewability and CPMs per impression. Faster intervals on short content capture more impressions before the user navigates away. The combined effect lifts overall RPS.
Watch out: Be careful not to refresh too aggressively on any content type. SSPs and advertisers monitor refresh behavior, and overly fast refresh can get you flagged or blocked.
Hypothesis: Different high-impact ad formats perform differently depending on the content, the audience, and the time of day. Instead of always showing the same high-impact format, rotating between formats (interscroller, skin, adhesive video, rewarded) and measuring performance will reveal which format maximizes RPS for each context.
How to set it up:
What to measure:
What good looks like: One or two formats clearly outperform the others on an RPS basis. You may also find that the winner varies based on device type, traffic source, or content type, opening the door for even more targeted optimization.
Watch out: High-impact formats often have limited demand compared to standard display. Make sure you have adequate demand for any format you include in the rotation.
Hypothesis: Video ad units command dramatically higher CPMs than display, but poor implementation (autoplay at page load, intrusive positioning) can crater session duration. Testing different video positions (in-content, floating corner, adhesive) and trigger mechanisms (scroll-into-view, click-to-play, time-delayed) will find the combination that maximizes video revenue without damaging the session.
How to set it up:
What to measure:
What good looks like: You find the placement and trigger combination that maximizes video CPM and completion rate while maintaining session metrics. The best video implementations actually improve session engagement because they offer a content break rather than an interruption.
Watch out: Autoplay video with sound is a guaranteed way to tank session metrics and anger your users. If you're testing autoplay, it should always be muted with a user-initiated unmute option.
Related Content:
- Best Practices for Ad Clutter and Ad Density: The 30% mobile rule, adhesive unit strategies, and balancing supply vs. demand.
- Ad Layout Optimization: Best Practices for Ad Unit Performance: Unit positioning, sizing, and padding best practices for maximizing CPMs and viewability.
- Why We'll Suggest That You Change Your Layout (And Why You Should Listen): How website-level viewability, session experience, and user segmentation drive layout decisions.
Price floors are one of the most powerful levers in yield management, and one of the easiest to screw up. Set them too high and you kill fill rate. Set them too low and you leave CPM on the table. The sweet spot is somewhere in the middle, and the frustrating truth is that "somewhere" changes constantly based on time of day, day of week, geography, device, and about a hundred other factors.
These experiments focus on making your auction mechanics smarter by varying price floors across dimensions that most publishers treat uniformly.
Hypothesis: Advertiser demand fluctuates throughout the day. Weekday mornings and primetime hours typically see stronger demand, while late-night and early-morning hours are softer. Increasing price floors during peak demand windows and relaxing them during off-peak windows will optimize the fill rate vs. CPM balance across the day.
How to set it up:
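A day-parting schedule can begin as a simple hour-bucket rule like the sketch below (the windows and multipliers are assumptions; derive yours from your own hourly CPM and fill data):

```python
def floor_for_hour(hour: int, base_floor: float = 0.80) -> float:
    """Hypothetical CPM floor schedule keyed to the audience's local hour."""
    if 7 <= hour < 11:    # assumed morning demand peak
        return base_floor * 1.25
    if 19 <= hour < 23:   # assumed primetime peak
        return base_floor * 1.40
    if hour < 6:          # overnight trough: relax floors to protect fill
        return base_floor * 0.60
    return base_floor

print([floor_for_hour(h) for h in (3, 9, 14, 21)])
```

Hour buckets are a starting point; once the pattern is validated, most platforms let you replace the static rule with automated per-hour floor rules.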
What to measure:
What good looks like: Peak-hour CPMs increase without significant fill rate loss. Off-peak fill rates improve, capturing revenue that was previously blocked by floors that were too high for the available demand. Total daily revenue increases.
Watch out: Your demand curve will be different from everyone else's, especially if your audience skews to a specific timezone or demographic. Don't use generic assumptions. Use your actual data.
Hypothesis: Desktop inventory generally commands higher CPMs than mobile because of larger ad formats, higher viewability, and different advertiser demand. Setting differentiated price floors per device category will allow you to be more aggressive on desktop (where demand supports it) while protecting mobile fill rate.
How to set it up:
What to measure:
What good looks like: Desktop CPMs increase without a meaningful fill rate decline. Mobile fill rates improve, and the additional filled impressions generate incremental revenue. Net effect is positive across both device types.
Watch out: Tablet traffic sits in a gray area. If your tablet volume is significant, treat it as a third segment rather than lumping it in with mobile or desktop.
Hypothesis: Tier 1 geos (US, UK, Canada, Australia) have dramatically stronger advertiser demand than Tier 2/3 markets. Your Tier 1 traffic can support much more aggressive price floors while your international traffic needs lower floors to maintain fill rate. Segmenting floors by geo will capture more value from premium markets without sacrificing revenue in developing ones.
How to set it up:
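A minimal tiering sketch (the country lists and floor values are illustrative placeholders, not recommendations):

```python
TIER_1 = {"US", "GB", "CA", "AU"}
TIER_2 = {"DE", "FR", "NL", "SE", "JP"}  # illustrative, not exhaustive

def geo_floor(country: str) -> float:
    """Hypothetical CPM floors by geo tier."""
    if country in TIER_1:
        return 1.50   # rich demand: push floors up
    if country in TIER_2:
        return 0.80
    return 0.20       # Tier 3: keep floors low to protect fill rate

print(geo_floor("US"), geo_floor("BR"))  # 1.5 0.2
```

In practice you'd express this as geo-targeted pricing rules in your ad server rather than application code, but the segmentation logic is the same.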
What to measure:
What good looks like: Tier 1 CPMs increase meaningfully (10%+). Tier 3 fill rates improve. The combined revenue impact across all tiers is positive. You're extracting more value where demand is rich and filling more inventory where demand is thin.
Watch out: Be aware that some SSPs have geo-specific demand that may not respond well to aggressive floors. Monitor at the SSP level, not just the aggregate.
Hypothesis: The first ad impression in a session has the highest viewability because the user just arrived and is actively looking at the page. Setting a premium price floor specifically on the first impression, then relaxing floors on subsequent impressions, will capture the maximum value from that high-attention moment.
How to set it up:
What to measure:
What good looks like: The first impression CPM jumps significantly with only a minor fill rate impact. Since this is your highest-viewability impression, demand is typically strong enough to absorb higher floors. The per-session revenue contribution of that first impression increases.
Watch out: This requires a platform that can differentiate between first and subsequent impressions at the ad call level. Not all setups support this natively.
Related Content:
- Revolutionizing the Use of Unified Pricing Rules to Maximize Ad Revenue: Advanced unified pricing strategies and how AI manages 1.2M+ floor rules per website.
- RTB Ads: Revenue Optimization Tips That Actually Work: Floor price segmentation tactics by device, day-parting, and demand partner management.
Every bidder you call adds latency to your page load. Latency hurts session metrics. Session metrics affect RPS. But every bidder you remove is demand you're leaving on the table. This category of experiments is about finding the right balance: enough demand to drive competitive auctions, not so much that your pages load like it's 2005 on a dial-up connection.
Hypothesis: Tightening your header bidding timeout will reduce page load times, improving user experience and session metrics. The key question is whether the lost bids from slower SSPs are worth more than the session improvements gained from faster pages. This experiment finds the optimal timeout window.
How to set it up:
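A useful preliminary step is to replay your bid logs against candidate timeouts and estimate what each window would have captured. A toy sketch (bidder names, latencies, and CPMs are made up):

```python
# (bidder, response_ms, cpm) from a hypothetical bid log sample.
BIDS = [
    ("ssp_a", 180, 2.40), ("ssp_b", 450, 2.10), ("ssp_c", 900, 3.00),
    ("ssp_d", 300, 1.10), ("ssp_e", 1400, 2.80),
]

def winning_cpm(timeout_ms: int) -> float:
    """Highest CPM among bids that arrive before the timeout (0.0 if none)."""
    return max((cpm for _, ms, cpm in BIDS if ms <= timeout_ms), default=0.0)

for t in (400, 800, 1000, 1600):
    print(t, winning_cpm(t))
```

In this toy log, cutting from 1000ms to 800ms drops ssp_c's 3.00 CPM bid, which is exactly the fat-versus-muscle distinction the experiment is meant to surface before you change anything in production.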
What to measure:
What good looks like: A 200-400ms timeout reduction improves page load times noticeably, session duration increases, and the lost bid revenue is offset (or exceeded) by the session improvement. Net RPS goes up.
Watch out: Don't tighten timeouts so much that you lose your highest-CPM bidders. The goal is to cut the fat (slow, low-value bids), not the muscle.
Hypothesis: For traffic segments you know are low-CPM (Tier 3 geos, certain referral sources), calling a full bidder stack is overkill. Reducing the number of bidders called for these segments will dramatically improve page speed for those users, potentially increasing session depth enough to offset the slightly lower per-impression CPMs.
How to set it up:
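The conditional logic is a simple per-segment allowlist (segment and bidder names below are hypothetical, and which segments count as low-value must come from your own revenue data):

```python
ALL_BIDDERS = ["ssp_a", "ssp_b", "ssp_c", "ssp_d", "ssp_e", "ssp_f"]
LIGHT_STACK = ["ssp_a", "ssp_c"]  # hypothetical top performers for the segment

# Segments treated as low-value; derive these from your own data.
LOW_VALUE_SEGMENTS = {"tier3_geo", "low_cpm_referral"}

def bidders_for(segment: str) -> list[str]:
    """Call a trimmed stack for low-value segments, the full stack elsewhere."""
    return LIGHT_STACK if segment in LOW_VALUE_SEGMENTS else ALL_BIDDERS

print(len(bidders_for("tier3_geo")), len(bidders_for("organic_us")))  # 2 6
```

In a Prebid-style setup this typically maps to building the ad unit's bid array conditionally before the auction starts.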
What to measure:
What good looks like: Pages load significantly faster for the reduced-bidder variant. Session duration and page depth improve. Even though per-impression CPMs may dip slightly, the additional pageviews per session more than compensate. RPS for these segments increases.
Watch out: "Low-value" is relative to your site. What counts as a low-value segment on a US-focused gaming site is different from a global education publisher. Define it based on your data, not industry generalizations.
Hypothesis: Identity solutions (unified IDs, hashed emails, etc.) boost CPMs by making traffic addressable to advertisers. But each identity call adds latency and cost. Testing different combinations of identity solutions will reveal which ones provide enough CPM lift to justify their overhead.
How to set it up:
What to measure:
What good looks like: One or two identity solutions provide the bulk of the CPM lift. Others add latency without meaningful return. Removing the underperformers speeds up your pages and reduces costs without hurting CPMs.
Watch out: Identity solution effectiveness varies dramatically by audience type. A publisher with high login rates will see very different results than one with mostly anonymous traffic. Your mileage will vary.
Related Content:
- Best Header Bidding Partners in 2025: A rundown of top SSPs and header bidding demand partners for your bidder stack decisions.
- How to Implement Prebid: A Step-By-Step Guide: Phased implementation, timeout optimization, and A/B testing Prebid configurations.
- Programmatic Monetization: The Complete Publisher's Guide: End-to-end guide to header bidding, SSP management, and programmatic auction mechanics.
Not all pages on your site are created equal. Your homepage serves a fundamentally different purpose than an article page. A gallery page has different user behavior than a tools page. Treating every URL on your site the same way is the yield management equivalent of a doctor prescribing the same medication for every patient regardless of symptoms. The experiments in this section focus on tailoring your monetization strategy to the specific characteristics of different page types and content categories.
Hypothesis: Your homepage, article pages, gallery pages, and utility pages all have different engagement patterns. Designing entirely different ad strategies per page template (different unit counts, different formats, different refresh settings) will outperform a single universal configuration.
How to set it up:
What to measure:
What good looks like: At least 2-3 of your template-specific strategies outperform the universal one. The site-wide RPS improvement from combining multiple template-specific wins is greater than any single experiment could achieve alone.
Watch out: Start with your highest-traffic templates first. Optimizing a template that accounts for 3% of your pageviews is a bad use of time compared to one that accounts for 60%.
Hypothesis: Short articles can only support a limited number of ad units before they become more ad than content. Long-form pieces can support more units spaced throughout without feeling cluttered. Matching your ad density to content length will find a better UX/revenue balance than a uniform approach.
How to set it up:
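A starting rule of thumb can key unit count to word count, clamped at both ends (the words-per-unit ratio and the cap are assumptions to tune against your own engagement data):

```python
def inline_unit_count(word_count: int, words_per_unit: int = 350,
                      max_units: int = 6) -> int:
    """Roughly one in-content unit per `words_per_unit` words, clamped so a
    short piece never becomes more ad than content."""
    return max(1, min(word_count // words_per_unit, max_units))

print(inline_unit_count(400), inline_unit_count(1400), inline_unit_count(5000))
```

As the watch-out below notes, word count is only a proxy; pairing this rule with scroll-depth data keeps it honest.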
What to measure:
What good looks like: Short-form content RPS stays stable or improves (fewer but higher-viewability impressions). Long-form content RPS increases (more units that maintain viewability thanks to engaged readers). Overall site RPS improves.
Watch out: Content length is a proxy for engagement opportunity, not a guarantee. A 2,000-word article that nobody reads past the first paragraph doesn't benefit from more ad units. Combine this with scroll depth data for best results.
Hypothesis: Pages that serve as session entry points (landing pages) should be optimized to encourage the next click, not to extract maximum ad revenue immediately. Reducing ad load on landing pages to improve navigation rates, then monetizing more heavily on internal pages (where the user has already committed to the session), will increase total session revenue.
How to set it up:
What to measure:
What good looks like: Bounce rate on landing pages drops. Pages per session increases. Even though the landing page itself earns less per pageview, the additional pageviews per session more than compensate. Total session revenue increases.
Watch out: This requires a willingness to accept that one metric (landing page RPM) will get worse in service of the metric that actually matters (RPS). Make sure stakeholders understand this trade-off before the test begins.
Hypothesis: If your site spans multiple content verticals (e.g., gaming, news, entertainment), different SSPs likely perform very differently across those categories. Some SSPs have stronger demand in gaming. Others excel in news. Running category-specific bidder configurations will ensure each content section gets the best possible auction competition.
How to set it up:
What to measure:
What good looks like: Categories where you've optimized the bidder stack show CPM improvements because you're running a tighter, more competitive auction. Categories with reduced bidder counts also see page speed improvements. Net RPS per category increases.
Watch out: This test requires a platform that can conditionally call bidders based on content signals. It's not something you can easily do with a basic header bidding wrapper.
Related Content:
- What Your Ad Revenue Dashboard Should Actually Show You: Real-time optimization visibility, revenue attribution, and the analytics you need to measure experiments.
- Take Control of Your Ad Strategy: 8 Things to Configure Without Engineering: Visual tools for A/B testing, price floors, layouts, and bidder management without dev resources.
These experiments address dimensions that publishers frequently treat as afterthoughts: how the ad experience should differ across devices, how consent status should inform your strategy, and how the way you collect consent itself affects your revenue. Each of these is a meaningful optimization surface that most publishers leave untouched.
Hypothesis: Beyond responsive design, mobile and desktop users exhibit fundamentally different behavior (scroll patterns, session length, content consumption speed). Designing completely different ad strategies per device, rather than simply adapting the same strategy responsively, will unlock RPS improvements on both.
How to set it up:
What to measure:
What good looks like: Both mobile and desktop RPS improve when given strategies designed specifically for their user behavior rather than a one-size-fits-all responsive approach.
Watch out: Mobile ad density is governed by the Coalition for Better Ads' 30% rule. Don't design a mobile strategy that violates this, no matter how good the revenue looks in testing.
Hypothesis: Users on slow connections are more likely to bounce if ad-related assets add significant page weight. Serving lighter ad configurations (fewer bidders, fewer heavy formats) to slow-connection users will preserve session length and potentially increase RPS for that segment despite lower per-impression CPMs.
How to set it up:
What to measure:
What good looks like: Slow-connection users see dramatically improved page load times, resulting in lower bounce rates and longer sessions. The lighter ad config generates fewer impressions per pageview but the extended sessions compensate. RPS for this segment increases.
Watch out: The Network Information API has limited browser support. You may need to rely on server-side connection quality estimation based on response times, which is less precise but more broadly available.
Hypothesis: Users who accept consent (and become addressable) generate meaningfully higher CPMs because advertisers can target them. Running a denser or more premium ad layout for consented users (where demand is rich) and a lighter layout for non-consented users (where CPMs are lower anyway) will optimize total RPS by matching ad intensity to expected revenue potential.
How to set it up:
What to measure:
What good looks like: Consented user RPS increases because you're leveraging higher demand with premium units. Non-consented user sessions may lengthen due to the lighter experience, partially offsetting their lower per-impression value. Overall RPS improves.
Watch out: Be very careful about privacy regulations when implementing consent-based differentiation. The strategy should always be additive (more for consented), never punitive (degraded experience for non-consented). And make sure your legal team reviews the implementation.
Hypothesis: How you present your consent management prompt affects consent rates. Consent rates affect addressability. Addressability affects CPMs. Testing different CMP presentation styles (banner vs. wall, button placement, pre-selected options) will reveal which approach maximizes consent rates and, consequently, downstream ad revenue.
How to set it up:
What to measure:
What good looks like: A CMP variant increases consent rates meaningfully (even 5-10 percentage points can matter at scale). The increased addressability translates to higher CPMs across your traffic. RPS increases.
Watch out: This is a minefield of regulatory risk. Any CMP testing must be reviewed by legal counsel to ensure compliance. A higher consent rate is worthless if it comes from dark patterns that violate regulations.
These experiments zoom out from individual ad units and pages to look at the session as a whole. They challenge some conventional wisdom about ad monetization and test whether restraint at certain points in a session can actually increase total revenue. These tend to be the most counterintuitive experiments in the playbook, and some of the most impactful.
Hypothesis: Capping the total number of ad impressions per session will improve viewability scores across the board, increase CPMs on the impressions you do serve, and potentially extend session duration by reducing the perception of ad overload. The result: fewer impressions at higher value per impression, and a net increase in RPS.
How to set it up:
What to measure:
What good looks like: Viewability increases significantly. CPMs follow. The reduced impression volume is more than offset by higher per-impression revenue. Sessions may also lengthen due to a less cluttered experience. Net RPS improves.
Watch out: The cap level matters a lot. Too aggressive and you're leaving real money on the table. Start conservative (small reduction) and tighten gradually. This is not a "cut your impressions in half" experiment.
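The session cap is mechanically simple: track impressions served per session and stop requesting once the cap is hit. A minimal sketch, assuming session state is available wherever ad requests are gated; the cap value of 30 is a placeholder to tune, not a recommendation:

```python
class SessionAdGate:
    """Per-session impression cap: serve normally until the cap,
    then suppress further ad requests for the rest of the session."""

    def __init__(self, cap: int = 30):  # placeholder cap; tune gradually
        self.cap = cap
        self.served = 0

    def try_serve(self) -> bool:
        """Returns True (and counts the impression) if under the cap."""
        if self.served < self.cap:
            self.served += 1
            return True
        return False
```

Per the Watch out note, start with a cap only slightly below your current per-session impression average and tighten it in later iterations as viewability and CPM data come in.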
Hypothesis: Giving users 5-10 seconds of ad-free content before the first ad loads will improve the initial site experience, reduce bounce rates, and lead to longer sessions. The lost impressions in those opening seconds will be more than compensated by the additional session depth they enable.
How to set it up:
What to measure:
What good looks like: Bounce rate drops noticeably. Session duration increases. Users who stay past the initial grace period generate more pageviews and more total impressions than they would have under immediate loading. The net revenue per session increases.
Watch out: Don't overstate the cost of the grace period when you read the results. Users who would have bounced in the first 5 seconds were unlikely to generate meaningful impressions anyway, so the "lost" impressions from the grace period are often near-zero-value impressions you weren't really losing.
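The grace-period gate reduces to one timing check: hold back the first ad request until enough of the session has elapsed. A sketch; the 7-second value is an assumed midpoint of the 5-10 second range being tested, and `ads_enabled` is a hypothetical helper name:

```python
import time
from typing import Optional

GRACE_SECONDS = 7.0  # assumed midpoint of the 5-10s range under test

def ads_enabled(session_start: float,
                now: Optional[float] = None,
                grace: float = GRACE_SECONDS) -> bool:
    """True once the ad-free grace period has elapsed; before that,
    the first ad request is held back."""
    if now is None:
        now = time.monotonic()
    return (now - session_start) >= grace
```

In practice the equivalent check usually lives client-side (a delayed first ad call), but the decision logic is the same: elapsed time versus a fixed grace threshold.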
Hypothesis: Once a session has generated strong revenue (hit a target RPS threshold), tapering ad density for the remainder of the session will prioritize user experience and encourage the user to return. The lifetime value of a loyal, returning user exceeds the incremental revenue from squeezing a few more impressions out of a single long session.
How to set it up:
What to measure:
What good looks like: Individual session RPS may decrease slightly (you're pulling back at the end). But return visit rates increase, and the 30-day user revenue goes up because those users are coming back more frequently. You're trading marginal session revenue for user loyalty, and the math works out.
Watch out: This requires a longer measurement window than most experiments because the payoff is in return visit behavior. It's also harder to measure cleanly because return visit behavior has many confounding variables. This is a Tier 3 experiment for a reason.
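The tapering decision itself is a threshold check on accumulated session revenue. A sketch under stated assumptions: the $0.05 target and the density counts are illustrative placeholders, and real implementations would likely taper refresh rates and unit counts together rather than a single density number:

```python
def ad_density_for_session(session_revenue_usd: float,
                           target_rps_usd: float = 0.05,   # illustrative target
                           full_density: int = 5,          # placeholder unit count
                           tapered_density: int = 2) -> int:
    """Once a session has earned its target revenue, switch to a lighter
    ad density for the remainder of the visit, trading marginal session
    revenue for return-visit behavior."""
    if session_revenue_usd >= target_rps_usd:
        return tapered_density
    return full_density
```

The hard part is not this check but the measurement plan around it: as the Watch out note says, the payoff shows up in 30-day return behavior, not in the session where the taper fires.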
Related Content:
- Strategies for Improving Ad Viewability: Lazy loading, refresh settings, and viewability optimization tactics for hitting 70%+ targets.
- Best Practices for Managing Poor Ad Yield Performance: How to build alerting systems, diagnose yield drops, and isolate root causes fast.
Thirty experiments is a lot. You're not going to run them all at once (please don't try). The right approach is to prioritize based on your current setup, your team's capabilities, and where your biggest revenue gaps are. Here's a framework for building your experiment roadmap.
These experiments require minimal tooling and can be launched with basic ad server configuration. Start here to build momentum and generate early revenue lifts that fund more sophisticated tests later.
Once you've captured the easy wins, move to experiments that require traffic segmentation and conditional logic. These have higher potential upside because they move you from a one-size-fits-all approach to a tailored one.
These experiments require sophisticated tooling, machine learning capabilities, or custom development. They represent the frontier of yield optimization and are where the biggest compounding gains live.
Here's where it gets exciting. Individual experiments might move your RPS a few percentage points each. But multiple successful experiments stack. The impact of combining traffic source segmentation with page-level optimization with smart price flooring is multiplicative, not additive. A 5% lift from Experiment 1, a 7% lift from Experiment 10, and a 10% lift from Experiment 13 don't add up to 22%. Multiplied together (1.05 × 1.07 × 1.10), they compound to roughly 23.6%, because each optimization improves the conditions for the next one.
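The compounding arithmetic from the example is worth seeing worked out, since the gap between additive and multiplicative stacking grows with every experiment you add:

```python
# Stacked experiment lifts from the example: 5%, 7%, and 10%.
lifts = [0.05, 0.07, 0.10]

additive = sum(lifts)            # naive sum: 0.22 -> 22%

compounded = 1.0
for lift in lifts:
    compounded *= 1.0 + lift     # each lift applies to an already-lifted base
compounded -= 1.0                # 1.05 * 1.07 * 1.10 - 1 = 0.23585 -> ~23.6%
```

With three modest lifts the difference is about 1.6 points; with ten stacked experiments it becomes substantial, which is why successful experiments are worth layering rather than running in isolation.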
Next Steps:
- Yield Ops Team: Build vs. Outsource: Determine whether to build your own experimentation team or partner with yield experts.
- Ad Revenue Estimates Suck. Here's Why: Why experimentation beats projections and how to evaluate monetization partners honestly.
If you've read this far, you understand the potential. You also probably understand the challenge: running yield experiments at this level of sophistication requires tooling that most publishers don't have and can't justify building.
Playwire's RAMP platform was purpose-built for exactly this. The config-based architecture lets you manage multiple versions running on subsets of your traffic. Traffic segmentation by source, geo, device, and custom criteria is built in. AI and machine learning algorithms can handle the experiments that are too complex for manual management, like dynamic price flooring across millions of factor combinations.
Our Price Floor Controller alone manages approximately 1.2 million different price floor rules per website. That's the equivalent of Experiments 13, 14, 15, and 16 running simultaneously and adapting in real time. No human team can match that, and we say that with all the respect in the world for human yield teams.
Whether you want to run these experiments yourself using our self-service platform, or hand them off to our yield optimization team through managed service, the infrastructure is ready.
Ready to stop guessing and start experimenting? Let's talk about what your first experiments should be.
We'll email you a downloadable PDF version of the guide that you can read later.