Balancing Ad Revenue and User Experience on Mobile Apps: The Operator's Framework
How many ads is too many? Frequency caps by format, ad density benchmarks by category, the first-session trap, and how to A/B test ad load without breaking retention.
Ad revenue and user retention are not inherently in conflict, but they do trade off past a specific threshold that varies by format and app category. For interstitials, the minimum safe interval is 60-90 seconds between impressions and no first interstitial before the user has completed at least one core action. Rewarded video is the only format where volume, when user-initiated, correlates with engagement improvement rather than decline. Banner and native ads have a lower retention cost per impression but collapse viewability above roughly 2-3 simultaneous placements. The right ad load is not the maximum your mediation stack allows. It is the highest load your D7 retention curve tolerates before it bends.
The actual tradeoff: ARPDAU, retention, and session length
The elasticity curve is the core operator concept here. Ad impressions per session and ARPDAU move together up to a threshold, then diverge. Past that threshold, ARPDAU gains flatten while D7 and D30 retention begin to fall. The curve shape is not theoretical. It is what every operator who has run a proper ad load A/B test observes, and the inflection point is different for every format and category combination.
ARPDAU for an ad-monetized app has three dominant drivers: impressions per session, eCPM per format, and session frequency. Retention is the multiplier on session frequency. An operator who increases impressions per session by 25% but drops D7 retention by 15% has not improved revenue. They have shifted it forward in the user lifecycle by burning cohorts faster. Short-term ARPDAU looks fine. The active user base quietly compresses.
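The three-driver decomposition above can be sketched in a few lines. Every number below is made up for illustration, and the assumption that a 15% D7 drop compounds into roughly 25% fewer lifetime sessions is a modeling choice, not a measured constant:

```python
def lifetime_ad_revenue_per_install(lifetime_sessions, imps_per_session, ecpm):
    """Per-install ad revenue: sessions x impressions x price per impression.
    eCPM is revenue per 1,000 impressions."""
    return lifetime_sessions * imps_per_session * ecpm / 1000.0

# Baseline config: 4 impressions per session, 20 lifetime sessions.
base = lifetime_ad_revenue_per_install(20.0, 4.0, 12.0)    # 0.96 per install

# +25% impressions per session, but the 15% D7 drop compounds through the
# retention curve into ~25% fewer lifetime sessions (illustrative assumption).
heavy = lifetime_ad_revenue_per_install(15.0, 5.0, 12.0)   # 0.90 per install

# Daily ARPDAU looks better in the heavy config (5 imps > 4 per session),
# but per-install lifetime revenue is lower: revenue was pulled forward,
# not created.
```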
Which retention metric matters depends on the app. D7 is the primary metric for retention-sensitive apps (puzzle, casual, productivity). Session length is the right metric for highly engaged casual or social apps where habit is measured in time-per-visit rather than return frequency. D30 or return rate is correct for utility and news apps where sessions are short but habitual. There is no universal retention metric. Choose the one that measures whether users are building a habit, not whether they opened the app once.
ARPDAU ranges by category, for apps with active mediation running ad-only monetization:
- Hyper-casual: $0.04-0.12
- Casual game: $0.08-0.25
- Utility: $0.05-0.15
- News and media: $0.03-0.10
- Social and community: $0.02-0.08
Hybrid apps (ads plus IAP) produce higher total ARPDAU because monetized IAP users are typically excluded from the ad impression pool. These are operator ranges, not guarantees.
Session length and revenue efficiency interact in a specific pattern. Revenue per session peaks in the 2-8 minute range and declines on a per-minute basis above 8 minutes. Apps that optimize for session length over user habit often win the impression count metric and lose the return rate metric. For most apps, habit (return rate) is the more valuable metric to protect.
The framing for every section that follows: the goal is not to maximize ad density. It is to find the highest density at which your chosen retention metric does not measurably decline. That optimum is different by format, category, and user segment.
Frequency caps that work by format
Frequency caps are the primary lever for controlling per-user ad pressure. Set them too high and you create the conditions for retention damage. Set them too low and you leave impressions uncaptured. The right cap is format-specific because the user cost of each format is different.
Interstitials
Minimum interval: 60-90 seconds between impressions within a session. Below 60 seconds, the user experiences interstitials as constant interruption rather than an occasional break between content. AdMob's policy floor is 60 seconds for user-initiated content, but the UX floor for retention-safe behavior is closer to 90 seconds in most content categories. Use 60 seconds as the absolute floor. Use 90 seconds as your starting point.
Session cap: 2-3 interstitials per session is a defensible starting point for casual games. Above 4 per session, measurable retention risk appears in most categories. Trigger at natural transition points (level complete, article end, content load screen), not during active gameplay or mid-content. The mid-level or mid-scroll interstitial is the highest retention cost placement in the format.
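A minimal client-side gate expressing the interval rule, the session cap, and the transition-trigger rule might look like the sketch below. Function and trigger names are hypothetical; in practice much of this lives in your mediation layer's frequency-capping configuration:

```python
import time

MIN_INTERVAL_S = 90          # starting point from above; 60 is the absolute floor
SESSION_CAP = 3              # 2-3 per session is the defensible casual-game range
TRANSITION_TRIGGERS = {"level_complete", "article_end", "content_load"}

def should_show_interstitial(last_shown_at, shown_this_session, trigger, now=None):
    """Return True only when interval, session cap, and trigger rules all pass."""
    now = time.monotonic() if now is None else now
    if trigger not in TRANSITION_TRIGGERS:          # never mid-action
        return False
    if shown_this_session >= SESSION_CAP:           # session cap reached
        return False
    if last_shown_at is not None and now - last_shown_at < MIN_INTERVAL_S:
        return False                                # interval not yet elapsed
    return True
```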
Country-level variation matters if you have significant US plus APAC traffic. US and Western Europe users tolerate interstitials at lower frequency than APAC markets. Regional frequency cap segmentation is worth the operational overhead when the traffic split is material.
Rewarded video
User-initiated only. This is not a preference. It is the condition under which rewarded video produces positive engagement effects. Non-user-initiated rewarded video (auto-triggered without user choice) behaves like an interstitial from a retention standpoint and loses the engagement benefit entirely.
No cap is needed on user-initiated rewarded video beyond what the game economy supports. If the in-game reward structure allows unlimited farming, the limit is the economy design, not the ad frequency. Reward value calibration is the real lever here: the reward must be meaningful enough that users actively choose to watch. A reward easily earned through normal gameplay does not create a compelling value exchange. A reward rare enough to be worth 30 seconds of watching reliably produces high completion rates.
Banner ads
Persistent banner placement (bottom or top of screen throughout the session) is acceptable from a UX standpoint in most categories. The retention cost of a persistent banner is low because users habituate within 2-3 sessions.
The viewability problem is different. Persistent banners in fixed positions generate banner blindness quickly. Viewability rates for a banner in the same position for 30 or more minutes typically fall below 30%. Adaptive Banner Ads that shift with content layout perform measurably better on viewability.
Density limit: more than 2-3 simultaneous banner placements (top plus bottom plus inline) produces measurable user complaint signals (review spikes, session length drops) and collapses viewability further. The revenue math on a third banner rarely justifies the UX cost.
Native ads
Placement is everything. A native ad that appears in a content feed at natural intervals (every 5-7 content items) is low-cost to UX. A native ad at position 2 in a content list is high-cost. The operator range for news and social apps is 1 native ad per 5-7 content items. Tighter than every 5 items starts to read as commercial rather than editorial. Looser than every 7 leaves impressions uncaptured.
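The interval rule reduces to a feed-interleaving step. A sketch (field names are hypothetical; the mandatory Sponsored label is baked in):

```python
def interleave_native_ads(items, ads, interval=6):
    """Insert one native ad after every `interval` content items (5-7 range)."""
    out, ad_iter = [], iter(ads)
    for i, item in enumerate(items, start=1):
        out.append(item)
        if i % interval == 0:
            creative = next(ad_iter, None)
            if creative is not None:    # out of fill: show nothing, not a gap
                out.append({"type": "native_ad", "creative": creative,
                            "label": "Sponsored"})   # label is non-optional
    return out
```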
Label requirement: native ads that are not clearly labeled erode trust disproportionately when discovered. The label ("Sponsored" or "Ad") is not optional from either a policy or a retention standpoint.
The first-session trap
The first-session trap is the single highest-leverage retention risk in mobile ad monetization: showing ads in the user's first session before they have established value in the app. The correlation between first-session interstitial exposure and D1/D7 retention drop is the most consistently observed pattern across mobile operator experience.
Why the first session is different: the user has not yet decided whether the app is worth their time. Every friction point before that decision carries a disproportionate cost. An interstitial during level 1 or article 1 is interrupting the value discovery process. An interstitial during level 5 is interrupting someone who has already decided to stay. The retention cost of those two interstitials is not the same. The first one is paid in churn. The second one is paid in mild annoyance.
The operator rule: do not show the first interstitial until the user has completed at least one core engagement cycle (one level, one article read, one task completed). For retention-sensitive apps, delay the first interstitial to session 2 or 3, not session 1.
Session count escalation is the right structure. Session 1: no interstitials, banner only if a persistent banner is already in use. Sessions 2-3: introduce one interstitial per session at minimum frequency. Sessions 4 and beyond: move to full target frequency cap. This escalation is not a concession to UX. It is a deliberate onboarding strategy that preserves D1 retention, which determines whether the user ever reaches the session depth where rewarded video and higher-frequency interstitials can be monetized profitably.
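The escalation schedule is a tiny lookup. A sketch with the thresholds above (the session counter itself comes from your analytics SDK or local storage):

```python
def interstitial_cap_for_session(session_number, full_cap=3):
    """Session-count escalation: none in session 1, one in sessions 2-3,
    full target frequency cap from session 4 onward."""
    if session_number <= 1:
        return 0
    if session_number <= 3:
        return 1
    return full_cap
```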
Rewarded video is the exception. User-initiated rewarded video can be offered in the first session if the in-game economy makes it natural. A user who chooses to watch a rewarded ad in session 1 is a high-value engagement signal. It does not carry the retention risk of a forced impression.
First-session rules by category:
- Casual game: no interstitial in session 1. Introduce in sessions 2-3.
- Hyper-casual: more tolerant. Session 1 interstitial after level 2-3 is common and viable because user LTV expectations are calibrated to high ad density in this category from the start.
- Utility: no interstitials in session 1. Banner only. Interstitials for utility apps are high-risk at any session unless triggered at very specific natural exits.
- News and media: no interstitials in session 1. Native feed ads acceptable after the first 3-4 content items.
Format-by-format UX impact
This section is an operator decision reference, not a format comparison. The question is: what happens to user behavior and retention when each format is applied at the recommended parameters versus above them?
Interstitial
At correct frequency (60-90 second minimum, transition-triggered): measurable ARPDAU contribution with a retention impact that is difficult to separate from statistical noise in a properly run test. This is the target state.
Above correct frequency (under 60 seconds, or mid-action triggering): detectable D7 retention drops. The signal typically shows up in session length first. Users cut sessions short rather than enduring another interstitial, which shows up as a 10-15% session length compression before D3 and D7 return rate start to move.
The specific failure mode to know: interstitials that appear on app open before any user action have the highest observed retention cost of any placement. App-open interstitials are not recommended except in hyper-casual categories where the user self-selected into a known high-density experience.
Rewarded video
The only format where volume correlates with engagement improvement when user-initiated. Users who watch rewarded video have longer sessions, higher return rates, and higher total revenue per user than users in the same cohort who do not watch rewarded video, even after controlling for selection effects.
The mechanism: rewarded video creates a positive reinforcement loop between time-in-app and tangible reward. Users who watch often also use the app more often. The engagement lift is real, not just self-selection.
The failure mode: making rewarded video the only way to progress (pay-to-continue equivalent) converts the format from a value exchange to a paywall. That inverts the retention effect. Design the economy so rewarded video accelerates progress, not gates it.
Banner ads
Low retention cost at correct placement. High banner blindness. The practical effect on D7 retention is near zero for users who habituate to the persistent banner, which is most users by session 3.
Revenue ceiling is low per placement. Banner eCPMs are a fraction of interstitial and rewarded video eCPMs. The argument for multiple banner placements (top plus bottom) is revenue diversification, not revenue maximization. In most apps, the second banner placement adds 10-20% incremental revenue at the cost of 15-25% of screen real estate. See Adaptive Banner Ads for implementation detail on the viewability side.
Native ads
Lowest perceived intrusion when placed correctly. Highest trust cost when misplaced.
In content-feed apps, native ads at natural feed intervals are consistently the lowest-complaint format among users who notice them. The disclosure requirement (Sponsored label) is necessary but does not materially reduce CTR when the creative is relevant.
Placement failure mode: native ads at the top of a content list, or styled to appear indistinguishable from organic content, generate short-term CTR but long-term trust erosion. The disclosure requirement is not just a policy issue. It is a retention issue.
Ad density benchmarks by app category
"Ad density" in this section means impressions per session, not ads per screen. This is the operator metric. It captures the cumulative ad pressure a user experiences in a session, which is what retention data responds to.
Hyper-casual games
Typical range: 4-8 interstitials per session, persistent banner throughout. This category has the highest ad density of any app category. Users largely self-select into it knowing what to expect.
The trade: hyper-casual user LTV is low. D1-D3 is where most revenue is extracted, so the operator optimization is short-term extraction, not long-term retention. Ad density above 8 interstitials per session still produces measurable D1 retention drop in hyper-casual, but the absolute drop is accepted as a cost of the business model. Do not apply hyper-casual density to other categories. It does not transfer.
Casual games (puzzle, match-3, runner)
Recommended range: 2-4 interstitials per session, user-initiated rewarded video, persistent banner.
D7 retention is the key metric. Above 4 interstitials per session, D7 retention declines in most operator-reported experience. The inflection is not as sharp as in utility apps, but it is consistent. Session count escalation applies here: sessions 1-2 at lower density (1-2 interstitials), sessions 3 and beyond at full density.
Utility apps (weather, productivity, tools)
Recommended range: 0-1 interstitials per session, persistent banner if at all, native in content areas.
Utility users have a task-first relationship with the app. Any interruption during task completion has disproportionate cost. Interstitials in utility apps are high-risk unless triggered at very specific natural exits: app close, task completion screen, settings return. Many utility apps should not run interstitials at all and should rely on banner plus native for ad revenue.
News and media apps
Recommended range: 0-1 interstitials per session (at natural content breaks only), native every 5-7 content items, persistent banner.
Content-aligned native ads are the primary ad revenue vehicle in this category. Interstitials are viable at end-of-article or session-end triggers only. The reading flow is already interrupted by article boundaries; native at those boundaries is natural. Mid-article interstitials are not.
Social and community apps
Recommended range: native feed ads every 5-7 items, no interstitials during core social interaction (messaging, feed browsing), banner only in secondary screens.
Social app users have the highest sensitivity to in-session interruption because the session is driven by social interaction. An interstitial during a conversation thread or feed creates friction at the exact moment the user is most engaged. Banner and native in non-interactive areas of the UI are the right formats.
Viewability vs frequency: the tradeoff operators miss
Increasing frequency past the viewability ceiling does not increase revenue. It decreases it. This is the most underappreciated operator error in ad load optimization.
The mechanism: each additional ad impression in a session competes for the same attention budget. As the session progresses and ad count increases, users scroll faster, engage less with anything outside the core task, and push ad positions toward the periphery of their attention. Viewability rates fall. A banner that was 70% viewable at position 1 in the session may be 30% viewable by position 4.
The revenue math: if your banner eCPM is based on viewable impressions (vCPM), and viewability drops from 70% to 30% by adding a second banner placement, the second placement may be generating less net revenue per slot than the first placement at its lower viewability. Optimizing on impression count alone misses this completely.
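The per-slot arithmetic is worth writing out. All numbers below are hypothetical; the point is the shape of the math, not the values:

```python
def slot_revenue(impressions, viewability, vcpm):
    """Revenue for one placement when paid on viewable impressions (vCPM)."""
    return impressions * viewability * vcpm / 1000.0

first_alone   = slot_revenue(100_000, 0.70, 1.00)   # 70.0, single banner
# Add a second placement: it ships at 30% viewability, and splitting the
# user's attention drags the first slot down too (illustrative values).
first_crowded = slot_revenue(100_000, 0.55, 1.00)   # 55.0
second        = slot_revenue(100_000, 0.30, 1.00)   # 30.0

# Two slots: 85.0 vs 70.0 alone. Doubling raw impressions bought ~21% more
# revenue, and the second slot earns under half of what the first did alone.
```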
For interstitials, the same dynamic applies but is less severe because interstitials are full-screen with a forced minimum view time. Even so, above a certain interstitial frequency per session, users begin closing interstitials faster (reducing creative engagement) or develop skip behavior patterns. The advertiser's effective CPM from a 0.5-second interstitial close is lower than from a 3-second view. High interstitial frequency does not guarantee high interstitial revenue if close rates accelerate.
Viewability ceiling by format (operator ranges):
- Banner ads in fixed positions reach viewability saturation around 2 simultaneous placements.
- Interstitials maintain high viewability up to 2-3 per session. Above 3 per session, close rates accelerate.
- Rewarded video maintains high completion rates regardless of session volume when user-initiated.
The correct metric: optimize ad load against viewable impressions per session (or completion rate for rewarded video), not raw impressions per session. If a frequency increase raises impressions but lowers viewable impressions, the frequency increase is not revenue-positive. See Sticky and Anchor Ads Implementation Guide for implementation guidance on persistent placement viewability.
Cohort behavior: ad-monetized vs IAP-monetized vs hybrid users
Ad-monetized and IAP-monetized user cohorts are structurally different populations. The same ad load produces different outcomes on each. An operator who designs ad load for the average user is underserving both segments.
Ad-monetized users (never made an IAP)
This is the majority of users in free-to-play and ad-supported apps. Their tolerance for ad load depends heavily on session depth. Early-cohort ad-monetized users (sessions 1-5) have higher churn sensitivity to ad pressure than established users (sessions 10 and beyond) because they have not yet built a strong habit. The escalation strategy from the first-session trap section is designed specifically for this group.
IAP users
Users who have made a purchase in an ad-supported app are more engaged and have a higher overall tolerance for the app experience. However, showing interstitials to paying users creates a negative sentiment effect that is disproportionate to the revenue impact. Most operators who have run this test conclude: suppress interstitials for IAP users entirely, or show them at a materially lower frequency.
The revenue cost of suppressing interstitials for IAP users is low. IAP users are a small percentage of total users. The retention benefit is material. IAP users are your highest-LTV segment. The math on suppression almost always works.
Hybrid users (IAP plus rewarded video engagement)
These are typically your highest-LTV users. They make purchases and opt into rewarded video. Ad load design for hybrid users should maximize rewarded video availability (since they actively choose it) and be conservative on interstitials (since they are paying customers and interruption creates cognitive dissonance between "I paid" and "I still get ads").
Source cohort
Users acquired through paid UA have different ad tolerance than organic users. UA-acquired users have an implicit expectation set by the creative they saw before installing. Organic users who found the app through the store or word of mouth have no such expectation. The UA creative sets a contract with the user. An app that shows hyper-casual-level ad density to users acquired via casual game creative violates that contract. For more on how attribution affects cohort signal quality, see SKAdNetwork 4.0 Conversion Value Setup.
The practical implication
Segment your ad configuration by user property where your mediation platform supports it. Both AppLovin MAX and AdMob allow server-side targeting rules that can suppress or reduce ad frequency for specific user segments (purchasers, VIPs, new users in session 1). This is not an advanced feature. It is basic audience segmentation applied to ad serving. Most operators who have not done this are running a single ad configuration across structurally different user populations.
A/B testing ad load without contaminating your cohort data
Ad load A/B testing is the correct way to find your optimum, but it is the source of the most common operator error in retention analysis: cohort contamination.
The contamination problem
If you change ad load globally (turn up frequency for all users on a Monday) and then compare Monday-Tuesday cohorts against the prior week, you have not run an A/B test. You have run a before-after comparison that conflates ad load change with weekly seasonality, acquisition mix, and any other variable that changed at the same time. Any conclusion drawn from that comparison is unreliable.
Proper test design
Split traffic at the user level, not the session level or date level, and maintain the split for the full measurement window. The window must give every test user at least 7 full days of post-install behavior (D7), and ideally 14 (D14), because the retention impact of an ad load change often takes several sessions to show up.
Change one variable at a time: frequency cap, first-session rule, or density cap. Not multiple simultaneously. A test that changes interstitial frequency and adds a new native placement at the same time cannot attribute observed retention changes to either variable.
Before running any ad load test, confirm your SDK and mediation adapter versions are current. Adapter bugs can confound results independently of the ad load change. The Mediation SDK Checker audits your dependency files for known compatibility issues before you begin.
Sample size
For a D7 retention test to be statistically significant at 95% confidence with a 1-2% expected effect size, you need a minimum of 5,000-10,000 users per group. Apps with lower daily install volume should extend the test duration rather than draw conclusions from small samples. Drawing conclusions at 500 users per group on a D7 metric is not a valid A/B test. The confidence interval is too wide to distinguish real effects from noise.
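The per-group minimum can be checked with the standard two-proportion sample-size formula. A sketch using the normal approximation at 95% confidence and 80% power (z-values hardcoded; the baseline rate and effect size below are hypothetical):

```python
import math

def n_per_group(p_base, delta, z_alpha=1.96, z_power=0.8416):
    """Two-proportion sample size, normal approximation.
    z_alpha: 95% two-sided confidence; z_power: 80% power."""
    p_test = p_base + delta
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return math.ceil((z_alpha + z_power) ** 2 * variance / delta ** 2)

# Detecting a 2-point D7 drop from an 18% baseline: ~5,500 users per group.
n = n_per_group(0.18, -0.02)
# Detecting a 1-point drop from the same baseline needs roughly 4x that,
# which is why low-volume apps should extend test duration instead.
```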
Platform tools
AppLovin MAX provides a built-in mediation group A/B test tool that splits traffic between configurations and reports eCPM, fill rate, and ARPDAU by group. Use it for configuration-level tests (frequency cap changes, group priority changes) rather than format-level tests. For the mediation configuration context, see Mediation Waterfall vs In-App Bidding.
AdMob mediation does not have a built-in A/B test framework. Operators testing ad load on AdMob mediation should use Firebase Remote Config to assign users to groups, then compare retention by group in Firebase Analytics or their MMP. Run an AdMob configuration audit before the test starts to confirm the baseline configuration is clean.
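If you need a user-level split outside a platform tool, a deterministic hash assignment keeps each user's group stable across sessions without storing state. This is a sketch of the same idea Remote Config percentage conditions implement for you; the salt string is arbitrary, and changing it per experiment keeps splits independent:

```python
import hashlib

def ad_load_group(user_id, experiment_salt="interstitial_cap_test"):
    """Deterministic 50/50 split: the same user_id always lands in the same
    group, keeping cohorts clean for the full D7/D14 measurement window."""
    digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 else "control"
```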
The metric sequence
Measure in this order: (1) impressions per session: did the test group actually receive the higher ad load? (2) session length: did it compress? (3) D7 retention: did return rate change? (4) ARPDAU: did revenue per active user change?
If (1) shows more impressions but (2) through (4) show no change, the result is either a true null effect or the test was underpowered. If (1) shows more impressions, (2) shows shorter sessions, (3) shows lower D7 retention, and (4) shows higher ARPDAU, you are trading future revenue (from retained users) for present revenue. Whether that trade is worth making depends on your LTV model and payback window. It is a decision, not an optimization.
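The decision logic above can be sketched as a small rule table. Deltas are test-minus-control fractions; this is an operator verdict helper under the assumptions above, not a significance test:

```python
def read_ad_load_test(d_imps, d_session_len, d_d7, d_arpdau):
    """Interpret the four metrics in order, mirroring the measurement sequence."""
    if d_imps <= 0:
        return "invalid: test group never received the higher ad load"
    if d_session_len >= 0 and d_d7 >= 0:
        return "null or underpowered: more impressions, no behavior change"
    if d_d7 < 0 and d_arpdau > 0:
        return "tradeoff: present revenue bought with future retention"
    if d_d7 < 0:
        return "negative: retention cost without a revenue gain"
    return "watch: sessions compressing before D7 moves"
```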
The most common mistake in this test design is measuring the wrong thing in the wrong window. If your ad load test showed higher ARPDAU but you did not measure D14 retention, you may not have the full picture. That is the kind of analysis worth running before you commit to a new frequency cap. Book a free 30-minute call if you want to run it against your actual numbers.
The retention signal of going too far
There is a specific, observable metric sequence that tells you ad pressure has crossed the retention threshold. Operators who know what to look for can catch it before it compounds.
Primary signal
D7 retention in the test cohort drops faster than ARPDAU rises. If a 20% increase in interstitial frequency raises ARPDAU by 8% but drops D7 retention by 5%, the net LTV impact is negative for any app with a payback window beyond two weeks. The ARPDAU gain is front-loaded. The retention cost is a compounding drag on future revenue.
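The example above can be worked through with a crude two-phase LTV model. Both shape assumptions are illustrative, not measured: days before D7 are treated as retention-insensitive, and the post-D7 tail shrinks with the square of the D7 multiplier because churn compounds:

```python
def relative_ltv(arpdau_mult, d7_mult, early_days=4.0, tail_days=30.0):
    """Lifetime value in ARPDAU-days: a fixed early phase plus a tail that
    compounds the D7 change (the squared term is a modeling assumption)."""
    return arpdau_mult * (early_days + tail_days * d7_mult ** 2)

base   = relative_ltv(1.00, 1.00)   # 34.0 ARPDAU-days
loaded = relative_ltv(1.08, 0.95)   # +8% ARPDAU, -5% D7: ~33.6

# loaded < base: the front-loaded ARPDAU gain does not survive the
# compounding retention cost once the payback window passes two weeks.
```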
Secondary signal (faster-moving)
Session length shortens before D7 retention drops. Session length is the leading indicator. An ad load change that is compressing sessions by 10-15% in the test group is usually doing so because users are exiting when they encounter an ad rather than continuing. Watch D7 retention closely when session length moves. The session shortening is the canary.
Tertiary signal (lagging, confirming)
App store review sentiment. A material increase in ad-related review mentions ("too many ads," "constant interruptions") is a lagging but confirming indicator. Review sentiment lags the behavioral data by days to weeks, but it is a useful confirmation when the quantitative signals are ambiguous.
The operator off-switch
When D7 retention drops faster than ARPDAU rises, roll back the frequency change. This sounds obvious, but the ARPDAU data is typically visible daily while D7 retention data requires 7 days or more to manifest. Operators who check ARPDAU first and plan to roll back later have already exposed a full new-user cohort to the higher ad load and paid the retention cost. Structure the measurement plan before the test runs. Define the rollback trigger condition before you start. That condition is: if D7 retention in the test group drops by X% relative to control, the test ends and the configuration reverts.
What not to do: do not attempt to compensate for an ad-load-driven retention drop with increased UA spend. The economics do not work. Paying full acquisition cost to replace users who now churn faster just raises blended CAC without recovering the retention cost. Fix the ad load.
The one metric to watch if you only watch one
For a retention-sensitive ad-monetized app, the single metric that captures the health of the ad load vs retention balance is ARPDAU trend in the D7-plus cohort, not in the D0-7 cohort.
Why cohort depth matters
D0-7 ARPDAU is dominated by early impressions, high novelty engagement, and first-session behavior. It typically looks healthy even when the ad load is too high, because users are engaging at their peak before the habit has a chance to decay. D7-plus ARPDAU (revenue per daily active user among users who survived to day 7) captures the real productive monetization: the users who built a habit and kept coming back.
A healthy ad load produces D7-plus ARPDAU that is stable or growing over time as the retained cohort deepens its engagement. An unhealthy ad load produces D7-plus ARPDAU that is stable (revenue per active user looks fine) but applied to a shrinking retained cohort (the active user base is compressing). The per-user number looks fine. The total revenue trajectory does not.
Session-count-normalized ARPDAU
If your analytics platform allows it, compare ARPDAU by session count (sessions 1-5 vs sessions 10-20 vs sessions 20-plus). If ARPDAU by session count is healthy but users are not reaching session 10, the problem is early retention, not monetization depth. If ARPDAU drops by session count as users go deeper, the problem may be ad fatigue in established users, which is a different diagnosis with a different fix.
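If your analytics export gives you raw session rows, the session-count-normalized view is a short aggregation. A sketch over (user_id, session_number, ad_revenue) tuples, with bucket boundaries following the text loosely:

```python
from collections import defaultdict

def arpdau_by_session_bucket(rows):
    """rows: (user_id, session_number, ad_revenue) tuples.
    Returns ad revenue per distinct user within each lifecycle bucket."""
    revenue, users = defaultdict(float), defaultdict(set)
    for user_id, session_no, rev in rows:
        if session_no <= 5:
            bucket = "sessions 1-5"
        elif session_no <= 20:
            bucket = "sessions 6-20"
        else:
            bucket = "sessions 21+"
        revenue[bucket] += rev
        users[bucket].add(user_id)
    return {b: revenue[b] / len(users[b]) for b in revenue}
```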
D7 benchmark for casual games
D7 retention of 15-20% is the benchmark range for casual games with healthy monetization. Below 12%, the app is either retention-challenged on the core loop, or ad-load-damaged. The benchmark does not isolate ad load as the cause. Other factors (onboarding quality, core loop depth, progression design) affect D7 independently. But a D7 drop that coincides precisely with an ad frequency increase is strong evidence of ad load as the driver.
The practical setup requirement
Track D7 retention as a metric that is at least as visible as daily ARPDAU. In most analytics setups (Firebase, Amplitude, Adjust), D7 retention is not the default view. You have to build the cohort report. Build it before you run any ad load test so the baseline exists and is stable before the test starts.
If your D7 retention is not currently tracked separately from overall retention, that is the first thing to fix before any ad load decision. If you are not sure whether your current ad load is inside or outside the safe zone for your category, that is the second. Book a free 30-minute call to run that analysis against your specific numbers.
Frequently Asked Questions
How many ads per session is too many for a mobile app?
The threshold is format-specific and category-specific. For interstitials in casual games, the operator range is 2-4 per session; above 4, D7 retention measurably declines in most operator experience. For hyper-casual games, 4-8 interstitials per session is typical but comes with accepted lower D7 retention as a category norm. For utility apps, 0-1 interstitials per session is the safe range. Task-oriented users tolerate interruption poorly. Banner and native ads have a lower per-impression retention cost but collapse viewability above 2-3 simultaneous placements. There is no universal number. The signal that you have crossed the line is D7 retention dropping faster than ARPDAU rises.
Does showing more ads always increase revenue?
No. Past the viewability ceiling, adding impressions reduces effective revenue per impression without proportionally increasing total revenue. A banner placement that is 30% viewable earns less per slot than one that is 70% viewable. For interstitials, high frequency accelerates close rates (users skip faster), which reduces advertiser-effective CPM. More importantly, ad load above the retention threshold burns cohorts faster: D7 retention drops, the active user base shrinks, and the daily impression pool contracts. Total daily revenue may hold short-term while lifetime value declines. The correct optimization target is viewable impressions per day from a retained user base, not raw impressions per day.
When should I show the first interstitial in a user session?
Not before the user has completed at least one core engagement cycle. For a casual game, that means after level 1 is complete at minimum. For a retention-sensitive app, the first interstitial should be withheld until session 2 or 3, not session 1. The first session is when users decide whether the app is worth their time. An interstitial before that decision is made carries a disproportionate retention cost. For hyper-casual games, post-level-1 is the common trigger and more viable because user LTV expectations are calibrated to high ad density from the start. Never show an interstitial on app open before any content is displayed. That is the highest-cost interstitial placement in terms of retention impact.
Do rewarded videos hurt retention?
When user-initiated, rewarded video is the only ad format that correlates with higher engagement rather than lower. Users who opt into rewarded video have longer sessions, higher return rates, and higher total revenue per user than users who do not watch rewarded video in the same cohort. The mechanism is a positive reinforcement loop: watching an ad to earn a meaningful in-game reward creates a positive association with the app session rather than an interruption of it. Rewarded video hurts retention only when it is non-user-initiated (treated like an interstitial) or when the reward is so marginal that the value exchange does not feel genuine. Design the economy so rewarded video accelerates meaningful progression, not cosmetic rewards.
How do I A/B test ad density without breaking my retention metrics?
Split traffic at the user level, not the date level, and maintain the split for a minimum of 14 days to capture D7 retention in both groups. Change one variable at a time (frequency cap, first-session rule, or density cap), not multiple simultaneously. Require a minimum of 5,000-10,000 users per group for a 1-2% effect size at 95% confidence. Measure in sequence: impressions per session first, then session length, then D7 retention, then ARPDAU. If ARPDAU rises but D7 retention drops faster, the net lifetime value impact is negative for any app with a payback window beyond two weeks. AppLovin MAX has a built-in mediation group A/B test tool. AdMob operators should use Firebase Remote Config for user-level splits and Firebase Analytics for retention tracking.
What is the right frequency cap for interstitials in a casual game?
The minimum interval is 60-90 seconds between interstitials within a session, triggered at a natural content transition such as level complete or screen change. The session cap of 2-4 interstitials is the defensible range for casual games. Start at 2 per session and increase to 3-4 only after confirming via a properly controlled A/B test that D7 retention does not decline below the category baseline of 15-20%. Apply session-count escalation: hold to 1 interstitial per session in sessions 1-3, then move to full frequency cap at session 4 and beyond. Country-level segmentation is worth the overhead if you have material US plus APAC traffic, as APAC users generally tolerate higher interstitial frequency than US users.
When to bring someone in
The ad revenue vs UX tradeoff is not a fixed line. It is a threshold that depends on your format mix, your app category, your user acquisition channel, and your cohort composition. The framework in this article gives you the parameters. Applying it to your specific stack is a different conversation.
If you are seeing D7 retention move in the wrong direction and you are not sure whether it is an ad load problem, a core loop problem, or a UA cohort problem, that ambiguity is exactly where structured analysis earns its value. If you have run an ad load test and you got a higher ARPDAU number without measuring D14 retention, you may not have the full picture.
That is the kind of analysis worth running against your actual data before you commit to a new configuration. The Mediation SDK Checker is a useful first step if you want to audit your SDK setup before the conversation. The AdMob Approval Checker is worth running if AdMob is your primary mediation layer. If you want to run the full analysis against your numbers, that is what the free 30-minute call is for.