Mediation Reporting Discrepancy Diagnosis: Why Your Dashboards Disagree

Your mediator and network dashboards will never match exactly. Here's why every discrepancy type happens, which number to trust, and when to escalate.

Discrepancies between your mediation dashboard and individual network dashboards are structurally expected. A 5-10% gap is normal. Above 20% points to a diagnosable cause. The eight causes, in frequency order: timezone misalignment, reporting lag and adjustment cycles, impression counting methodology differences, net vs gross revenue reporting, click vs impression attribution, currency conversion timing, server-side vs client-side counting, and IVT clawbacks.

What "discrepancy" actually means here, and what a normal gap looks like

Before working through causes, it helps to be precise about which discrepancy this article addresses. There are four types operators routinely conflate:

  1. Mediator-reported revenue for network X vs that network's own dashboard. This is the discrepancy this article covers.
  2. Advertiser vs publisher discrepancy. Different measurement tools counting the same impression from the buy side vs the sell side. Different problem, different article.
  3. MMP vs platform discrepancy. Attribution mismatch between AppsFlyer or Adjust and the ad network. Also different.
  4. Mediator aggregated total vs payment received. Settlement and payment timing. Different again.

This article is specifically about one thing: your mediation platform shows $X for a specific network during a specific period, and that network's own dashboard shows $Y for the same period. The question is why, how large the gap should be before it matters, and what to do about it.

Three reference points for what normal looks like:

AppLovin's own documentation states that discrepancies greater than 5% per network placement ID warrant contacting support, and that threshold applies after confirming full SDK adoption and correct timezone alignment. The implied baseline is that under 5% is within expected operating range.

The IAB Mobile Discrepancies Working Group specifies that discrepancy rates of 10-15% are commonly seen in mobile advertising, and that rates above 20% should be investigated. Their taxonomy defines six primary cause categories, covered in the next section.

Google's documentation for Ad Manager acknowledges that campaign variances of up to 20% are common for third-party creative types.

Working threshold for operators:

  • Under 10%: Structurally normal. Do not investigate unless the gap changed suddenly, which may indicate a new cause.
  • 10-20%: Worth diagnosing if it has been persistent for more than 7 days and revenue is material.
  • Above 20%: Investigate. A root cause exists. Work through the causes below in order.
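The working thresholds above can be encoded as a small triage helper. This is a sketch: the tier names and the 7-day persistence rule are taken directly from the list, not from any platform's API.

```python
def classify_gap(gap_pct: float, persistent_days: int = 0) -> str:
    """Triage a mediation-vs-network revenue gap using the working thresholds.

    gap_pct: absolute percentage gap between dashboards (12.5 means 12.5%).
    persistent_days: how many consecutive days the gap has held.
    """
    if gap_pct < 10:
        return "normal"       # structurally expected; no action
    if gap_pct <= 20:
        # worth diagnosing only once it has persisted for more than 7 days
        return "diagnose" if persistent_days > 7 else "watch"
    return "investigate"      # a root cause exists; work the eight causes in order

print(classify_gap(6))        # normal
print(classify_gap(14, 10))   # diagnose
print(classify_gap(27))       # investigate
```

Wire this into whatever pulls your daily exports and you get a consistent escalation rule instead of eyeballing percentages.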

One more signal that matters: the direction of the gap. Your mediation platform will almost always report higher impressions than the network. That is the expected direction for structural reasons covered in Causes 3 and 7. If the network is reporting higher than the mediator, that is unusual and narrows the diagnostic considerably. The most likely causes in that scenario are currency conversion timing or an adjustment cycle that has revised the mediation platform's number downward but not yet the network's.

If your gap has been above 20% for more than a week, and you have confirmed that timezone settings match on both sides, that the reporting dates are finalized, and that the period is the same on both platforms, that is the right moment to bring both exports to a fresh read. Book the free 30-minute call and we can work through the data directly.

If what you are seeing looks more like flat revenue over time rather than a discrepancy between dashboards, the companion diagnostic is Mobile Ad Revenue Stagnation: A Diagnostic Checklist, which covers demand, supply chain, SDK, and floor causes in priority order.

The IAB Mobile Discrepancies framework: the canonical taxonomy operators don't read

The IAB Mobile Discrepancies Working Group published a specification that defines the standard taxonomy for why mobile ad numbers disagree. No operator-facing resource has translated it into practical terms. This section does that.

The IAB taxonomy identifies six primary cause categories:

Impression definition variance. Different systems count an impression at different points in the ad delivery pipeline. One system counts when the ad is requested. Another counts when the ad response is returned. A third counts when the ad renders on screen. These three points can produce different counts from the exact same traffic.

Timezone misalignment. Reports generated across different timezones create day-boundary shifts. An impression served at 11:30 PM EST appears in Day 1 for EST-aligned reporting and Day 2 for UTC-aligned reporting.

Failed passbacks. When a network cannot fill an impression and passes it back to the mediation platform, the network may count the passback attempt as an impression while the mediation platform does not.

Broken integrations. Impressions incorrectly transmitted via SDK or ad tag go unrecognized by one party but are counted by the other.

Timeouts. When a network's response exceeds the mediation platform's timeout threshold, the platform moves to the next network. If the network's response arrives slightly after the timeout, the network may still count a fill while the mediation platform does not.

Fraud filtering and IVT removal. One party filters invalid traffic before reporting; the other reports gross and filters later. This creates a timing-based discrepancy that resolves when both sides finalize their numbers.

One limitation worth noting: the IAB specification was published in 2015, before in-app bidding existed at scale. Two additional causes have become significant since then. Net vs gross revenue reporting (rev share deducted at different points across different dashboards) and adjustment cycles (revenue restated 7-14 days post-delivery for IVT, fraud, and click-bot filtering) are not in the original taxonomy. They are covered in Causes 4 and 8 below.

For SKAN-specific reporting differences on iOS (where SKAdNetwork postback timing adds another dimension to cross-platform discrepancies), see SKAdNetwork 4.0 Conversion Value Setup.

Cause 1: Timezone misalignment

Timezone misalignment is the most common cause of day-level discrepancies between mediation and network dashboards. It does not create a gap in total revenue when periods are measured correctly. It creates a gap when you compare one platform's Day 1 to another platform's Day 1 without confirming both use the same day boundary.

AppLovin MAX defaults to US Pacific Time for new accounts. Google AdMob also defaults to Pacific Time. Meta Audience Network defaults to US Pacific Standard Time but allows changes in settings. Unity LevelPlay defaults to UTC. When you compare a MAX daily report to a LevelPlay daily report without aligning timezones, a 7-8 hour boundary difference means impressions served between midnight UTC and 8 AM UTC on Day 2 appear in Day 2 on LevelPlay but in Day 1 on MAX. The weekly or monthly totals will be identical. The per-day comparisons will show gaps on every individual day.
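The day-boundary shift is easy to demonstrate with standard library timezone handling. This sketch shows a single impression timestamp landing on different report days under a UTC boundary (LevelPlay-style) versus a US Pacific boundary (MAX/AdMob-style); the timestamp is illustrative.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One impression served at 03:00 UTC on June 2.
ts = datetime(2024, 6, 2, 3, 0, tzinfo=timezone.utc)

# UTC day boundary vs US Pacific day boundary for the same event.
utc_day = ts.date().isoformat()
pacific_day = ts.astimezone(ZoneInfo("America/Los_Angeles")).date().isoformat()

print(utc_day)      # 2024-06-02 -> Day 2 on a UTC-aligned dashboard
print(pacific_day)  # 2024-06-01 -> Day 1 on a Pacific-aligned dashboard
```

Every impression in the 7-8 hour window between the two midnights shifts days like this, which is why daily comparisons disagree while weekly totals converge.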

The signal: Day-level discrepancies that are directionally inconsistent (MAX is higher on some days, the network is higher on others, alternating across consecutive days). Weekly or monthly totals are closer than daily totals. The gap is largest around midnight UTC if you have hourly impression data available.

The fix: Before comparing any two dashboards, confirm both are set to the same reporting timezone. In MAX: Settings > Account Settings > Reporting Timezone. In AdMob: Reports > gear icon > reporting timezone. In LevelPlay and Meta Audience Network: reporting settings in their respective dashboards. Set all platforms to UTC for cross-platform comparison.

One note before you change settings: changing your reporting timezone in MAX or AdMob retroactively affects historical report exports. Export full months under the current timezone setting before changing it.

For a broader platform comparison on how AppLovin MAX and Unity LevelPlay handle timezone and reporting defaults differently, see AppLovin MAX vs Unity LevelPlay.

If the gap persists after timezone alignment, move to Cause 2.

Cause 2: Reporting lag and adjustment cycles

Every ad network has a data finalization timeline. Revenue reported on the same calendar day (D+0) is a preliminary estimate. The final number appears later, after the network runs its fraud filtering, IVT analysis, and click-bot adjustment passes.

Finalization timelines by platform:

AppLovin MAX aggregated mediation reporting finalizes within 24-48 hours. Network-level pass-through data (what MAX receives from each integrated network's API) may lag 24-72 hours depending on the network's own API refresh schedule.

Google AdMob estimated earnings update throughout the day but are labeled as estimated. Final numbers for a given day are available approximately 24 hours after day close. Revenue adjustments for IVT and policy violations may be applied retroactively for up to 30 days.

Meta Audience Network revenue reports typically lag 24-72 hours. Meta states explicitly that figures are not final until settlement.

Unity LevelPlay reporting dashboard updates periodically throughout the day. Final numbers may not settle until 48 hours after the reporting period closes.

Adjustment cycles:

Beyond initial finalization lag, most networks run a secondary adjustment pass that revises revenue retroactively, typically a 7-day primary adjustment window and a 14-day secondary window. In practice, revenue from Day 1 may change three times: D+1 for initial finalization, D+7 for the primary IVT pass, and D+14 for the secondary fraud pass and final settlement.

The signal: Comparing today's revenue to last week's same-day revenue produces apparent gaps because one is a D+0 estimate and the other is D+7 finalized. A specific day's revenue in your mediation aggregated report looks different when pulled again 14 days later. A network's monthly total changes after the month closes.

The fix: Never compare a D+0 or D+1 number from one platform against a D+7 or later number from another. Set a reporting convention: compare only numbers that are at least 7 days old, so both platforms have passed their primary adjustment cycle. For revenue reconciliation against actual payments, compare D+14 finalized figures.
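The "compare only finalized days" convention can be enforced in code before any comparison runs. A minimal sketch, assuming your exports give you a list of report dates; the 7-day default mirrors the primary adjustment window described above.

```python
from datetime import date, timedelta

def comparable_dates(report_dates, today, min_age_days=7):
    """Keep only report days old enough that both platforms have passed
    their primary adjustment cycle (D+7 by the convention above)."""
    cutoff = today - timedelta(days=min_age_days)
    return [d for d in report_dates if d <= cutoff]

days = [date(2024, 6, d) for d in range(1, 15)]
finalized = comparable_dates(days, today=date(2024, 6, 14))
print(finalized[0], finalized[-1])  # 2024-06-01 2024-06-07
```

For payment reconciliation, call the same helper with `min_age_days=14` so only fully settled days survive the filter.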

Cause 3: Impression counting methodology differences

Not all impressions are equal in how they are counted. Three counting points in mobile ad delivery each produce a different number.

Eligible impressions: The number of ad requests the mediation platform sent to the network. This is the mediation platform's maximum count.

Measured impressions: The number of ad responses the network SDK logged as received, at the moment the SDK callback fired. If the SDK fires but the ad does not render (the app went to background, a rendering error occurred, the user navigated away), the mediation platform may still count this as a measured impression.

Paid impressions: The number of impressions the network considers billable. The ad must have appeared on screen, been visible for a minimum duration, and passed the network's viewability threshold before it is counted as a paid impression.

This three-tier difference creates a predictable direction to the discrepancy: the mediator almost always reports more impressions than the network. The mediator counts closer to "eligible" or "measured." The network counts "paid." The gap between measured and paid typically runs 5-10% for well-functioning stacks, and can reach 10-15% on apps with a high share of low-end Android devices or a high Tier-2/3 geo mix.

The IAB taxonomy names this "impression definition variance" and identifies it as one of the primary sources of discrepancy in mobile advertising. The variance is not fraud, not a reporting error, and not recoverable. It is structural.
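The drop between counting points is worth quantifying separately at each tier, since only the measured-to-paid gap is the structural one. A sketch with hypothetical daily counts (none of these numbers come from a real dashboard):

```python
def tier_gaps(eligible: int, measured: int, paid: int) -> dict:
    """Percentage drop at each counting point in the ad delivery pipeline."""
    return {
        "eligible_to_measured_pct": round(100 * (eligible - measured) / eligible, 1),
        "measured_to_paid_pct": round(100 * (measured - paid) / measured, 1),
    }

# Hypothetical day: mediator reports 'measured', network reports 'paid'.
gaps = tier_gaps(eligible=120_000, measured=100_000, paid=92_000)
print(gaps)  # {'eligible_to_measured_pct': 16.7, 'measured_to_paid_pct': 8.0}
```

A measured-to-paid drop of 8% sits inside the structural range; a sudden jump in that single number, with the eligible-to-measured tier stable, is the signal to look at Causes 7 and 8.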

At the platform level:

AppLovin MAX reports "Attempts" (eligible) and "Impressions" (measured, SDK callback fired). The network's own dashboard will report closer to rendered/paid, not attempts.

Google AdMob mediation report shows "Ad Requests," "Matched Requests," and "Impressions." Impressions in AdMob mediation is the server-side logged count, which may still exceed what the demand network records as paid.

Unity LevelPlay shows "Requests," "Fill," and "Impressions." Similar hierarchy.

The signal: Impression count in your mediation report is consistently higher than the same network's own count by a stable percentage. The gap scales with traffic volume, not as a constant. Revenue gap tracks impression gap at roughly the same ratio.

The fix: There is no fix for the structural gap. Accept a 5-15% impression count discrepancy between mediation and network as the normal operating range when comparing across methodologies. If the gap is stable and within this range, it does not need investigation. If it suddenly widens beyond your historical baseline, see Cause 7 (server-side vs client-side counting) and Cause 8 (IVT filtering).

Cause 4: Net vs gross revenue reporting

This is the cause most frequently responsible for large revenue gaps that operators cannot explain by other means. The mediation platform and the network report revenue at different points in the revenue share chain.

How the revenue share chain works:

Advertiser pays gross CPM. The demand network takes its margin (typically 30-50% for programmatic). The network pays publisher net CPM. The mediation platform takes its rev share (typically 0-20% depending on platform and deal structure). The publisher receives the final amount.

Different dashboards report at different points in this chain:

A demand network's own dashboard (Meta Audience Network, for example) typically shows the amount that network credits to the publisher. This is already net of Meta's margin. It is not the gross advertiser CPM.

AppLovin MAX's aggregated dashboard reports the revenue pass-through from each network's API. For networks that pass their own net revenue via the API (Meta, Google), MAX is reporting their number. For networks where MAX has its own rev share arrangement, the MAX dashboard may be net of MAX's take rate.

Google AdMob mediation shows estimated earnings net of AdMob's publisher revenue share.

Some mediation platforms show gross eCPM as the basis for waterfall ordering while showing net revenue in the earnings column. Comparing the eCPM from the waterfall against actual revenue will produce a discrepancy because the two sides of the comparison are at different points in the rev share split.

The signal: The mediation platform shows consistently higher revenue than the network's own dashboard. The gap percentage is approximately constant, not fluctuating day to day. The gap ratio matches approximately the known rev share percentage for that network or mediation platform.

The fix: Before comparing dashboards, confirm what point in the chain each dashboard reports. Check your contract terms with the mediation platform: what rev share does the mediator take, and is it visible in the dashboard or silently deducted before the number displayed? Check the same for each major demand network. Once the chain is mapped, the expected gap becomes calculable. If the actual gap exceeds the expected rev share gap, another cause is contributing.
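Once the chain is mapped, the expected net is simple arithmetic. A sketch under assumed margins; the 35% network margin and 10% mediator share below are illustrative placeholders, not any platform's actual terms.

```python
def expected_net(gross_revenue: float, network_margin: float,
                 mediator_share: float) -> float:
    """Walk revenue down the share chain:
    advertiser gross -> network net -> publisher net."""
    after_network = gross_revenue * (1 - network_margin)
    return after_network * (1 - mediator_share)

# $1,000 gross with a hypothetical 35% network margin and 10% mediator share.
publisher_net = round(expected_net(1000, 0.35, 0.10), 2)
print(publisher_net)  # 585.0
```

If one dashboard shows $650 (network net) and another $585 (publisher net), that 10% gap is definitional, not a discrepancy; only the residual beyond the mapped shares needs a diagnostic cause.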

For a comparison of how AdMob and AppLovin MAX differ in their rev share structures and reporting presentation, see AdMob Mediation vs AppLovin MAX.

Cause 5: Click vs impression attribution and click-bot adjustment windows

Some networks operate on a hybrid model where CPC demand exists alongside CPM demand. In these mixed models, revenue for CPC inventory appears in reporting at click-time, not at impression-time. If the mediator records an impression when the ad renders and the network records revenue when the click fires, the two dashboards will show revenue in different periods for the same underlying inventory.

This is not a primary cause of large discrepancies on rewarded or interstitial formats, where the pricing model is nearly always CPM/eCPM. It is more relevant for banner formats and for networks with significant CPC demand, particularly in Tier-2 and Tier-3 geos where CPM demand is thin.

Click-bot adjustment:

A related and more impactful issue: networks with CPC demand run click validation passes to filter invalid clicks. A network may report a given day's click-sourced revenue at D+0 and then claw it back 24-48 hours later when click validation identifies bot traffic. From the mediation platform's perspective, that revenue appeared and then disappeared. The D+0 mediation total included it; the D+7 finalized number does not.

The signal: Discrepancy is higher on days with known bot-traffic spikes (Monday mornings, end-of-month periods, after an organic traffic surge from an unusual source). The gap resolves partially when comparing finalized D+7 numbers rather than D+0 estimates. Banner formats are most affected. Geos with high invalid traffic rates show larger per-day gaps.

The fix: For CPC timing attribution, compare revenue over weekly or monthly periods rather than daily to smooth out the click-vs-impression timing difference. For click-bot adjustments, always use D+7 finalized numbers for any comparison intended to measure the discrepancy.

Cause 6: Currency conversion timing

If you operate in multiple currencies, or if any network in your stack reports in a currency different from your settlement currency, currency conversion timing introduces a gap that looks like a reporting discrepancy but is an FX artifact.

Most mediation platforms convert all network-reported revenue into a single reporting currency (typically USD) at a daily exchange rate. The exchange rate used may differ from the rate the network used in their own reporting. If Meta Audience Network reports revenue in EUR and converts at the day's 9 AM UTC rate, and MAX converts the same revenue at the day's 12 PM UTC rate, the numbers will differ by the spread between those two snapshots. Across a high-volume day, this produces a consistent small gap.
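The size of an FX-snapshot gap is easy to bound with the two rates in hand. A sketch with invented EUR/USD snapshots (the rates and times are illustrative, not real market data):

```python
# Same EUR revenue converted at two intraday rate snapshots.
eur_revenue = 10_000.0
network_rate = 1.0850   # hypothetical 9 AM UTC EUR/USD snapshot
mediator_rate = 1.0912  # hypothetical 12 PM UTC EUR/USD snapshot

network_usd = eur_revenue * network_rate
mediator_usd = eur_revenue * mediator_rate
gap_pct = 100 * (mediator_usd - network_usd) / network_usd

print(round(network_usd, 2), round(mediator_usd, 2))  # 10850.0 10912.0
print(round(gap_pct, 2))                              # 0.57
```

A sub-1% gap that flips sign with the exchange rate is the FX signature; if the gap is an order of magnitude larger than the day's rate spread, the cause is elsewhere.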

More significantly: if a network reports revenue in one currency and settles payments in another, the conversion in the reporting dashboard may differ from the conversion on the payment statement.

The signal: The gap correlates with periods of high FX volatility. The discrepancy direction flips in a pattern that tracks the USD exchange rate against the network's native currency. The gap is small in absolute terms but persistent and directionally unpredictable.

The fix: Confirm every platform is reporting in the same currency with the same conversion methodology before comparing. For payment reconciliation, compare actual received payment against the payment-stated settlement amount, not against the dashboard number. Most networks settle in USD regardless of display currency; the dashboard display may not reflect the settlement conversion rate exactly.

Cause 7: Server-side vs client-side counting

This is the most technically precise of the eight causes and the one most likely to be misdiagnosed as a network problem when it is actually a client-side integration issue.

The distinction:

Server-side counting: the mediation platform's server records an impression when it receives a response from the network SDK. This happens on the server, independent of what happens in the app on the device.

Client-side counting: the network's SDK (running inside the app, on the device) records an impression when the ad renders on screen. This requires the device to have loaded the ad, the app to be in the foreground, and the ad to have been visible.

When the two counts diverge, the mediator's server-side count is almost always higher. Common causes:

  • The ad request fired but the user put the app in the background before the ad rendered.
  • The ad response returned but a rendering failure (OpenGL error, memory pressure, slow CPU on a low-end device) prevented display.
  • The SDK fired the server-side callback but the ad creative failed to load from the CDN.
  • The app crashed between the server-side callback firing and the ad appearing on screen.

For most stacks, this gap accounts for 3-8% of impressions. On apps with a high share of low-end Android devices (high Tier-2/3 geo mix, large casual gaming installs), it can reach 10-15%.

The signal: The mediator's impression count is consistently higher than the network's count beyond the 5-15% normal range. The gap is higher for banner formats than for interstitials or rewarded. The gap is higher on low-end device segments. The gap correlates with fill rate spikes: during high-fill events, more requests fire in quick succession, increasing the probability that some land while the user is transitioning between app states.

The fix: For a background-render gap, implement proper ad lifecycle management. Only request ads when the app is in the foreground. Only show ads during active sessions. Confirm the mediation SDK's impression counting is set to "rendered impression" mode rather than "ad loaded" mode where the SDK offers this setting. AppLovin MAX has this configuration option; confirm it for each integrated network.

Before investigating impression count gaps, run the Mediation SDK Checker to audit your current SDK and adapter configuration. Integration issues at the adapter level are a frequent contributor to server-side vs client-side count divergence. Full impression counting configuration by SDK version: Mediation SDK & Adapter Compatibility Guide.

For CDN-related creative failures, check the network's creative delivery error rates in their debug panel. High creative error rates on specific ad units or geos point to CDN latency issues the network can investigate.

Cause 8: IVT clawbacks

Invalid traffic filtering creates a specific discrepancy pattern: revenue that appeared in both dashboards at D+0 and then disappeared from one or both at D+7 or D+14 during the adjustment cycle. This is not fraud by the operator. It is the standard post-delivery audit that all networks run.

After ad delivery, networks run IVT analysis on their logged impressions. Traffic flagged as invalid (bots, click farms, emulators, anomalous behavior patterns) is removed from the revenue count and a deduction is applied to the account. This deduction may appear as a negative line item in the following month's statement, a retroactive revision to the historical day's reported revenue, or a silent reduction in the aggregated dashboard without itemized explanation.

The IAB "Mobile Discrepancies 2.0" specification identifies this as one of the six primary cause categories. The adjustment window in practice is typically 7-14 days post-delivery, though some networks reserve the right to adjust up to 60 days.

The signal: Revenue for a specific historical period decreases when you pull the report again 10-14 days later. The decrease is concentrated in specific geos or time periods (suggesting a specific traffic source was flagged). A line item in your network payment statement shows a deduction against prior months. The gap between the mediation dashboard and the network dashboard grew larger after the network's adjustment pass.

The fix:

Use D+14 finalized numbers as the only basis for reconciliation. D+0 through D+7 numbers are preliminary.

If clawbacks are recurring and large (above 5% of monthly revenue), investigate the traffic source. Purchased installs from incentivized sources, SDK-spoofed events, and emulator traffic are the most common causes of large IVT flags.

If you believe an IVT flag is incorrect (legitimate traffic was flagged), the dispute process is with the network directly. Request the IVT report breakdown by category and challenge specific categories with session logs.

The reconciliation framework: which dashboard to trust, and for what

No single dashboard is correct for all purposes. The right canonical source depends on what you are measuring.

For revenue booking and financial reporting: Use the payment statement, not any dashboard. The dashboard is a real-time estimate. The payment statement is what settles. Any comparison for accounting or reporting purposes should be built from payment statements, cross-referenced against D+14 finalized dashboard exports.

For day-to-day operational monitoring: Use the mediation platform's aggregated dashboard (MAX, LevelPlay, or AdMob mediation) as your primary operational number. It is the single consolidated view. Understand that this number is a D+0 or D+1 estimate and will change. Do not make stack decisions from a single day's mediation aggregated number. Use 7-day rolling averages.

For per-network revenue accuracy: Use each network's own dashboard for that network's contribution. Each network is authoritative for their own paid impression count. The mediation platform's number for a specific network is derived from the network's API, which may lag 24-72 hours and may be pre-adjustment. If you need to know exactly what AdMob paid you, look at AdMob. If you need to know what Meta paid, look at Meta Audience Network.

For waterfall and bidding optimization: Use the mediation platform's eCPM metrics, not the network's own reported eCPM. The mediation platform observes the competitive context (what other networks bid, in what order) that the individual network's dashboard cannot see. Optimizing floor prices or waterfall ordering using individual network eCPMs, without the mediation platform's competitive context, leads to miscalibrated floors. For a full walkthrough of waterfall vs bidding eCPM calibration: Mediation Waterfall vs In-App Bidding.

AppLovin's network comparison reporting documentation recommends using the CPM Delta metric within MAX as the starting diagnostic tool, and specifies that "network data powers auto-CPM updating and bidding." MAX treats network-reported data as authoritative for eCPM calibration, while the aggregate MAX dashboard serves as the operational view.

The reconciliation process, step by step:

  1. Pick a closed period at least 14 days in the past. Both sides should have finalized.
  2. Export the mediation aggregated report for that period, broken out by network and ad unit.
  3. Export each network's own dashboard report for the same period, using the same timezone (UTC is the consistent choice).
  4. Compare network by network. Expected gap per network is 5-15%.
  5. Flag any network where the gap exceeds 15%. For each flagged network, apply the eight diagnostic causes in order: timezone first, then lag, then counting methodology, then net vs gross.
  6. After applying all adjustments, if a gap above 10% persists on a finalized period, escalate to the network directly with both report exports.
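The per-network comparison in steps 4 and 5 can be sketched as a small reconciliation pass. This is a toy version over hand-entered totals (the network names and revenue figures are hypothetical); a real pass would read the two CSV exports instead.

```python
def reconcile(mediation: dict, network: dict, flag_pct: float = 15.0):
    """Compare per-network totals for a finalized period and flag outsized gaps.

    mediation / network: {network_name: revenue} from the two exports.
    Returns (gap percentage by network, list of networks to escalate).
    """
    gaps, flagged = {}, []
    for name, med_rev in mediation.items():
        net_rev = network.get(name, 0.0)
        gap = 100 * abs(med_rev - net_rev) / max(med_rev, net_rev)
        gaps[name] = round(gap, 1)
        if gap > flag_pct:
            flagged.append(name)
    return gaps, flagged

gaps, flagged = reconcile(
    {"admob": 4200.0, "meta": 3100.0, "unity": 1800.0},
    {"admob": 3950.0, "meta": 2400.0, "unity": 1750.0},
)
print(gaps)     # {'admob': 6.0, 'meta': 22.6, 'unity': 2.8}
print(flagged)  # ['meta']
```

Here admob and unity fall inside the structural 5-15% range and need no action, while the flagged network gets the eight-cause walkthrough, timezone first.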

If the per-network comparison produces a gap above 15% that you cannot attribute to any of the causes above, you are well past the 5% threshold AppLovin documents for escalating to support. It is the right moment to bring the gap calculation spreadsheet and both exports to the free call. Book at /contact.

When the discrepancy is the network's problem vs yours, and how to escalate

Not every discrepancy is something you can resolve alone. Some require the network to investigate their own reporting infrastructure.

It is your problem when:

  • Timezone settings are misaligned (fix: align to UTC on both sides).
  • The comparison period includes D+0 preliminary numbers (fix: use finalized periods).
  • The impression counting methodology difference is within the 5-15% normal range (no fix required; this is structural).
  • The net vs gross difference maps to known rev share (not a discrepancy; an understood difference in definition).
  • Rendering failures stem from client-side integration issues (fix: SDK and lifecycle management).

It is the network's problem when:

  • The gap exceeds 20% on a D+14 finalized, same-timezone comparison.
  • The gap appeared suddenly after a network-side update or campaign change.
  • The network's API data feeding the mediation platform is stale (the per-network number has not updated in 48+ hours).
  • Revenue from a specific period was revised downward by more than the normal IVT range without explanation.

How to escalate effectively:

Escalation without documentation produces slow or no response. Escalation with documentation produces results.

What to bring:

  1. Export of your mediation platform report: network-level breakdown, by day, for a closed 14-day period, in UTC.
  2. Export of the network's own dashboard report: same period, same timezone, same ad unit or placement IDs.
  3. The calculated gap by day: a simple spreadsheet showing mediation number, network number, and percentage difference for each day.
  4. The classification of which causes you have already ruled out: timezone (confirmed aligned), lag (confirmed 14+ days old, finalized), and methodology (gap exceeds 15% after accounting for the structural difference).

Most network support teams have a standardized process for discrepancy investigations. Providing a pre-completed analysis skips the first two tiers of their triage process, which cover exactly the causes you have already ruled out.

AppLovin's guidance specifically recommends escalating when discrepancies greater than 5% persist for one or more network placement IDs after full SDK adoption. Use that threshold as the opening line of your support ticket. For supply chain validation before escalating, run the AdMob Approval Checker to confirm no underlying seller identity issue is compounding the discrepancy.

Tools for cross-source reconciliation

Manual reconciliation with exports and a spreadsheet is workable for two or three demand sources but does not scale much beyond that.

AdLibertas is a consolidated revenue reporting platform built specifically for mobile ad operators. It aggregates reporting APIs from multiple mediation platforms and demand networks into a single dashboard with standardized metrics. The primary benefit for discrepancy analysis: it pulls each network's own API data directly rather than through the mediation platform's pass-through, so the comparison between mediation numbers and network numbers is available in one place without manual exports. Their platform documentation explicitly covers reporting discrepancies as a documented category.

A homegrown spreadsheet is the practical alternative for operators who want direct control or whose revenue volume does not justify a third-party tool.

A template structure that works:

  • Column A: Date (14-day closed period, UTC)
  • Columns B-F: Mediation platform reported revenue per network, by day
  • Columns G-K: Network's own dashboard reported revenue per network, by day (same period, same timezone)
  • Column L: Gap per network per day
  • Column M: Gap percentage
  • Column N: Classification (timezone, lag, methodology, net/gross, or flagged for escalation)

A weekly 30-minute reconciliation pass against this template surfaces persistent gaps early, before they compound into payment disputes or missed settlement adjustments.

What these tools do not replace: cross-source reconciliation tools aggregate and compare reported numbers. They do not resolve the underlying cause of a discrepancy. A consistent 12% gap in AdLibertas still requires one of the eight diagnostic causes to explain it. The tools reduce the data-collection work; the diagnosis still requires the framework from the previous sections.

When reconciliation requires more than one operator can do alone

Some discrepancies survive the full diagnostic above. The patterns that most consistently do:

Multi-network compounding. Each network has a 5-8% gap, individually within normal range. But with four networks, the aggregate unexplained revenue variance is 20-30% of total mediation revenue. Each individual gap passes the per-network diagnostic. The aggregate does not. Operators working through each network in isolation tend to rule each one out as "probably fine" and miss the compounding effect.

Revenue share opacity. Some mediation arrangements do not disclose the exact rev share in publisher-facing documentation, and the expected net vs gross gap cannot be calculated without it. This is not unusual. Some mediation rev share structures are confidential between the platform and the network.

Network-side reporting API issues. The mediation platform's per-network numbers are derived from the network's reporting API. If the network has a persistent API issue (stale data, incorrect aggregation, misconfigured placement ID mapping), the mediation platform's number will be wrong in a way you cannot detect or fix from your side. This requires network-level investigation and is outside your control.

Payment statement vs dashboard misalignment that persists after D+14. Some operators find that monthly payment statements do not match D+14 finalized dashboard exports even after adjusting for known causes. This is usually a settlement-side issue (payment-currency conversion, payment threshold holds, offsets applied against revenue). The root cause requires analyzing the payment breakdown against the dashboard by ad unit and placement, which most operators do not have the tools or process to run routinely.

If you have worked through all eight causes above, given each one a clean ruling with finalized data, and still have a gap above 10% that you cannot attribute to a known structural cause: that is the right moment for a fresh set of eyes on the data.

Bring the exports from the reconciliation framework: your mediation report, per-network dashboards, and your gap calculation spreadsheet. The free call is a working session, not a pitch. If the gap has an explanation you missed, you will find it in the first 15 minutes. If it requires network-level escalation, you will leave with the documentation to do it.

Book the free 30-minute call

Frequently Asked Questions

How big a discrepancy between mediator and network is normal?

A 5-10% gap between your mediation dashboard and an individual network's dashboard is structurally normal and has well-documented causes: timezone differences, impression counting methodology, and reporting lag. AppLovin MAX recommends investigating discrepancies greater than 5% per network placement ID, but only after confirming full SDK adoption and correct timezone alignment. The IAB Mobile Discrepancies Working Group cites 10-15% as common in mobile advertising. Gaps above 20% on finalized, same-timezone data warrant a root cause investigation.

Why does my MAX dashboard not match my AdMob dashboard?

Three primary causes. First, timezone: confirm both MAX and AdMob are set to the same reporting timezone before comparing daily figures, because day boundaries shift when they differ. Second, impression counting methodology: MAX counts impressions at the SDK callback level (closer to measured), while AdMob's demand-side counts at the rendered and paid level, producing a structural 5-15% gap. Third, net vs gross reporting: if you are comparing MAX's aggregated number against AdMob's own publisher earnings report, confirm whether both figures are after rev share deductions, because one may be gross and the other net.
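The timezone cause is easy to demonstrate. The sketch below buckets the same four UTC-timestamped revenue events by day in two dashboard timezones; the event values are invented, but the mechanism is exactly the day-boundary shift described above.

```python
from collections import defaultdict
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The same four paid events, timestamped in UTC. Identical underlying
# data, bucketed by day in two different dashboard timezones.
events = [
    (datetime(2024, 5, 1, 2, 30, tzinfo=timezone.utc), 40.0),
    (datetime(2024, 5, 1, 15, 0, tzinfo=timezone.utc), 25.0),
    (datetime(2024, 5, 2, 1, 10, tzinfo=timezone.utc), 30.0),
    (datetime(2024, 5, 2, 12, 0, tzinfo=timezone.utc), 20.0),
]

def daily_totals(tz: ZoneInfo) -> dict:
    buckets = defaultdict(float)
    for ts, rev in events:
        buckets[ts.astimezone(tz).date().isoformat()] += rev
    return dict(buckets)

# The daily figures disagree, but the period totals match: a timezone
# "discrepancy" disappears once the comparison window is aligned.
print(daily_totals(ZoneInfo("UTC")))
print(daily_totals(ZoneInfo("America/Los_Angeles")))
```

Note that the period totals agree; only the per-day split differs. That signature (daily mismatch, multi-day match) is the tell that timezone is the cause.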

Which dashboard should I trust for revenue reporting?

The right source depends on the purpose. For financial reporting: use payment statements, not dashboards. For daily operational monitoring: use the mediation platform's aggregated dashboard (MAX, LevelPlay, or AdMob mediation), treating it as a preliminary estimate subject to revision. For per-network revenue accuracy: use each network's own dashboard, since each network is authoritative for their own paid impression count. For waterfall and bidding optimization: use the mediation platform's eCPM metrics, which include competitive context the individual network dashboard cannot see.

How long does revenue reporting take to settle?

Most platforms complete initial finalization within 24-48 hours of day close. A primary adjustment cycle for IVT and fraud filtering runs 7 days post-delivery on most networks. A secondary adjustment pass may run at 14 days. Google AdMob reserves the right to adjust for up to 30 days. For any comparison intended to diagnose a discrepancy, use data that is at least 14 days old so both platforms have passed their primary adjustment cycle. Comparing a same-day estimate from one platform against a 14-day finalized number from another is the most common source of apparent discrepancies.
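The D+14 rule of thumb is simple enough to encode as a guard before any automated comparison runs. A minimal sketch (the function name and the 14-day default are ours, reflecting the conservative rule above, not a platform guarantee):

```python
from datetime import date, timedelta

def safe_to_compare(report_day: date, today: date, settle_days: int = 14) -> bool:
    """True once both platforms should have passed their primary
    adjustment cycle for this reporting day. A conservative rule of
    thumb; AdMob, for example, can still adjust out to 30 days."""
    return today - report_day >= timedelta(days=settle_days)

print(safe_to_compare(date(2024, 5, 1), date(2024, 5, 10)))   # too fresh
print(safe_to_compare(date(2024, 5, 1), date(2024, 5, 16)))   # past D+14
```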

Why does my mediator show more impressions than the network?

This is the expected direction and is structurally normal. The mediation platform counts impressions closer to the measured point in the ad delivery pipeline: it fires a count when the SDK callback returns, not when the ad renders on the user's screen. The network counts at the paid impression point: the ad must have rendered, been visible, and passed the network's viewability and fraud filtering requirements. The gap from this source alone typically runs 5-15%. If your gap is consistently above 15%, look at client-side rendering failures such as the app going to background or low-end device rendering errors, and confirm the SDK impression counting mode is set to rendered impression rather than ad loaded.

What is the difference between eligible, measured, and paid impressions?

Eligible impressions are the ad requests the mediation platform sent to a specific network. Measured impressions are the responses the mediation SDK logged as received, at the moment the SDK callback fired on the device. Paid impressions are what the network considers billable: the ad rendered on screen, was visible, and passed the network's viewability and fraud filtering requirements. Mediator dashboards typically report measured impressions. Network dashboards report paid impressions. The eligible-to-measured gap captures timeout and passback losses. The measured-to-paid gap captures rendering failures, invalid traffic removal, and viewability shortfall.
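The funnel arithmetic can be made concrete with made-up volumes. The stage definitions follow the article; the counts are illustrative only:

```python
# Illustrative impression funnel for one placement over one day.
eligible = 100_000   # requests the mediator sent to the network
measured = 92_000    # SDK callbacks the mediation SDK logged
paid     = 84_000    # impressions the network considers billable

# Eligible → measured: timeout and passback losses.
timeout_passback_loss = (eligible - measured) / eligible
# Measured → paid: rendering failures, IVT removal, viewability shortfall.
render_ivt_viewability_loss = (measured - paid) / measured
# What the dashboards show: mediator (measured) over network (paid),
# expressed against the network's number, as most gap checks do.
dashboard_gap = (measured - paid) / paid

print(f"eligible→measured loss:    {timeout_passback_loss:.1%}")
print(f"measured→paid loss:        {render_ivt_viewability_loss:.1%}")
print(f"mediator-over-network gap: {dashboard_gap:.1%}")
```

With these numbers, the mediator shows about 9.5% more impressions than the network: squarely inside the 5-15% structural range cited earlier, with no error anywhere in the pipeline.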