Multi-Bidder Mediation: How Many Is Too Many (2026)
More bidders mean more competition, in theory. In mobile mediation, each SDK also carries binary weight and crash risk. A framework for deciding how many networks is too many.
For most mobile apps, 3 to 5 active bidding networks is the practical optimum. Web header bidding research (Prebid) shows more bidders help in timeout windows above 300ms, but in-app mediation has a different cost structure: each network is a compiled SDK with binary weight, initialization latency, and a crash surface, not just a parallel bid request. Past 5 to 7 networks, the marginal eCPM gain from adding another bidder typically falls below the operational and technical cost of carrying another SDK. The right number depends on revenue tier, geo mix, and whether the additional demand is genuinely incremental or duplicative of demand already flowing through your existing stack.
The math: marginal value of adding the Nth bidder
Every network rep uses the same pitch: one more demand source means one more bidder, one more bidder creates competition, competition raises CPMs. The auction theory behind that argument is sound. The problem is that each rep is describing the marginal value of adding their network to a stack that does not yet include them. None of them describe the marginal value of adding the tenth or twelfth network. None of them have any incentive to tell you when the right move is to cut, not add.
Start with what the math actually says. In a competitive auction, each additional bidder raises the expected clearing price because the probability that at least one bidder values the impression highly increases with bidder count. This holds in first-price and second-price formats. It is not speculation. It is a property of order statistics.
But the marginal gains diminish with every bidder added. Going from 1 to 2 bidders is the single largest eCPM gain you will ever see from adding competition. Going from 2 to 3 is the second largest. By the time you are going from 8 to 9, the increment is a fraction of what that first additional bidder gave you. The math does not break here; the increment just becomes very, very small.
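A quick simulation makes the diminishing-returns curve concrete. This is an illustrative sketch, not a model of any real auction: it assumes each bidder's value is an independent draw from an arbitrary lognormal distribution and reports the expected winning bid as bidders are added.

```python
# Illustrative only: Monte Carlo estimate of the expected winning bid as
# independent bidders are added. Bid values are drawn from an arbitrary
# lognormal distribution; real bid landscapes vary by inventory and geo.
import random

random.seed(7)

def expected_max_bid(n_bidders: int, trials: int = 20_000) -> float:
    """Estimate E[max of n i.i.d. lognormal bids] by simulation."""
    total = 0.0
    for _ in range(trials):
        total += max(random.lognormvariate(0.0, 0.6) for _ in range(n_bidders))
    return total / trials

prev = None
for n in range(1, 11):
    value = expected_max_bid(n)
    gain = f"+{value - prev:.3f}" if prev is not None else "     --"
    print(f"{n:2d} bidders: expected winning bid {value:.3f}  marginal gain {gain}")
    prev = value
```

The marginal-gain column shrinks with every added bidder, which is the whole argument in one table.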
Where it does break is on the demand duplication problem. Networks that source from the same DSPs do not add independent competitive pressure. They add a duplicate path to the same advertiser budget. If two networks in your stack both primarily route AppLovin or Google demand through slightly different pipes, their auction competition is not real. The same budget is appearing twice. The incremental eCPM contribution is near zero. The SDK cost is not.
A network with real, independent demand contributing 15% of ARPDAU justifies its place even if its SDK adds 5MB to the binary. A network contributing 2-3% of total revenue while representing demand that already flows through your top network is a different calculation entirely.
For the underlying auction mechanics that govern how bids compete, see Mediation Waterfall vs In-App Bidding.
Web vs. in-app: why bidder counts differ structurally
This is the section most operators miss, because the most credible analysis on bidder counts in the search index was written for web header bidding.
Prebid's documentation on bidder count (the "how many bidders" guidance that AI search cites most often) is based on web header bidding tests. It shows 10 bidders outperforming 5 bidders in revenue at timeout windows of 500ms and above. Below 300ms, the latency cost of 10 parallel bid requests partially offsets the competitive benefit. The conclusion is directionally accurate for web: more bidders are better when the timeout is set correctly.
That conclusion does not transfer directly to in-app mediation. Here is the structural reason.
In web header bidding, adding the 9th bidder means adding one more parallel HTTP request from a page that is already loaded. The cost to the user is marginal latency on the bid response. Prebid.js runs asynchronously. Adding bidders does not block page content from rendering.
In mobile mediation, adding the 9th network means adding a compiled SDK to your app binary. That SDK runs on the user's device, in every session, whether or not that network wins a single impression. The costs are:
Binary size. A major ad network SDK (AdMob, AppLovin MAX base SDK, Meta Audience Network) adds approximately 3-8MB to your compressed app binary. Smaller or regional network SDKs run 1-4MB. Each one. Google Play's compressed app size limit for OTA delivery is 150MB. An app starting at 80MB with 8 full ad SDKs may be approaching that threshold.
Init time. Most ad SDKs initialize at cold launch with at least one network call to register the device and retrieve configuration. Running 8-10 SDK inits in parallel at cold launch adds 100-300ms to functional startup time under normal conditions. On poor connections, common in Tier 2 and Tier 3 markets, that range extends further.
Memory. An SDK that has initialized but not yet served an ad still holds the memory it allocated during init. On lower-end Android devices with 2-3GB of RAM, which remain a material share of the global install base in 2026, this is a real concern.
Crash surface. Each third-party SDK introduces code paths outside your control. Adapter bugs, threading conflicts between concurrent SDKs, and compatibility problems with specific Android or iOS versions are documented crash sources. The crash rate contribution of any single SDK is typically small. It is not zero, and it compounds. See Mediation SDK and Adapter Compatibility Guide for the full adapter version management framework.
The asymmetry in plain terms: in web, adding a bidder costs latency during the bidding window. In mobile, adding a network costs weight and stability during every session, including sessions where that network never fires.
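To make the binary-size arithmetic above concrete, here is a minimal budget check against the Google Play compressed delivery limit. The per-SDK sizes are placeholders, not measured values for any vendor; get real numbers by diffing your app bundle with and without each SDK.

```python
# Rough budget check against Google Play's 150MB compressed delivery limit.
# Per-SDK sizes below are hypothetical placeholders.
PLAY_COMPRESSED_LIMIT_MB = 150

base_app_mb = 80.0            # compressed app size before ad SDKs
ad_sdk_sizes_mb = {           # hypothetical per-network compressed weight
    "network_a": 7.5, "network_b": 5.0, "network_c": 4.0, "network_d": 3.5,
    "network_e": 2.5, "network_f": 2.0, "network_g": 1.5, "network_h": 1.5,
}

projected = base_app_mb + sum(ad_sdk_sizes_mb.values())
headroom = PLAY_COMPRESSED_LIMIT_MB - projected
print(f"Projected compressed size: {projected:.1f}MB, headroom to limit: {headroom:.1f}MB")
```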
The Prebid 5-7 rule and where it does not apply
The "5-7 bidders as a practical optimum" framing is commonly attributed to Prebid guidance. What the actual Prebid test showed was that 10 bidders outperformed 5 bidders in revenue at timeout windows of 500ms and above. The community landed on a 5-7 working range based on the balance of competitive gain and latency management. At timeouts above 1,200ms, most bids had already returned regardless of bidder count and additional bidders added no incremental value. At very short windows (under 300ms), the advantage of 10 bidders narrowed because some bids could not return in time.
That analysis is internally consistent for web. In web header bidding, pruning from 10 bidders to 7 means removing bidding endpoints that rarely win in time. The only cost is those bids' CPM contributions in the auctions they could have won.
In in-app mediation, pruning from 10 networks to 5-7 means removing compiled SDKs that were contributing binary weight, init time, memory, and crash exposure in every session, win or no win. The cost of carrying them was there from the moment they were added. The savings from removing them are also immediate.
The correct adaptation of the Prebid principle for mobile: the target network count is lower than for web, not because fewer bidders produce less competition, but because the cost per additional network is structurally higher. The in-app equivalent of the Prebid 5-7 rule is 3 to 5 well-chosen networks for most apps, with a ceiling of 7 for apps with the volume and ops capacity to justify the maintenance load.
The framing that matters most is not "how many" but "are these bidders genuinely independent demand sources?" Three networks representing distinct, non-overlapping advertiser budgets will outperform six networks where four of them route overlapping programmatic demand through different wrappers at slightly different take-rates.
For the bidding mechanics context, see Mediation Waterfall vs In-App Bidding.
Diminishing returns: when network N+1 starts costing more than it adds
The first 2-4 weeks after adding a new bidding network are not evaluation data. The network's ML models are calibrating on your specific inventory. Fill rate during this window is below steady-state. Bids are exploratory. A network evaluated at day 7 looks worse than it will at day 30. Operators who add a network and conclude at week 2 that it is underperforming are evaluating calibration noise. Operators who never formally evaluate their networks after adding them accumulate dead weight indefinitely.
The correct evaluation window starts at day 14 (when calibration is approximately complete) and runs for 28 days. At the end of that window, pull per-network fill rate, per-network eCPM, and ARPDAU contribution. ARPDAU contribution is the number that matters: the network's total revenue divided by total daily active users served across the evaluation period. It normalizes for both fill rate and eCPM.
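As a sketch of that calculation, the snippet below computes per-network ARPDAU contribution from a day-level mediation export. The row shape and field names are assumptions; map them to whatever your mediation platform actually exports.

```python
# Per-network ARPDAU contribution across the day-14 to day-42 window.
from collections import defaultdict

def arpdau_contribution(rows, total_dau_days):
    """
    rows: iterable of dicts like {"network": "network_a", "revenue": 123.45},
          one per network per day across the evaluation window.
    total_dau_days: sum of daily active users over the same window.
    Returns {network: revenue / total_dau_days}, the ARPDAU attributed to each network.
    """
    revenue = defaultdict(float)
    for row in rows:
        revenue[row["network"]] += row["revenue"]
    return {net: rev / total_dau_days for net, rev in revenue.items()}

# Usage sketch (hypothetical numbers):
# contributions = arpdau_contribution(report_rows, total_dau_days=28 * 50_000)
# for net, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
#     print(net, round(value, 5))
```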
In most mature stacks running 6 or more networks, the top 2-3 networks account for 65-80% of ad revenue. Networks 4-6 account for 20-35%. Everything from network 7 onward typically represents under 5% collectively. This is the standard long-tail distribution applied to demand sources. The marginal contribution of each added network falls as the stack grows.
Where the cost-benefit ratio inverts: when a network's ARPDAU contribution falls below roughly 3-5% of total ad revenue, the engineering cost of maintaining its SDK integration typically exceeds the dollar contribution. Adapter updates, compatibility testing, and crash monitoring for one network take the same number of developer-hours regardless of how much revenue that network generates.
The revenue tier qualification matters here. A network contributing 3% of revenue at $100K per month is contributing $3,000 per month. Worth keeping. A network contributing 3% of revenue at $4,000 per month is contributing $120 per month. Probably worth cutting.
The ops overhead that gets underestimated: each network in the stack releases adapter updates on its own cadence. Google and AppLovin update frequently. Smaller networks are less consistent. Each update requires a build and release cycle to test against your current SDK matrix. The quarterly engineering cost of maintaining a 10-network stack versus a 5-network stack is roughly double, before factoring in any additional crash surface.
The correct cadence: add a network, run the 28-day steady-state evaluation starting from day 14, make a keep/remove decision based on that data, repeat quarterly for the full stack.
For how floor settings interact with per-network fill evaluation, see Floor Pricing Strategy for Mobile Apps 2026.
The SDK weight tax
Vendor content never quantifies this cost. Here are the actual numbers.
Binary size. Major ad network SDKs (AdMob, AppLovin MAX base, Meta Audience Network) add approximately 3-8MB each to your compressed app binary. Smaller or regional network SDKs add 1-4MB. Google Play's compressed OTA delivery limit is 150MB. On iOS, app thinning reduces installed size, but the raw binary still grows. Once the binary crosses the cellular download threshold at which users abandon the install, download conversion rates drop.
Method count on Android. Each major ad SDK adds roughly 5,000-20,000 methods toward Android's DEX method limit. Multidex is the solution, but the multidex overhead slows cold start time on older Android devices. A stack with 10 ad SDKs is adding upward of 100,000 methods on top of your application code.
Init time. At cold launch, 8 SDK inits running in parallel generate 8 concurrent network calls on the user's connection. On a 4G connection, this typically adds 100-250ms to functional startup time. On a 2G or poor connection, init times extend to 500ms-1 second for the full set. These are real numbers from production app profiling.
Init failure rate. Each SDK initialization has a nonzero failure rate that most mediator dashboards do not surface directly. A proxy: compare the number of ad requests sent to a network against the number of impression opportunities created. If the ratio is significantly lower for one network than others in the same stack (controlling for geo and format), that network is experiencing init failures that are silently reducing its availability. Running 10 SDKs with an individual init failure rate of 2% each means roughly a 1-in-5 chance of at least one failed init per session (1 - 0.98^10 ≈ 18%; a quick check is sketched in the code below). For a high-contributing network, a consistently failing init is a silent revenue drain that appears as zero requests, which looks like no demand rather than a technical failure.
Memory. Major SDKs are engineered to be memory-efficient. Smaller or older SDKs may allocate and retain memory in patterns that increase out-of-memory crash risk on low-RAM devices. 2-3GB Android devices are still a material share of the global install base in 2026.
Crash surface. SDK bugs, adapter compatibility conflicts, and threading errors between concurrent SDKs are real crash sources. Any individual SDK's crash rate contribution is typically small. It is not zero, and it is additive across the stack.
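The two init-failure checks described above (the request-to-opportunity ratio proxy and the aggregate failure probability) reduce to a few lines. The figures are placeholders, and the independence assumption in the probability calculation is a simplification.

```python
# Two quick checks, with placeholder figures.
# 1) Request-to-opportunity ratio as a proxy for silent init failures.
# 2) Probability of at least one failed init per session across N SDKs,
#    assuming independent failures (a simplification).

def opportunity_ratio(ad_requests: int, impression_opportunities: int) -> float:
    return impression_opportunities / ad_requests if ad_requests else 0.0

stack = {
    "network_a": (1_000_000, 310_000),
    "network_b": (950_000, 295_000),
    "network_c": (900_000, 120_000),  # much lower ratio than peers: investigate init failures
}
for name, (requests, opportunities) in stack.items():
    print(f"{name}: opportunity ratio {opportunity_ratio(requests, opportunities):.1%}")

n_sdks, per_sdk_failure_rate = 10, 0.02
p_any_failure = 1 - (1 - per_sdk_failure_rate) ** n_sdks
print(f"P(at least one failed init per session): {p_any_failure:.1%}")  # ~18.3%
```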
A useful operator heuristic: calculate the revenue-per-SDK-weight ratio for each network. Divide the network's monthly ARPDAU contribution by its SDK weight (binary size, init latency). Networks with low revenue-per-weight are candidates for removal. Networks with high revenue-per-weight are core infrastructure.
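A minimal version of that heuristic follows. The blend of binary size and init latency into a single weight is arbitrary, and all numbers are placeholders, so treat the output as a ranking rather than an absolute score.

```python
# Revenue-per-SDK-weight ranking with placeholder numbers.
networks = [
    # (name, monthly revenue attributed via ARPDAU contribution $, binary MB, init ms)
    ("network_a", 42_000, 7.0, 120),
    ("network_b", 11_500, 4.5, 90),
    ("network_c", 900, 5.5, 110),
]

def sdk_weight(binary_mb: float, init_ms: float) -> float:
    return binary_mb + init_ms / 50  # arbitrary: treat 50ms of init as ~1MB of weight

ranked = sorted(networks, key=lambda n: n[1] / sdk_weight(n[2], n[3]), reverse=True)
for name, revenue, mb, ms in ranked:
    ratio = revenue / sdk_weight(mb, ms)
    print(f"{name}: ${revenue:,}/mo at weight {sdk_weight(mb, ms):.1f} -> ratio {ratio:,.0f}")
```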
Bidder mix vs. bidder count
The most operationally important insight in this article is not about a number. It is about independence.
Many mobile ad networks are demand aggregators routing the same DSP budgets through their own buying pipes. A stack with 8 networks may represent only 4-5 distinct advertiser pools because several of those networks are simultaneously tapping AppLovin, Google, and Meta demand through their own programmatic reselling relationships. When two networks compete in your auction, you want them to represent genuinely different budget sources, not two paths to the same Google DV360 campaign.
There is no single public tool for identifying demand overlap. The best proxy available is eCPM correlation analysis by time-of-day in MAX reporting. Networks that move together in eCPM across the day are more likely to represent the same underlying demand than networks that move independently. Correlated eCPM curves are a signal of duplicated demand, not independent competition.
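A rough version of that correlation check, assuming you can export hourly eCPM per network from MAX reporting (the series below are fabricated). High pairwise correlation is a hint of shared demand, not proof.

```python
# Pairwise eCPM correlation by hour of day as a demand-overlap proxy.
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation
from itertools import combinations

hourly_ecpm = {
    "network_a": [1.8, 1.7, 1.5, 1.4, 1.6, 2.0, 2.4, 2.6, 2.5, 2.3, 2.1, 1.9],
    "network_b": [1.7, 1.6, 1.4, 1.3, 1.5, 1.9, 2.3, 2.5, 2.4, 2.2, 2.0, 1.8],
    "network_c": [0.9, 1.1, 1.4, 1.2, 0.8, 1.0, 1.3, 0.9, 1.1, 1.5, 1.0, 1.2],
}

for (name_a, series_a), (name_b, series_b) in combinations(hourly_ecpm.items(), 2):
    r = correlation(series_a, series_b)
    flag = "  <- likely overlapping demand" if r > 0.9 else ""
    print(f"{name_a} vs {name_b}: r = {r:.2f}{flag}")
```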
Three networks representing deeply independent demand (AppLovin, Meta, and a strong category-specific direct demand source) will typically outperform a stack of eight networks where five of them are routing overlapping programmatic budgets at slightly different take-rates.
Format specialization changes the calculation. A network with genuine strength in rewarded video but thin interstitial fill is worth keeping if rewarded video is a primary format for your app, even if its aggregate fill rate ranks low. Evaluate by per-format ARPDAU contribution, not total fill rate. A network contributing 15% of rewarded video revenue with near-zero interstitial contribution is a net positive for a gaming app where rewarded video is the dominant monetization format.
Geographic concentration matters too. A network with deep demand in Southeast Asia may contribute 1% of total revenue for an app with primarily US traffic, but 20% of revenue for that app's SEA segment. Removing it destroys fill for that segment without affecting the headline numbers. Always evaluate network contribution by geo before making removal decisions for apps with material non-US traffic.
The mediocre-fill trap is worth naming directly. Networks that fill at low eCPM across many impressions look like strong performers in fill rate reports while dragging down average eCPM. A network with 40% fill at $0.60 eCPM is worse than a network with 20% fill at $1.80 eCPM for the same inventory because the second network is selecting higher-value impressions rather than filling everything at below-market prices. Fill rate is not a standalone metric. ARPDAU per network is.
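The arithmetic behind that comparison is worth spelling out: revenue per 1,000 ad requests is what fill rate and eCPM combine into.

```python
# Revenue per 1,000 ad requests: fill rate and eCPM collapsed into one number.
def revenue_per_1000_requests(fill_rate: float, ecpm: float) -> float:
    # 1,000 requests * fill_rate impressions, paid at ecpm dollars per 1,000 impressions
    return fill_rate * ecpm

print(revenue_per_1000_requests(0.40, 0.60))  # 0.24 -> $0.24 per 1,000 requests
print(revenue_per_1000_requests(0.20, 1.80))  # 0.36 -> $0.36 per 1,000 requests
```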
How to A/B test adding or removing a bidder
Do not make stack changes based on intuition or a vendor pitch. Do not add a network to your full production stack and watch overall eCPM. You will see seasonality, geo mix shifts, and changes in other networks simultaneously. The signal and the noise are indistinguishable in full-stack testing.
The parallel-run methodology for adding a network:
- Create a test mediation group with the new network and a 10-20% traffic allocation for the target ad unit and geo segment.
- Keep the control group running the existing stack on the remaining 80-90% of traffic.
- Run both groups for a minimum of 28 days. Evaluate from day 14 onward. The first two weeks are calibration.
- The metric that matters is ARPDAU of the test group vs the control group, not the new network's eCPM in isolation (see the calculation sketched after this list). A new network that adds 8% to test group ARPDAU (including all other networks) is a meaningful addition. A new network that adds 1.5% to eCPM while the test group ARPDAU is flat is not.
- Randomize the traffic split across the same user segment: same geo, same device cohort, same format. Do not split by geo or device if those variables differ between groups.
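A minimal sketch of that test-vs-control ARPDAU comparison, assuming you can pull daily revenue and DAU per mediation group; the tuples and numbers are hypothetical.

```python
# Test-vs-control ARPDAU lift from day-level data (day 14 onward).
def arpdau(daily_rows):
    total_revenue = sum(revenue for revenue, _ in daily_rows)
    total_dau_days = sum(dau for _, dau in daily_rows)
    return total_revenue / total_dau_days

test_rows = [(412.0, 9_800), (430.5, 10_050), (398.2, 9_600)]             # 10-20% slice
control_rows = [(3_610.0, 88_500), (3_705.0, 90_200), (3_560.0, 87_900)]  # remaining traffic

lift = arpdau(test_rows) / arpdau(control_rows) - 1
print(f"Test-group ARPDAU lift vs control: {lift:+.1%}")
```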
Platform-specific notes. AppLovin MAX has a built-in A/B test tool for mediation group configurations that handles traffic splitting and reports comparative eCPM and fill rate by group. Use it for MAX publishers rather than manual group management. AdMob does not have an equivalent native A/B test tool at the mediation level. The practical approach for AdMob is two mediation groups for the same ad unit (with and without the new network), using geographic targeting to route different geo segments to each group. This is an imperfect proxy but substantially better than full-stack testing.
For removing a network: the same parallel-run approach. Remove the target network from the test group, keep it in the control group, run 28 days, compare ARPDAU. A common result when removing a low-contributing network: ARPDAU in the test group is within measurement noise of the control group. That confirms the removed network's revenue contribution was already being captured by other networks in the auction, not lost. That result clears the way for permanent removal.
Watch per-network fill rate for remaining networks in the test group during a removal test. If removing a network causes fill rate to drop for other networks (because that network was providing passback fill that others were receiving), the removal impact is larger than the direct fill contribution suggests. Monitor this in the first week.
The statistical significance problem. Mediation A/B tests are hard to power for statistical significance on small apps. Under 2,000 impressions per day per ad unit in the test segment, a 28-day test may not produce enough data to distinguish a 5% revenue difference from noise. For low-volume apps, extend to a 45-day window and set the evaluation threshold higher: treat a 15% or greater ARPDAU difference as a signal, not a 3-5% difference.
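One rough way to sanity-check whether an observed lift clears the noise floor is to look at the day-to-day variability of the test-to-control ratio. This is a crude band, not a proper power analysis, and the daily values below are fabricated.

```python
# Crude noise band: mean and standard error of the daily test/control ratio.
from statistics import mean, stdev

test_daily = [0.041, 0.044, 0.039, 0.043, 0.042, 0.040, 0.045]
control_daily = [0.040, 0.041, 0.039, 0.040, 0.041, 0.039, 0.042]

daily_lift = [t / c - 1 for t, c in zip(test_daily, control_daily)]
lift = mean(daily_lift)
noise = stdev(daily_lift) / len(daily_lift) ** 0.5  # standard error of the mean
verdict = "treat as signal" if abs(lift) > 2 * noise else "within noise"
print(f"Mean daily lift {lift:+.1%} +/- {2 * noise:.1%} (2 SE): {verdict}")
```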
Before starting any test, confirm the new network's SDK version compatibility with your current stack. Use the Mediation SDK Checker for this before building. If your impression volume is too low to power a clean A/B test but you want a decision on whether to add or remove a specific network, that is the kind of scoped analysis the free initial conversation is designed for. Bring the stack config and your mediation report. Book a free 30-minute call.
The diagnostic: when your stack has too many bidders
Five specific signals. Check each one against your current stack using standard mediation reporting.
Signal 1: eCPM is flat or declining despite adding the network. Evaluate this over a 4-week steady-state window, not during calibration. If eCPM has not moved in the 4 weeks following day 14 after adding a network, one of three things is happening. The new network represents demand that was already flowing through an existing network (no independent competitive pressure added). The new network is filling low-value impressions that previously went unfilled (increasing fill rate while dragging average eCPM down). Or the new network's SDK is causing init conflicts or load failures that are subtly reducing the performance of existing networks. Each of these has a different fix. The common starting point is isolating per-network contribution in your mediation report.
Signal 2: network fill rate is below 5% of ad requests. A network filling less than 5% of the requests it receives from the mediator is not contributing meaningful supply to the auction. It is still initializing, allocating memory, and sending bid requests in every eligible session. The threshold is not fixed for every format and geo, but the principle holds: if a network's fill rate is consistently below the point where it can materially influence a single session's revenue, its presence is overhead-only.
Signal 3: init failure rate above 1%. Standard mediation reporting does not surface init failure data directly. The proxy: compare the number of ad requests sent to the network against the number of impression opportunities created. If that ratio is significantly lower for one network than others in the same stack controlling for geo and format, the network is experiencing silent init failures. Persistent init failures above roughly 1% of sessions indicate either an SDK compatibility issue or availability problems on that network's server side. Use the AdMob Approval Checker if you are trying to audit your AdMob mediation configuration alongside this diagnostic. See also Mediation SDK and Adapter Compatibility Guide for the adapter version management process when init failures point to a compatibility root cause.
Signal 4: the network's bids consistently fall below your configured floor. A network that consistently bids below your floor for a given segment is filling nothing while still completing the init and bid request sequence. It is consuming overhead with no revenue return. This is distinct from a floor set too high. The question is whether the network's competitive range is simply below even a correctly calibrated floor for that inventory.
Signal 5: build complexity and crash rate increased after adding the network without a corresponding revenue increase. If adding networks 8, 9, and 10 coincided with increased crash rate or a spike in build time and SDK compatibility work, and revenue is flat, you have passed the breakeven point for stack size.
Run these five signals against your current stack quarterly. Any network triggering two or more of them is a candidate for the cull process.
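A quarterly sweep over those five signals can be scripted as a simple flagging pass. The thresholds mirror the signals above; the field names and example record are hypothetical and would be assembled from mediation reporting plus crash tooling.

```python
# Flagging pass over per-network stats; any network hitting two or more
# signals is a candidate for the cull process described in the next section.
SIGNALS = {
    "flat_or_declining_ecpm":   lambda n: n["ecpm_change_4wk"] <= 0,
    "fill_below_5pct":          lambda n: n["fill_rate"] < 0.05,
    "init_failures_above_1pct": lambda n: n["init_failure_rate"] > 0.01,
    "bids_below_floor":         lambda n: n["share_of_bids_below_floor"] > 0.90,
    "crashes_up_revenue_flat":  lambda n: n["crash_rate_delta"] > 0 and n["revenue_delta"] <= 0,
}

def cull_candidates(networks):
    flagged = {}
    for net in networks:
        hits = [name for name, check in SIGNALS.items() if check(net)]
        if len(hits) >= 2:
            flagged[net["name"]] = hits
    return flagged

example = [{
    "name": "network_g", "ecpm_change_4wk": -0.01, "fill_rate": 0.03,
    "init_failure_rate": 0.004, "share_of_bids_below_floor": 0.55,
    "crash_rate_delta": 0.0, "revenue_delta": 0.0,
}]
print(cull_candidates(example))  # {'network_g': ['flat_or_declining_ecpm', 'fill_below_5pct']}
```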
The bidder cull: which networks to drop first
When the diagnostic confirms the stack is too large, work through the triage in this order.
Step 1: rank by ARPDAU contribution. Not fill rate. Not eCPM in isolation. ARPDAU contribution: the network's total revenue divided by total daily active users served across the evaluation period. This is the single number that normalizes for both variables.
Step 2: segment the ranking by format and geo. A network that ranks low in aggregate may rank high for a specific format (rewarded video only) or a specific geography (Southeast Asia only). Pull it out of the aggregate ranking and evaluate it on its actual domain before treating it as a removal candidate.
Step 3: identify networks that fail two or more signals from the diagnostic above. These are the primary candidates. Any network with both low ARPDAU contribution and fill rate below 5% is a removal candidate regardless of other properties.
Step 4: among the candidates, remove in this order:
First, networks with high SDK weight and low ARPDAU. The heaviest binary presence with the least revenue contribution is the highest-priority removal.
Second, networks with documented crash or compatibility issues. SDK stability problems carry a cost that exceeds revenue contribution in almost all cases.
Third, networks that duplicate the demand of a top-3 network. If network 7 is sourcing the same DSP budgets as network 2 at a higher take-rate and lower fill efficiency, network 7 is a redundant cost.
Last, networks with strong geo-specific or format-specific fill, even if aggregate numbers are low. Remove these only after confirming through a parallel-run test that their segment fill will be captured by remaining networks.
How to remove safely. Use the parallel-run removal test before committing to permanent removal. Run the test group without the network for 28 days. If ARPDAU in the test group is within measurement noise of the control group, proceed with removal. If ARPDAU drops materially, investigate before removing. The network may be providing passback fill or format-specific supply that the aggregate numbers did not reflect.
Do not cull more than 1-2 networks at a time. Removing multiple networks simultaneously makes it impossible to isolate the revenue impact of each removal. Sequential removals with 28-day gaps between each produce clean data.
After removal: monitor total stack ARPDAU for 60 days. Confirm binary size decrease in your next build. Measure cold launch time before and after with a profiling tool. Monitor crash rate in Firebase Crashlytics or your crash reporting setup.
For operators removing networks as part of a platform migration, see AdMob to AppLovin MAX Migration Playbook for how to manage network removal as part of a mediation platform change.
Recommended starting points by app revenue tier
These are starting points, not mandates. They are specific because "it depends" without resolution is not useful.
Under $5K per month
Target stack: 3 to 4 networks maximum.
Core networks: AdMob (required for most apps, deep demand, stable SDK, reliable fill), AppLovin MAX as the mediator with AppLovin bidding enabled, and one format-specific network for your primary ad type. Meta Audience Network if you run interstitials or banners with US traffic. Unity Ads if you run rewarded video in gaming apps. Beyond this, the engineering overhead exceeds the revenue gain.
What to skip: do not add a network because a rep said it would improve fill. At under $5K per month, your ad request volume is insufficient for most networks' ML models to calibrate meaningful fill on your specific inventory. New network contributions will be flat or noise-level during the calibration period, and steady-state contribution will be minimal. The ops cost is real. The revenue gain is not.
One exception: if more than 30% of your traffic is in a specific geo where your core networks have thin demand (Southeast Asia, for instance), add one geo-specialist network for that segment. Keep the total at 4.
For the platform comparison at this tier, see AdMob Mediation vs AppLovin MAX.
$5K to $50K per month
Target stack: 4 to 6 networks.
Core networks: AdMob, AppLovin (as mediator and demand source), Meta Audience Network, and 1-2 networks that fill specific format or geo gaps in your current stack. Use fill rate and ARPDAU data from your current stack to identify which formats or geos have the weakest fill. That is where an additional network adds genuine marginal value.
Mediation platform choice matters at this tier. MAX gives more per-instance control and built-in A/B testing than AdMob mediation. That is worth the migration cost at this revenue level. LevelPlay is a viable alternative if you are already on Unity Ads and do not want to manage another platform. See AppLovin MAX vs Unity LevelPlay for the full configuration comparison.
Format-level stack thinking: your rewarded video stack and your interstitial stack do not need to be the same networks. A network with strong rewarded video demand but weak interstitial fill can be activated for rewarded video only in MAX's format-level configuration.
Evaluation cadence: quarterly network audit using the five diagnostic signals above. Any network failing two signals is a test candidate.
$50K or more per month
Target stack: 5 to 8 networks, with a strict per-network evaluation framework.
At this revenue level, a 3-5% improvement in ARPDAU from an additional competitive demand source is worth $1,500-$2,500 per month at the $50K base. That justifies a new SDK integration and ongoing maintenance overhead. The threshold for adding a network is lower than at earlier tiers. But the threshold for keeping an underperforming network is also lower: the opportunity cost of a slot occupied by a low-contributing network is higher when each percentage point of yield has a real dollar value.
Mandatory tooling at this tier: per-network eCPM and fill rate reporting at daily granularity minimum, crash reporting tracked by SDK source, and a formal quarterly network review producing ARPDAU contribution rankings.
Network specialization becomes viable at this tier. A category-specific direct demand source (a gaming-focused DSP or a mobile brand network with strong CPM in your app's vertical) may provide genuinely independent demand that programmatic networks do not. These are worth evaluating if they have proven scale in your category.
The hard ceiling: above 8 networks, the engineering and ops overhead begins to exceed the marginal revenue contribution for most apps, even at this revenue tier. If you are at 10 or 12 networks and have never run a formal per-network ARPDAU evaluation, the five-signal diagnostic above will almost certainly identify networks that pass the cull criteria.
If you are at this revenue tier and running more than 7 networks without a formal per-network ARPDAU evaluation, a structured audit typically pays for itself in the first month. The free initial conversation is the right starting point. Book a free 30-minute call.
Frequently Asked Questions
How many bidders should I run in mediation?
For most mobile apps, 3 to 5 active networks is the practical optimum. Web header bidding research from Prebid shows more bidders help in timeout windows above 300ms, but in-app mediation has a different cost structure: each network is a compiled SDK with binary weight, initialization latency, and a crash surface, not just a parallel bid request. Past 5 to 7 networks, the marginal eCPM gain from additional competition typically falls below the ongoing engineering cost of maintaining the SDK and the performance cost it adds to every session. The right number depends on your revenue tier: under $5K per month, target 3 to 4 networks; $5K to $50K per month, target 4 to 6; $50K or more per month, up to 7 to 8 with a strict per-network evaluation framework.
Does adding more bidders always increase revenue?
No. The auction theory case for adding bidders is real: each additional independent demand source raises the expected clearing price. But two conditions limit that benefit in practice. First, diminishing returns: the marginal gain from adding bidder N is always smaller than from bidder N-1. Going from 2 to 3 networks produces a larger eCPM gain than going from 8 to 9. Second, demand duplication: if the new network sources from the same demand-side platforms as an existing network, it adds no independent competitive pressure, just a duplicate path to the same advertiser budget. A stack with 3 genuinely independent demand sources will typically outperform one with 8 networks that share significant DSP overlap.
What is the SDK weight cost of adding a bidder in mobile mediation?
Each major ad network SDK adds approximately 3 to 8MB to your app's compressed binary size, 5,000 to 20,000 Android methods toward the DEX method limit, and 100 to 300ms of additional initialization work at cold launch when run alongside 6 to 8 other SDKs. These costs apply to every app session, including sessions where that network never serves an impression. A network that contributes less than 3 percent of total ad revenue while adding 5MB to the binary and 80ms to init time is a candidate for removal on weight alone, independent of its fill rate.
How do I know if a bidder is dragging my mediation stack down?
Five signals: eCPM is flat or declining despite adding the network after the 4-week calibration period; the network's fill rate is consistently below 5 percent of ad requests; init failure rate is above 1 percent (proxy: compare ad requests to impression opportunities for that network versus others in the same stack); the network's bids consistently fall below your configured floor for that segment; or build complexity and crash rate increased after adding the network without a corresponding revenue increase. Any network triggering two or more of these signals is a candidate for removal.
Should I run AdMob, MAX, and LevelPlay all together?
No. AdMob, MAX, and LevelPlay are mediation platforms. You run one as your primary mediator, not all three simultaneously. You can access the ad demand those platforms represent through a single mediator using network adapters, but stacking the mediators themselves adds the binary weight and compatibility overhead of all three platforms without the auction efficiency of a single unified auction. Choose one mediator based on your revenue tier and demand mix. MAX gives the most per-instance control and built-in A/B testing and is generally the better choice for apps above $5K per month in ad revenue. AdMob is the right default for smaller apps or those primarily dependent on Google demand.
What is the difference between bidder count in web header bidding versus in-app mediation?
In web header bidding using Prebid, adding a bidder means adding one more parallel HTTP request during the bidding timeout window. The cost is marginal latency on the bid response. Prebid runs asynchronously and does not block page content from loading. Prebid's published test data shows 10 bidders outperforming 5 in most timeout configurations above 300ms. In in-app mediation, adding a network means adding a compiled SDK to your app binary that runs on the user's device in every session, contributing binary size, initialization time, memory, and crash surface whether or not that SDK ever wins an impression. The in-app practical range of 3 to 5 networks is lower than the web practical range of 5 to 10 bidders precisely because the cost of carrying each additional network is higher in mobile.
The right number is the one that earns its place
The right number of networks in your mediation stack is not the number your network reps tell you to run. It is the number where the marginal revenue from the last network you added exceeds the cost of carrying it: in binary size, in init overhead, in engineering time, and in crash surface.
Most operators who have never done a formal per-network ARPDAU evaluation are carrying at least one network that fails this test. The diagnostic is not complicated. The data is already in your mediation report.
If you want to run the analysis against your actual stack data rather than against generic benchmarks, that is what the free initial conversation is for. Book a free 30-minute call.