SKAdNetwork 4.0 Conversion Value Setup: Schema Design for iOS Growth Operators
SKAN 4.0 changes postback windows, fine vs coarse values, and source IDs. This is the schema design reference for iOS growth operators, not a vendor pitch.
SKAdNetwork 4.0 introduces three postback windows (0-2, 3-7, 8-35 days), a two-tier conversion value system (fine values in window 1 only, coarse values in all three windows), and a hierarchical source identifier that returns more digits at higher install volumes. Schema design decisions made at SKAN 2.x do not port cleanly to 4.0. Postbacks 2 and 3 deliver coarse values only; fine values in window 1 require meeting Apple's crowd anonymity threshold. Redownloads receive only postback 1. Design the schema around these constraints, not around the events you wish you could measure.
What SKAN 4.0 actually changed
If your SKAN schema was designed before iOS 16.1, it was built for a single-postback world. SKAN 4.0 changes the architecture, not just the field names, and schemas that were not rebuilt for 4.0 are missing the measurement the new system was designed to produce.
The most operationally significant change is the move from one postback to three. Where SKAN 2.x sent a single postback within 24-48 hours of install, SKAN 4.0 sends three postbacks spread across a 35-day measurement window. Window 1 covers days 0-2. Window 2 covers days 3-7. Window 3 covers days 8-35. Each window has its own conversion value measurement and its own postback, with separate privacy thresholds. The practical implication is that you now have three distinct signals instead of one, but only one of them can carry fine values, and the other two are always coarse. Schema design has to account for that split from the start.
The source identifier field changed from 2 digits to 4 digits. This sounds like an expansion, but the effective precision depends on your install volume. Apple truncates the source identifier by removing leading digits, based on crowd anonymity tier. Tier 1 (low volume): only the last 2 digits are returned. Tier 2 (medium volume): the last 3 digits. Tier 3 (high volume): all 4. A source identifier encoded as 5739 may come back as 39, 739, or 5739, depending on how much traffic the campaign generates. Encodings carried over from SKAN 2.x, where 2 digits were the only available space anyway, break when the operator has not planned for what 2-digit-only data looks like in their reporting system.
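To make the truncation behavior concrete, here is a minimal Swift sketch of a decoder built around a trailing-digits-first layout. The dimension names and digit assignments are illustrative assumptions, not an Apple convention; the point is that the always-delivered trailing pair must carry the dimension you cannot afford to lose.

```swift
// Illustrative layout: channel bucket in the trailing 2 digits (survives
// every tier), geo in the third digit from the right (tier 2+), creative
// in the leading digit (tier 3 only).
struct SourceDimensions {
    let channelBucket: Int
    let geoBucket: Int?
    let creativeBucket: Int?
}

// Takes the identifier exactly as it appears in the postback; the number
// of digits that arrive signals the truncation level.
func decodeSourceIdentifier(_ raw: String) -> SourceDimensions? {
    guard (2...4).contains(raw.count), let value = Int(raw) else { return nil }
    return SourceDimensions(
        channelBucket: value % 100,
        geoBucket: raw.count >= 3 ? (value / 100) % 10 : nil,
        creativeBucket: raw.count == 4 ? value / 1000 : nil
    )
}

// "5739" decodes fully; a tier 1 postback delivers "39", and only the
// channel bucket survives.
```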
Web-to-app attribution is new in SKAN 4.0. Ads served in Safari can now trigger a SKAN postback via a link flow rather than an in-app impression, using a sourceDomain field instead of sourceAppID for web-originated conversions. AppsFlyer, Adjust, and Singular all support sourceDomain parsing as of 2026. Smaller MMPs lag. If you run web-to-app campaigns, confirm your MMP has shipped this before expecting attribution to flow correctly.
The fine vs coarse conversion value split is the other major structural change. SKAN 2.x had a single 6-bit fine value (0-63) in its one postback. SKAN 4.0 adds a three-state coarse value (low, medium, high) alongside the fine value in window 1, and uses coarse values alone in windows 2 and 3. This two-tier system exists because Apple needed a way to return some conversion signal even for low-volume cohorts that do not meet the fine value privacy threshold. From a schema design perspective, coarse values are not fallbacks. They are the primary signal for any event that happens after day 2.
One operational point worth stating directly: SKAN 4.0 has been the default since iOS 16.1, released October 2022. If your SKAN schema has not been redesigned since the 4.0 release, you are still measuring through a 2.x lens on a 4.0 system. Windows 2 and 3 are sending postbacks that your schema was never built to use.
Designing your conversion value schema
Schema design is where the revenue signal is either captured or lost. The API is not the hard part. The hard part is deciding what to measure in each window and what to do when the privacy threshold cuts the fine value off.
There are three schema strategies for window 1 fine values, and the right one depends on your app's revenue model.
The revenue-mapped approach assigns each fine value (0-63) to a revenue band. This works for IAP-heavy or subscription apps where a meaningful percentage of installs make a purchase in days 0-2. The risk is that most installs produce no day 0-2 revenue, so the majority of postbacks cluster in the lowest revenue bucket or return null when the crowd anonymity threshold is not met. If less than 15-20% of your installs generate a day 0-2 purchase, revenue mapping produces a low-information signal.
The engagement-mapped approach assigns fine values to behavioral events: tutorial completion, day 2 open, feature activation, session depth. This works well for retention-optimized UA and for games where early engagement is a reliable LTV predictor. The risk is that the events you map may not correlate strongly enough with actual revenue for bid optimization to improve on them without validation.
The predictive LTV-mapped approach is the one most often skipped and most often needed for ad-monetized apps. Fine values map to cohort-level predicted revenue tiers based on early engagement signals, calibrated against historical cohort data. Day 0-2 ad revenue per user is usually too small to carry a useful signal in a revenue bucket. But session count, session depth, and return behavior in the first 48 hours are measurable and can be correlated with 30-day ARPU if you have the historical data to run the calibration. This approach requires more upfront work but produces a fine value schema that is actually usable for bid optimization.
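A minimal sketch of what the pLTV-mapped approach looks like in code, assuming hypothetical signal names, weights, and tier cutoffs; in practice every number below comes out of your own calibration against historical 30-day ARPU.

```swift
// Hypothetical day 0-2 engagement signals. Weights and cutoffs are
// placeholders to show the shape of the mapping, not calibrated values.
struct Day2Signals {
    let sessionCount: Int
    let totalSessionMinutes: Double
    let returnedOnDay2: Bool
}

func fineValue(for signals: Day2Signals) -> Int {
    // Toy pLTV score; a real one is fit against historical cohort ARPU.
    let score = Double(signals.sessionCount)
        + signals.totalSessionMinutes * 0.2
        + (signals.returnedOnDay2 ? 5.0 : 0.0)

    // Bucket the score into predicted-LTV tiers within the 0-63 space,
    // leaving headroom between tiers for later schema revisions.
    switch score {
    case ..<3.0:  return 0    // predicted low-LTV cohort
    case ..<8.0:  return 16
    case ..<15.0: return 32
    default:      return 48   // predicted top cohort
    }
}
```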
Window 2 and 3 coarse values are not an afterthought. Three coarse states sound like a very limited signal, but calibrated well they carry meaningful data. Most MMP configuration UIs default to distributing installs evenly across low, medium, and high by revenue quantile or event count. If your install distribution is heavily skewed (as most are), this default produces a low bucket that captures 70-80% of all installs. A coarse value where 80% of events fall in one tier is nearly useless for differentiation. Recalibrate thresholds based on your actual install distribution, or map the three tiers to specific milestone events that matter for your model.
For subscription apps: low equals no trial started, medium equals trial started, high equals subscription activated. For ad-monetized apps: low equals no day-7 retention, medium equals day-7 retained, high equals day-30 retained or high session-count cohort. The exact mapping depends on your data, but the principle is that each coarse tier should represent a meaningfully different user segment, not an arbitrary revenue percentile.
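As a sketch, the subscription mapping above translates directly into a milestone-to-coarse-value function; the milestone enum is illustrative, while SKAdNetwork.CoarseConversionValue is Apple's type on iOS 16.1+.

```swift
import StoreKit

// Illustrative milestone states for a subscription app, following the
// low / medium / high mapping described above.
enum SubscriptionMilestone {
    case installedOnly
    case trialStarted
    case subscriptionActivated
}

@available(iOS 16.1, *)
func coarseValue(for milestone: SubscriptionMilestone) -> SKAdNetwork.CoarseConversionValue {
    switch milestone {
    case .installedOnly:         return .low
    case .trialStarted:          return .medium
    case .subscriptionActivated: return .high
    }
}
```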
One capacity constraint that trips up teams mapping multiple measurement components: fine values are 6-bit, meaning 64 possible values (0-63). If you combine revenue bands, event flags, and engagement tiers in the same fine value space, the total combinations must not exceed 64. Some MMP configurators surface an alert when this limit is exceeded. The resolution is usually to reduce granularity on one dimension or remove a measurement component that is lower priority.
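One way to keep the 64-value budget explicit is to treat the fine value as packed bits. This sketch assumes a 4-bit revenue band and a 2-bit engagement tier, which together consume exactly the available space:

```swift
// 16 revenue bands x 4 engagement tiers = 64 combinations -- exactly the
// 6-bit budget. Adding a third dimension forces one of these to shrink.
func packedFineValue(revenueBand: Int, engagementTier: Int) -> Int? {
    guard (0..<16).contains(revenueBand), (0..<4).contains(engagementTier) else { return nil }
    return (revenueBand << 2) | engagementTier
}

// Inverse mapping for the reporting side.
func unpack(fineValue: Int) -> (revenueBand: Int, engagementTier: Int) {
    (fineValue >> 2, fineValue & 0b11)
}
```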
Fine vs coarse values and the privacy threshold
If your fine value null rate is high and your campaigns are mid-scale, the privacy threshold is almost certainly the reason. The mechanism is straightforward: Apple withholds fine values when the install cohort does not meet its crowd anonymity threshold. Apple has not published the exact number. Operator experience puts it around 200 installs per source campaign per day as the floor for fine values to appear consistently. Below roughly 100 installs per campaign per day, expect coarse-only postback 1 data as the norm.
The practical implication for schema design: if a meaningful portion of your campaigns run below that threshold (which is most non-top-performing campaigns), the schema needs to work with coarse values in window 1, not just in windows 2 and 3. Coarse values have a lower crowd anonymity threshold and appear more reliably. Designing for coarse-only data as the default, with fine values as the bonus when volume is sufficient, is more realistic than designing for fine values and treating coarse as the fallback.
When a campaign does meet the fine value threshold, postback 1 returns both values. The format is effectively "coarse band + fine integer" together, sometimes described as High 42 where High is the coarse tier and 42 is the fine value. The coarse value in a high-volume postback 1 is not redundant. It is the same signal format that windows 2 and 3 will return, so it can be used for cross-window consistency checks. If High in window 1 coarse maps to a different user cohort than High in window 2 coarse, the mapping definitions need review.
Google Ads has a specific constraint that is documented in Google's own help center but rarely synthesized into its operator implications. As of 2026, Google Ads conversion modeling uses fine values from postback 1 only. Coarse values from all windows, including window 1, are excluded from Google's SKAN optimization (support.google.com/google-ads/answer/13286653). For teams running Universal App Campaigns on iOS, this means: if your window 1 fine value rate is low, Google Ads is running iOS campaigns with no SKAN feedback at all. The symptom is SKAN installs appearing as attributed in the MMP but campaign performance not improving over time. The fix is either to consolidate iOS campaigns to meet the threshold, or to accept that fine values will not be available at your volume and use Google's SKAdNetwork modeling alongside coarse value data from your MMP for internal reporting.
The lockWindow decision is also a fine vs coarse question in practice. Calling lockWindow finalizes the conversion value and triggers the postback 1 timer within approximately 24 hours. This accelerates feedback for bid optimization. It also means any event that happens after the lock does not update the conversion value for that user. If your highest-value window 1 event is a day 0 action (registration, first ad impression, tutorial completion), locking early makes sense. If your highest-value window 1 event is a day 2 return open or first in-app purchase, locking after a day 0 action kills the signal you actually care about. The decision is not about speed; it is about which event you are willing to commit to as your window 1 measurement anchor.
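In code, the decision reduces to when you pass lockWindow: true to Apple's update call. A minimal sketch using the iOS 16.1+ API, with the anchor-event logic left as an assumption about your own event taxonomy:

```swift
import StoreKit

@available(iOS 16.1, *)
func reportConversionEvent(fine: Int,
                           coarse: SKAdNetwork.CoarseConversionValue,
                           isCommittedAnchor: Bool) {
    // lockWindow: true finalizes window 1 -- no later day 0-2 event can
    // raise the value -- so pass it only on the event you have chosen
    // as the measurement anchor.
    SKAdNetwork.updatePostbackConversionValue(
        fine,
        coarseValue: coarse,
        lockWindow: isCommittedAnchor
    ) { error in
        if let error {
            // Updates can fail transiently; log and retry per your policy.
            print("SKAN update failed: \(error)")
        }
    }
}
```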
Postback windows and the redownload trap
The timing of SKAN 4.0 postbacks is not fixed. Each window has a random delay built in. Window 1 (days 0-2 post-install) sends its postback with a 24-48 hour random delay. Windows 2 and 3 send with a 24-144 hour random delay. A postback for a day 3-7 event can arrive on day 9, day 11, or later. Attribution pipelines that assume a fixed arrival time will either misattribute or drop late postbacks. Before drawing any conclusions from window 2 and 3 data, confirm how your MMP handles late-arriving SKAN postbacks. Some dashboards show near-real-time data without flagging that a significant portion of window 2 postbacks may not have arrived yet.
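A small sketch of the arrival-window arithmetic, useful for deciding when a cohort's window data can be treated as complete. The day offsets follow the 0-2 / 3-7 / 8-35 structure and the delay ranges described above:

```swift
import Foundation

// Earliest date by which all of a window's postbacks can have arrived,
// given the window close day plus Apple's maximum random send delay
// (48 hours for window 1, 144 hours for windows 2 and 3).
func earliestCompleteDate(installDate: Date, window: Int) -> Date? {
    let closeDay: Int
    let maxDelayHours: Double
    switch window {
    case 1: closeDay = 2;  maxDelayHours = 48
    case 2: closeDay = 7;  maxDelayHours = 144
    case 3: closeDay = 35; maxDelayHours = 144
    default: return nil
    }
    return installDate
        .addingTimeInterval(Double(closeDay) * 86_400)
        .addingTimeInterval(maxDelayHours * 3_600)
}

// For a day 0 install, window 2 is not complete until day 13:
// the window closes on day 7, plus up to 144 hours of delay.
```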
The most underexplained operator pitfall in SKAN 4.0 is the redownload classification. When a user reinstalls an app they previously installed, Apple classifies the event as a redownload, not a new install. SKAN 4.0 sends only postback 1 for redownloads. Windows 2 and 3 do not fire. This is not configurable. It is a structural property of the framework.
The operator consequence is specific: if you run re-engagement or win-back campaigns on iOS and you observe that window 2 and 3 postback counts are lower than expected for those campaigns, redownload classification is the most likely cause. It is not a measurement configuration error. It looks like a configuration error because the symptom (missing postbacks) is the same as what you would see if windows 2 and 3 were misconfigured. The distinction matters because the fix is different. A misconfiguration can be corrected. A redownload restriction cannot be worked around. Design win-back campaign measurement around window 1 coarse values as the primary signal, because that is the only postback you will reliably receive for reinstalled users.
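For pipelines that ingest raw postbacks, the redownload flag and sequence index make this filterable. A sketch using Apple's SKAN 4 postback field names (conversion-value, coarse-conversion-value, redownload, postback-sequence-index); verify the exact field set against Apple's postback documentation before relying on it:

```swift
import Foundation

struct SKANPostback: Decodable {
    let conversionValue: Int?          // fine value; absent below threshold
    let coarseConversionValue: String? // "low" | "medium" | "high"
    let redownload: Bool
    let postbackSequenceIndex: Int     // 0, 1, or 2

    enum CodingKeys: String, CodingKey {
        case conversionValue = "conversion-value"
        case coarseConversionValue = "coarse-conversion-value"
        case redownload
        case postbackSequenceIndex = "postback-sequence-index"
    }
}

// Win-back cohorts should be read from window 1 coarse values only:
// redownloads never produce sequence index 1 or 2.
func winBackCoarseSignal(_ postbacks: [SKANPostback]) -> [String] {
    postbacks
        .filter { $0.redownload && $0.postbackSequenceIndex == 0 }
        .compactMap { $0.coarseConversionValue }
}
```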
A related timing issue that is less discussed: off-by-one window boundary errors. If a user installs on day 0 and your target window 2 event is a trial start that typically happens on day 3, events that occur in the final hours of day 2 or the first hours of day 3 sit close enough to the window boundary that small variations in install timestamps push them into the adjacent window. Design window 2 schema around events that happen solidly in the middle of the days 3-7 range, not at the edges. If your subscription trial start typically happens within the first 4 hours of day 3, consider whether it is a window 1 late event or a window 2 event depending on your install timing distribution.
For ad-monetized apps, the window 2 and 3 signals serve a different purpose than for subscription or IAP apps. Ad revenue accrues over time through repeated sessions. Window 2 coarse values (days 3-7) are most valuable when mapped to retention milestones (day-7 active, session count threshold, return visit on day 5 or later) rather than revenue amounts. Window 3 (days 8-35) is the signal for longer-term retention, and for most ad-monetized apps this is where the highest-LTV cohort differentiation lives. The coarse signal is blunt, but a high coarse value in window 3 that maps to day-30 retained users is meaningful signal for campaign optimization, even if it is not granular.
A note on redownload classification and paid re-engagement budgets: if a material portion of your iOS re-engagement budget is producing redownloads and you have designed your measurement around three-window attribution, you are attributing revenue signals that will never arrive. The measurement model and the campaign type are misaligned. Audit your re-engagement campaign cohorts for redownload rate before interpreting window 2 and 3 performance data.
SKAN 4 at the mediation layer
For ad-monetized apps, SKAdNetwork attribution does not flow only through the MMP. It flows through the mediation stack too. Understanding what each mediation platform reports back to publishers is necessary for interpreting what you are actually measuring.
AppLovin MAX participates in SKAN 4.0 as both a demand-side network and as a mediation platform. The MAX SDK registers its SKAdNetwork IDs and handles impression-level SKAN registration for its own demand. For SKAN reporting, MAX surfaces postback data aggregated at the campaign level in the MAX Ads dashboard, subject to crowd anonymity truncation; raw postback access is not publisher-accessible at the time of writing. At tier 1 crowd anonymity (the most common tier for mid-scale campaigns), the campaign granularity collapses to a 2-digit source identifier. For publishers running many campaigns through MAX mediation, the practical attribution resolution at tier 1 is very low. This is a property of SKAN, not a MAX limitation specifically, but it is the reality of using MAX as your primary iOS measurement channel for mid-scale campaigns.
For operators choosing between AppLovin MAX and Unity LevelPlay as mediation platforms, the SKAN reporting difference is one dimension of that decision. See AppLovin MAX vs Unity LevelPlay for the full comparison. For the AdMob mediation context specifically, see AdMob Mediation vs AppLovin MAX.
AdMob handles SKAN registration for Google Ads demand. Publishers using AdMob mediation receive SKAN attribution through GA4 or Google Ads reporting interfaces. The Google Ads coarse value exclusion applies here: Google's optimization uses fine values from postback 1 only. Coarse values are excluded from modeling. The AdMob mediation adapter's SKAN ID list requires ongoing maintenance; the Info.plist section below covers this. You can audit your AdMob mediation configuration with the AdMob Approval Checker.
IronSource exposes SKAN reporting in LevelPlay (formerly under the SideWalk brand). The data resolution is what matters, not the product name. For mediation publishers, IronSource campaign-level SKAN data is subject to the same crowd anonymity truncation as MAX and AdMob. IronSource's reporting does distinguish SKAN-attributed installs from non-SKAN installs, which is useful for understanding what share of your iOS installs are measurable at all. On re-engagement campaigns run through LevelPlay, the redownload restriction means window 2 and 3 postbacks from IronSource demand will be similarly limited.
The structural problem at the mediation layer is not platform-specific. At tier 1 crowd anonymity, a 2-digit source identifier maps to 100 possible campaign identifiers. Publishers running more than a handful of campaigns across multiple networks through mediation receive a campaign-level attribution bucket, not a campaign identifier. If you are trying to run granular iOS campaign optimization through a mediation stack at moderate volumes, the attribution signal is going to be coarser than the campaign granularity you are managing. That mismatch needs to be factored into how you interpret per-campaign ROAS.
Three questions worth asking your mediation platform before drawing conclusions from SKAN data: Does the dashboard surface SKAN data at the individual campaign level or aggregated across campaigns? At what crowd anonymity tier does your reporting start producing usable campaign-level data? How does the pipeline handle the 24-144 hour postback delay in window 2 and 3 reporting?
For a reference on which SDK and adapter versions to run in your mediation stack, see the Mediation SDK & Adapter Compatibility Guide.
Configuring Info.plist and SKAdNetwork IDs
Every ad network whose campaigns should receive SKAN attribution must have its SKAdNetwork ID declared in the app's Info.plist under SKAdNetworkItems. Missing IDs mean campaigns from that network receive no attribution. The postback still fires (the ad was shown, Apple recorded the event), but your MMP cannot attribute the install to the correct campaign. The conversion is credited to another eligible network or to nothing at all. The diagnostic signal is SKAN attribution volume dropping on one network's line with no clear cause.
This is not a one-time configuration. SKAdNetwork ID lists change when networks add new IDs with SDK updates or new campaign types. AppLovin MAX, AdMob, Meta, and IronSource all update their ID lists with SDK releases. Apps that do not update Info.plist after a network publishes new IDs silently lose attribution for campaigns running on those IDs. There is no error. There is no warning. The symptom is a lower-than-expected SKAN attribution rate for a specific network after an SDK update, which looks identical to a dozen other possible causes.
For mediation apps, the Info.plist must contain the union of IDs from every network in the mediation stack. Updating the MAX SDK requires updating the MAX-generated ID list. Updating the Google adapter requires checking whether Google has added any IDs in that adapter release. These are separate update steps, and they are both easy to skip because the build does not fail without them.
The AppLovin MAX Integration Manager auto-generates the Info.plist section as part of the adapter update flow. Use it rather than maintaining the list manually. Google's AdMob developer documentation lists the required IDs for Google Ads campaigns. IronSource publishes its current ID list in the LevelPlay integration documentation. The cadence for updating the ID list should match the adapter update cadence: when you update an adapter, check the corresponding network's current ID list at the same time.
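The audit can also be automated in a debug build or unit test. A minimal sketch; requiredIDs is a placeholder you would populate from each network's published list at adapter-update time:

```swift
import Foundation

// Returns the required SKAdNetwork IDs missing from the app's declared
// SKAdNetworkItems. IDs are lowercased because Apple requires lowercase
// identifiers and the comparison should not be case-sensitive.
func missingSKAdNetworkIDs(requiredIDs: Set<String>) -> Set<String> {
    let items = Bundle.main.object(forInfoDictionaryKey: "SKAdNetworkItems")
        as? [[String: String]] ?? []
    let declared = Set(items.compactMap { $0["SKAdNetworkIdentifier"]?.lowercased() })
    return Set(requiredIDs.map { $0.lowercased() }).subtracting(declared)
}

// A non-empty result means campaigns on those IDs are serving while
// their attribution is silently dropped.
```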
The Mediation SDK Checker audits your dependency files for known compatibility issues, including cases where updated mediation SDKs have published new SKAdNetwork IDs that are not yet in the app's declared list. For the adapter version compatibility dimension of this, the full reference is in the Mediation SDK & Adapter Compatibility Guide.
The silent failure mode of missing SKAdNetwork IDs is worth naming clearly: you are running paid iOS campaigns, the campaigns are serving, but the attribution for a portion of those campaigns is lost. The budget is spent. The conversions may still happen. The measurement signal is gone. For any app running meaningful iOS UA spend, a quarterly audit of Info.plist against the current ID lists from every network in the mediation stack is a defensible minimum maintenance cadence.
Measuring revenue with SKAN 4
SKAN 4.0's three-window architecture fits some revenue models better than others, and the schema design for a subscription app looks nothing like the schema for an ad-monetized app. Getting this wrong means the schema is technically valid but carries no useful optimization signal.
For subscription apps, the key revenue event is trial-to-paid conversion, which typically happens on days 3-7. That is window 2 territory, and window 2 returns coarse values only. The schema decision for a subscription app flows from this fact: window 2 coarse values must be mapped to subscription-meaningful milestones. A workable default is low for no trial started, medium for trial started, high for subscription activated. This single mapping decision determines whether SKAN 4 produces any actionable signal for subscription UA optimization. Everything else in the schema is secondary.
In window 1 (days 0-2) for a subscription app, use fine values to capture onboarding depth and early engagement signals that predict trial starts. Tutorial completion, account creation, feature activation, session depth on day 1. These are the early indicators that correlate with a trial start on day 3-5. Do not map window 1 fine values to revenue for a free-to-try subscription app; day 0-2 revenue is typically zero and carries no optimization signal.
For IAP apps, window 1 fine values can directly encode revenue bands if day 0-2 purchase rates are high enough to meet the crowd anonymity threshold. Games with immediate IAP purchase prompts (starter packs, battle passes, first-session offers) often see enough day 0-2 purchase volume for revenue fine values to be meaningful. For apps where day 0-2 purchase rates are below the threshold, use engagement signals in window 1 and map purchases to window 2 coarse value milestones.
For ad-monetized apps, the schema problem is different and harder. Day 0-2 ad revenue per user is usually too small to be useful as a fine value signal. The right approach is to map fine values to engagement depth signals (session count in 48 hours, session length, day 2 return visit) that are calibrated predictors of 30-day ad ARPU. This requires historical cohort data to validate the calibration before the schema goes into production, but it produces fine values that are actually usable for bid optimization. Window 2 and 3 coarse values for an ad-monetized app should map to retention milestones: day-7 active for window 2, day-30 retained or high-session cohort for window 3.
A point that needs stating clearly because most MMP documentation glosses over it: SKAN 4.0 gives you a non-redownload install count (from postback 1), a coarse 3-35 day milestone signal (from windows 2 and 3), and a fine 0-2 day early engagement signal (from window 1 when thresholds are met). It does not give you campaign-level ROAS. It does not give you user-level LTV. At low crowd anonymity tiers, it does not give you campaign-level fine granularity at all. SKAN 4.0 is a meaningfully better system than SKAN 2.x. It is still a privacy-thresholded aggregate, not deterministic user-level attribution. Designing the schema around what the system can actually deliver, rather than what deterministic attribution could deliver, is the adjustment most growth teams have not made.
If you are running a subscription app or a premium ad-monetized app and your window 2 and 3 coarse value mapping is not designed around your actual conversion events, that schema gap is costing you optimization signal. That is the kind of structured engagement we cover. Start with the free 30-minute call.
Known gotchas and recent operator pain
Each issue below follows the same structure: what breaks, why it breaks, and what to do about it. If your setup matches, treat it as a priority item.
Google Ads ignores coarse values from all postback windows
Google Ads conversion modeling, as of 2026, uses fine values from postback 1 only. Coarse values from windows 1, 2, and 3 are excluded from Google's SKAN optimization (source: support.google.com/google-ads/answer/13286653). If your window 1 fine value null rate is high (because of campaign volume below threshold, or schema misconfiguration that returns no fine value), Google Ads is running iOS UA with no SKAN feedback. The symptom is SKAN installs tracked by your MMP but campaign performance not improving over time, even as attribution data accumulates. The fix: consolidate iOS campaigns to reach the fine value threshold, or accept that fine values will not appear at your volume and use Google's SKAdNetwork modeling alongside coarse-value MMP data for internal reporting. Running fragmented low-volume campaigns across Google Ads on iOS and expecting SKAN to optimize them is not realistic at current threshold levels.
Coarse value thresholds miscalibrated at setup
Most MMP configuration UIs default to distributing installs across coarse tiers by revenue or event count quantile, splitting the distribution roughly into thirds. If your install distribution is heavily right-skewed (as most app install distributions are), the default thresholds put 70-80% of installs in low. A coarse signal where the majority of installs land in one tier is not a signal. It is noise with three labels. Recalibrate thresholds based on your actual install distribution. Source: operator pattern observed in r/googleads threads and Aarki's SKAN 4 schema analysis. The fix is not technical; it is a threshold configuration change in your MMP dashboard. But it requires having the distribution data to calibrate against.
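Checking for this failure mode takes a few lines once you can export per-install day 0-2 values. A sketch with placeholder threshold parameters standing in for whatever the MMP currently has configured:

```swift
// Shares of installs landing in each coarse tier under the current
// thresholds. If low comes back at 0.7-0.8, the tiers are labels,
// not signal: move the boundaries or switch to milestone mapping.
func coarseTierShares(values: [Double],
                      lowMax: Double,
                      mediumMax: Double) -> (low: Double, medium: Double, high: Double) {
    guard !values.isEmpty else { return (0, 0, 0) }
    let n = Double(values.count)
    let low = Double(values.filter { $0 <= lowMax }.count) / n
    let medium = Double(values.filter { $0 > lowMax && $0 <= mediumMax }.count) / n
    return (low, medium, 1 - low - medium)
}
```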
Source identifier truncation breaks campaign mapping
An operator encodes source identifiers as a compound key: [network-id][campaign-id] in 4 digits, where 12 is the network and 34 is the campaign. At crowd anonymity tier 1, Apple strips the leading digits and only 34 is returned. The network dimension is lost. Any reporting that joins SKAN data to campaign metadata using the full 4-digit source ID fails for tier 1 postbacks, which are the most common postback type for mid-scale campaigns. The fix is to design source identifier encodings so the trailing 2 digits are independently meaningful. Encode your most important campaign dimension (campaign type, major audience bucket, channel) in the last 2 digits. Reserve the leading digits for refinement (geo, creative, bid strategy) that only appears at higher volume tiers. If the trailing 2 digits map to nothing actionable in your reporting system, tier 1 postbacks are effectively unattributed. Source: Apple WWDC 2022 implementation notes.
lockWindow on a day-2 event kills the signal
Calling lockWindow immediately after a day 0 event to accelerate postback 1 timing means no day 1-2 events will update the conversion value, even if those events would have raised it. This is by design: lockWindow is final. Operators who call it after tutorial completion (day 0) lose visibility into any day 2 signal, whether that is a second session, a return open, or a first in-app event. The result is optimization on a day 0-only signal even when day 2 signals are more predictive. Use lockWindow only when you are confident that the day 0 event you are locking on is your best available early LTV signal and that day 1-2 events do not add materially.
Redownload campaigns receive only postback 1
Re-engagement and win-back campaigns targeting previous users produce redownloads, not new installs in Apple's classification. SKAN 4.0 sends postback 1 only for redownloads. Windows 2 and 3 do not fire. Operators who run win-back campaigns and see lower-than-expected window 2 and 3 postback counts often diagnose a measurement configuration error and start auditing the schema. The schema is not the problem. This is a structural property of SKAN 4.0. Design win-back measurement around window 1 coarse values, which are the only reliable postback for reinstalled users. Budget allocated to win-back campaigns on iOS should be measured against window 1 coarse return signal, not against a three-window attribution model.
MMP postback server timing assumptions
Some MMP implementations assume postback 2 and 3 will arrive within a predictable interval after the window closes. The actual delay is 24-144 hours, randomly selected by Apple. Querying SKAN cohort data 24 hours after window 2 closes (day 8) may capture less than half of window 2 postbacks, with the rest arriving on days 10-12. MMP reporting dashboards that surface SKAN data in near-real-time without flagging postback latency cause operators to underestimate window 2 and 3 signal during the arrival window. Check how your MMP surfaces postback latency in its SKAN reporting before drawing conclusions from incomplete cohort data. A cohort that looks like it has low window 2 engagement on day 9 may look normal by day 13.
Info.plist SKAdNetwork IDs drift after adapter updates
AppLovin MAX, Google, and other networks add new SKAdNetwork IDs in SDK updates. An app that updates the MAX SDK but does not regenerate the Info.plist SKAdNetwork ID list will silently lose attribution for campaigns on any new IDs added in that SDK release. The failure is invisible at the build level. The postback fires but is unattributed in the MMP. The symptom is an attribution rate drop on one network after an SDK update, which looks like a dozen other possible causes. Run the MAX Integration Manager's ID sync after every adapter update. Do not treat Info.plist as a set-once configuration file. Treat it as a dependency that needs updating on the same cadence as your adapters.
SKAN 4.0 web-to-app attribution requires separate MMP configuration
Web-to-app attribution in SKAN 4.0 uses sourceDomain instead of sourceAppID. Not all MMPs parse this field and surface it correctly in their postback reporting. If you run web-to-app campaigns and your MMP does not handle sourceDomain, web-originated SKAN installs land in an "unknown source" bucket or are not attributed at all. AppsFlyer, Adjust, and Singular all support sourceDomain parsing as of 2026. Smaller MMPs lag. Confirm your MMP's sourceDomain support status before launching web-to-app campaigns and expecting SKAN attribution to flow.
Testing and validating your SKAN 4 schema
The only structured way to validate a SKAN 4 schema before production is Apple's StoreKit Test framework, available in Xcode 13.3+. The framework supports simulating postbacks with fine and coarse values, custom source identifiers, and multiple postback windows. This is not optional validation for schema changes. Running a schema change to production without testing it in StoreKit Test is guesswork.
The testing sequence uses SKAdTestSession. Create a test session, configure an SKAdImpression, validate it with testSession.validate(impression:publicKeyComponents:), then configure SKAdTestPostback objects for each expected window and call testSession.flushPostbacks() to trigger delivery to your postback server. This simulates the postback flow without the crowd anonymity restrictions of production. You can test that conversion value updates fire at the correct events, that the correct window's postback sends, and that coarse value mapping matches the intended events.
What Xcode's test session does not reproduce: crowd anonymity tier simulation is not precise. The test does not replicate Apple's actual privacy threshold logic. Testing whether a campaign will meet the fine value threshold in production requires production traffic data, not a local test. StoreKit Test confirms that the postback fires correctly; it does not confirm that fine values will appear at your actual campaign volume.
Most major MMPs (AppsFlyer, Adjust, Singular) provide a SKAN debugger mode that logs postback content in near-real-time for a test device. Use this to confirm that conversion value updates are firing at the expected events, that the correct window's postback is being sent, and that coarse value mapping matches the intended event assignments. MMP debugger mode is useful for confirming the implementation; StoreKit Test is useful for confirming the schema design.
The canary release pattern reduces the risk of a schema change that degrades production data. Push schema changes to a 5-10% traffic slice first. Watch postback 1 fine value null rate, coarse value distribution across windows 1-3, and window 2-3 postback arrival rate for 7-10 days before expanding to full traffic. A schema change that inadvertently raises the fine value null rate in window 1 (by setting thresholds incorrectly or locking the window too early) is detectable in a canary slice before it affects the full production cohort. A 7-day canary also gives you enough window 2 postback data to see whether the coarse value distribution is calibrated correctly before you commit to full rollout.
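The two canary metrics named above reduce to simple aggregations over whatever postback export the MMP provides. A sketch with a simplified placeholder record type:

```swift
// Simplified postback record for canary monitoring; map your MMP's
// export fields onto it.
struct CanaryPostback {
    let window: Int      // 1, 2, or 3
    let fine: Int?       // nil when withheld below the fine threshold
    let coarse: String?  // "low" | "medium" | "high"
}

// Window 1 fine value null rate: a jump after a schema push is the
// halt-the-rollout signal.
func fineNullRate(_ postbacks: [CanaryPostback]) -> Double {
    let w1 = postbacks.filter { $0.window == 1 }
    guard !w1.isEmpty else { return 0 }
    return Double(w1.filter { $0.fine == nil }.count) / Double(w1.count)
}

// Coarse tier distribution per window: a distribution collapsing into
// one tier means the calibration is off.
func coarseDistribution(_ postbacks: [CanaryPostback], window: Int) -> [String: Double] {
    let tiers = postbacks.filter { $0.window == window }.compactMap { $0.coarse }
    guard !tiers.isEmpty else { return [:] }
    var counts: [String: Double] = [:]
    for tier in tiers { counts[tier, default: 0] += 1 }
    return counts.mapValues { $0 / Double(tiers.count) }
}
```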
One thing the canary does not tell you: whether the schema is producing the right optimization signal for your ad networks. A schema can return postbacks with valid values and still be measuring the wrong events. Validating schema quality against bid optimization outcomes (campaign performance before and after the new schema reaches the ad network's model) takes 2-4 weeks of production data and requires comparing campaign metrics across the schema transition.
If you are working through a SKAN 4 schema change and the test results are not matching expected behavior, that is usually a sign that the schema design itself needs review before the implementation is worth validating further. That is the kind of structured engagement we cover. Start with the free 30-minute call.
Frequently Asked Questions
What is the difference between fine and coarse conversion values in SKAdNetwork 4.0?
Fine conversion values are 6-bit integers ranging from 0-63, available only in postback 1 (the 0-2 day measurement window). They appear when the install cohort meets Apple's crowd anonymity threshold, which requires sufficient install volume per campaign. Coarse conversion values are one of three states: low, medium, or high. They appear across all three postback windows and have a lower crowd anonymity threshold than fine values. When a campaign does not meet the fine value threshold, postback 1 returns coarse only. Postbacks 2 and 3 always return coarse values regardless of volume. Google Ads uses fine values from postback 1 for conversion modeling and excludes coarse values from optimization.
Why are my SKAN 4 postbacks returning null conversion values?
Null conversion values in postback 1 mean the campaign did not meet Apple's crowd anonymity threshold for fine values. Apple does not publish the exact threshold, but operator experience suggests roughly 200 or more installs per campaign per day is needed for fine values to appear consistently. Below that volume, postback 1 falls to coarse values. If no coarse value mapping is configured, or thresholds are miscalibrated so nearly all installs land in the same tier, the postback returns a null or low-information coarse value. Null postbacks in windows 2 and 3 on re-engagement campaigns often indicate redownload classification: SKAdNetwork 4.0 sends only postback 1 for reinstalled apps.
How should a subscription app design its SKAN 4.0 conversion value schema?
The trial-to-paid conversion event for most subscription apps happens on days 3-7, which falls in postback window 2. Window 2 returns coarse values only. Map your coarse value tiers to subscription-meaningful milestones: for example, low means no trial started, medium means trial started, high means subscription activated. In window 1 (days 0-2), use fine values to map onboarding depth and early engagement signals that predict trial starts, since you will not have the subscription signal at that point. Do not rely on window 1 revenue fine values if your app is free-to-try. Day 0-2 revenue from a subscription app is typically zero and carries no useful optimization signal.
How does the SKAdNetwork 4.0 source identifier hierarchy work?
The source identifier field (previously campaign ID) expanded from 2 to 4 digits in SKAdNetwork 4.0, providing up to 10,000 distinct identifiers. Apple returns the identifier at different levels of precision depending on the install volume tier, truncating by removing leading digits: tier 1 (low volume) returns only the last 2 digits, tier 2 (medium volume) returns the last 3, tier 3 (high volume) returns all 4. Encodings must be designed to remain meaningful at each truncation level. A common approach is to encode campaign type or major audience bucket in the last 2 digits, and refine with geography or creative in the leading digits. If the trailing 2 digits alone map to nothing useful in your reporting system, you will have no actionable data for low-volume campaigns.
Do AppLovin MAX, AdMob, and IronSource handle SKAdNetwork 4 differently at the mediation layer?
All three platforms register their own SKAdNetwork IDs and handle impression-level attribution for their own demand. The publisher-facing difference is in reporting. AppLovin MAX surfaces SKAN cohort data in the MAX Ads dashboard, subject to crowd anonymity truncation. AdMob exposes SKAN data through GA4 or Google Ads reporting, using fine values from postback 1 for optimization and excluding coarse values. IronSource surfaces SKAN attribution in LevelPlay reporting (formerly the SideWalk tool). None of the platforms give publishers access to raw postback data. At tier 1 crowd anonymity, campaign-level resolution collapses to a 2-digit source ID across all three platforms, making campaign-specific ROAS measurement impractical for smaller campaigns.
What happens to SKAN 4 attribution for re-engagement and win-back campaigns?
When a user reinstalls an app they previously installed, Apple classifies the event as a redownload. SKAdNetwork 4.0 sends only postback 1 for redownloads. Postbacks 2 and 3 do not fire for redownload events. If you run win-back campaigns and observe lower-than-expected window 2 and 3 postback counts, redownload classification is the most likely cause. This is not a configuration error. It is a structural property of SKAdNetwork 4.0. Design win-back campaign measurement around window 1 coarse values, which are the only reliable postback signal for reinstalled users.
When to bring someone in
SKAN 4.0 is a better measurement system than 2.x. Whether it is a useful one depends entirely on how the schema is designed. A technically valid schema that measures the wrong events, or that returns postbacks with well-formatted values that carry no optimization signal, is indistinguishable from a broken setup in the short run. The difference shows up in campaign performance over 4-8 weeks, by which time the damage is already in the bid model.
Schema design is not a one-session task. It requires understanding your install distribution, your revenue event timing, your campaign volume tier, and how each of those factors interacts with the crowd anonymity threshold. Then it requires calibrating coarse value thresholds against actual data, validating the schema in StoreKit Test, running a canary, and watching the postback distribution before committing. That process has a lot of places to go wrong quietly.
If your iOS UA strategy is blocked on attribution signal quality, or if your SKAN data looks valid but your campaigns are not optimizing, that is a schema problem more often than it is an implementation problem. Start with the free 30-minute call. The Mediation SDK Checker is also a useful first step if you want to audit your SDK and SKAdNetwork ID configuration before the call.