Modeled Conversions: Should You Trust Meta's Estimates?
iOS 14.5 broke direct tracking. Meta responded with modeled conversions. But are these estimates reliable? Here's how to evaluate and validate them.
Jorgo Bardho
Founder, Meta Ads Audit
You check your Meta Ads dashboard and see 47 conversions. Your backend shows 38. Where did the other 9 come from? The answer is modeled conversions—Meta's statistical estimates for conversions it can't directly observe. Since iOS 14.5 shattered direct tracking, these modeled numbers have become an increasingly large portion of what you see in reporting.
The question every advertiser needs to answer: should you trust these estimates when making budget decisions? The answer is nuanced. Modeled conversions are neither completely reliable nor completely useless. Understanding how they work—and their limitations—is essential for accurate performance assessment in 2025.
What Are Modeled Conversions?
Modeled conversions are Meta's statistical estimates for conversion events that couldn't be directly measured. When a user opts out of tracking (via ATT on iOS) or when browser restrictions block the pixel, Meta loses direct visibility into what happens after the ad click. Modeled conversions fill this gap using machine learning to estimate likely outcomes.
There are two primary types of modeling that affect your reported numbers:
Statistical Modeling
Meta analyzes patterns from users who did consent to tracking and extrapolates to the non-consenting population. If 60% of tracked iOS users who clicked your ad converted, Meta estimates a similar conversion rate for untracked iOS users and adds those estimated conversions to your totals.
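The extrapolation logic looks roughly like the sketch below. The numbers mirror the hypothetical example above; Meta's actual models weigh many more signals than a single conversion rate, so treat this as the intuition only.

```python
# Illustrative only: extrapolate the observed conversion rate from
# consenting users onto the opted-out population. Numbers mirror the
# hypothetical example above; Meta's real models use far more signals.

tracked_clicks = 100        # consenting iOS users who clicked
tracked_conversions = 60    # conversions Meta observed directly (60%)
untracked_clicks = 400      # opted-out iOS users who clicked

tracked_rate = tracked_conversions / tracked_clicks        # 0.60
modeled = round(untracked_clicks * tracked_rate)           # 240

print(f"Reported total: {tracked_conversions + modeled}")  # 300
```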
Aggregated Event Measurement (AEM)
For iOS 14.5+ users, Meta uses Apple's SKAdNetwork and its own Aggregated Event Measurement framework to receive limited, delayed, and aggregated conversion data. This data is then modeled to provide campaign-level estimates that approximate actual performance.
Why Modeled Conversions Exist
Before iOS 14.5, Meta's pixel could track virtually every conversion that happened on your website. A user clicked an ad, visited your site, and made a purchase—Meta saw the entire journey and attributed it accurately. Then came ATT (App Tracking Transparency).
With ATT, iOS users see a prompt asking if they want to allow tracking. Roughly 75-85% say no. For these users, Meta's pixel can't reliably connect ad clicks to website conversions. Without modeling, your reported conversions would drop by 40-70%, making it nearly impossible to optimize campaigns.
Modeled conversions exist to maintain campaign optimization. Without them, Meta's algorithm wouldn't have enough signal to learn which ads work. By estimating conversions for untracked users, Meta keeps its optimization engine running—but the accuracy of those estimates is imperfect.
How Accurate Are Modeled Conversions?
Meta claims its models are statistically valid, but independent verification tells a more complex story. Here's what the data suggests:
The Good
At scale and over time, modeled conversions tend to track actual business outcomes reasonably well. If you're looking at 1,000+ conversions over 30 days, the modeled totals often land within 10-20% of backend reality. Meta's models are trained on massive datasets and improve continuously.
The Bad
At small scales or short time windows, accuracy drops significantly. If you're analyzing a single ad set with 50 conversions over 7 days, modeled numbers can be off by 30-50% or more. The models need volume to be accurate—small sample sizes produce unreliable estimates.
The Variable
Accuracy varies by vertical, audience, and even time of year. E-commerce brands with high traffic volume typically see better model accuracy than B2B companies with longer sales cycles. Industries with unusual conversion patterns (high-ticket, low-volume) often see larger discrepancies.
How to Validate Modeled Conversions
Never take Meta's numbers at face value. Build a validation process that compares reported conversions to actual business outcomes.
Method 1: Backend Comparison
Compare Meta's reported conversions to your backend system (Shopify, your CRM, Google Analytics 4). Do this weekly, using the same attribution window Meta uses (7-day click, 1-day view by default), and calculate the variance percentage the same way each time.
If Meta reports 100 conversions and your backend shows 85, you have a 15% modeled inflation. Apply this discount factor when evaluating campaign performance. If variance changes significantly over time, investigate what's driving the shift.
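A minimal sketch of that weekly check, using the hypothetical numbers above:

```python
# Weekly validation sketch using the hypothetical numbers above.
# Pull real values from Ads Manager and your backend (Shopify, CRM,
# GA4) for the same date range and attribution window.

meta_reported = 100
backend_verified = 85

# Inflation relative to Meta's reported figure, as in the example above.
inflation = (meta_reported - backend_verified) / meta_reported
print(f"Modeled inflation: {inflation:.0%}")    # 15%

# Apply the resulting discount to future reported numbers.
next_week_reported = 120                        # hypothetical
adjusted = round(next_week_reported * (1 - inflation))
print(f"Adjusted conversions: {adjusted}")      # 102
```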
Method 2: UTM-Based Tracking
Use UTM parameters on all your Meta ad URLs and track conversions in GA4 or your analytics platform. This gives you a Meta-independent measure of traffic quality and conversion rates. Compare UTM-attributed conversions to Meta's reported numbers to quantify the gap.
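A minimal tagging helper might look like the sketch below. The function name and parameter values are illustrative conventions, not requirements; what matters is that the naming stays consistent, because that's what makes the GA4 comparison meaningful.

```python
# Hypothetical helper: build UTM-tagged destination URLs for Meta ads
# so GA4 (or any analytics platform) can attribute traffic without
# relying on Meta's pixel. Keep the naming convention consistent.

from urllib.parse import urlencode

def tag_url(base_url: str, campaign: str, ad_set: str, ad: str) -> str:
    params = {
        "utm_source": "facebook",
        "utm_medium": "paid_social",
        "utm_campaign": campaign,
        "utm_content": f"{ad_set}--{ad}",
    }
    return f"{base_url}?{urlencode(params)}"

print(tag_url("https://example.com/product", "spring_sale",
              "lookalike_1", "video_a"))
# https://example.com/product?utm_source=facebook&utm_medium=paid_social&...
```

Meta's ad setup also supports dynamic URL parameters (for example, {{campaign.name}}), which can populate these values automatically and avoid manual tagging errors.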
Method 3: Lift Studies
Meta offers conversion lift studies that use randomized holdout groups to measure true incremental impact. While not a direct validation of modeled conversions, lift studies tell you whether your Meta spend is actually driving business outcomes—regardless of what the dashboard says.
Method 4: Customer Surveys
Add a "How did you hear about us?" field to your checkout or lead forms. While not perfectly accurate (users forget or misattribute), survey data provides a reality check on whether Meta is the acquisition driver it claims to be.
When to Trust Modeled Conversions
Modeled conversions are more reliable in certain scenarios:
- High volume: Campaigns with 500+ weekly conversions have enough signal for accurate modeling
- Long time windows: Looking at 30+ days of data smooths out short-term modeling noise
- Consistent conversion patterns: Products with predictable purchase behavior are easier to model
- Strong CAPI implementation: Server-side tracking provides additional signal that improves model accuracy
- Trend analysis: Relative changes over time are more reliable than absolute numbers
When to Be Skeptical
Exercise extra caution with modeled numbers in these situations; a quick screening sketch that combines both lists follows below:
- Low volume: Ad sets with fewer than 50 weekly conversions have unreliable modeled estimates
- Short windows: Day-over-day changes are mostly noise, not signal
- New campaigns: Models need historical data to calibrate; new campaigns have less reliable estimates
- Unusual events: Sales, launches, or market shifts confuse models trained on normal patterns
- High iOS traffic: Accounts with 70%+ iOS users have more modeling and less direct measurement
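Collapsing both lists into a rough screening heuristic gives something like the sketch below. The thresholds mirror the rules of thumb above; they are not a Meta-published standard.

```python
# Rough screen for how much weight to give modeled numbers. Thresholds
# come from the rules of thumb in the two lists above, not from Meta.

def modeled_data_trust(weekly_conversions: int, window_days: int,
                       ios_share: float, strong_capi: bool) -> str:
    score = 0
    score += int(weekly_conversions >= 500)  # enough volume to model
    score += int(window_days >= 30)          # smooths short-term noise
    score += int(ios_share < 0.70)           # more direct measurement
    score += int(strong_capi)                # better signal quality
    if score >= 3:
        return "trust totals and trends"
    if score == 2:
        return "trust trends, discount totals"
    return "validate against backend before acting"

print(modeled_data_trust(800, 30, 0.55, True))   # trust totals and trends
print(modeled_data_trust(40, 7, 0.80, False))    # validate against backend...
```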
The Attribution Window Problem
Modeled conversions compound an existing attribution issue: Meta's default attribution windows are generous. A 7-day click window means someone who clicked Monday and purchased Sunday gets attributed to your ad—even if they found you through Google on Thursday.
When you combine generous attribution windows with modeled conversions, you get inflated numbers. The person who would have converted anyway gets attributed to your ad, and that attribution gets modeled onto untracked users who also might have converted anyway.
Consider testing shorter attribution windows (1-day click) to get a more conservative view of Meta's true impact. You'll see lower reported conversions, but they'll be more directly attributable to your ads.
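If you pull data through the Marketing API, you can compare windows side by side instead of toggling the dashboard. A rough sketch, assuming a valid access token and ad account ID; verify the parameter names and the purchase action type against the API version your account uses.

```python
# Sketch: request the same campaigns' purchase counts under 1-day-click
# and 7-day-click windows from the Marketing API insights endpoint.
# Token, account ID, and API version below are placeholders.

import requests

ACCESS_TOKEN = "YOUR_TOKEN"
AD_ACCOUNT_ID = "act_1234567890"

resp = requests.get(
    f"https://graph.facebook.com/v19.0/{AD_ACCOUNT_ID}/insights",
    params={
        "access_token": ACCESS_TOKEN,
        "level": "campaign",
        "fields": "campaign_name,actions",
        "action_attribution_windows": '["1d_click","7d_click"]',
        "date_preset": "last_30d",
    },
)
for row in resp.json().get("data", []):
    for action in row.get("actions", []):
        # The purchase action type varies by setup (e.g. "purchase" or
        # "offsite_conversion.fb_pixel_purchase").
        if "purchase" in action.get("action_type", ""):
            print(row["campaign_name"],
                  "1d_click:", action.get("1d_click"),
                  "7d_click:", action.get("7d_click"))
```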
CAPI: Improving Model Accuracy
The Conversions API (CAPI) significantly improves modeled conversion accuracy. By sending conversion events server-to-server, CAPI bypasses browser restrictions and provides Meta with more ground truth data to train its models.
Accounts with strong CAPI implementation (Event Match Quality scores above 8.0) typically see closer alignment between modeled and actual conversions. If you're running significant Meta spend without CAPI, your modeled numbers are less reliable than they could be.
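For reference, a CAPI purchase event is a single server-side POST. This is a minimal sketch following Meta's documented payload shape; the pixel ID, token, and event details below are placeholders, and most e-commerce platforms offer native CAPI integrations that handle this for you.

```python
# Minimal CAPI sketch: send a purchase event server-to-server.
# Pixel ID, token, and customer details below are placeholders.

import hashlib
import time
import requests

PIXEL_ID = "1234567890"
ACCESS_TOKEN = "YOUR_TOKEN"

def sha256(value: str) -> str:
    # CAPI expects customer identifiers normalized (trimmed, lowercased)
    # and SHA-256 hashed before sending.
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

payload = {
    "data": [{
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "event_source_url": "https://example.com/checkout/thank-you",
        "user_data": {
            "em": [sha256("customer@example.com")],  # hashed email
            "ph": [sha256("15551234567")],           # hashed phone
        },
        "custom_data": {"currency": "USD", "value": 49.99},
    }],
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    params={"access_token": ACCESS_TOKEN},
    json=payload,
)
print(resp.json())
```

In production you would also send a shared event_id from both the browser pixel and the server so Meta can deduplicate the two copies; richer matching signals (and the higher Event Match Quality they produce) are what improve the models.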
Our Meta Ads Audit tool checks your Event Match Quality scores and flags accounts where CAPI implementation is weak or missing. Improving server-side tracking is one of the highest-ROI investments you can make for measurement accuracy.
How to Report When Numbers Don't Match
If you're an agency or in-house marketer reporting to stakeholders, the Meta-backend discrepancy creates a communication challenge. Here's how to handle it:
Option 1: Report Both
Show Meta's reported conversions alongside backend-verified conversions, and explain that the difference comes from modeled estimates. This is transparent but can confuse stakeholders unfamiliar with the technical details.
Option 2: Apply a Discount Factor
Calculate your historical variance percentage and apply it to Meta's numbers. If Meta consistently over-reports by 20%, show "adjusted conversions" that discount by 20%. This gives a single number that's closer to reality.
Option 3: Report Backend Only
Use your CRM or e-commerce platform as the source of truth and attribute to Meta based on UTM data or first/last touch. This ignores Meta's modeled data entirely but may undercount Meta's contribution due to cross-device and view-through impacts.
The Future of Modeled Conversions
Modeled conversions aren't going away—they're becoming more prevalent. Privacy regulations continue to tighten, third-party cookies are disappearing, and user tracking consent rates remain low. Meta's response will be increasingly sophisticated modeling.
Meta is investing heavily in privacy-preserving measurement techniques, including:
- Enhanced machine learning models trained on larger datasets
- Integration with Apple's SKAdNetwork 4.0
- First-party data partnerships and clean rooms
- Improved CAPI matching and signal recovery
The advertisers who thrive will be those who understand model limitations, validate consistently, and build first-party data assets that improve measurement accuracy over time.
Key Takeaways
- Modeled conversions fill gaps left by iOS 14.5 tracking restrictions
- Accuracy improves with volume—trust large-scale, long-term trends more than small-scale, short-term data
- Always validate against backend data; calculate and apply a discount factor
- Strong CAPI implementation significantly improves model accuracy
- Use shorter attribution windows for more conservative, reliable measurement
- Report with transparency about what's measured vs. modeled
FAQ
Can I turn off modeled conversions?
No. Meta's reporting inherently includes modeled data for any user who opted out of tracking or uses browsers that block the pixel. You cannot request a "measured only" view of your data. The best you can do is validate against backend systems and apply discount factors.
Do modeled conversions affect how Meta optimizes my campaigns?
Yes. Meta uses both observed and modeled conversion data to train its delivery algorithm. This means your campaigns are being optimized partly on estimates. Strong CAPI implementation improves the signal quality the algorithm receives.
Why does Meta's reported ROAS differ from my calculated ROAS?
Meta's ROAS uses reported conversions (including modeled) and their associated revenue values. If modeled conversions are inflated, ROAS is inflated too. Calculate your own ROAS using backend-verified revenue divided by ad spend for a more accurate picture.
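As a quick worked check, with hypothetical numbers:

```python
# Backend-verified ROAS: your own revenue figure over actual ad spend.
backend_revenue = 10_000.00   # revenue your backend attributes to Meta
ad_spend = 2_500.00           # spend from Ads Manager billing
print(f"Verified ROAS: {backend_revenue / ad_spend:.1f}")  # 4.0
```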
Are Android conversions also modeled?
To a lesser extent. Android doesn't have ATT-style opt-in prompts, so more conversions can be directly tracked. However, browser restrictions (Safari ITP, Firefox ETP) and cookie policies still create measurement gaps that require modeling, even on Android devices.