Most performance ads fail, according to new research from advertising analytics platform Motion.
An analysis of approximately $1.29 billion in Meta ad spend found that winning creatives account for just 4% to 8% of total output, depending on advertiser size. The advertisers that surface the most winners are not necessarily producing better ads. They are producing more of them.
Motion’s “Creative Benchmarks 2026” report, published this month, analyzed nearly 600,000 unique ads placed by 6,015 advertisers across Facebook and Instagram between September 2025 and early January 2026.
Its central finding: low hit rates are structural, not exceptional. Success in performance advertising is less about predicting which ad will win and more about building systems capable of sustained creative testing.
Most Ads Are Designed to Fail
Motion defines a “winning” ad as one that spends at least 10 times its account’s median ad spend and clears a $500 floor. By that standard, winners account for a small minority of creatives across all spend tiers.
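Motion's winner rule can be expressed as a simple predicate. The sketch below is illustrative only, not Motion's implementation; the function and parameter names are hypothetical.

```python
def is_winner(ad_spend: float, account_median_spend: float) -> bool:
    """Motion's stated winner rule: an ad "wins" if it spends at least
    10x its account's median ad spend AND clears a $500 absolute floor."""
    return ad_spend >= 10 * account_median_spend and ad_spend >= 500.0

# With a $30 account median, a $400 ad clears the 10x bar
# but not the $500 floor, so it does not count as a winner.
print(is_winner(400, 30))   # False: fails the $500 floor
print(is_winner(600, 30))   # True: 600 >= 300 and 600 >= 500
```

Note that both conditions must hold: in large accounts the 10x multiple dominates, while in small accounts the $500 floor does.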
Roughly half of all ads in the dataset were discontinued before accumulating 28 days of active spend. At the same time, approximately 55% of total spend went to winning ads, indicating that a small share of creatives absorbed the majority of budget.
The report characterizes this distribution as a statistical feature of performance advertising rather than a reflection of weak creative.
“Low hit rates are not necessarily a sign of weak creative,” the report states. “They are a statistical feature of how performance advertising works.”
That framing challenges how many teams evaluate effectiveness. A team that launches five ads and produces one winner posts a 20% hit rate. A team that launches 50 ads and produces five winners has a 10% hit rate and five times as many winning creatives.
The report notes that optimizing for hit rate alone may limit overall upside. “High hit rates may actually signal that someone isn’t testing enough to maximize their accounts’ potential.”
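The arithmetic behind that trade-off is straightforward. A minimal sketch using the hypothetical numbers from the example above:

```python
def expected_winners(launches: int, hit_rate: float) -> int:
    """Winner count is launches times hit rate; a lower hit rate on
    far more launches can still yield more winners."""
    return round(launches * hit_rate)

# Small team: 5 launches at a 20% hit rate  -> 1 winner.
# High-volume team: 50 launches at a 10% hit rate -> 5 winners.
print(expected_winners(5, 0.20))   # 1
print(expected_winners(50, 0.10))  # 5
```

The point the report makes is that winner count, not hit rate, is what compounds into account performance.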
Volume as a Structural Advantage
If winning ads are rare by nature, then output becomes a measurable advantage.
Across spend tiers, advertisers that launched more creatives per week surfaced more winners, independent of budget size. The pattern held both across tiers and within them.
Among advertisers spending between $200,000 and $1 million per month, the top quartile by winner count launched a median of 31.11 creatives per week. The tier-wide median was 11.24. The top group generated a median of 5.99 winners per month, compared to 1.75 for the broader cohort.
At the enterprise level, the gap widened. Top performers launched 54.64 creatives per week, compared to a tier median of 18.85, generating 10.48 winners per month, compared to 3.99.
“Larger advertisers are not simply spending more,” the report states. “They are building systems that support faster testing.”
An Organizational Constraint
The report attributes much of the performance gap to internal capacity rather than platform mechanics.
Many teams, it finds, calibrate output to what their production and approval workflows can comfortably manage, rather than aligning with the testing cadence that performance systems reward.
“The research suggests that this constraint is often organizational rather than algorithmic,” the report states, adding that “creative strategy should be seen more as capacity planning than optimization.”
That distinction is particularly relevant for brand marketing teams, where creative production often involves multiple stakeholders, longer approval cycles, and higher per-asset costs than is typical for performance-focused teams.
The report does not prescribe a specific output benchmark, noting that appropriate testing volume depends on budget, team size, and production capacity. It does provide reference medians: medium-tier advertisers launched 6.67 creatives per week, while micro advertisers launched 2.80.
The analysis also identifies a stabilizing role for mid-range creatives: ads that sustain spend for at least 28 days without reaching winner thresholds. These ads absorbed a meaningful share of the budget, particularly in smaller accounts. Among micro advertisers, mid-range creatives accounted for 45.6% of total spend, compared to 23% allocated to winners.
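Taken together, the report's definitions sort each ad into one of three buckets. A hypothetical sketch of that classification, with field names and thresholds as described in the report:

```python
def classify(ad_spend: float, account_median: float, active_days: int) -> str:
    """Bucket an ad using the report's definitions (illustrative only):
    winner       -> spends >= 10x the account median and >= $500
    mid-range    -> sustains at least 28 days of active spend, short of winner
    discontinued -> turned off before 28 days without winning
    """
    if ad_spend >= 10 * account_median and ad_spend >= 500.0:
        return "winner"
    if active_days >= 28:
        return "mid-range"
    return "discontinued"

print(classify(600, 30, 14))  # winner
print(classify(120, 30, 40))  # mid-range
print(classify(45, 30, 10))   # discontinued
```

Under this framing, the roughly half of ads killed before 28 days fall into the discontinued bucket, while mid-range ads quietly carry much of the spend in smaller accounts.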
For marketers operating on Meta, the findings suggest that competitive advantage may depend less on predicting breakout concepts and more on building testing systems that consistently generate new creative at scale.
Methodology: Motion’s dataset covers creatives launched between September 1, 2025, and January 1, 2026, with the end date set at least 28 days before the last available data point to allow equal classification opportunity for all creatives. All advertiser data was anonymized. The analysis is limited to Facebook and Instagram and does not incorporate conversion-based metrics such as ROAS or CPA.
Image source: Motion
The full report is available here
Nii A. Ahene is the founder and managing director of Net Influencer, a website dedicated to offering insights into the influencer marketing industry. Together with its newsletter, Influencer Weekly, Net Influencer provides news, commentary, and analysis of the events shaping the creator and influencer marketing space. Through interviews with startups, influencers, brands, and platforms, Nii and his team explore how influencer marketing is being effectively used to benefit businesses and personal brands alike.