




In 2025, ecommerce finally felt like growth mode again. Across Northbeam’s dataset, businesses increased ad spend and saw higher topline revenue year over year. On average, advertisers spent roughly 15% more on ads and generated a similar 14% lift in revenue, while the median business posted mid single digit gains on both lines.
The catch was what it cost to produce that growth. Marketing efficiency ratios (MER) softened and first time customer acquisition costs climbed. Median MER fell just over two percentage points, and median first time CAC was up nearly 9%, showing that incremental demand became more expensive to capture.
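The tension between rising spend, rising revenue, and falling MER is simple arithmetic. A minimal sketch using the average growth rates quoted above; the starting spend and revenue figures are illustrative assumptions, not report data:

```python
# MER (marketing efficiency ratio) = revenue / ad spend.
# Illustrative 2024 baseline (assumed figures, not from the report):
spend_2024, revenue_2024 = 1_000_000, 3_000_000

# Apply the average year-over-year changes cited above.
spend_2025 = spend_2024 * 1.15      # ad spend up ~15%
revenue_2025 = revenue_2024 * 1.14  # revenue up ~14%

mer_2024 = revenue_2024 / spend_2024
mer_2025 = revenue_2025 / spend_2025

# Revenue grew, but every ad dollar returned slightly less than before.
print(f"MER: {mer_2024:.2f} -> {mer_2025:.2f}")
```

When spend grows even slightly faster than revenue, MER falls, which is exactly the "growth with weaker unit economics" pattern in the data.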
That is the core tension carrying into 2026: you could grow in 2025, but you probably did it with weaker unit economics. For some operators, that tradeoff was intentional and strategic. For others, it was a slow drift into paying more to stand still.
In 2026, the question is no longer whether growth is available. It is whether you can pursue it without eroding the economics that keep the business alive.
Download the full Northbeam 2025 data report.
This article reframes the key 2025 findings for a 2026 plan:
Before digging into segments, it helps to reset the vocabulary you should be using to run 2026.
On all three measures, 2025 was a step backward:
So when we talk about “the cost of growth” in 2026, we are not being metaphorical. In 2025, businesses had to accept worse economics on each incremental dollar of new revenue.
This is the baseline to plan against now.
The size breakdown makes the 2025 story much sharper and should directly shape your 2026 expectations.

Businesses under $5M in annual revenue had the roughest year:
For these businesses, 2025 was less about "pay to grow" and more about "pay more just to hang on." They prioritized cash preservation and margin protection rather than aggressive expansion.
In 2026, this segment should not pretend that a softer market will save them. The playbook is discipline: narrow guardrails on MER and CAC, simple attribution, and clear rules for when to pull back rather than chasing headline growth.
In the $5M–$20M bands, the picture was more nuanced:
This is the classic “comfortable but fragile” middle: enough budget to feel auction pressure and creative fatigue, but not enough operational scale to absorb mistakes or massively out-innovate the market.
In 2026, this tier must decide whether to behave like sub-$5M survivalists or $20M–$50M operators. Continuing to pay slightly more each quarter for slightly more revenue is how you end up boxed in by cash constraints and rising CAC.
The $20M–$50M cohort was one of the few genuine bright spots.
Even here, first time CAC and first time MER generally tightened. This group did not escape the 2025 acquisition environment. They simply managed it better than most by pairing scale with more disciplined guardrails.
In 2026, this tier is best positioned to compound if they protect their balance between topline and unit economics.
At the top end, $50M+ businesses flipped from defensive to offensive:
Upper mid and enterprise operators turned 2025 into a genuine step change year. Healthy fundamentals, well built out systems, and larger budgets helped them absorb the volatility that comes with algorithm driven advertising.
Several overlapping forces drove the 2025 efficiency squeeze and will continue to matter in 2026.

Across Meta, TikTok, Axon, YouTube, Pinterest, and Snap, there was a consistent relationship:
The platforms that drove growth in 2025 also demanded much higher creative velocity. For businesses that did not increase output at the same pace, creative fatigue showed up as rising CAC and sliding conversion.
That is another hidden cost of growth: systems, production, and internal operations that need to scale alongside budget and often do not.
Across the dataset:
That combination rarely happens by accident. It reflects two trends you should assume will still be in play this year:
Net result: paying more per click to attract visitors who were less likely to convert and less likely to be truly net new. In 2026, ignoring new visit percentage is one of the fastest ways to quietly wreck your funnel.
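Those three trends compound. A quick sketch of the effective cost per converted new visitor, with all input numbers assumed for illustration:

```python
# Effective cost per converted NEW visitor combines CPC, conversion rate,
# and new visit percentage in one figure. All numbers are illustrative.
def cost_per_new_conversion(cpc: float, cvr: float,
                            new_visit_share: float) -> float:
    """Cost per click divided by (conversion rate * share of new visits)."""
    return cpc / (cvr * new_visit_share)

before = cost_per_new_conversion(cpc=1.00, cvr=0.030, new_visit_share=0.70)
after = cost_per_new_conversion(cpc=1.10, cvr=0.027, new_visit_share=0.63)
print(f"${before:.2f} -> ${after:.2f} per converted new visitor")
```

Modest declines on each input multiply into a much larger jump in the true cost of a net-new customer, which is why new visit percentage deserves its own line on the scorecard.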

Industry rollups showed how some verticals leaned into expensive growth:
These are textbook examples of deliberately paying more per customer to secure category share. In 2026, treat these as upper bound scenarios when you model your own tolerance for CAC and MER deterioration.
For an industry-by-industry breakdown of the 2025 data, see my colleague CJ Hunter’s recent article.
Viewed month by month, 2025 looked less like a smooth climb and more like a series of cliffs:
Q4 was especially revealing. Revenue spiked around peak promo moments, but in several categories first time MER and first time revenue deteriorated sharply in November and December, which implies a lot of holiday wins came from expensive, discount sensitive new cohorts.
In 2026, that pattern should inform when you are willing to pay up and what kind of cohorts you are willing to load during those windows.
The final step is turning those 2025 lessons into a 2026 operating checklist.
Set MER floors and CAC ceilings for 2026 that reflect what you learned last year:
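One way to make those floors and ceilings operational is to encode them as explicit checks rather than judgment calls. A minimal sketch; the threshold values and the assumption that spend here means acquisition spend are illustrative, not recommendations:

```python
# Illustrative guardrails -- set these from your own 2025 data.
MER_FLOOR = 2.5                 # blended MER below this triggers a review
FIRST_TIME_CAC_CEILING = 85.0   # dollars per new customer

def check_guardrails(spend: float, revenue: float,
                     new_customers: int) -> list[str]:
    """Return the list of guardrails breached for a period.
    Assumes `spend` is acquisition spend for the CAC calculation."""
    breaches = []
    mer = revenue / spend
    if mer < MER_FLOOR:
        breaches.append(f"MER {mer:.2f} below floor {MER_FLOOR}")
    cac = spend / new_customers if new_customers else float("inf")
    if cac > FIRST_TIME_CAC_CEILING:
        breaches.append(f"first time CAC ${cac:.2f} above ceiling "
                        f"${FIRST_TIME_CAC_CEILING}")
    return breaches

# A month that grows topline but breaches the CAC ceiling still gets flagged.
print(check_guardrails(spend=120_000, revenue=330_000, new_customers=1_300))
```

The point is not the specific numbers but the mechanism: a breach forces a decision instead of a slow drift into worse economics.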
Returning customers repeatedly masked weaker acquisition economics in 2025. Fix that now:
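Mechanically, the fix is to compute the same ratios twice: once blended, once on first time customers only. A sketch with assumed example figures:

```python
def split_economics(spend: float, revenue: float,
                    first_time_revenue: float,
                    new_customers: int) -> dict:
    """Report blended MER alongside first time MER and CAC, so
    returning-customer revenue cannot hide weak acquisition.
    Assumes `spend` is acquisition spend."""
    return {
        "blended_mer": revenue / spend,
        "first_time_mer": first_time_revenue / spend,
        "first_time_cac": spend / new_customers,
    }

# Assumed figures: blended MER looks healthy, first time economics do not.
m = split_economics(spend=100_000, revenue=320_000,
                    first_time_revenue=90_000, new_customers=900)
print(m)
```

With these assumed inputs, blended MER is 3.2 while first time MER sits below 1.0, which is exactly the gap that a blended-only scorecard conceals.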
Stop grading yourself against global averages:
Treat creative capacity as a hard constraint:
The 2025 data makes one thing clear: growth is no longer the scarce resource. Profitable growth is.
Businesses in the dataset proved they could grow spend and revenue. On average, they did exactly that. But the typical operator did it while accepting worse efficiency, more expensive new customers, and softer traffic quality.
So the real 2026 question is:
What does each additional dollar of growth actually cost you, and are you comfortable with that price?
If 2025 was your “growth returned, but not on easy terms” year, let 2026 be the year you tighten guardrails, rebuild acquisition economics, and turn growth from something you buy into something your system can earn repeatably, at a price that keeps the business healthy enough to win your category over the long run.

Content is one of the most trusted marketing channels and one of the hardest to measure. It influences decisions over time, across channels, and rarely converts in a single step.
This article breaks down why content measurement falls short, what ROI actually means in a content context, and how teams can build a more credible measurement system.
Content marketing measurement is often confused with content reporting, but they are not the same thing. Reporting tells you what happened. Measurement helps you understand why it happened and what to do next.
Pageviews, clicks, and downloads are reports. Insight comes from connecting those signals to decisions, outcomes, and trade-offs.
One reason content marketing measurement ROI is so hard to pin down is that teams mix three different types of metrics:
Each of these layers matters, but problems arise when activity or content performance metrics are treated as proof of ROI on their own.
Content rarely drives immediate conversion, especially in B2B or complex buying cycles. Its value compounds over time, shaping awareness, credibility, and consideration long before a buyer fills out a form or talks to sales.
Good content measurement accounts for both short-term and long-term value. A single asset may generate quick engagement, while also supporting future conversions weeks or months later. In this way, content works across the full funnel, from discovery to retention.
Measuring it well means evaluating contribution at each stage, not forcing it into a last-click box it was never meant to fit.
Content marketing measurement breaks down because content does not behave like most performance channels. Its impact is distributed, delayed, and shared across systems that were never designed to tell a single, simple story.
Most content is consumed long before a buyer is ready to act. Educational articles, guides, and thought leadership build familiarity and trust, but they are rarely the final step before conversion. When teams expect direct revenue attribution, content appears to underperform.
Buyers encounter many pieces of content across channels. By the time a conversion happens, isolating the influence of any single asset becomes difficult. Attribution models struggle to reflect how content actually shapes decisions.
Content measurement depends on clean taxonomy and consistent tracking. In reality, tagging varies by team, attribution windows are arbitrary, and historical data is often incomplete. This creates gaps that undermine confidence in the results.
Content lives in CMSs, content marketing analytics platforms, marketing automation tools, and CRMs. When ownership is fragmented, no one is responsible for connecting engagement to downstream outcomes.
Leadership often wants fast proof of impact. This pushes teams toward shallow metrics that are easy to report but weak indicators of business value.
When content appears ineffective, teams either underinvest or optimize for the wrong signals. Both outcomes limit long-term growth and learning.

Effective content measurement starts with recognizing that no single metric tells the whole story. Content plays different roles at different stages of the funnel, and the metrics that matter should reflect those roles rather than forcing every asset to justify itself in revenue terms.
At the top of the funnel, content supports discovery and visibility. Useful signals include:
Engagement shows whether content is earning attention once discovered. Common indicators include:
As buyers evaluate options, content helps them progress and compare. Signals to track include:
Content often contributes indirectly to conversion. Helpful measures include:
Post-conversion content usage can support long-term value. Look for:
The goal is not to eliminate leading indicators, but to interpret them in context and connect them to downstream outcomes over time.
Attribution is where many content measurement efforts break down. Traditional models were designed for channels that drive immediate action, not for assets that influence decisions gradually and in combination with other touchpoints.
Last-click attribution assigns all credit to the final interaction before conversion. For content, this almost always misrepresents reality.
Educational articles, guides, and thought leadership tend to appear early or mid-journey, long before a buyer takes action. When last-click is the default, content appears ineffective even when it plays a meaningful role.
Multi-touch attribution distributes credit across multiple interactions in a buyer journey. For content, this approach better reflects how influence accumulates over time. It helps teams see patterns in how content supports progression rather than expecting direct conversion.
Position-based models assign more weight to early and late touches, acknowledging both discovery and conversion moments.Â
Time-decay models prioritize interactions closer to conversion while still recognizing earlier influence.Â
Both approaches are useful for content because they avoid the extremes of all-or-nothing credit.
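The position-based and time-decay logic above can be sketched in a few lines. The weights here, a 40/20/40 split and a 7-day half-life, are common illustrative defaults rather than a standard, and the sketch assumes each touch is a distinct asset:

```python
def position_based(touches: list[str]) -> dict[str, float]:
    """U-shaped credit: 40% to the first touch, 40% to the last,
    and the remaining 20% split evenly across the middle touches."""
    if len(touches) <= 2:
        # No middle touches: split credit evenly.
        return {t: 1.0 / len(touches) for t in touches}
    credit = {t: 0.0 for t in touches}
    credit[touches[0]] += 0.4
    credit[touches[-1]] += 0.4
    for t in touches[1:-1]:
        credit[t] += 0.2 / (len(touches) - 2)
    return credit

def time_decay(touches: list[str], days_before_conversion: list[float],
               half_life_days: float = 7.0) -> dict[str, float]:
    """Credit halves every `half_life_days` moving back from conversion,
    then is normalized so the shares sum to 1."""
    weights = [0.5 ** (d / half_life_days) for d in days_before_conversion]
    total = sum(weights)
    return {t: w / total for t, w in zip(touches, weights)}

journey = ["blog-guide", "webinar", "case-study"]
print(position_based(journey))
print(time_decay(journey, days_before_conversion=[14, 7, 0]))
```

Both functions return normalized credit shares, so either can feed the same downstream reporting without all-or-nothing credit.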
Evaluating individual assets in isolation often produces noisy results. Grouping content by theme, audience, or funnel stage allows teams to assess performance at a level that supports decision-making.
Content attribution rarely delivers precision. The goal is not perfect accounting, but informed direction.
When models consistently show which content types and themes contribute to successful journeys, teams have enough insight to prioritize investment and improve performance over time.
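Grouping before evaluating is a one-step aggregation. A sketch using only the standard library; the asset records and field names are assumptions standing in for an analytics export:

```python
from collections import defaultdict

# Assumed asset-level records; in practice these come from your
# analytics export, keyed by your taxonomy's theme and stage tags.
assets = [
    {"theme": "how-to guides", "stage": "top",    "assisted_conversions": 12},
    {"theme": "how-to guides", "stage": "top",    "assisted_conversions": 3},
    {"theme": "case studies",  "stage": "bottom", "assisted_conversions": 9},
]

# Roll noisy per-asset numbers up to the theme/stage level.
rollup = defaultdict(int)
for a in assets:
    rollup[(a["theme"], a["stage"])] += a["assisted_conversions"]

for (theme, stage), conversions in sorted(rollup.items()):
    print(f"{theme} ({stage}): {conversions} assisted conversions")
```

At this level, a pattern like "how-to guides assist far more conversions than their direct credit suggests" becomes visible even when any single asset looks unremarkable.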

A scalable content measurement framework focuses less on perfect data and more on consistency, clarity, and repeatable decision-making. The goal is to create a system teams can maintain over time as content volume and complexity grow.
Measurement should start with clear objectives tied to who the content is for and where it fits in the funnel. Different audiences and stages require different success criteria, and alignment here prevents misinterpretation later.
Consistent taxonomy is the foundation of reliable measurement. Standard naming conventions, tags, and metadata ensure content can be grouped, compared, and analyzed without manual cleanup.
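Consistent taxonomy is easiest to keep when it is checked automatically rather than by manual review. A minimal sketch that validates tags against an agreed vocabulary; the vocabulary and field names are assumptions you would replace with your own:

```python
# Assumed controlled vocabulary -- replace with your team's agreed taxonomy.
ALLOWED = {
    "stage": {"top", "middle", "bottom"},
    "format": {"article", "guide", "webinar", "case-study"},
}

def validate_tags(tags: dict) -> list[str]:
    """Return human-readable errors for missing or unknown tag values."""
    errors = []
    for field, allowed in ALLOWED.items():
        value = tags.get(field)
        if value is None:
            errors.append(f"missing tag: {field}")
        elif value not in allowed:
            errors.append(f"unknown {field!r} value: {value!r}")
    return errors

print(validate_tags({"stage": "top", "format": "guide"}))   # valid
print(validate_tags({"stage": "tofu"}))                     # two errors
```

Running a check like this at publish time is what keeps grouping and comparison possible later without manual cleanup.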
Metrics should exist to inform action. Each tracked signal should map to a decision, such as where to invest, what to optimize, or what to retire, rather than filling space in a report.
Regular reviews turn data into insight. A defined cadence helps teams identify patterns, test changes, and apply learnings across future content.
No measurement system is perfect. Documenting assumptions, gaps, and known limitations for the next iteration builds trust and prevents overconfidence in the numbers.
The following content measurement mistakes are common, understandable, and fixable with a few structural changes.
Mistake:
Fix:
Mistake:
Fix:
Mistake:
Fix:
Mistake:
Fix:
Strong content measurement requires ongoing governance, not just one-time setup. Without guardrails, even well-designed frameworks degrade as content volume grows, teams change, and tools evolve.
Diagnostic checks help teams assess whether their measurement system is working as intended:
Governance also requires clear ownership and shared rules. Teams should agree on who is responsible for content measurement, how attribution logic and windows are defined, and when those assumptions should be revisited.
Regular audits of tagging, taxonomy, and data quality help maintain confidence in the insights generated and prevent small inconsistencies from undermining long-term decision-making.

Effective content measurement does not depend on a single platform, but on how different tools work together across the measurement lifecycle.
These systems capture the raw signals that content measurement relies on:
This layer turns raw data into insight:
Measurement is only valuable if it informs action:
Content marketing ROI is about influence, contribution, and cumulative impact across the buyer journey. Strong measurement starts with clear objectives, realistic expectations, and metrics tied to decisions rather than vanity.Â
Attribution will never deliver perfect answers, but it can provide enough direction to guide smarter investment. Teams that measure content ROI well gain clarity, confidence, and the ability to scale what works.

January 2026 is our first clean look at how ecommerce businesses are entering the new year. The topline story: the median business is still paying more to grow, while the top quartile is turning the same market into cheaper, more efficient acquisition.
In this article, we’re breaking down what happened in January, how typical performance compares to the 75th percentile, and what operators should do with these benchmarks.

Across Northbeam customers, the median business saw modest growth and worsening new customer economics in January 2026:
There are a few clear implications:
If January felt like you were working harder for less convincing performance on first time buyers, these numbers explain why.
The 75th percentile tells a very different story. In the same month, the best performers delivered step change gains in both growth and efficiency:
This is what healthy, scalable growth looks like:
In other words, January was not just “easier” for the top quartile. They operated with a different set of rules.
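When you build your own version of these benchmarks, the median versus 75th percentile split is a one-liner with the standard library. A sketch with made-up MER values, not figures from the report:

```python
import statistics

# Assumed MER values across a set of businesses (illustrative only).
mer_values = [1.8, 2.1, 2.3, 2.5, 2.7, 2.9, 3.2, 3.6, 4.1, 4.8]

median_mer = statistics.median(mer_values)
# quantiles(n=4) returns the 25th/50th/75th percentile cut points.
p75_mer = statistics.quantiles(mer_values, n=4)[2]

print(f"median MER: {median_mer:.2f}")          # the "typical" business
print(f"75th percentile MER: {p75_mer:.2f}")    # the top-quartile threshold
```

Tracking your own position against both cut points month over month tells you whether you are drifting with the median or closing on the top quartile.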

Higher budgets only work if your creative pipeline can support them. Our 2025 data showed a clear relationship between spend bands and ad volume: businesses that spent more launched dramatically more ads, especially on platforms like Meta and TikTok.
In January 2025, the top quartile looked like businesses that had already internalized this:
The result is visible in the benchmarks: higher spend, higher conversion, and lower CAC at the same time.
In January 2026, the top quartile’s results are exactly what you would expect from businesses that took that lesson seriously.
Download the full Northbeam 2025 data report.
Median businesses nudged budgets up and accepted whatever economics the market delivered. Top performers acted as if MER and CAC were hard constraints, not nice to have metrics.
Practically, that means:
You cannot control the market, but you can control how much pain you are willing to tolerate before you pull back.
In the January aggregate, blended metrics do not tell the full story. Revenue is up a little, MER is flat to slightly positive, and it would be easy to decide you are fine. New customer metrics say otherwise.
Top performers watched first time economics independently:
That discipline shows up directly in the January gap between median and 75th percentile new customer performance.
January is not just “month one” on a clean slate. It is a very specific demand environment: resolution season in some verticals, hangover season in others, and a reset after Q4 promo pressure.
The data suggests that:
That is why you see top performers able to add new revenue at lower CAC even as the median business struggles.

You cannot copy the exact numbers from the 75th percentile, but you can copy the operating patterns that got them there. A practical way to work with the January 2026 data:
January 2026 shows two very different realities inside the same market. The median business nudged spend up, saw modest revenue growth, and watched new customer economics slip. The top quartile increased budgets aggressively, grew faster, and improved efficiency at the same time.
The difference is not access to some secret channel. It is whether you are willing to run the year on explicit guardrails, separate scorecards for first time performance, calendar aware planning, and creative systems that can actually support the spend level you want.
If you get those four pieces right, the January 2026 benchmarks stop being a warning and start becoming a baseline for what your own growth can look like over the rest of the year.
