You’re spending $15,000 a month on marketing. Your agency sends a report showing 47% increase in impressions. Your SEO consultant celebrates a jump to position 4 for a keyword you’ve never heard of. Your social media manager points to growing follower counts.
Revenue is flat.
Something is wrong — either with the marketing or with how you’re measuring it. Usually both.
The problem with marketing measurement isn’t a lack of data. It’s too much data, organized in ways that make everything look like it’s working. Here’s how to cut through it.
Why Every Attribution Model Lies
Attribution is the process of determining which marketing touchpoint deserves credit for a conversion. It sounds simple. It’s not.
First-Touch Attribution
Gives 100% of the credit to whatever first brought the customer to your site.
The lie: A customer sees your Facebook ad, clicks through, leaves, Googles your brand name two weeks later, reads three blog posts over a month, gets a retargeting ad, then finally buys. First-touch says Facebook gets all the credit. The blog posts, the retargeting, the brand search? They don’t exist in this model.
When it’s useful: Measuring top-of-funnel channel effectiveness. Which channels introduce new audiences?
Last-Touch Attribution
Gives 100% of the credit to the final touchpoint before conversion.
The lie: That same customer journey above? Last-touch says the retargeting ad gets all the credit. Facebook, the blog posts, and the brand search contributed nothing.
When it’s useful: Measuring bottom-of-funnel efficiency. Which channels close?
Multi-Touch Attribution (MTA)
Distributes credit across all touchpoints. Sounds fair. The problem is deciding how to distribute it.
Common models:
- Linear: Equal credit to every touchpoint. Mathematically simple, practically useless for decision-making.
- Time decay: More credit to touchpoints closer to conversion. Better, but still arbitrary.
- Position-based (U-shaped): 40% to first touch, 40% to last touch, 20% distributed across middle touches. Popular because it feels balanced. But “balanced” doesn’t mean “accurate.”
- Algorithmic/data-driven: Uses machine learning to assign credit based on statistical patterns. Available in GA4 and some advanced platforms. Best available option, but requires significant data volume (typically 300+ conversions per month) to produce meaningful results.
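The credit-splitting rules above are simple enough to sketch in a few lines. This is a minimal illustration, not a real attribution engine: the channel names and the two-touch fallback in `position_based` are assumptions for the example.

```python
from collections import defaultdict

def first_touch(journey):
    """100% of credit to the first touchpoint."""
    return {journey[0]: 1.0}

def last_touch(journey):
    """100% of credit to the last touchpoint."""
    return {journey[-1]: 1.0}

def linear(journey):
    """Equal credit to every touchpoint."""
    credit = defaultdict(float)
    for channel in journey:
        credit[channel] += 1 / len(journey)
    return dict(credit)

def time_decay(journey, days_before_purchase, half_life=7):
    """Weight each touch by 0.5 ** (days / half_life): closer to conversion, more credit."""
    weights = [0.5 ** (d / half_life) for d in days_before_purchase]
    total = sum(weights)
    credit = defaultdict(float)
    for channel, w in zip(journey, weights):
        credit[channel] += w / total
    return dict(credit)

def position_based(journey, first=0.4, last=0.4):
    """U-shaped: 40% to each end, the remaining 20% spread across the middle."""
    if len(journey) == 1:
        return {journey[0]: 1.0}
    credit = defaultdict(float)
    credit[journey[0]] += first
    credit[journey[-1]] += last
    middle = journey[1:-1]
    remainder = 1 - first - last
    if middle:
        for channel in middle:
            credit[channel] += remainder / len(middle)
    else:  # two-touch journey: split the middle share between the ends (a modeling choice)
        credit[journey[0]] += remainder / 2
        credit[journey[-1]] += remainder / 2
    return dict(credit)

# The customer journey from the first-touch example above
journey = ["facebook_ad", "brand_search", "blog", "blog", "blog", "retargeting_ad"]
print(first_touch(journey))     # {'facebook_ad': 1.0}
print(position_based(journey))  # ends get 0.4 each; the four middle touches split 0.2
```

Run the same journey through each function and you get five different answers about "what worked" — which is the point: the model is a policy choice, not a measurement.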
The fundamental problem with all MTA models: they only see digital touchpoints they can track. They miss word-of-mouth, podcast mentions, conference conversations, brand recognition built over years. For many B2B companies, the most influential touchpoint is a conversation that never touched a browser.
A Framework That Actually Works
Stop trying to assign credit to individual touches. Instead, measure marketing at three levels:
Level 1: Leading Indicators (Weekly)
These tell you if your marketing engine is running. They don’t tell you if it’s producing revenue yet.
For SEO:
- Organic click-through rate from Search Console
- Number of pages ranking in positions 1-10
- Organic sessions to high-intent pages (pricing, contact, product pages)
For PPC:
- Cost per click trend (rising CPC with flat conversion rate = problem)
- Quality Score by ad group
- Impression share on brand and top non-brand terms
For Content:
- Engagement depth (scroll depth, time on page) on new content
- Email capture rate on content pages
- Content-assisted conversions (did they read content before converting?)
For Social:
- Click-through rate to site (not likes, not impressions)
- Referral traffic quality (bounce rate, pages per session from social)
- Direct message inquiries (actual purchase intent)
Level 2: Lagging Indicators (Monthly)
These tell you what marketing actually produced.
- Marketing-sourced pipeline: Revenue in your CRM that originated from a marketing touchpoint. This requires proper UTM tagging and CRM integration.
- Customer acquisition cost (CAC): Total marketing spend divided by new customers acquired. Track by channel if your attribution is good enough.
- Revenue per channel: How much revenue can be reasonably attributed to each marketing channel? Use last-touch as a baseline and adjust upward for channels you know contribute to the top of funnel.
- Blended ROAS (Return on Ad Spend): Total revenue divided by total ad spend. Doesn’t tell you which campaign worked, but tells you if paid advertising is profitable overall.
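The blended math above is just division; a sketch with invented figures to show the shape of the monthly calculation:

```python
# Illustrative lagging-indicator arithmetic; all figures are made up.

def cac(total_marketing_spend, new_customers):
    """Customer acquisition cost: spend per new customer acquired."""
    return total_marketing_spend / new_customers

def blended_roas(total_revenue, total_ad_spend):
    """Revenue per dollar of ad spend, across all paid channels combined."""
    return total_revenue / total_ad_spend

print(cac(15_000, 50))               # 300.0 -> $300 to acquire each customer
print(blended_roas(60_000, 15_000))  # 4.0   -> $4 back per $1 of ad spend
```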
Level 3: Strategic Metrics (Quarterly)
These tell you if your marketing is building long-term value or just buying short-term results.
- Organic traffic growth rate: Are you building an asset (content, SEO authority) or just renting attention (ads)?
- Brand search volume: Is your brand name being searched more over time? Google Trends is a rough but useful indicator. Rising brand search means marketing is building awareness, regardless of what attribution says.
- Customer lifetime value (LTV) by acquisition channel: Do customers from organic search retain better than customers from paid ads? This changes how you allocate budget.
- Marketing efficiency ratio: Revenue divided by total marketing spend (including salaries and tools, not just ad spend). A healthy ratio depends on your business model: roughly 5:1 for e-commerce, 3:1 for SaaS, and 10:1+ for service businesses.
Incrementality Testing: The Honest Measurement
Incrementality testing answers the question attribution can’t: “Would this revenue have happened anyway without this marketing spend?”
How It Works
- Split your audience geographically. Run ads in half your target markets, hold ads in the other half.
- Measure the difference. If the ad markets produce 20% more conversions than the holdout markets, your ads are driving 20% incremental lift.
- Calculate true ROI. Only count the incremental conversions against your ad spend.
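The three steps above reduce to a small calculation. A sketch with invented numbers (conversion counts, revenue per conversion, and spend are all assumptions for illustration):

```python
def incremental_lift(test_conversions, holdout_conversions):
    """Relative lift of the ad markets over the holdout markets.
    Assumes the two groups are comparable in size; scale the counts first if not."""
    return (test_conversions - holdout_conversions) / holdout_conversions

def true_roas(test_conversions, holdout_conversions, revenue_per_conversion, ad_spend):
    """Count only the incremental conversions against the ad spend."""
    incremental = test_conversions - holdout_conversions
    return incremental * revenue_per_conversion / ad_spend

# 600 conversions in the ad markets vs. 500 in the holdout markets
print(incremental_lift(600, 500))         # 0.2 -> 20% incremental lift
print(true_roas(600, 500, 400, 20_000))   # 2.0 -> 100 extra sales * $400 / $20,000
```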
Why This Matters
A common discovery: branded search ads often show incredible ROAS in attribution reports (10:1, 20:1) because they capture people who were already searching for your brand. Incrementality tests frequently reveal that 60-80% of those conversions would have happened without the ad. The true ROAS is often 2:1 or 3:1 — still positive, but a very different story than the attribution report tells.
Practical Constraints
Incrementality testing requires enough volume to reach statistical significance. If you’re spending $5,000/month, you probably don’t have enough data to run clean tests. At $20,000+/month, it becomes viable and valuable.
For smaller budgets, a simpler version: turn off one channel for 30 days and measure the revenue impact. This is crude, but it reveals dependencies. If you pause Facebook ads and revenue drops 5% while attribution claimed Facebook drove 25% of revenue, you now know that 80% of Facebook-attributed revenue was going to happen anyway.
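That back-of-envelope check is worth writing down explicitly; here is the arithmetic from the example, with the function name as an illustrative label:

```python
def non_incremental_share(attributed_share, observed_revenue_drop):
    """Fraction of a channel's attributed revenue that would have happened
    anyway, estimated from a pause test. Both inputs are fractions of
    total revenue (e.g. 0.25 = 25%)."""
    return 1 - observed_revenue_drop / attributed_share

# Attribution claimed 25% of revenue; pausing the channel dropped revenue 5%
print(non_incremental_share(0.25, 0.05))  # 0.8 -> 80% was going to happen anyway
```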
Channel-Specific Metrics Worth Tracking
SEO
| Metric | What It Tells You | What It Doesn’t Tell You |
|---|---|---|
| Organic sessions | Traffic volume | Whether traffic converts |
| Organic conversions | Direct revenue impact | Long-term brand building value |
| Keyword rankings | Visibility trend | Actual click-through rates |
| Domain authority/rating | Competitive position | Whether your content meets intent |
The honest truth about SEO measurement: SEO builds a compounding asset. A blog post written in January might drive revenue in July. Attribution windows typically cap at 30-90 days. SEO’s long-term contribution is systematically undercounted by every attribution model.
PPC (Google Ads, Meta Ads)
| Metric | What It Tells You | What It Doesn’t Tell You |
|---|---|---|
| ROAS | Revenue per dollar spent | Whether it’s incremental |
| CPA | Cost to acquire a customer | Customer quality/lifetime value |
| Impression share | Market coverage | Whether you should want more coverage |
| Conversion rate | Landing page effectiveness | If the “conversions” are real (form spam, bot clicks) |
The honest truth about PPC measurement: PPC is the most measurable channel, which makes it the most over-measured channel. Businesses over-allocate to PPC because the numbers are clean, even when other channels produce better long-term returns.
Content Marketing
| Metric | What It Tells You | What It Doesn’t Tell You |
|---|---|---|
| Organic traffic to content | Reach | Revenue impact |
| Email signups from content | Lead generation | Lead quality |
| Content-assisted conversions | Influence on purchases | Causation vs. correlation |
| Social shares | Distribution | Whether sharers become customers |
The honest truth about content measurement: Content marketing is the hardest channel to attribute because it operates at the top and middle of the funnel. Its primary value — trust building, objection handling, brand authority — doesn’t show up in any attribution dashboard. If your content is good, it’s working. You just can’t prove it with numbers the way you can with PPC.
The Reporting Framework
Here’s what a useful marketing report looks like:
Section 1: Business Metrics (Did Revenue Grow?)
- Total revenue (this period vs. last period vs. same period last year)
- New customers acquired
- Customer acquisition cost
- Marketing spend as a percentage of revenue
Section 2: Channel Performance (What’s Working?)
- Revenue by channel (with attribution model noted)
- Cost and ROAS by paid channel
- Organic traffic and conversion trend
- Leading indicator changes
Section 3: Honest Assessment (What Do We Actually Know?)
- What we’re confident about (data supports it)
- What we think is true (directionally supported but not proven)
- What we don’t know (and what we’d need to test to find out)
That third section is where honest marketing measurement lives. Any agency or consultant who never says “we don’t know” is lying to you — or to themselves.
Practical Steps
- Set up proper UTM tagging. Every link you control should have source, medium, and campaign parameters. This is table stakes for measurement, and most businesses still do it inconsistently.
- Connect your analytics to your CRM. Google Analytics tells you about sessions. Your CRM tells you about revenue. Without connecting them, you’re measuring activity, not results.
- Agree on a primary attribution model and stick with it. The model matters less than consistency. If you use last-touch, always use last-touch. Switching models makes trend analysis impossible.
- Review blended metrics monthly. Total revenue divided by total marketing spend. Is the overall ratio improving? If yes, something is working. If no, dig deeper.
- Run an incrementality test quarterly. Even a simple one. Turn something off and see what happens. The results will surprise you.
- Accept uncertainty. Marketing is not engineering. Some of what you spend will produce returns you can measure. Some will produce returns you can’t. The goal is to make increasingly better decisions with imperfect information — not to achieve perfect attribution.
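For the UTM step, a minimal tagging helper using Python's standard library — the function name and defaults are illustrative, not a standard tool:

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def utm_url(base_url, source, medium, campaign):
    """Append the standard utm_source/utm_medium/utm_campaign parameters,
    preserving any query string already on the URL."""
    parts = urlparse(base_url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(utm_url("https://example.com/pricing", "newsletter", "email", "q3-launch"))
# https://example.com/pricing?utm_source=newsletter&utm_medium=email&utm_campaign=q3-launch
```

Generating links through one helper like this (or a shared spreadsheet) is what keeps tagging consistent: the inconsistency that breaks attribution usually comes from hand-typed parameters, not missing ones.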
The companies that measure marketing well aren’t the ones with the most sophisticated tools. They’re the ones with the most honest conversations about what the data actually shows.