David Cardiel · Fractional CMO · Independent/Fractional Consulting

How to Improve Your Marketing Attribution Model and Kill Vanity Metrics

Fractional CMO David Cardiel reveals the attribution frameworks that helped him scale ARR from $9M to $100M+. Fix broken reporting before it costs you.



The $26M Wake-Up Call on Broken Attribution

“If you give up on attribution, you’re just gonna fly blind. You have to put in the effort even if it’s not straightforward.”

That’s David Cardiel, Fractional CMO — the operator who scaled Parsley from $9M to $26M ARR and drove $100M+ ARR at TrendKite before its acquisition by Cision. He has lived inside the rooms where marketing leaders get caught cheerleading inflated pipeline numbers while revenue stays flat. He knows exactly how broken attribution models destroy executive trust, misallocate budget, and stall growth at precisely the moment a company should be accelerating.

Most B2B SaaS companies between $2M and $26M ARR have the same root problem: their growth foundation is structurally unsound. The ICP is loosely defined or never updated. Win-loss analysis doesn’t happen across all deal stages. Marketing reports vanity metrics that look impressive in a slide deck but can’t answer the question a CEO asks in a hallway: What percentage of new business did marketing contribute this month? If your VP of Marketing can’t answer that question with a specific dollar figure and a percentage tied to closed-won revenue, attribution is broken — and the fix isn’t a new tool. It’s a set of disciplines that most teams actively avoid because they’re uncomfortable.

This page distills David’s playbook for rebuilding attribution from the ground up, including the five frameworks he uses across every engagement, the exact metrics he holds himself accountable to, and the operational review structure that makes the numbers impossible to hide from.




Deep Dive: Building a Marketing Attribution Model That Holds Up to CEO Scrutiny

Why Most SaaS Attribution Models Fail Before They Start

The failure mode is predictable. A company hits $3M–$5M ARR, hires its first marketing leader, and immediately starts measuring activity: leads generated, email open rates, ad impressions, webinar registrations. Those numbers go into a dashboard. The dashboard goes into a board slide. Nobody connects them to closed-won revenue because connecting them is genuinely hard — and because nobody forces the conversation.

David is direct about what happens next:

“There was a little bit of that cheerleading as you just mentioned before. Like — we got all of these thousands of leads, more than we’ve ever done before. And you looked around the room and at first maybe the first couple of months I think there was some looking around. But then as the company starts growing, you’re getting some — I’ll just say there are some smarter people in the room looking at you.”

The cheerleading phase has a shelf life. Once the company adds a data-literate CRO, a finance leader who reads pipeline conversion reports, or a board member who has seen this pattern before, the inflated metrics become a liability. The attribution model either evolves into something defensible or marketing loses its seat at the revenue table.

David’s Attribution Model Evolution framework is built precisely to prevent that outcome. It does not start with a perfect tech stack. It starts with whatever data is available, defines contribution incrementally, and runs every output past sales and executive leadership until the numbers pass what he calls the “sniff test.”


The 10-Day Foundation Audit: ICP Before Attribution

You cannot build a defensible attribution model on top of a poorly defined ICP. The two disciplines are linked: if you don’t know exactly who you’re selling to and why they buy, you cannot accurately assess which channels are sourcing qualified pipeline versus filling the funnel with noise.

David makes this one of the first two things he does in any new engagement:

“There are two core things that I did within the first 10 days of working in those organizations. That was to revisit — if it was there, in one case it was, in one case it wasn’t — but really revisiting what that ideal customer profile is, what is it, what’s been documented, how has it been documented, and is it talked about on literally a daily basis?”

The ICP redefinition and win-loss analysis are not sequential — they feed each other. Win-loss data tells you who actually buys, why they chose you, and what objections killed deals. That intelligence reshapes the ICP. A refined ICP tells you what channels should be sourcing those prospects. Better channel targeting produces cleaner attribution data. The loop closes.

The 10-Day Foundation Audit has five steps: map and document the current ICP, run cross-functional win-loss analysis, identify messaging and positioning gaps from deal outcomes, assign accountability for testing improvements, and establish operational reviews to track progress. None of these steps require a new tool. They require people in a room being honest with each other.


How to Run Win-Loss Analysis Across Every Pipeline Stage

Most win-loss reviews are post-mortems on closed-lost deals. That’s half the picture — and the less useful half. David’s practice is to run win-loss analysis at all stages of the buying cycle, including early-stage opportunities that stalled, mid-funnel deals that were disqualified, and closed-won accounts to understand what specifically tipped the decision.

“A common practice we had was — it was hard. It wasn’t the most attractive meeting to be in. But we did that win-loss analysis at all stages of the buying cycle with the right people in the room: executive leadership.”

The key structural requirement is cross-functional participation. Marketing cannot run this meeting alone and expect sales to trust the outputs. Sales cannot run it alone and expect marketing to change anything. When the CRO and CMO sit in the same room reviewing the same deals with the same data, the conversation becomes about patterns rather than blame.

The Operational Review Discipline framework formalizes this into a monthly cadence. Marketing brings pipeline contribution metrics, MCAC (marketing customer acquisition cost) by channel, and closed-won attribution data. Sales validates or challenges the numbers in real time. Leadership identifies which channels or campaigns are underperforming. Ownership for fixing the gaps is assigned before anyone leaves the room.

This is where pipeline contribution reporting and revenue attribution accuracy intersect. The operational review is not a reporting exercise — it’s an accountability structure that forces attribution to improve over time because the data is publicly tested every month.
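To make the arithmetic behind those review inputs concrete, here is a minimal sketch of channel-level MCAC. The channel names, spend figures, and customer counts are hypothetical illustrations, not David’s actual reporting stack:

```python
# Minimal sketch of MCAC (marketing customer acquisition cost) by channel.
# All channel names, spend, and customer counts are hypothetical examples.

channels = {
    #             (marketing spend, customers sourced by the channel)
    "paid_search": (120_000, 40),
    "webinars":    (30_000, 10),
    "content":     (45_000, 30),
}

def mcac(spend: float, customers: int) -> float:
    """Marketing spend divided by customers acquired from that channel."""
    return spend / customers if customers else float("inf")

for name, (spend, customers) in channels.items():
    print(f"{name}: MCAC = ${mcac(spend, customers):,.0f}")
```

The point of putting a number like this in front of sales each month is exactly the “publicly tested” dynamic described above: a per-channel cost figure is easy to challenge in the room, which is what forces it to get more accurate over time.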


The CEO Hallway Standard for Attribution Maturity

There is a concrete test David uses to assess whether an attribution model is actually working. It comes from a real hallway:

“Our CEO would walk down the hall and we’d pass each other and he’d say ‘What percentage of new business is marketing contributing this month?’ and he’d look at me and I’d be like — I could say ‘61%, $1.8 million or something like that.’ I could confidently say that and we were talking at the closed-won business level — which was awesome.”

That number — 61% of new business, $1.8M — is the product of a functioning attribution model. It is not a pipeline metric. It is not an MQL count. It is a closed-won revenue figure that marketing can own, defend, and trace back to specific channels and campaigns.

Getting to that number requires the Attribution Model Evolution framework to be fully operational: starting with available data, defining channel-level pipeline contribution, validating with win-loss data, and stress-testing with both sales and executive leadership. The model does not need to be perfect on day one. It needs to get better every review cycle until the CEO stops asking because they already trust the answer.
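The hallway answer itself is simple arithmetic once the underlying data is trustworthy: marketing-sourced closed-won revenue over total closed-won revenue. A minimal sketch, with hypothetical deal records and a hypothetical `marketing_sourced` flag standing in for whatever the CRM actually records:

```python
# Sketch of the "hallway answer": marketing's share of closed-won revenue.
# Deal amounts and the marketing_sourced flag are hypothetical examples.

deals = [
    {"amount": 50_000,  "marketing_sourced": True},
    {"amount": 80_000,  "marketing_sourced": False},
    {"amount": 120_000, "marketing_sourced": True},
]

marketing_revenue = sum(d["amount"] for d in deals if d["marketing_sourced"])
total_revenue = sum(d["amount"] for d in deals)
pct = 100 * marketing_revenue / total_revenue

print(f"Marketing contributed {pct:.0f}% of new business (${marketing_revenue:,})")
```

The hard part is never the division — it is agreeing, with sales in the room, on which deals legitimately get the `marketing_sourced` flag. That agreement is what the win-loss and operational-review disciplines exist to produce.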


Dual-Track Messaging Testing: Data-Driven + Creative

Once attribution is stable enough to tell you which channels are working, the next discipline is messaging testing methodology — understanding not just where pipeline comes from, but what message converted it.

David runs two tracks simultaneously. The first is pure data analysis: behavioral signals, content consumption patterns, search intent, site usage. The second is deliberately creative — what he calls “shots in the dark.”

“I love a good data-driven test that comes to fruition because you’ve done your homework there. But on the other end of the spectrum, I absolutely love a shot in the dark. I do — because that brings in a little bit of the creativity aspect, the artistic aspect to marketing.”

The most powerful example from his career: while auditing content performance at Parsley, David found a small blog entry written by a founder about making content analytics easy. Nobody on the team had flagged it. It was ranking in the top 10 organically.

“I found this little — not even an article, almost a blurb — that one of the founders had written years before about making content analytics easy. People organically were flocking to this search and nobody ever brought this up. I ran a campaign called ‘Content Analytics Made Easy’ and it was a game changer. It literally changed the homepage to ‘Content Analytics Made Easy’ — that tripled our traffic.”

That single behavioral data insight — identified through careful analysis rather than gut instinct — produced a 3x traffic increase and fed demand generation campaigns for months. This is what closed-won revenue tracking tied to content looks like in practice: finding what buyers are already telling you through their behavior, amplifying it, and attributing the downstream pipeline back to the source.

The Dual-Track Messaging Testing framework runs data-driven campaigns and creative campaigns in parallel, with equal intentionality applied to both. The data track grounds execution in evidence. The creative track produces the unexpected wins — the campaign angles that behavioral analysis would never have predicted.


AI-Assisted Operations: Automate the Repeatable, Protect the Strategic

David has extended the attribution discipline into AI-assisted operations — but with a specific philosophy. AI should automate tasks that are repeatable, time-consuming, and currently absorbing leadership attention that should be spent on strategy.

“I built an SDR manager — and I’m going to say ‘her’ because I asked my GPTs and apps to humanize themselves if I can. I built her up to have all of our pertinent financial pipeline and financial goals, pipeline goals.”

The AI-assisted SDR manager was built with Salesforce dashboard access, daily targets, individual rep performance data, and team tenure context. The input workflow: drop in a dashboard export, get out a formatted performance report with coaching notes. The freed-up time goes back to win-loss analysis, messaging work, and executive alignment.

The AI-Assisted Operations Framework follows a five-step build: identify repeatable leadership tasks, build custom GPTs or apps with relevant data access, program context into the system, create a single-input workflow, and reallocate the recovered time to strategic priorities. The point is not to replace human judgment in attribution. It is to remove the manual data-pulling and report-formatting work that prevents leaders from doing the analysis that actually improves attribution over time.
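The “single-input workflow” idea — one dashboard export in, one formatted report out — does not require an AI layer to illustrate. The sketch below is a plain-Python analogue, not David’s GPT build; the CSV column names (`rep`, `meetings_booked`, `target`) are hypothetical, not from an actual Salesforce export:

```python
# Plain-script analogue of a single-input workflow: paste in one dashboard
# export, get back a formatted performance summary with simple coaching flags.
# Column names and the sample data are hypothetical.
import csv
import io

export = io.StringIO(
    "rep,meetings_booked,target\n"
    "Alice,22,20\n"
    "Bob,14,20\n"
)

def performance_report(csv_file) -> str:
    lines = []
    for row in csv.DictReader(csv_file):
        booked, target = int(row["meetings_booked"]), int(row["target"])
        status = "on pace" if booked >= target else "needs coaching"
        lines.append(f"{row['rep']}: {booked}/{target} meetings ({status})")
    return "\n".join(lines)

print(performance_report(export))
```

An AI layer adds the humanized coaching notes and tenure context on top; the structural win is the same either way — the leader supplies one input and stops hand-assembling the report.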


The Unsexy Foundation That Outperforms Every Growth Hack

Everything in David’s playbook — the 10-Day Audit, the attribution model evolution, the operational review discipline, the dual-track messaging tests — reduces to one principle:

“People, process, and technology, man. I’ve built a career on it. It’s just been — to different degrees of those three.”

And its corollary:

“The most powerful stuff is the most boring, unsexy, uncool things ever.”

Sales and marketing alignment on attribution is not a technology problem. It is a people problem that gets solved through process discipline: the right people reviewing the right data in the same room on a predictable cadence. The technology — the CRM, the attribution software, the AI tools — only amplifies what the people and process have already established. Layer it on a broken foundation and you get faster wrong answers.

The companies that figure out how to improve their marketing attribution model are not the ones that buy the best attribution software. They are the ones that do the uncomfortable work: redefining the ICP, sitting through the win-loss reviews, replacing pipeline cheerleading with closed-won reporting, and iterating until the CEO can ask a hallway question and get a specific dollar figure back in five seconds.


About David Cardiel

David Cardiel is a Fractional CMO who has operated at the intersection of demand generation strategy, pipeline contribution reporting, and organizational alignment across multiple high-growth SaaS companies. He scaled Parsley from $9M to $26M ARR and generated $100M+ ARR at TrendKite, which was subsequently acquired by Cision. His frameworks — built around people, process, and technology — have been tested inside companies navigating the transition from early-stage product-market fit to scaled GTM execution. David works with $2M–$26M ARR SaaS companies as a fractional leader, embedding the operational disciplines that make attribution defensible and growth compoundable.


Ready to Connect Marketing Attribution to Closed-Won Revenue?

David’s playbook is clear: attribution discipline starts with ICP clarity, gets stress-tested in win-loss reviews, and matures into a number you can say in a hallway with confidence. If your marketing team is still reporting pipeline metrics that can’t be traced to closed-won business — or if your CEO has to guess what marketing is contributing — the foundation needs work before any new channel, tool, or campaign will deliver compounding returns. RPG works with $2–5M ARR B2B companies to build exactly this kind of revenue-connected marketing infrastructure: ICP redefinition, attribution model design, operational review structures, and messaging testing programs that produce results you can defend at the board level.

Talk to a Growth Strategist →


Frequently Asked Questions

How do you build an attribution model that connects marketing to closed-won revenue?

Start with available data — leads, MQLs, SQLs — even if imperfect. Define pipeline contribution by channel, validate with win-loss analysis, and run the numbers past your CEO and CRO until they pass the sniff test. Refine incrementally as data quality improves. Never abandon the effort because it’s hard.

What is the best way to run win-loss analysis across a sales funnel?

Convene sales, marketing, and executive leadership to review actual deals at every stage of the buying cycle — not just closed-lost. The goal is pattern recognition across outcomes. It’s not a comfortable meeting, but it surfaces the truth about which channels, messages, and motions are actually working.

How do you eliminate vanity metrics from marketing reporting?

Replace pipeline metrics with closed-won contribution percentages tied to real revenue. If marketing reports thousands of leads without tying them to closed business, leadership will eventually see through it. Require marketing to report MCAC, channel-level pipeline contribution, and a dollar figure that sales can validate in the room.

How often should you revisit your ideal customer profile?

In David’s engagements, revisiting the ICP is one of the first two tasks in the opening 10 days — and it stays live after that. Win-loss findings continuously reshape the profile, and in his view the ICP should be documented and talked about on a daily basis, not filed away after an annual refresh.
