
Compare Google Ads attribution models: how each works, their pros and cons, and how your model choice impacts bidding and reporting.


Simul Sarker
CEO of DataCops
Last Updated: November 20, 2025
I used to live in my Google Ads dashboard. For years, the cost per acquisition (CPA) column was my north star. If a campaign’s CPA was low, I’d pour more money into it. If it was high, I’d cut the budget. It felt logical, decisive, data-driven. But a nagging feeling grew over time. We’d pause a high-CPA “awareness” campaign, and a month later, our low-CPA branded search campaign would start to wither. The numbers in the dashboard didn’t connect to the reality of our business.
The deeper I dug, the clearer it became that the problem wasn't the campaigns themselves; it was how we were measuring them. We were giving 100% of the credit for a sale to the last ad a customer clicked, ignoring the entire journey that led them there. It was like giving a trophy to the person who tapped the ball over the goal line, ignoring the midfielders and defenders who fought to get it there.
What’s wild is how invisible it all is. This flawed logic shows up in dashboards, reports, and headlines, yet almost nobody questions it. We accept the default settings, optimize for the simplest metric, and wonder why our growth has plateaued.
Maybe this isn’t about attribution models alone. Maybe it says something bigger about how we perceive value and how the complex, messy path of a human decision gets flattened into a single data point. The modern internet is not a straight line, and the tools we use to measure it are often lying to us by omission.
I don’t have all the answers. But if you look closely at your own customer journeys, at the touchpoints you’re currently ignoring, you might start to notice it too. This is a guide to understanding the language of attribution, moving beyond its simplest dialects, and recognizing that the model you choose is only as good as the data you feed it.
For over a decade, Last-Click Attribution was the law of the land. It was the default model in Google Ads and Google Analytics, and its logic was seductively simple: the last ad a user clicks before converting gets 100% of the credit.
If a user’s journey looks like this:

Display Ad → Non-Branded Search Ad → Branded Search Ad → Conversion

Last-Click gives 100% of the conversion value to the Branded Search Ad. The Display and Non-Branded Search ads get zero. According to this model, they contributed nothing.
Last-Click attribution became the standard for two reasons: it was easy to understand and technically simple to implement. It required tracking only one event. But its simplicity is a trap.
By focusing exclusively on the final touchpoint, you create dangerous blind spots: the upper-funnel campaigns that introduce customers look worthless, while the bottom-funnel campaigns that merely close them look like heroes.
Relying on Last-Click is like driving a car using only the rearview mirror. You can see what’s directly behind you, but you have no idea where you are going or what lies ahead.
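In code, the model is almost embarrassingly simple, which is exactly why it became the default. A minimal sketch (the channel names are just illustrative):

```python
def last_click(touchpoints, conversion_value):
    """Assign 100% of the conversion value to the final touchpoint."""
    credit = {tp: 0.0 for tp in touchpoints}
    credit[touchpoints[-1]] = conversion_value
    return credit

journey = ["Display Ad", "Non-Branded Search", "Branded Search"]
print(last_click(journey, 200.0))
# {'Display Ad': 0.0, 'Non-Branded Search': 0.0, 'Branded Search': 200.0}
```

One line of real logic, no timestamps, no journey history. Everything this article criticizes about the model is visible right there: the entire path except `touchpoints[-1]` is discarded.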
To combat the flaws of Last-Click, Google introduced a suite of "rules-based" models. These models distribute credit across multiple touchpoints according to a fixed rule. While none are perfect, they each offer a different lens through which to view your marketing efforts.
Let’s analyze these models using a consistent customer journey that results in a $200 sale:

YouTube Ad (Day 1) → Non-Branded Search Ad (Day 8) → Shopping Ad (Day 12) → Branded Search Ad (Day 14) → $200 Conversion
First-Click is the polar opposite of Last-Click. It gives 100% of the credit to the first ad the user clicked.
The Linear model is the democratic approach. It distributes credit equally among all touchpoints in the path.
The Time Decay model gives more credit to touchpoints that happened closer in time to the conversion. The credit assigned to each touchpoint "decays" with a 7-day half-life: a click 7 days before the conversion earns half the weight of a click on conversion day.
The Position-Based model is a hybrid. It gives a set percentage of credit to the first and last interactions (typically 40% each) and distributes the remaining 20% evenly among the touchpoints in the middle.
This table illustrates how each model would assign credit for our $200 sale across the four touchpoints.
| Touchpoint | Last-Click | First-Click | Linear | Time Decay* | Position-Based |
|---|---|---|---|---|---|
| YouTube Ad (Day 1) | $0 | $200 | $50 | ~$21 | $80 |
| Non-Branded Search (Day 8) | $0 | $0 | $50 | ~$42 | $20 |
| Shopping Ad (Day 12) | $0 | $0 | $50 | ~$62 | $20 |
| Branded Search (Day 14) | $200 | $0 | $50 | ~$75 | $80 |
| Total | $200 | $200 | $200 | ~$200 | $200 |
*Time Decay values are computed with a 7-day half-life (conversion on Day 14) and rounded to the nearest dollar, which is why the column sums to approximately, rather than exactly, $200.
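These distributions can be reproduced with a short script. This is a sketch of the published rules, not any platform's internal code; it assumes the journey's day offsets from the table and a 7-day half-life for Time Decay:

```python
def attribute(days, value, model, half_life=7.0):
    """Distribute `value` across touchpoints that occurred on the given `days`.

    `days` lists when each touchpoint happened, in order; the conversion
    is assumed to occur on the last day.
    """
    n = len(days)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0] * n
    elif model == "time_decay":
        # Weight halves for every `half_life` days before the conversion.
        weights = [0.5 ** ((days[-1] - d) / half_life) for d in days]
    elif model == "position_based":
        # 40% to first, 40% to last, remaining 20% split across the middle.
        middle = 0.2 / (n - 2) if n > 2 else 0.0
        weights = [0.4] + [middle] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")
    total = sum(weights)
    return [value * w / total for w in weights]

# YouTube (Day 1), Non-Branded Search (Day 8), Shopping (Day 12), Branded Search (Day 14)
journey_days = [1, 8, 12, 14]
for m in ["last_click", "first_click", "linear", "time_decay", "position_based"]:
    print(m, [round(c) for c in attribute(journey_days, 200.0, m)])
```

Note that rounding each Time Decay share independently can make the printed column sum to $201; the unrounded shares always sum to exactly $200.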
As PPC expert Brad Geddes, Co-Founder of AdAlysis, points out, the choice of model directly influences your actions:
"Your attribution model should dictate your bid management. If you are using last click, you are making decisions based upon closers. If you are using a multi-touch attribution model, then you can start to value assists and make different decisions."
This is the key takeaway: changing your attribution model is not an academic exercise. It fundamentally changes which campaigns you value and how you invest your budget.
While rules-based models are a significant step up, they all share a common flaw: the rules are arbitrary and based on human assumptions. Google’s answer to this is Data-Driven Attribution (DDA), which is now the default model for most new conversion actions.
Instead of using a fixed rule, DDA uses machine learning to create a custom model for your specific account. It analyzes all the converting and non-converting paths on your website to determine how much credit each touchpoint actually deserves.
Conceptually, it works by comparing the paths of customers who converted to the paths of those who did not. If it notices that users who watched a specific YouTube ad are 30% more likely to eventually convert than users who didn't, it will assign a significant credit value to that YouTube ad. It learns from your account's unique data what the true drivers of conversion are.
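As a toy illustration of that comparison logic (not Google's actual algorithm, whose internals are not public), imagine computing the conversion-rate lift of paths that include a given channel versus paths that don't:

```python
def channel_lift(paths, channel):
    """Toy lift metric: conversion rate of paths containing `channel`,
    divided by the conversion rate of paths without it.

    `paths` is a list of (touchpoints, converted) pairs.
    """
    with_ch = [conv for tps, conv in paths if channel in tps]
    without = [conv for tps, conv in paths if channel not in tps]
    if not with_ch or not without or not any(without):
        return None  # not enough signal to compare
    return (sum(with_ch) / len(with_ch)) / (sum(without) / len(without))

# Hypothetical path data: (touchpoints, converted?)
paths = [
    (["YouTube", "Branded Search"], True),
    (["YouTube", "Shopping"], True),
    (["Shopping", "Branded Search"], True),
    (["Shopping"], False),
    (["Branded Search"], False),
    (["YouTube"], False),
]
print(channel_lift(paths, "YouTube"))  # 2.0 — YouTube paths convert twice as often
```

A real data-driven model weighs far more signals (order, recency, ad format, interactions between touchpoints), but the core idea is the same: channels that raise the probability of conversion earn credit, even when they are never the last click.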
As Ginny Marvin, Google's Ads Liaison, explains:
"Data-driven attribution looks at all of the clicks on your Search ads on Google.com. By comparing the click paths of customers who convert and customers who don't, the model identifies patterns among those clicks that lead to conversions."
The promise of DDA is immense. It offers a dynamic, self-improving model that is tailored to your business, free from the rigid assumptions of rules-based models. For accounts with sufficient data, it is almost always the most accurate choice.
However, DDA has a critical vulnerability that most advertisers overlook: its output is entirely dependent on the quality of its input.
The machine learning algorithm is powerful, but it is not magic. It can only analyze the data it is given. If the data you are feeding it is incomplete or corrupted, DDA's conclusions will be equally flawed, no matter how sophisticated the algorithm is.
This brings us to the detail most blogs on attribution never mention. The entire discussion of which model is "best" is meaningless if you are not capturing a complete and accurate picture of the customer journey in the first place.
Today's digital ecosystem is actively working against data completeness, from two directions at once. First, browser privacy features like Safari's Intelligent Tracking Prevention (ITP) and ad blockers silently drop tracking requests, erasing real touchpoints from the recorded journey. Second, bot and other non-human traffic injects fake clicks, and even fake conversions, that no real customer ever made.
When you combine these two problems, your attribution model, especially DDA, is operating on a dangerously distorted version of reality.
| Scenario | The Incomplete Data Fed to Google | The Resulting Flawed Attribution | The Complete Data (with DataCops) | The Accurate Attribution |
|---|---|---|---|---|
| Blocked journey (Safari/ITP) | A user on Safari clicks a Display Ad, then later a Branded Search Ad, and converts. ITP blocks the Display Ad click, so Google sees only the Branded Search click. | Last-Click and DDA both give 100% credit to Branded Search. You cut your Display budget. | DataCops, using a first-party proxy, captures both the Display Ad click and the Branded Search click. | DDA sees both touches and correctly credits the Display Ad for introducing the user. You invest properly in awareness. |
| Bot attack | A bot network clicks a specific Shopping Ad campaign 1,000 times and triggers 10 fake "conversions." | Google's DDA sees a high correlation between that Shopping Ad and conversions and assigns it high value. You increase its budget, wasting money on bots. | DataCops identifies and filters out the 1,000 bot clicks and 10 fake conversions at the source. | DDA receives only clean, human data and correctly assesses the Shopping Ad's true (and lower) performance. You optimize or pause it. |
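The filtering step in the bot scenario can be sketched as a simple pre-processing pass. This is a hypothetical illustration, not DataCops' actual implementation; the `is_bot` flag stands in for whatever combination of fraud signals a real system would derive:

```python
def clean_events(events):
    """Drop events flagged as non-human before they reach attribution.

    `events` is a list of dicts; `is_bot` is a hypothetical flag that a
    real fraud-detection system would compute from many signals
    (IP reputation, behavior patterns, device fingerprints, etc.).
    """
    return [e for e in events if not e.get("is_bot", False)]

events = [
    {"campaign": "Shopping Ad", "type": "click", "is_bot": True},
    {"campaign": "Shopping Ad", "type": "conversion", "is_bot": True},
    {"campaign": "Branded Search", "type": "click", "is_bot": False},
    {"campaign": "Branded Search", "type": "conversion"},
]
human = clean_events(events)
print(len(human))  # 2 — the bot click and fake conversion never reach DDA
```

The point is where the filter sits: before attribution, not after. Once polluted events have trained the model, no downstream report can fully undo the damage.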
This is where a first-party data integrity solution becomes essential. Platforms like DataCops are designed to solve this exact problem. By serving tracking scripts from your own domain, they bypass ITP and ad blockers, ensuring you capture the full customer journey. Simultaneously, their advanced fraud detection filters out bot and other non-human traffic.
The result is a clean, complete, and trustworthy dataset. When you feed this high-integrity data into Google's DDA, you empower the algorithm to work as intended. You are no longer asking it to find patterns in a fragmented and polluted dataset; you are giving it the ground truth. For more on this, our guide to first-party data is a critical read. [Hub content link]
Choosing an attribution model is one of the most strategic decisions a digital marketer can make. It defines what you value, dictates where you invest, and ultimately shapes the growth trajectory of your business. For too long, we have been content with the simple but deeply flawed logic of Last-Click, optimizing for a fraction of the customer journey while ignoring the rest.
The move toward more sophisticated models like Data-Driven Attribution is a massive leap forward. But these powerful algorithms are not a panacea. They are a reflection of the data they are fed. Garbage in, garbage out.
The future of successful advertising is not about finding the perfect model in a vacuum. It is about building a resilient and trustworthy data foundation that captures the complete, human customer journey. It's about owning your data pipeline so you can feed the machine the truth. Only then can you move beyond simply measuring clicks and start truly understanding your customers.