First look at MiQ’s Privacy Sandbox testing: what Chrome’s Attribution Reporting API solves, lacks, and changes

MiQ

As we started looking through the event-level reports from our testing of Chrome’s Privacy Sandbox Attribution Reporting API (ARA), one thing was immediately clear: ARA wasn’t as bad as recent industry opinion led us, and many marketers, to believe. Far from it, in fact.

From our initial analysis, we’ve drawn a few conclusions about what ARA solves, lacks, and changes for marketers and agencies. In this blog, we share our learnings so far so that you:

  • Know what to expect from implementing ARA
  • Understand why you should test ARA now, if you haven’t already started

Background

Chrome’s most recent cookie deprecation delay extends the testing period for its suite of cookieless solutions, the Privacy Sandbox, until at least early 2025. For many marketers, however, this latest delay simply pushes Sandbox testing further down their to-do lists.

To date, there’s been little published around Sandbox results and insights, outside of articles like this. Reflecting on these unknowns, we’re now sharing what we’ve learned so far from analyzing ARA’s event-level reports.

There are two types of ARA report:

Event-level reports associate individual ad clicks and views with attributed conversion events. These row-level reports can contain an auction ID, thus giving them all the same contextual richness of a cookie-based conversion log. Trade-offs of event-level reports include noise (i.e. a small number of fictional conversion events) and delays for added privacy. In theory, these reports should support granular optimization, as long as noise and reporting delays can be managed.
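
To make the "contextual richness" concrete, here is a minimal sketch of what consuming one of these reports might look like. The payload shape follows the fields Chrome documents for event-level reports (the values themselves are invented for illustration):

```python
import json

# A hypothetical event-level report payload, shaped like those Chrome POSTs
# to a reporting origin's event-attribution endpoint (values are invented).
raw_report = """
{
  "attribution_destination": "https://advertiser.example",
  "source_event_id": "412444888111012",
  "trigger_data": "4",
  "source_type": "navigation",
  "randomized_trigger_rate": 0.0024
}
"""

report = json.loads(raw_report)

# source_event_id is the adtech platform's own 64-bit ID for the impression
# or click, which is what lets the row join back to auction-level context.
# trigger_data is deliberately coarse: 3 bits (values 0-7) for clicks
# ("navigation" sources), 1 bit for views ("event" sources).
is_click = report["source_type"] == "navigation"
conversion_bucket = int(report["trigger_data"])
```

The `randomized_trigger_rate` field is how the browser discloses its noise level, which matters for the debiasing discussed later in this post.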

Summary reports provide aggregated conversion data for a more complete view of performance, including more detail about results such as cart contents and the monetary value of conversions. Summary reports have a limit (or budget) on the granularity of reporting, which constrains the richness of the data for optimization.
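
The granularity budget can be sketched as follows. In Chrome's implementation, the aggregatable contributions attached to a single source share a fixed "L1" contribution budget of 65,536, so deciding how much of it to spend on each metric (the bucket names and split below are our invented example) is an up-front design choice:

```python
# Chrome's per-source contribution budget for aggregatable (summary) reports.
CONTRIBUTION_BUDGET = 65536

# Hypothetical allocation: spend most of the budget on purchase value,
# the remainder on cart size. Finer splits mean each metric gets less
# budget, and therefore relatively more noise after aggregation.
contributions = [
    {"bucket": "purchase_value", "value": 49152},
    {"bucket": "cart_size", "value": 16384},
]

# The browser rejects contributions that would exceed the budget,
# so a sanity check like this belongs in any registration pipeline.
assert sum(c["value"] for c in contributions) <= CONTRIBUTION_BUDGET
```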

We're excited to see MiQ and the wider industry working with the Privacy Sandbox and actively testing the privacy-preserving solutions. We look forward to collaborating with all industry players as they transition to the next era of online privacy.
Hanne Tuomisto-Inch, Director of Privacy Partnerships EMEA at Google

Our learnings

What ARA solves: privacy-first optimization

In our first wave of testing, ARA reported on 84.9% of the same unique converters as cookies, with an additional 3.7% that cookies did not capture (some of which will be noise). Given that event-level reports capture the same richness of data as legacy conversion pixels, this is a highly viable optimization dataset. A few things to note:

  • Although ARA reports on a similar number of converters, it recorded only 1 conversion per user in our tests, compared to sending up to 8 conversions per user with cookies (depending on the campaign). This is partly by design - we chose to limit the campaign to 1 conversion per user in order to reduce noise (which seems to have been effective). From an optimization perspective, it’s debatable the extent to which this matters - repeat converters are potentially more valuable, but also likely less incremental for a campaign.
  • It’s impossible to know how many of the 3.7% additional converters are noise. However, even if this is as much as 3%, it is a manageable level which, because it is randomized, is highly unlikely to distort results.
  • Comparing ARA to the main alternative - using first-party data - there are some major advantages: match rates on first-party data typically range between 40% and 80%, which is lower than ARA’s. Even campaigns running in 100% logged-in environments will only capture advertiser events recorded once an email address has been captured, which likely precludes gathering high-scale data on homepage or detailed page-view activity for online performance campaigns.
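
Because the browser reports its own noise rate, randomized noise can in principle be corrected for in aggregate. The sketch below uses a simplified randomized-response estimator, not Chrome's exact noise model, purely to illustrate why a known, randomized noise rate is manageable rather than distorting:

```python
def debias_count(observed: int, n_sources: int, noise_rate: float,
                 n_output_states: int) -> float:
    """Simplified randomized-response correction (illustrative only).

    Assumes each source's output is replaced, with probability
    `noise_rate`, by a uniform draw over `n_output_states` possible
    outputs. This is a simplification of ARA's actual noise model.
    """
    # Expected number of purely noise-generated reports for one output value.
    expected_noise = noise_rate * n_sources / n_output_states
    # Invert E[observed] = (1 - p) * true + p * n / k for the true count.
    return (observed - expected_noise) / (1.0 - noise_rate)

# Hypothetical numbers: 1,000 click sources, a 0.24% noise rate, and
# 8 possible click outputs (3 bits of trigger_data).
estimate = debias_count(observed=52, n_sources=1000,
                        noise_rate=0.0024, n_output_states=8)
```

With a noise rate this low, the debiased estimate barely moves from the observed count, which is consistent with the "manageable level" argument above.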

There’s a healthy scope for ARA to improve in the coming months. If more browsers start to adopt it, it could even produce more data scale than cookies at some point in the future.

In short, ARA appears to have clear value for optimization, and dismissing it will likely place campaigns at a strategic disadvantage come 2025.

What ARA lacks: a complete measurement dataset

As mentioned, ARA event-level reports capture a high percentage of Chrome converters but, for a campaign running across multiple browsers and devices, a lower percentage of total conversions than cookies. This would be problematic if the data were used in its raw state - fewer recorded conversions means lower reported ROI. Again, there are some important things to note here:

  • We chose to only track 1 conversion per user, but there are other limitations with event-level reports that are causing data loss.
  • ARA is unavailable when Chrome is used on iOS: our results showed that this contributed to ARA being present across only about 25% of ad impressions.
  • Some data is lost for more opaque reasons. Even when ARA is running in an eligible Chrome browser, we saw 11% more converters recorded by cookies. The working hypothesis is that data loss is occurring between the conversion event happening and the reports being sent to us. To explain - we configured a 7-day reporting delay on these campaigns. During this period the user might clear their browsing data, or might have their browser closed when the reports are due to be sent. There are also limits on the total number of conversion events that can be stored in a single browser, so highly active web users might not have all their conversions recorded.
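
The 7-day reporting delay mentioned above is configured at source registration time. As a sketch, an ad server might return a response header along these lines (field names are from the ARA spec; the values are illustrative, not our production configuration):

```python
import json

# Sketch of the JSON carried in an Attribution-Reporting-Register-Source
# response header. Reports are queued until the end of the report window,
# which is exactly the period during which cleared browsing data or an
# offline browser can cause the losses described above.
register_source = {
    "source_event_id": "777000111",          # platform's ID for this ad event
    "destination": "https://advertiser.example",
    "expiry": "604800",                      # attribute conversions for 7 days
    "event_report_window": "604800",         # deliver reports after ~7 days
    "priority": "100",
}

header_value = json.dumps(register_source)
# Served as: Attribution-Reporting-Register-Source: <header_value>
```

A shorter report window would surface data sooner and reduce loss from cleared browsing data, at the cost of missing late conversions - one of the speed-versus-completeness trade-offs discussed below.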

The bottom line here is that ARA data is usable for measurement, but modeling will be required in order to accurately represent the true ROI of campaigns. Given the prevalence of modeled conversions within tools such as Campaign Manager 360, this doesn’t feel like a blocker to ARA’s adoption; ultimately a combination of summary and event-level reports will likely be needed in order to most accurately model out true campaign performance.

What ARA changes: campaign optimization best practice

With the amount of data collectable with cookies, brands and partners often resorted to a “track everything and sift out what we need later” approach. That approach doesn’t work with ARA. Reporting decisions now need to be made up front, with brands and operational teams deciding which data or conversions they’d like to prioritize in the trigger priority settings, and making calculated trade-offs between speed, accuracy, and detail.

Partners such as MiQ will need to strike the right balance between the number of conversion events that they want to receive and the amount of noise included in reports, with more conversion events leading to more noise. In our testing, we limited tracking to a single conversion event per impression or click but ARA’s trigger priority settings enable us to continue recording different types of conversion.
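
Those priority decisions are expressed at trigger registration. A sketch of the JSON an advertiser's conversion endpoint might return in its Attribution-Reporting-Register-Trigger response header (the conversion codes and priority values here are invented; in practice, filters select which entry applies to a given trigger event):

```python
import json

# With a cap of one event-level report per view source, a higher-priority
# trigger replaces a buffered lower-priority one, so the settings below
# would make purchases win out over page views.
register_trigger = {
    "event_trigger_data": [
        {"trigger_data": "1", "priority": "200"},  # hypothetical code 1 = purchase
        {"trigger_data": "0", "priority": "100"},  # hypothetical code 0 = page view
    ]
}

header_value = json.dumps(register_trigger)
# Served as: Attribution-Reporting-Register-Trigger: <header_value>
```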

Finding the right balance across these settings will become much harder once the testing period ends, when cookie-based comparisons disappear and the noise level is no longer observable. That’s arguably the most compelling reason to start testing now.

We think that this change will also support more strategic trading as brands work with partners to make considered choices around the pixels actually needed for a campaign, rather than adding a pixel to every page.

At MiQ, we’re looking forward to the improved focus that we see this change forcing upon conversion tracking, with the potential for positive knock-on effects across trading, campaign setup, and deeper brand-partner relationships.

ARA is… almost there

We think ARA will benefit the industry, the consumer, and the marketer, with more focused data, privacy-first optimization, and more strategic approaches to conversion tracking. That said, we view ARA as one of many powerful solutions which can be used in combination to deliver the best campaign outcomes.

With at least another year of testing to go, we’re expecting increased industry collaboration around Privacy Sandbox which we hope will lead to improved adoption rates, support for ARA across more device types, and better measurement of total converters leading to more accurate reporting.

Iterating and improving upon Privacy Sandbox APIs is only possible through continued testing, which marketers can and should get involved with now, for the benefit of both their campaigns and the wider industry.

What you’ll get by testing ARA with MiQ

Join our Privacy Sandbox Early-Access Testing Program for priority invitations to all upcoming Sandbox tests. You’ll receive:

  • Expert guidance on ARA implementation and testing
  • Custom reports on what conversion measurement with ARA means for your business
  • Priority access to new optimization and retargeting opportunities using ARA data

Methodology

This analysis uses data from 6 brands across 4 different markets. Conversions were measured as visits to landing and conversion pages.

Stay tuned for our ARA summary-level report analysis, where we’ll dive deeper into the available conversion data.

share: