An up-to-date guide to Google’s Privacy Sandbox
A lot of opinion pieces, industry working groups, and proposals emerged last year in the aftermath of Google’s announcement about deprecating third-party cookies by 2022.
For some background, we recently published a primer on third-party cookies, which covers how they are used in ad tech, how different browsers treat them, their correlation with publisher revenues, and the solutions being proposed in response to the development.
One of those solutions, The Privacy Sandbox, is Google's own proposal, aimed at satisfying third-party use cases without using third-party cookies, and it has received a lot of industry attention. Since the project is still in development, there's a lot of confusion around how it will work.
The confusion is further compounded by the bird-themed acronyms that are flying around (pun intended), like TURTLEDOVE, SPARROW, PARROT, and DOVEKEY. In this article, we’ll cover what we know about The Privacy Sandbox so far, how it intends to meet various ad tech use cases, what those weird acronyms mean, and what early test results say about its effectiveness.
What is The Privacy Sandbox?
The Privacy Sandbox is a proposed browser-based ad platform and Google’s attempt to preserve the ad-supported model of the web, while limiting intrusive forms of cross-site tracking that rely on Personally Identifiable Information (PII), third-party cookies, fingerprinting, cache inspection, link decoration, and network tracking.
In engineering terms, a “sandbox” is a protected environment. In this context, it means that the key principle behind The Privacy Sandbox is that a user’s personal information should be protected and not shared in any way that allows the user to be identified across sites.
But how can advertisers target users if they are not able to identify them?
According to Google, the answer lies in using a set of privacy-preserving, browser-based application programming interfaces (APIs). These browser APIs anonymize and aggregate user data, so that no individual user can be identified. At the same time, they allow advertisers to use this data to enable various ad-serving use cases, including ad selection, interest-based targeting, conversion measurement, and remarketing.
How will it work?
Before diving into the proposals and APIs that are part of The Privacy Sandbox, it might be useful to understand the different use cases that it is trying to meet. Publishers, advertisers, and users have certain expectations from any kind of ad platform, and the Privacy Sandbox aims to:
- Combat fingerprinting, spam, ad fraud, and denial-of-service attacks
- Improve IP address security by limiting access to it
- Allow advertisers to select the right ad for the right user
- Give advertisers the means to measure ad performance
- Preserve the privacy and identity of individual users
Here are the components proposed within The Privacy Sandbox:
Trust tokens
Malicious bots that mimic human behavior are widely used to perpetrate spam, fraud, and denial-of-service attacks, and ad fraud carries a huge cost for both advertisers and publishers. Trust tokens are cryptographic tokens that can be issued to trusted users; these tokens can then be stored locally in the browser and used in other contexts to prove the user's authenticity. One way to think about it is as a universal CAPTCHA authentication that is shared between websites.
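The issue-then-redeem flow can be sketched in a few lines. This is a deliberately simplified toy model: the real Trust Token API uses blind signatures so that the issuer cannot link a token's issuance to its later redemption, and that blinding step is omitted here.

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held only by the token issuer

def issue_token():
    """Issuer hands a trusted user an unforgeable token (nonce + signature)."""
    nonce = secrets.token_bytes(16)
    sig = hmac.new(ISSUER_KEY, nonce, hashlib.sha256).digest()
    return nonce, sig

def redeem_token(nonce: bytes, sig: bytes) -> bool:
    """In another context, the issuer confirms the presented token is genuine."""
    expected = hmac.new(ISSUER_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

nonce, sig = issue_token()
assert redeem_token(nonce, sig)               # a genuine token verifies
assert not redeem_token(nonce, b"\x00" * 32)  # a forged signature does not
```

The key property is that the token proves "a trusted issuer vouched for this user" without carrying any identity of its own.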
Privacy Budget
As third-party data becomes increasingly scarce, alternative techniques to identify and track individual users have continued to evolve. Fingerprinting is an attempt to identify individual users by building a unique profile from hardware and software signals. The Privacy Budget proposal aims to limit the amount of information a website can glean from browser APIs or other sources (such as HTTP headers), so that the information typically used to build a fingerprint profile is sufficiently throttled to curb the practice.
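One way to picture the idea is as a per-site entropy ledger: each fingerprinting surface "costs" some number of identifying bits, and once a site exceeds its budget, further requests are denied. The surface names, entropy costs, and 15-bit limit below are made-up illustrative numbers, not Chrome's actual accounting, which has not been specified.

```python
# Illustrative entropy costs per fingerprinting surface (hypothetical values)
SURFACE_BITS = {
    "user_agent": 10.0,
    "screen_resolution": 4.8,
    "installed_fonts": 13.9,
    "timezone": 3.0,
}

class PrivacyBudget:
    """Tracks how many identifying bits a single site has consumed."""

    def __init__(self, limit_bits: float = 15.0):
        self.limit = limit_bits
        self.spent = set()

    def request(self, surface: str) -> bool:
        if surface in self.spent:
            return True  # re-reading a surface reveals nothing new
        used = sum(SURFACE_BITS[s] for s in self.spent)
        if used + SURFACE_BITS[surface] > self.limit:
            return False  # over budget: deny (or serve a coarsened value)
        self.spent.add(surface)
        return True

budget = PrivacyBudget(limit_bits=15.0)
assert budget.request("user_agent")           # 10.0 bits spent, allowed
assert budget.request("screen_resolution")    # 14.8 bits spent, allowed
assert not budget.request("installed_fonts")  # would exceed the budget, denied
```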
Willful IP blindness
This will allow websites to “blind” themselves to IP addresses. A user’s IP address is their public address on the internet and is dynamically assigned by the network in most cases. However, even dynamic IP addresses can remain stable over time, creating the technical conditions that make it easy to build fingerprint profiles. By using willful IP blindness, websites can preserve their Privacy Budget, and in the long term, it will help curb the use of IP in user identification.
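The proposal itself is about servers verifiably committing not to use IP addresses for tracking. As a rough illustration of what "blinding" could mean in practice, a service might truncate addresses before they are ever logged. The function below is a hypothetical sketch, not part of any specified API:

```python
def blind_ip(ip: str) -> str:
    """Drop the host-identifying half of an IPv4 address before storing it,
    so the logged value covers a large group of users rather than one."""
    octets = ip.split(".")
    return ".".join(octets[:2] + ["0", "0"])

assert blind_ip("203.0.113.77") == "203.0.0.0"
```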
Reporting APIs
The reporting APIs are intended to allow advertisers to measure ad performance without linking user identities across sites or accessing individual browsing history. They consist of two APIs:
- The Conversion Measurement API allows advertisers to learn, in a privacy-preserving manner, which ad clicks later turned into conversions, using limited impression data
- The Aggregate Reporting API supports a variety of use cases related to ad performance measurement, including view-through conversions, brand lift, and reach measurement
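Two ideas recur in these proposals: cap how much data a single conversion report can carry (and add noise so no report is fully trustworthy), and only release aggregate counts large enough that no individual stands out. The 3-bit cap, noise rate, and threshold below are illustrative choices, not the APIs' specified parameters:

```python
import random

def noisy_conversion_value(true_value: int, rng: random.Random,
                           noise_prob: float = 0.05) -> int:
    """Report at most 3 bits of conversion metadata, occasionally replaced
    by a random value to give every individual report plausible deniability."""
    if rng.random() < noise_prob:
        return rng.randrange(8)
    return true_value % 8  # metadata capped at 3 bits (values 0-7)

def aggregate_report(counts: dict, min_count: int = 100) -> dict:
    """Only release buckets with enough users that no one is identifiable."""
    return {bucket: n for bucket, n in counts.items() if n >= min_count}

# A large campaign bucket is reported; a tiny one is suppressed.
assert aggregate_report({"campaign-a": 150, "campaign-b": 3}) == {"campaign-a": 150}
```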
First-Party Sets
This allows domains owned by one entity to declare themselves as belonging to the same first party. Having this functionality will allow companies that own multiple domains to track users across their own websites, as a valid exception to the general rule of limiting cross-site tracking.
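Conceptually, the browser checks two domains against a declared set before deciding whether to treat them as one first party. The sketch below uses a hypothetical in-memory declaration and made-up domain names; the actual proposal has the browser verify set membership via manifests published by the sites themselves.

```python
# Hypothetical set declaration (illustrative domains, not a real manifest)
FIRST_PARTY_SETS = [
    {"owner": "example.com", "members": ["example.co.uk", "example-cdn.net"]},
]

def same_first_party(domain_a: str, domain_b: str) -> bool:
    """True when both domains belong to one declared set (or are identical)."""
    for fps in FIRST_PARTY_SETS:
        group = {fps["owner"], *fps["members"]}
        if domain_a in group and domain_b in group:
            return True
    return domain_a == domain_b

assert same_first_party("example.com", "example.co.uk")   # same declared set
assert not same_first_party("example.com", "other.org")   # unrelated domains
```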
Federated Learning of Cohorts (FLoC)
This is the main candidate that Google is proposing as a replacement for third-party cookie based behavioral targeting. The FLoC API generates clusters of users with similar interests, called “cohorts”, using machine learning. This is done at the individual browser level, not by a third-party. Advertisers can then select and target these cohorts based on the aggregated browsing behavior associated with them, and as a result, individual privacy is preserved.
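Google's FLoC prototype work described using a locality-sensitive hash such as SimHash to turn a browsing history into a short cohort id, so that similar histories land in the same cohort. The sketch below is a bare-bones SimHash over visited domains; the real system also enforces minimum cohort sizes and filters sensitive categories, which this omits:

```python
import hashlib

def simhash_cohort(visited_domains: list, bits: int = 8) -> int:
    """Each visited domain 'votes' on each bit of the cohort id; histories
    that share many domains tend to produce the same id."""
    tally = [0] * bits
    for domain in visited_domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:4], "big")
        for i in range(bits):
            tally[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if tally[i] > 0)

# Identical histories always map to the same cohort id (0-255 here).
a = simhash_cohort(["news.example", "sports.example"])
b = simhash_cohort(["news.example", "sports.example"])
assert a == b and 0 <= a < 256
```

Because only the small cohort id leaves the browser, advertisers can target a cohort without ever seeing the browsing history that produced it.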
Meet the birds of Privacy Sandbox
All the bird-themed acronyms that have been discussed in the context of The Privacy Sandbox are in fact additional proposals submitted in response to TURTLEDOVE, Google's proposal for privacy-preserving remarketing: the browser stores the interest groups a user belongs to and runs the ad auction locally, on the device, so that a user's interests are never joined with their identity on any server.
Criteo submitted SPARROW (“Secure Private Advertising Remotely Run On Webserver”) to the W3C, which builds upon TURTLEDOVE but suggests moving the auction outside the browser to an external, third-party server called the “Gatekeeper”. The proposal is intended to give advertisers more control over campaigns, ad safety, brand safety, and improve transparency in billings, while upholding all the key objectives related to user privacy outlined by TURTLEDOVE.
Then came another proposal called DOVEKEY, also from Google but this time from the Ads team. It aims to reduce the complexity of previous proposals by suggesting the use of key-value pairs to return bids instead of undertaking more complex computations within the browser. It also replaces the Gatekeeper proposed by Criteo with a key-value (KV) server. This approach, however, will likely result in an explosion in key-value pairs to be maintained by publishers.
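DOVEKEY's core simplification can be sketched as a lookup table: contextual signals plus an interest group form a key, and the KV server returns a pre-computed bid instead of the browser running bidding logic itself. The keys, values, and first-price auction below are all hypothetical illustrations:

```python
from typing import Optional

# Toy KV server: (page context, interest group) -> pre-computed bid in dollars
KV_SERVER = {
    ("news.example", "running-shoes"): 2.40,
    ("news.example", "wireless-headphones"): 1.10,
}

def fetch_bid(page_context: str, interest_group: str) -> float:
    """The browser looks up a bid rather than computing one locally."""
    return KV_SERVER.get((page_context, interest_group), 0.0)

def run_auction(page_context: str, interest_groups: list) -> Optional[str]:
    """Simplified first-price auction over the fetched bids."""
    bids = {g: fetch_bid(page_context, g) for g in interest_groups}
    winner = max(bids, key=bids.get)
    return winner if bids[winner] > 0 else None

assert run_auction("news.example",
                   ["running-shoes", "wireless-headphones"]) == "running-shoes"
```

The table also makes the scaling concern concrete: every context-by-interest-group combination needs its own key, which is why the proposal risks an explosion in key-value pairs.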
Finally, Magnite submitted PARROT (The Publisher Auction Responsibility Retention Revision of TurtleDove), which aims to uphold the privacy tenets of TURTLEDOVE but move auction decisioning back to where it currently belongs: the publisher side. Using third-party iFrames called Fenced Frames, it gives publishers and their systems (SSPs/ad servers) back the job of weighing all the factors necessary to determine auction outcomes. In simple terms, PARROT is an attempt to re-create header bidding under the framework of The Privacy Sandbox.
Early performance results from testing
Most of the discussion around The Privacy Sandbox in 2020 was focused on proposals, responses to those proposals, and counter-responses to those responses. Not having seen any part of the technology in action, advertisers and publishers were obviously skeptical of what the new browser-based experimental ad platform could achieve in the real world.
In October last year, Google finally released some evidence demonstrating the effectiveness of The Privacy Sandbox after a few months of testing. These results were posted on GitHub and analyzed the performance of cohorts that are part of the FLoC API.
FLoCs were able to generate a 350% improvement in recall and a 70% increase in precision over a random assignment of users to cohorts. While that is impressive, it's important to note that effectiveness was measured against random cohorts, so no accurate comparison can be made with existing third-party cookie based targeting.
Although other members of the ad tech community have been participating in discussions about The Privacy Sandbox, there are concerns that the development is too slow and that their inputs are not being sufficiently addressed by Chrome representatives in the W3C working group.
The Electronic Frontier Foundation (EFF) has said that many parts of The Privacy Sandbox just don’t work. “FLoC would use Chrome users’ browsing history to do clustering. This is, in a word, bad for privacy. A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate,” writes Bennett Cyphers on the EFF website.
Meanwhile, Google is pressing forward with testing other components of The Privacy Sandbox, namely Trust Tokens and Click Conversion Measurement API.
CMA’s investigation of The Privacy Sandbox
On 8 January 2021, the UK's Competition and Markets Authority (CMA) launched an investigation into The Privacy Sandbox, to discover whether killing third-party cookies and transferring key auction functionality to Chrome would further strengthen Google's already dominant position in the market at the expense of competitors.
“Google’s Privacy Sandbox proposals will potentially have a very significant impact on publishers like newspapers, and the digital advertising market,” Andrea Coscelli, Chief Executive of the CMA, said in the press release. “But there are also privacy concerns to consider, which is why we will continue to work with the ICO as we progress this investigation, while also engaging directly with Google and other market participants about our concerns.”
One of the complaints cited by the CMA comes from Marketers for the Open Web. James Rosewell, the director of the coalition, has said: “The Privacy Sandbox would create a Google-owned walled garden that would close down the competitive, vibrant open web. Providing more directly identifiable, personal information to Google does not protect anyone’s privacy. We believe that the CMA’s investigation will confirm this and save the web for future generations.”
The CMA has not yet reached any conclusions, however, and has stated that it is approaching the investigation with an open mind, engaging with Google and other market participants as it proceeds.
What should publishers do?
Even though Google has made significant progress on The Privacy Sandbox project since launching it in August 2019, the road to replacing third-party cookies is long and untested.
It is encouraging, though, that the first bits of The Privacy Sandbox have already started landing in Chromium and Google Chrome Canary. Users interested in trying The Privacy Sandbox can do so by following these steps:
- Load chrome://flags in the Google Chrome address bar.
- Search for “privacy sandbox” using the search field at the top.
- Set Privacy Sandbox Settings to Enabled.
- Restart Google Chrome.
For now, there isn't a lot for publishers to actually do because a functioning model of The Privacy Sandbox doesn't exist yet. In the absence of that, publishers should seek ways in which they can provide feedback and contribute to the development of The Privacy Sandbox.
“As always, we encourage you to give feedback on the web standards community proposals via GitHub and make sure they address your needs. And if they don’t, file issues through GitHub or email the W3C group. If you rely on the web for your business, please ensure your technology vendors engage in this process and that the trade groups who represent your interests are actively engaged,” Justin Schuh, Director, Chrome Engineering, wrote in a blog post.
“Publishers, advertisers, and independent ad tech must work together to test these solutions and provide real feedback to Chrome. If the solutions don’t work for the open web, we need to make it abundantly clear to the world that Chrome needs to go back to the drawing board,” Paul Bannister, Chief Strategy Officer at CafeMedia wrote in an AdExchanger column. “Right now, The Privacy Sandbox will make the web a less attractive platform for advertisers, which will push even more of their spending to the walled gardens. But how the proposals are implemented next year will be a huge indicator as to whether The Privacy Sandbox can work for the open web.”
Note: We will continue updating this guide about The Privacy Sandbox as additional information about its development, testing, and release becomes available.