Fair warning: the post that follows is fairly opinionated.
When GDPR fines are discussed, it is typically because of the numbers. By now, the legal community at large is familiar with the infamous GDPR maximums of €20M or 4% of annual turnover, whichever is higher.
But while money is obviously a key aspect of fines, there are other important and interesting aspects that sometimes fly under the radar. So, here is a short list of fines that stand out in one way or another. Some were crazy expensive, some were a slap on the wrist, some were outright dumb, and all of them contain a valuable lesson in one way or another.
- The largest fines
- The most impactful fine
- The most underwhelming fine
- The dumbest fine
- The looming threat
Let’s dive in!
The largest fines
Let’s start with the best-known entry on the list: Meta’s 2023 €1.2 billion fine over data transfers.
This record fine came in the aftermath of the landmark Schrems II decision. The decision was the result of a reeeally long legal saga involving Facebook (now Meta), Max Schrems, the Irish privacy watchdog, the European Data Protection Board, and even the EU Court of Justice.
Amazon is second on the list with its 2021 €746M fine over targeted advertising.
The reasoning behind the fine is not public yet due to some quirks in Luxembourgish law. By the time it is published, it may tell us little we don’t already know from newer decisions on data monetization, whether that’s the fines over Meta’s advertising (see below) or the landmark Bundeskartellamt ruling.
The most impactful fine
Meta ate two more major fines in 2023 over personalized advertising on Facebook and Instagram, for a total of €390M.
Yup, that was the same year as the €1.2B fine. 2023 was not kind to Meta.
These fines were about the legal bases for Meta’s targeted advertising. In other words, the cases revolved around one core question: how does Meta justify collecting and analyzing personal data for its targeted advertising?
At the time, Meta’s answer (as per its privacy policy) was that hoarding the data was necessary to perform its Terms of Service. This is something GDPR folks refer to as the “legal basis” of “contractual necessity”.
Privacy advocates were unhappy with this justification. “Contractual necessity” would allow Meta to justify just about anything, as long as it was mentioned in the Terms of Service for its platforms. It also led to the funny conclusion that Meta had a right to profile us because we, as the users of its platforms, want to see targeted advertising.
The DPC and the European Data Protection Board sided with the critics: they found that Meta did not really need to harvest the data to perform the contract and that the justification of contractual necessity did not apply.
(To be exact, the DPC wasn’t really willing to fine Meta, but the EDPB essentially forced it to take action. There was a fair amount of disagreement between enforcers over this case!)
The decisions are crucial because this “legal bases” thing is not a tiny compliance detail that Meta’s legal teams can iron out. The extraction of personal data for targeted advertising is difficult to justify under the GDPR, and that is by design. It’s a feature, not a bug.
Meta isn’t the only company to monetize personal data, either. Think of TikTok, X, and the countless “free” apps on your phone. That’s why the decisions have impacted many companies with a presence in the EU.
The legal battle over data monetization isn’t quite over yet. After the fines, Meta has been hopping from one legal basis to another, trying to keep its business model alive under the GDPR. Eventually the Court of Justice will chime in and clarify under what conditions (if at all!) pay-with-your-data business models are allowed under the GDPR.
The most underwhelming fine
Meta’s €390M fines are also my pick as the most underwhelming, for two reasons.
First, the amount of the fines looks like a slap on the wrist for the violations at hand.
The EDPB held that personalized advertising on Meta’s social platforms relied on unlawfully harvested data. The violation went on for years, involved hundreds of millions of European citizens, and generated billions in revenue for Meta. As if this wasn’t enough, eleven major data breaches took place between the complaints and the decision.
So: Meta harvested our data illegally for years, made billions off of them, and leaked them about a dozen times along the way. What more does a company need to do to get the maximum fine?
Second, the DPC left a crucial point of the cases uninvestigated.
The original complaints claimed Meta was unlawfully collecting sensitive data. These are special types of data that receive stronger protection under the GDPR, such as political beliefs, health status, and sexual orientation. Collecting these data unlawfully is one of the worst violations I can think of. And yet there was no mention of this issue in the DPC’s decisions!
This omission was called out not only by privacy advocates, but also by other privacy authorities across the EU. As impactful as the decisions were, they could have been much more impactful had the complaints been investigated in full.
To this day, Meta and countless other companies are either pretending that the data they use for profiling is not sensitive, or systematically downplaying how much of that data qualifies as sensitive. The DPC’s decisions against Meta stand out as a missed opportunity to tackle a major privacy issue.
The dumbest fine
Much like Meta, Uber also ate a big fine over data transfers: the Dutch data protection authority fined the company €290M. But there is a key difference: Uber’s fine was incredibly dumb. Not dumb as in “wrong”, mind you. Dumb as in “avoidable”.
Multinational corporations were dealing with legal uncertainty over EU-US data transfers between 2020 (Schrems II) and 2023 (US adequacy decision). So, this was definitely a broader problem.
But Uber decided to handle it in the worst way possible. Instead of doing what everyone else was doing at the time (namely, implementing Standard Contractual Clauses for data transfers), they relied on a somewhat non-mainstream interpretation of the GDPR and transferred data through a different mechanism.
There’s just one problem. Uber’s interpretation of the GDPR contradicted what all European DPAs had been saying for years. And DPAs are the guys who fine companies over data transfers.
The real question is why. Why did Uber do its own (and predictably wrong) thing instead of sticking to the de facto industry standard? What were the advantages of its compliance strategy?
I have no answer. The only conceivable upside to Uber’s strategy is that it (presumably) saved the legal staff some work. Then again, I would expect a giant worth more than one hundred billion dollars not to cut corners.
The looming threat
Over the years, Clearview AI has been fined by the Italian, Greek, and Dutch authorities over the illegal collection of personal data. By now, the fines amount to €110M.
Clearview AI is a company offering facial recognition technology to government agencies and law enforcement outside the EU. The company mainly trains its products on pictures scraped from the open Web.
So far so good. But here’s the thing: large-scale, non-consensual scraping from the Web is how Big Tech collects data for training generative AI.
It is no coincidence that OpenAI is under investigation in Italy and has come under scrutiny across the EU. In fact, the whole thing drew media attention last year when the Italian DPA shut down ChatGPT for a month.
Developers of generative AI systems largely follow the same playbook: they hoard any data they can and claim that they can sanitize them by excluding sensitive or dangerous data from the data set. So far, there is no convincing proof that such sanitization is possible, let alone on databases about as large as the Internet.
It is hard to say how things will play out with regard to scraping. DPAs are certainly more inclined to have a constructive conversation with OpenAI than with surveillance creeps like Clearview AI. But the most recent developments are not promising.
A recent EDPB document on the OpenAI case hints at a somewhat strict stance from DPAs. Furthermore, Meta recently backtracked on some of its AI policies after consultation with the Irish DPA and now asks for opt-in consent from EU users before training its AI on their data.
Reading between the lines, the Irish DPA might see opt-in consent as a non-negotiable requirement for training AI on personal data. But reliance on consent is pretty much impossible for companies that have no direct relationship whatsoever with the people whose data are being harvested (think OpenAI or Anthropic). In fact, reliance on consent might be tricky even for Meta, depending on how things play out with regard to pay-or-ok.
Bottom line: the problem of data scraping is still unsolved and may well jeopardize the future of generative AI on the EU market.
We at Simple Analytics believe in a web that is open and privacy-friendly. This is why we built our web analytics software to collect no personal data and still provide you with all the insight you need to grow your online presence. Our software is privacy-friendly, easy to learn, and comes with an innovative AI assistant.
If this sounds good to you, feel free to give Simple Analytics a spin with our full-featured free trial!