Facebook and Google data breaches: Why this time is different

Andrew Munro 10 October 2018 NEWS

Much of the regulatory foundation for self-sovereign identity will be laid in the next 12 months.

On Tuesday afternoon, 25 September, Facebook engineers discovered a security issue affecting 50 million accounts. On Friday morning, 28 September, Facebook publicly announced the breach. Later that day, on Friday afternoon, two people filed a class action lawsuit against Facebook on behalf of everyone in the United States whose personal information was compromised.

That someone had a lawsuit loaded and ready to go on the very same day might show how readily people expect their data to be compromised and how expensive this kind of data mismanagement might get in the future. At the same time, European regulators are keenly watching how Facebook works its way through these issues, ready to cudgel the tech company with General Data Protection Regulation (GDPR) notices and fines if it's found to have misstepped.

Google also stepped into the spotlight on 8 October when the Wall Street Journal reported that it experienced a serious data breach back in March, but neglected to notify any of the approximately half a million users who might have been affected.

Both Google and Facebook have a long history of questionable data handling, but this time might be different, thanks to the immediacy of the lawsuits, the prospect of hefty GDPR fines, the growing awareness of how valuable personal information can be and the feasibility of self-sovereign identity on the relatively near horizon.



Self-sovereign identity

Self-sovereign identity is the idea that everyone should have control of their own personal data. It's typically envisioned as a blockchain solution.

But rather than being an empty promise buried somewhere in the terms and conditions, self-sovereign identity is a tangibly different way of doing things. People can start controlling their own personal data directly, approving or denying requests to access that information on a case-by-case basis.

And when someone decides to provide information, it only needs to be the relevant data points. For example, someone might provide a digital token that proves they've been verified as over 18, rather than handing over their full driver's licence.
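To make the idea concrete, here's a minimal sketch in Python of how such a token might work, using an ordinary Ed25519 signature from the cryptography package. The claim format and the issue_claim/verify_claim helpers are illustrative assumptions rather than any particular self-sovereign identity standard; real systems layer revocation, holder binding and zero-knowledge proofs on top.

```python
# A minimal sketch of selective disclosure: a trusted issuer signs only
# the fact "age_over_18", so the holder never reveals their birth date.
# The claim format and helper names are illustrative, not a real standard.
import base64
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g. a licensing authority) holds a signing key and
# publishes the matching public key for anyone to verify against.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def issue_claim(subject: str, claim: str, value: bool) -> bytes:
    """Sign a single data point about the subject - nothing more."""
    payload = json.dumps({"sub": subject, "claim": claim, "value": value}).encode()
    signature = issuer_key.sign(payload)
    return base64.b64encode(payload) + b"." + base64.b64encode(signature)

def verify_claim(token: bytes) -> dict:
    """A verifier (e.g. an online store) checks the issuer's signature.

    Raises cryptography.exceptions.InvalidSignature if the token is forged.
    """
    payload_b64, sig_b64 = token.split(b".")
    payload = base64.b64decode(payload_b64)
    issuer_pub.verify(base64.b64decode(sig_b64), payload)
    return json.loads(payload)

# The holder presents only the over-18 claim - no name, address or birth date.
token = issue_claim("did:example:alice", "age_over_18", True)
print(verify_claim(token))  # {'sub': 'did:example:alice', 'claim': 'age_over_18', 'value': True}
```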

Self-sovereign identity might be a win-win, giving people back control of their personal information while freeing businesses from the risks involved with handling people's data. These risks, as the Facebook breach demonstrates, are very real and potentially very expensive.

But it might not be that simple for Facebook or other tech giants that have fattened up quite nicely on a steady diet of people's personal data.

Life in the data mines

"I don't know how we got so far away from fundamental property rights, or why companies think it's okay to take whatever data they want off a person's phone or computer without permission," said Ari Scharg to finder.

Scharg is a partner at the Edelson PC law firm, which specialises in privacy cases, and also sits on the board of directors at the Digital Privacy Alliance. The thing about Facebook, Scharg says, is that personal data is the key to its value.

Though Facebook started out as a social media company, its business model shifted over the years. It's now considered one of the largest data mining operations on the planet.

"About four, five years ago, Facebook began conducting experiments on its users to show that it had the ability to change their emotion and even influence their behaviour.

"Some of these studies were published in peer reviewed journals and demonstrated that Facebook can influence, in a major way, how their users behave and act. Obviously, from a marketing and advertising perspective, that's a powerful tool, which is why Facebook has shifted its business model.

The emotional manipulation study, titled "Experimental evidence of massive-scale emotional contagion through social networks", was covered under Facebook's terms and conditions, the authors noted, because they didn't actually view anything which would have violated users' privacy settings.

Agreement with Facebook's terms and conditions, the study's authors said, constituted informed consent for the research.

The tip of the iceberg

"I think it's important to recognise that we found out about Cambridge Analytica because a whistleblower came forward. How many more "Cambridge Analytica's are out there?"

These are the companies that might access in-depth information on you, courtesy of Facebook, Google or others, to use for their own ends.

You don't have to look very far to find them either. Just look at Exactis, which held detailed information on some 340 million people in an unsecured online database, including names, addresses, children and their ages, information on pets, interests and much more. Exactis didn't collect that data itself.

There's a decent chance your data is already out there, in just as much raw detail as was found in the Exactis database, especially if you've ever consented to a Facebook-connected app using any of your information.

But even if you haven't, it might still be out there thanks to a deliberate quirk that let people consent to their friends' data being harvested.

Beyond the sheer lack of consent, there might have been nothing but the honour system (aka terms and conditions) to prevent apps from exfiltrating the data, storing it themselves and doing whatever they wanted with it. This is exactly what Cambridge Analytica and many others did. The data might have gone to companies hoping to bolster targeted advertising services, as in the case of Exactis, or it might have been sold and further propagated on the dark web for a quick buck.

The parallels to Google's recently revealed major data leak are uncanny. In that case, it was similarly made public only by a whistleblower, and similarly revolved around a bug which let developers gain access to the private data of the "friends of friends" who never consented to it.

If this is the first time you're hearing about the sheer scope of Facebook's leaky data, it might be because in many cases Facebook never notified its users – much as Google didn't notify its users until someone leaked the story to the Wall Street Journal.

And without GDPR, which requires Facebook to notify authorities of a data breach within 72 hours of discovering it and to tell users what happened, it's quite possible that the world still wouldn't know that anything happened – only that millions of people had spontaneously been logged out of Facebook for some reason.

Why was I logged out of Facebook?

If you were logged out of Facebook after this breach, it's probably because you're among the approximately 90 million people who used the "View As" feature within the last year.

The feature contained a vulnerability that, when exploited in tandem with other vulnerabilities, allowed hackers to steal your "access tokens". These access tokens are the digital credentials that let you remain logged into Facebook and Facebook-connected apps.

Facebook says it estimates that about 50 million people actually had their access tokens stolen, while a further 40 million were potentially vulnerable.

In more detail, this particular breach was a sophisticated, multi-pronged attack that exploited separate vulnerabilities in the View As feature and in a new version of the Facebook video uploader.

The two separate bugs were as follows:

  • The View As feature, which should have been view-only, actually allowed people to enter data in one specific place – the message box that lets people wish friends a happy birthday.
  • The new version of the Facebook video uploader incorrectly generated an access token.

So when someone used the video uploader within the birthday message box while in View As mode, it would generate an access token for the user they were looking up.
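To see why chaining the two bugs mattered, here's a deliberately simplified sketch of that class of logic error in Python. Every name in it is hypothetical – it illustrates a token being minted for the wrong principal, not Facebook's actual code.

```python
# Hypothetical sketch of the bug class behind the View As token leak.
# All names and structures are illustrative.

def issue_access_token(user_id: str) -> str:
    """Mint a session token scoped to the given user (stub)."""
    return f"token-for-{user_id}"

def render_video_uploader(viewer_id: str, profile_id: str, view_as: bool) -> dict:
    """Render the uploader embedded in the birthday message box.

    The flaw: in View As mode, the token is minted for the profile being
    impersonated (profile_id) rather than the person actually logged in.
    """
    token_owner = profile_id if view_as else viewer_id  # BUG: should always be viewer_id
    return {"upload_token": issue_access_token(token_owner)}

# An attacker viewing their own profile "as" the victim walks away with
# a token scoped to the victim's account:
widget = render_video_uploader(viewer_id="attacker", profile_id="victim", view_as=True)
assert widget["upload_token"] == "token-for-victim"
```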

That's the point

What makes the Cambridge Analytica scandal particularly unsettling, Scharg said, is that Facebook could have put a stop to it in 2016, when it first learned that Cambridge Analytica was harvesting user data without permission. Unfortunately, Facebook allowed Cambridge Analytica's app to continue operating on its platform without notifying users of the violation.


And that's the point.

Facebook is a multi-billion dollar company because it's a data mining operation. Preventing Cambridge Analytica-style abuses isn't part of its business model because those abuses are an incredible source of revenue. The killer product Facebook is selling is the ability to manipulate people's emotions and behaviour, and that means giving its customers access to its users' data.

"It really encouraged that because Facebook, like I said, is not a social media platform anymore. It's a data mining operation and it wanted to prove that its users could be influenced and manipulated," Scharg said. "That's the whole point of this platform at this point... giving candidates and companies a platform where they can influence and manipulate users into doing something they want them to do."

Facebook started scrambling to regain user trust after news of Cambridge Analytica broke, but its efforts have been greeted with some scepticism.

What's the harm?

In Facebook's defence, advertising is meant to tickle people in just the right way to influence their decisions, and using data to do that job better is just common sense. So where do you draw the line between appropriate use of data-driven advertising and Facebook's supposedly unethical use of data-driven tools?

Besides, what's the harm? It might be ethically troubling, but is anyone actually being harmed by Facebook's activity?

These are questions that Scharg, as a lawyer, has given a lot of thought to. Legally, a lawsuit will typically have to prove harm, and despite all the disquietude, it's hard to say that Facebook – or any other data-harvesting tech company – is actually harming people.

"There are some cases filed by consumers in the US against Facebook," Scharg observes, "but without resulting financial injury, it can be difficult to sue a company over misuse of personal information.

That might be a stumbling block in the lawsuit filed after the most recent data breach. The suit points out that identity theft is a very real crime that brings real injury to people, that it's exacerbated by data leaks and that Facebook may have misrepresented its level of security. But it might not be able to prove injury, or prove that injuries suffered by the plaintiffs (such as being the victim of identity theft) are the direct result of Facebook's negligence.

Legally, it might be very difficult to pull off a successful lawsuit against a company that's been leaking data. With so much data circulating, and so few real consequences for leaks, it might come as no surprise that leaks have become so common.

That's one of the reasons GDPR was introduced: to fill the holes left by the ever-increasing commercial use of consumer data. The US could benefit from a similar law, Scharg says, to require companies to be transparent with their users and bring more clarity around what a platform can and cannot do with someone's information, "but we're not quite there yet."

Of course, it would be easier to forgive Facebook's transgressions on those grounds if it, along with Google and other tech companies, hadn't been so consistently lobbying against new laws that would bring the needed clarity.

Industry pushback

"In Illinois, there were two privacy bills introduced in 2017. One was a bill called the Geolocation Privacy Protection Act, which prohibited developers from tracking your physical movements without consent." Scharg recounts.

"The other was the Illinois Right To Know Act, which would have given consumers the right to learn what personal information companies were collecting from them and who they were sharing it with. The bill was designed to increase transparency and empower consumers to take control of their privacy and identity.

"The amount of opposition and backlash to these bills was intense. Internet industry lobbying groups were flying in from both coasts to fight it.

"Some of the criticism of the bill was extremely far fetched," Scharg recalls. "Some argued that these laws would make the internet less safe and that consumers would no longer be able to use the internet anonymously. But the main talking points were that privacy measures 'will bring the economy to a screeching halt,' and 'stifle innovation.' These are difficult arguments to make with a straight face, when the laws would simply require tech companies to be transparent with users about how they are going to use their personal information."

The geolocation bill, incidentally, was vetoed on the grounds that it would stifle innovation and cost jobs. And the industry lobbying groups that sprang up right after the bills were introduced disappeared after the veto, Scharg says. He suspects they were created specifically to fight those two bills.

Transparently having your cake and privately eating it

It's really, really hard to maintain bulletproof privacy and data security on the Internet, and it's clear that leveraging people's data en masse is an extremely effective way of doing a great many things. The industry's arguments might be overblown, but you really can do a lot of good with enough data, and it would be a shame to take that opportunity away.

A pragmatic goal is to push for transparency, Scharg says.

"Towards the beginning of 2017, Uber updated their app," Scharg explained. The update in question was that it would be collecting more information from its riders.

"Towards the beginning of 2017, Uber updated its privacy policy to explain that the app would collect location information for up to 5 minutes after riders were dropped off," Scharg said. "Then, the next time a user opened the app, Uber displayed a pop up notice explaining the new policy in plain English. Users were not allowed to continue using the app unless they accepted the new policy."

"Most privacy advocates took the position that the type of tracking and information collection was unreasonable and not necessary. There were calls on Uber to roll it back.

"My take on it was very different. From my perspective, Uber was transparent about what data they were going to collect and what they were going to use it for. At that point, consumers had the opportunity to say to themselves, 'I'm not comfortable with this. I'm going to get a Lyft or taxi instead.' At that point, consumers had the ability to make an informed choice. That is transparency. What more can you ask for from a company?"

ETA to transparency: <12 months

The odds of more transparency bills being introduced sometime in the next year are pretty good, Scharg reckons.

"We are working with legislators around the country – this is something that policymakers are focused on," he said. "I think within the next year or so, you will see a large number of states, both Democrat and Republican, embracing internet transparency as a matter of policy."

A large part of the reason might be that there's a huge amount of support for these measures from within the tech world, where all those arguments about hurting innovation or costing jobs tend to ring hollow.

Contrary to the lobbyists' talking points, Scharg notes that the tendency of Silicon Valley's giants to abuse people's data might be inhibiting innovation and harming the tech industry as a whole. And the industry knows it.


"We've had a lot of tech companies reaching out to us, supporting us and joining the Digital Privacy Alliance," Scharg says.

Their reasoning is twofold, and both reasons focus on the idea that promoting transparency is just plain good business for smaller tech companies.

First, consumers are losing trust in tech, apps and the online space in general, because people feel as though tech companies are hiding their actual data policies from them. So the pool of customers willing to pay for online services and apps is shrinking.

The second reason is that smaller companies see transparency and ethical data practices as valuable points of difference they can leverage to win customers.

In this way, more transparency lets people make more informed decisions and might promote more diversity in the market.

"For local companies in Chicago and all around the world, it's very, very difficult to compete with the Silicon Valley mega rich companies. These tech companies really transparency is the equaliser that will level the playing field.

Making data better

The industry doesn't have to lose out, Scharg stresses. Leveraging data can produce some great results, but it has to be done transparently and people have to give informed consent. He sees transparency as a vital, practical and perfectly feasible next step and as something of a consumer education initiative to lay the rails for more advanced solutions down the line.

The first step is to promote transparency and to make people more aware of the trade-offs they might experience when choosing certain apps and services. Do you want a free app that sells your data, or do you want to use a paid app that maintains privacy?

People should have the right to make that choice, and in the process, learn more about how much different data points are worth and what makes certain data more or less valuable.

Self-sovereign identity and data, related blockchain solutions and even more exciting developments come next, as part of the recognition that data has real, tangible value and is not something that should be freely taken, packaged and re-sold. Rather, it's an asset that the data owner – the individual whose information is at stake – should be given full control over.

"Transparency measures will give consumers the tools they need to become aware of each company's actual data policies and practices," Scharg says. "Platforms like ShareRoot or MediaConsent [privacy and data ownership tools] serve a different function; they take things to another level and say 'now we all understand that personal information has value'."

And much like transparency itself can bring real business benefits, this next step doesn't have to come at a cost to innovation either, despite what lobbyists will inevitably start saying when the time comes.

"This type of platform really opens up opportunities for consumers that want to voluntarily share or sell their personal data to companies on their own terms," Scharg emphasises.

At this point, it's obvious that data has enormous value, Scharg says. The value of your data (plus admittedly great search engines, marketing, UI and similar) is what turned Facebook and Google into multi-billion dollar companies.

"The platform demonstrates that personal information has value. It's a model for the future."


