Data Privacy Scandals and Public Policy Picking Up Speed: 2018 in Review
2018 may be remembered as the Year of the Facebook Scandal, and rightly so. The Cambridge Analytica fiasco, Mark Zuckerberg’s congressional testimony, a massive hack, and revelations of corporate smear campaigns were only the tip of the iceberg. But many more companies mishandled consumer privacy in 2018, too. From the Strava heatmap exposing military locations in January to the gigantic Marriott hack discovered in November, companies across Silicon Valley and beyond made big mistakes with consumer data this year—and lawmakers and the public have taken notice.
Tech Companies Putting Their Profits Before Your Privacy
The problem that came into focus in 2018 was not just hacks, breaches, or unauthorized bad guys breaking into systems. Instead, 2018’s worst privacy actors were the tech companies themselves, harvesting mountains of users’ data and employing flawed systems to use and share it.
Facebook’s Cambridge Analytica scandal, for example, was not a breach but the result of a feature: the 2014 version of Facebook’s Graph API was designed to collect as much user information as possible, and then share it indiscriminately with third-party developers. In a set of newly revealed emails from 2012, Mark Zuckerberg acknowledged that he knew “we leak info to developers,” but didn’t think there was enough “strategic risk” to do anything about it.
Google’s social network didn’t fare much better. The final nails in the coffin of Google+ came with two API bugs: one quietly announced in October that exposed the personal information of half a million users, and an even bigger one revealed in December. Unlike Facebook’s Cambridge Analytica problems, these bugs were unintended engineering mistakes. But they created the same risk: users’ personal information shared with third-party developers without anything resembling informed consent.
2018 also saw tech companies creep further into our wallets and our homes. Facebook and Google reportedly partnered with banks and bought financial data in secret, raising serious privacy concerns about giving companies access to yet another sensitive category of information.
Big companies made big new investments in the Internet of Things, with Facebook introducing Portal and Google introducing the Home Hub, both designed to put their manufacturers at the center of home life. Companies also gave users new reasons to question the privacy of their home assistant devices. One couple’s Amazon Alexa silently recorded one of their conversations and sent it to a colleague. And Facebook was unable to say clearly whether data collected through Portal could or would be used to target ads.
The torrent of data-related scandals this year drove new popular awareness of privacy issues. The Pew Research Center found that a whopping 74 percent of American adults had adjusted their Facebook privacy settings, taken a break from the platform, or deleted its app from their phones. More broadly, it also found that people are worried about their personal information online, and that the vast majority of American adults say it is important to them to be in control of who can get information about them.
User Privacy and the Law
Many legislators agree. 2018 was a blockbuster year for legislative action on privacy. On May 25, Europe’s General Data Protection Regulation (GDPR) took effect. The law includes some of the most ambitious privacy protections ever put into force. However, the regulation’s immediate impact has been a mixed bag. On paper, GDPR prohibits tracking unless the user has opted in. In reality, users are confronted with “consent management” pop-ups that enable “consent” with one click but erect an obstacle course for anyone who wants to refuse. A challenge moving forward is to engineer meaningful systems of consent that companies cannot undermine with evasive designs that generate consent fatigue.
Some sites, such as Facebook and Yahoo, simply deny access to users who don't agree to allow tracking, making a mockery of the idea of choice. Other organizations, like ICANN, made some privacy-positive improvements under GDPR, but did not take the opportunity to go far enough. And it remains to be seen whether the GDPR can curb the most entrenched and sophisticated trackers, including companies that currently use browser fingerprinting to sidestep users’ attempts to opt out. Worst of all, the government of Romania tried to use GDPR to force journalists to reveal their sources, underlining the importance of strong exceptions for newsgathering in any privacy legislation.
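The browser fingerprinting mentioned above works, roughly, by combining stable browser properties into an identifier that survives cookie deletion. The TypeScript sketch below illustrates the idea; the attribute set and hashing step are our illustrative assumptions, not the code of any specific tracker.

```typescript
// Minimal illustration of browser fingerprinting (assumed attribute set,
// not the code of any particular tracker). A script combines stable,
// passively readable browser properties into one string, then hashes it
// into a compact identifier.
async function browserFingerprint(): Promise<string> {
  const attributes = [
    navigator.userAgent,                                      // browser + OS
    navigator.language,                                       // preferred language
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display setup
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // time zone
    String(navigator.hardwareConcurrency),                    // CPU core count
  ].join("|");

  // Hash with the Web Crypto API so the identifier is short and stable.
  const bytes = new TextEncoder().encode(attributes);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Nothing here is stored on the user's machine, so clearing cookies or
// refusing a consent pop-up does not change the result.
```

Because the identifier is recomputed from the browser itself rather than stored on the device, conventional opt-outs leave it intact.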
In the United States, 2018 may go down as the year that government began to get serious about privacy. The deluge of privacy scandals, from Equifax to Cambridge Analytica, made room for serious privacy proposals on the legislative floor. Responding to the Equifax debacle, Vermont passed a trailblazing new law that begins to regulate data brokers. The California Consumer Privacy Act (CCPA), though far from perfect, is a good start—and there is a lot of work to be done before it goes into effect in 2020. EFF will fight to improve the law and oppose industry efforts to weaken it.
The Federal Trade Commission scheduled a series of hearings about “Competition and Consumer Protection in the 21st Century,” with digital privacy as a central theme. As part of our ongoing investigation into the overlap between corporate concentration and civil liberties, EFF submitted comments calling for increased scrutiny of mergers and acquisitions that would combine large, sensitive sets of user data in the hands of the tech giants. We’ve drawn attention to the way Google, which owns the largest browser and largest tracking network in the world, uses its power to protect its own interests rather than protecting its users. We’ve also lobbied the U.S. Department of Commerce to apply a users’ rights framework to any future policy proposals.
Even as some lawmakers moved to protect users’ privacy, corporations increased their lobbying at both the state and federal levels to protect their own interests. In Illinois, hostile bills and legal attacks threatened to defang the state’s Biometric Information Privacy Act, the country’s strongest protection for biometrics like fingerprints, voiceprints, and facial recognition. In California, as noted above, EFF is fighting industry efforts to weaken the newly passed CCPA. And in Washington, DC, Big Tech has attempted to “preempt” (a legal term for “dismantle”) strong state-level privacy laws with weaker federal legislation. We’ve resisted those efforts.
While the tech industry has been pitching its version of “privacy law,” EFF has outlined its own recommendations for a legal framework that protects users’ civil liberties online without undermining innovation. We’ve explained how legislatures at every level can establish smart, effective, and carefully tailored rules to protect user privacy, defend the freedom to tinker, and avoid impeding speech or innovation. We’ve also endorsed the idea of treating tech companies as information fiduciaries, which would legally require them to use your information in your best interests.
The tech company scandals and legislative battles over consumer privacy show no signs of slowing down, and neither will we. EFF will be here to keep fighting for users’ privacy rights in 2019 and beyond.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2018.