646 posts categorized "Privacy"

VPN Service Provider Announced A Data Breach Incident Which Occurred in 2018

Consumers in the United States lost both control and privacy protections when the U.S. Federal Communications Commission (FCC), led by President Trump appointee Ajit Pai, a former Verizon lawyer, repealed both broadband privacy and net neutrality protections for consumers in 2017. Since then, many people have subscribed to Virtual Private Network (VPN) services to regain protection of their sensitive personal information and online activities.

NordVPN, a provider of VPN services, announced a data breach on Monday:

"1) One server was affected in March 2018 in Finland. The rest of our service was not affected. No other servers of any type were put at risk. This was an attack on our server, not our entire service; 2) The breach was made possible by poor configuration on a third-party datacenter’s part that we were never notified of. Evidence suggests that when the datacenter became aware of the intrusion, they deleted the accounts that had caused the vulnerabilities rather than notify us of their mistake. As soon as we learned of the breach, the server and our contract with the provider were terminated and we began an extensive audit of our service; 3) No user credentials were affected; 4) There are no signs that the intruder attempted to monitor user traffic in any way. Even if they had, they would not have had access to those users’ credentials..."

In 2018, NordVPN operated about 3,000 servers. It now operates about 5,000 servers. The NordVPN announcement includes more information, including technical details.

Earlier this month, CNET and PC Magazine published their lists of the best VPN services in 2019. PC Magazine's list, which was published before the breach announcement, included NordVPN. So, even well-reviewed services can suffer breaches; it is always wise for consumers to do their research before choosing a VPN service.

What to make of this breach? We don't know who performed the attack. My impression: the attack seemed targeted, since few people probably use the single server in Finland. And, this cyberattack seemed very different from the massive retail attacks where hackers seek to steal the payment information (e.g., credit/debit card numbers) of thousands of consumers.

This cyberattack may have targeted a specific person. Perhaps, the attacker was a competitor or the government agency of a country NordVPN has refused to do business with. (Or, maybe this.) Hopefully, investigative journalists with more resources than this solo blogger will probe deeper.

Several things seem clear: a) cybercriminals have added VPN services to their list of high-value targets, b) hackers have identified the outsourcing vendors used by VPN service providers, and c) cyber attacks like this will probably continue. You might say this breach was a warning shot across the bow of the entire VPN industry. Seems like there is lots more news to come.


Court Says Biometric Privacy Lawsuit Against Facebook Can Proceed

MediaPost reported:

"A federal appellate court has rejected Facebook's request for a new hearing over an Illinois biometric privacy law. Unless the Supreme Court steps in, Illinois Facebook users can now proceed with a class-action alleging that Facebook violated Illinois residents' rights by compiling a database of their faceprints... The legal battle, which dates to 2015, when several Illinois residents alleged that Facebook violated the Illinois Biometric Privacy Information Act, which requires companies to obtain written releases from people before collecting “face geometry” and other biometric data, including retinal scans and voiceprints... The fight centers on Facebook's photo-tagging function, which draws on a vast trove of photos to recognize users' faces and suggest their names when they appear in photos uploaded by their friends..."


The National Auto Surveillance Database You Haven't Heard About Has Plenty Of Privacy Issues

Some consumers have heard of Automated License Plate Recognition (ALPR) cameras, the high-speed, computer-controlled technology that automatically reads and records vehicle license plates. Local governments have installed ALPR cameras on stationary objects such as street-light poles, traffic lights, overpasses, highway exit ramps, and electronic toll collection (ETC) gantries.

Mobile ALPR cameras have been installed on police cars and/or police surveillance vans. The Houston Police Department explained in this 2016 video how it uses the technology. Last year, a blog post discussed ALPR usage in San Diego and its data-sharing with Vigilant Solutions.

What you probably don't know: the auto repossession industry also uses the technology. Many "repo men" have ALPR cameras installed on their vehicles. The data they collect is fed into a massive, nationwide, and privately-owned database which archives license-plate images. Reporters at Motherboard obtained a private demo of the database tool to understand its capabilities.

The demo included tracking a license plate with the vehicle owner's consent. Vice reported:

"This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN's database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network... DRN has more than 600 of these "affiliates" collecting data, according to the contract. These affiliates are paid a monthly bonus for gathering the data..."

Affiliates are repo men and others who both use the database tool and upload images to it. DRN even offers financing to help affiliates buy ALPR cameras; the image on the right, an ALPR financing offer, was taken from the DRN site on September 20, 2019.

When consumers fail to pay their bills, lenders and insurance companies have valid needs to retrieve (or repossess) their unpaid assets. Lenders hire repo men, who then use the DRN database to find vehicles they've been hired to repossess. Those applications are valid, but there are plenty of privacy issues and opportunities for abuse.

Plenty.

First, the data collection is indiscriminate and broad. As repo men (and women) drive through cities and towns to retrieve wanted vehicles, the ALPR cameras mounted on their cars scan all nearby vehicles, both moving and parked. Scans are not limited solely to vehicles they've been hired to repossess, nor to vehicles of known or suspected criminals. So, innocent consumers are caught in the massive data collection. According to Vice:

"... in fact, the vast majority of vehicles captured are connected to innocent people. DRN claims to have more than 9 billion license plate scans, according to a DRN contract obtained by Motherboard..."

Second, the data is archived forever. That can provide a very detailed history of a vehicle's (or a person's) movements:

"The results popped up: dozens of sightings, spanning years. The system could see photos of the car parked outside the owner's house; the car in another state as its driver went to visit family; and the car parked in other spots in the owner's city... Some showed the car's location as recently as a few weeks before."

Third, to facilitate searches, metadata is automatically attached to the images: GPS or geolocation, date, time, day of week, and more. The metadata helps provide a pretty detailed history of each vehicle's -- or person's -- movements: where and when a vehicle (or person) travels, patterns such as which days of the week certain locations are visited, and how long the vehicle (or person) parks at specific locations. Vice explained:

"The data is easy to query, according to a DRN training video obtained by Motherboard. The system adds a "tag" to each result, categorising what sort of location the vehicle was likely spotted at, such as "workplace" or "home."

So, DRN can help users associate specific addresses (work, home, school, doctors, etc.) with specific vehicles. How accurate might those tags be? While such tagging might help repo men and insurance companies spot fraud, such as out-of-state registered vehicles whose owners are trying to avoid detection and/or higher premiums, it raises other concerns.
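The Vice report doesn't describe DRN's actual algorithm, but a simple time-of-day heuristic is one plausible way "home" and "workplace" tags could be generated from scan metadata. Here is a minimal, hypothetical sketch in Python; the data, threshold hours, and rules are invented for illustration only:

```python
from collections import Counter
from datetime import datetime

# Hypothetical plate sightings: (timestamp, approximate location).
# Invented data -- not DRN's schema or algorithm.
sightings = [
    (datetime(2019, 9, 16, 2, 10), "123 Oak St"),
    (datetime(2019, 9, 16, 9, 45), "500 Main St"),
    (datetime(2019, 9, 17, 1, 55), "123 Oak St"),
    (datetime(2019, 9, 17, 10, 20), "500 Main St"),
    (datetime(2019, 9, 18, 3, 5), "123 Oak St"),
]

def tag_locations(sightings):
    """Guess 'home' vs 'workplace' from when a plate is usually seen."""
    night, day = Counter(), Counter()
    for ts, location in sightings:
        # Overnight sightings suggest home; business hours suggest work.
        if ts.hour < 6 or ts.hour >= 22:
            night[location] += 1
        elif 8 <= ts.hour < 18:
            day[location] += 1
    tags = {}
    if night:
        tags[night.most_common(1)[0][0]] = "home"
    if day:
        tags[day.most_common(1)[0][0]] = "workplace"
    return tags

print(tag_locations(sightings))
# {'123 Oak St': 'home', '500 Main St': 'workplace'}
```

The point of the sketch is how little it takes: a handful of timestamped sightings is enough to label where someone sleeps and where they work.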

Fourth, consumers -- vehicle owners -- have no control over the data describing them. Vehicle owners cannot opt out of the data collection. Vehicle owners cannot review or correct any errors in their DRN profiles.

That sounds out of control to me.

The persons whom the archived data directly describes have no say. None. That's a huge concern.

Also, I wonder about single females -- victims of domestic violence -- who have protective orders for their safety. Some states, such as Massachusetts, have Address Confidentiality Programs (ACPs) to protect victims of domestic violence, sexual assault, and stalkers. Does DRN accommodate ACP programs? And if so, how? And if not, why not? How does DRN prevent perps from using its database tool? (Yes, DRN access is an issue. Keep reading.) The Vice report didn't say. Hopefully, future reporting will discuss this.

Fifth, DRN is powerful. It can be used to track vehicles in near real time:

"DRN charges $20 to look up a license plate, or $70 for a "live alert", according to the contract. With a live alert, a user can enter a license plate they wish to receive updates on; when the DRN system spots the vehicle, it'll send an email to the user with the newly discovered location."

That makes DRN highly appealing to both valid users (e.g., police, repo men, insurance companies, private investigators) and bad actors posing as valid users. Who might those bad actors be? The Electronic Frontier Foundation (EFF) warned:

"Taken in the aggregate, ALPR data can paint an intimate portrait of a driver’s life and even chill First Amendment protected activity. ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of religious worship."

Sixth is the problem of access. Anybody can use DRN. According to Vice:

"... a private investigator, or a repo man, or an insurance company does not need a warrant to search for someone's movements over years; they just need to pay to access the DRN system, or find someone willing to share or leverage their access..."

Users simply need to comply with DRN's policies. The company says that: a) users can use its database tool only for certain applications; and b) its contract prohibits users from sharing search results with third parties. We consumers have only DRN's word that it enforces its policies and that users comply. As we have seen with Facebook data breaches, it is easy for bad actors to pose as valid users in order to do end runs around such policies.

What are your opinions of ALPR cameras and DRN?


Survey: Consumers Use Smart Home Devices Despite Finding Them 'Creepy'

Last month, Selligent Marketing Cloud announced the results of a global survey about how consumers view various brands. Some of the findings covered smart speakers and voice assistants. Key findings:

"Sixty-nine percent of surveyed consumers find it “creepy” when they receive ads based on unprompted cues from voice assistants like Apple’s Siri, Amazon’s Alexa and Google Home. Fifty-one percent are worried that voice assistants are listening to conversations without their consent."

Regarding voice assistants, younger consumers are more likely to believe they are being listened to without their knowledge: 58 percent of Gen-Z respondents (ages 18-24) held this view, versus 36 percent of Baby Boomers (ages 55-75). Key findings about privacy and social media: 41 percent of respondents said they have reduced their use of social media due to privacy concerns, and 32 percent said they quit at least one social media platform within the last 12 months.

Selligent surveyed 5,000 consumers in North America and Western Europe. The company provides services to help B2C marketers. To learn more, see the Selligent "Global Connected Consumer Index."


3 Countries Sent A Joint Letter Asking Facebook To Delay End-To-End Encryption Until Law Enforcement Has Back-Door Access. 58 Concerned Organizations Responded

Plenty of privacy and surveillance news recently. Last week, the governments of three countries sent a joint, open letter to Facebook.com asking the social media platform to delay implementation of end-to-end encryption in its messaging apps until back-door access can be provided for law enforcement.

Buzzfeed News published the joint, open letter by U.S. Attorney General William Barr, United Kingdom Home Secretary Priti Patel, acting US Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton. The letter, dated October 4th, was sent to Mark Zuckerberg, the Chief Executive Officer of Facebook. It read in part:

"OPEN LETTER: FACEBOOK’S “PRIVACY FIRST” PROPOSALS

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

In your post of 6 March 2019, “A Privacy-Focused Vision for Social Networking,” you acknowledged that “there are real safety concerns to address before we can implement end-to-end encryption across all our messaging services.” You stated that “we have a responsibility to work with law enforcement and to help prevent” the use of Facebook for things like child sexual exploitation, terrorism, and extortion. We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.

We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications. We also respect promises made by technology companies to protect users’ data. Law abiding citizens have a legitimate expectation that their privacy will be protected. However, as your March blog post recognized, we must ensure that technology companies protect their users and others affected by their users’ online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world..."

The open, joint letter is also available on the United Kingdom government site. Mr. Zuckerberg's complete March 6, 2019 post is available here.

Earlier this year, the U.S. Federal Bureau of Investigation (FBI) issued a Request For Proposals (RFP) seeking quotes from technology companies to build a real-time social media monitoring tool. It seems such a tool would have limited utility without back-door access to encrypted social media accounts.

In 2016, the FBI filed a lawsuit to force Apple Inc. to build "back door" software to unlock an attacker's iPhone. Apple refused, since back-door software would provide access to any iPhone, not only that particular smartphone. Ultimately, the FBI found an offshore tech company to build the backdoor. Later that year, then FBI Director James Comey suggested a national discussion about encryption versus safety. It seems the country still hasn't had that conversation.

According to BuzzFeed, Facebook's initial response to the joint letter:

"In a three paragraph statement, Facebook said it strongly opposes government attempts to build backdoors."

We shall see if Facebook holds steady to that position. Privacy advocates quickly weighed in. The Electronic Frontier Foundation (EFF) wrote:

"This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people. Facebook should not comply. The letter comes in concert with the signing of a new agreement between the US and UK to provide access to allow law enforcement in one jurisdiction to more easily obtain electronic data stored in the other jurisdiction. But the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation... The letter focuses on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people—including journalists, human rights activists, and those at risk of abuse by intimate partners—use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations... What’s more, the backdoors into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, who might seek to spy on dissidents..."

The new agreement the EFF referred to was explained in this United Kingdom announcement:

"The world-first UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather than through governments, which can take years... The current process, which see requests for communications data from law enforcement agencies submitted and approved by central governments via Mutual Legal Assistance (MLA), can often take anywhere from six months to two years. Once in place, the Agreement will see the process reduced to a matter of weeks or even days."

"The Agreement will each year accelerate dozens of complex investigations into suspected terrorists and paedophiles... The US will have reciprocal access, under a US court order, to data from UK communication service providers. The UK has obtained assurances which are in line with the government’s continued opposition to the death penalty in all circumstances..."

On Friday, a group of 58 privacy advocates and concerned organizations from several countries sent a joint letter to Facebook regarding its end-to-end encryption plans. The Center For Democracy & Technology (CDT) posted the group's letter:

"Given the remarkable reach of Facebook’s messaging services, ensuring default end-to-end security will provide a substantial boon to worldwide communications freedom, to public safety, and to democratic values, and we urge you to proceed with your plans to encrypt messaging through Facebook products and services. We encourage you to resist calls to create so-called “backdoors” or “exceptional access” to the content of users’ messages, which will fundamentally weaken encryption and the privacy and security of all users."

It seems wise to have a conversation that weighs all of the advantages and disadvantages, rather than selectively focusing upon some serious crimes while ignoring other significant risks, since back-door software can be abused like any other technology. What are your opinions?


Transcripts Of Internal Facebook Meetings Reveal True Views Of The Company And Its CEO

It's always good for consumers -- and customers -- to know a company's positions on key issues. Thanks to The Verge, we now know more about Facebook's views. Portions of the leaked transcripts included statements by Mr. Zuckerberg, Facebook's CEO, during internal business meetings. The Verge explained the transcripts:

"In two July meetings, Zuckerberg rallied his employees against critics, competitors, and Senator Elizabeth Warren, among others..."

Portions of statements by Mr. Zuckerberg included:

"I’m certainly more worried that someone is going to try to break up our company... So there might be a political movement where people are angry at the tech companies or are worried about concentration or worried about different issues and worried that they’re not being handled well. That doesn’t mean that, even if there’s anger and that you have someone like Elizabeth Warren who thinks that the right answer is to break up the companies... I mean, if she gets elected president, then I would bet that we will have a legal challenge, and I would bet that we will win the legal challenge... breaking up these companies, whether it’s Facebook or Google or Amazon, is not actually going to solve the issues. And, you know, it doesn’t make election interference less likely. It makes it more likely because now the companies can’t coordinate and work together. It doesn’t make any of the hate speech or issues like that less likely. It makes it more likely..."

An October 1st post by Mr. Zuckerberg confirmed the transcripts' authenticity. Earlier this year, Mr. Zuckerberg called for more government regulation. Given his latest comments, we now know his true views.

Also, CNET reported:

"In an interview with the Today show that aired Wednesday, Instagram CEO Adam Mosseri said he generally agrees with the comments Zuckerberg made during the meetings, adding that the company's large size can help it tackle issues like hate speech and election interference on social media."

The claim by Mosseri, Zuckerberg and others that their company needs to be even bigger to tackle issues is, frankly, laughable. Consumers are concerned about several different issues: privacy, hacked and/or cloned social media accounts, costs, consumer choice, surveillance, data collection we can't opt out of, the inability to delete Facebook and other mobile apps, and election interference. A recent study found that consumers want social sites to collect less data.

Industry consolidation and monopolies/oligopolies usually result in reduced consumer choice and higher prices. Prior studies have documented this. The lack of ISP competition in key markets means consumers in the United States pay more for broadband and get slower speeds compared to other countries. At the U.S. Federal Trade Commission's "Privacy, Big Data, And Competition" hearing last year, the developers of the Brave web browser submitted this feedback:

""First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate..."

Facebook is already huge, and its massive size still hasn't stopped multiple data breaches and privacy snafus. Rather, the snafus have demonstrated an inability (unwillingness?) by the company and its executives to implement solutions that adequately protect users' sensitive information. Mr. Zuckerberg has repeatedly apologized, but nothing ever seems to change. Given the statements in the transcripts, his apologies seem even less believable and less credible than before.

Alarmingly, Facebook has instead sought more ways to share users' sensitive data. In August of 2018, reports surfaced that Facebook had approached several major banks about sharing consumers' detailed financial information in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Also last year, Facebook's Onavo VPN App was removed from the Apple App store because the app violated data-collection policies. Not good.

Plus, the larger problem is this: Facebook isn't just a social network. It is also an advertiser, publishing platform, dating service, and wannabe payments service. There are several anti-trust investigations underway involving Facebook. Remember, Facebook tracks both users and non-users around the internet. So, claims about it needing to be bigger to solve problems are malarkey.

And, Mr. Zuckerberg's statements seem to mischaracterize Senator Warren's positions by conflating and ignoring (or minimizing) several issues. Here is what Senator Warren actually stated in March 2019:

"America’s big tech companies provide valuable products but also wield enormous power over our digital lives. Nearly half of all e-commerce goes through Amazon. More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook. As these companies have grown larger and more powerful, they have used their resources and control over the way we use the Internet to squash small businesses and innovation, and substitute their own financial interests for the broader interests of the American people... Weak antitrust enforcement has led to a dramatic reduction in competition and innovation in the tech sector. Venture capitalists are now hesitant to fund new startups to compete with these big tech companies because it’s so easy for the big companies to either snap up growing competitors or drive them out of business. The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012... To restore the balance of power in our democracy, to promote competition, and to ensure that the next generation of technology innovation is as vibrant as the last, it’s time to break up our biggest tech companies..."

Senator Warren listed several examples:

"Using Mergers to Limit Competition: Facebook has purchased potential competitors Instagram and WhatsApp. Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate. Google has snapped up the mapping company Waze and the ad company DoubleClick... Using Proprietary Marketplaces to Limit Competition: Many big tech companies own a marketplace — where buyers and sellers transact — while also participating on the marketplace. This can create a conflict of interest that undermines competition. Amazon crushes small companies by copying the goods they sell on the Amazon Marketplace and then selling its own branded version. Google allegedly snuffed out a competing small search engine by demoting its content on its search algorithm, and it has favored its own restaurant ratings over those of Yelp."

Mr. Zuckerberg would be credible if he addressed each of these examples. In the transcript from The Verge, he didn't.

And, there is plenty of blame to spread around, both on executives at tech companies and on anti-trust regulators in government. Readers wanting to learn more can read about hijacked product pages and other chaos among sellers on the Amazon platform. There's plenty to fault tech companies for, and it isn't a political attack.

There have been plenty of operational failures, data security failures, and willful sharing of the consumer data collected. What are your opinions of the transcripts?


Millions of Americans’ Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek.

[Editor's note: today's guest blog post, by reporters at ProPublica, explores data security issues within the healthcare industry and its outsourcing vendors. It is reprinted with permission.]

By Jack Gillum, Jeff Kao and Jeff Larson - ProPublica

Medical images and health data belonging to millions of Americans, including X-rays, MRIs and CT scans, are sitting unprotected on the internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the U.S. and millions more around the world. In some cases, a snoop could use free software programs — or just a typical web browser — to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers — computers that are used to store and retrieve medical data — in the U.S. that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers and mobile X-ray services.

The insecure servers we uncovered add to a growing list of medical records systems that have been compromised in recent years. Unlike some of the more infamous recent security breaches, in which hackers circumvented a company’s cyber defenses, these records were often stored on servers that lacked the security precautions that long ago became standard for businesses and government agencies.

"It’s not even hacking. It’s walking into an open door," said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them of what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of U.S. company MobilexUSA displayed the names of more than a million patients — all by typing in a simple data query. Their dates of birth, doctors and procedures were also included.

Alerted by ProPublica, MobilexUSA tightened its security earlier this month. The company takes mobile X-rays and provides imaging services to nursing homes, rehabilitation hospitals, hospice agencies and prisons. "We promptly mitigated the potential vulnerabilities identified by ProPublica and immediately began an ongoing, thorough investigation," MobilexUSA’s parent company said in a statement.

Another imaging system, tied to a physician in Los Angeles, allowed anyone on the internet to see his patients’ echocardiograms. (The doctor did not respond to inquiries from ProPublica.) All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates and, in some cases, Social Security numbers.

Experts say it’s hard to pinpoint who’s to blame for the failure to protect the privacy of medical images. Under U.S. law, health care providers and their business associates are legally accountable for securing the privacy of patient data. Several experts said such exposure of patient data could violate the Health Insurance Portability and Accountability Act, or HIPAA, the 1996 law that requires health care providers to keep Americans’ health data confidential and secure.

Although ProPublica found no evidence that patient data was copied from these systems and published elsewhere, the consequences of unauthorized access to such information could be devastating. "Medical records are one of the most important areas for privacy because they’re so sensitive. Medical knowledge can be used against you in malicious ways: to shame people, to blackmail people," said Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, a digital-rights group.

"This is so utterly irresponsible," he said.

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customer’s computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. "Suddenly, medical security has become a do-it-yourself project," Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the U.S.

Schrader found five servers in Germany and 187 in the U.S. that made patients’ records available without a password. ProPublica and Bayerischer Rundfunk also scanned Internet Protocol addresses and identified, when possible, which medical provider they belonged to.

ProPublica independently determined how many patients could be affected in America, and found some servers ran outdated operating systems with known security vulnerabilities. Schrader said that data from more than 13.7 million medical tests in the U.S. were available online, including more than 400,000 in which X-rays and other images could be downloaded.

The privacy problem traces back to the medical profession’s shift from analog to digital technology. Long gone are the days when film X-rays were displayed on fluorescent light boards. Today, imaging studies can be instantly uploaded to servers and viewed over the internet by doctors in their offices.

In the early days of this technology, as with much of the internet, little thought was given to security. The passage of HIPAA required patient information to be protected from unauthorized access. Three years later, the medical imaging industry published its first security standards.

Our reporting indicated that large hospital chains and academic medical centers did put security protections in place. Most of the cases of unprotected data we found involved independent radiologists, medical imaging centers or archiving services.

One German patient, Katharina Gaspari, got an MRI three years ago and said she normally trusts her doctors. But after Bayerischer Rundfunk showed Gaspari her images available online, she said: "Now, I am not sure if I still can." The German system that stored her records was locked down last week.

We found that some systems used to archive medical images also lacked security precautions. Denver-based Offsite Image left open the names and other details of more than 340,000 human and veterinary records, including those of a large cat named "Marshmellow," ProPublica found. An Offsite Image executive told ProPublica the company charges clients $50 for access to the site and then $1 per study. "Your data is safe and secure with us," Offsite Image’s website says.

The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.

"We were just never even aware that there was a possibility that could even happen," Nelms said.

In 1985, an industry group that included radiologists and makers of imaging equipment created a standard for medical imaging software. The standard, which is now called DICOM, spelled out how medical imaging devices talk to each other and share information.

We shared our findings with officials from the Medical Imaging & Technology Alliance, the group that oversees the standard. They acknowledged that there were hundreds of servers with an open connection on the internet, but suggested the blame lay with the people who were running them.

"Even though it is a comparatively small number," the organization said in a statement, "it may be possible that some of those systems may contain patient records. Those likely represent bad configuration choices on the part of those operating those systems."

Meeting minutes from 2017 show that a working group on security learned of Pianykh’s findings and suggested meeting with him to discuss them further. That “action item” was listed for several months, but Pianykh said he never was contacted. The medical imaging alliance told ProPublica last week that the group did not meet with Pianykh because the concerns that they had were sufficiently addressed in his article. They said the committee concluded its security standards were not flawed.

Pianykh said that misses the point. It’s not a lack of standards; it’s that medical device makers don’t follow them. “Medical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice,” Pianykh wrote in 2016.

ProPublica’s latest findings follow several other major breaches. In 2015, U.S. health insurer Anthem Inc. revealed that private data belonging to more than 78 million people was exposed in a hack. In the last two years, U.S. officials have reported that more than 40 million people have had their medical data compromised, according to an analysis of records from the U.S. Department of Health and Human Services.

Joy Pritts, a former HHS privacy official, said the government isn’t tough enough in policing patient privacy breaches. She cited an April announcement from HHS that lowered the maximum annual fine, from $1.5 million to $250,000, for what’s known as “corrected willful neglect” — the result of conscious failures or reckless indifference that a company tries to fix. She said that large firms would not only consider those fines as just the cost of doing business, but that they could also negotiate with the government to get them reduced. A ProPublica examination in 2015 found few consequences for repeat HIPAA offenders.

A spokeswoman for HHS’ Office for Civil Rights, which enforces HIPAA violations, said it wouldn’t comment on open or potential investigations.

"What we typically see in the health care industry is that there is Band-Aid upon Band-Aid applied" to legacy computer systems, said Singh, the cybersecurity expert. She said it’s a “shared responsibility” among manufacturers, standards makers and hospitals to ensure computer servers are secured.

"It’s 2019," she said. "There’s no reason for this."

How Do I Know if My Medical Imaging Data is Secure?

If you are a patient:

If you have had a medical imaging scan (e.g., X-ray, CT scan, MRI, ultrasound, etc.) ask the health care provider that did the scan — or your doctor — if access to your images requires a login and password. Ask your doctor if their office or the medical imaging provider to which they refer patients conducts a regular security assessment as required by HIPAA.

If you are a medical imaging provider or doctor’s office:

Researchers have found that picture archiving and communication systems (PACS) servers implementing the DICOM standard may be at risk if they are connected directly to the internet without a VPN or firewall, or if access to them does not require a secure password. You or your IT staff should make sure that your PACS server cannot be accessed via the internet without a VPN connection and password. If you know the IP address of your PACS server but are not sure whether it is (or has been) accessible via the internet, please reach out to us at [email protected].

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.
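[Editor's note: for IT staff who want a rough first check of ProPublica's advice above, the short Python sketch below probes the two standard DICOM service ports (104, the historic default, and 11112, the IANA-registered port) on a server's address. An open TCP port is only a warning sign, not proof of exposure; a proper assessment also checks whether the DICOM service answers without authentication. The host shown is a reserved documentation address, not a real server.]

```python
import socket

# Standard DICOM service ports: 104 (historic) and 11112 (IANA-registered).
DICOM_PORTS = (104, 11112)

def reachable_dicom_ports(host, ports=DICOM_PORTS, timeout=3.0):
    """Return the DICOM ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# Run this from OUTSIDE your network against the server's public IP address.
print(reachable_dicom_ports("203.0.113.10"))  # placeholder address
```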


Study: Anonymized Data Can Not Be Totally Anonymous. And 'Homomorphic Encryption' Explained

Many online users have encountered situations where companies collect data with the promise that it is safe because the data has been anonymized -- all personally-identifiable data elements have been removed. How safe is this really? A recent study reinforced earlier findings that it isn't as safe as promised. Anonymized data can be de-anonymized, or re-identified to individual persons.

The Guardian UK reported:

"... data can be deanonymised in a number of ways. In 2008, an anonymised Netflix data set of film ratings was deanonymised by comparing the ratings with public scores on the IMDb film website in 2014; the home addresses of New York taxi drivers were uncovered from an anonymous data set of individual trips in the city; and an attempt by Australia’s health department to offer anonymous medical billing data could be reidentified by cross-referencing “mundane facts” such as the year of birth for older mothers and their children, or for mothers with many children. Now researchers from Belgium’s Université catholique de Louvain (UCLouvain) and Imperial College London have built a model to estimate how easy it would be to deanonymise any arbitrary dataset. A dataset with 15 demographic attributes, for instance, “would render 99.98% of people in Massachusetts unique”. And for smaller populations, it gets easier..."

According to the U.S. Census Bureau, the population of Massachusetts was about 6.9 million on July 1, 2018. How did this de-anonymization problem happen? Scientific American explained:

"Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model."

Data brokers, including credit-reporting agencies, have collected a massive number of demographic data attributes about nearly every person. According to this 2018 report, Acxiom has compiled about 5,000 data elements for each of 700 million persons worldwide.

It's reasonable to assume that credit-reporting agencies and other data brokers have similar capabilities. So, data brokers' massive databases can make it relatively easy to re-identify data that was supposedly anonymized. This means consumers don't have the privacy promised.
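The mechanics are easy to demonstrate. The sketch below counts how many records in a toy "anonymized" dataset are unique on just three quasi-identifiers (ZIP code, birth year, gender); a unique combination is all a broker needs to link a record back to a named profile. The data is invented for illustration:

```python
from collections import Counter

# Toy "anonymized" records: names removed, quasi-identifiers kept.
records = [
    {"zip": "02138", "birth_year": 1957, "gender": "F", "diagnosis": "..."},
    {"zip": "02138", "birth_year": 1957, "gender": "M", "diagnosis": "..."},
    {"zip": "02139", "birth_year": 1984, "gender": "F", "diagnosis": "..."},
    {"zip": "02138", "birth_year": 1957, "gender": "F", "diagnosis": "..."},
]

quasi_identifiers = ("zip", "birth_year", "gender")
counts = Counter(tuple(r[k] for k in quasi_identifiers) for r in records)

# A record is re-identifiable when its quasi-identifier combination is unique.
unique = [combo for combo, n in counts.items() if n == 1]
print(f"{len(unique)} of {len(counts)} combinations identify exactly one person")
# 2 of 3 combinations identify exactly one person
```

Real datasets carry far more than three attributes per person, which is why the researchers found 15 attributes were enough to make 99.98% of people in Massachusetts unique.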

What's the solution? Researchers suggest that data brokers must develop new anonymization methods, and rigorously test them to ensure anonymization truly works. And data brokers must be held to higher data security standards.

Any legislation serious about protecting consumers' privacy must address this, too. What do you think?


51 Corporations Tell Congress: A Federal Privacy Law Is Needed. 145 Corporations Tell The U.S. Senate: Inaction On Gun Violence Is 'Simply Unacceptable'

Last week, several of the largest corporations petitioned the United States government for federal legislation in two key topics: consumer privacy and gun reform.

First, the Chief Executive Officers (CEOs) at 51 corporations sent a jointly signed letter to leaders in Congress asking for a federal privacy law to supersede laws emerging in several states. ZD Net reported:

"The open-letter was sent on behalf of Business Roundtable, an association made up of the CEOs of America's largest companies... CEOs blamed a patchwork of differing privacy regulations that are currently being passed in multiple US states, and by several US agencies, as one of the reasons why consumer privacy is a mess in the US. This patchwork of privacy regulations is creating problems for their companies, which have to comply with an ever-increasing number of laws across different states and jurisdictions. Instead, the 51 CEOs would like one law that governs all user privacy and data protection across the US, which would simplify product design, compliance, and data management."

The letter was sent to U.S. Senate Majority Leader Mitch McConnell, U.S. Senate Minority Leader Charles E. Schumer, Senator Roger F. Wicker (Chairman of the Committee on Commerce, Science and Transportation), Nancy Pelosi (Speaker of the U.S. House of Representatives), Kevin McCarthy (Minority Leader of the U.S. House of Representatives), Frank Pallone, Jr. (Chairman of the Committee on Energy and Commerce in the U.S. House of Representatives), and other ranking politicians.

The letter stated, in part:

"Consumers should not and cannot be expected to understand rules that may change depending upon the state in which they reside, the state in which they are accessing the internet, and the state in which the company’s operation is providing those resources or services. Now is the time for Congress to act and ensure that consumers are not faced with confusion about their rights and protections based on a patchwork of inconsistent state laws. Further, as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened. "

That sounds fair and noble enough. After writing this blog for more than 12 years, I have learned that details matter. Who writes the proposed legislation, and the details in that legislation, matter. It is too early to tell if the proposed legislation is weaker or stronger than what some states have implemented.

Some of the notable companies which signed the joint letter included AT&T, Amazon, Comcast, Dell Technologies, FedEx, IBM, Qualcomm, Salesforce, SAP, Target, and Walmart. Signers from the financial services sector included American Express, Bank of America, Citigroup, JPMorgan Chase, MasterCard, State Farm Insurance, USAA, and Visa. Several notable companies did not sign the letter: Facebook, Google, Microsoft, and Verizon.

Second, The New York Times reported that executives from 145 companies sent a joint letter to members of the U.S. Senate demanding that they take action on gun violence. The letter stated, in part:

"... we are writing to you because we have a responsibility and obligation to stand up for the safety of our employees ,customers, and all Americans in the communities we serve across the country. Doing nothing about America's gun violence crisis is simply unacceptable and it is time to stand with the American public on gun safety. Gun violence in America is not inevitable; it's preventable. There are steps Congress can, and must take to prevent and reduce gun violence. We need our lawmakers to support common sense gun laws... we urge the Senate to stand with the American public and take action on gun safety by passing a bill to require background checks on all gun sales and a strong Red Flag law that would allow courts to issue life-saving extreme risk protection orders..."

Some of the notable companies which signed the letter included Airbnb, Bain Capital, Bloomberg LP, Conde Nast, DICK'S Sporting Goods, Gap Inc., Levi Strauss & Company, Lyft, Pinterest, Publicis Groupe, Reddit, Royal Caribbean Cruises Ltd., Twitter, Uber, and Yelp.

Earlier this year, the U.S. House of Representatives passed legislation to address gun violence. So far, the U.S. Senate has done nothing. Representative Kathy Castor (14th District of Florida) explained the actions the House took in 2019:

"The Bipartisan Background Checks Act that I championed is a commonsense step to address gun violence and establish measures that protect our community and families. America is suffering from a long-term epidemic of gun violence – each year, 120,000 Americans are injured and 35,000 die by firearms. This bill ensures that all gun sales or transfers are subject to a background check, stopping senseless violence by individuals to themselves and others... Additionally, the Democratic House passed H.R. 1112 – the Enhanced Background Checks Act of 2019 – which addresses the Charleston Loophole that currently allows gun dealers to sell a firearm to dangerous individuals if the FBI background check has not been completed within three business days. H.R. 1112 makes the commonsense and important change to extend the review period to 10 business days..."

Findings from a February 2018 Quinnipiac national poll:

"American voters support stricter gun laws 66 - 31 percent, the highest level of support ever measured by the independent Quinnipiac University National Poll, with 50 - 44 percent support among gun owners and 62 - 35 percent support from white voters with no college degree and 58 - 38 percent support among white men... Support for universal background checks is itself almost universal, 97 - 2 percent, including 97 - 3 percent among gun owners. Support for gun control on other questions is at its highest level since the Quinnipiac University Poll began focusing on this issue in the wake of the Sandy Hook massacre: i) 67 - 29 percent for a nationwide ban on the sale of assault weapons; ii) 83 - 14 percent for a mandatory waiting period for all gun purchases. It is too easy to buy a gun in the U.S. today..."


Court Okays 'Data Scraping' By Analytics Firm Of Users' Public LinkedIn Profiles. Lots Of Consequences

Earlier this week, a Federal appeals court affirmed an August 2017 injunction which required LinkedIn, a professional networking platform owned by Microsoft Corporation, to allow hiQ Labs, Inc. to access members' profiles. This ruling has implications for everyone.

First, some background. The Naked Security blog by Sophos explained in December 2017:

"... hiQ is a company that makes its money by “scraping” LinkedIn’s public member profiles to feed two analytical systems, Keeper and Skill Mapper. Keeper can be used by employers to detect staff that might be thinking about leaving while Skill Mapper summarizes the skills and status of current and future employees. For several years, this presented no problems until, in 2016, LinkedIn decided to offer something similar, at which point it sent hiQ and others in the sector cease and desist letters and started blocking the bots reading its pages."

So, hiQ's apps use algorithms which determine, for its clients (prospective or current employers), which employees will stay or go. Gizmodo explained the law LinkedIn relied upon in court, arguing that hiQ's:

".... practice of scraping publicly available information from their platform violated the 1986 Computer Fraud and Abuse Act (CFAA). The CFAA is infamously vaguely written and makes it illegal to access a “protected computer” without or in excess of “authorization”—opening the door to sweeping interpretations that could be used to criminalize conduct not even close to what would traditionally be understood as hacking.

Second, the latest court ruling basically said two things: a) it is legal (and doesn't violate hacking laws) for companies to scrape information contained in publicly available profiles; and b) LinkedIn must allow hiQ (and potentially other firms) to continue with data-scraping. This has plenty of implications.

This recent ruling may surprise some persons, since the issue of data scraping was supposedly settled law previously. MediaPost reported:

"Monday's ruling appears to effectively overrule a decision issued six years ago in a dispute between Craigslist and the data miner 3Taps, which also scraped publicly available listings. In that matter, 3Taps allegedly scraped real estate listings and made them available to the developers PadMapper and Lively. PadMapper allegedly meshed Craigslist's apartment listings with Google maps... U.S. District Court Judge Charles Breyer in the Northern District of California ruled in 2013 that 3Taps potentially violated the anti-hacking law by scraping listings from Craigslist after the company told it to stop doing so."

So, you can bet that both social media sites and data analytics firms closely watched and read the appeals court's ruling this week.

Third, in theory any company or agency could then legally scrape information from public profiles on the LinkedIn platform. This scraping could be done by industries and/or entities (e.g., spy agencies worldwide) which job seekers never intended nor wanted to reach.

Many consumers simply signed up for and use LinkedIn to build professional relationships and/or to find jobs, either full-time as employees or as contractors. The 2019 social media survey by Pew Research found that 27 percent of adults in the United States use LinkedIn, with higher usage among persons with college degrees (51 percent), persons making more than $75K annually (49 percent), persons ages 25 - 29 (44 percent), persons ages 30 - 49 (37 percent), and urban residents (33 percent).

I'll bet that many LinkedIn users never imagined that their profiles would be used against them by data analytics firms. Like it or not, that is how consumers' valuable, personal data is used (abused?) by social media sites and their clients.

Fourth, the practice of data scraping has divided tech companies. Again, from the Naked Security blog post in 2017:

"Data scraping, its seems, has become a booming tech sector that increasingly divides the industry ideologically. One side believes LinkedIn is simply trying to shut down a competitor wanting to access public data LinkedIn merely displays rather than owns..."

The Electronic Frontier Foundation (EFF), the DuckDuckGo search engine, and the Internet Archive had filed an amicus brief with the appeals court before its ruling. The EFF explained the group's reasoning and urged the:

"... Court of Appeals to reject LinkedIn’s request to transform the CFAA from a law meant to target serious computer break-ins into a tool for enforcing its computer use policies. The social networking giant wants violations of its corporate policy against using automated scripts to access public information on its website to count as felony “hacking” under the Computer Fraud and Abuse Act, a 1986 federal law meant to criminalize breaking into private computer systems to access non-public information. But using automated scripts to access publicly available data is not "hacking," and neither is violating a website’s terms of use. LinkedIn would have the court believe that all "bots" are bad, but they’re actually a common and necessary part of the Internet. "Good bots" were responsible for 23 percent of Web traffic in 2016..."

So, bots are here to stay. And, it's up to LinkedIn executives to find a solution to protect their users' information.

Fifth, according to the Reuters report, the judge suggested a solution for LinkedIn: "eliminating the public access option." Hmmmm. Public, or at least broad, access is what many job seekers desire. So, a balance needs to be struck between truly "public" profiles, where anyone anywhere worldwide can access them, versus access limited to intended audiences (e.g., hiring executives at potential employers in certain industries).

Sixth, what struck me about the court ruling this week was that nobody was in the courtroom representing the interests of LinkedIn users, of which I am one. MediaPost reported:

"The appellate court discounted LinkedIn's argument that hiQ was harming users' privacy by scraping data even when people used a "do not broadcast" setting. "There is no evidence in the record to suggest that most people who select the 'Do Not Broadcast' option do so to prevent their employers from being alerted to profile changes made in anticipation of a job search," the judges wrote. "As the district court noted, there are other reasons why users may choose that option -- most notably, many users may simply wish to avoid sending their connections annoying notifications each time there is a profile change." "

What? Really?! We LinkedIn users have a natural, vested interest in control over both our profiles and the sensitive, personal information that describes each of us in our profiles. Either somebody at LinkedIn failed to adequately represent the interests of its users, or the court didn't listen closely nor seek out additional evidence, or both.

Maybe the "there is no evidence in the record" regarding the 'Do Not Broadcast' feature will be the basis of another appeal or lawsuit.

With this latest court ruling, we LinkedIn users have totally lost control (except for deleting or suspending our LinkedIn accounts). It makes me wonder how a court could reach its decision without hearing directly from somebody representing LinkedIn users.

Seventh, it seems that LinkedIn needs to modify its platform in three key ways:

  1. Allow its users to specify the only uses or applications (e.g., find fulltime work, find contract work, build contacts in my industry or area of expertise, find/screen job candidates, advertise/promote a business, academic research, publish content, read news, dating, etc.) for which their profiles may be accessed. The 'Do Not Broadcast' feature is clearly not strong enough;
  2. Allow its users to specify or approve individual users -- other actual persons who are LinkedIn users and not bots nor corporate accounts -- who can access their full, detailed profiles; and
  3. Outline in the user agreement the list of applications or uses profiles may be accessed for, so that both prospective and current LinkedIn users can make informed decisions. 

This would give LinkedIn users some control over the sensitive, personal information in their profiles. Without control, the benefits of using LinkedIn quickly diminish. And, that's enough to cause me to rethink my use of LinkedIn, and either deactivate or delete my account.

What are your opinions of this ruling? If you currently use LinkedIn, will you continue using it? If you don't use LinkedIn and were considering it, will you still consider using it?


Mashable: 7 Privacy Settings iPhone Users Should Enable Today

Most people want to get the most from their smartphones. That includes using their devices wisely and with privacy. Mashable recommended seven privacy settings for Apple iPhone users. I found the recommendations very helpful, and thought that you would, too.

Three privacy settings stood out. First, many mobile apps have:

"... access to your camera. For some of these, the reasoning is a no-brainer. You want to be able to use Snapchat filters? Fine, the app needs access to your camera. That makes sense. Other apps' reasoning for having access to your camera might be less clear. Once again, head to Settings > Privacy > Camera and review what apps you've granted camera access. See anything in there that doesn't make sense? Go ahead and disable it."

A feature most consumers probably haven't considered:

"... which apps on your phone have requested microphone access. For example, do you want Drivetime to have access to your mic? No? Because if you've downloaded it, then it might. If an app doesn't have a clear reason for needing access to your microphone, don't give it that access."

And, perhaps most importantly:

"Did you forget about your voicemail? Hackers didn't. At the 2018 DEF CON, researchers demonstrated the ability to brute force voicemail accounts and use that access to reset victims' Google and PayPal accounts... Set a random 9-digit voicemail password. Go to Settings > Phone and scroll down to "Change Voicemail Password." You iPhone should let you choose a 9-digit code..."

The full list is a reminder for consumers not to assume that the default settings on mobile apps you install are right for your privacy needs. Wise consumers check and make adjustments.


Privacy Study Finds Consumers Less Likely To Share Several Key Data Elements

Advertising Research Foundation logo Last month, the Advertising Research Foundation (ARF) announced the results of its 2019 Privacy Study, which was conducted in March. The survey included 1,100 consumers in the United States, weighted by age, gender, and region. Key findings, including device and internet usage:

"The key differences between 2018 and 2019 are: i) People are spending more time on their mobile devices and less time on their PCs; ii) People are spending more time checking email, banking, listening to music, buying things, playing games, and visiting social media via mobile apps; iii) In general, people are only slightly less likely to share their data than last year. iv) They are least likely to share their social security number; financial and medical information; and their home address and phone numbers; v) People seem to understand the benefits of personalized advertising, but do not value personalization highly and do not understand the technical approaches through which it is accomplished..."

Advertisers use these findings to adjust their advertising, offers, and pitches to maximize responses by consumers. More detail about the above privacy and data sharing findings:

"In general, people were slightly less likely to share their data in 2019 than they were in 2018. They were least likely to share their social security number; financial and medical information; their work address; and their home address and phone numbers in both years. They were most likely to share their gender, race, marital status, employment status, sexual orientation, religion, political affiliation, and citizenship... The biggest changes in respondents’ willingness to share their data from 2018 to 2019 were seen in their home address (-10 percentage points), spouse’s first and last name (-8 percentage points), personal email address (-7 percentage points), and first and last names (-6 percentage points)."

The researchers asked the data sharing question in two ways:

  1. "Which of the following types of information would you be willing to share with a website?"
  2. "Which of the following types of information would you be willing to share for a personalized experience?"

The survey included 20 information types for both questions. For the first question, survey respondents' willingness to share decreased for 15 of 20 information types, remained constant for two information types, and increased slightly for the remainder:

Which of the following types of information would you be willing to share with a website?

Information Type                2018 (%)   2019 (%)   2019 vs. 2018
Birth Date                         71         68           (3)
Citizenship Status                 82         79           (3)
Employment Status                  84         82           (2)
Financial Information              23         20           (3)
First & Last Name                  69         63           (6)
Gender                             93         93            --
Home Address                       41         31          (10)
Home Landline Phone Number         33         30           (3)
Marital Status                     89         85           (4)
Medical Information                29         26           (3)
Personal Email Address             61         54           (7)
Personal Mobile Phone Number       34         32           (2)
Place Of Birth                     62         58           (4)
Political Affiliation              76         77             1
Race or Ethnicity                  90         91             1
Religious Preference               78         79             1
Sexual Orientation                 80         79           (1)
Social Security Number             10         10            --
Spouse's First & Last Name         41         33           (8)
Work Address                       33         31           (2)

(Declines from 2018 to 2019 are shown in parentheses.)

The researchers asked about citizenship status due to controversy related to the upcoming 2020 Census. The researchers concluded:

"The survey finding most relevant to these proposals is that the public does not see the value of sharing data to improve personalization of advertising messages..."

Overall, it appears that consumers are getting wiser about their privacy. Consumers' willingness to share decreased for more items than it increased for. View the detailed ARF 2019 Privacy Survey (Adobe PDF).


Privacy Tips For The Smart Speakers In Your Home

Many consumers love the hands-free convenience of smart speakers in their homes. The appeal includes several applications: stream music, plan travel, manage your grocery list, get briefed on news headlines, buy movie tickets, hear jokes, get sports scores, and more. Like any other internet-connected device, it's wise to know and use the device's security settings if you value your own, your children's, and your guests' privacy.

In the August issue of its print magazine, Consumer Reports (CR) advises the following settings for your smart speakers:

"Protect Your Privacy
If keeping a speaker with a microphone in your home makes you uneasy, you have reason to be. Amazon, Apple, and Google all collect recorded snippets of consumers' commands to improve their voice-computing technology. But they also offer ways to mute the mic when it's not in use. The Amazon Echo has an On/Off button on top of the device. The Google Home's mute button is on the back. And Apple's HomePod requires a voice command: "Hey, Siri, stop listening." (You then use a button to turn the device back on.) For a third-party speaker, consult the owner's manual for instructions."

To learn more, the CR site offers several related resources.


Google Claims Blocking Cookies Is Bad For Privacy. Researchers: Nope. That Is 'Privacy Gaslighting'

Google logo The announcement by Google last week included some dubious claims, which received a fair amount of attention among privacy experts. First, a Senior Product Manager of User Privacy and Trust wrote in a post:

"Ads play a major role in sustaining the free and open web. They underwrite the great content and services that people enjoy... But the ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. The mission is clear: we need to ensure that people all around the world can continue to access ad supported content on the web while also feeling confident that their privacy is protected. As we shared in May, we believe the path to making this happen is also clear: increase transparency into how digital advertising works, offer users additional controls, and ensure that people’s choices about the use of their data are respected."

Okay, that is a fair assessment of today's internet. And, more transparency is good. Google executives are entitled to their opinions. The post also stated:

"The web ecosystem is complex... We’ve seen that approaches that don’t account for the whole ecosystem—or that aren’t supported by the whole ecosystem—will not succeed. For example, efforts by individual browsers to block cookies used for ads personalization without suitable, broadly accepted alternatives have fallen down on two accounts. First, blocking cookies materially reduces publisher revenue... Second, broad cookie restrictions have led some industry participants to use workarounds like fingerprinting, an opaque tracking technique that bypasses user choice and doesn’t allow reasonable transparency or control. Adoption of such workarounds represents a step back for user privacy, not a step forward."

So, Google claims that blocking cookies is bad for privacy. With a statement like that, the "User Privacy and Trust" title seems like an oxymoron. Maybe that's the best one can expect from a company that gets 87 percent of its revenues from advertising.

Also on August 22nd, the Director of Chrome Engineering repeated this claim and proposed new internet privacy standards:

"... we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox. Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent... some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences. First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected... Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web..."
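
To make "fingerprinting" concrete, here is a minimal, hypothetical sketch (TypeScript, browser-side) of the technique described above: combine a few properties every browser exposes, hash them, and you have an identifier that survives clearing cookies:

    // Hypothetical fingerprint sketch -- real trackers use many more signals
    // (canvas rendering, audio stack, installed plugin lists, etc.).
    async function sketchFingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,                          // browser and OS build
        navigator.language,                           // locale
        `${screen.width}x${screen.height}`,           // display geometry
        String(new Date().getTimezoneOffset()),       // timezone
        String(navigator.hardwareConcurrency ?? ""),  // CPU core count
      ].join("|");

      // Hash the combined signals into a stable identifier.
      const bytes = new TextEncoder().encode(signals);
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

Note what makes this nastier than a cookie: there is nothing for the user to delete.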

Yes, fingerprinting is a nasty, privacy-busting technology. No argument with that. But, blocking cookies is bad for privacy? Really? Come on, let's be honest.

This dubious claim ignores corporate responsibility... that some advertisers and website operators made choices -- conscious decisions to use more invasive technologies like fingerprinting to do an end-run around users' needs, desires, and actions to regain online privacy. Sites and advertisers made those invasive-tech choices when other options were available, such as using subscription services to pay for their content.

Google's claim also ignores the push by corporate internet service providers (ISPs), which resulted in the repeal of online privacy protections for consumers, thanks to a compliant, GOP-led Federal Communications Commission (FCC) that seems happy to tilt the playing field further toward corporations and against consumers. So, users are simply trying to regain online privacy.

During the past few years, both privacy-friendly web browsers (e.g., Brave, Firefox) and search engines (e.g., DuckDuckGo) have emerged to meet consumers' online privacy needs. (Well, it's not only consumers that need online privacy. Attorneys and businesses need it, too, to protect their intellectual property and proprietary business methods.) Online users demanded choice, something advertisers need to remember and value.

Privacy experts weighed in about Google's blocking-cookies-is-bad-for-privacy claim. Jonathan Mayer and Arvind Narayanan explained:

"That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight. Our high-level points are: 1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting; 2) There is little trustworthy evidence on the comparative value of tracking-based advertising; 3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical; and 4) Google is attempting a punt to the web standardization process, which will at best result in years of delay."

The researchers debunked Google's claim with more details:

"Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true. If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today."

Moreover:

"... there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections. Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general."

Gaslighting, indeed. Online privacy is important. So, too, are consumers' choices and desires. Thanks to Mr. Mayer and Mr. Narayanan for the comprehensive response.

What are your opinions of cookie blocking? Of Google's claims?


ExpressVPN Survey Indicates Americans Care About Privacy. Some Have Already Taken Action

ExpressVPN published the results of its privacy survey. The survey, commissioned by ExpressVPN and conducted by Propeller Insights, included a representative sample of about 1,000 adults in the United States.

Overall, 29.3% of survey respondents said they use or had used a virtual private network (VPN) or a proxy network. Survey respondents cited three broad reasons for using a VPN service: 1) to avoid surveillance, 2) to access content, and 3) to stay safe online. Detailed survey results about surveillance concerns:

"The most popular reasons to use a VPN are related to surveillance, with 41.7% of respondents aiming to protect against sites seeing their IP, 26.4% to prevent their internet service provider (ISP) from gathering information, and 16.6% to shield against their local government."

Who performs the surveillance matters to consumers. People are more concerned with surveillance by companies than by law enforcement agencies within the U.S. government:

"Among the respondents, 15.9% say they fear the FBI surveillance, and only 6.4% fear the NSA spying on them. People are by far most worried about information gathering by ISPs (23.2%) and Facebook (20.5%). Google spying is more of a worry for people (5.9%) than snooping by employers (2.6%) or family members (5.1%).

Concerns with internet service providers (ISPs) are not surprising, since these telecommunications companies enjoy a unique position that enables them to track all of a consumer's online activities. Concerns about Facebook are not surprising, since it tracks both users and non-users, similar to advertising networks. The "protect against sites seeing their IP" finding suggests that consumers, or at least VPN users, want to protect themselves and their devices against advertisers, advertising networks, and privacy-busting mobile apps which track their geolocation.

Detailed survey results about content access concerns:

"... 26.7% use [a VPN service] to access their corporate or academic network, 19.9% to access content otherwise not available in their region, and 16.9% to circumvent censorship."

The survey also found that consumers generally trust their mobile devices:

" Only 30.5% of Android users are “not at all” or “not very” confident in their devices. iOS fares slightly better, with 27.4% of users expressing a lack of confidence."

The survey uncovered views about government intervention and policies:

"Net neutrality continues to be popular (70% more respondents support it rather then don’t), but 51.4% say they don’t know enough about it to form an opinion... 82.9% also believe Congress should enact laws to require tech companies to get permission before collecting personal data. Even more, 85.2% believe there should be fines for companies that lose users’ data, and 90.2% believe there should be further fines if the data is misused. Of the respondents, 47.4% believe Congress should go as far as breaking up Facebook and Google."

The survey also explored views about smart devices (e.g., doorbells, voice assistants, smart speakers) installed in many consumers' homes, since these devices are equipped with always-on cameras and/or microphones:

"... 85% of survey respondents say they are extremely (24.7%), very (23.4%), or somewhat (28.0%) concerned about smart devices monitoring their personal habits... Almost a quarter (24.8%) of survey respondents do not own any smart devices at all, while almost as many (24.4%) always turn off their devices’ microphones if they are not using them. However, one-fifth (21.2%) say they always leave the microphone on. The numbers are similar for camera use..."

There are more statistics and findings in the entire survey report by ExpressVPN. I encourage everyone to read it.


Researcher Uncovers Several Browser Extensions That Track Users' Online Activity And Share Data

Many consumers prefer to browse websites directly, since sites offer their full content and functionality versus the slices delivered by mobile apps. A researcher has found that as many as four million consumers have been affected by browser extensions, the optional add-ons for web browsers, which collected sensitive personal and financial information.

Ars Technica reported about DataSpii, the name of the online privacy issue:

"The term DataSpii was coined by Sam Jadali, the researcher who discovered—or more accurately re-discovered—the browser extension privacy issue. Jadali intended for the DataSpii name to capture the unseen collection of both internal corporate data and personally identifiable information (PII).... DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics..."

At first glance, this may not sound important, but it is. Why? First, the data collected included the most sensitive and personal information:

"Home and business surveillance videos hosted on Nest and other security services; tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive, Intuit.com, and other online services; vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers; patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services; travel itineraries hosted on Priceline, Booking.com, and airline websites; Facebook Messenger attachments..."

I'll bet you thought your Facebook Messenger stuff was truly private. Second, because:

"... the published URLs wouldn’t open a page unless the person following them supplied an account password or had access to the private network that hosted the content. But even in these cases, the combination of the full URL and the corresponding page name sometimes divulged sensitive internal information. DataSpii is known to have affected 50 companies..."

Ars Technica also reported:

"Principals with both Nacho Analytics and the browser extensions say that any data collection is strictly "opt in." They also insist that links are anonymized and scrubbed of sensitive data before being published. Ars, however, saw numerous cases where names, locations, and other sensitive data appeared directly in URLs, in page titles, or by clicking on the links. The privacy policies for the browser extensions do give fair warning that some sort of data collection will occur..."

So, the data collection may be legal, but is it ethical -- especially if the anonymization is partial? After the researcher's report went public, many of the suspect browser extensions were deleted from online stores. However, extensions already installed locally on users' browsers can still collect data:

"Beginning on July 3—about 24 hours after Jadali reported the data collection to Google—Fairshare Unlock, SpeakIt!, Hover Zoom, PanelMeasurement, Branded Surveys, and Panel Community Surveys were no longer available in the Chrome Web Store... While the notices say the extensions violate the Chrome Web Store policy, they make no mention of data collection nor of the publishing of data by Nacho Analytics. The toggle button in the bottom-right of the notice allows users to "force enable" the extension. Doing so causes browsing data to be collected just as it was before... In response to follow-up questions from Ars, a Google representative didn't explain why these technical changes failed to detect or prevent the data collection they were designed to stop... But removing an extension from an online marketplace doesn't necessarily stop the problems. Even after the removals of Super Zoom in February or March, Jadali said, code already installed by the Chrome and Firefox versions of the extension continued to collect visited URL information..."

Since browser developers haven't remotely disabled leaky browser extensions, and since online stores can't seem to consistently police extensions for privacy compliance, the burden falls upon consumers. The Ars Technica report lists the leaky browser extensions by name.

The bottom line: browser extensions can easily compromise your online privacy and security. That means that, as with any other software, wise consumers read independent online reviews first, read the developer's terms of use and privacy policy before installing a browser extension, and use a privacy-focused web browser.

Consumer Reports advises consumers to, a) install browser extensions only from companies you trust, and b) uninstall browser extensions you don't need nor use. For consumers who don't know how, the Consumer Reports article also lists step-by-step instructions to uninstall browser extensions in Google Chrome, Firefox, Safari, and Internet Explorer branded web browsers.


FBI Seeks To Monitor Twitter, Facebook, Instagram, And Other Social Media Accounts For Violent Threats

Federal Bureau of Investigation logo The U.S. Federal Bureau of Investigation (FBI) issued on July 8th a Request For Proposals (RFP) seeking quotes from technology companies to build a "Social Media Alerting" tool, which would enable the FBI to monitor, in real time, accounts on several social media services for threats of violence. The RFP, which was amended on August 7th, stated:

"The purpose of this procurement is to acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources. A subscription to this service shall grant the Federal Bureau of Investigation (FBI) access to tools that will allow for the exploitation of lawfully collected/acquired data from social media platforms that will be stored, vetted and formatted by a vendor... This synopsis and solicitation is being issued as Request for Proposal (RFP) number DJF194750PR0000369 and... This announcement is supplemented by a detailed RFP Notice, an SF-33 document, an accompanying Statement of Objectives (SOO) and associated FBI documents..."

"Proactively identify" suggests the usage of software algorithms or artificial intelligence (AI). And, the vendor selected will archive the collected data for an undisclosed period of time. The RFP also stated:

"Background: The use of social media platforms, by terrorist groups, domestic threats, foreign intelligence services, and criminal organizations to further their illegal activity creates a demonstrated need for tools to properly identify the activity and react appropriately. With increased use of social media platforms by subjects of current FBI investigations and individuals that pose a threat to the United States, it is critical to obtain a service which will allow the FBI to identify relevant information from Twitter, Facebook, Instagram, and other Social media platforms in a timely fashion. Consequently, the FBI needs near real time access to a full range of social media exchanges..."

For context, in 2016 the FBI attempted to force Apple to build "backdoor software" to unlock an alleged terrorist's iPhone in California. The FBI later found an offshore technology company to build its backdoor.

The documents indicate that the FBI wants its staff to use the tool at both headquarters and field-office locations globally, and with mobile devices. The SOO document stated:

"FBI personnel are deployed internationally and sometimes in areas of press censorship. A social media exploitation tool with international reach and paired with a strong language translation capability, can become crucial to their operations and more importantly their safety. The functions of most value to these individuals is early notification, broad international reach, instant translation, and the mobility of the needed capability."

The SOO also explained the data elements to be collected:

"3.3.2.2.1 Obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups through the corroboration of multiple social media sources... Items of interest in this context are social networks, user IDs, emails, IP addresses and telephone numbers, along with likely additional account with similar IDs or aliases... Any connectivity between aliases and their relationship must be identifiable through active link analysis mapping..."
"3.3.3.2.1 Online media is monitored based on location, determined by the users’ delineation or the import of overlays from existing maps (neighborhood, city, county, state or country). These must allow for customization as AOR sometimes cross state or county lines..."

While the document mentioned "user IDs" and didn't mention passwords, the implication seems clear that the FBI wants both in order to access and monitor social media accounts in real time. And, the "other Social Media platforms" phrase raises questions. What is the full list of specific services it refers to? Why list only the three largest platforms by name?

As this FBI project proceeds, let's hope that the full list of social sites includes 8Chan, Reddit, Stormfront, and similar others. Why? In a study released in November of 2018, the Center for Strategic and International Studies (CSIS) found:

"Right-wing extremism in the United States appears to be growing. The number of terrorist attacks by far-right perpetrators rose over the past decade, more than quadrupling between 2016 and 2017. The recent pipe bombs and the October 27, 2018, synagogue attack in Pittsburgh are symptomatic of this trend. U.S. federal and local agencies need to quickly double down to counter this threat. There has also been a rise in far-right attacks in Europe, jumping 43 percent between 2016 and 2017... Of particular concern are white supremacists and anti-government extremists, such as militia groups and so-called sovereign citizens interested in plotting attacks against government, racial, religious, and political targets in the United States... There also is a continuing threat from extremists inspired by the Islamic State and al-Qaeda. But the number of attacks from right-wing extremists since 2014 has been greater than attacks from Islamic extremists. With the rising trend in right-wing extremism, U.S. federal and local agencies need to shift some of their focus and intelligence resources to penetrating far-right networks and preventing future attacks. To be clear, the terms “right-wing extremists” and “left-wing extremists” do not correspond to political parties in the United States..."

The CSIS study also noted:

"... right-wing terrorism commonly refers to the use or threat of violence by sub-national or non-state entities whose goals may include racial, ethnic, or religious supremacy; opposition to government authority; and the end of practices like abortion... Left-wing terrorism, on the other hand, refers to the use or threat of violence by sub-national or non-state entities that oppose capitalism, imperialism, and colonialism; focus on environmental or animal rights issues; espouse pro-communist or pro-socialist beliefs; or support a decentralized sociopolitical system like anarchism."

Terrorism is terrorism. All of it needs to be prosecuted: left-wing, right-wing, domestic, and foreign. (This prosecutor is doing the right thing.) It seems wise to monitor the platforms where suspects congregate.

This project also raises questions about the effectiveness of monitoring social media. Will it really work? Digital Trends reported:

"Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and crucially, what you like to buy. Sometimes, an algorithm can get your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters... But preventing mass shootings before they happen raises thorny legal questions: how do you determine if someone is just angry online rather than someone who could actually carry out a shooting? Can you arrest someone if a computer thinks they’ll eventually become a shooter?"

Some social media users have already experienced inaccuracies (failures?) when sites present irrelevant advertisements and/or political party messaging based upon supposedly accurate software algorithms. The Digital Trends article also dug deeper:

"A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018... Twitter also frequently works to help facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if they did, that would mean flagging 166,513 people to the FBI – far more people than the agency could ever investigate."

Then there is the problem of the content of users' social media posts:

"Even if someone does post to social media immediately before they decide to unleash violence, it’s often not something that would trip up either Twitter or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – once calling the food served there “overprices” and a second that told people to read a 19th-century pro-fascist book that’s popular with white nationalists."

Also, Amazon got caught up in the hosting mess with 8Chan. So, there is more news to come.

Last, this blog post explored the problems with emotion recognition by facial-recognition software. Let's hope this FBI project is not a waste of taxpayers' hard-earned money.


White Hat Hacker: Social Media Is a 'Goldmine For Details' For Cyberattacks Targeting Companies

Many employees are their own worst enemy when they start a new job. In this Fast Company article, a white hat hacker explains the security fails by employees which compromise their employer's data security.

Stephanie “Snow” Carruthers, the chief people hacker within IBM's X-Force Red group, explained that hackers trawl:

"... social media for photos, videos, and other clues that can help them better target your company in an attack. I know this because I’m one of them... I’m part of an elite team of hackers within IBM known as X-Force Red. Companies hire us to find gaps in their security – before the real bad guys do... Social media posts are a goldmine for details that aid in our “attacks.” What you find in the background of photos is particularly revealing... The first thing you may be surprised to know is that 75% of the time, the information I’m finding is coming from interns or new hires. Younger generations entering the workforce today have grown up on social media, and internships or new jobs are exciting updates to share. Add in the fact that companies often delay security training for new hires until weeks or months after they’ve started, and you’ve got a recipe for disaster..."

The obvious security fails include selfie photos by interns or new hires wearing their security badges, selfies showing log-in credentials on computer screens, and selfies showing passwords written on post-it notes attached to computer monitors. Less obvious security fails include group photos by interns or new hires with their work team. Group photos can help hackers identify team members to craft personalized and more effective phishing e-mails and text messages using co-workers' names, to trick recipients into opening attachments containing malware.

This highlights one business practice interns and new hires should understand. Your immediate boss or supervisor won't scour your social media accounts looking for security fails. Your employer will outsource the job to another company, which will.

If you just started a new job, don't be that clueless employee posting security fails to your social media accounts. Read and understand your employer's social media policy. If you are a manager, schedule security training for your interns and new hires ASAP.


FTC Levies $5 Billion Fine, 'New Restrictions, And Modified Corporate Structure' To Hold Facebook Accountable. Will These Actions Prevent Future Privacy Abuses?

The U.S. Federal Trade Commission (FTC) announced on July 24th a record-breaking fine against Facebook, Inc., plus new limitations on the social networking service. The FTC announcement stated:

"Facebook, Inc. will pay a record-breaking $5 billion penalty, and submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy, to settle Federal Trade Commission charges that the company violated a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information... The settlement order announced [on July 24th] also imposes unprecedented new restrictions on Facebook’s business operations and creates multiple channels of compliance..."

During 2018, Facebook generated after-tax profits of $22.1 billion on sales of $55.84 billion. While a $5 billion fine is a lot of money, the company can easily afford the record-breaking fine. The fine equals about one month's revenues, or a little over 4 percent of its $117 billion in assets.

U.S. Federal Trade Commission chart: new compliance system for Facebook. The FTC announcement explained several "unprecedented" restrictions in the settlement order. First, the restrictions are designed to:

"... prevent Facebook from deceiving its users about privacy in the future, the FTC’s new 20-year settlement order overhauls the way the company makes privacy decisions by boosting the transparency of decision making... It establishes an independent privacy committee of Facebook’s board of directors, removing unfettered control by Facebook’s CEO Mark Zuckerberg over decisions affecting user privacy. Members of the privacy committee must be independent and will be appointed by an independent nominating committee. Members can only be fired by a supermajority of the Facebook board of directors."

Facebook logo Second, the order mandates compliance officers:

"Facebook will be required to designate compliance officers who will be responsible for Facebook’s privacy program. These compliance officers will be subject to the approval of the new board privacy committee and can be removed only by that committee—not by Facebook’s CEO or Facebook employees. Facebook CEO Mark Zuckerberg and designated compliance officers must independently submit to the FTC quarterly certifications that the company is in compliance with the privacy program mandated by the order, as well as an annual certification that the company is in overall compliance with the order. Any false certification will subject them to individual civil and criminal penalties."

Third, the new order strengthens oversight:

"... The order enhances the independent third-party assessor’s ability to evaluate the effectiveness of Facebook’s privacy program and identify any gaps. The assessor’s biennial assessments of Facebook’s privacy program must be based on the assessor’s independent fact-gathering, sampling, and testing, and must not rely primarily on assertions or attestations by Facebook management. The order prohibits the company from making any misrepresentations to the assessor, who can be approved or removed by the FTC. Importantly, the independent assessor will be required to report directly to the new privacy board committee on a quarterly basis. The order also authorizes the FTC to use the discovery tools provided by the Federal Rules of Civil Procedure to monitor Facebook’s compliance with the order."

Fourth, the order included six new privacy requirements:

"i) Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data; ii) Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising; iii) Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users; iv) Facebook must establish, implement, and maintain a comprehensive data security program; v) Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and vi) Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services."

Wow! Lots of consequences when a manager builds a corporation with a "move fast and break things" culture, values, and ethics. Assistant Attorney General Jody Hunt of the Department of Justice's Civil Division said:

"The Department of Justice is committed to protecting consumer data privacy and ensuring that social media companies like Facebook do not mislead individuals about the use of their personal information... This settlement’s historic penalty and compliance terms will benefit American consumers, and the Department expects Facebook to treat its privacy obligations with the utmost seriousness."

There is disagreement among the five FTC commissioners about the settlement: the vote to approve the order was 3 to 2. FTC Commissioner Rebecca Kelly Slaughter stated in her dissent:

"My principal objections are: (1) The negotiated civil penalty is insufficient under the applicable statutory factors we are charged with weighing for order violators: injury to the public, ability to pay, eliminating the benefits derived from the violation, and vindicating the authority of the FTC; (2) While the order includes some encouraging injunctive relief, I am skeptical that its terms will have a meaningful disciplining effect on how Facebook treats data and privacy. Specifically, I cannot view the order as adequately deterrent without both meaningful limitations on how Facebook collects, uses, and shares data and public transparency regarding Facebook’s data use and order compliance; (3) Finally, my deepest concern with this order is that its release of Facebook and its officers from legal liability is far too broad..."

FTC Chairman Joseph J. Simons and Commissioners Noah Joshua Phillips and Christine S. Wilson stated on July 24th in an 8-page joint statement (Adobe PDF):

"In 2012, Facebook entered into a consent order with the FTC, resolving allegations that the company misrepresented to consumers the extent of data sharing with third-party applications and the control consumers had over that sharing. The 2012 order barred such misrepresentations... Our complaint announced today alleges that Facebook failed to live up to its commitments under that order. Facebook subsequently made similar misrepresentations about sharing consumer data with third-party apps and giving users control over that sharing, and misrepresented steps certain consumers needed to take to control [over] facial recognition technology. Facebook also allowed financial considerations to affect decisions about how it would enforce its platform policies against third-party users of data, in violation of its obligation under the 2012 order... The $5 billion penalty serves as an important deterrent to future order violations... For purposes of comparison, the EU’s General Data Protection Regulation (GDPR) is touted as the high-water mark for comprehensive privacy legislation, and the penalty the FTC has negotiated is over 20 times greater than the largest GDPR fine to date... IV. The Settlement Far Exceeds What Could be Achieved in Litigation and Gives Consumers Meaningful Protections Now... Even assuming the FTC would prevail in litigation, a court would not give the Commission carte blanche to reorganize Facebook’s governance structures and business operations as we deem fit. Instead, the court would impose the relief. Such relief would be limited to injunctive relief to remedy the specific proven violations... V. Mark Zuckerberg is Being Held Accountable and the Order Cabins His Authority Our dissenting colleagues argue that the Commission should not have settled because the Commission’s investigation provides an inadequate basis for the decision not to name Mark Zuckerberg personally as a defendant... The provisions of this Order extinguish the ability of Mr. Zuckerberg to make privacy decisions unilaterally by also vesting responsibility and accountability for those decisions within business units, DCOs, and the privacy committee... the Order significantly diminishes Mr. Zuckerberg’s power — something no government agency, anywhere in the world, has thus far accomplished. The Order requires multiple information flows and imposes a robust system of checks and balances..."

Time will tell how effective the order's restrictions and the $5 billion penalty are. That Facebook can easily afford the penalty suggests the amount is a weak deterrent. If all or part of the penalty is tax-deductible (yes, tax-deductible fines have happened before, directly reducing companies' taxes), then that would weaken the deterrent effect further. And, if all or part of the fine is tax-deductible, then we taxpayers just paid for part of Facebook's alleged wrongdoing. I'll bet most taxpayers wouldn't want that.

Facebook stated in a July 24th news release that its second-quarter 2019 earnings included:

"... an additional $2.0 billion legal expense related to the U.S. Federal Trade Commission (FTC) settlement and a $1.1 billion income tax expense due to the developments in Altera Corp. v. Commissioner, as discussed below. As the FTC expense is not expected to be tax-deductible, it had no effect on our provision for income taxes... In July 2019, we entered into a settlement and modified consent order to resolve the inquiry of the FTC into our platform and user data practices. Among other matters, our settlement with the FTC requires us to pay a penalty of $5.0 billion and to significantly enhance our practices and processes for privacy compliance and oversight. In particular, we have agreed to implement a comprehensive expansion of our privacy program, including substantial management and board of directors oversight, stringent operational requirements and reporting obligations, and a process to regularly certify our compliance with the privacy program to the FTC. In the second quarter of 2019, we recorded an additional $2.0 billion accrual in connection with our settlement with the FTC, which is included in accrued expenses and other current liabilities on our condensed consolidated balance sheet."

"Not expected to be" is not the same as definitely not. And, business expenses reduce a company's taxable net income.

A copy of the FTC settlement order with Facebook is also available here (Adobe PDF format; 920K bytes). Plus, there is more:

"... the FTC also announced today separate law enforcement actions against data analytics company Cambridge Analytica, its former Chief Executive Officer Alexander Nix, and Aleksandr Kogan, an app developer who worked with the company, alleging they used false and deceptive tactics to harvest personal information from millions of Facebook users. Kogan and Nix have agreed to a settlement with the FTC that will restrict how they conduct any business in the future."

Cambridge Analytica was involved in the massive Facebook data breach in 2018 when persons allegedly posed as academic researchers in order to download Facebook users' profile information they really weren't authorized to access.

What are your opinions? Hopefully, some tax experts will weigh in about the fine.


EFF Filed Lawsuit In California Against AT&T To Stop Sales Of Wireless Customers' Realtime Geolocations

The Electronic Frontier Foundation (EFF) announced on July 16th that it had filed:

"... a class action lawsuit on behalf of AT&T customers in California to stop the telecom giant and two data location aggregators from allowing numerous entities—including bounty hunters, car dealerships, landlords, and stalkers—to access wireless customers’ real-time locations without authorization. An investigation by Motherboard earlier this year revealed that any cellphone user’s precise, real-time location could be bought for just $300. The report showed that carriers, including AT&T, were making this data available to hundreds of third parties without first verifying that users had authorized such access. AT&T not only failed to obtain its customers’ express consent, making matters worse, it created an active marketplace that trades on its customers’ real-time location data..."

The lawsuit, Scott, et al. v. AT&T Inc., et al., was filed in the U.S. District Court of the Northern District of California. The suit seeks money damages and an injunction against AT&T and the named location data aggregators: LocationSmart and Zumigo. The suit alleges AT&T violated the Federal Communications Act and engaged in deceptive practices under California’s unfair competition law. It also alleges that AT&T, LocationSmart, and Zumigo have violated California’s constitutional, statutory, and common law rights to privacy. The EFF is represented by Pierce Bainbridge Beck Price & Hecht LLP.