
VPN Service Provider Announced A Data Breach Incident Which Occurred in 2018

Consumers in the United States lost both control and privacy protections when the U.S. Federal Communications Commission (FCC), led by President Trump appointee Ajit Pai, a former Verizon lawyer, repealed both broadband privacy and net neutrality protections for consumers in 2017. Since then, many people have subscribed to Virtual Private Network (VPN) services to regain protection of their sensitive personal information and online activities.

NordVPN, a provider of VPN services, announced a data breach on Monday:

"1) One server was affected in March 2018 in Finland. The rest of our service was not affected. No other servers of any type were put at risk. This was an attack on our server, not our entire service; 2) The breach was made possible by poor configuration on a third-party datacenter’s part that we were never notified of. Evidence suggests that when the datacenter became aware of the intrusion, they deleted the accounts that had caused the vulnerabilities rather than notify us of their mistake. As soon as we learned of the breach, the server and our contract with the provider were terminated and we began an extensive audit of our service; 3) No user credentials were affected; 4) There are no signs that the intruder attempted to monitor user traffic in any way. Even if they had, they would not have had access to those users’ credentials..."

In 2018, NordVPN operated about 3,000 servers. It now operates about 5,000 servers. The NordVPN announcement includes more information including technical details.

Earlier this month, CNET and PC Magazine published their lists of the best VPN services in 2019. PC Magazine's list, which was published before the breach announcement, included NordVPN. So, it is always wise for consumers to do their research before switching to a VPN service.

What to make of this breach? We don't know who performed the attack. My impression: the attack seemed targeted, since few people probably use the single server in Finland. And, this cyberattack seemed very different from the massive retail attacks where hackers seek to steal the payment information (e.g., credit/debit card numbers) of thousands of consumers.

This cyberattack may have targeted a specific person. Perhaps, the attacker was a competitor or the government agency of a country NordVPN has refused to do business with. (Or, maybe this.) Hopefully, investigative journalists with more resources than this solo blogger will probe deeper.

Several things seem clear: a) cybercriminals have added VPN services to their list of high-value targets, b) hackers have identified the outsourcing vendors used by VPN service providers, and c) cyber attacks like this will probably continue. You might say this breach was a warning shot across the bow of the entire VPN industry. Seems like there is lots more news to come.


Survey: Consumers Use Smart Home Devices Despite Finding Them 'Creepy'

Last month, Selligent Marketing Cloud announced the results of a global survey about how consumers view various brands. Some of the findings included smart speakers or voice assistants. Key findings:

"Sixty-nine percent of surveyed consumers find it “creepy” when they receive ads based on unprompted cues from voice assistants like Apple’s Siri, Amazon’s Alexa and Google Home. Fifty-one percent are worried that voice assistants are listening to conversations without their consent."

Regarding voice assistants, younger consumers are more likely to believe they are being listened to without their knowledge: 58 percent of Gen-Z respondents (ages 18-24) held this view, versus 36 percent of Baby Boomers (ages 55-75). Key findings about privacy and social media: 41 percent of respondents said they have reduced their use of social media due to privacy concerns, and 32 percent said they quit at least one social media platform within the last 12 months.

Selligent surveyed 5,000 consumers in North America and Western Europe. The company provides services to help B2C marketers. To learn more, see the Selligent "Global Connected Consumer Index."


3 Countries Sent A Joint Letter Asking Facebook To Delay End-To-End Encryption Until Law Enforcement Has Back-Door Access. 58 Concerned Organizations Responded

Plenty of privacy and surveillance news recently. Last week, the governments of three countries sent a joint, open letter to Facebook.com asking the social media platform to delay implementation of end-to-end encryption in its messaging apps until back-door access can be provided for law enforcement.

Buzzfeed News published the joint, open letter by U.S. Attorney General William Barr, United Kingdom Home Secretary Priti Patel, acting U.S. Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton. The letter, dated October 4th, was sent to Mark Zuckerberg, the Chief Executive Officer of Facebook. It read in part:

"OPEN LETTER: FACEBOOK’S “PRIVACY FIRST” PROPOSALS

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

In your post of 6 March 2019, “A Privacy-Focused Vision for Social Networking,” you acknowledged that “there are real safety concerns to address before we can implement end-to-end encryption across all our messaging services.” You stated that “we have a responsibility to work with law enforcement and to help prevent” the use of Facebook for things like child sexual exploitation, terrorism, and extortion. We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.

We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications. We also respect promises made by technology companies to protect users’ data. Law abiding citizens have a legitimate expectation that their privacy will be protected. However, as your March blog post recognized, we must ensure that technology companies protect their users and others affected by their users’ online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world..."

The open, joint letter is also available on the United Kingdom government site. Mr. Zuckerberg's complete March 6, 2019 post is available here.

Earlier this year, the U.S. Federal Bureau of Investigation (FBI) issued a Request For Proposals (RFP) seeking quotes from technology companies to build a real-time social media monitoring tool. It seems such a tool would have limited utility without back-door access to encrypted social media accounts.

In 2016, the FBI filed a lawsuit to force Apple Inc. to build "back door" software to unlock an attacker's iPhone. Apple refused, since back-door software would provide access to any iPhone, not only that particular smartphone. Ultimately, the FBI found an offshore tech company to build the backdoor. Later that year, then FBI Director James Comey suggested a national discussion about encryption versus safety. It seems the country still hasn't had that conversation.

According to BuzzFeed, Facebook's initial response to the joint letter:

"In a three paragraph statement, Facebook said it strongly opposes government attempts to build backdoors."

We shall see if Facebook holds steady to that position. Privacy advocates quickly weighed in. The Electronic Frontier Foundation (EFF) wrote:

"This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people. Facebook should not comply. The letter comes in concert with the signing of a new agreement between the US and UK to provide access to allow law enforcement in one jurisdiction to more easily obtain electronic data stored in the other jurisdiction. But the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation... The letter focuses on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people—including journalists, human rights activists, and those at risk of abuse by intimate partners—use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations... What’s more, the backdoors into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, who might seek to spy on dissidents..."

The new agreement the EFF referred to was explained in this United Kingdom announcement:

"The world-first UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather than through governments, which can take years... The current process, which see requests for communications data from law enforcement agencies submitted and approved by central governments via Mutual Legal Assistance (MLA), can often take anywhere from six months to two years. Once in place, the Agreement will see the process reduced to a matter of weeks or even days."

"The Agreement will each year accelerate dozens of complex investigations into suspected terrorists and paedophiles... The US will have reciprocal access, under a US court order, to data from UK communication service providers. The UK has obtained assurances which are in line with the government’s continued opposition to the death penalty in all circumstances..."

On Friday, a group of 58 privacy advocates and concerned organizations from several countries sent a joint letter to Facebook regarding its end-to-end encryption plans. The Center For Democracy & Technology (CDT) posted the group's letter:

"Given the remarkable reach of Facebook’s messaging services, ensuring default end-to-end security will provide a substantial boon to worldwide communications freedom, to public safety, and to democratic values, and we urge you to proceed with your plans to encrypt messaging through Facebook products and services. We encourage you to resist calls to create so-called “backdoors” or “exceptional access” to the content of users’ messages, which will fundamentally weaken encryption and the privacy and security of all users."

It seems wise to have a conversation that discusses all of the advantages and disadvantages, rather than selectively focusing upon some serious crimes while ignoring other significant risks, since back-door software can be abused like any other technology. What are your opinions?


Vancouver, Canada Welcomed The 'Tesla Of The Cruise Industry.' Ports In France Consider Bans For Certain Cruise Ships

For drivers concerned about the environment and pollution, the automobile industry has offered hybrids (which run on both gasoline and electric battery power) and fully electric vehicles (which run solely on battery power). The same technology trend is underway within the cruise industry.

On September 26, the Port of Vancouver welcomed the MS Roald Amundsen. Some call this cruise ship the "Tesla of the cruise industry." The International Business Times explained:

"MS Roald Amundsen can be called Tesla of the cruise industry as it is similar to the electrically powered Tesla car that set off a revolution in the auto sector by running on batteries... The state of the art ship was unveiled earlier this year by Scandinavian cruise operator Hurtigruten. The cruise ship is one of the most sustainable cruise vessels with the distinction of being one of the two hybrid-electric cruise ships in the world. MS Roald Amundsen utilizes hybrid technology to save fuel and reduce carbon dioxide emissions by 20 percent."

With 15 cruise ships, Hurtigruten offers sailings to Norway, Iceland, Alaska, the Arctic, Antarctica, Europe, South America, and more. Named after the first explorer to reach the South Pole, the MS Roald Amundsen carries about 530 passengers.

While some cruise ships already use onboard solar panels to reduce fuel consumption, the MS Roald Amundsen is the first hybrid-electric cruise ship. It is an important step forward to prove that large ships can be powered in this manner.

Several ships in Royal Caribbean Cruise Line's fleet, including the Oasis of the Seas, have been outfitted with solar panels. The image on the right provides a view of the solar panels on the Celebrity Solstice cruise ship while it was docked in Auckland, New Zealand in March 2019. The panels are small and let sunlight through.

The Vancouver Is Awesome site explained why the city gave the MS Roald Amundsen special attention:

"... the Vancouver Fraser Port Authority, the federal agency responsible for the stewardship of the port, has set its vision to be the world’s most sustainable port. As a part of this vision, the port authority works to ensure the highest level of environmental protection is met in and around the Port of Vancouver. This commitment resulted in the port authority being the first in Canada and third in the world to offer shore power, an emissions-reducing initiative, for cruise ships. That said, a shared commitment to sustainability isn’t the only thing Hurtigruten has in common with our awesome city... The hybrid-electric battery used in the MS Roald Amundsen was created by Vancouver company, Corvus Energy."

Reportedly, the MS Roald Amundsen can operate for brief periods of time only on battery power, resulting in zero fuel usage and zero emissions. The Port of Vancouver's website explains its Approach to Sustainability policy:

"We are on a journey to meet our vision to become the world’s most sustainable port. In 2010 we embarked on a two-year scenario planning process with stakeholders called Port 2050, to improve our understanding of what the region may look like in the future... We believe a sustainable port delivers economic prosperity through trade, maintains a healthy environment, and enables thriving communities, through meaningful dialogue, shared aspirations and collective accountability. Our definition of sustainability includes 10 areas of focus and 22 statements of success..."

I encourage everyone to read the Port of Vancouver's 22 statements of success for a healthy environment and sustainable port. Selected statements from that list:

"Healthy ecosystems:
8) Takes a holistic approach to protecting and improving air, land and water quality to promote biodiversity and human health
9) Champions coordinated management programs to protect habitats and species. Climate action
10) Is a leader among ports in energy conservation and alternative energy to minimize greenhouse gas emissions..."

"Responsible practices:
12) Improves the environmental, social and economic performance of infrastructure through design, construction and operational practices
13) Supports responsible practices throughout the global supply chain..."

"Aboriginal relationships:
18) Respects First Nations’ traditional territories and values traditional knowledge
19) Embraces and celebrates Aboriginal culture and history
20) Understands and considers contemporary interests and aspirations..."

In separate but related news, government officials in the French Riviera city of Cannes are considering a ban of cruise ships to curb pollution. The Travel Pulse site reported:

"The ban would apply to passenger vessels that do not meet a 0.1 percent sulfur cap in their fuel emissions. Any cruise ship that attempted to enter the port that did not meet the higher standards would be turned away without allowing passengers to disembark."

During 2018, about 370,000 cruise ship passengers visited Cannes, making it the fourth busiest port in France. Officials are concerned about pollution. Other European ports are considering similar bans:

"Another French city, Saint-Raphael, has also instituted similar rules to curb the pollution of the water and air around the city. Other European ports such as Santorini and Venice have also cited cruise ships as a significant cause of over-tourism across the region."

If you live and/or work in a port city, it seems worthwhile to ask your local government or port authority what it is doing about sustainability and pollution. The video below explains some of the features in this new "expedition ship" with itineraries and activities that focus upon science:


Video courtesy of Hurtigruten

[Editor's note: this post was updated to include a photo of solar panels on the Celebrity Solstice cruise ship.]


Millions of Americans’ Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek.

[Editor's note: today's guest blog post, by reporters at ProPublica, explores data security issues within the healthcare industry and its outsourcing vendors. It is reprinted with permission.]

By Jack Gillum, Jeff Kao and Jeff Larson - ProPublica

Medical images and health data belonging to millions of Americans, including X-rays, MRIs and CT scans, are sitting unprotected on the internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the U.S. and millions more around the world. In some cases, a snoop could use free software programs — or just a typical web browser — to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers — computers that are used to store and retrieve medical data — in the U.S. that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers and mobile X-ray services.

The insecure servers we uncovered add to a growing list of medical records systems that have been compromised in recent years. Unlike some of the more infamous recent security breaches, in which hackers circumvented a company’s cyber defenses, these records were often stored on servers that lacked the security precautions that long ago became standard for businesses and government agencies.

"It’s not even hacking. It’s walking into an open door," said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them of what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of U.S. company MobilexUSA displayed the names of more than a million patients — all by typing in a simple data query. Their dates of birth, doctors and procedures were also included.

Alerted by ProPublica, MobilexUSA tightened its security earlier this month. The company takes mobile X-rays and provides imaging services to nursing homes, rehabilitation hospitals, hospice agencies and prisons. "We promptly mitigated the potential vulnerabilities identified by ProPublica and immediately began an ongoing, thorough investigation," MobilexUSA’s parent company said in a statement.

Another imaging system, tied to a physician in Los Angeles, allowed anyone on the internet to see his patients’ echocardiograms. (The doctor did not respond to inquiries from ProPublica.) All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates and, in some cases, Social Security numbers.

Experts say it’s hard to pinpoint who’s to blame for the failure to protect the privacy of medical images. Under U.S. law, health care providers and their business associates are legally accountable for securing the privacy of patient data. Several experts said such exposure of patient data could violate the Health Insurance Portability and Accountability Act, or HIPAA, the 1996 law that requires health care providers to keep Americans’ health data confidential and secure.

Although ProPublica found no evidence that patient data was copied from these systems and published elsewhere, the consequences of unauthorized access to such information could be devastating. "Medical records are one of the most important areas for privacy because they’re so sensitive. Medical knowledge can be used against you in malicious ways: to shame people, to blackmail people," said Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, a digital-rights group.

"This is so utterly irresponsible," he said.

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customer’s computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. "Suddenly, medical security has become a do-it-yourself project," Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the U.S.

Schrader found five servers in Germany and 187 in the U.S. that made patients’ records available without a password. ProPublica and Bayerischer Rundfunk also scanned Internet Protocol addresses and identified, when possible, which medical provider they belonged to.

ProPublica independently determined how many patients could be affected in America, and found some servers ran outdated operating systems with known security vulnerabilities. Schrader said that data from more than 13.7 million medical tests in the U.S. were available online, including more than 400,000 in which X-rays and other images could be downloaded.

The privacy problem traces back to the medical profession’s shift from analog to digital technology. Long gone are the days when film X-rays were displayed on fluorescent light boards. Today, imaging studies can be instantly uploaded to servers and viewed over the internet by doctors in their offices.

In the early days of this technology, as with much of the internet, little thought was given to security. The passage of HIPAA required patient information to be protected from unauthorized access. Three years later, the medical imaging industry published its first security standards.

Our reporting indicated that large hospital chains and academic medical centers did put security protections in place. Most of the cases of unprotected data we found involved independent radiologists, medical imaging centers or archiving services.

One German patient, Katharina Gaspari, got an MRI three years ago and said she normally trusts her doctors. But after Bayerischer Rundfunk showed Gaspari her images available online, she said: "Now, I am not sure if I still can." The German system that stored her records was locked down last week.

We found that some systems used to archive medical images also lacked security precautions. Denver-based Offsite Image left open the names and other details of more than 340,000 human and veterinary records, including those of a large cat named "Marshmellow," ProPublica found. An Offsite Image executive told ProPublica the company charges clients $50 for access to the site and then $1 per study. "Your data is safe and secure with us," Offsite Image’s website says.

The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.

"We were just never even aware that there was a possibility that could even happen," Nelms said.

In 1985, an industry group that included radiologists and makers of imaging equipment created a standard for medical imaging software. The standard, which is now called DICOM, spelled out how medical imaging devices talk to each other and share information.

We shared our findings with officials from the Medical Imaging & Technology Alliance, the group that oversees the standard. They acknowledged that there were hundreds of servers with an open connection on the internet, but suggested the blame lay with the people who were running them.

"Even though it is a comparatively small number," the organization said in a statement, "it may be possible that some of those systems may contain patient records. Those likely represent bad configuration choices on the part of those operating those systems."

Meeting minutes from 2017 show that a working group on security learned of Pianykh’s findings and suggested meeting with him to discuss them further. That “action item” was listed for several months, but Pianykh said he never was contacted. The medical imaging alliance told ProPublica last week that the group did not meet with Pianykh because the concerns that they had were sufficiently addressed in his article. They said the committee concluded its security standards were not flawed.

Pianykh said that misses the point. It’s not a lack of standards; it’s that medical device makers don’t follow them. “Medical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice,” Pianykh wrote in 2016.

ProPublica’s latest findings follow several other major breaches. In 2015, U.S. health insurer Anthem Inc. revealed that private data belonging to more than 78 million people was exposed in a hack. In the last two years, U.S. officials have reported that more than 40 million people have had their medical data compromised, according to an analysis of records from the U.S. Department of Health and Human Services.

Joy Pritts, a former HHS privacy official, said the government isn’t tough enough in policing patient privacy breaches. She cited an April announcement from HHS that lowered the maximum annual fine, from $1.5 million to $250,000, for what’s known as “corrected willful neglect” — the result of conscious failures or reckless indifference that a company tries to fix. She said that large firms would not only consider those fines as just the cost of doing business, but that they could also negotiate with the government to get them reduced. A ProPublica examination in 2015 found few consequences for repeat HIPAA offenders.

A spokeswoman for HHS’ Office for Civil Rights, which enforces HIPAA violations, said it wouldn’t comment on open or potential investigations.

"What we typically see in the health care industry is that there is Band-Aid upon Band-Aid applied" to legacy computer systems, said Singh, the cybersecurity expert. She said it’s a “shared responsibility” among manufacturers, standards makers and hospitals to ensure computer servers are secured.

"It’s 2019," she said. "There’s no reason for this."

How Do I Know if My Medical Imaging Data is Secure?

If you are a patient:

If you have had a medical imaging scan (e.g., X-ray, CT scan, MRI, ultrasound, etc.) ask the health care provider that did the scan — or your doctor — if access to your images requires a login and password. Ask your doctor if their office or the medical imaging provider to which they refer patients conducts a regular security assessment as required by HIPAA.

If you are a medical imaging provider or doctor’s office:

Researchers have found that picture archiving and communication systems (PACS) servers implementing the DICOM standard may be at risk if they are connected directly to the internet without a VPN or firewall, or if access to them does not require a secure password. You or your IT staff should make sure that your PACS server cannot be accessed via the internet without a VPN connection and password. If you know the IP address of your PACS server but are not sure whether it is (or has been) accessible via the internet, please reach out to us at [email protected].
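For readers who want a rough self-check, the reachability question above can be approximated with a basic TCP connection test. The sketch below is hypothetical: the IP address is a placeholder, and the port list assumes the common DICOM defaults of 104 and 11112, which your installation may not use.

```python
# Minimal sketch: test whether a host answers on common DICOM ports.
# Run it from outside your own network (e.g., a home connection) against your
# server's public IP address. The host below is a placeholder, not a real system.
import socket

HOST = "203.0.113.10"          # hypothetical public IP address of a PACS server
DICOM_PORTS = [104, 11112]     # ports commonly used for DICOM traffic; yours may differ

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in DICOM_PORTS:
        status = "reachable (no VPN required)" if port_is_open(HOST, port) else "closed or filtered"
        print(f"{HOST}:{port} -> {status}")
```

A connection test like this only shows whether a port answers from the network you run it on; an open port by itself does not prove patient data is exposed, and a closed port does not guarantee safety. Treat any unexpected exposure as a reason to involve your IT staff or a security professional.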

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Google Home Devices Recorded Users' Conversations. Legal Questions Result. Google Says It Is Investigating

Many consumers love the hands-free convenience of smart speakers. However, there are risks with the technology. BBC News reported on Thursday:

"Belgian broadcaster VRT exposed the recordings made by Google Home devices in Belgium and the Netherlands... VRT said the majority of the recordings it reviewed were short clips logged by the Google Home devices as owners used them. However, it said, 153 were "conversations that should never have been recorded" because the wake phrase of "OK Google" was not given. These unintentionally recorded exchanges included: a) blazing rows; b) bedroom chatter; c) parents talking to their children; d) phone calls exposing confidential information. It said it believed the devices logged these conversations because users said a word or phrase that sounded similar to "OK Google" that triggered the device..."

So, conversations that shouldn't have been recorded were recorded by Google Home devices. Consumers use the devices to perform and control a variety of tasks, such as entertainment (e.g., music, movies, games), internet searches (e.g., cooking recipes), security systems and cameras, thermostats, window blinds and shades, appliances (e.g., coffee makers), online shopping, and more.

The device software doesn't seem accurate, since it mistook similar-sounding phrases for wake phrases. Google calls these errors "false accepts." Google replied in a blog post:

"We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards... We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google."

"The Google Assistant only sends audio to Google after your device detects that you’re interacting with the Assistant—for example, by saying “Hey Google” or by physically triggering the Google Assistant... Rarely, devices that have the Google Assistant built in may experience what we call a “false accept.” This means that there was some noise or words in the background that our software interpreted to be the hotword (like “Ok Google”). We have a number of protections in place to prevent false accepts from occurring in your home... We also provide you with tools to manage and control the data stored in your account. You can turn off storing audio data to your Google account completely, or choose to auto-delete data after every 3 months or 18 months..."

To be fair, Google is not alone. Amazon Alexa devices also record and archive users' conversations. Would you want your bedroom chatter recorded (and stored indefinitely)? Or your conversations with your children? Many persons work remotely from home, so would you want business conversations with coworkers recorded? I think not. Very troubling news.

And, there is more.

This data security incident confirms that human workers listen to recordings by Google Assistant devices. Those workers can be employees or outsourced contractors. Who are these contractors, by name? What methods does Google employ to confirm privacy compliance by contractors? So many unanswered questions.

Also, according to U.S. News & World Report:

"Google's recording feature can be turned off, but doing so means Assistant loses some of its personalized touch. People who turn off the recording feature lose the ability for the Assistant to recognize individual voices and learn your voice pattern. Assistant recording is actually turned off by default — but the technology prompts users to turn on recording and other tools in order to get personalized features."

So, to get the full value of the technology, users must enable recordings. That sounds a lot like surveillance by design. Not good. You'd think that Google software developers would have built a standard vocabulary, or dictionary, in several languages (recorded by beta-test participants) to test the accuracy of Assistant software, rather than review users' actual conversations. I guess they considered it easier, faster, and cheaper to snoop on users.

Since Google already scans the contents of Gmail users' email messages, maybe this is simply technology creep and Google assumed nobody would mind human reviews of Assistant recordings.

About the review of recordings by human workers, the M.I.T. Technology Review said:

"Legally questionable: Because Google doesn’t inform users that humans review recordings in this way, and thus doesn’t seek their explicit consent for the practice, it’s quite possible that it could be breaking EU data protection regulations. We have asked Google for a response and will update if we hear back."

So, it will be interesting to see what European Union regulators have to say about the recordings and human reviews.

To summarize: consumers have willingly installed perpetual surveillance devices in their homes. What are your views of this data security incident? Do you enable recordings on your smart speakers? Should human workers have access to archives of your recorded conversations?


Aggression Detectors: What They Are, Who Uses Them, And Why

Like most people, you probably have not heard of "aggression detectors." What are these devices? Who makes them? Who uses these devices and why? Which consumers are affected?

To answer these questions, ProPublica explained who makes the devices and why:

"In response to mass shootings, some schools and hospitals are installing microphones equipped with algorithms. The devices purport to identify stress and anger before violence erupts... By deploying surveillance technology in public spaces like hallways and cafeterias, device makers and school officials hope to anticipate and prevent everything from mass shootings to underage smoking... Besides Sound Intelligence, South Korea-based Hanwha Techwin, formerly part of Samsung, makes a similar “scream detection” product that’s been installed in American schools. U.K.-based Audio Analytic used to sell its aggression- and gunshot-detection software to customers in Europe and the United States... Sound Intelligence CEO Derek van der Vorst said security cameras made by Sweden-based Axis Communications account for 90% of the detector’s worldwide sales, with privately held Louroe making up the other 10%... Mounted inconspicuously on the ceiling, Louroe’s smoke-detector-sized microphones measure aggression on a scale from zero to one. Users choose threshold settings. Any time they’re exceeded for long enough, the detector alerts the facility’s security apparatus, either through an existing surveillance system or a text message pinpointing the microphone that picked up the sound..."

The microphone-equipped sensors have been installed in a variety of industries. The Sound Intelligence website listed prisons, schools, public transportation, banks, healthcare institutes, retail stores, public spaces, and more. Louroe Electronics' site included a similar list plus law enforcement.

The ProPublica article also discussed several key issues. First, sensor accuracy and its own tests:

"... ProPublica’s analysis, as well as the experiences of some U.S. schools and hospitals that have used Sound Intelligence’s aggression detector, suggest that it can be less than reliable. At the heart of the device is what the company calls a machine learning algorithm. Our research found that it tends to equate aggression with rough, strained noises in a relatively high pitch, like [a student's] coughing. A 1994 YouTube clip of abrasive-sounding comedian Gilbert Gottfried ("Is it hot in here or am I crazy?") set off the detector, which analyzes sound but doesn’t take words or meaning into account... Sound Intelligence and Louroe said they prefer whenever possible to fine-tune sensors at each new customer’s location over a period of days or weeks..."

Second, accuracy concerns:

"[Sound Intelligence CEO] Van der Vorst acknowledged that the detector is imperfect and confirmed our finding that it registers rougher tones as aggressive. He said he “guarantees 100%” that the system will at times misconstrue innocent behavior. But he’s more concerned about failing to catch indicators of violence, and he said the system gives schools and other facilities a much-needed early warning system..."

This is interesting and troubling. Sound Intelligence's position seems to suggest that it is okay for the sensor to misidentify innocent persons as aggressive in order to avoid failing to identify truly aggressive persons seeking to do harm. That sounds like the old saying: the ends justify the means. Not good. The harm to innocent persons matters, especially when they are young students.

Yesterday's blog post described a far better corporate approach. Based upon current inaccuracies and biases in the technology, a police body camera maker assembled an ethics board to help guide its decisions regarding the technology, and then followed that board's recommendation not to implement facial recognition in its devices. It would implement facial recognition only when the inaccuracies and biases are resolved.

What ethics boards have Sound Intelligence, Louroe, and other aggression detector makers utilized?

Third, the use of aggression detectors raises the issue of notice. Are there physical postings on-site at schools, hospitals, healthcare facilities, and other locations? Notice seems appropriate, especially since almost all entities provide notice (e.g., terms of service, privacy policy) for visitors to their websites.

Fourth, privacy concerns:

"Although a Louroe spokesman said the detector doesn’t intrude on student privacy because it only captures sound patterns deemed aggressive, its microphones allow administrators to record, replay and store those snippets of conversation indefinitely..."

I encourage parents of school-age children to read the entire ProPublica article. Concerned parents may demand explanations by school officials about the surveillance activities and devices used within their children's schools. Teachers may also be concerned. Patients at healthcare facilities may also be concerned.

Concerned persons may seek answers to several issues:

  • The vendor selection process, which aggression detector devices were selected, and why
  • Evidence supporting the accuracy of aggression detectors used
  • The school's/hospital's policy, if it has one, covering surveillance devices; plus any posted notices
  • The treatment and rights of wrongly identified persons (e.g., students, patients, visitors, staff) by aggression detector devices
  • Approaches by the vendor and school to improve device accuracy for both types of errors: a) wrongly identified persons, and b) failures to identify truly aggressive or threatening persons
  • How long the school and/or vendor archive recorded conversations
  • What persons have access to the archived recordings
  • The data security methods used by the school and by the vendor to prevent unauthorized access and abuse of archived recordings
  • All entities, by name, which the school and/or vendor share archived recordings with

What are your opinions of aggression detectors? Of device inaccuracy? Of the privacy concerns?


Brave Alerts FTC On Threats From Business Practices With Big Data

The U.S. Federal Trade Commission (FTC) held a "Privacy, Big Data, And Competition" hearing on November 6-8, 2018 as part of its "Competition And Consumer Protection in the 21st Century" series of discussions. During that session, the FTC asked for input on several topics:

  1. "What is “big data”? Is there an important technical or policy distinction to be drawn between data and big data?
  2. How have developments involving data – data resources, analytic tools, technology, and business models – changed the understanding and use of personal or commercial information or sensitive data?
  3. Does the importance of data – or large, complex data sets comprising personal or commercial information – in a firm’s ordinary course operations change how the FTC should analyze mergers or firm conduct? If so, how? Does data differ in importance from other assets in assessing firm or industry conduct?
  4. What structural, behavioral or conduct remedies should the FTC consider when remedying antitrust harm in a market or industry where data or personal or commercial information are a significant product or a key competitive input?
  5. Are there policy recommendations that would facilitate competition in markets involving data or personal or commercial information that the FTC should consider?
  6. Do the presence of personal information or privacy concerns inform or change competition analysis?
  7. How do state, federal, and international privacy laws and regulations, adopted to protect data and consumers, affect competition, innovation, and product offerings in the United States and abroad?"

Brave, the developer of a web browser, submitted comments to the FTC which highlighted two concerns:

"First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate. Second, the GDPR is emerging as a de facto international standard. Whether this helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws."

A letter by Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, described in detail the company's concerns:

"The cross-use and offensive leveraging of personal information from one line of business to another is likely to have anti-competitive effects. Indeed anti-competitive practices may be inevitable when companies with Google’s degree of market dominance update their privacy policies to include the cross-use of personal information. The result is that a company can leverage all the personal information accumulated from its users in one line of business to dominate other lines of business too. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects... The result is that nascent and potential competitors will be stifled, and consumer choice will be limited... The cross-use of data between different lines of business is analogous to the tying of two products. Indeed, tying and cross-use of data can occur at the same time, as Google Chrome’s latest “auto sign in to everything” controversy illustrates..."

Historically, Google let Chrome web browser users decide whether or not to sign in for cross-device usage. The Chrome 69 update forced auto sign-in, but a Chrome 70 update restored users' choice after numerous complaints and criticism.

Regarding topic #7 by the FTC, Brave's response said:

"A de facto international standard appears to be emerging, based on the European Union’s General Data Protection Regulation (GDPR)... the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China bring more than half of global GDP under a common standard. Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies... there is an opportunity in this trend. The United States can assume the global lead by adopting the emerging GDPR standard, and by investing in world-leading regulation that pursues test cases, and defines practical standards..."

Currently, companies collect, archive, share, and sell consumers' personal information at will -- often without notice or consent. While all 50 states and several territories have breach notification laws, most states have not upgraded those laws to include biometric and passport data. While the Health Insurance Portability and Accountability Act (HIPAA) is the federal law which governs healthcare data and related breaches, many consumers share health data with social media sites -- robbing themselves of HIPAA protections.

Moreover, it's an unregulated free-for-all of data collection, archiving, and sharing by telecommunications companies after the 2017 repeal of broadband privacy protections for consumers in the USA. Plus, laws have historically focused upon "declared data" (e.g., the data users upload or submit to websites or apps) while ignoring "inferred data" -- which is arguably just as sensitive and revealing.

Regarding future federal privacy legislation, Brave added:

"... The GDPR is compatible with a United States view of consumer protection and privacy principles. Indeed, the FTC has proposed important privacy protections to legislators in 2009, and again in 2012 and 2014, which ended up being incorporated in the GDPR. The high-level principles of the GDPR are closely aligned, and often identical to, the United States’ privacy principles... The GDPR also incorporates principles endorsed by the U.S. in the 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data; and the principles endorsed by the United States this year, in Article 19.8 (3) of the new United States-Mexico-Canada Agreement."

"The GDPR differs from established United States privacy principles in its explicit reference to “proportionality” as a precondition of data use, and in its more robust approach to data minimization and to purpose specification. In our view, a federal law should incorporate these elements too. We also recommend that federal law should adopt the GDPR definitions of concepts such as “personal data”, “legal basis” including opt-in “consent”, “processing”, “special category personal data”, ”profiling”, “data controller”, “automated decision making”, “purpose limitation”, and so forth, and tools such as data protection impact assessments, breach notification, and records of processing activities."

"In keeping with the fair information practice principles (FIPPs) of the 1974 US Privacy Act, Brave recommends that a federal law should require that the collection of personal information is subject to purpose specification. This means that personal information shall only be collected for specific and explicit purposes. Personal information should not used beyond those purposes without consent, unless a further purpose is poses no risk of harm and is compatible with the initial purpose, in which case the data subject should have the opportunity to opt-out."

Submissions by Brave and others are available to the public at the FTC website in the "Public Comments" section.


Study: Privacy Concerns Have Caused Consumers To Change How They Use The Internet

Facebook commissioned a study by the Economist Intelligence Unit (EIU) to understand "internet inclusion" globally, or how people use the Internet, the benefits received, and the obstacles experienced. The latest survey included 5,069 respondents from 100 countries in Asia-Pacific, the Americas, Europe, the Middle East, North Africa and Sub-Saharan Africa.

Overall findings in the report cited:

"... cause for both optimism and concern. We are seeing steady progress in the number and percentage of households connected to the Internet, narrowing the gender gap and improving accessibility for people with disabilities. The Internet also has become a crucial tool for employment and obtaining job-related skills. On the other hand, growth in Internet connections is slowing, especially among the lowest income countries, and efforts to close the digital divide are stalling..."

The EIU describes itself as "the world leader in global business intelligence," helping companies, governments, and banks understand how the world is changing, seize opportunities created by those changes, and manage associated risks. So, any provider of social media services globally would greatly value the EIU's services.

The chart below highlights some of the benefits mentioned by survey respondents:

[Chart: internet benefits cited by respondents, EIU Inclusive Internet Index 2019 survey]

Respondents cited other benefits, too: almost three-quarters (74.4%) said the Internet is more effective than other methods for finding jobs, and 70.5% said their job prospects have improved due to the Internet. So, job seekers and employers both benefit.

Key findings regarding online privacy (emphasis added):

"... More than half (52.2%) of [survey] respondents say they are not confident about their online privacy, hardly changed from 51.5% in the 2018 survey... Most respondents are changing the way they use the Internet because they believe some information may not remain private. For example, 55.8% of respondents say they limit how much financial information they share online because of privacy concerns. This is relatively consistent across different age groups and household income levels... 42.6% say they limit how much personal health and medical information they share. Only 7.5% of respondents say privacy concerns have not changed the way they use the Internet."

So, the lack of online privacy affects how people use the internet -- for business and pleasure. The chart below highlights the types of online changes:

[Chart: how privacy concerns have changed internet usage, EIU Inclusive Internet Index 2019 survey]

Findings regarding privacy and online shopping:

"Despite lingering privacy concerns, people are increasingly shopping online. Whether this continues in the future may hinge on attitudes toward online safety and security... A majority of respondents say that making online purchases is safe and secure, but, at 58.8% it was slightly lower than the 62.1% recorded in the 2018 survey."

So, the percentage of respondents who said online purchases are safe and secure went in the wrong direction -- down. Not good. There were regional differences, too, about online privacy:

"In Europe, the share of respondents confident about their online privacy increased by 8 percentage points from the 2018 survey, probably because of the General Data Protection Regulation (GDPR), the EU’s comprehensive data privacy rules that came into force in May 2018. However, the Middle East and North Africa region saw a decline of 9 percentage points compared with the 2018 survey."

So, sensible legislation to protect consumers' online privacy can have positive impacts. There were other regional differences:

"Trust in online sources of information remained relatively stable, except in the West. Political turbulence in the US and UK may have played a role in causing the share of respondents in North America and Europe who say they trust information on government websites and apps to retreat by 10 percentage points and 6 percentage points, respectively, compared with the 2018 survey."

So, stability is important. The report's authors concluded:

"The survey also reflects anxiety about online privacy and a decline in trust in some sources of information. Indeed, trust in government information has fallen since last year in Europe and North America. The growth and importance of the digital economy will mean that alleviating these anxieties should be a priority of companies, governments, regulators and developers."

Addressing those anxieties is critical, if governments in the West are serious about facilitating business growth via consumer confidence and internet usage. Download the Inclusive Internet Index 2019 Executive Summary (Adobe PDF) report.


UK Parliamentary Committee Issued Its Final Report on Disinformation And Fake News. Facebook And Six4Three Discussed

On February 18th, a United Kingdom (UK) parliamentary committee published its final report on disinformation and "fake news." The 109-page report by the Digital, Culture, Media, And Sport Committee (DCMS) updates its interim report from July, 2018.

The report covers many issues: political advertising (including so-called "dark adverts" placed by unidentified sponsors), Brexit and UK elections, data breaches, privacy, and recommendations for UK regulators and government officials. It seems wise to understand the report's findings regarding the business practices of the U.S.-based companies mentioned, since those business practices affect consumers globally, including consumers in the United States.

Issues Identified

First, the DCMS' final report built upon issues identified in its:

"... Interim Report: the definition, role and legal liabilities of social media platforms; data misuse and targeting, based around the Facebook, Cambridge Analytica and Aggregate IQ (AIQ) allegations, including evidence from the documents we obtained from Six4Three about Facebook’s knowledge of and participation in data-sharing; political campaigning; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy..."

The final report includes input from 23 "oral evidence sessions," more than 170 written submissions, interviews of at least 73 witnesses, and more than 4,350 questions asked at hearings. The DCMS Committee sought input from individuals, organizations, industry experts, and other governments. Some of the information sources:

"The Canadian Standing Committee on Access to Information, Privacy and Ethics published its report, “Democracy under threat: risks and solutions in the era of disinformation and data monopoly” in December 2018. The report highlights the Canadian Committee’s study of the breach of personal data involving Cambridge Analytica and Facebook, and broader issues concerning the use of personal data by social media companies and the way in which such companies are responsible for the spreading of misinformation and disinformation... The U.S. Senate Select Committee on Intelligence has an ongoing investigation into the extent of Russian interference in the 2016 U.S. elections. As a result of data sets provided by Facebook, Twitter and Google to the Intelligence Committee -- under its Technical Advisory Group -- two third-party reports were published in December 2018. New Knowledge, an information integrity company, published “The Tactics and Tropes of the Internet Research Agency,” which highlights the Internet Research Agency’s tactics and messages in manipulating and influencing Americans... The Computational Propaganda Research Project and Graphika published the second report, which looks at activities of known Internet Research Agency accounts, using Facebook, Instagram, Twitter and YouTube between 2013 and 2018, to impact US users"

Why Disinformation

Second, definitions matter. According to the DCMS Committee:

"We have even changed the title of our inquiry from “fake news” to “disinformation and ‘fake news’”, as the term ‘fake news’ has developed its own, loaded meaning. As we said in our Interim Report, ‘fake news’ has been used to describe content that a reader might dislike or disagree with... We were pleased that the UK Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation'..."

Overall Recommendations

Summary recommendations from the report:

  1. "Compulsory Code of Ethics for tech companies overseen by independent regulator,
  2. Regulator given powers to launch legal action against companies breaching code,
  3. Government to reform current electoral communications laws and rules on overseas involvement in UK elections, and
  4. Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation"

Role And Liability Of Tech Companies

Regarding detailed observations and findings about the role and liability of tech companies, the report stated:

"Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company..."

The UK Government and its regulators may adopt some, all, or none of the report's recommendations. More observations and findings in the report:

"... both social media companies and search engines use algorithms, or sequences of instructions, to personalize news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritized. Therefore, negative stories will always be prioritized by algorithms, as they are shared more frequently than positive stories... Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers... Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms.” Facebook should be taking a more active and urgent role in tackling such inherent biases..."

Based upon this, the report recommended that the UK's new Centre for Data Ethics and Innovation should play a key role as an advisor to the UK Government by continually analyzing and anticipating gaps in governance and regulation, suggesting best practices and corporate codes of conduct, and setting standards for artificial intelligence (AI) and related technologies.
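To make the report's point about engagement-driven prioritization concrete, here is a minimal, purely hypothetical sketch of a feed-ranking score weighted toward highly shareable (and often negative) content. The post names, weights, and numbers are invented for illustration; this is not Facebook's actual algorithm.

    # Hypothetical posts with predicted engagement. Shares are weighted heavily,
    # which is one way emotionally charged stories end up prioritized.
    posts = [
        {"id": "calm-explainer", "predicted_clicks": 120, "predicted_shares": 10},
        {"id": "outrage-story",  "predicted_clicks": 90,  "predicted_shares": 60},
    ]

    def rank(posts, click_weight=1.0, share_weight=5.0):
        # Higher score = shown higher in the feed.
        return sorted(
            posts,
            key=lambda p: click_weight * p["predicted_clicks"] + share_weight * p["predicted_shares"],
            reverse=True,
        )

    print([p["id"] for p in rank(posts)])  # ['outrage-story', 'calm-explainer']

Even with these toy numbers, the less-clicked but more-shared post wins, which is the dynamic the report describes.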

Inferred Data

The report also discussed a critical issue related to algorithms (emphasis added):

"... When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will. However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users... In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through the data profile matching and the ‘lookalike audience’ advertising targeting tool... Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power of where that data goes and what it is used for..."

The distinction between uploaded and inferred data cannot be overemphasized. It is critical when evaluating tech companies' statements, policies (e.g., privacy, terms of use), and promises about what "data" users have control over. Wise consumers must insist upon clear definitions to avoid being misled or duped.

What might be an example of inferred data? What comes to mind is Facebook's Ad Preferences feature, which allows users to review and delete the "Interests" -- advertising categories -- Facebook assigns to each user's profile. (The service's algorithms assign Interests based on groups/pages/events/advertisements users "Liked" or clicked on, posts submitted, posts commented upon, and more.) These "Interests" are inferred data, since Facebook assigned them and users didn't.

In fact, Facebook doesn't notify its users when it assigns new Interests. It just does it. And, Facebook can assign Interests whether you interacted with an item once or many times. How relevant is an Interest assigned after a single interaction, "Like," or click? Most people would say: not relevant. So, does the Interests list assigned to users' profiles accurately describe users? Do Facebook users own the Interests list assigned to their profiles? Any control Facebook users have seems minimal. Why? Facebook users can delete Interests assigned to their profiles, but users cannot stop Facebook from applying new Interests. Users cannot prevent Facebook from re-applying Interests previously deleted. Deleting Interests doesn't reduce the number of ads users see on Facebook.

The only way to know what Interests have been assigned is for Facebook users to visit the Ad Preferences section of their profiles and browse the list. Depending upon how frequently a person uses Facebook, it may be necessary to prune an Interests list at least once monthly -- a cumbersome and time-consuming task, probably designed that way to discourage reviews and pruning. And, that's only one example of inferred data. There are probably plenty more examples, and as the report emphasizes, users don't have access to all of the inferred data associated with their profiles.
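As a rough illustration of the problem, here is a hypothetical sketch of how an "Interest" could be inferred from a handful of interactions, and why deleting one doesn't stick. The categories, actions, and logic below are invented for illustration only; they are not Facebook's actual system.

    # Hypothetical interaction log: (advertising category, action)
    interactions = [
        ("Gardening", "like"), ("Gardening", "click"), ("Jazz", "like"),
    ]

    def infer_interests(interactions):
        # Even a single "like" or click is enough for a category to be assigned.
        return {category for category, _ in interactions}

    profile = infer_interests(interactions)
    profile.discard("Jazz")                  # the user deletes an Interest...
    profile = infer_interests(interactions)  # ...but the next inference pass re-applies it
    print(profile)                           # {'Gardening', 'Jazz'}

The deletion affects only the current list, not the underlying behavioral data the inference is re-run against, which is why pruning never ends.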

Now, back to the report. To fix problems with inferred data, the DCMS recommended:

"We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information."

Business Practices At Facebook

Next, the DCMS Committee's report said plenty about Facebook, its management style, and executives (emphasis added):

"Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinized... Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated... He discussed the California Consumer Privacy Act, which Facebook supported in public, but lobbied against, behind the scenes... By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world. The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which -- unsurprisingly -- failed to address all of our questions. We are left in no doubt that this strategy was deliberate."

So, based upon Facebook's actions (or lack thereof), the DCMS concluded that Facebook executives intentionally ducked and dodged issues and questions.

While discussing data use and targeting, the report said more about data breaches and Facebook:

"The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests..."

So, internal management failed. That's not all. After a detailed review of the GSR/Cambridge Analytica breach and Facebook's 2011 Consent Decree with the U.S. Federal Trade Commission (FTC), the DCMS Committee concluded (emphasis and text link added):

"The Cambridge Analytica scandal was facilitated by Facebook’s policies. If it had fully complied with the FTC settlement, it would not have happened. The FTC Complaint of 2011 ruled against Facebook -- for not protecting users’ data and for letting app developers gain as much access to user data as they liked, without restraint -- and stated that Facebook built their company in a way that made data abuses easy. When asked about Facebook’s failure to act on the FTC’s complaint, Elizabeth Denham, the Information Commissioner, told us: “I am very disappointed that Facebook, being such an innovative company, could not have put more focus, attention and resources into protecting people’s data”. We are equally disappointed."

Wow! Not good. There's more:

"... a current court case at the San Mateo Superior Court in California also concerns Facebook’s data practices. It is alleged that Facebook violated the privacy of US citizens by actively exploiting its privacy policy... The published ‘corrected memorandum of points and authorities to defendants’ special motions to strike’, by the complainant in the case, the U.S.-based app developer Six4Three, describes the allegations against Facebook; that Facebook used its users’ data to persuade app developers to create platforms on its system, by promising access to users’ data, including access to data of users’ friends. The case also alleges that those developers that became successful were targeted and ordered to pay money to Facebook... Six4Three lodged its original case in 2015, after Facebook removed developers’ access to friends’ data, including its own. The DCMS Committee took the unusual, but lawful, step of obtaining these documents, which spanned between 2012 and 2014... Since we published these sealed documents, on 14 January 2019 another court agreed to unseal 135 pages of internal Facebook memos, strategies and employee emails from between 2012 and 2014, connected with Facebook’s inappropriate profiting from business transactions with children. A New York Times investigation published in December 2018 based on internal Facebook documents also revealed that the company had offered preferential access to users data to other major technology companies, including Microsoft, Amazon and Spotify."

"We believed that our publishing the documents was in the public interest and would also be of interest to regulatory bodies... The documents highlight Facebook’s aggressive action against certain apps, including denying them access to data that they were originally promised. They highlight the link between friends’ data and the financial value of the developers’ relationship with Facebook. The main issues concern: ‘white lists’; the value of friends’ data; reciprocity; the sharing of data of users owning Android phones..."

You can read the report's detailed descriptions of those issues. A summary: a) Facebook allegedly used promises of access to users' data to lure developers (often by overriding Facebook users' privacy settings); b) some developers got priority treatment based upon unclear criteria; c) developers who didn't spend enough money with Facebook were denied access to data previously promised; d) Facebook's reciprocity clause demanded that developers also share their users' data with Facebook; e) Facebook's mobile app for Android OS phone users collected far more data about users, allegedly without consent, than users were told; and f) Facebook allegedly targeted certain app developers (emphasis added):

"We received evidence that showed that Facebook not only targeted developers to increase revenue, but also sought to switch off apps where it considered them to be in competition or operating in a lucrative areas of its platform and vulnerable to takeover. Since 1970, the US has possessed high-profile federal legislation, the Racketeer Influenced and Corrupt Organizations Act (RICO); and many individual states have since adopted similar laws. Originally aimed at tackling organized crime syndicates, it has also been used in business cases and has provisions for civil action for damages in RICO-covered offenses... Despite specific requests, Facebook has not provided us with one example of a business excluded from its platform because of serious data breaches. We believe that is because it only ever takes action when breaches become public. We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that “we’ve never sold anyone’s data” is simply untrue.” The evidence that we obtained from the Six4Three court documents indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers—such as Six4Three—of that data, thereby causing them to lose their business. It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission settlement."

"The Information Commissioner told the Committee that Facebook needs to significantly change its business model and its practices to maintain trust. From the documents we received from Six4Three, it is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws. The ICO should carry out a detailed investigation into the practices of the Facebook Platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ of the sharing of data."

The Information Commissioner's Office (ICO) is one of the regulatory agencies within the UK. So, the Committee concluded that Facebook's real business model is, "data transfer for value" -- in other words: have money, get access to data (regardless of Facebook users' privacy settings).

One quickly gets the impression that Facebook acted like a monopoly in its treatment of both users and developers... or worse, like organized crime. The report concluded (emphasis added):

"The Competitions and Market Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media. The Committee made this recommendation its interim report, and we are pleased that it has also been supported in the independent Cairncross Report commissioned by the government and published in February 2019. Given the contents of the Six4Three documents that we have published, it should also investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail... Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law."

The DCMS Committee's report also discussed findings from the Cairncross Report. In summary, Damian Collins MP, Chair of the DCMS Committee, said:

“... we cannot delay any longer. Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalized ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use everyday. Much of this is directed from agencies working in foreign countries, including Russia... Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers... We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct..."

So, the report seems extensive, comprehensive, and detailed. Read the DCMS Committee's announcement, and/or download the full DCMS Committee report (Adobe PDF format, 3,507 kilobytes).

One can assume that governments' intelligence and spy agencies will continue to do what they've always done: collect data about targets and adversaries, and use disinformation and other tools to attempt to meddle in other governments' activities. It is clear that social media makes these tasks far easier than before. The DCMS Committee's report provided recommendations about what the UK Government's response should be. Other countries' governments face similar decisions about their responses, if any, to the threats.

Given the data in the DCMS report, it will be interesting to see how the FTC and lawmakers in the United States respond. If increased regulation of social media results, tech companies arguably have only themselves to blame. What do you think?


Google Fined 50 Million Euros For Violations Of New European Privacy Law

Google logo Google has been fined 50 million Euros (about U.S. $57 million) under the new European privacy law for failing to properly disclose to users how their data is collected and used for targeted advertising. The European Union's General Data Protection Regulation, which went into effect in May 2018, gives EU residents more control over their information and how companies use it.

After receiving two complaints last year from privacy-rights groups, France's National Data Protection Commission (CNIL) announced earlier this month:

"... CNIL carried out online inspections in September 2018. The aim was to verify the compliance of the processing operations implemented by GOOGLE with the French Data Protection Act and the GDPR by analysing the browsing pattern of a user and the documents he or she can have access, when creating a GOOGLE account during the configuration of a mobile equipment using Android. On the basis of the inspections carried out, the CNIL’s restricted committee responsible for examining breaches of the Data Protection Act observed two types of breaches of the GDPR."

The first violation involved transparency failures:

"... information provided by GOOGLE is not easily accessible for users. Indeed, the general structure of the information chosen by the company does not enable to comply with the Regulation. Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information. The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions... some information is not always clear nor comprehensive. Users are not able to fully understand the extent of the processing operations carried out by GOOGLE. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner..."

So, important information is buried and scattered across several documents, making it difficult for users to access and understand. The second violation involved the legal basis for personalized ads processing:

"... GOOGLE states that it obtains the user’s consent to process data for ads personalization purposes. However, the restricted committee considers that the consent is not validly obtained for two reasons. First, the restricted committee observes that the users’ consent is not sufficiently informed. The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent. For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Playstore, Google pictures, etc.) and therefore of the amount of data processed and combined."

"[Second], the restricted committee observes that the collected consent is neither “specific” nor “unambiguous.” When an account is created, the user can admittedly modify some options associated to the account by clicking on the button « More options », accessible above the button « Create Account ». It is notably possible to configure the display of personalized ads. That does not mean that the GDPR is respected. Indeed, the user not only has to click on the button “More options” to access the configuration, but the display of the ads personalization is moreover pre-ticked. However, as provided by the GDPR, consent is “unambiguous” only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance). Finally, before creating an account, the user is asked to tick the boxes « I agree to Google’s Terms of Service» and « I agree to the processing of my information as described above and further explained in the Privacy Policy» in order to create the account. Therefore, the user gives his or her consent in full, for all the processing operations purposes carried out by GOOGLE based on this consent (ads personalization, speech recognition, etc.). However, the GDPR provides that the consent is “specific” only if it is given distinctly for each purpose."

So, not only is important information buried and scattered across multiple documents (again), but also critical boxes for users to give consent are pre-checked when they shouldn't be.

CNIL explained its reasons for the massive fine:

"The amount decided, and the publicity of the fine, are justified by the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent. Despite the measures implemented by GOOGLE (documentation and configuration tools), the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations... Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement..."

This is the largest fine, so far, under the GDPR. Reportedly, Google will appeal the fine:

"We've worked hard to create a GDPR consent process for personalised ads that is as transparent and straightforward as possible, based on regulatory guidance and user experience testing... We're also concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond... For all these reasons, we've now decided to appeal."

This is not the first EU fine for Google. CNet reported:

"Google is no stranger to fines under EU laws. It's currently awaiting the outcome of yet another antitrust investigation -- after already being slapped with a $5 billion fine last year for anticompetitive Android practices and a $2.7 billion fine in 2017 over Google Shopping."


Google To EU Regulators: No One Country Should Censor The Web Globally. Poll Finds Canadians Support 'Right To Be Forgotten'

For those watching privacy legislation in Europe, MediaPost reported:

"... Maciej Szpunar, an advisor to the highest court in the EU, sided with Google in the fight, arguing that the right to be forgotten should only be enforceable in Europe -- not the entire world. The opinion is non-binding, but seen as likely to be followed."

For those unfamiliar, in the European Union (EU) the right to be forgotten:

"... was created in 2014, when EU judges ruled that Google (and other search engines) must remove links to embarrassing information about Europeans at their request... The right to be forgotten doesn't exist in the United States... Google interpreted the EU's ruling as requiring removal of links to material in search engines designed for European countries but not from its worldwide search results... In 2015, French regulators rejected Google's position and ordered the company to remove material from all of its results pages. Google then asked Europe's highest court to reject that view. The company argues that no one country should be able to censor the web internationally."

No one corporation should be able to censor the web globally, either. Meanwhile, Radio Canada International reported:

"A new poll shows a slim majority of Canadians agree with the concept known as the “right to be forgotten online.” This means the right to have outdated, inaccurate, or no longer relevant information about yourself removed from search engine results. The poll by the Angus Reid Institute found 51 percent of Canadians agree that people should have the right to be forgotten..."

Consumers should have control over their information. If that control is limited to only the country of their residence, then the global nature of the internet means that control is very limited -- and probably irrelevant. What are your opinions?


Gigantic Data Breach At Marriott International Affects 500 Million Customers. Plenty Of Questions Remain

Marriott International logo A gigantic data breach at Marriott International affects about 500 million customers who have stayed at its Starwood network of hotels in the United States, Canada, and the United Kingdom. Marriott International announced the data breach on Friday, November 30th, and set up a website for affected Starwood guests.

According to its breach announcement, an "internal security tool" discovered the breach on September 8, 2018. The initial data breach investigation determined that unauthorized persons had accessed its registration database as far back as 2014, and had both copied and encrypted information before removing it. Marriott engaged security experts; the information was partially decrypted on November 19, 2018, and the global hotel chain determined that the information was from its Starwood guest reservation database.

Starwood Preferred Guest logo The Starwood hotels network includes brands such as W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Le Méridien Hotels & Resorts, Four Points by Sheraton, and more. Marriott has not finished decrypting all information, so there may be future updates from the breach investigation.

For 327 million guests, the personal data items stolen included a combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest ("SPG") account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences. For some guests, the information stolen also included payment card numbers and payment card expiration dates. While Marriott said the payment card numbers were encrypted using Advanced Encryption Standard encryption (AES-128), it warned that it doesn't yet know if the encryption keys (needed to decrypt payment information) were also stolen.
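To see why the fate of the encryption keys matters so much, consider a minimal sketch of AES-128 encryption, here using the AES-GCM mode from the third-party Python cryptography package (Marriott's actual implementation details are not public, so this is only an illustration). The ciphertext is practically useless to a thief without the key, and worthless as protection if the key was stolen alongside it.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Generate a 128-bit key and encrypt a fake card number.
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)  # GCM nonce; must be unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, b"4111111111111111", None)

    # Without the key, the ciphertext reveals nothing practical about the card number.
    # With the key -- for example, if it was exfiltrated along with the database --
    # decryption is trivial:
    print(AESGCM(key).decrypt(nonce, ciphertext, None))  # b'4111111111111111'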

For 173 million guests, the personal data items stolen were limited to "name and sometimes other data such as mailing address, email address, or other information." Marriott International said its Marriott-branded hotels were not affected since they use a different reservations database on a different server.

Marriott said it has notified law enforcement, is working with law enforcement, and has begun to notify affected guests via email. The hotel chain will offer affected guests in select countries one year of free enrollment in the WebWatcher program which, "monitors internet sites where personal information is shared and generates an alert to the consumer if evidence of the consumer’s personal information is found." WebWatcher will not be offered to all affected guests. Eligible guests should read the fine print, which the Starwood breach site summarized:

"Due to regulatory and other reasons, WebWatcher or similar products are not available in all countries. For residents of the United States, enrolling in WebWatcher also provides you with two additional benefits: (1) a Fraud Loss Reimbursement benefit, which reimburses you for out-of-pocket expenses totaling up to $1 million in covered legal costs and expenses for any one stolen identity event. All coverage is subject to the conditions and exclusions in the policy; and (2) unlimited access to consultation with a Kroll fraud specialist. Consultation support includes showing you the most effective ways to protect your identity, explaining your rights and protections under the law, assistance with fraud alerts, and interpreting how personal information is accessed and used..."

The seriousness of this data breach cannot be overstated. First, it went undetected for a very long time. Marriott needs to explain that and the changes it will implement with an improved "internal security tool" so this doesn't happen again. Second, 500 million is an awful lot of affected customers. An awful lot. Third, CNN Business reported:

"Because the hack involves customers in the European Union and the United Kingdom, the company might be in violation of the recently enacted General Data Protection Regulation (GDPR). Mark Thompson, the global lead for consulting company KPMG's Privacy Advisory Practice, told CNN Business that hefty GDPR penalties will potentially be slapped on the company. "The size and scale of this thing is huge," he said, adding that it's going to take several months for (EU) regulators to investigate the breach."

Fourth, the data items stolen are sufficient to cause plenty of damage. Security experts advise affected customers to change their Starwood passwords, check the answers.Kroll.com breach site next week to see if their information was compromised or stolen, sign up for credit monitoring (if they don't already have it), watch their payment or bank accounts for fraudulent entries, and consider an early passport renewal if their passport number was compromised. Fifth, companies usually arrange free credit monitoring for breach victims for one or two years. So far, Marriott hasn't done this. Maybe it will. If not, Marriott needs to explain why.

Sixth, breach notification of affected guests via email seems sketchy... like Marriott is trying to cut corners and costs. History is littered with numerous examples of skilled spammers and cybercriminals using faked or spoofed email to trick consumers into revealing sensitive personal and payment information. It will be interesting to see how Marriott's breach notification via email works and manages this threat.

Seventh, lawsuits and other investigations have already begun. ZDNet reported:

"... two Oregon men sued international hotel chain Marriott for exposing their data. Their lawsuit was followed hours later by another one filed in the state of Maryland. Both lawsuits seek class-action status. While plaintiffs in the Maryland lawsuit didn't specify the amount of damages they were seeking from Marriott, the plaintiffs in the Oregon lawsuit want $12.5 billion in costs and losses. his should equate to $25 for each of the 500 million users who had their personal data stolen from Marriott's serv ers... The Maryland lawsuit was filed by Baltimore law firm Murphy, Falcon & Murphy..."

Bloomberg BNA announced:

"The Massachusetts, New York and Illinois state attorneys general quickly announced they would examine the hack. Connecticut George Jepsen (D) is also looking into the matter, a spokesman told Bloomberg Law."

Eighth, the breach site's website address is unnecessarily vague: answers.kroll.com. Frankly, a website address like "starwood-breach.kroll.com" or "marriott-breach.kroll.com" would have been better. (The combination of email notification and a vague website name seems eerily similar to the post-breach clusterf--k of Equifax's poorly implemented breach site.) Maybe this vague address was a temporary quick fix, and Marriott will host a comprehensive breach-status site later on one of its servers. That would be better and clearer for affected customers, who probably are unfamiliar with Kroll. Readers of this blog probably first encountered Kroll after IBM Inc. contracted it to help implement IBM's post-breach response in 2007.

The Starwood breach notice appears within the news section of Marriott.com site. Also, Marriott's post-breach notice included overlays on both the home page and the Starwood landing page within the Marriott.com site. This is a good start, but a better implementation would insert a link directly into the webpages, since the overlays don't render well in all browsers on all devices. (Marriott: you did test this before deployment?) Example: people with pop-up blockers may miss the breach notice in the overlays. And, a better implementation would link to the news story's detail page within the Marriott.com site -- not directly to the vague answers.kroll.com site.

Last, some questions remain about the post-breach response:

  • Why email notices to breach victims? Hopefully, there are more reasons than simply saving postal mailing costs.
  • Why no credit monitoring offers to breach victims?
  • What data in the Starwood reservations database was altered by the attackers? That the data was encrypted by the attackers suggests they had sufficient time, resources, and skills to modify or alter database records. Marriott needs to explain what it is doing about this.
  • When will Marriott host a breach site on one of its servers? No doubt, there will be follow-up news, more questions by breach victims, and breach investigation updates. A dedicated breach site on one of its servers seems best. Leaning too much on Kroll is not good.
  • Why did the intrusion go undetected for so long? Marriott needs to explain this and the post-breach fix so guests are reassured it won't happen again.
  • Is the main Marriott reservations database also vulnerable? Guests for other brands weren't affected since a separate reservations database was used. Maybe this is because the main Marriott reservations database and server are better protected, or cybercriminals haven't attacked it (yet). Guests deserve comprehensive answers.
  • Why the website overlays/pop-ups and not static links?
  • What changes (e.g., software upgrades, breach detection tools, employee training, etc.) will be implemented so this doesn't happen again?

Having blogged about data breaches for 11+ years, I find that these types of questions often arise. None are unreasonable questions. Answers will help guests feel comfortable with using Starwood hotels. Plus, Marriott has an obligation to fully inform guests directly at its website, and not lean on Kroll. What do you think?


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

LinkedIn logo On Friday November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of an investigation by the DPC of the LinkedIn.com social networking site, after a 2017 complaint by a person who didn't use the service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could then use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without instruction from its data controller (LinkedIn Ireland), as required by law. Not good.
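For readers wondering what "hashed email addresses" means in practice, here is a minimal sketch. The DPC report does not say which hash function was used; SHA-256 is assumed here purely for illustration. Because the same address always produces the same digest, hashed lists can still be matched against another platform's hashed user lists, which is why hashing alone is weak pseudonymization rather than true anonymization.

    import hashlib

    def hash_email(email: str) -> str:
        # Normalize, then hash; the same address always yields the same digest,
        # which is what makes cross-platform ad matching possible.
        normalized = email.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    print(hash_email("Jane.Doe@example.com"))
    print(hash_email(" jane.doe@example.com "))  # identical digest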

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests that the social networking service's activity probably violated data protection laws, as the European Union (EU) implements stronger privacy rules known as the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The new GDPR applies to both companies operating within the EU, and to companies located outside of the EU which offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumer and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistent with any promises they made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC didn't fine LinkedIn probably because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied to violations occurring on or after May 25, 2018, and
  • People in different areas of the world view privacy and data protection differently - as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize regional differences, find ways to sell products and services across areas without degraded customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Security Researcher Warns 'Hack A Smart Meter And Kill The Grid'

Everyone has utility meters which measure their gas and/or electricity consumption. Many of those meters are smart meters, installed in homes in both the United States and Britain. How secure are smart meters? First, some background since few consumers know what's installed in their homes.

According to the U.S. Energy Information Administration (EIA), there were 78.9 million smart meters installed in the United States by 2017, and residential installations account for 88 percent of that total. So, about half of all electricity customers in the United States use smart utility meters.

All smart meters wirelessly transmit usage to the utility provider. That's why you never see utility technicians visiting homes to manually "read" utility meters. There are two types of smart meters. In 2013, the number of two-way (AMI) smart meters in the United States exceeded the number of one-way (AMR) smart meters. The EIA explained:

"Two-way or AMI (advanced metering infrastructure) meters allow utilities and customers to interact to support smart consumption applications...The deployment and use of AMI and AMR meters vary greatly by state. Only 5 states had an AMI penetration rate above 80 percent. High penetration rates are seen in northern New England, Western states, Georgia and Texas. California added the most AMI meters of any state in 2013... There were 6 states with AMR penetration rates above 80 percent with Rhode Island the leader at 95 percent. The highest penetration rates are in the Rocky Mountain, Upper Plains and Southern Atlantic states. New York added nearly 540,000 AMR meters in 2013. Pennsylvania lost 580,000 AMR meters, but gained almost 620,000 AMI meters..."

[EIA chart: two-way smart utility meter penetration by state, 2013] See the chart for two-way installations by state. Readers of this blog are aware of the privacy issues. A few states allow users to opt out of smart meters. Since the wireless transmissions occur throughout each month, smart meters can be used to profile consumers with a high degree of accuracy (e.g., the number of persons living in the dwelling, when you're home and the duration, which electric appliances are used, the presence of security and alarm systems, special equipment such as in-home medical equipment, etc.).
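A simple, hypothetical sketch shows why frequent interval readings enable that kind of profiling: even hourly data makes it easy to guess when a household is awake, away, or asleep. The readings and threshold below are invented for illustration; real meters often report at even finer intervals.

    # Hypothetical hourly readings (kWh) from a two-way smart meter, midnight to 11 PM.
    hourly_kwh = [0.2, 0.2, 0.2, 0.2, 0.3, 0.9, 1.4, 0.8,
                  0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.3, 0.4,
                  1.1, 1.8, 2.1, 1.6, 1.2, 0.9, 0.4, 0.2]

    BASELOAD = 0.4  # assumed threshold separating idle appliances from active use

    occupied_hours = [hour for hour, kwh in enumerate(hourly_kwh) if kwh > BASELOAD]
    print(occupied_hours)  # morning and evening spikes suggest a household away during the workday

Multiply that by 30 days of readings and the profile (work schedule, vacations, appliance habits) becomes quite detailed, which is exactly the privacy concern.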

Now, back to Britain. It seems that the security of smart utility meters is questionable. Smart Grid Awareness follows the security and privacy issues associated with smart utility meters. The site reported that a British security expert:

"... is more convinced than ever that evidence now exists that rogue chips may be embedded into electronic circuit boards during the manufacturing process, such as those contained within utility smart meters. Smart meters can be considered high value targets for hackers due to the existence of the “remote disconnect” feature included as an option for most smart meters deployed today."

So, smart meters are part of the "smart power grid." If smart utility meters can be hacked, then the power grid -- and residential utility services -- can be interrupted or disabled. The remote-disconnect feature allows the utility to remotely turn off a meter. Smart meters in the United States have this feature, too. Reportedly, utilities say that they've disabled the feature. However, if the supply chain has been compromised with hacked chips (as this Bloomberg report claimed with computers), then bad actors may indeed be able to turn on and use the remote-disconnect feature in smart utility meters.

You can read for yourself the report by security researcher Nick Hunn. Clearly, there is more news to come about this. I look forward to energy providers assuring consumers how they've protected their supply chains. What do you think?


FTC: How You Should Handle Robocalls. 4 Companies Settle Regarding Privacy Shield Claims

First, it seems that the number of robocalls has increased during the past two years. Some automated calls are in English. Some are in other languages. All try to trick consumers into sending money or disclosing sensitive financial and payment information. Advice from the U.S. Federal Trade Commission (FTC): hang up, don't press any buttons or speak, and report the call to the FTC at donotcall.gov.

Second, the FTC announced a settlement agreement with four companies:

"In separate complaints, the FTC alleges that IDmission, LLC, mResource LLC (doing business as Loop Works, LLC), SmartStart Employment Screening, Inc., and VenPath, Inc. falsely claimed to be certified under the EU-U.S. Privacy Shield, which establishes a process to allow companies to transfer consumer data from European Union countries to the United States in compliance with EU law... The Department of Commerce administers the Privacy Shield framework, while the FTC enforces the promises companies make when joining the framework."

According to the lawsuits, IDmission, a cloud-based services firm, applied in 2017 for Privacy Shield certification with the U.S. Department of Commerce but never completed the necessary steps to be certified under the program. The other three companies each obtained Privacy Shield certification in 2016 but allowed their certifications to lapse. VenPath is a data analytics firm. SmartStart offers employment and background screening services. mResource provides talent management and recruitment services.

Terms of the settlement agreements prohibit all four companies from misrepresenting their participation in any privacy or data security program sponsored by the government. Also:

"... VenPath and SmartStart must also continue to apply the Privacy Shield protections to personal information they collected while participating in the program, protect it by another means authorized by the Privacy Shield framework, or return or delete the information within 10 days of the order."


Study: Performance Issues Impede IoT Device Trust And Usage Worldwide By Consumers

Dynatrace logo A global survey recently uncovered interesting findings about consumers' usage of and satisfaction with IoT (Internet of Things) devices. The survey of consumers in several countries found that 52 percent already use IoT devices, and 64 percent of users have already encountered performance issues with their devices.

Opinium Research logo Dynatrace, a software intelligence company, commissioned Opinium Research to conduct a global survey of 10,002 participants, with 2,000 in the United States, 2,000 in the United Kingdom, and 1,000 respondents each in France, Germany, Australia, Brazil, Singapore, and China. Dynatrace announced several findings, chiefly:

"On average, consumers experience 1.5 digital performance problems every day, and 62% of people fear the number of problems they encounter, and the frequency, will increase due to the rise of IoT."

That seems like plenty of poor performance. Some findings were specific to travel, healthcare, and in-home retail sectors. Regarding travel:

"The digital performance failures consumers are already experiencing with everyday technology is potentially making them wary of other uses of IoT. 85% of respondents said they are concerned that self-driving cars will malfunction... 72% feel it is likely software glitches in self-driving cars will cause serious injuries and fatalities... 84% of consumers said they wouldn’t use self-driving cars due to a fear of software glitches..."

Regarding healthcare:

"... 62% of consumers stated they would not trust IoT devices to administer medication; this sentiment is strongest in the 55+ age range, with 74% expressing distrust. There were also specific concerns about the use of IoT devices to monitor vital signs, such as heart rate and blood pressure. 85% of consumers expressed concern that performance problems with these types of IoT devices could compromise clinical data..."

Regarding in-home retail devices:

"... 83% of consumers are concerned about losing control of their smart home due to digital performance problems... 73% of consumers fear being locked in or out of the smart home due to bugs in smart home technology... 68% of consumers are worried they won’t be able to control the temperature in the smart home due to malfunctions in smart home technology... 81% of consumers are concerned that technology or software problems with smart meters will lead to them being overcharged for gas, electricity, and water."

The findings are a clear call to IoT makers to improve the performance, security, and reliability of their internet-connected devices. To learn more, download the full Dynatrace report titled, "IoT Consumer Confidence Report: Challenges for Enterprise Cloud Monitoring on the Horizon."


European Regulators Fine Google $5 Billion For 'Breaching EU Antitrust Rules'

On Wednesday, European anti-trust regulators fined Google 4.34 billion Euros (U.S. $5 billion) and ordered the tech company to stop using its Android operating system software to block competition. ComputerWorld reported:

"The European Commission found that Google has abused its dominant market position in three ways: tying access to the Play store to installation of Google Search and Google Chrome; paying phone makers and network operators to exclusively install Google Search, and preventing manufacturers from making devices running forks of Android... Google won't let smartphone manufacturers install Play on their phones unless they also make its search engine and Chrome browser the defaults on their phones. In addition, they must only use a Google-approved version of Android. This has prevented companies like Amazon.com, which developed a fork of Android it calls FireOS, from persuading big-name manufacturers to produce phones running its OS or connecting to its app store..."

Reportedly, less than 10% of Android phone users download a different browser than the pre-installed default. Less than 1% use a different search app. View the archive of European Commission Android OS documents.

Yesterday, the European Commission announced on social media:

[European Commission tweet: graphic summarizing the Google Android OS restrictions]

[European Commission tweet: comments by Commissioner Margrethe Vestager]

And, The Guardian newspaper reported:

"Soon after Brussels handed down its verdict, Google announced it would appeal. "Android has created more choice for everyone, not less," a Google spokesperson said... Google has 90 days to end its "illegal conduct" or its parent company Alphabet could be hit with fines amounting to 5% of its daily [revenues] for each day it fails to comply. Wednesday’s verdict ends a 39-month investigation by the European commission’s competition authorities into Google’s Android operating system but it is only one part of an eight-year battle between Brussels and the tech giant."

According to the Reuters news service, a third EU case against Google, involving accusations that the tech company's AdSense advertising service blocks users from displaying search ads from competitors, is still ongoing.


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Facebook logo Some Facebook users know that the social networking site tracks them both on and off (e.g., signed into, not signed into) the service. Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users and then allows the users it identifies to accept or decline a name label for each photo. Each Facebook user can adjust their privacy settings to enable or disable these name labels. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of facial data from photos (and videos) with names attached, so it can perform the matching while suppressing name labels for users who have declined them or disabled the setting. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
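For readers who want a concrete picture of the matching step the spokeswoman describes, here is a minimal, hypothetical sketch: faces detected in an uploaded photo are compared only against the stored face codes of users who have the facial recognition setting turned on, and unmatched facial data is discarded. The function names, threshold, and data format are assumptions for illustration, not Facebook's code.

# Hypothetical sketch of the opt-in matching flow described above.
# Names, threshold, and data format are illustrative assumptions only.

from typing import Dict, List


def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def suggest_name_labels(detected_faces: List[List[float]],
                        opted_in_users: List[Dict],
                        threshold: float = 0.8) -> List[str]:
    """For each face found in an uploaded photo, suggest a name label only if it
    matches a user who has facial recognition turned ON; otherwise the facial
    data is discarded rather than stored (per the quoted policy)."""
    suggestions = []
    for face in detected_faces:
        match = next((u["name"] for u in opted_in_users
                      if cosine_similarity(face, u["face_code"]) >= threshold), None)
        if match:
            suggestions.append(match)   # offer the label to the uploader
        # no match: this face's data is deleted, not retained
    return suggestions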

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in late 2017, Apple launched its iPhone X with the Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps, you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent for shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Google logo Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns about the artificial intelligence (AI) help the company provides to the U.S. Department of Defense for Project Maven, a weaponized-drone program that identifies people. Reportedly, these are the first known mass resignations.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor want -- that their personal e-mail, photos, videos, calendars, social media activity, map usage, and more would be used for automated military applications. What are your opinions?