
Google To EU Regulators: No One Country Should Censor The Web Internationally. Poll Finds Canadians Support 'Right To Be Forgotten'

For those watching privacy legislation in Europe, MediaPost reported:

"... Maciej Szpunar, an advisor to the highest court in the EU, sided with Google in the fight, arguing that the right to be forgotten should only be enforceable in Europe -- not the entire world. The opinion is non-binding, but seen as likely to be followed."

For those unfamiliar, in the European Union (EU) the right to be forgotten:

"... was created in 2014, when EU judges ruled that Google (and other search engines) must remove links to embarrassing information about Europeans at their request... The right to be forgotten doesn't exist in the United States... Google interpreted the EU's ruling as requiring removal of links to material in search engines designed for European countries but not from its worldwide search results... In 2015, French regulators rejected Google's position and ordered the company to remove material from all of its results pages. Google then asked Europe's highest court to reject that view. The company argues that no one country should be able to censor the web internationally."

Nor should any one corporation be able to censor the web internationally. Meanwhile, Radio Canada International reported:

"A new poll shows a slim majority of Canadians agree with the concept known as the “right to be forgotten online.” This means the right to have outdated, inaccurate, or no longer relevant information about yourself removed from search engine results. The poll by the Angus Reid Institute found 51 percent of Canadians agree that people should have the right to be forgotten..."

Consumers should have control over their information. If that control is limited to only the country of their residence, then the global nature of the internet means that control is very limited -- and probably irrelevant. What are your opinions?


After Promises To Stop, Mobile Providers Continued Sales Of Location Data About Consumers. What You Can Do To Protect Your Privacy

Sadly, history repeats itself. First, the history: after getting caught selling consumers' real-time GPS location data without notice or consent, mobile providers promised in 2018 to stop the practice. The Ars Technica blog reported in June 2018:

"Verizon and AT&T have promised to stop selling their mobile customers' location information to third-party data brokers following a security problem that leaked the real-time location of US cell phone users. Senator Ron Wyden (D-Ore.) recently urged all four major carriers to stop the practice, and today he published responses he received from Verizon, AT&T, T-Mobile USA, and Sprint. Wyden's statement praised Verizon for "taking quick action to protect its customers' privacy and security," but he criticized the other carriers for not making the same promise... AT&T changed its stance shortly after Wyden's statement... Senator Wyden recognized AT&T's change on Twitter and called on T-Mobile and Sprint to follow suit."

Kudos to Senator Wyden. The other mobile providers soon complied... sort of.

Second, some background: real-time location data is very valuable stuff. It indicates where you are as you (with your phone or other mobile devices) move about the physical world in your daily routine. No delays. No lag. Yes, there are appropriate uses for real-time GPS location data -- such as by law enforcement to quickly find a kidnapped person or child before further harm happens. But, do any and all advertisers need real-time location data about consumers? Data brokers? Others?

I think not. Domestic violence and stalking victims probably would not want their own, or their children's, real-time location data resold publicly. Most parents would not want their children's location data resold publicly. Most patients probably would not want their location data broadcast every time they visit their physician, specialist, rehab, or a hospital. Corporate executives, government officials, and attorneys conducting sensitive negotiations probably wouldn't want their location data collected and resold, either.

So, most consumers probably don't want their real-time location data resold publicly. Well, some of you make location-specific announcements via posts on social media. That's your choice, but I conclude that most people don't. Consumers want control over their location information so they can decide if, when, and with whom to share it. The mass collection and sales of consumers' real-time location data by mobile providers prevents choice -- and it violates persons' privacy.

Third, fast-forward seven months. TechCrunch reported on January 9th:

"... new reporting by Motherboard shows that while [reseller] LocationSmart faced the brunt of the criticism [in 2018], few focused on the other big player in the location-tracking business, Zumigo. A payment of $300 and a phone number was enough for a bounty hunter to track down the participating reporter by obtaining his location using Zumigo’s location data, which was continuing to pay for access from most of the carriers. Worse, Zumigo sold that data on — like LocationSmart did with Securus — to other companies, like Microbilt, a Georgia-based credit reporting company, which in turn sells that data on to other firms that want that data. In this case, it was a bail bond company, whose bounty hunter was paid by Motherboard to track down the reporter — with his permission."

"Everyone seemed to drop the ball. Microbilt said the bounty hunter shouldn’t have used the location data to track the Motherboard reporter. Zumigo said it didn’t mind location data ending up in the hands of the bounty hunter, but still cut Microbilt’s access. But nobody quite dropped the ball like the carriers, which said they would not share location data again."

The TechCrunch article rightly held offending mobile providers accountable, citing tweets by T-Mobile chief executive John Legere. Last year, Legere pledged that T-Mobile would not sell customers' location data to shady middlemen. Then, last week, he tweeted that the company was finally winding down its agreements with location data aggregators and would do it the right way.

The right way? In my view, real-time location data should never have been collected and resold in the first place. Almost a year after reports first surfaced, T-Mobile is finally getting around to stopping the practice and terminating its relationships with location data resellers -- two months from now. Why not announce this slow wind-down last year when the issue first surfaced? "Emergency assistance" is the reason we are supposed to believe. Yeah, right.

The TechCrunch article rightly took AT&T and Verizon to task, too. Good. I strongly encourage everyone to read the entire TechCrunch article.

What can consumers make of this? There seem to be several takeaways:

  1. Transparency is needed, since corporate privacy policies don't list all (or often any) business partners. This lack of transparency provides an easy way for mobile providers to resume location data sales without notice to anyone and without consumers' consent,
  2. Corporate executives will say anything in tweets/social media. A healthy dose of skepticism by consumers and regulators is wise,
  3. Consumers can't trust mobile providers. They are happy to make money selling consumers' real-time location data, regardless of consumers' desire that their data not be collected and sold,
  4. Data brokers and credit reporting agencies want consumers' location data,
  5. To ensure privacy, consumers also must take action: adjust the privacy settings on your phones to limit or deny mobile apps access to your location data. I did. It's not hard. Do it today, and
  6. Oversight is needed, since a) mobile providers have, at best, sloppy to minimal oversight and internal processes to prevent location data sales; and b) data brokers and others are readily available to enable and facilitate location data transactions.

I cannot over-emphasize #5 above. What issues or takeaways do you see? What are your opinions about real-time location data?


Samsung Phone Owners Unable To Delete Facebook And Other Apps. Anger And Privacy Concerns Result

Some consumers have learned that they can't delete Facebook and other mobile apps from their Samsung smartphones. Bloomberg described one consumer's experiences:

"Winke bought his Samsung Galaxy S8, an Android-based device that comes with Facebook’s social network already installed, when it was introduced in 2017. He has used the Facebook app to connect with old friends and to share pictures of natural landscapes and his Siamese cat -- but he didn’t want to be stuck with it. He tried to remove the program from his phone, but the chatter proved true -- it was undeletable. He found only an option to "disable," and he wasn’t sure what that meant."

Samsung phones operate using Google's Android operating system (OS). The "chatter" refers to online complaints by Samsung phone owners. There were plenty of complaints, ranging from the snarky to the informative. Some persons shared their (understandable) anger, and one person reminded consumers of bigger privacy issues with Android OS phones.

And, that privacy concern still exists. Sophos Labs reported:

"Advocacy group Privacy International announced the findings in a presentation at the 35th Chaos Computer Congress late last month. The organization tested 34 apps and documented the results, as part of a downloadable report... 61% of the apps tested automatically tell Facebook that a user has opened them. This accompanies other basic event data such as an app being closed, along with information about their device and suspected location based on language and time settings. Apps have been doing this even when users don’t have a Facebook account, the report said. Some apps went far beyond basic event information, sending highly detailed data. For example, the travel app Kayak routinely sends search information including departure and arrival dates and cities, and numbers of tickets (including tickets for children)."

After multiple data breaches and privacy snafus, some Facebook users have decided to either quit the Facebook mobile app or quit the service entirely. Now, some Samsung phone users have learned that quitting can be more difficult, and they don't have as much control over their devices as they thought.

How did this happen? Bloomberg explained:

"Samsung, the world’s largest smartphone maker, said it provides a pre-installed Facebook app on selected models with options to disable it, and once it’s disabled, the app is no longer running. Facebook declined to provide a list of the partners with which it has deals for permanent apps, saying that those agreements vary by region and type... consumers may not know if Facebook is pre-loaded unless they specifically ask a customer service representative when they purchase a phone."

Not good. So, now we know that there are two classes of mobile apps: 1) pre-installed and 2) permanent. Pre-installed apps come on new devices. Some pre-installed apps can be deleted by users. Permanent mobile apps are pre-installed apps which cannot be removed/deleted by users. Users can only disable permanent apps.

Sadly, there's more and it's not only Facebook. Bloomberg cited other agreements:

"A T-Mobile US Inc. list of apps built into its version of the Samsung Galaxy S9, for example, includes the social network as well as Amazon.com Inc. The phone also comes loaded with many Google apps such as YouTube, Google Play Music and Gmail... Other phone makers and service providers, including LG Electronics Inc., Sony Corp., Verizon Communications Inc. and AT&T Inc., have made similar deals with app makers..."

This is disturbing. There seem to be several issues:

  1. Notice: consumers should be informed before purchase of any and all phone apps which can't be removed. The presence of permanent mobile apps suggests either a lack of notice, notice buried within legal language of phone manufacturers' user agreements, or both.
  2. Privacy: just because a mobile app isn't visibly running doesn't mean it has stopped operating. Stealth apps can still collect GPS location and device information while running in the background, and then transmit it to their makers. Hopefully, some enterprising technicians or testing labs will independently verify whether "disabled" permanent mobile apps have truly stopped working.
  3. Transparency: phone manufacturers should explain and publish their lists of partners with both pre-installed and permanent app agreements -- for each device model. Otherwise, consumers cannot make informed purchase decisions about phones.
  4. Scope: the Samsung-Facebook pre-installed app arrangement raises questions about other devices with permanent apps: phones, tablets, laptops, smart televisions, and automotive vehicles. Perhaps some independent testing by Consumer Reports can determine a full list of devices with permanent apps.
  5. Nothing is free. Pre-installed app agreements indicate another method which device manufacturers use to make money, by collecting and sharing consumers' data with other tech companies.

The bottom line is trust. Consumers have more valid reasons to distrust some device manufacturers and OS developers. What issues do you see? What are your thoughts about permanent mobile apps?


More Than One Billion Accounts Affected By Data Breaches During 2018

Now that 2019 is here, we can assess 2018. It was a terrible year for identity theft, privacy, and data breaches. Several corporations failed miserably to protect the data they archive about consumers. This included failures within websites and mobile apps. There were so many massive data breaches that it isn't a question of whether or not you were affected.

You were. NordVPN reviewed the failures during 2018:

"If your data wasn’t leaked in 2018, you’re lucky. The information of over a billion people was compromised in 2018 as many of the companies we trust failed to protect our data."

That's billion with a "b." NordVPN provides virtual private network (VPN) services. If you want to use the internet with privacy, a VPN is the way to go. That is especially important for residents of the United States, since the U.S. Federal Communications Commission (FCC) repealed in 2017 both broadband privacy and net neutrality protections for consumers. A December 2017 study of 1,077 voters found that most want net neutrality protections. President Trump signed the privacy-rollback legislation in April 2017. A prior blog post listed many historical abuses of consumers by some ISPs.

PC Magazine reviewed NordVPN earlier this month and concluded:

"... NordVPN has proved itself to be our top service for securing your online activities. The company now has more than 5,100 servers across the globe, making it the largest service we've yet tested. It also takes a strong stance on privacy for its customers and includes tools rarely seen in the competition."

The NordVPN article listed the major corporate data security failures during 2018. Frequent readers of this blog are familiar with the breaches. Chances are you use one or more of the services. Below is a partial list:

  • Marriott: 500 million guest records
  • Twitter: 330 million accounts
  • My Fitness Pal: 150 million accounts
  • Facebook: 147 million accounts
  • Quora: 100 million accounts
  • Firebase: 100 million accounts
  • Google+: 500,000 accounts
  • British Airways: 380,000 accounts
While Google is closing its Google+ service, that is little help for breach victims whose personal data is out in the wild. The massive Equifax breach affecting 145.5 million persons isn't on the list because it happened in 2017. It's important to remember Equifax because persons cannot opt out of Equifax, or any of the other credit reporting agencies. Ain't corporate welfare nice?

What can consumers do to protect themselves and their sensitive personal and payment information? NordVPN advised:

  1. "Use strong and unique passwords.
  2. Think twice before posting anything on social media. This information can be used against you.
  3. If you shop online, use a credit card. You will have less liability for fraudulent charges if your financial information leaks.
  4. Provide companies only with necessary information. The less information they have, the less they can leak.
  5. Look out for fraud. If notified that your data was leaked, change your passwords and take the steps advised by the company that compromised your data."
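NordVPN's first tip, strong and unique passwords, is easy to automate. Below is a minimal sketch in Python using the standard library's `secrets` module, which draws from a cryptographically secure random source; the 20-character length and the per-site password idea are my illustrative assumptions, not NordVPN's specific advice:

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Generate a random password drawn from letters, digits, and punctuation."""
    if length < 12:
        raise ValueError("A strong password should be at least 12 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# Generate a distinct password for each account, so one leaked
# credential cannot be reused against the others.
passwords = {site: generate_password() for site in ("email", "bank", "social")}
```

A password manager accomplishes the same goal without the memorization burden; the point is that every account gets its own password, so one breached site cannot compromise the rest.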

Well, there you go. That's a good starter list for consumers to protect themselves. Do it because your personal data is out in the wild. The only question is which bad actor is abusing it.


A Series Of Recent Events And Privacy Snafus At Facebook Cause Multiple Concerns. Does Facebook Deserve Users' Data?

So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social networking service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th, that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix accessed Facebook users' private messages or asked for that access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management with its partners and developers? That's an understatement. During March and April of 2018 we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly resell later) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened so it doesn't happen again? This post-breach response seems sloppy. And, the "bug" description seems disingenuous. Anytime persons -- in this case developers -- have access to data they shouldn't have, it is a data breach.

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage nor control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. Reports surfaced that many users experienced repeated bogus Friend Requests due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from those sent by spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook had approached several major banks, asking them to share detailed financial information about their customers in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN app was removed from Apple's App Store because the app violated data-collection policies. 9to5Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content, nor stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users (emphasis added):

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to operate several types of attacks (e.g., spam, phishing, etc.) against Facebook users. The stolen data allows foreign spy agencies to collect useful information to target persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggest not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question (emphasis added):

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and WhatsApp accounts, since Facebook operates those services, too. If you want to quit the service but suffer from FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And, your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, or use its voting and ballot previews content. Long ago I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post on Facebook links to posts in this blog, since it is helpful information for many Facebook users. In what ways have you reduced your usage of Facebook?


China Blamed For Cyberattack In The Gigantic Marriott-Starwood Hotels Data Breach

An update on the gigantic Marriott-Starwood data breach, in which details about 500 million guests were stolen. The New York Times reported that the cyberattack:

"... was part of a Chinese intelligence-gathering effort that also hacked health insurers and the security clearance files of millions more Americans, according to two people briefed on the investigation. The hackers, they said, are suspected of working on behalf of the Ministry of State Security, the country’s Communist-controlled civilian spy agency... While American intelligence agencies have not reached a final assessment of who performed the hacking, a range of firms brought in to assess the damage quickly saw computer code and patterns familiar to operations by Chinese actors... China has reverted over the past 18 months to the kind of intrusions into American companies and government agencies that President Barack Obama thought he had ended in 2015 in an agreement with Mr. Xi. Geng Shuang, a spokesman for China’s Ministry of Foreign Affairs, denied any knowledge of the Marriott hacking..."

Why would any country's intelligence agency want to hack a hotel chain's database? The Times explained:

"The Marriott database contains not only credit card information but passport data. Lisa Monaco, a former homeland security adviser under Mr. Obama, noted last week at a conference that passport information would be particularly valuable in tracking who is crossing borders and what they look like, among other key data."

Also, context matters. First, a corporate acquisition attempt was (thankfully) thwarted:

"The effort to amass Americans’ personal information so alarmed government officials that in 2016, the Obama administration threatened to block a $14 billion bid by China’s Anbang Insurance Group Co. to acquire Starwood Hotel & Resorts Worldwide, according to one former official familiar with the work of the Committee on Foreign Investments in the United States, a secretive government body that reviews foreign acquisitions..."

Later that year, Marriott Hotels acquired Starwood for $13.6 billion. Second, remember the massive government data breach in 2014 at the Office of Personnel Management (OPM). The New York Times added that the Marriott breach:

"... was only part of an aggressive operation whose centerpiece was the 2014 hacking into the Office of Personnel Management. At the time, the government bureau loosely guarded the detailed forms that Americans fill out to get security clearances — forms that contain financial data; information about spouses, children and past romantic relationships; and any meetings with foreigners. Such information is exactly what the Chinese use to root out spies, recruit intelligence agents and build a rich repository of Americans’ personal data for future targeting..."

Not good. And, this is not the first time concerns about China have been raised. Reports surfaced in 2016 about malware installed in the firmware of smartphones running the Android operating system (OS) software. In 2015, China enacted a new "secure and controllable" security law, which many security experts viewed at the time as a method to ensure that back doors were built into computing products and devices during the manufacturing and assembly process.

And, even if China's MSS didn't do this massive cyberattack, it could have been another country's intelligence agency. Not good either.

Regardless who the attackers were, this incident is a huge reminder to executives in government and in the private sector to secure their computer systems. Hopefully, executives at major hotel chains -- especially those frequented by government officials and military members -- now realize that their systems are high-value targets.


Your Medical Devices Are Not Keeping Your Health Data to Themselves

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Derek Kravitz and Marshall Allen, ProPublica

Medical devices are gathering more and more data from their users, whether it’s their heart rates, sleep patterns or the number of steps taken in a day. Insurers and medical device makers say such data can be used to vastly improve health care.

But the data that’s generated can also be used in ways that patients don’t necessarily expect. It can be packaged and sold for advertising. It can be anonymized and used by customer support and information technology companies. Or it can be shared with health insurers, who may use it to deny reimbursement. Privacy experts warn that data gathered by insurers could also be used to rate individuals’ health care costs and potentially raise their premiums.

Patients typically have to give consent for their data to be used — so-called “donated data.” But some patients said they weren’t aware that their information was being gathered and shared. And once the data is shared, it can be used in a number of ways. Here are a few of the most popular medical devices that can share data with insurers:

Continuous Positive Airway Pressure, or CPAP, Machines

What Are They?

One of the more popular devices for those with sleep apnea, CPAP machines are covered by insurers after a sleep study confirms the diagnosis. These units, which deliver pressurized air through masks worn by patients as they sleep, collect data and transmit it wirelessly.

What Do They Collect?

It depends on the unit, but CPAP machines can collect data on the number of hours a patient uses the device, the number of interruptions in sleep and the amount of air that leaks from the mask.

Who Gets the Info?

The data may be transmitted to the makers or suppliers of the machines. Doctors may use it to assess whether the therapy is effective. Health insurers may receive the data to track whether patients are using their CPAP machines as directed. They may refuse to reimburse the costs of the machine if the patient doesn’t use it enough. The device maker ResMed said in a statement that patients may withdraw their consent to have their data shared.

Heart Monitors

What Are They?

Heart monitors, oftentimes small, battery-powered devices worn on the body and attached to the skin with electrodes, measure and record the heart’s electrical signals, typically over a few days or weeks, to detect things like irregular heartbeats or abnormal heart rhythms. Some devices implanted under the skin can last up to five years.

What Do They Collect?

Wearable ones include Holter monitors, wired external devices that attach to the skin, and event recorders, which can track slow or fast heartbeats and fainting spells. Data can also be shared from implanted pacemakers, which keep the heart beating properly for those with arrhythmias.

Who Gets the Info?

Low resting heart rates or other abnormal heart conditions are commonly used by insurance companies to place patients in more expensive rate classes. Children undergoing genetic testing are sometimes outfitted with heart monitors before their diagnosis, increasing the odds that their data is used by insurers. This sharing is the most common complaint cited by the World Privacy Forum, a consumer rights group.

Blood Glucose Monitors

What Are They?

Millions of Americans who have diabetes are familiar with blood glucose meters, or glucometers, which take a blood sample on a test strip and analyze it for glucose, or sugar, levels. This allows patients and their doctors to monitor their diabetes so they don’t develop complications like heart or kidney disease. Blood glucose meters are used by the more than 1.2 million Americans with Type 1 diabetes, which is usually diagnosed in children, teens and young adults.

What Do They Collect?

Blood sugar monitors measure the concentration of glucose in a patient’s blood, a key indicator of proper diabetes management.

Who Gets the Info?

Diabetes monitoring equipment is sold directly to patients, but many still rely on insurer-provided devices. To get reimbursement for blood glucose meters, health insurers will typically ask for at least a month’s worth of blood sugar data.

Lifestyle Monitors

What Are They?

Step counters, medication alerts and trackers, and in-home cameras are among the devices in the increasingly crowded lifestyle health industry.

What Do They Collect?

Many health data research apps rely on “donated data,” which is provided by consumers and falls outside of federal guidelines that require the sharing of personal health data to be disclosed and the data anonymized to protect the identity of the patient. This data includes everything from counters for the number of steps you take, the calories you eat and the number of flights of stairs you climb to more traditional health metrics, such as pulse and heart rates.

Who Gets the Info?

It varies by device. But the makers of the Fitbit step counter, for example, say they never sell customer personal data or share personal information unless a user requests it; it is part of a legal process; or it is provided on a “confidential basis” to a third-party customer support or IT provider. That said, Fitbit allows users who give consent to share data “with a health insurer or wellness program,” according to a statement from the company.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


House Oversight Committee Report On The Equifax Data Breach. Did The Recommendations Go Far Enough?

On Monday, the U.S. House of Representatives Committee on Oversight and Government Reform released its report (Adobe PDF) on the massive Equifax data breach, where the most sensitive personal and payment information of more than 148 million consumers -- nearly half of the population -- was accessed and stolen. The report summary:

"In 2005, former Equifax Chief Executive Officer (CEO) Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, information technology (IT) systems, and data. While the acquisition strategy was successful for Equifax’s bottom line and stock price, this growth brought increasing complexity to Equifax’s IT systems, and expanded data security risks... Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable."

The report cited several failures by Equifax. First:

"On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Equifax used Apache Struts to run certain applications on legacy operating systems. The following day, the Department of Homeland Security alerted Equifax to this critical vulnerability. Equifax’s Global Threat and Vulnerability Management (GTVM) team emailed this alert to over 400 people on March 9, instructing anyone who had Apache Struts running on their system to apply the necessary patch within 48 hours. The Equifax GTVM team also held a March 16 meeting about this vulnerability. Equifax, however, did not fully patch its systems. Equifax’s Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed."
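Surfacing an unpatched Struts installation is the kind of thing a routine inventory scan can do. As an illustrative sketch only (this is not Equifax's tooling), a script could walk application directories and flag Struts core jars older than the releases that fixed the March 2017 vulnerability, CVE-2017-5638 -- assumed here to be 2.3.32 and 2.5.10.1:

```python
import os
import re

# Assumed fix versions for CVE-2017-5638, keyed by release branch.
FIXED = {(2, 3): (2, 3, 32), (2, 5): (2, 5, 10, 1)}

JAR_RE = re.compile(r"struts2?-core-(\d+(?:\.\d+)+)\.jar$")

def parse_version(s):
    """Turn '2.3.31' into a comparable tuple (2, 3, 31)."""
    return tuple(int(p) for p in s.split("."))

def is_vulnerable(version):
    """A jar is flagged when its version precedes the branch's fix release."""
    fixed = FIXED.get(version[:2])
    if fixed is None:
        return False  # unknown branch: leave for manual review
    return version < fixed

def find_vulnerable_struts(root):
    """Walk `root` and return paths of Struts core jars below the fix version."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            m = JAR_RE.search(name)
            if m and is_vulnerable(parse_version(m.group(1))):
                hits.append(os.path.join(dirpath, name))
    return hits
```

A scan like this only finds what it can see, of course; the report suggests Equifax's legacy environment was precisely the kind that such scans miss.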

As bad as that is, it gets worse:

"On May 13, 2017, attackers began a cyberattack on Equifax. The attack lasted for 76 days. The attackers dropped “web shells” (a web-based backdoor) to obtain remote control over Equifax’s network. They found a file containing unencrypted credentials (usernames and passwords), enabling the attackers to access sensitive data outside of the ACIS environment. The attackers were able to use these credentials to access 48 unrelated databases."

"Attackers sent 9,000 queries on these 48 databases, successfully locating unencrypted personally identifiable information (PII) data 265 times. The attackers transferred this data out of the Equifax environment, unbeknownst to Equifax. Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate. On July 29, 2017, Equifax updated the expired certificate and immediately noticed suspicious web traffic..."

Findings so far: 1) growth prioritized over security while archiving highly valuable data; 2) antiquated computer systems; 3) failed security patches; 4) unprotected user credentials; and 5) failed intrusion detection mechanism. Geez!
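The expired-certificate failure is especially avoidable: certificate lifetimes are public metadata, so expiry can be checked automatically long before monitoring goes dark. A minimal sketch in Python's standard library (the 30-day warning threshold is our assumption, not anything from the report):

```python
import socket
import ssl
import time

WARN_DAYS = 30  # assumed threshold: warn this far ahead of expiry

def days_until_expiry(not_after):
    """`not_after` is a cert's notAfter field, e.g. 'Jun  1 12:00:00 2026 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)
    return (expires - time.time()) / 86400

def check_cert(hostname, port=443, timeout=5):
    """Fetch a server's certificate and report days until it expires."""
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((hostname, port), timeout=timeout),
                         server_hostname=hostname) as sock:
        cert = sock.getpeercert()
    days = days_until_expiry(cert["notAfter"])
    if days < 0:
        print(f"{hostname}: certificate EXPIRED {-days:.0f} days ago")
    elif days < WARN_DAYS:
        print(f"{hostname}: certificate expires in {days:.0f} days")
    return days
```

Run daily against an inventory of monitored hosts, a check like this would have flagged the dead ACIS monitoring certificate on day one rather than month nineteen.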

Only after updating its expired security certificate did Equifax notice the intrusion. After that, you'd think that Equifax would have implemented a strong post-breach response. You'd be wrong. More failures:

"When Equifax informed the public of the breach on September 7, the company was unprepared to support the large number of affected consumers. The dedicated breach website and call centers were immediately overwhelmed, and consumers were not able to obtain timely information about whether they were affected and how they could obtain identity protection services."

"Equifax should have addressed at least two points of failure to mitigate, or even prevent, this data breach. First, a lack of accountability and no clear lines of authority in Equifax’s IT management structure existed, leading to an execution gap between IT policy development and operation. This also restricted the company’s implementation of other security initiatives in a comprehensive and timely manner. As an example, Equifax had allowed over 300 security certificates to expire, including 79 certificates for monitoring business critical domains."

"Second, Equifax’s aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and antiquated nature of Equifax’s IT systems made IT security especially challenging..."

Findings so far: 6) inadequate post-breach response; and 7) complicated IT structure making updates difficult. Geez!

The report listed the executives who retired and/or were fired. That's a small start for a company archiving the most sensitive personal and payment information of all USA citizens. The report included seven recommendations:

"1: Empower Consumers through Transparency. Consumer reporting agencies (CRAs) should provide more transparency to consumers on what data is collected and how it is used. A large amount of the public’s concern after Equifax’s data breach announcement stemmed from the lack of knowledge regarding the extensive data CRAs hold on individuals. CRAs must invest in and deploy additional tools to empower consumers to better control their own data..."

"2: Review Sufficiency of FTC Oversight and Enforcement Authorities. Currently, the FTC uses statutory authority under Section 5 of the Federal Trade Commission Act to hold businesses accountable for making false or misleading claims about their data security or failing to employ reasonable security measures. Additional oversight authorities and enforcement tools may be needed to enable the FTC to effectively monitor CRA data security practices..."

"3: Review Effectiveness of Identity Monitoring and Protection Services Offered to Breach Victims. The General Accounting Office (GAO) should examine the effectiveness of current identity monitoring and protection services and provide recommendations to Congress. In particular, GAO should review the length of time that credit monitoring and protection services are needed after a data breach to mitigate identity theft risks. Equifax offered free credit monitoring and protection services for one year to any consumer who requested it... This GAO study would help clarify the value of credit monitoring services and the length of time such services should be maintained. The GAO study should examine alternatives to credit monitoring services and identify additional or complementary services..."

"4: Increase Transparency of Cyber Risk in Private Sector. Federal agencies and the private sector should work together to increase transparency of a company’s cybersecurity risks and steps taken to mitigate such risks. One example of how a private entity can increase transparency related to the company’s cyber risk is by making disclosures in its Securities and Exchange Commission (SEC) filings. In 2011, the SEC developed guidance to assist companies in disclosing cybersecurity risks and incidents. According to the SEC guidance, if cybersecurity risks or incidents are “sufficiently material to investors” a private company may be required to disclose the information... Equifax did not disclose any cybersecurity risks or cybersecurity incidents in its SEC filings prior to the 2017 data breach..."

"5: Hold Federal Contractors Accountable for Cybersecurity with Clear Requirements. The Equifax data breach and federal customers’ use of Equifax identity validation services highlight the need for the federal government to be vigilant in mitigating cybersecurity risk in federal acquisition. The Office of Management and Budget (OMB) should continue efforts to develop a clear set of requirements for federal contractors to address increasing cybersecurity risks, particularly as it relates to handling of PII. There should be a government-wide framework of cybersecurity and data security risk-based requirements. In 2016, the Committee urged OMB to focus on improving and updating cybersecurity requirements for federal acquisition... The Committee again urges OMB to expedite development of a long-promised cybersecurity acquisition memorandum to provide guidance to federal agencies and acquisition professionals..."

"6: Reduce Use of Social Security Numbers as Personal Identifiers. The executive branch should work with the private sector to reduce reliance on Social Security numbers. Social Security numbers are widely used by the public and private sector to both identify and authenticate individuals. Authenticators are only useful if they are kept confidential. Attackers stole the Social Security numbers of an estimated 145 million consumers from Equifax. As a result of this breach, nearly half of the country’s Social Security numbers are no longer confidential. To better protect consumers from identity theft, OMB and other relevant federal agencies should pursue emerging technology solutions as an alternative to Social Security number use."

"7: Implement Modernized IT Solutions. Companies storing sensitive consumer data should transition away from legacy IT and implement modern IT security solutions. Equifax failed to modernize its IT environments in a timely manner. The complexity of the legacy IT environment hosting the ACIS application allowed the attackers to move throughout the Equifax network... Equifax’s legacy IT was difficult to scan, patch, and modify... Private sector companies, especially those holding sensitive consumer data like Equifax, must prioritize investment in modernized tools and technologies...."

The history of corporate data breaches and the above list of corporate failures by Equifax both should be warnings to anyone in government promoting the privatization of current government activities. Companies screw up stuff, too.

Recommendation #6 is frightening in that it hasn't been implemented. Yikes! No federal agency should do business with a private sector firm operating with antiquated computer systems. And, if Equifax can't protect the information it archives, it should cease to exist. While that sounds harsh, it ain't. Continual data breaches place risks and burdens upon already burdened consumers trying to control and protect their data.

What are your opinions of the report? Did it go far enough?


You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder. Left untreated, this potentially serious disorder can raise the risks of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
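The Medicare rule quoted above is mechanical enough to express directly: at least four hours of use on at least 70 percent of nights in some consecutive 30-day window. A sketch of that check (the function name and defaults are ours, not Medicare's):

```python
def medicare_compliant(nightly_hours, window=30, min_hours=4.0, min_fraction=0.70):
    """Check usage against the rule: >= `min_hours` of use on >= `min_fraction`
    of nights in some consecutive `window`-day period.

    `nightly_hours` is a list of hours used per night, in order. If fewer than
    `window` nights of data exist, missing nights count as non-use, which is
    the conservative reading an insurer would likely apply.
    """
    best = 0.0
    for start in range(0, max(1, len(nightly_hours) - window + 1)):
        nights = nightly_hours[start:start + window]
        compliant_nights = sum(1 for h in nights if h >= min_hours)
        best = max(best, compliant_nights / window)
    return best >= min_fraction
```

Note how unforgiving the arithmetic is: a patient who uses the machine faithfully for 20 of 30 nights (67 percent) fails, which is exactly the situation mask-fit problems can create.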

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Oath To Pay Almost $5 Million To Settle Charges By New York AG Regarding Children's Privacy Violations

Barbara D. Underwood, the Attorney General (AG) for New York State, announced last week a settlement with Oath, Inc. for violating the Children’s Online Privacy Protection Act (COPPA). Oath Inc. is a wholly-owned subsidiary of Verizon Communications. Until June 2017, Oath was known as AOL Inc. ("AOL"). The announcement stated:

"The Attorney General’s Office found that AOL conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13. Through these auctions, AOL collected, used, and disclosed personal information from the websites’ users in violation of COPPA, enabling advertisers to track and serve targeted ads to young children. The company has agreed to adopt comprehensive reforms to protect children from improper tracking and pay a record $4.95 million in penalties..."

The United States Congress enacted COPPA in 1998 to protect the safety and privacy of young children online. As many parents know, young children don't understand complicated legal documents such as terms-of-use and privacy policies. COPPA prohibits operators of certain websites from collecting, using, or disclosing personal information (e.g., first and last name, e-mail address) of children under the age of 13 without first obtaining parental consent.

The definition of "personal information" was revised in 2013 to include persistent identifiers that can be used to recognize a user over time and across websites, such as the ID found in a web browser cookie or an Internet Protocol (“IP”) address. The revision effectively prohibits covered operators from using cookies, IP addresses, and other persistent identifiers to track users across websites for most advertising purposes on COPPA-covered websites.
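To make "persistent identifier" concrete, here is a minimal sketch of why such identifiers count as personal information: a third-party ad server sees the same cookie ID on every site that embeds its ads, so visits to unrelated sites can be joined into one profile. All names are hypothetical; this is not any ad exchange's actual API:

```python
import uuid
from collections import defaultdict

# Hypothetical sketch: cookie_id -> list of (site, page) visits observed
# by a third-party ad server across every site that embeds its ads.
profiles = defaultdict(list)

def serve_ad(cookie_id, site, page):
    """Called once per ad impression; assigns a persistent ID on first sight."""
    if cookie_id is None:
        cookie_id = str(uuid.uuid4())  # persistent identifier stored in the browser
    profiles[cookie_id].append((site, page))
    return cookie_id

# The same browser visiting two unrelated sites is linked by one ID:
cid = serve_ad(None, "kids-games.example", "/puzzles")
serve_ad(cid, "cartoon-videos.example", "/episodes")
print(profiles[cid])
# [('kids-games.example', '/puzzles'), ('cartoon-videos.example', '/episodes')]
```

No name or email address is involved, yet the browsing history accumulates under one stable ID, which is why the 2013 revision pulled such identifiers inside COPPA's definition of personal information.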

The announcement by AG Underwood explained the alleged violations in detail. Despite policies to the contrary:

"... AOL nevertheless used its display ad exchange to conduct billions of auctions for ad space on websites that it knew to be directed to children under the age of 13 and subject to COPPA. AOL obtained this knowledge in two ways. First, several AOL clients provided notice to AOL that their websites were subject to COPPA. These clients identified more than a dozen COPPA-covered websites to AOL. AOL conducted at least 1.3 billion auctions of display ad space from these websites. Second, AOL itself determined that certain websites were directed to children under the age of 13 when it conducted a review of the content and privacy policies of client websites. Through these reviews, AOL identified hundreds of additional websites that were subject to COPPA. AOL conducted at least 750 million auctions of display ad space from these websites."

AG Underwood said in a statement:

"COPPA is meant to protect young children from being tracked and targeted by advertisers online. AOL flagrantly violated the law – and children’s privacy – and will now pay the largest-ever penalty under COPPA. My office remains committed to protecting children online and will continue to hold accountable those who violate the law."

At press time, a check of both the press and "company values" sections of Oath's site found no mention of the settlement. TechCrunch reported on December 4th:

"We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which it writes: "We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online." The spokesman also did not confirm nor dispute the contents of the New York Times report."

Hmmm. Almost a week has passed since AG Underwood's December 4th announcement. You'd think that Oath management would have released a statement by now. Maybe Oath isn't as committed to children's online privacy as they claim. Something for parents to note.

The National Law Review provided some context:

"...in 2016, the New York AG concluded a two-year investigation into the tracking practices of four online publishers for alleged COPPA violations... As recently as September of this year, the New Mexico AG filed a lawsuit for alleged COPPA violations against a children's game app company, Tiny Lab Productions, and the online ad companies that work within Tiny Lab's [apps], including those run by Google and Twitter... The Federal Trade Commission (FTC) continues to vigorously enforce COPPA, closing out investigations of alleged COPPA violations against smart toy manufacturer VTech and online talent search company Explore Talent... there have been a total of 28 enforcement proceedings since the COPPA rule was issued in 2000."

You can read about many of these actions in this blog, and how COPPA was strengthened in 2013.

So, the COPPA law works well and it is being vigorously enforced. Kudos to AG Underwood, her staff, and other states' AGs for taking these actions. What are your opinions about the AOL/Oath settlement?


Google Admitted Tracking Users' Location Even When Phone Setting Disabled

If you are considering, or already have, a smartphone running Google's Android operating system (OS), then take note. ZDNet reported (emphasis added):

"Phones running Android have been gathering data about a user's location and sending it back to Google when connected to the internet, with Quartz first revealing the practice has been occurring since January 2017. According to the report, Android phones and tablets have been collecting the addresses of nearby cellular towers and sending the encrypted data back, even when the location tracking function is disabled by the user... Google does not make this explicitly clear in its Privacy Policy, which means Android users that have disabled location tracking were still being tracked by the search engine giant..."

This is another reminder of the cost of free services and cheaper smartphones: you're gonna be tracked... extensively... whether you want it or not. Researchers often call this business model "surveillance capitalism."

A reader shared a blunt assessment, "There is no way to avoid being Google’s property (a/k/a its bitch) if you use an Android phone." Harsh, but accurate. What is your opinion?


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

On Friday November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of a DPC investigation of the LinkedIn.com social networking site, prompted by a 2017 complaint from a person who didn't use the service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could use the Facebook platform to deliver advertisements encouraging those people to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn Corp acquired and processed the email addresses of 18 million non-members without instruction from its data controller, LinkedIn Ireland, as the law requires. Not good.
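The quote's "hashed email addresses" refers to a standard audience-matching technique: each address is normalized and hashed, and the ad platform compares the hashes against the hashes of its own users' addresses. A minimal sketch follows; the normalization rules are illustrative assumptions, not any platform's documented API:

```python
import hashlib

def hash_email(email):
    """Normalize, then hash with SHA-256 -- the general form ad platforms
    accept for audience matching. Normalization here is illustrative."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two copies of the same address, formatted differently, hash identically,
# so the platform can match them to an existing account:
print(hash_email("Jane.Doe@example.com "))
print(hash_email("jane.doe@example.com"))
# both lines print the same 64-character digest
```

Note that hashing provides little privacy here: because the platform already holds the plaintext addresses of its own users, it can trivially recompute and match the hashes, which is why regulators treat hashed addresses as personal data.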

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests the activity violated data protection law, as the European Union (EU) implemented its stronger privacy framework, the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The new GDPR applies to both companies operating within the EU, and to companies located outside of the EU which offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international [users] are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current[ly] have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There are several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistent with any promises they made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC didn't fine LinkedIn probably because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied to violations occurring on or after May 25, 2018, and
  • People in different areas of the world view privacy and data protection differently - as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize regional differences, find ways to sell products and services across areas without degraded customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."
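The score-then-triage flow Facebook describes can be sketched in a few lines. This is a hypothetical illustration of the general technique; the threshold value, field names, and scores are assumptions, not Facebook's actual system:

```python
# Hypothetical sketch: a classifier score prioritizes the human review
# queue, and only very-high-confidence posts are removed automatically.
AUTO_REMOVE_THRESHOLD = 0.99  # assumed cutoff for automatic removal

posts = [
    {"id": 1, "score": 0.42},
    {"id": 2, "score": 0.995},
    {"id": 3, "score": 0.88},
]

# Very-high-confidence posts are removed without waiting for a reviewer:
auto_removed = [p["id"] for p in posts if p["score"] >= AUTO_REMOVE_THRESHOLD]

# Everything else goes to human reviewers, highest-risk first:
review_queue = sorted(
    (p for p in posts if p["score"] < AUTO_REMOVE_THRESHOLD),
    key=lambda p: p["score"],
    reverse=True,
)

print(auto_removed)                     # [2]
print([p["id"] for p in review_queue])  # [3, 1]
```

The design trade-off is visible even in this toy version: a lower threshold removes more content automatically but increases wrongful takedowns, which is why an appeal process matters.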

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators prioritize terrorism threats, with automatic removal reserved for very-high-confidence cases -- and the news item also mentioned its appeal process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leans heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts removed in six months. Sounds good, but it makes one wonder about the digital arms race underway. If bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, then not good.

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey due to:

"... a $10,000 [donation] to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus VR in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries, which value citizens' privacy, will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published "Delay, Deny, and Deflect," a detailed, comprehensive investigative report based on interviews with at least 50 people:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth and lack of focus to identify and respond to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and well-connected politicians of both parties. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described the history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that it has invested more = doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported about 13 historical national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company that wants its social networking service to be a reliable, unbiased news source. This news is not good for a company wanting its users to accept that it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.


ABA Updates Guidance For Attorneys' Data Security And Data Breach Obligations. What Their Clients Can Expect

To provide the best representation, attorneys often process and archive sensitive information about their clients. Consumers hire attorneys to complete a variety of transactions: buy (or sell) a home, start (or operate) a business, file a complaint against a company, insurer, or website for unsatisfactory service, file a complaint against a former employer, and more. What are attorneys' obligations regarding data security to protect their clients' sensitive information, intellectual property, and proprietary business methods?

What can consumers expect when the attorney or law firm they've hired experienced a data breach? Yes, law firms experience data breaches. The National Law Review reported last year:

"2016 was the year that law firm data breaches landed and stayed squarely in both the national and international headlines. There have been numerous law firm data breaches involving incidents ranging from lost or stolen laptops and other portable media to deep intrusions... In March, the FBI issued a warning that a cybercrime insider-trading scheme was targeting international law firms to gain non-public information to be used for financial gain. In April, perhaps the largest volume data breach of all time involved law firm Mossack Fonseca in Panama... Finally, Chicago law firm, Johnson & Bell Ltd., was in the news in December when a proposed class action accusing them of failing to protect client data was unsealed."

So, what can clients expect regarding data security and data breaches? A post in the Lexology site reported:

"Lawyers don’t get a free pass when it comes to data security... In a significant ethics opinion issued last month, Formal Opinion 483, Lawyers’ Obligations After an Electronic Data Breach or Cyberattack, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility provides a detailed roadmap to a lawyer’s obligations to current and former clients when they learn that they – or their firm – have been the subject of a data breach... a lawyer’s compliance with state or federal data security laws does "not necessarily achieve compliance with ethics obligations," and identifies six ABA Model Rules that might be implicated in the breach of client information."

Readers of this blog are familiar with the common definition of a data breach: unauthorized persons have accessed, stolen, altered, and/or destroyed information they shouldn't have. Attorneys have an obligation to use technology competently. The post by Patterson Belknap Webb & Tyler LLP also stated:

"... lawyers have an obligation to take “reasonable steps” to monitor for data breaches... When a breach is detected, a lawyer must act “reasonably and promptly” to stop the breach and mitigate damages resulting from the breach... A lawyer must make reasonable efforts to assess whether any electronic files were, in fact, accessed and, if so, identify them. This requires a post-breach investigation... Lawyers must then provide notice to their affected clients of the breach..."

I read the ABA Formal Opinion 483. (A copy of the opinion is also available here.) A follow-up post this week by the National Law Review listed 10 best practices to stop cyberattacks and breaches. Since many law firms outsource some back-office functions, this might be the most important best-practice item:

"4. Evaluate Your Vendors’ Security: Ask to see your vendor’s security certificate. Review the vendor’s security system as you would your own, making sure they exercise the same or stronger security systems than your own law firm..."


Some Surprising Facts About Facebook And Its Users

The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (at March/April 2018) from 71% during 2014 to 2015. More teens use other social media services: YouTube (85%), Instagram (72%) and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays the items it thinks users will engage with; that is, click the "Like" or other reaction buttons. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.
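To see why some posts never reach your feed, consider a toy sketch of engagement-based ranking. Facebook's actual algorithm is proprietary and far more complex; the signal names and weights below are invented purely for illustration.

```python
# Toy sketch of engagement-based feed ranking. Facebook's real algorithm
# is secret; 'affinity' and 'reactions' and their weights are made up here.

def rank_feed(posts, top_n=3):
    """Order posts by a simple engagement score and keep only the top N.

    Each post is a dict with hypothetical signals: how often you engage
    with this author ('affinity') and the post's overall reactions.
    """
    def score(post):
        return post["affinity"] * 2.0 + post["reactions"] * 0.5

    ranked = sorted(posts, key=score, reverse=True)
    return ranked[:top_n]  # everything below the cutoff is never shown

posts = [
    {"author": "close friend", "affinity": 10, "reactions": 4},
    {"author": "news page you like", "affinity": 6, "reactions": 50},
    {"author": "acquaintance", "affinity": 1, "reactions": 2},
    {"author": "group you rarely open", "affinity": 0, "reactions": 1},
]

for post in rank_feed(posts):
    print(post["author"])
```

Note how the low-engagement group's post is silently dropped: nothing tells you it existed. That is the filtering behavior the survey respondents said they didn't understand.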

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it yet, PBS broadcast a two-part documentary titled, "The Facebook Dilemma" (see trailer below), which arguably could have been titled, "the dark side of sharing." The Frontline documentary rightly discusses Facebook's approaches to news, privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained that during an April:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for greater transparency and control claims by Mr. Zuckerberg. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means, if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. This means Facebook acts more like a massive advertising network than simply a social media service.

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


Senator Wyden Introduces Bill To Help Consumers Regain Online Privacy And Control Over Sensitive Data

Late last week, Senator Ron Wyden (Dem - Oregon) introduced a "discussion draft" of legislation to help consumers recover online privacy and control over their sensitive personal data. Senator Wyden said:

"Today’s economy is a giant vacuum for your personal information – Everything you read, everywhere you go, everything you buy and everyone you talk to is sucked up in a corporation’s database. But individual Americans know far too little about how their data is collected, how it’s used and how it’s shared... It’s time for some sunshine on this shadowy network of information sharing. My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules with real teeth to punish companies that abuse Americans’ most private information.”

The press release by Senator Wyden's office explained the need for new legislation:

"The government has failed to respond to these new threats: a) Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities; b) Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data; c) Consumers have no effective way to control companies’ use and sharing of their data."

Consumers in the United States lost both control and privacy protections when the U.S. Federal Communications Commission (FCC), led by Trump appointee Ajit Pai, a former Verizon lawyer, repealed both broadband privacy and net neutrality protections last year. A December 2017 study of 1,077 voters found that most want net neutrality protections. President Trump signed the privacy-rollback legislation in April 2017. A prior blog post listed many historical abuses of consumers by some internet service providers (ISPs).

With broadband privacy protections repealed, ISPs are free to collect and archive as much data about consumers as they want, without notifying consumers, obtaining their approval, or disclosing with whom they share the archived data. That's 100 percent freedom for ISPs and zero freedom for consumers.

By repealing online privacy and net neutrality protections for consumers, the FCC essentially punted responsibility to the U.S. Federal Trade Commission (FTC). According to Senator Wyden's press release:

"The FTC, the nation’s main privacy and data security regulator, currently lacks the authority and resources to address and prevent threats to consumers’ privacy: 1) The FTC cannot fine first-time corporate offenders. Fines for subsequent violations of the law are tiny, and not a credible deterrent; 2) The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money; 3) The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator; and 4) The FTC does not have enough staff, especially skilled technology experts. Currently about 50 people at the FTC police the entire technology sector and credit agencies."

This means consumers have no protections nor legal options unless the company, or website, violates its published terms of use and privacy policies. To solve the above gaps, Senator Wyden's new legislation, titled the Consumer Data Privacy Act (CDPA), contains several new and stronger protections. It:

"... allows consumers to control the sale and sharing of their data, gives the FTC the authority to be an effective cop on the beat, and will spur a new market for privacy-protecting services. The bill empowers the FTC to: i) Establish minimum privacy and cybersecurity standards; ii) Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives; iii) Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information. It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized; iv) Give consumers a way to review what personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it; v) Hire 175 more staff to police the largely unregulated market for private data; and vi) Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy, and security."
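To make the proposed penalty concrete, here is the arithmetic behind the "up to 4% of annual revenue" fine cap. The revenue figure below is a made-up example, not any real company's financials.

```python
def max_first_offense_fine(annual_revenue):
    """Upper bound of the fine proposed in the CDPA draft:
    up to 4% of annual revenue, even for a first offense."""
    return 0.04 * annual_revenue

# Hypothetical company with $50 billion in annual revenue:
print(max_first_offense_fine(50_000_000_000))  # 2000000000.0 -> up to $2 billion
```

A penalty measured in billions, rather than today's token fines, is the kind of number that changes a corporate cost-benefit calculation.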

Permitting companies to charge consumers who opt out of data collection and sharing is a good thing. Why? Monthly payments by consumers are leverage -- a strong incentive for companies to provide better cybersecurity.

Business as usual -- current cybersecurity methods by corporate executives and government enforcement -- isn't enough. The ongoing tsunami of data breaches, including many during October alone and several notable breach events from earlier this year, is an indication.

The status quo, or business as usual, is unacceptable. Executives' behavior won't change without stronger consequences like jail time, since companies perform cost-benefit analyses regarding how much to spend on cybersecurity versus the probability of breaches and fines. Opt-outs of data collection and sharing by consumers, steeper fines, and criminal penalties could change those cost-benefit calculations.

Four former chief technologists at the FCC support Senator Wyden's legislation. Gabriel Weinberg, the Chief Executive Officer of DuckDuckGo, also supports it:

"Senator Wyden’s proposed consumer privacy bill creates needed privacy protections for consumers, mandating easy opt-outs from hidden tracking. By forcing companies that sell and monetize user data to be more transparent about their data practices, the bill will also empower consumers to make better-informed privacy decisions online, enabling companies like ours to compete on a more level playing field."

Regular readers of this blog know that the DuckDuckGo search engine (unlike the Google, Bing, and Yahoo search engines) doesn't track users, doesn't collect or archive data about users and their devices, and doesn't collect or store users' search criteria. So, DuckDuckGo users can search knowing their data isn't being sold to advertisers, data brokers, and others.

Lastly, Wyden's proposed legislation includes several key definitions (emphasis added):

"... The term "automated decision system" means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers... The term "automated decision system impact assessment" means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security that includes... The term "data protection impact assessment" means a study evaluating the extent to which an information system protects the privacy and security of personal information the system processes... "

The draft legislation requires companies to perform both automated decision system impact assessments and data protection impact assessments, and requires the FTC to set the frequency and conditions for both. A copy of the CDPA draft is also available here (Adobe PDF; 67.7 k bytes).

This is a good start. It is important... critical... to hold accountable both corporate executives and the automated decision systems they approve and deploy. Based upon history, outsourcing has been one corporate tactic to manage liability by shifting it to providers. It is good to close now any loopholes where executives could abuse artificial intelligence and related technologies to avoid responsibility.

What are your thoughts, opinions of the proposed legislation?


Aetna To Pay More Than $17 Million To Resolve 2 Privacy Breaches

Aetna inked settlement agreements with several states, including New Jersey, to resolve disclosures of sensitive patient information. According to an announcement by the Attorney General for New Jersey, the settlement agreements resolve:

"... a multi-state investigation focused on two separate privacy breaches by Aetna that occurred in 2017 – one involving a mailing that potentially revealed information about addressees’ HIV/AIDS status, the other involving a mailing that potentially revealed individuals’ involvement in a study of patients with atrial fibrillation (or AFib)..."

Connecticut, Washington, and the District of Columbia joined New Jersey in both the investigation and the settlement agreements. The multi-state investigation found:

"... that Aetna inadvertently disclosed HIV/AIDS-related information about thousands of individuals across the U.S. – including approximately 647 New Jersey residents – through a third-party mailing on July 28, 2017. The envelopes used in the mailing had an over-sized, transparent glassine address window, which revealed not only the recipients’ names and addresses, but also text that included the words “HIV Medications"... The second breach occurred in September 2017 and involved a mailing sent to 1,600 individuals concerning a study of patients with AFib. The envelopes for the mailing included the name and logo for the study – IMPACT AFib – which could have been interpreted as indicating that the addressee had an AFib diagnosis... Aetna not only violated the federal Health Insurance Portability and Accountability Act (HIPAA), but also state laws pertaining to the protected health information of individuals in general, and of persons with AIDS or HIV infection in particular..."

A class-action lawsuit filed on behalf of affected HIV/AIDS patients has also been settled, pending approval from a federal court; that settlement requires Aetna to pay about $17 million to resolve the allegations. Terms of the multi-state settlement agreement require Aetna to pay a $365,211.59 civil penalty to New Jersey, and:

  • Implement policy, processes, and employee training reforms to both better protect persons' protected health information, and ensure mailings maintain persons' privacy; and
  • Hire an independent consultant to evaluate and report on its privacy protection practices, and to monitor its compliance with the terms of the settlement agreements.

In December of last year, CVS Health and Aetna announced a merger agreement where CVS Health acquired Aetna for about $69 billion. Last week, CVS Health announced an expansion of its board of directors to include the addition of three directors from its Aetna unit. At press time, neither company's website mentioned the multi-state settlement agreement.


'Got Another Friend Request From You' Warnings Circulate On Facebook. What's The Deal?

Several people have posted on their Facebook News Feeds messages with warnings, such as:

"Please do not accept any new Friend requests from me"

And:

"Hi … I actually got another friend request from you yesterday … which I ignored so you may want to check your account. Hold your finger on the message until the forward button appears … then hit forward and all the people you want to forward too … I had to do the people individually. Good Luck!"

Maybe, you've seen one of these warnings. Some of my Facebook friends posted these warnings in their News Feed or in private messages via Messenger. What's happening? The fact-checking site Snopes explained:

"This message played on warnings about the phenomenon of Facebook “pirates” engaging in the “cloning” of Facebook accounts, a real (but much over-hyped) process by which scammers target existing Facebook users accounts by setting up new accounts with identical profile pictures and names, then sending out friend requests which appear to originate from those “cloned” users. Once those friend requests are accepted, the scammers can then spread messages which appear to originate from the targeted account, luring that person’s friends into propagating malware, falling for phishing schemes, or disclosing personal information that can be used for identity theft."

Hacked Versus Cloned Accounts

While everyone wants to warn their friends, it is important to do your homework first. Many Facebook users have confused "hacked" versus "cloned" accounts. A hack is when another person has stolen your password and used it to sign into your account to post fraudulent messages -- pretending to be you.

Snopes described above what a "cloned" account is... basically a second, unauthorized account. Sadly, there are plenty of online sources from which scammers can obtain stolen photos and information to create cloned accounts. One source is the multitude of massive corporate data breaches: Equifax, Nationwide, Facebook, the RNC, Uber, and others. Another source is Facebook friends with sloppy security settings on their accounts: the "Public" setting is no security at all, and it allows scammers to harvest your photos and information via your friends' wide-open accounts.

It is important to know the differences between "hacked" and "cloned" accounts. Snopes advised:

"... there would be no utility to forwarding [the above] warning to any of your Facebook friends unless you had actually received a second friend request from one of them. Moreover, even if this warning were possibly real, the optimal approach would not be for the recipient to forward it willy-nilly to every single contact on their friends list... If you have reason to believe your Facebook account might have been “cloned,” you should try sending separate private messages to a few of your Facebook friends to check whether any of them had indeed recently received a duplicate friend request from you, as well as searching Facebook for accounts with names and profile pictures identical to yours. Should either method turn up a hit, use Facebook’s "report this profile" link to have the unauthorized account deactivated."

Cloned Accounts

If you received a (second) Friend Request from a person who you are already friends with on Facebook, then that suggests a cloned account. (Cloned accounts are not new. It's one of the disadvantages of social media.) Call your friend on the phone or speak with him/her in-person to: a) tell him/her you received a second Friend Request, and b) determine whether or not he/she really sent that second Friend Request. (Yes, online privacy takes some effort.) If he/she didn't send a second Friend Request, then you know what to do: report the unauthorized profile to Facebook, and then delete the second Friend Request. Don't accept it.

If he/she did send a second Friend Request, ask why. (Let's ignore the practice by some teens to set up multiple accounts; one for parents and a second for peers.) I've had friends -- adults -- forget their online passwords, and set up a second Facebook account -- a clumsy, confusing solution. Not everyone has good online skills. Your friend will tell you which account he/she uses and which account he/she wants you to connect to. Then, un-Friend the other account.

Hacked Accounts

All Facebook users should know how to determine whether their account has been hacked. Online privacy takes effort. How to check:

  1. Sign into Facebook
  2. Select "Settings."
  3. Select "Security and Login."
  4. You will see a list of the locations where your account has been accessed. If one or more of the locations weren't you, then it's likely another person has stolen and used your password. Proceed to step #5.
  5. For each location that wasn't you, select "Not You" and then "Secure Account." Follow the online instructions displayed and change your password immediately.

I've performed this check after friends have (erroneously) informed me that my account was hacked. It wasn't.

Facebook Search and Privacy Settings

Those wanting to be proactive can search the Facebook site to find other persons using the same name. Simply enter your name in the search box. The results page lists other accounts with the same name. If you see another account using your identical profile photo (and/or other identical personal information and photos), then use Facebook's "report this profile" link to report the unauthorized account.

You can go one step further and warn your Facebook friends who have the "Public" security setting on their accounts. They may be unaware of the privacy risks, and once informed may change their security setting to "Friends Only." Hopefully, they will listen.

If they don't listen, you can suggest that he/she at a minimum change other privacy settings. Users control who can see their photos and list of friends on Facebook. To change the privacy setting, navigate to your Friends List page and select the edit icon. Then, select the "Edit Privacy" link. Next, change both privacy settings for, "Who can see your friends?" and "Who can see the people, Pages, and lists you follow?" to "Only Me." As a last resort, you can un-Friend the security neophyte, if he/she refuses to make any changes to their security settings.


Why The Recent Facebook Data Breach Is Probably Much Worse Than You First Thought

The recent data breach at Facebook appears to be much worse than first thought. It's not just that a known 50 million users were affected, and that 40 million more may also be affected. There's more. The New York Times reported on Tuesday:

"... the impact could be significantly bigger since those stolen credentials could have been used to gain access to so many other sites. Companies that allow customers to log in with Facebook Connect are scrambling to figure out whether their own user accounts have been compromised."

Facebook Connect, an online tool launched in 2008, allows users to sign into other apps and websites using their Facebook credentials (e.g., username, password). Many small, medium, and large businesses joined the Facebook Connect program, which offered:

"... a simple proposition: Connect to our platform, and we’ll make it faster and easier for people to use your apps... The tool was adopted by thousands of other firms, from mom-and-pop publishing companies to high-profile tech outfits like Airbnb and Uber."

Initially, Facebook Connect made online life easier and more convenient. Users could sign up for new apps and sites without having to create and remember new sign-in credentials:

"But in July 2017, that measure of security fell short. By exploiting three software bugs, attackers forged “access tokens,” digital keys used to gain entry to a user’s account. From there, the hackers were able to do anything users could do on their own Facebook accounts, including logging in to third-party apps."

On Tuesday, Facebook released a "Login Update," which said in part:

"We have now analyzed our logs for all third-party apps installed or logged in during the attack we discovered last week. That investigation has so far found no evidence that the attackers accessed any apps using Facebook Login.

Any developer using our official Facebook SDKs — and all those that have regularly checked the validity of their users’ access tokens – were automatically protected when we reset people’s access tokens. However, out of an abundance of caution, as some developers may not use our SDKs — or regularly check whether Facebook access tokens are valid — we’re building a tool to enable developers to manually identify the users of their apps who may have been affected, so that they can log them out."
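Facebook's update tells developers to check whether their users' access tokens are still valid. Below is a minimal Python sketch of such a check, assuming the Graph API's documented `debug_token` endpoint and its JSON response shape; verify the current endpoint, parameter names, and fields against Facebook's developer documentation before relying on this.

```python
import json
from urllib.parse import urlencode

# Endpoint and parameter names follow Facebook's documented token-debugging
# API as of this writing; treat them as assumptions and confirm in the docs.
GRAPH_URL = "https://graph.facebook.com/debug_token"

def build_debug_token_url(user_token, app_token):
    """Build the Graph API debug_token request URL for a user's token."""
    return GRAPH_URL + "?" + urlencode(
        {"input_token": user_token, "access_token": app_token}
    )

def token_is_valid(response_body):
    """Parse a debug_token JSON response and report validity.

    A developer would log the user out (and discard the stored token)
    when this returns False.
    """
    data = json.loads(response_body).get("data", {})
    return bool(data.get("is_valid"))

# Example response shape (abbreviated) for a token invalidated by the reset:
sample = '{"data": {"app_id": "1234", "is_valid": false}}'
print(token_is_valid(sample))  # False
```

In practice, a developer would fetch the URL from `build_debug_token_url` over HTTPS (e.g., with `urllib.request`) and log out any user whose stored token comes back invalid, which is exactly the remediation Facebook describes above.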

So, expect more news and updates about this. According to the New York Times, some companies' experiences so far:

"Tinder, the dating app, has found no evidence that accounts have been breached, based on the "limited information Facebook has provided," Justine Sacco, a spokeswoman for Tinder and its parent company, the Match Group, said in a statement... The security team at Uber, the ride-hailing giant, is logging some users out of their accounts to be cautious, said Melanie Ensign, a spokeswoman for Uber. It is asking them to log back in — a preventive measure that would invalidate older, stolen access tokens."


Tips For Parents To Teach Their Children Online Safety

Today's children often start using mobile devices at very young ages... four, five, or six years old. And they don't know anything about online dangers: computer viruses, stalking, cyber-bullying, identity theft, phishing scams, ransomware, and more. Nor do they know how to read terms-of-use and privacy policies. It is parents' responsibility to teach them.

NordVPN, a maker of privacy software, offers several tips to help parents teach their children about online safety:

"1. Set an example: If you want your kid to be careful and responsible online, you should start with yourself."

Children watch their parents. If you practice good online safety habits, they will learn from watching you. And:

"2. Start talking to your kid early and do it often: If your child already knows how to play a video on Youtube or is able to download a gaming app without your help, they also should learn how to do it safely. Therefore, it’s important to start explaining the basics of privacy and cybersecurity at an early age."

So, long before having the "sex talk" with your children, parents should have the online safety talk. Developing good online safety habits at a young age will help children throughout their lives; especially as adults:

"3. Explain why safe behavior matters: Give relatable examples of what personal information is – your address, social security number, phone number, account credentials, and stress why you can never share this information with strangers."

You wouldn't give this information to a stranger on a city street. The same applies online. That also means discussing social media:

"4. Social media and messaging: a) don’t accept friend requests from people you don’t know; b) never send your pictures to strangers; c) make sure only your friends can see what you post on Facebook; d) turn on timeline review to check posts you are tagged in before they appear on your Facebook timeline; e) if someone asks you for some personal information, always tell your parents; f) don’t share too much on your profile (e.g., home address, phone number, current location); and g) don’t use your social media logins to authorize apps."

These are the basics. Read the entire list of online safety tips for parents by NordVPN.