A Series Of Recent Events And Privacy Snafus At Facebook Causes Multiple Concerns. Does Facebook Deserve Users' Data?

So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social networking service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th, that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix had accessed Facebook users' private messages or asked for that access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management over how partners and developers can access information? That's an understatement. During March and April of 2018, we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly later resell) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened so it doesn't happen again? This post-breach response seems sloppy. And, the "bug" description seems disingenuous. Anytime persons -- in this case developers -- have access to data they shouldn't have, it is a data breach.

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage or control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. Many users reported repeated bogus Friend Requests due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from those sent by spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook had approached several major banks, asking them to share their customers' detailed financial information in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. 9to5Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content and to stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users:

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to launch several types of attacks (e.g., spam, phishing) against Facebook users. The stolen data also allows foreign spy agencies to collect information useful for targeting persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its own platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggests not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question:

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and WhatsApp accounts, since Facebook operates those services, too. If you want to quit the service but suffer from FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, or use its voting and ballot preview content. Long ago, I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post on Facebook links to posts in this blog, since it is helpful information for many Facebook users. In what ways have you reduced your usage of Facebook?


China Blamed For Cyberattack In The Gigantic Marriott-Starwood Hotels Data Breach

Here is an update on the gigantic Marriott-Starwood data breach, in which details about 500 million guests were stolen. The New York Times reported that the cyberattack:

"... was part of a Chinese intelligence-gathering effort that also hacked health insurers and the security clearance files of millions more Americans, according to two people briefed on the investigation. The hackers, they said, are suspected of working on behalf of the Ministry of State Security, the country’s Communist-controlled civilian spy agency... While American intelligence agencies have not reached a final assessment of who performed the hacking, a range of firms brought in to assess the damage quickly saw computer code and patterns familiar to operations by Chinese actors... China has reverted over the past 18 months to the kind of intrusions into American companies and government agencies that President Barack Obama thought he had ended in 2015 in an agreement with Mr. Xi. Geng Shuang, a spokesman for China’s Ministry of Foreign Affairs, denied any knowledge of the Marriott hacking..."

Why would any country's intelligence agency want to hack a hotel chain's database? The Times explained:

"The Marriott database contains not only credit card information but passport data. Lisa Monaco, a former homeland security adviser under Mr. Obama, noted last week at a conference that passport information would be particularly valuable in tracking who is crossing borders and what they look like, among other key data."

Also, context matters. First, this corporate acquisition was (thankfully) blocked:

"The effort to amass Americans’ personal information so alarmed government officials that in 2016, the Obama administration threatened to block a $14 billion bid by China’s Anbang Insurance Group Co. to acquire Starwood Hotel & Resorts Worldwide, according to one former official familiar with the work of the Committee on Foreign Investments in the United States, a secretive government body that reviews foreign acquisitions..."

Later that year, Marriott Hotels acquired Starwood for $13.6 billion. Second, remember the massive government data breach in 2014 at the Office of Personnel Management (OPM). The New York Times added that the Marriott breach:

"... was only part of an aggressive operation whose centerpiece was the 2014 hacking into the Office of Personnel Management. At the time, the government bureau loosely guarded the detailed forms that Americans fill out to get security clearances — forms that contain financial data; information about spouses, children and past romantic relationships; and any meetings with foreigners. Such information is exactly what the Chinese use to root out spies, recruit intelligence agents and build a rich repository of Americans’ personal data for future targeting..."

Not good. And this is not the first time concerns about China have been raised. Reports surfaced in 2016 about malware installed in the firmware of smartphones running the Android operating system (OS) software. In 2015, China enacted a new "secure and controllable" security law, which many security experts viewed at the time as a method to ensure that back doors were built into computing products and devices during the manufacturing and assembly process.

And even if China's MSS didn't perform this massive cyberattack, it could have been another country's intelligence agency. That's not good, either.

Regardless of who the attackers were, this incident is a huge reminder to executives in government and in the private sector to secure their computer systems. Hopefully, executives at major hotel chains -- especially those frequented by government officials and military members -- now realize that their systems are high-value targets.


Your Medical Devices Are Not Keeping Your Health Data to Themselves

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Derek Kravitz and Marshall Allen, ProPublica

Medical devices are gathering more and more data from their users, whether it’s their heart rates, sleep patterns or the number of steps taken in a day. Insurers and medical device makers say such data can be used to vastly improve health care.

But the data that’s generated can also be used in ways that patients don’t necessarily expect. It can be packaged and sold for advertising. It can be anonymized and used by customer support and information technology companies. Or it can be shared with health insurers, who may use it to deny reimbursement. Privacy experts warn that data gathered by insurers could also be used to rate individuals’ health care costs and potentially raise their premiums.

Patients typically have to give consent for their data to be used — so-called “donated data.” But some patients said they weren’t aware that their information was being gathered and shared. And once the data is shared, it can be used in a number of ways. Here are a few of the most popular medical devices that can share data with insurers:

Continuous Positive Airway Pressure, or CPAP, Machines

What Are They?

One of the more popular devices for those with sleep apnea, CPAP machines are covered by insurers after a sleep study confirms the diagnosis. These units, which deliver pressurized air through masks worn by patients as they sleep, collect data and transmit it wirelessly.

What Do They Collect?

It depends on the unit, but CPAP machines can collect data on the number of hours a patient uses the device, the number of interruptions in sleep and the amount of air that leaks from the mask.

Who Gets the Info?

The data may be transmitted to the makers or suppliers of the machines. Doctors may use it to assess whether the therapy is effective. Health insurers may receive the data to track whether patients are using their CPAP machines as directed. They may refuse to reimburse the costs of the machine if the patient doesn’t use it enough. The device maker ResMed said in a statement that patients may withdraw their consent to have their data shared.

Heart Monitors

What Are They?

Heart monitors, oftentimes small, battery-powered devices worn on the body and attached to the skin with electrodes, measure and record the heart’s electrical signals, typically over a few days or weeks, to detect things like irregular heartbeats or abnormal heart rhythms. Some devices implanted under the skin can last up to five years.

What Do They Collect?

Wearable ones include Holter monitors, wired external devices that attach to the skin, and event recorders, which can track slow or fast heartbeats and fainting spells. Data can also be shared from implanted pacemakers, which keep the heart beating properly for those with arrhythmias.

Who Gets the Info?

Low resting heart rates or other abnormal heart conditions are commonly used by insurance companies to place patients in more expensive rate classes. Children undergoing genetic testing are sometimes outfitted with heart monitors before their diagnosis, increasing the odds that their data is used by insurers. This sharing is the most common complaint cited by the World Privacy Forum, a consumer rights group.

Blood Glucose Monitors

What Are They?

Millions of Americans who have diabetes are familiar with blood glucose meters, or glucometers, which take a blood sample on a strip of paper and analyze it for glucose, or sugar, levels. This allows patients and their doctors to monitor their diabetes so they don’t have complications like heart or kidney disease. Blood glucose meters are used by the more than 1.2 million Americans with Type 1 diabetes, which is usually diagnosed in children, teens and young adults.

What Do They Collect?

Blood sugar monitors measure the concentration of glucose in a patient’s blood, a key indicator of proper diabetes management.

Who Gets the Info?

Diabetes monitoring equipment is sold directly to patients, but many still rely on insurer-provided devices. To get reimbursement for blood glucose meters, health insurers will typically ask for at least a month’s worth of blood sugar data.

Lifestyle Monitors

What Are They?

Step counters, medication alerts and trackers, and in-home cameras are among the devices in the increasingly crowded lifestyle health industry.

What Do They Collect?

Many health data research apps are made up of “donated data,” which is provided by consumers and falls outside of federal guidelines that require the sharing of personal health data be disclosed and anonymized to protect the identity of the patient. This data includes everything from counters for the number of steps you take, the calories you eat and the number of flights of stairs you climb to more traditional health metrics, such as pulse and heart rates.

Who Gets the Info?

It varies by device. But the makers of the Fitbit step counter, for example, say they never sell customer personal data or share personal information unless a user requests it; it is part of a legal process; or it is provided on a “confidential basis” to a third-party customer support or IT provider. That said, Fitbit allows users who give consent to share data “with a health insurer or wellness program,” according to a statement from the company.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


House Oversight Committee Report On The Equifax Data Breach. Did The Recommendations Go Far Enough?

On Monday, the U.S. House of Representatives Committee on Oversight and Government Reform released its report (Adobe PDF) on the massive Equifax data breach, in which the most sensitive personal and payment information of more than 148 million consumers -- nearly half of the U.S. population -- was accessed and stolen. From the report summary:

"In 2005, former Equifax Chief Executive Officer (CEO) Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, information technology (IT) systems, and data. While the acquisition strategy was successful for Equifax’s bottom line and stock price, this growth brought increasing complexity to Equifax’s IT systems, and expanded data security risks... Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable."

The report cited several failures by Equifax. First:

"On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Equifax used Apache Struts to run certain applications on legacy operating systems. The following day, the Department of Homeland Security alerted Equifax to this critical vulnerability. Equifax’s Global Threat and Vulnerability Management (GTVM) team emailed this alert to over 400 people on March 9, instructing anyone who had Apache Struts running on their system to apply the necessary patch within 48 hours. The Equifax GTVM team also held a March 16 meeting about this vulnerability. Equifax, however, did not fully patch its systems. Equifax’s Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed."
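A recurring theme in the report is that the patch alert went out, but nobody verified it had been applied everywhere. The kind of inventory check that was missing can be sketched in a few lines, assuming the standard struts2-core-&lt;version&gt;.jar naming convention (the directory layout and the three-part version boundaries below are simplifications for illustration, not Equifax's actual environment):

```python
import re
from pathlib import Path

# CVE-2017-5638, the Struts flaw used against Equifax, was fixed in
# Struts 2.3.32 and 2.5.10.1. The 2.5-branch boundary below is
# simplified to three version parts.
def parse_version(jar_name):
    """Extract (major, minor, patch) from a struts2-core-X.Y.Z.jar filename."""
    m = re.search(r"struts2-core-(\d+)\.(\d+)\.(\d+)", jar_name)
    return tuple(int(p) for p in m.groups()) if m else None

def is_vulnerable(version):
    """True for Struts versions below the CVE-2017-5638 fixes (simplified)."""
    return version < (2, 3, 32) or (2, 5, 0) <= version < (2, 5, 10)

def vulnerable_jars(root):
    """Walk a deployment tree and flag unpatched Struts core jars."""
    return sorted(str(p) for p in Path(root).rglob("struts2-core-*.jar")
                  if (v := parse_version(p.name)) and is_vulnerable(v))
```

Run daily against every deployment root and alerted on a non-empty result, a scan like this would have flagged the unpatched ACIS portal every day for the roughly two months between the alert and the attack.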

As bad as that is, it gets worse:

"On May 13, 2017, attackers began a cyberattack on Equifax. The attack lasted for 76 days. The attackers dropped “web shells” (a web-based backdoor) to obtain remote control over Equifax’s network. They found a file containing unencrypted credentials (usernames and passwords), enabling the attackers to access sensitive data outside of the ACIS environment. The attackers were able to use these credentials to access 48 unrelated databases."

"Attackers sent 9,000 queries on these 48 databases, successfully locating unencrypted personally identifiable information (PII) data 265 times. The attackers transferred this data out of the Equifax environment, unbeknownst to Equifax. Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate. On July 29, 2017, Equifax updated the expired certificate and immediately noticed suspicious web traffic..."
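The expired certificate is the most preventable failure on the list: certificate expiry dates are machine-readable, so a scheduled job can flag a lapsed or soon-to-lapse certificate long before 19 months go by. A minimal sketch using Python's standard library (the dates below are illustrative, not Equifax's actual certificate):

```python
import calendar
import ssl
import time

def days_until_expiry(not_after, now):
    """Days until a certificate's notAfter timestamp (negative = expired).

    `not_after` uses the format found in certificates, e.g.
    "Jan 31 00:00:00 2016 GMT"; `now` is seconds since the epoch (UTC).
    """
    return (ssl.cert_time_to_seconds(not_after) - now) / 86400

# An illustrative certificate that expired January 31, 2016, checked on
# the day Equifax finally renewed its own (July 29, 2017), is 545 days
# past due.
now = calendar.timegm(time.strptime("Jul 29 00:00:00 2017",
                                    "%b %d %H:%M:%S %Y"))
print(round(days_until_expiry("Jan 31 00:00:00 2016 GMT", now)))  # -545
```

A cron job comparing this number against a threshold (say, 30 days) for every certificate in inventory is all it takes to avoid a blind monitoring device.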

Findings so far: 1) growth prioritized over security while archiving highly valuable data; 2) antiquated computer systems; 3) failed security patches; 4) unprotected user credentials; and 5) failed intrusion detection mechanism. Geez!

Only after updating its expired security certificate did Equifax notice the intrusion. After that, you'd think that Equifax would have implemented a strong post-breach response. You'd be wrong. More failures:

"When Equifax informed the public of the breach on September 7, the company was unprepared to support the large number of affected consumers. The dedicated breach website and call centers were immediately overwhelmed, and consumers were not able to obtain timely information about whether they were affected and how they could obtain identity protection services."

"Equifax should have addressed at least two points of failure to mitigate, or even prevent, this data breach. First, a lack of accountability and no clear lines of authority in Equifax’s IT management structure existed, leading to an execution gap between IT policy development and operation. This also restricted the company’s implementation of other security initiatives in a comprehensive and timely manner. As an example, Equifax had allowed over 300 security certificates to expire, including 79 certificates for monitoring business critical domains."

"Second, Equifax’s aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and antiquated nature of Equifax’s IT systems made IT security especially challenging..."

Findings so far: 6) inadequate post-breach response; and 7) complicated IT structure making updates difficult. Geez!

The report listed the executives who retired and/or were fired. That's a small start for a company archiving the most sensitive personal and payment information of all USA citizens. The report included seven recommendations:

"1: Empower Consumers through Transparency. Consumer reporting agencies (CRAs) should provide more transparency to consumers on what data is collected and how it is used. A large amount of the public’s concern after Equifax’s data breach announcement stemmed from the lack of knowledge regarding the extensive data CRAs hold on individuals. CRAs must invest in and deploy additional tools to empower consumers to better control their own data..."

"2: Review Sufficiency of FTC Oversight and Enforcement Authorities. Currently, the FTC uses statutory authority under Section 5 of the Federal Trade Commission Act to hold businesses accountable for making false or misleading claims about their data security or failing to employ reasonable security measures. Additional oversight authorities and enforcement tools may be needed to enable the FTC to effectively monitor CRA data security practices..."

"3: Review Effectiveness of Identity Monitoring and Protection Services Offered to Breach Victims. The General Accounting Office (GAO) should examine the effectiveness of current identity monitoring and protection services and provide recommendations to Congress. In particular, GAO should review the length of time that credit monitoring and protection services are needed after a data breach to mitigate identity theft risks. Equifax offered free credit monitoring and protection services for one year to any consumer who requested it... This GAO study would help clarify the value of credit monitoring services and the length of time such services should be maintained. The GAO study should examine alternatives to credit monitoring services and identify additional or complementary services..."

"4: Increase Transparency of Cyber Risk in Private Sector. Federal agencies and the private sector should work together to increase transparency of a company’s cybersecurity risks and steps taken to mitigate such risks. One example of how a private entity can increase transparency related to the company’s cyber risk is by making disclosures in its Securities and Exchange Commission (SEC) filings. In 2011, the SEC developed guidance to assist companies in disclosing cybersecurity risks and incidents. According to the SEC guidance, if cybersecurity risks or incidents are “sufficiently material to investors” a private company may be required to disclose the information... Equifax did not disclose any cybersecurity risks or cybersecurity incidents in its SEC filings prior to the 2017 data breach..."

"5: Hold Federal Contractors Accountable for Cybersecurity with Clear Requirements. The Equifax data breach and federal customers’ use of Equifax identity validation services highlight the need for the federal government to be vigilant in mitigating cybersecurity risk in federal acquisition. The Office of Management and Budget (OMB) should continue efforts to develop a clear set of requirements for federal contractors to address increasing cybersecurity risks, particularly as it relates to handling of PII. There should be a government-wide framework of cybersecurity and data security risk-based requirements. In 2016, the Committee urged OMB to focus on improving and updating cybersecurity requirements for federal acquisition... The Committee again urges OMB to expedite development of a long-promised cybersecurity acquisition memorandum to provide guidance to federal agencies and acquisition professionals..."

"6: Reduce Use of Social Security Numbers as Personal Identifiers. The executive branch should work with the private sector to reduce reliance on Social Security numbers. Social Security numbers are widely used by the public and private sector to both identify and authenticate individuals. Authenticators are only useful if they are kept confidential. Attackers stole the Social Security numbers of an estimated 145 million consumers from Equifax. As a result of this breach, nearly half of the country’s Social Security numbers are no longer confidential. To better protect consumers from identity theft, OMB and other relevant federal agencies should pursue emerging technology solutions as an alternative to Social Security number use."

"7: Implement Modernized IT Solutions. Companies storing sensitive consumer data should transition away from legacy IT and implement modern IT security solutions. Equifax failed to modernize its IT environments in a timely manner. The complexity of the legacy IT environment hosting the ACIS application allowed the attackers to move throughout the Equifax network... Equifax’s legacy IT was difficult to scan, patch, and modify... Private sector companies, especially those holding sensitive consumer data like Equifax, must prioritize investment in modernized tools and technologies...."

The history of corporate data breaches and the above list of corporate failures by Equifax both should be warnings to anyone in government promoting the privatization of current government activities. Companies screw up stuff, too.

Recommendation #6 is frightening in that it hasn't been implemented. Yikes! No federal agency should do business with a private sector firm operating with antiquated computer systems. And, if Equifax can't protect the information it archives, it should cease to exist. While that sounds harsh, it ain't. Continual data breaches place risks and burdens upon already burdened consumers trying to control and protect their data.

What are your opinions of the report? Did it go far enough?


You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder. It’s a potentially serious disorder that, left untreated, can increase the risk of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
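Stated as code, the usage rule described above might look like the following sketch. This is a simplified illustration, not Medicare's official algorithm; the function name and data layout are my own, and "three months" is approximated as 90 days.

```python
from datetime import date, timedelta
from typing import Dict

def meets_medicare_compliance(nightly_hours: Dict[date, float],
                              start: date, total_days: int = 90) -> bool:
    """Rough sketch of the rule: at least 4 hours of CPAP use per night on
    at least 70% of nights in *some* 30-day window within the first
    `total_days` days of therapy."""
    for offset in range(total_days - 30 + 1):
        window_start = start + timedelta(days=offset)
        # Count nights in this 30-day window with >= 4 hours of use.
        compliant_nights = sum(
            1 for i in range(30)
            if nightly_hours.get(window_start + timedelta(days=i), 0.0) >= 4.0
        )
        if compliant_nights >= 21:  # 70% of 30 nights
            return True
    return False
```

As the article notes, a patient who struggles with the mask for the first month can fail every 30-day window and lose coverage, even if usage improves later.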

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.
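Using the figures reported in the suit, the markups are straightforward to compute (a quick illustrative sketch; the item names and layout are mine):

```python
# (patient charge under the Cigna plan, amount the supplier was paid)
items = {
    "disposable filter": (25.68, 7.50),
    "face mask": (147.78, 95.00),
}

def markup_pct(charged: float, paid: float) -> float:
    """Percent markup of the patient's charge over the supplier's payment."""
    return (charged - paid) / paid * 100

for name, (charged, paid) in items.items():
    print(f"{name}: {markup_pct(charged, paid):.0f}% markup")
```

By this arithmetic, the filter was marked up roughly 242 percent and the mask roughly 56 percent over what the supplier actually received.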

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Oath To Pay Almost $5 Million To Settle Charges By New York AG Regarding Children's Privacy Violations

Oath Inc. logo Barbara D. Underwood, the Attorney General (AG) for New York State, announced last week a settlement with Oath, Inc. for violating the Children’s Online Privacy Protection Act (COPPA). Oath Inc. is a wholly-owned subsidiary of Verizon Communications. Until June 2017, Oath was known as AOL Inc. ("AOL"). The announcement stated:

"The Attorney General’s Office found that AOL conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13. Through these auctions, AOL collected, used, and disclosed personal information from the websites’ users in violation of COPPA, enabling advertisers to track and serve targeted ads to young children. The company has agreed to adopt comprehensive reforms to protect children from improper tracking and pay a record $4.95 million in penalties..."

The United States Congress enacted COPPA in 1998 to protect the safety and privacy of young children online. As many parents know, young children don't understand complicated legal documents such as terms-of-use and privacy policies. COPPA prohibits operators of certain websites from collecting, using, or disclosing personal information (e.g., first and last name, e-mail address) of children under the age of 13 without first obtaining parental consent.

The definition of "personal information" was revised in 2013 to include persistent identifiers that can be used to recognize a user over time and across websites, such as the ID found in a web browser cookie or an Internet Protocol (“IP”) address. The revision effectively prohibits covered operators from using cookies, IP addresses, and other persistent identifiers to track users across websites for most advertising purposes on COPPA-covered websites.

The announcement by AG Underwood explained the alleged violations in detail. Despite policies to the contrary:

"... AOL nevertheless used its display ad exchange to conduct billions of auctions for ad space on websites that it knew to be directed to children under the age of 13 and subject to COPPA. AOL obtained this knowledge in two ways. First, several AOL clients provided notice to AOL that their websites were subject to COPPA. These clients identified more than a dozen COPPA-covered websites to AOL. AOL conducted at least 1.3 billion auctions of display ad space from these websites. Second, AOL itself determined that certain websites were directed to children under the age of 13 when it conducted a review of the content and privacy policies of client websites. Through these reviews, AOL identified hundreds of additional websites that were subject to COPPA. AOL conducted at least 750 million auctions of display ad space from these websites."

AG Underwood said in a statement:

"COPPA is meant to protect young children from being tracked and targeted by advertisers online. AOL flagrantly violated the law – and children’s privacy – and will now pay the largest-ever penalty under COPPA. My office remains committed to protecting children online and will continue to hold accountable those who violate the law."

At press time, a check of both the press and "company values" sections of Oath's site found no mention of the settlement. TechCrunch reported on December 4th:

"We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which it writes: "We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online." The spokesman also did not confirm nor dispute the contents of the New York Times report."

Hmmm. Almost a week has passed since AG Underwood's December 4th announcement. You'd think that Oath management would have released a statement by now. Maybe Oath isn't as committed to children's online privacy as they claim. Something for parents to note.

The National Law Review provided some context:

"...in 2016, the New York AG concluded a two-year investigation into the tracking practices of four online publishers for alleged COPPA violations... As recently as September of this year, the New Mexico AG filed a lawsuit for alleged COPPA violations against a children's game app company, Tiny Lab Productions, and the online ad companies that work within Tiny Lab's, including those run by Google and Twitter... The Federal Trade Commission (FTC) continues to vigorously enforce COPPA, closing out investigations of alleged COPPA violations against smart toy manufacturer VTech and online talent search company Explore Talent... there have been a total of 28 enforcement proceedings since the COPPA rule was issued in 2000."

You can read about many of these actions in this blog, and how COPPA was strengthened in 2013.

So, the COPPA law works well and it is being vigorously enforced. Kudos to AG Underwood, her staff, and other states' AGs for taking these actions. What are your opinions about the AOL/Oath settlement?


Massive Data Breach At Quora Affects 100 Million Users

Quora logo Quora, the knowledge-sharing social networking site, announced on Monday a data breach affecting about 100 million of its users. The company discovered the breach on Friday, and a breach investigation is ongoing.

The company’s Chief Executive Officer, Adam D’Angelo, wrote in a blog post that the following data elements were compromised or stolen:

"a) Account information, e.g. name, email address, encrypted password (hashed using bcrypt with a salt that varies for each user), data imported from linked networks when authorized by users; b) Public content and actions, e.g. questions, answers, comments, upvotes; and c) Non-public content and actions, e.g. answer requests, downvotes, direct messages (note that a low percentage of Quora users have sent or received such messages)"

Quora has invalidated affected users' passwords. Quora does not yet know exactly how unauthorized persons accessed its system. The breach announcement did not state when the intrusion began. D'Angelo added:

"We're still investigating the precise causes and in addition to the work being conducted by our internal security teams, we have retained a leading digital forensics and security firm to assist us. We have also notified law enforcement officials."

Affected users are being notified via email. Affected users returning to the site must reset their accounts with new passwords. Quora encourages users with questions to visit its breach help site. Users are warned to change their online passwords.

The New York Times reported:

"... the incident was unlikely to result in identity theft, as the site does not collect sensitive information such as credit card or Social Security numbers... 300 million people around the world use its site at least once a month to ask and answer questions about politics, faith, calculus, unrequited love, the meaning of life and more. By comparison, Twitter claims 326 million monthly active users. But since it blasted onto the social media landscape in 2010, igniting a blaze of interest among tech company employees, Quora has not become the mainstream cultural force that Twitter has..."

This breach is another reminder to all consumers to never use the same password at multiple sites. Cybercriminals are persistent, and will reuse stolen passwords to see which other sites they can break into to steal sensitive personal and payment information.

If you received an email breach notice from Quora, please share it below (after deleting any sensitive personal data).


Gigantic Data Breach At Marriott International Affects 500 Million Customers. Plenty Of Questions Remain

Marriott International logo A gigantic data breach at Marriott International affects about 500 million customers who have stayed at its Starwood network of hotels in the United States, Canada, and the United Kingdom. Marriott International announced the data breach on Friday, November 30th, and set up a website for affected Starwood guests.

According to its breach announcement, an "internal security tool" discovered the breach on September 8, 2018. The initial data breach investigation determined that unauthorized persons accessed its registration database as far back as 2014, and had both copied and encrypted information before removing it. Marriott engaged security experts, the information was partially decrypted on November 19, 2018, and the global hotel chain determined that the information was from its Starwood guest reservation database.

Starwood Preferred Guest logo The Starwood hotels network includes brands such as W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Le Méridien Hotels & Resorts, Four Points by Sheraton, and more. Marriott has not finished decrypting all information, so there may be future updates from the breach investigation.

For 327 million guests, the personal data items stolen included a combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (“SPG”) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences. For some guests, the information stolen also included payment card numbers and payment card expiration dates. While Marriott said the payment card numbers were encrypted using Advanced Encryption Standard encryption (AES-128), it warned that it doesn't yet know whether the encryption keys (needed to decrypt payment information) were also stolen.

For 173 million guests, the data stolen was limited to "name and sometimes other data such as mailing address, email address, or other information." Marriott International said its Marriott-branded hotels were not affected because they use a different reservations database on a different server.

Marriott said it has notified law enforcement, is working with law enforcement, and has begun to notify affected guests via email. The hotel chain will offer affected guests in select countries one year of free enrollment in the WebWatcher program which, "monitors internet sites where personal information is shared and [generates] an alert to the consumer if evidence of the consumer’s personal information is found." WebWatcher will not be offered to all affected guests. Eligible guests should read the fine print, which the Starwood breach site summarized:

"Due to regulatory and other reasons, WebWatcher or similar products are not available in all countries. For residents of the United States, enrolling in WebWatcher also provides you with two additional benefits: (1) a Fraud Loss Reimbursement benefit, which reimburses you for out-of-pocket expenses totaling up to $1 million in covered legal costs and expenses for any one stolen identity event. All coverage is subject to the conditions and exclusions in the policy; and (2) unlimited access to consultation with a Kroll fraud specialist. Consultation support includes showing you the most effective ways to protect your identity, explaining your rights and protections under the law, assistance with fraud alerts, and interpreting how personal information is accessed and used..."

The seriousness of this data breach cannot be overstated. First, it went undetected for a very long time. Marriott needs to explain that, and the changes it will implement with an improved "internal security tool" so this doesn't happen again. Second, 500 million is an awful lot of affected customers. An awful lot. Third, CNN Business reported:

"Because the hack involves customers in the European Union and the United Kingdom, the company might be in violation of the recently enacted General Data Protection Regulation (GDPR). Mark Thompson, the global lead for consulting company KPMG's Privacy Advisory Practice, told CNN Business that hefty GDPR penalties will potentially be slapped on the company. "The size and scale of this thing is huge," he said, adding that it's going to take several months for (EU) regulators to investigate the breach."

Fourth, the data items stolen are sufficient to cause plenty of damage. Security experts advise affected customers to change their Starwood passwords, check the answers.kroll.com breach site next week to see if their information was compromised or stolen, sign up for credit monitoring (if they don't already have it), watch their payment or bank accounts for fraudulent entries, and consider early passport renewal if their passport number was compromised or stolen. Fifth, companies usually arrange free credit monitoring for breach victims for one or two years. So far, Marriott hasn't done this. Maybe it will. If not, Marriott needs to explain why.

Sixth, breach notification of affected guests via email seems sketchy... like Marriott is trying to cut corners and costs. History is littered with numerous examples of skilled spammers and cybercriminals using faked or spoofed email to trick consumers into revealing sensitive personal and payment information. It will be interesting to see how Marriott's breach notification via email works and manages this threat.

Seventh, lawsuits and other investigations have already begun. ZDNet reported:

"... two Oregon men sued international hotel chain Marriott for exposing their data. Their lawsuit was followed hours later by another one filed in the state of Maryland. Both lawsuits seek class-action status. While plaintiffs in the Maryland lawsuit didn't specify the amount of damages they were seeking from Marriott, the plaintiffs in the Oregon lawsuit want $12.5 billion in costs and losses. [T]his should equate to $25 for each of the 500 million users who had their personal data stolen from Marriott's servers... The Maryland lawsuit was filed by Baltimore law firm Murphy, Falcon & Murphy..."

Bloomberg BNA announced:

"The Massachusetts, New York and Illinois state attorneys general quickly announced they would examine the hack. Connecticut [Attorney General] George Jepsen (D) is also looking into the matter, a spokesman told Bloomberg Law."

Eighth, the breach site's website address is unnecessarily vague: answers.kroll.com. Frankly, a website address like "starwood-breach.kroll.com" or "marriott-breach.kroll.com" would have been better. (The combination of email notification and vague website name seems eerily similar to the post-breach clusterf--k by Equifax's poorly implemented breach site.) Maybe this vague address was a temporary quick fix, and Marriott will host a comprehensive breach-status site later on one of its servers. That would be better and clearer for affected customers, who probably are unfamiliar with Kroll. Readers of this blog probably first encountered Kroll after IBM Inc. contracted it to help implement IBM's post-breach response in 2007.

The Starwood breach notice appears within the news section of Marriott.com site. Also, Marriott's post-breach notice included overlays on both the home page and the Starwood landing page within the Marriott.com site. This is a good start, but a better implementation would insert a link directly into the webpages, since the overlays don't render well in all browsers on all devices. (Marriott: you did test this before deployment?) Example: people with pop-up blockers may miss the breach notice in the overlays. And, a better implementation would link to the news story's detail page within the Marriott.com site -- not directly to the vague answers.kroll.com site.

Last, some questions remain about the post-breach response:

  • Why email notices to breach victims? Hopefully, there are more reasons than simply saving postal mailing costs.
  • Why no credit monitoring offers to breach victims?
  • What data in the Starwood reservations database was altered by the attackers? That the attackers encrypted data suggests they had sufficient time, resources, and skills to modify or alter database records. Marriott needs to explain what it is doing about this.
  • When will Marriott host a breach site on one of its servers? No doubt, there will be follow-up news, more questions by breach victims, and breach investigation updates. A dedicated breach site on one of its servers seems best. Leaning too much on Kroll is not good.
  • Why did the intrusion go undetected for so long? Marriott needs to explain this and the post-breach fix so guests are reassured it won't happen again.
  • Is the main Marriott reservations database also vulnerable? Guests of other Marriott brands weren't affected since a separate reservations database was used. Maybe that is because the main Marriott reservations database and server are better protected, or maybe cybercriminals haven't attacked them (yet). Guests deserve comprehensive answers.
  • Why the website overlays/pop-ups and not static links?
  • What changes (e.g., software upgrades, breach detection tools, employee training, etc.) will be implemented so this doesn't happen again?

Having blogged about data breaches for 11+ years, I know these types of questions often arise. None are unreasonable. Answers will help guests feel comfortable using Starwood hotels. Plus, Marriott has an obligation to fully inform guests directly at its website, and not lean on Kroll. What do you think?


Google Admitted Tracking Users' Location Even When Phone Setting Disabled

If you are considering, or already have, a smartphone running Google's Android operating system (OS), then take note. ZDNet reported (emphasis added):

"Phones running Android have been gathering data about a user's location and sending it back to Google when connected to the internet, with Quartz first revealing the practice has been occurring since January 2017. According to the report, Android phones and tablets have been collecting the addresses of nearby cellular towers and sending the encrypted data back, even when the location tracking function is disabled by the user... Google does not make this explicitly clear in its Privacy Policy, which means Android users that have disabled location tracking were still being tracked by the search engine giant..."

This is another reminder of the cost of free services and/or cheaper smartphones. You're gonna be tracked... extensively... whether you want it or not. The term "surveillance capitalism" is often used.

A reader shared a blunt assessment, "There is no way to avoid being Google’s property (a/k/a its bitch) if you use an Android phone." Harsh, but accurate. What is your opinion?


Massive Data Breach At U.S. Postal Service Affects 60 Million Users

United States Postal Service logo The United States Postal Service (USPS) experienced a massive data breach due to a vulnerable component at its website. The "application program interface," or API, component allowed any logged-in user to access and download details about other users of the Informed Visibility service.

Security researcher Brian Krebs explained:

"In addition to exposing near real-time data about packages and mail being sent by USPS commercial customers, the flaw let any logged-in usps.com user query the system for account details belonging to any other users, such as email address, username, user ID, account number, street address, phone number, authorized users, mailing campaign data and other information.

Many of the API’s features accepted “wildcard” search parameters, meaning they could be made to return all records for a given data set without the need to search for specific terms. No special hacking tools were needed to pull this data, other than knowledge of how to view and modify data elements processed by a regular Web browser like Chrome or Firefox."
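The flaw Krebs describes combines two classic API mistakes: wildcard searches that dump an entire data set, and an "insecure direct object reference" (IDOR), where the server never checks that the requester owns the record being requested. Below is a minimal, hypothetical sketch of that flaw class and its fix. All names and data are invented for illustration; this is not the actual USPS code.

```python
# Invented sample data standing in for an account database.
RECORDS = {
    "1001": {"user": "alice", "email": "alice@example.com"},
    "1002": {"user": "bob", "email": "bob@example.com"},
}

def vulnerable_lookup(session_account_id, requested_id):
    """Returns account records without checking who is asking."""
    if requested_id == "*":           # wildcard returns the whole data set
        return list(RECORDS.values())
    return [RECORDS[requested_id]]    # no ownership check: the IDOR

def fixed_lookup(session_account_id, requested_id):
    """Safer version: a caller may only read its own record."""
    if requested_id != session_account_id:
        raise PermissionError("access denied")
    return [RECORDS[requested_id]]
```

As Krebs notes, exploiting this kind of flaw needs no special tools: any logged-in user who modifies a request parameter in an ordinary browser gets someone else's record back.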

Geez! The USPS has since fixed the API vulnerability. Regardless, this is bad, very bad, for several reasons. Not only did the vulnerable API fail to prevent one user from viewing details about another, it also allowed changes to some data elements. Krebs added:

"A cursory review by KrebsOnSecurity indicates the promiscuous API let any user request account changes for any other user, such as email address, phone number or other key details. Fortunately, the USPS appears to have included a validation step to prevent unauthorized changes — at least with some data fields... The ability to modify database entries related to Informed Visibility user accounts could create problems for the USPS’s largest customers — think companies like Netflix and others that get discounted rates for high volumes. For instance, the API allowed any user to convert regular usps.com accounts to Informed Visibility business accounts, and vice versa."

About 13 million Informed Delivery users were also affected, since the vulnerable API component affected all USPS.com users. A vulnerability like this makes package theft easier, since criminals could determine when certain types of mail (e.g., debit cards, credit cards, etc.) arrive at users' addresses. The vulnerable API probably existed for more than one year, since that is when a security researcher first alerted the USPS about it.

While the USPS provided a response to Krebs on Security, a check at press time of the Newsroom and blog sections of About.USPS.com failed to find any mention of the data breach. Not good. Transparency matters.

If the USPS is serious about data security, then it should issue a public statement. When will users receive breach notification letters, if they haven't already been sent? Who fixed the vulnerable API? How long was it broken? What post-breach investigation is underway? What types of changes (e.g., employee training, software testing, outsource vendor management, etc.) are being implemented so this won't happen again?

Trust matters. The lack of a public statement makes it difficult for consumers to judge the seriousness of the breach and the seriousness of the fix by USPS. We probably will hear more about this breach.


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

LinkedIn logo On Friday, November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of a DPC investigation of the LinkedIn.com social networking site, prompted by a 2017 complaint from a person who didn't use the service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without instruction from the data controller (LinkedIn Ireland), as required by law. Not good.
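For context, the "hashed email addresses" mentioned in the DPC report refer to a common ad-industry matching technique: addresses are normalized, then run through a one-way hash (typically SHA-256) before being uploaded to an ad platform, which compares them against hashes of its own members' addresses. A minimal sketch follows, with invented addresses; note that hashing alone is not anonymization, since the platform can still match a hash to a known member.

```python
import hashlib

def hash_email(email):
    # Normalize (trim whitespace, lowercase), then SHA-256 hash.
    # Matching hashes let two parties find common users without
    # exchanging raw email addresses.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# A hypothetical upload list built from acquired addresses.
audience = [hash_email(e) for e in [" Jane.Doe@example.com", "john@example.com"]]
```

Under GDPR, hashed email addresses of this kind still count as personal data, which is why the DPC treated the upload as "processing" requiring the controller's instruction.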

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests that the social networking service's activity violated data protection laws, just as the European Union (EU) implemented its stronger privacy law, known as the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The new GDPR applies to both companies operating within the EU, and to companies located outside of the EU which offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistent with any promises they made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC probably didn't fine LinkedIn because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied only to violations occurring on or after that date, and
  • People in different areas of the world view privacy and data protection differently - as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize regional differences, find ways to sell products and services across areas without degraded customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Amazon Said Its Data Breach Was Due To A "Technical Error" And Discloses Few Breach Details

Amazon logo Amazon.com, the online retail giant, confirmed that it experienced a data breach last Wednesday. CBS News reported:

"Amazon said a technical error on its website exposed the names and email addresses of some customers. The online retail giant its website and systems weren't hacked. "We have fixed the issue and informed customers who may have been impacted," said an Amazon spokesperson. An Amazon spokesman didn't answer additional questions, like how many people were affected or whether any of the information was stolen."

A check of the press center and blog sections of the Amazon.com site failed to find any mention of the data breach. The Ars Technica blog posted the text of the breach notification email Amazon sent to affected users:

"From: Amazon.com
Sent: 21 November 2018 10:53
To: a--------l@hotmail.com
Subject: Important Information about your Amazon.com Account

Hello,
We’re contacting you to let you know that our website inadvertently disclosed your name and email address due to a technical error. The issue has been fixed. This is not a result of anything you have done, and there is no need for you to change your password or take any other action.

Sincerely,
Customer Service
http://Amazon.com"

What? That's all? No link to a site or to a page for customers with questions?

This incident is a reminder that several things can cause data breaches. It's not only when cyber-criminals break into an organization's computers or systems. Human error causes data breaches, too. In some breaches, employees collude with criminals. In some cases, sloppy data security by outsource vendors causes data breaches. Details matter.

Typically, organizations affected by data breaches hire external security agencies to conduct independent, post-breach investigations to learn important details: when the breach started, how exactly the breach happened, the list of data elements unauthorized users accessed/stole, what else may have happened that wasn't readily apparent when the incident was discovered, and key causal events leading up to the breach -- all so that a complete fix can be implemented, and so that it doesn't happen again.

Who made the "technical error?" Who discovered it? What caused it? How long did the error exist? Who fixed it? Were specialized skills or tools necessary? What changes were made so that it won't happen again? Amazon isn't saying. If management decided to skip a post-breach investigation, consumers deserve to know that and why, too.

Often, the breach starts long before it is discovered by the company, or by a security researcher. Often, the fix includes several improvements: software changes, employee training, and/or improved security processes with contractors.

So, all we know is that names and email addresses were accessed by unauthorized persons. If stolen, that is sufficient to do damage -- spam or phishing email messages, to trick victims into revealing sensitive personal (e.g., usernames, passwords, etc.) and payment (e.g., bank account numbers, credit card numbers, etc.) information. It is not too much to ask Amazon to share both breach details and the results of a post-breach investigation.

Executives at Amazon know all of this, so maybe it was a management decision not to share breach details nor a post-breach investigation -- perhaps, not wanting to risk huge Black Friday holiday sales. Then again, the lack of details could imply the breach was far worse than management wants to admit.

Either way, this is troublesome. It's all about trust. When details are shared, consumers can judge the severity of the breach, the completeness of the company's post-breach response, and ideally feel better about continuing to shop at the site. What do you think?


Aging Machines, Crowds, Humidity: Problems at the Polls Were Mundane but Widespread

[Editor's Note: today's guest blog post, by Reporters at ProPublica, discusses widespread problems many voters encountered earlier this month. The data below was compiled before the runoffs in Florida, Georgia and other states. It is reprinted with permission.]

By Ian MacDougall, Jessica Huseman, and Isaac Arnsdorf - ProPublica

If the defining risk of Election Day 2016 was foreign meddling, 2018’s seems to have been domestic overload. High turnout across the country threw existing problems — aging machines, poorly trained poll workers and a hot political landscape — into sharp relief.

Michael McDonald, a political science professor at the University of Florida who studies turnout, says early numbers indicate Tuesday’s midterm saw the highest percentage turnout since the mid-’60s. “All signs indicate that everyone is now engaged in this country — Republicans and Democrats,” he said, adding that he expects 2020 to also be a year of high turnout. “Election officials need to start planning for that now, and hopefully elected officials who hold the purse strings will be responsive to those needs.”

Aging Technology

Electionland monitored problems across the country on Election Day, supporting the work of 250 local journalists in more than 120 local newsrooms. Thousands of voters reported issues at the polls, and Electionland sought to report on as many as possible. The most striking problem of the night was perhaps the most predictable — aged or ineffective voting equipment caused hours-long lines across the country.

American voting hasn’t had a major technology refresh since the early 2000s, in the aftermath of the Florida recount and the passage of the 2002 Help America Vote Act, which infused billions of dollars into American elections. More recent upgrades, such as poll books that could be accessed via computer, were supposed to reduce bottlenecks at check-ins — but they repeatedly failed on Tuesday, worsening waits in Georgia, South Carolina and Indiana.

While aging infrastructure was already a well-known problem to election administrators, the surge of voters experiencing ordinary glitches led to extraordinarily long waits, sometimes stretching over hours. From Pennsylvania to Georgia to Arizona and Michigan, polling places started the day with broken machines leading to long lines, and never recovered.

“In 2016, we learned the technology has security vulnerabilities. Today was a wake-up call to performance vulnerabilities,” said Trey Grayson, the former president of the National Association of Secretaries of State and a member of the 2013 Presidential Commission on Election Administration. Tuesday, Grayson said, showed “the implications of turnout, stressing the system, revealing planning failures, feel impact of limited resources. If you had more resources, you’d have had more paper ballots, more machines, more polling places.”

The election hotline from the Lawyers’ Committee for Civil Rights Under Law clocked 24,000 calls by 6 p.m., twice the rate in the 2014 midterm election. “People were not able to vote because of technical issues that are completely avoidable,” Ryan Snow, of the Lawyers’ Committee, said. “People who came to vote — registered to vote, showed up to vote — were not able to vote.”

“We think we can solve all of these voting problems by adding technology, but you have to have a contingency plan for when each of these pieces fail,” said Joseph Lorenzo Hall, the chief technologist at the Center for Democracy & Technology in Washington, D.C. It appears many of the places that saw electronic poll book failures had no viable backup system.

Hall said that problems with machines and computers force election administrators to become technicians on the spot, despite their lack of training. This exacerbates problems: Poll workers aren’t able to accurately or efficiently report issues to their central offices, leading to delays in dispatches of appropriate equipment or staff.

Perhaps the most embarrassing technological faceplant was in New York City, where the machines used to scan ballots proved no match for wet weather. Humidity caused the scanners to malfunction, leading to outages and long lines.

The breakdowns proliferated up and down the East Coast. Humidity also roiled scanners in North Carolina. In Charleston, South Carolina, an interminable delay driven by a downed voting system drove one person to leave for work before she could cast her ballot. “It felt like a type of disenfranchisement,” she told ProPublica. Voting machine outages in some Georgia precincts stranded voters in hours-long lines. In predominantly black sections of St. Petersburg, Florida, wait times ballooned as voting machines froze.

Some of the pressure on the aging technology was relieved by early and mail-in voting, so that everyone didn’t have to vote on the same day, Grayson said. But many states still require people to cast their ballots on Election Day, and others have added time-consuming procedures such as strict ID requirements.

Those sorts of security measures add their own layers of confusion. Many voters reported never receiving their ballots in the mail. Georgia voter Shelley Martin couldn’t vote because her ballot was mailed to the wrong address — even though she filled out her address correctly, the county election office accidentally changed a 9 to a 0. In Ohio, some in-person voters were incorrectly told they had already received an absentee ballot, because of a computer error.

When people show up at the wrong polling place or have problems with their registration, they are usually entitled to cast a provisional ballot that will be counted once it’s verified. But these problems were so common on Tuesday that some locations ran out of provisional ballots and turned people away, according to North Carolina voters’ reports to ProPublica. In Arizona, some voters were told they couldn’t have provisional ballots because of broken printers. In Pennsylvania, some college students encountered glitches with their registration and said poll workers wouldn’t give them provisional ballots.

A newly implemented law in North Dakota left a handful of college students — many of whom had voted in previous elections — confused and unable to vote. “I was so frustrated because I’ve voted in North Dakota before,” said Alissa Maesse, a student at the University of North Dakota who came to the polls with a Minnesota driver’s license and bank statement with a North Dakota address, but needed a North Dakota driver’s license, identification card or tribal ID. “I can’t participate at all and I wanted to.”

Administrative Error

Administrative stumbling blocks and unhelpful election officials left some voters throughout the day scrambling to figure out where or how they were supposed to vote. Across the country, confusion over new laws and poll worker error forced voters to work with attorneys or drive long distances in an attempt to solve problems.

In Missouri, a last-minute court ruling resulted in chaos across the state. Less than a month before, a judge radically altered the state’s voter ID law to allow more valid forms of identification. By then, poll workers had already been trained. Many enforced the incorrect version of the law.

In St. Charles County, northwest of St. Louis, voters across the county reported that poll workers openly argued with voters who showed identification allowed under the new ruling, demanding old forms of ID. By the end of the evening, the county had ignored demand letters from attorneys at Advancement Project, a civil rights group. Denise Lieberman, an attorney with the group, said it is considering legal remedies due to the county’s “flagrant disregard” for the judge’s ruling.

Rich Chrismer, the director of elections for the county, said he never saw the letters — he was at polling places all day. By late morning, he’d been made aware of 12 different polling locations where poll workers were giving incorrect instructions. He utilized the local police to distribute memos to all 121 polling locations, correcting poll worker instructions. They were distributed by the late morning, and complaints dropped off after that, he said.

Chrismer said training had already happened by the time the judge issued his ruling, but that he’d put new instructions in “four different places” in the packet mailed to poll workers ahead of the election. “They were either ignoring me or they didn’t know how to read, which upsets me,” he said.

Dallas County Clerk Stephanie Hendricks expressed similar frustration at the short window of time allowed by the court to retrain poll workers, update signs and ensure voter understanding.

Hendricks said the small county had to “scrape the bottom of the barrel” for poll workers, who only received 90 minutes of training. This, combined with the very short notice for the legal change, made it difficult to help poll workers understand the law. “The last few elections it’s been photo ID, photo ID, photo ID, and now all of a sudden the brakes have been thrown on. It’s confusing for people,” she said.

The frustrations for Chris Sears began on Friday, when he turned up to cast an early ballot at Cinco Ranch Public Library, a brick building abutting a duck pond in the suburbs west of Houston. Sears, a 43-year-old Texan who works in real estate, had voted at the library in the 2016 election, after moving to the area from adjoining Harris County a year earlier. Now, at the library, poll workers couldn’t find him in their rolls. His only recourse, they told him, was to drive the half hour or so to the Fort Bend County election office. Sears, realizing he wouldn’t make it there and back before early voting closed, decided to go first thing Tuesday morning.

After he explained his situation and presented his driver’s license, which had a local address, the clerk at the election office had a terse message for him. She slid a fresh voter registration application across the counter and told him: “Fill this out, and you’ll be eligible to vote in the next election.” Sears told the clerk he hadn’t moved, and that he’d voted in the last election.

The clerk was unmoved. “What you can do,” the clerk repeated, pointing at the registration form, “is fill this out, and vote in the next election.”

Sears wasn’t alone. As he went back and forth with the clerk, three other men who, like Sears, had moved recently from other Texas counties, came in with near-identical complaints. The clerk gave them the same response she had given Sears. County officials told ProPublica they all should have been offered provisional ballots — not sent across town or told to register again.

Ultimately, Sears would cast a provisional ballot, but he didn’t discover this option until he’d done hours of research to try and hunt down the cause of his problems.

“I finally got to vote,” he said. “But that was after driving across two counties and spending five or six hours of my time trying to determine whether there was a way I could do it.”

Some administrative problems were a bit more bizarre — a polling place in Chandler, Arizona, was foreclosed upon overnight. Voter Joann Swain arrived at the Golf Academy of America, which housed the poll, to find TV news crews and a crowd of people in the parking lot of the type of faux Spanish Mission Revival shopping centers that fleck the desert around Phoenix. Voting booths were arrayed along the sidewalk.

A sign affixed to the building’s locked front door indicated that the landlord has foreclosed on the Golf Academy for failing to pay rent. While poll workers had set up the voting booths the night before, that didn’t appear to matter to the landlord. The sign read: “UNAUTHORIZED ENTRY UPON THESE PREMISES OR THE REMOVAL OF PROPERTY HEREFROM MAY RESULT IN CRIMINAL AND/OR CIVIL PROSECUTION.”

The timing struck Swain as suspect. “Were they trying to make it more difficult for people to vote?” she asked Wednesday. Election officials had provided no answers. “It’s just fishy.”

Swain, who is 47, waited in line for two hours as poll workers promised the machines necessary for voters to print and cast their ballot were on their way from Phoenix. She didn’t want to cast a provisional ballot, for fear it wouldn’t be counted. One man in line who took poll workers up on an alternative to waiting — voting at Chandler City Hall — returned not long after he left. With polling site difficulties cropping up throughout the Phoenix area, he hadn’t been able to vote there either.

To the puzzlement of voters waiting in line, Maricopa County Recorder Adrian Fontes tweeted that the Golf Academy polling place was open. “No it’s not. I’m here,” an Arizonan named Gary Taylor shot back.

Other voters reacted to the situation more volubly. “I got things to do. I can’t stand around all day waiting because these guys can’t do their job,” a voter named Thomas Wood told reporters. “It’s ridiculous. It’s absolutely ridiculous.”

Swain ultimately left at 8:30 a.m. By the time she returned, later in the day, poll workers had set up the voting machines delivered from Phoenix in another storefront in the shopping center. The original machines remained locked in the Golf Academy, she said.

Electioneering

Back East, reports of potentially improper political messages at polling sites had begun to crop up, and the response from election officials highlighted the at times flimsy nature of electioneering laws. On Tuesday morning, a handwritten sign appeared on the door of a polling station near downtown Pittsburgh, which read “Vote Straight Democrat.” County election officials were alerted to the sign in the early afternoon, but by then the sign had been removed, Amie Downs, an Allegheny County spokeswoman, said in a statement.

An official in the county election office, who declined to give her name, blamed the sign on a member of the local Democratic Party committee. “He said he does that every year but never had problems till this year,” she said. Pennsylvania law prohibits electioneering within 10 feet of a polling place, and Downs said it wasn’t clear whether the sign violated the law.

Down the coast, in New Port Richey — a politically mixed cluster of strip malls northwest of Tampa, Florida — Pastor Al Carlisle triggered upward of 75 complaints to Pasco County election officials after he put up a handwritten sign reading “Don’t Vote for Democrats on Tuesday and Sing ‘Oh How I Love Jesus’ on Sunday” outside his church. That wouldn’t be a problem, except that on Election Day, his church doubles as a polling place. Carlisle remained unrepentant. He continued Wednesday to trumpet the sign on his Facebook page, mixed among posts conflating religious faith with support for President Donald Trump.

Local election officials, however, stopped at mild censure. Pasco County election chief Brian Corley told the Tampa Bay Times the sign was “not appropriate” but legal, since Carlisle had placed it only just more than 100 feet away from where voters were casting their ballot.

Later in the day, some voters complained about large posters opposing abortion — an example: “God Doesn’t Make Mistakes, Choose Life” — plastered on the walls of a church gymnasium in Holts Summit, Missouri, used as a polling place. Despite the political implications, election officials told local radio station KBIA that the posters were legal because there were no abortion-related issues on the ballot.

Behind the scenes, officials nationwide were addressing gaps in website reliability and security. In Kentucky, a handful of county websites that provided information to voters flickered offline for parts of the day. State officials said the issue was likely a technical problem not caused by a malicious attack.

But several states meanwhile alerted U.S. election-security officials to efforts of hackers scanning their computer systems for software vulnerabilities. Days before the election, a county clerk’s office said its email account was compromised and its messages forwarded to a private Gmail address, according to a person familiar with the matter who was not authorized to discuss it publicly.

As polls closed Tuesday evening, back in New York, the crowds and ballot scanner failures remained. At one school in Brooklyn that had seen long lines in the morning, the wait to vote at 7 p.m. was no better — still upward of two hours, Emily Chen told ProPublica. By the end of the day, the New York City Council speaker had called for the elections director’s resignation, and the mayor had denounced the technical snags as “absolutely unacceptable.”

Down the coast, in Broward County, Florida, just north of Miami, election officials were struggling with a technical failure of a different sort. Seven precincts were unable to transmit vote tallies electronically. This time, it would force election officials to internalize what voters had suffered throughout much of the day. Around 11 p.m., they walked out into the balmy South Florida night, got into their cars and drove the voter files to the county election office.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

Facebook logo November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups, and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook; the company didn't spot the activity on its own. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."
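The triage approach Facebook describes -- score each post, auto-remove only at very high confidence, and sort the rest for human review -- can be sketched roughly as below. This is an illustrative sketch only, not Facebook's actual system; the threshold value, the `triage` function, and the toy scoring function are all hypothetical.

```python
# Illustrative sketch (not Facebook's real pipeline): score posts with a
# classifier, auto-remove only very-high-confidence hits, and sort the
# remainder into a review queue with the highest scores first.

AUTO_REMOVE_THRESHOLD = 0.99  # hypothetical "very high confidence" cutoff

def triage(posts, score_fn):
    """Split posts into auto-removals and a prioritized review queue."""
    removed, review_queue = [], []
    for post in posts:
        score = score_fn(post)  # estimated probability the post violates policy
        if score >= AUTO_REMOVE_THRESHOLD:
            removed.append(post)
        else:
            review_queue.append((score, post))
    # Human reviewers see the most likely violations first.
    review_queue.sort(key=lambda pair: pair[0], reverse=True)
    return removed, [post for _, post in review_queue]

# Toy scoring function standing in for the real ML model.
scores = {"post_a": 0.995, "post_b": 0.40, "post_c": 0.75}
removed, queue = triage(["post_a", "post_b", "post_c"], lambda p: scores[p])
# removed -> ["post_a"]; queue -> ["post_c", "post_b"]
```

The key design point the announcement makes is that scoring does not replace human judgment; it orders the work so reviewers "focus on the most important content first."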

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators identify terrorism threats -- not to remove them automatically (except at very high confidence), but to identify and prioritize them -- and the news item also mentioned its appeal process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leaned heavily upon both law enforcement and outside security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts removed in two quarters, created by a variety of bad actors. Hmmmm... sounds good, but it makes one wonder about the ongoing digital arms race. If bad actors can create new fake accounts programmatically faster than Facebook can identify and remove them, that's not good.
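A quick arithmetic check of the figures in Facebook's announcement. The monthly-active-user count below is an assumption on my part (roughly 2.2 billion is a commonly cited public figure for 2018); it is not part of Facebook's statement.

```python
# Quick arithmetic on the figures Facebook reported for Q2 and Q3 2018.
q2_removed = 800_000_000
q3_removed = 754_000_000
total_removed = q2_removed + q3_removed  # 1,554,000,000 -- about 1.5 billion

# Facebook said fake-account prevalence held steady at 3% to 4% of
# monthly active users. Using ~2.2 billion MAU (an assumed public figure
# for 2018, not from Facebook's announcement):
mau = 2_200_000_000
prevalence_low = mau * 3 // 100   # 66,000,000 accounts
prevalence_high = mau * 4 // 100  # 88,000,000 accounts
```

In other words, even after removing roughly 1.5 billion accounts, tens of millions of fakes would remain active at any given time under Facebook's own prevalence estimate.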

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey due to:

"... a $10,000 [donation] to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries, which value citizens' privacy, will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published a detailed, comprehensive "Delay, Deny, and Deflect" investigative report based upon interviews with at least 50 people:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth and lack of focus to identify and respond to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and Democratic politicians. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described the history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that investing more equals doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported on 13 previously secret national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and it isn't over yet. Taken together, this news is not good for a company that wants its social networking service to be a reliable, unbiased news source. This news is not good for a company that wants its users to accept that it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.


ABA Updates Guidance For Attorneys' Data Security And Data Breach Obligations. What Their Clients Can Expect

To provide the best representation, attorneys often process and archive sensitive information about their clients. Consumers hire attorneys to complete a variety of transactions: buy (or sell) a home, start (or operate) a business, file a complaint against a company, insurer, or website for unsatisfactory service, file a complaint against a former employer, and more. What are attorneys' obligations regarding data security to protect their clients' sensitive information, intellectual property, and proprietary business methods?

What can consumers expect when the attorney or law firm they've hired experiences a data breach? Yes, law firms experience data breaches, too. The National Law Review reported last year:

"2016 was the year that law firm data breaches landed and stayed squarely in both the national and international headlines. There have been numerous law firm data breaches involving incidents ranging from lost or stolen laptops and other portable media to deep intrusions... In March, the FBI issued a warning that a cybercrime insider-trading scheme was targeting international law firms to gain non-public information to be used for financial gain. In April, perhaps the largest volume data breach of all time involved law firm Mossack Fonseca in Panama... Finally, Chicago law firm, Johnson & Bell Ltd., was in the news in December when a proposed class action accusing them of failing to protect client data was unsealed."

So, what can clients expect regarding data security and data breaches? A post in the Lexology site reported:

"Lawyers don’t get a free pass when it comes to data security... In a significant ethics opinion issued last month, Formal Opinion 483, Lawyers’ Obligations After an Electronic Data Breach or Cyberattack, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility provides a detailed roadmap to a lawyer’s obligations to current and former clients when they learn that they – or their firm – have been the subject of a data breach... a lawyer’s compliance with state or federal data security laws does "not necessarily achieve compliance with ethics obligations," and identifies six ABA Model Rules that might be implicated in the breach of client information."

Readers of this blog are familiar with the common definition of a data breach: unauthorized persons have accessed, stolen, altered, and/or destroyed information they shouldn't have. Attorneys have an obligation to use technology competently. The post by Patterson Belknap Webb & Tyler LLP also stated:

"... lawyers have an obligation to take “reasonable steps” to monitor for data breaches... When a breach is detected, a lawyer must act “reasonably and promptly” to stop the breach and mitigate damages resulting from the breach... A lawyer must make reasonable efforts to assess whether any electronic files were, in fact, accessed and, if so, identify them. This requires a post-breach investigation... Lawyers must then provide notice to their affected clients of the breach..."

I read the ABA Formal Opinion 483. (A copy of the opinion is also available here.) A follow-up post this week by the National Law Review listed 10 best practices to stop cyberattacks and breaches. Since many law firms outsource some back-office functions, this might be the most important best-practice item:

"4. Evaluate Your Vendors’ Security: Ask to see your vendor’s security certificate. Review the vendor’s security system as you would your own, making sure they exercise the same or stronger security systems than your own law firm..."


Some Surprising Facts About Facebook And Its Users

Facebook logo The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (at March/April 2018) from 71% during 2014 to 2015. More teens use other social media services: YouTube (85%), Instagram (72%) and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feeds all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays the items it predicts users will engage with; that is, click the "Like" button or another reaction button. This makes Facebook a terrible news source, since it doesn't display all the news -- only the news you (probably already) agree with.
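The filtering behavior described above can be sketched in a few lines. This is an illustrative sketch only; Facebook's actual ranking model is proprietary, and the function names and toy engagement scores here are invented for illustration. The point is structural: the feed is an engagement-ordered subset, not a chronological list of everything.

```python
# Illustrative sketch: rank candidate posts by a predicted engagement
# score and show only the top N. Everything below the cut never appears
# in the user's feed at all.

def build_feed(candidate_posts, predict_engagement, top_n=10):
    ranked = sorted(candidate_posts, key=predict_engagement, reverse=True)
    return ranked[:top_n]  # posts below the cutoff are simply not shown

# Toy stand-in for a real engagement-prediction model.
posts = ["news_a", "meme_b", "news_c", "update_d"]
predicted_clicks = {"news_a": 0.1, "meme_b": 0.9, "news_c": 0.2, "update_d": 0.6}

feed = build_feed(posts, lambda p: predicted_clicks[p], top_n=2)
# feed -> ["meme_b", "update_d"]; the low-engagement news posts vanish.
```

Notice that under this kind of ranking, content that provokes reactions beats content that merely informs, which is exactly the filter-bubble concern the Pew finding points at.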

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it, PBS has broadcast a two-part documentary titled "The Facebook Dilemma" (see trailer below), which arguably could have been titled "The Dark Side of Sharing." The Frontline documentary rightly discusses Facebook's approach to news, privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained that during an April:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for greater transparency and control claims by Mr. Zuckerberg. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means, if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. This means Facebook acts more like a massive advertising network than simply a social media service.

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


Federal Reserve Released Its Non-cash Payments Fraud Report. Have Chip Cards Helped?

Many consumers prefer to pay for products and services using methods other than cash. How secure are these non-cash payment methods? The Federal Reserve Board (FRB) analyzed the payments landscape within the United States. Its October 2018 report found good and bad news. The good news: non-cash payments fraud is small. The bad news:

  • Overall, non-cash payments fraud is growing; and
  • Card payments fraud drove that growth.

Non-Cash Payment Activity And Fraud
Payment Type                         | 2012            | 2015            | Increase (Decrease)
Card payments & ATM withdrawal fraud | $4 billion      | $6.5 billion    | 62.5 percent
Check fraud                          | $1.1 billion    | $710 million    | (35) percent
Non-cash payments fraud              | $6.1 billion    | $8.3 billion    | 37 percent
Total non-cash payments              | $161.2 trillion | $180.3 trillion | 12 percent
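To put the table's totals in perspective, the implied overall fraud rate is tiny relative to the volume of payments. A quick check of the arithmetic, using only the figures reported above:

```python
# Implied overall fraud rate from the table: $8.3 billion of fraud
# against $180.3 trillion of total non-cash payments in 2015.
fraud_2015 = 8.3e9
payments_2015 = 180.3e12

rate = fraud_2015 / payments_2015
per_10k = round(rate * 10_000, 2)  # dollars of fraud per $10,000 of payments
# per_10k -> 0.46, i.e., roughly 46 cents of fraud per $10,000 moved
```

That smallness is the report's "good news"; the "bad news" is the direction of the trend, not its current level.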

The FRB report included:

"... fraud totals and rates for payments processed over general-purpose credit and debit card networks, including non-prepaid and prepaid debit card networks, the automated clearinghouse (ACH) transfer system, and the check clearing system. These payment systems form the core of the noncash payment and settlement systems used to clear and settle everyday payments made by consumers and businesses in the United States. The fraud data were collected as part of Federal Reserve surveys of depository institutions in 2012 and 2015 and payment card networks in 2015 and 2016. The types of fraudulent payments covered in the study are those made by an unauthorized third party."

Data from the card network survey included general-purpose credit and debit (non-prepaid and prepaid) card payments, but did not include ATM withdrawals. The card networks include Visa, MasterCard, Discover and others. Additional findings:

"... the rate of card fraud, by value, was nearly flat from 2015 to 2016, with the rate of in-person card fraud decreasing notably and the rate of remote card fraud increasing significantly..."

The industry defines several categories of card fraud:

  1. "Counterfeit card. Fraud is perpetrated using an altered or cloned card;
  2. Lost or stolen card. Fraud is undertaken using a legitimate card, but without the cardholder’s consent;
  3. Card issued but not received. A newly issued card sent to a cardholder is intercepted and used to commit fraud;
  4. Fraudulent application. A new card is issued based on a fake identity or on someone else’s identity;
  5. Fraudulent use of account number. Fraud is perpetrated without using a physical card. This type of fraud is typically remote, with the card number being provided through an online web form or a mailed paper form, or given orally over the telephone; and
  6. Other. Fraud including fraud from account take-over and any other types of fraud not covered above."

Card Fraud By Category
Fraud Category                   | 2015          | 2016          | Increase/(Decrease)
Fraudulent use of account number | $2.88 billion | $3.46 billion | 20 percent
Counterfeit card fraud           | $3.05 billion | $2.62 billion | (14) percent
Lost or stolen card fraud        | $730 million  | $810 million  | 11 percent
Fraudulent application           | $210 million  | $360 million  | 71 percent

The increase in fraudulent applications suggests that criminals find it easy to intercept pre-screened credit and card offers sent via postal mail. It is easy for consumers to opt out of pre-screened offers (via OptOutPrescreen.com, the opt-out site operated by the credit reporting agencies). There is also the National Do Not Call Registry. Do both today if you haven't.

The report also covered EMV chip cards, which were introduced to stop counterfeit card fraud. Card networks distributed both chip cards to consumers and chip-reader terminals to retailers. The banking industry had set an October 1, 2015 deadline for the switch to chip cards. The FRB report:

[Chart: EMV chip card fraud and payments. Source: Federal Reserve Board, October 2018]

The FRB concluded:

"Card systems brought EMV processing online, and a liability shift, beginning in October 2015, created an incentive for merchants to accept chip cards. By value, the share of non-fraudulent in-person payments made with [chip cards] shifted dramatically between 2015 and 2016, with chip-authenticated payments increasing from 3.2 percent to 26.4 percent. The share of fraudulent in-person payments made with [chip cards] also increased from 4.1 percent in 2015 to 22.8 percent in 2016. As [chip cards] are more secure, this growth in the share of fraudulent in-person chip payments may seem counter-intuitive; however, it reflects the overall increase in use. Note that in 2015, the share of fraudulent in-person payments with [chip cards] (4.1 percent) was greater than the share of non-fraudulent in-person payments with [chip cards] (3.2 percent), a relationship that reversed in 2016."
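The "reversal" the FRB describes can be made concrete with a quick ratio calculation using only the shares quoted above. A ratio above 1 means chip payments were over-represented among fraudulent in-person payments that year:

```python
# Chip cards' share of fraudulent vs. non-fraudulent in-person payments,
# by value, from the FRB figures quoted above (percentages).
shares = {
    2015: {"fraud_pct": 4.1, "non_fraud_pct": 3.2},
    2016: {"fraud_pct": 22.8, "non_fraud_pct": 26.4},
}

ratios = {
    year: round(s["fraud_pct"] / s["non_fraud_pct"], 2)
    for year, s in shares.items()
}
# ratios -> {2015: 1.28, 2016: 0.86}
# 2015: chip payments over-represented among fraud (ratio > 1).
# 2016: the relationship reversed (ratio < 1), consistent with the
# report's point that chip payments are the more secure option.
```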


Senator Wyden Introduces Bill To Help Consumers Regain Online Privacy And Control Over Sensitive Data

Late last week, Senator Ron Wyden (Dem - Oregon) introduced a "discussion draft" of legislation to help consumers recover online privacy and control over their sensitive personal data. Senator Wyden said:

"Today’s economy is a giant vacuum for your personal information – Everything you read, everywhere you go, everything you buy and everyone you talk to is sucked up in a corporation’s database. But individual Americans know far too little about how their data is collected, how it’s used and how it’s shared... It’s time for some sunshine on this shadowy network of information sharing. My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules with real teeth to punish companies that abuse Americans’ most private information.”

The press release by Senator Wyden's office explained the need for new legislation:

"The government has failed to respond to these new threats: a) Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities; b) Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data; c) Consumers have no effective way to control companies’ use and sharing of their data."

Consumers in the United States lost both control and privacy protections last year when the U.S. Federal Communications Commission (FCC), led by President Trump appointee Ajit Pai, a former Verizon lawyer, repealed both broadband privacy and net neutrality rules. A December 2017 study of 1,077 voters found that most want net neutrality protections. President Trump signed the privacy-rollback legislation in April 2017. A prior blog post listed many historical abuses of consumers by some internet service providers (ISPs).

With broadband privacy protections repealed, ISPs are free to collect and archive as much data about consumers as they desire, without notifying consumers or getting their approval -- either for the collection itself or for whom the archived data is shared with. That's 100 percent freedom for ISPs and zero freedom for consumers.

By repealing online privacy and net neutrality protections for consumers, the FCC essentially punted responsibility to the U.S. Federal Trade Commission (FTC). According to Senator Wyden's press release:

"The FTC, the nation’s main privacy and data security regulator, currently lacks the authority and resources to address and prevent threats to consumers’ privacy: 1) The FTC cannot fine first-time corporate offenders. Fines for subsequent violations of the law are tiny, and not a credible deterrent; 2) The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money; 3) The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator; and 4) The FTC does not have enough staff, especially skilled technology experts. Currently about 50 people at the FTC police the entire technology sector and credit agencies."

This means consumers have no protections nor legal options unless a company or website violates its published terms and conditions or privacy policy. To close the above gaps, Senator Wyden's new legislation, titled the Consumer Data Privacy Act (CDPA), contains several new and stronger protections. It:

"... allows consumers to control the sale and sharing of their data, gives the FTC the authority to be an effective cop on the beat, and will spur a new market for privacy-protecting services. The bill empowers the FTC to: i) Establish minimum privacy and cybersecurity standards; ii) Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives; iii) Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information. It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized; iv) Give consumers a way to review what personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it; v) Hire 175 more staff to police the largely unregulated market for private data; and vi) Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy, and security."

Permitting companies to charge consumers who opt out of data collection and sharing is a good thing. Why? Monthly payments by consumers are leverage -- a strong incentive for companies to provide better cybersecurity.

Business as usual -- cybersecurity methods by corporate executives and government enforcement -- isn't enough. The tsunami of data breaches is an indication: October alone brought several breach events, on top of a number of notable breaches from earlier this year.

The status quo, or business as usual, is unacceptable. Executives' behavior won't change without stronger consequences like jail time, since companies perform cost-benefit analyses regarding how much to spend on cybersecurity versus the probability of breaches and fines. Opt-outs of data collection and sharing by consumers, steeper fines, and criminal penalties could change those cost-benefit calculations.

Four former chief technologists at the FCC support Senator Wyden's legislation. Gabriel Weinberg, the Chief Executive Officer of DuckDuckGo, also supports it:

"Senator Wyden’s proposed consumer privacy bill creates needed privacy protections for consumers, mandating easy opt-outs from hidden tracking. By forcing companies that sell and monetize user data to be more transparent about their data practices, the bill will also empower consumers to make better-informed privacy decisions online, enabling companies like ours to compete on a more level playing field."

Regular readers of this blog know that the DuckDuckGo search engine (unlike the Google, Bing, and Yahoo search engines) doesn't track users, doesn't collect or archive data about users and their devices, and doesn't store users' search criteria. So, DuckDuckGo users can search knowing their data isn't being sold to advertisers, data brokers, and others.

Lastly, Wyden's proposed legislation includes several key definitions (emphasis added):

"... The term "automated decision system" means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers... The term "automated decision system impact assessment" means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security that includes... The term "data protection impact assessment" means a study evaluating the extent to which an information system protects the privacy and security of personal information the system processes... "

The draft legislation requires companies to perform both automated decision system impact assessments and data protection impact assessments, and requires the FTC to set the frequency and conditions for both. A copy of the CDPA draft is also available here (Adobe PDF; 67.7 KB).

This is a good start. It is important... critical... to hold accountable both corporate executives and the automated decision systems they approve and deploy. Historically, outsourcing has been one corporate tactic to manage liability by shifting it to providers. It's good to close now any loopholes where executives could abuse artificial intelligence and related technologies to avoid responsibility.

What are your thoughts and opinions of the proposed legislation?


Mail-Only Voting In Oregon: Easy, Simple, And Secure. Why Not In All 50 States?

Hopefully, you voted today. A democracy works best when citizens participate. And voting is one way to participate.

If you already stood in line to vote, or if your state was one which closed some polling places, know that it doesn't have to be this way. Consider Oregon. Not only is the process there easier and simpler, but elections officials in Oregon don't have to worry as much as officials in other states about hacks and tampering. Why? They don't have voting machines. Yes, that's correct. No voting machines. No polling places either.

NBC News explained:

"Twenty years ago, Oregon became the first state in the nation to conduct all statewide elections entirely by mail. Three weeks before each election, all of Oregon's nearly 2.7 million registered voters are sent a ballot by the U.S. Postal Service. Then they mark and sign their ballots and send them in. You don't have to ask for the ballot, it just arrives. There are no forms to fill out, no voter ID, no technology except paper and stamps. If you don't want to pay for a stamp, you can drop your ballot in a box at one of the state's hundreds of collection sites."

Reportedly, Washington and Colorado also have mail-only voting. Perhaps most importantly, Oregon gets higher voter participation:

"In the 2014 election, records showed that 45 percent of registered voters 34 and under marked a ballot — twice the level of many other states."

State and local governments across the United States use a variety of voting technologies. The two dominant ones are optical-scan ballots and direct-recording electronic (DRE) devices. Optical-scan ballots are paper ballots on which voters fill in bubbles or other machine-readable marks. DRE devices include touch-screen devices that store votes in computer memory. A study in 2016 found that about half of registered voters (47%) live in areas that use only optical-scan as their standard voting system, about 28% live in DRE-only areas, 19% live in areas with both optical-scan and DRE systems, and about 5% live in areas that conduct elections entirely by mail.

Some voters and many experts worry about areas using old, obsolete DRE devices that lack software and security upgrades. An analysis earlier this year found that the USA has made little progress since the 2016 election in replacing antiquated, vulnerable voting machines; and done even less to improve capabilities to recover from cyberattacks.

Last week, the Pew Research Center released results of its latest survey. Key findings: while nearly nine-in-ten (89%) Americans have confidence in poll workers in their community to do a good job, 67% of Americans say it is very or somewhat likely that Russia (or other foreign governments) will try to influence the midterm elections, and less than half (45%) are very or somewhat confident that election systems are secure from hacking. The survey also found that younger voters (ages 18 - 29) are less likely to view voting as convenient, compared to older voters.

Oregon's process is more secure. There are no local, electronic DRE devices scattered across towns and cities that can be hacked or tampered with, and that don't provide paper backups. If there is a question about the count, the paper ballots are stored in a secure place after the election, so elections officials can perform recounts when needed. According to the NBC News report, Oregon's Secretary of State, Dennis Richardson, said:

"You can't hack paper"

Oregon posts results online at results.oregonvotes.gov starting at 8:00 pm on Tuesday. Residents of Oregon can use the oregonvotes.gov site to check their voter record, track their ballot, find an official drop box, check election results, and find other relevant information.

Oregon's process sounds simple, comprehensive, more secure, and easy for voters. Voters don't have to stand in long lines, nor take time off from work to vote. If online retailers can reliably fulfill consumers' online purchases via package delivery, then elections officials in local towns and cities can -- and should -- do the same with paper ballots. Many states already provide absentee ballots via postal mail, so a mail-only process isn't a huge stretch.


When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020. Any outstanding issues need to be resolved before then. One outstanding issue is the "trolley problem" - a situation where a fatal vehicle crash cannot be avoided and the self-driving car must decide whether to save the passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new. There are related issues, and some experts have called for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must be programmed with moral preferences which dictate which person to sacrifice.

The answer is tricky. You might assume: always save the passenger, since nobody would buy a self-driving car that would kill its owner. But what if the pedestrian is in a crosswalk, crossing against a 'do not cross' signal? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elderly, or pregnant? Or a doctor? Does it matter if the passenger is older than the pedestrians?
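
To make the point concrete, here is a minimal sketch of what "programming moral preferences" could look like in practice. Everything here is a hypothetical illustration -- the class, function, rules, and default policy are assumptions for this example, not any manufacturer's actual logic:

```python
# Hypothetical sketch: encoding a ranked list of "moral preferences" for an
# unavoidable-crash dilemma. All names and rules below are illustrative
# assumptions, not any AV maker's real decision logic.

from dataclasses import dataclass

@dataclass
class Party:
    role: str                      # "passenger" or "pedestrian"
    count: int                     # number of people in this party
    crossing_legally: bool = True  # only meaningful for pedestrians

def choose_party_to_spare(a: Party, b: Party) -> Party:
    """Return the party to spare, applying ordered tie-breaking rules:
    spare more lives first, then spare those behaving legally."""
    if a.count != b.count:
        return a if a.count > b.count else b
    if a.crossing_legally != b.crossing_legally:
        return a if a.crossing_legally else b
    # Default: spare the passenger -- a deliberate, debatable policy choice.
    return a if a.role == "passenger" else b

# One passenger versus three jaywalking pedestrians.
passenger = Party("passenger", 1)
pedestrians = Party("pedestrian", 3, crossing_legally=False)
spared = choose_party_to_spare(passenger, pedestrians)
```

Even this toy version shows the problem: the ordering of the rules, and the default at the bottom, are moral judgments someone must make and defend.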

To understand what the public wants -- and expects -- in self-driving cars, also known as autonomous vehicles (AVs), researchers from MIT asked consumers in a massive online global survey. The survey gathered responses from 2 million people in 233 countries and territories, and included 13 accident scenarios with nine varying factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."

Besides recording the accident choices, the researchers also collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants, in order to identify clusters: groups, areas, countries, territories, or regions containing people with similar "moral preferences."
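A rough sketch of that clustering step follows. The preference vectors and the use of a plain k-means pass are assumptions for illustration only; the actual study clustered country-level averages hierarchically:

```python
# Illustrative sketch: grouping respondents by the similarity of their
# "moral preference" vectors. The data is fabricated for demonstration.

import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over equal-length numeric vectors; returns cluster labels."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((x - y) ** 2 for x, y in zip(p, centroids[c])),
            )
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels

# Hypothetical preference vectors: (spare-young, spare-many, spare-pedestrians),
# each on a 0-1 scale, for six fictional respondents.
respondents = [
    (0.90, 0.80, 0.70), (0.85, 0.90, 0.75),
    (0.20, 0.60, 0.30), (0.25, 0.55, 0.35),
    (0.90, 0.85, 0.80), (0.30, 0.50, 0.30),
]
labels = kmeans(respondents, k=2)
```

Respondents with similar vectors end up sharing a label, which is the basic idea behind the regional clusters the researchers report.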

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."
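The country-level relationships described above (e.g., GDP per capita versus tolerance for jaywalkers) are ordinary correlations. A minimal sketch, using Pearson's coefficient and made-up numbers for five fictional countries:

```python
# Sketch of the kind of country-level correlation the researchers describe.
# All data below is fabricated purely for illustration.

import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical inputs: GDP per capita (thousands of USD) and a 0-1 score for
# "preference against pedestrians who cross illegally".
gdp = [5, 12, 25, 40, 60]
pref_against_jaywalkers = [0.30, 0.35, 0.50, 0.60, 0.72]
r = pearson_r(gdp, pref_against_jaywalkers)  # near +1: richer, stricter
```

A value of r near +1 would match the paper's finding that prosperity correlates with a stronger preference against illegal crossers; correlation alone, of course, says nothing about why.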

This is huge. It makes one question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in clusters may resent an AV maker forcing its moral preferences upon them. Some clusters or countries may demand vehicles matching their moral preferences.

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

Several important limitations to remember. And there are more. The study didn't address self-driving trucks. Should an AV tractor-trailer -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other vehicles are motorcycles, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the pleasure craft matter? What if the other craft is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? What happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. Makers of AVs (e.g., autos, ships, trucks) need to explicitly state what their vehicles will (and won't) do. They shouldn't hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries or regions have varying preferences about online privacy, information sharing, and corporate surveillance. What are your opinions?