Your Medical Devices Are Not Keeping Your Health Data to Themselves

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Derek Kravitz and Marshall Allen, ProPublica

Medical devices are gathering more and more data from their users, whether it’s their heart rates, sleep patterns or the number of steps taken in a day. Insurers and medical device makers say such data can be used to vastly improve health care.

But the data that’s generated can also be used in ways that patients don’t necessarily expect. It can be packaged and sold for advertising. It can be anonymized and used by customer support and information technology companies. Or it can be shared with health insurers, who may use it to deny reimbursement. Privacy experts warn that data gathered by insurers could also be used to rate individuals’ health care costs and potentially raise their premiums.

Patients typically have to give consent for their data to be used — so-called “donated data.” But some patients said they weren’t aware that their information was being gathered and shared. And once the data is shared, it can be used in a number of ways. Here are a few of the most popular medical devices that can share data with insurers:

Continuous Positive Airway Pressure, or CPAP, Machines

What Are They?

One of the more popular devices for those with sleep apnea, CPAP machines are covered by insurers after a sleep study confirms the diagnosis. These units, which deliver pressurized air through masks worn by patients as they sleep, collect data and transmit it wirelessly.

What Do They Collect?

It depends on the unit, but CPAP machines can collect data on the number of hours a patient uses the device, the number of interruptions in sleep and the amount of air that leaks from the mask.

Who Gets the Info?

The data may be transmitted to the makers or suppliers of the machines. Doctors may use it to assess whether the therapy is effective. Health insurers may receive the data to track whether patients are using their CPAP machines as directed. They may refuse to reimburse the costs of the machine if the patient doesn’t use it enough. The device maker ResMed said in a statement that patients may withdraw their consent to have their data shared.

Heart Monitors

What Are They?

Heart monitors, oftentimes small, battery-powered devices worn on the body and attached to the skin with electrodes, measure and record the heart’s electrical signals, typically over a few days or weeks, to detect things like irregular heartbeats or abnormal heart rhythms. Some devices implanted under the skin can last up to five years.

What Do They Collect?

Wearable ones include Holter monitors, wired external devices that attach to the skin, and event recorders, which can track slow or fast heartbeats and fainting spells. Data can also be shared from implanted pacemakers, which keep the heart beating properly for those with arrhythmias.

Who Gets the Info?

Low resting heart rates or other abnormal heart conditions are commonly used by insurance companies to place patients in more expensive rate classes. Children undergoing genetic testing are sometimes outfitted with heart monitors before their diagnosis, increasing the odds that their data is used by insurers. This sharing is the most common complaint cited by the World Privacy Forum, a consumer rights group.

Blood Glucose Monitors

What Are They?

Millions of Americans who have diabetes are familiar with blood glucose meters, or glucometers, which take a blood sample on a strip of paper and analyze it for glucose, or sugar, levels. This allows patients and their doctors to monitor their diabetes so they don’t have complications like heart or kidney disease. Blood glucose meters are used by the more than 1.2 million Americans with Type 1 diabetes, which is usually diagnosed in children, teens and young adults.

What Do They Collect?

Blood sugar monitors measure the concentration of glucose in a patient’s blood, a key indicator of proper diabetes management.

Who Gets the Info?

Diabetes monitoring equipment is sold directly to patients, but many still rely on insurer-provided devices. To get reimbursement for blood glucose meters, health insurers will typically ask for at least a month’s worth of blood sugar data.

Lifestyle Monitors

What Are They?

Step counters, medication alerts and trackers, and in-home cameras are among the devices in the increasingly crowded lifestyle health industry.

What Do They Collect?

Many health data research apps are made up of “donated data,” which is provided by consumers and falls outside of federal guidelines that require the sharing of personal health data to be disclosed and the data anonymized to protect the identity of the patient. This data includes everything from counters for the number of steps you take, the calories you eat and the number of flights of stairs you climb to more traditional health metrics, such as pulse and heart rates.

Who Gets the Info?

It varies by device. But the makers of the Fitbit step counter, for example, say they never sell customer personal data or share personal information unless a user requests it; it is part of a legal process; or it is provided on a “confidential basis” to a third-party customer support or IT provider. That said, Fitbit allows users who give consent to share data “with a health insurer or wellness program,” according to a statement from the company.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


House Oversight Committee Report On The Equifax Data Breach. Did The Recommendations Go Far Enough?

On Monday, the U.S. House of Representatives Committee on Oversight and Government Reform released its report (Adobe PDF) on the massive Equifax data breach, where the most sensitive personal and payment information of more than 148 million consumers -- nearly half of the population -- was accessed and stolen. The report summary:

"In 2005, former Equifax Chief Executive Officer(CEO) Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, information technology (IT) systems, and data. While the acquisition strategy was successful for Equifax’s bottom line and stock price, this growth brought increasing complexity to Equifax’s IT systems, and expanded data security risks... Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable."

The report cited several failures by Equifax. First:

"On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Equifax used Apache Struts to run certain applications on legacy operating systems. The following day, the Department of Homeland Security alerted Equifax to this critical vulnerability. Equifax’s Global Threate and Vulnerability Management (GTVM) team emailed this alert to over 400 people on March 9, instructing anyone who had Apache Struts running on their system to apply the necessary patch within 48 hours. The Equifax GTVM team also held a March 16 meeting about this vulnerability. Equifax, however, did not fully patch its systems. Equifax’s Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed."

As bad as that is, it gets worse:

"On May 13, 2017, attackers began a cyberattack on Equifax. The attack lasted for 76 days. The attackers dropped “web shells” (a web-based backdoor) to obtain remote control over Equifax’s network. They found a file containing unencrypted credentials (usernames and passwords), enabling the attackers to access sensitive data outside of the ACIS environment. The attackers were able to use these credentials to access 48 unrelated databases."

"Attackers sent 9,000 queries on these 48 databases, successfully locating unencrypted personally identifiable information (PII) data 265 times. The attackers transferred this data out of the Equifax environment, unbeknownst to Equifax. Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate. On July 29, 2017, Equifax updated the expired certificate and immediately noticed suspicious web traffic..."

Findings so far: 1) growth prioritized over security while archiving highly valuable data; 2) antiquated computer systems; 3) failed security patches; 4) unprotected user credentials; and 5) failed intrusion detection mechanism. Geez!
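
Several of these failures are detectable with routine automation. As a rough illustration only (not anything Equifax used), a short Python check can warn when a monitoring certificate is about to lapse; the hostname below is a placeholder:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Connect to a host and return the days remaining on its TLS certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

if __name__ == "__main__":
    # Placeholder list; a real job would cover every business-critical domain
    # and alert someone well before day zero.
    for host in ["example.com"]:
        remaining = days_until_expiry(host)
        status = "OK" if remaining > 30 else "RENEW NOW"
        print(f"{host}: {remaining:.0f} days left [{status}]")
```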

Only after updating its expired security certificate did Equifax notice the intrusion. After that, you'd think that Equifax would have implemented a strong post-breach response. You'd be wrong. More failures:

"When Equifax informed the public of the breach on September 7, the company was unprepared to support the large number of affected consumers. The dedicated breach website and call centers were immediately overwhelmed, and consumers were not able to obtain timely information about whether they were affected and how they could obtain identity protection services."

"Equifax should have addressed at least two points of failure to mitigate, or even prevent, this data breach. First, a lack of accountability and no clear lines of authority in Equifax’s IT management structure existed, leading to an execution gap between IT policy development and operation. This also restricted the company’s implementation of other security initiatives in a comprehensive and timely manner. As an example, Equifax had allowed over 300 security certificates to expire, including 79 certificates for monitoring business critical domains. "Second, Equifax’s aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and antiquated nature of Equifax’s IT systems made IT security especially challenging..."

Findings so far: 6) inadequate post-breach response; and 7) complicated IT structure making updates difficult. Geez!

The report listed the executives who retired and/or were fired. That's a small start for a company archiving the most sensitive personal and payment information of all USA citizens. The report included seven recommendations:

"1: Empower Consumers through Transparency. Consumer reporting agencies (CRAs) should provide more transparency to consumers on what data is collected and how it is used. A large amount of the public’s concern after Equifax’s data breach announcement stemmed from the lack of knowledge regarding the extensive data CRAs hold on individuals. CRAs must invest in and deploy additional tools to empower consumers to better control their own data..."

"2: Review Sufficiency of FTC Oversight and Enforcement Authorities. Currently, the FTC uses statutory authority under Section 5 of the Federal Trade Commission Act to hold businesses accountable for making false or misleading claims about their data security or failing to employ reasonable security measures. Additional oversight authorities and enforcement tools may be needed to enable the FTC to effectively monitor CRA data security practices..."

"3: Review Effectiveness of Identity Monitoring and Protection Services Offered to Breach Victims. The General Accounting Office (GAO) should examine the effectiveness of current identity monitoring and protection services and provide recommendations to Congress. In particular, GAO should review the length of time that credit monitoring and protection services are needed after a data breach to mitigate identity theft risks. Equifax offered free credit monitoring and protection services for one year to any consumer who requested it... This GAO study would help clarify the value of credit monitoring services and the length of time such services should be maintained. The GAO study should examine alternatives to credit monitoring services and identify addit ional or complimentary services..."

"4: Increase Transparency of Cyber Risk in Private Sector. Federal agencies and the private sector should work together to increase transparency of a company’s cybersecurity risks and steps taken to mitigate such risks. One example of how a private entity can increase transparency related to the company’s cyber risk is by making disclosures in its Securities and Exchange Commission (SEC) filings. In 2011, the SEC developed guidance to assist companies in disclosing cybersecurity risks and incidents. According to the SEC guidance, if cybersecurity risks or incidents are “sufficiently material to investors” a private company may be required to disclose the information... Equifax did not disclose any cybersecurity risks or cybers ecurity incidents in its SEC filings prior to the 2017 data breach..."

"5: Hold Federal Contractors Accountable for Cybersecurity with Clear Requirements. The Equifax data breach and federal customers’ use of Equifax identity validation services highlight the need for the federal government to be vigilant in mitigating cybersecurity risk in federal acquisition. The Office of Management and Budget (OMB) should continue efforts to develop a clear set of requirements for federal contractors to address increasing cybersecurity risks, particularly as it relates to handling of PII. There should be a government-wide framework of cybersecurity and data security risk-based requirements. In 2016, the Committee urged OMB to focus on improving and updating cybersecurity requirements for federal acquisition... The Committee again urges OMB to expedite development of a long-promised cybersecurity acquisition memorandum to provide guidance to federal agencies and acquisition professionals..."

"6: Reduce Use of Social Security Numbers as Personal Identifiers. The executive branch should work with the private sector to reduce reliance on Social Security numbers. Social Security numbers are widely used by the public and private sector to both identify and authenticate individuals. Authenticators are only useful if they are kept confidential. Attackers stole the Social Security numbers of an estimated 145 million consumers from Equifax. As a result of this breach, nearly half of the country’s Social Security numbers are no longer confidential. To better protect consumers from identity theft, OMB and other relevant federal agencies should pursue emerging technology solutions as an alternative to Social Security number use."

"7: Implement Modernized IT Solutions. Companies storing sensitive consumer data should transition away from legacy IT and implement modern IT security solutions. Equifax failed to modernize its IT environments in a timely manner. The complexity of the legacy IT environment hosting the ACIS application allowed the attackers to move throughout the Equifax network... Equifax’s legacy IT was difficult to scan, patch, and modify... Private sector companies, especially those holding sensitive consumer data like Equifax, must prioritize investment in modernized tools and technologies...."

The history of corporate data breaches and the above list of corporate failures by Equifax both should be warnings to anyone in government promoting the privatization of current government activities. Companies screw up stuff, too.

Recommendation #6 is frightening in that it hasn't been implemented. Yikes! No federal agency should do business with a private sector firm operating with antiquated computer systems. And, if Equifax can't protect the information it archives, it should cease to exist. While that sounds harsh, it ain't. Continual data breaches place risks and burdens upon already burdened consumers trying to control and protect their data.

What are your opinions of the report? Did it go far enough?


You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder, which, left untreated, can raise the risk of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
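
Expressed as arithmetic, the Medicare threshold is a simple calculation over a nightly usage log. Here is a minimal sketch using made-up data, not any insurer's actual scoring code:

```python
from datetime import date, timedelta

def meets_medicare_threshold(nightly_hours, window_start, window_days=30,
                             min_hours=4.0, min_fraction=0.70):
    """Return True if usage was at least 4 hours on at least 70% of nights
    in the 30-day window (the rule described above)."""
    nights = [window_start + timedelta(days=i) for i in range(window_days)]
    good_nights = sum(1 for n in nights if nightly_hours.get(n, 0.0) >= min_hours)
    return good_nights / window_days >= min_fraction

# Hypothetical log: hours of use keyed by date (missing nights count as zero).
log = {date(2018, 1, 1) + timedelta(days=i): hours
       for i, hours in enumerate([6.5, 4.2, 0.0, 3.1, 7.0] * 6)}

print(meets_medicare_threshold(log, date(2018, 1, 1)))  # False: only 60% of nights qualify
```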

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Oath To Pay Almost $5 Million To Settle Charges By New York AG Regarding Children's Privacy Violations

Barbara D. Underwood, the Attorney General (AG) for New York State, announced last week a settlement with Oath, Inc. for violating the Children’s Online Privacy Protection Act (COPPA). Oath Inc. is a wholly-owned subsidiary of Verizon Communications. Until June 2017, Oath was known as AOL Inc. ("AOL"). The announcement stated:

"The Attorney General’s Office found that AOL conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13. Through these auctions, AOL collected, used, and disclosed personal information from the websites’ users in violation of COPPA, enabling advertisers to track and serve targeted ads to young children. The company has agreed to adopt comprehensive reforms to protect children from improper tracking and pay a record $4.95 million in penalties..."

The United States Congress enacted COPPA in 1998 to protect the safety and privacy of young children online. As many parents know, young children don't understand complicated legal documents such as terms-of-use and privacy policies. COPPA prohibits operators of certain websites from collecting, using, or disclosing personal information (e.g., first and last name, e-mail address) of children under the age of 13 without first obtaining parental consent.

The definition of "personal information" was revised in 2013 to include persistent identifiers that can be used to recognize a user over time and across websites, such as the ID found in a web browser cookie or an Internet Protocol (“IP”) address. The revision effectively prohibits covered operators from using cookies, IP addresses, and other persistent identifiers to track users across websites for most advertising purposes on COPPA-covered websites.

The announcement by AG Underwood explained the alleged violations in detail. Despite policies to the contrary:

"... AOL nevertheless used its display ad exchange to conduct billions of auctions for ad space on websites that it knew to be directed to children under the age of 13 and subject to COPPA. AOL obtained this knowledge in two ways. First, several AOL clients provided notice to AOL that their websites were subject to COPPA. These clients identified more than a dozen COPPA-covered websites to AOL. AOL conducted at least 1.3 billion auctions of display ad space from these websites. Second, AOL itself determined that certain websites were directed to children under the age of 13 when it conducted a review of the content and privacy policies of client websites. Through these reviews, AOL identified hundreds of additional websites that were subject to COPPA. AOL conducted at least 750 million auctions of display ad space from these websites."

AG Underwood said in a statement:

"COPPA is meant to protect young children from being tracked and targeted by advertisers online. AOL flagrantly violated the law – and children’s privacy – and will now pay the largest-ever penalty under COPPA. My office remains committed to protecting children online and will continue to hold accountable those who violate the law."

A check at press time of both the press and "company values" sections of Oath's site failed to find any mentions of the settlement. TechCrunch reported on December 4th:

"We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which it writes: "We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online." The spokesman also did not confirm nor dispute the contents of the New York Times report."

Hmmm. Almost a week has passed since AG Underwood's December 4th announcement. You'd think that Oath management would have released a statement by now. Maybe Oath isn't as committed to children's online privacy as they claim. Something for parents to note.

The National Law Review provided some context:

"...in 2016, the New York AG concluded a two-year investigation into the tracking practices of four online publishers for alleged COPPA violations... As recently as September of this year, the New Mexico AG filed a lawsuit for alleged COPPA violations against a children's game app company, Tiny Lab Productions, and the online ad companies that work within Tiny Lab's, including those run by Google and Twitter... The Federal Trade Commission (FTC) continues to vigorously enforce COPPA, closing out investigations of alleged COPPA violations against smart toy manufacturer VTech and online talent search company Explore Talent... there have been a total of 28 enforcement proceedings since the COPPA rule was issued in 2000."

You can read about many of these actions in this blog, and how COPPA was strengthened in 2013.

So, the COPPA law works well and it is being vigorously enforced. Kudos to AG Underwood, her staff, and other states' AGs for taking these actions. What are your opinions about the AOL/Oath settlement?


Massive Data Breach At Quora Affects 100 Million Users

Quora, the knowledge-sharing social networking site, announced on Monday a data breach affecting about 100 million of its users. The company discovered the breach on Friday, and a breach investigation is ongoing.

The company’s Chief Executive Officer, Adam D’Angelo, wrote in a blog post that the following data elements were compromised or stolen:

"a) Account information, e.g. name, email address, encrypted password (hashed using bcrypt with a salt that varies for each user), data imported from linked networks when authorized by users; b) Public content and actions, e.g. questions, answers, comments, upvotes; and c) Non-public content and actions, e.g. answer requests, downvotes, direct messages (note that a low percentage of Quora users have sent or received such messages)"

Quora has invalidated affected users' passwords. Quora does not yet know exactly how unauthorized persons accessed its system. The breach announcement did not state when the intrusion began. D'Angelo added:

"We're still investigating the precise causes and in addition to the work being conducted by our internal security teams, we have retained a leading digital forensics and security firm to assist us. We have also notified law enforcement officials."

Affected users are being notified via email. Affected users returning to the site must reset their accounts with new passwords. Quora encourages users with questions to visit its breach help site. Users are warned to change their online passwords.

The New York Times reported:

"... the incident was unlikely to result in identity theft, as the site does not collect sensitive information such as credit card or Social Security numbers... 300 million people around the world use its site at least once a month to ask and answer questions about politics, faith, calculus, unrequited love, the meaning of life and more. By comparison, Twitter claims 326 million monthly active users. But since it blasted onto the social media landscape in 2010, igniting a blaze of interest among tech company employees, Quora has not become the mainstream cultural force that Twitter has..."

This breach is another reminder to all consumers to never use the same password at multiple sites. Cybercriminals are persistent, and will reuse stolen passwords to see which other sites they can break into to steal sensitive personal and payment information.

If you received an email breach notice from Quora, please share it below (after deleting any sensitive personal data).


Gigantic Data Breach At Marriott International Affects 500 Million Customers. Plenty Of Questions Remain

A gigantic data breach at Marriott International affects about 500 million customers who have stayed at its Starwood network of hotels in the United States, Canada, and the United Kingdom. Marriott International announced the data breach on Friday, November 30th, and set up a website for affected Starwood guests.

According to its breach announcement, an "internal security tool" discovered the breach on September 8, 2018. The initial data breach investigation determined that unauthorized persons accessed its registration database as far back as 2014, and had both copied and encrypted information before removing it. Marriott engaged security experts, the information was partially decrypted on November 19, 2018, and the global hotel chain determined that the information was from its Starwood guest reservation database.

The Starwood hotels network includes brands such as W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Le Méridien Hotels & Resorts, Four Points by Sheraton, and more. Marriott has not finished decrypting all information, so there may be future updates from the breach investigation.

For 327 million guests, the personal data items stolen included a combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (“SPG”) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences. For some guests, the information stolen also included payment card numbers and payment card expiration dates. While Marriott said the payment card numbers were encrypted using Advanced Encryption Standard encryption (AES-128), it warned that it doesn't yet know if the encryption keys (needed to decrypt payment information) were also stolen.
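
Marriott's caveat about the keys is the crux. AES-128 is strong, but if the encryption keys were reachable by the attackers, the encrypted card numbers are effectively plaintext. A rough sketch with Python's cryptography package illustrates the point; the AES-GCM mode here is an assumption, since Marriott disclosed only "AES-128":

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# A 128-bit key, as in AES-128. Everything below is only as secret as this key.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # unique per encryption
ciphertext = aesgcm.encrypt(nonce, b"4111111111111111", None)

# Whoever holds the key reverses the protection in one call, which is why keys
# belong in a separate, hardened store (an HSM or key management service).
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'4111111111111111'
```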

For 173 million guests, fewer personal data items were stolen: "name and sometimes other data such as mailing address, email address, or other information." Marriott International said its Marriott-branded hotels were not affected since they use a different reservations database on a different server.

Marriott said it has notified law enforcement, is working with law enforcement, and has begun to notify affected guests via email. The hotel chain will offer affected guests in select countries one year of free enrollment in the WebWatcher program, which "monitors internet sites where personal information is shared and [sends] an alert to the consumer if evidence of the consumer’s personal information is found." WebWatcher will not be offered to all affected guests. Eligible guests should read the fine print, which the Starwood breach site summarized:

"Due to regulatory and other reasons, WebWatcher or similar products are not available in all countries. For residents of the United States, enrolling in WebWatcher also provides you with two additional benefits: (1) a Fraud Loss Reimbursement benefit, which reimburses you for out-of-pocket expenses totaling up to $1 million in covered legal costs and expenses for any one stolen identity event. All coverage is subject to the conditions and exclusions in the policy; and (2) unlimited access to consultation with a Kroll fraud specialist. Consultation support includes showing you the most effective ways to protect your identity, explaining your rights and protections under the law, assistance with fraud alerts, and interpreting how personal information is accessed and used..."

The seriousness of this data breach cannot be overstated. First, it went undetected for a very long time. Marriott needs to explain that and the changes it will implement with an improved "internal security tool" so this doesn't happen again. Second, 500 million is an awful lot of affected customers. An awful lot. Third, CNN Business reported:

"Because the hack involves customers in the European Union and the United Kingdom, the company might be in violation of the recently enacted General Data Protection Regulation (GDPR). Mark Thompson, the global lead for consulting company KPMG's Privacy Advisory Practice, told CNN Business that hefty GDPR penalties will potentially be slapped on the company. "The size and scale of this thing is huge," he said, adding that it's going to take several months for (EU) regulators to investigate the breach."

Fourth, the data items stolen are sufficient to cause plenty of damage. Security experts advise affected customers to change their Starwood passwords, check the answers.Kroll.com breach site next week to see if their information was compromised/stolen, sign up for credit monitoring (if they don't already have it), watch their payment or bank accounts for fraudulent entries, and consider an early passport renewal if their passport number was compromised/stolen. Fifth, companies usually arrange free credit monitoring for breach victims for one or two years. So far, Marriott hasn't done this. Maybe it will. If not, Marriott needs to explain why.

Sixth, breach notification of affected guests via email seems sketchy... like Marriott is trying to cut corners and costs. History is littered with numerous examples of skilled spammers and cybercriminals using faked or spoofed email to trick consumers into revealing sensitive personal and payment information. It will be interesting to see how Marriott's breach notification via email works and manages this threat.

Seventh, lawsuits and other investigations have already begun. ZDNet reported:

"... two Oregon men sued international hotel chain Marriott for exposing their data. Their lawsuit was followed hours later by another one filed in the state of Maryland. Both lawsuits seek class-action status. While plaintiffs in the Maryland lawsuit didn't specify the amount of damages they were seeking from Marriott, the plaintiffs in the Oregon lawsuit want $12.5 billion in costs and losses. his should equate to $25 for each of the 500 million users who had their personal data stolen from Marriott's serv ers... The Maryland lawsuit was filed by Baltimore law firm Murphy, Falcon & Murphy..."

Bloomberg BNA announced:

"The Massachusetts, New York and Illinois state attorneys general quickly announced they would examine the hack. Connecticut George Jepsen (D) is also looking into the matter, a spokesman told Bloomberg Law."

Eighth, the breach site's website address is unnecessarily vague: answers.kroll.com. Frankly, a website address like "starwood-breach.kroll.com" or "marriott-breach.kroll.com" would have been better. (The combination of email notification and vague website name seems eerily similar to the post-breach clusterf--k by Equifax's poorly implemented breach site.) Maybe this vague address was a temporary quick fix, and Marriott will host a comprehensive breach-status site later on one of its servers. That would be better and clearer for affected customers, who probably are unfamiliar with Kroll. Readers of this blog probably first encountered Kroll after IBM Inc. contracted it to help implement IBM's post-breach response in 2007.

The Starwood breach notice appears within the news section of the Marriott.com site. Also, Marriott's post-breach notice included overlays on both the home page and the Starwood landing page within the Marriott.com site. This is a good start, but a better implementation would insert a link directly into the webpages, since the overlays don't render well in all browsers on all devices. (Marriott: you did test this before deployment?) Example: people with pop-up blockers may miss the breach notice in the overlays. And, a better implementation would link to the news story's detail page within the Marriott.com site -- not directly to the vague answers.kroll.com site.

Last, some questions remain about the post-breach response:

  • Why email notices to breach victims? Hopefully, there are more reasons than simply saving postal mailing costs.
  • Why no credit monitoring offers to breach victims?
  • What data in the Starwood reservations database was altered by the attackers? That the data was encrypted by the attackers suggests they had sufficient time, resources, and skills to modify or alter database records. Marriott needs to explain what it is doing about this.
  • When will Marriott host a breach site on one of its servers? No doubt, there will be follow-up news, more questions by breach victims, and breach investigation updates. A dedicated breach site on one of its servers seems best. Leaning too much on Kroll is not good.
  • Why did the intrusion go undetected for so long? Marriott needs to explain this and the post-breach fix so guests are reassured it won't happen again.
  • Is the main Marriott reservations database also vulnerable? Guests for other brands weren't affected since a separate reservations database was used. Maybe this is because the main Marriott reservations database and server are better protected, or cybercriminals haven't attacked it (yet). Guests deserve comprehensive answers.
  • Why the website overlays/pop-ups and not static links?
  • What changes (e.g., software upgrades, breach detection tools, employee training, etc.) will be implemented so this doesn't happen again?

Having blogged about data breaches for 11+ years, I find these types of questions often arise. None are unreasonable questions. Answers will help guests feel comfortable with using Starwood hotels. Plus, Marriott has an obligation to fully inform guests directly at its website, and not lean on Kroll. What do you think?


Google Admitted Tracking Users' Location Even When Phone Setting Disabled

If you are considering, or already have, a smartphone running Google's Android operating system (OS), then take note. ZDNet reported (emphasis added):

"Phones running Android have been gathering data about a user's location and sending it back to Google when connected to the internet, with Quartz first revealing the practice has been occurring since January 2017. According to the report, Android phones and tablets have been collecting the addresses of nearby cellular towers and sending the encrypted data back, even when the location tracking function is disabled by the user... Google does not make this explicitly clear in its Privacy Policy, which means Android users that have disabled location tracking were still being tracked by the search engine giant..."

This is another reminder of the cost of free services and/or cheaper smartphones. You're gonna be tracked... extensively... whether you want it or not. The term "surveillance capitalism" is often used.

A reader shared a blunt assessment, "There is no way to avoid being Google’s property (a/k/a its bitch) if you use an Android phone." Harsh, but accurate. What is your opinion?


Massive Data Breach At U.S. Postal Service Affects 60 Million Users

The United States Postal Service (USPS) experienced a massive data breach due to a vulnerable component at its website. The "application programming interface," or API, component allowed unauthorized users to access and download details about other users of the Informed Visibility service.

Security researcher Brian Krebs explained:

"In addition to exposing near real-time data about packages and mail being sent by USPS commercial customers, the flaw let any logged-in usps.com user query the system for account details belonging to any other users, such as email address, username, user ID, account number, street address, phone number, authorized users, mailing campaign data and other information.

Many of the API’s features accepted “wildcard” search parameters, meaning they could be made to return all records for a given data set without the need to search for specific terms. No special hacking tools were needed to pull this data, other than knowledge of how to view and modify data elements processed by a regular Web browser like Chrome or Firefox."

Geez! The USPS has since fixed the API vulnerability. Regardless, this is bad, very bad, for several reasons. Not only did the vulnerable API fail to prevent one user from viewing details about another, it also allowed changes to some data elements. Krebs added:

"A cursory review by KrebsOnSecurity indicates the promiscuous API let any user request account changes for any other user, such as email address, phone number or other key details. Fortunately, the USPS appears to have included a validation step to prevent unauthorized changes — at least with some data fields... The ability to modify database entries related to Informed Visibility user accounts could create problems for the USPS’s largest customers — think companies like Netflix and others that get discounted rates for high volumes. For instance, the API allowed any user to convert regular usps.com accounts to Informed Visibility business accounts, and vice versa."

About 13 million Informed Delivery users were also affected, since the vulnerable API component affected all USPS.com users. A vulnerability like this makes package theft easier since criminals could determine when certain types of mail (e.g., debit cards, credit cards, etc.) arrive at users' addresses. The vulnerable API probably existed for more than a year; that is when a security researcher first alerted the USPS about it.

While the USPS provided a response to Krebs on Security, a check at press time of the Newsroom and blog sections of About.USPS.com failed to find any mention of the data breach. Not good. Transparency matters.

If the USPS is serious about data security, then it should issue a public statement. When will users receive breach notification letters, if they haven't been sent? Who fixed the vulnerable API? How long was it broken? What post-breach investigation is underway? What types of changes (e.g., employee training, software testing, outsource vendor management, etc.) are being implemented so this won't happen again?

Trust matters. The lack of a public statement makes it difficult for consumers to judge the seriousness of the breach and the seriousness of the fix by USPS. We probably will hear more about this breach.


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

On Friday November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of an investigation by the DPC of the LinkedIn.com social networking site, after a 2017 complaint by a person who didn't use the social networking service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could then use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without the required "instruction" from the data controller, LinkedIn Ireland. Not good.
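
For context, "hashed" email addresses in ad targeting usually means the addresses are normalized and run through a one-way hash such as SHA-256 before being uploaded and matched against a platform's own hashed records. A small illustration; the exact scheme LinkedIn and Facebook used is not described in the DPC report:

```python
import hashlib

def hashed_email(email: str) -> str:
    """Normalize an address, then hash it; typical preparation for
    'hashed email' audience matching."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

print(hashed_email("  Jane.Doe@Example.com "))
# Two parties holding the same plaintext address compute the same digest, so
# hashed lists can be matched without exchanging raw addresses. Note that
# hashing is not anonymization: anyone with a candidate address can hash it
# and test whether it appears in the list.
```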

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests that the social networking service's activity violated data protection laws. The European Union (EU) has since implemented stronger privacy rules, known as the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The new GDPR applies both to companies operating within the EU and to companies located outside the EU which offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistently with any promises they have made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC didn't fine LinkedIn probably because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied to violations occurring on or after May 25, 2018, and
  • People in different areas of the world view privacy and data protection differently - as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize regional differences, find ways to sell products and services across areas without degraded customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Amazon Said Its Data Breach Was Due To A "Technical Error" And Disclosed Few Breach Details

Amazon.com, the online retail giant, confirmed that it experienced a data breach last Wednesday. CBS News reported:

"Amazon said a technical error on its website exposed the names and email addresses of some customers. The online retail giant its website and systems weren't hacked. "We have fixed the issue and informed customers who may have been impacted," said an Amazon spokesperson. An Amazon spokesman didn't answer additional questions, like how many people were affected or whether any of the information was stolen."

A check of the press center and blog sections of the Amazon.com site failed to find any mentions of the data breach. The Ars Technica blog posted the text of the breach notification email Amazon sent to affected users:

"From: Amazon.com
Sent: 21 November 2018 10:53
To: a--------l@hotmail.com
Subject: Important Information about your Amazon.com Account

Hello,
We’re contacting you to let you know that our website inadvertently disclosed your name and email address due to a technical error. The issue has been fixed. This is not a result of anything you have done, and there is no need for you to change your password or take any other action.

Sincerely,
Customer Service
http://Amazon.com"

What? That's all? No link to a site or to a page for customers with questions?

This incident is a reminder that several things can cause data breaches. It's not only when cyber-criminals break into an organization's computers or systems. Human error causes data breaches, too. In some breaches, employees collude with criminals. In some cases, sloppy data security by outsource vendors causes data breaches. Details matter.

Typically, organizations affected by data breaches hire external security agencies to conduct independent, post-breach investigations to learn important details: when the breach started, how exactly the breach happened, the list of data elements unauthorized users accessed/stole, what else may have happened that wasn't readily apparent when the incident was discovered, and key causal events leading up to the breach -- all so that a complete fix can be implemented, and so that it doesn't happen again.

Who made the "technical error?" Who discovered it? What caused it? How long did the error exist? Who fixed it? Were specialized skills or tools necessary? What changes were made so that it won't happen again? Amazon isn't saying. If management decided to skip a post-breach investigation, consumers deserve to know that and why, too.

Often, the breach starts long before it is discovered by the company, or by a security researcher. Often, the fix includes several improvements: software changes, employee training, and/or improved security processes with contractors.

So, all we know is that names and email addresses were accessed by unauthorized persons. If stolen, that is sufficient to do damage -- spam or phishing email messages designed to trick victims into revealing sensitive personal information (e.g., usernames, passwords) and payment information (e.g., bank account numbers, credit card numbers). It is not too much to ask Amazon to share both breach details and the results of a post-breach investigation.

Executives at Amazon know all of this, so maybe it was a management decision not to share breach details or the results of a post-breach investigation -- perhaps not wanting to risk huge Black Friday holiday sales. Then again, the lack of details could imply the breach was far worse than management wants to admit.

Either way, this is troublesome. It's all about trust. When details are shared, consumers can judge the severity of the breach, the completeness of the company's post-breach response, and ideally feel better about continuing to shop at the site. What do you think?


Aging Machines, Crowds, Humidity: Problems at the Polls Were Mundane but Widespread

[Editor's Note: today's guest blog post, by Reporters at ProPublica, discusses widespread problems many voters encountered earlier this month. The data below was compiled before the runoffs in Florida, Georgia and other states. It is reprinted with permission.]

By Ian MacDougall, Jessica Huseman, and Isaac Arnsdorf - ProPublica

If the defining risk of Election Day 2016 was foreign meddling, 2018’s seems to have been a domestic overload. High turnout across the country threw existing problems — aging machines, poorly trained poll workers and a hot political landscape — into sharp relief.

Michael McDonald, a political science professor at the University of Florida who studies turnout, says early numbers indicate Tuesday’s midterm saw the highest percentage turnout since the mid-’60s. “All signs indicate that everyone is now engaged in this country — Republicans and Democrats,” he said, adding that he expects 2020 to also be a year of high turnout. “Election officials need to start planning for that now, and hopefully elected officials who hold the purse strings will be responsive to those needs.”

Aging Technology

Electionland monitored problems across the country on Election Day, supporting the work of 250 local journalists in more than 120 local newsrooms. Thousands of voters reported issues at the polls, and Electionland sought to report on as many as possible. The most striking problem of the night was perhaps the most predictable — aged or ineffective voting equipment caused hours-long lines across the country.

American voting hasn’t had a major technology refresh since the early 2000s, in the aftermath of the Florida recount and the passage of the 2002 Help America Vote Act, which infused billions of dollars into American elections. More recent upgrades, such as poll books that could be accessed via computer, were supposed to reduce bottlenecks at check-ins — but they repeatedly failed on Tuesday, worsening waits in Georgia, South Carolina and Indiana.

While aging infrastructure was already a well-known problem to election administrators, the surge of voters experiencing ordinary glitches led to extraordinarily long waits, sometimes stretching over hours. From Pennsylvania to Georgia to Arizona and Michigan, polling places started the day with broken machines leading to long lines, and never recovered.

“In 2016, we learned the technology has security vulnerabilities. Today was a wake-up call to performance vulnerabilities,” said Trey Grayson, the former president of the National Association of Secretaries of State and a member of the 2013 Presidential Commission on Election Administration. Tuesday, Grayson said, showed “the implications of turnout, stressing the system, revealing planning failures, feel impact of limited resources. If you had more resources, you’d have had more paper ballots, more machines, more polling places.”

The election hotline from the Lawyers’ Committee for Civil Rights Under Law clocked 24,000 calls by 6 p.m., twice the rate in the 2014 midterm election. “People were not able to vote because of technical issues that are completely avoidable,” Ryan Snow, of the Lawyers’ Committee, said. “People who came to vote — registered to vote, showed up to vote — were not able to vote.”

“We think we can solve all of these voting problems by adding technology, but you have to have a contingency plan for when each of these pieces fail,” said Joseph Lorenzo Hall, the chief technologist at the Center for Democracy & Technology in Washington, D.C. It appears many of the places that saw electronic poll book failures had no viable backup system.

Hall said that problems with machines and computers force election administrators to become technicians on the spot, despite their lack of training. This exacerbates problems: Poll workers aren’t able to accurately or efficiently report issues to their central offices, leading to delays in dispatches of appropriate equipment or staff.

Perhaps the most embarrassing technological faceplant was in New York City, where the machines used to scan ballots proved no match for wet weather. Humidity caused the scanners to malfunction, leading to outages and long lines.

The breakdowns proliferated up and down the East Coast. Humidity also roiled scanners in North Carolina. In Charleston, South Carolina, an interminable delay driven by a downed voting system drove one person to leave for work before she could cast her ballot. “It felt like a type of disenfranchisement,” she told ProPublica. Voting machine outages in some Georgia precincts stranded voters in hours-long lines. In predominantly black sections of St. Petersburg, Florida, wait times ballooned as voting machines froze.

Some of the pressure on the aging technology was relieved by early and mail-in voting, so that everyone didn’t have to vote on the same day, Grayson said. But many states still require people to cast their ballots on Election Day, and others have added time-consuming procedures such as strict ID requirements.

Those sorts of security measures add their own layers of confusion. Many voters reported never receiving their ballots in the mail. Georgia voter Shelley Martin couldn’t vote because her ballot was mailed to the wrong address — even though she filled out her address correctly, the county election office accidentally changed a 9 to a 0. In Ohio, some in-person voters were incorrectly told they had already received an absentee ballot, because of a computer error.

When people show up at the wrong polling place or have problems with their registration, they are usually entitled to cast a provisional ballot that will be counted once it’s verified. But these problems were so common on Tuesday that some locations ran out of provisional ballots and turned people away, according to North Carolina voters’ reports to ProPublica. In Arizona, some voters were told they couldn’t have provisional ballots because of broken printers. In Pennsylvania, some college students encountered glitches with their registration and said poll workers wouldn’t give them provisional ballots.

A newly implemented law in North Dakota left a handful of college students — many of whom had voted in previous elections — confused and unable to vote. “I was so frustrated because I’ve voted in North Dakota before,” said Alissa Maesse, a student at the University of North Dakota who came to the polls with a Minnesota driver’s license and bank statement with a North Dakota address, but needed a North Dakota driver’s license, identification card or tribal ID. “I can’t participate at all and I wanted to.”

Administrative Error

Administrative stumbling blocks and unhelpful election officials left some voters throughout the day scrambling to figure out where or how they were supposed to vote. Across the country, confusion over new laws and poll worker error forced voters to work with attorneys or drive long distances in an attempt to solve problems.

In Missouri, a last-minute court ruling resulted in chaos across the state. Less than a month before, a judge radically altered the state’s voter ID law to allow more valid forms of identification. By then, poll workers had already been trained. Many enforced the incorrect version of the law.

In St. Charles County, northwest of St. Louis, voters across the county reported that poll workers openly argued with voters who showed identification allowed under the new ruling, demanding old forms of ID. By the end of the evening, the county had ignored demand letters from attorneys at Advancement Project, a civil rights group. Denise Lieberman, an attorney with the group, said it is considering legal remedies due to the county’s “flagrant disregard” for the judge’s ruling.

Rich Chrismer, the director of elections for the county, said he never saw the letters — he was at polling places all day. By late morning, he’d been made aware of 12 different polling locations where poll workers were giving incorrect instructions. He utilized the local police to distribute memos to all 121 polling locations, correcting poll worker instructions. They were distributed by the late morning, and complaints dropped off after that, he said.

Chrismer said training had already happened by the time the judge issued his ruling, but that he’d put new instructions in “four different places” in the packet mailed to poll workers ahead of the election. “They were either ignoring me or they didn’t know how to read, which upsets me,” he said.

Dallas County Clerk Stephanie Hendricks expressed similar frustration at the short window of time allowed by the court to retrain poll workers, update signs and ensure voter understanding.

Hendricks said the small county had to “scrape the bottom of the barrel” for poll workers, who only received 90 minutes of training. This, combined with the very short notice for the legal change, made it difficult to help poll workers understand the law. “The last few elections it’s been photo ID, photo ID, photo ID, and now all of a sudden the brakes have been thrown on. It’s confusing for people,” she said.

The frustrations for Chris Sears began on Friday, when he turned up to cast an early ballot at Cinco Ranch Public Library, a brick building abutting a duck pond in the suburbs west of Houston. Sears, a 43-year-old Texan who works in real estate, had voted at the library in the 2016 election, after moving to the area from adjoining Harris County a year earlier. Now, at the library, poll workers couldn’t find him in their rolls. His only recourse, they told him, was to drive the half hour or so to the Fort Bend County election office. Sears, realizing he wouldn’t make it there and back before early voting closed, decided to go first thing Tuesday morning.

After he explained his situation and presented his driver’s license, which had a local address, the clerk at the election office had a terse message for him. She slid a fresh voter registration application across the counter and told him: “Fill this out, and you’ll be eligible to vote in the next election.” Sears told the clerk he hadn’t moved, and that he’d voted in the last election.

The clerk was unmoved. “What you can do,” the clerk repeated, pointing at the registration form, “is fill this out, and vote in the next election.”

Sears wasn’t alone. As he went back and forth with the clerk, three other men who, like Sears, had moved recently from other Texas counties, came in with near-identical complaints. The clerk gave them the same response she had given Sears. County officials told ProPublica they all should have been offered provisional ballots — not sent across town or told to register again.

Ultimately, Sears would cast a provisional ballot, but he didn’t discover this option until he’d done hours of research to try and hunt down the cause of his problems.

“I finally got to vote,” he said. “But that was after driving across two counties and spending five or six hours of my time trying to determine whether there was a way I could do it.”

Some administrative problems were a bit more bizarre — a polling place in Chandler, Arizona, was foreclosed upon overnight. Voter Joann Swain arrived at the Golf Academy of America, which housed the poll, to find TV news crews and a crowd of people in the parking lot of the type of faux Spanish Mission Revival shopping centers that fleck the desert around Phoenix. Voting booths were arrayed along the sidewalk.

A sign affixed to the building’s locked front door indicated that the landlord had foreclosed on the Golf Academy for failing to pay rent. While poll workers had set up the voting booths the night before, that didn’t appear to matter to the landlord. The sign read: “UNAUTHORIZED ENTRY UPON THESE PREMISES OR THE REMOVAL OF PROPERTY HEREFROM MAY RESULT IN CRIMINAL AND/OR CIVIL PROSECUTION.”

The timing struck Swain as suspect. “Were they trying to make it more difficult for people to vote?” she asked Wednesday. Election officials had provided no answers. “It’s just fishy.”

Swain, who is 47, waited in line for two hours as poll workers promised the machines necessary for voters to print and cast their ballot were on their way from Phoenix. She didn’t want to cast a provisional ballot, for fear it wouldn’t be counted. One man in line who took poll workers up on an alternative to waiting — voting at Chandler City Hall — returned not long after he left. With polling site difficulties cropping up throughout the Phoenix area, he hadn’t been able to vote there either.

To the puzzlement of voters waiting in line, Maricopa County Recorder Adrian Fontes tweeted that the Golf Academy polling place was open. “No it’s not. I’m here,” an Arizonan named Gary Taylor shot back.

Other voters reacted to the situation more volubly. “I got things to do. I can’t stand around all day waiting because these guys can’t do their job,” a voter named Thomas Wood told reporters. “It’s ridiculous. It’s absolutely ridiculous.”

Swain ultimately left at 8:30 a.m. By the time she returned, later in the day, poll workers had set up the voting machines delivered from Phoenix in another storefront in the shopping center. The original machines remained locked in the Golf Academy, she said.

Electioneering

Back East, reports of potentially improper political messages at polling sites had begun to crop up, and the response from election officials highlighted the at times flimsy nature of electioneering laws. On Tuesday morning, a handwritten sign appeared on the door of a polling station near downtown Pittsburgh, which read “Vote Straight Democrat.” County election officials were alerted to the sign in the early afternoon, but by then the sign had been removed, Amie Downs, an Allegheny County spokeswoman, said in a statement.

An official in the county election office, who declined to give her name, blamed the sign on a member of the local Democratic Party committee. “He said he does that every year but never had problems till this year,” she said. Pennsylvania law prohibits electioneering within 10 feet of a polling place, and Downs said it wasn’t clear whether the sign violated the law.

Down the coast, in New Port Richey — a politically mixed cluster of strip malls northwest of Tampa, Florida — Pastor Al Carlisle triggered upward of 75 complaints to Pasco County election officials after he put up a handwritten sign reading “Don’t Vote for Democrats on Tuesday and Sing ‘Oh How I Love Jesus’ on Sunday” outside his church. That wouldn’t be a problem, except that on Election Day, his church doubles as a polling place. Carlisle remained unrepentant. He continued Wednesday to trumpet the sign on his Facebook page, mixed among posts conflating religious faith with support for President Donald Trump.

Local election officials, however, stopped at mild censure. Pasco County election chief Brian Corley told the Tampa Bay Times the sign was “not appropriate” but legal, since Carlisle had placed it only just more than 100 feet away from where voters were casting their ballot.

Later in the day, some voters complained about large posters opposing abortion — an example: “God Doesn’t Make Mistakes, Choose Life” — plastered on the walls of a church gymnasium in Holts Summit, Missouri, used as a polling place. Despite the political implications, election officials told local radio station KBIA that the posters were legal because there were no abortion-related issues on the ballot.

Behind the scenes, officials nationwide were addressing gaps in website reliability and security. In Kentucky, a handful of county websites that provided information to voters flickered offline for parts of the day. State officials said the issue was likely a technical problem not caused by a malicious attack.

But several states meanwhile alerted U.S. election-security officials to efforts by hackers scanning their computer systems for software vulnerabilities. Days before the election, a county clerk’s office said its email account was compromised and its messages forwarded to a private Gmail address, according to a person familiar with the matter who was not authorized to discuss it publicly.

As polls closed Tuesday evening, back in New York, the crowds and ballot scanner failures remained. At one school in Brooklyn that had seen long lines in the morning, the wait to vote at 7 p.m. was no better — still upward of two hours, Emily Chen told ProPublica. By the end of the day, the New York City Council speaker had called for the elections director’s resignation, and the mayor had denounced the technical snags as “absolutely unacceptable.”

Down the coast, in Broward County, Florida, just north of Miami, election officials were struggling with a technical failure of a different sort. Seven precincts were unable to transmit vote tallies electronically. This time, it would force election officials to internalize what voters had suffered throughout much of the day. Around 11 p.m., they walked out into the balmy South Florida night, got into their cars and drove the voter files to the county election office.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened that it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators identify potential terrorism content -- prioritizing posts for review by score, with automatic removal reserved for cases where the tool is very confident -- and the news item also mentioned an appeals process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leaned heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's roughly 1.55 billion fake accounts (800 million plus 754 million) removed in two quarters, created by a variety of bad actors. Hmmm... it sounds good, but it makes one wonder about the digital arms race underway. If bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, that's not good.

Meanwhile, on November 11th CNet reported on why Facebook had ousted Oculus founder Palmer Luckey:

"... a $10,000 to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook would provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries that value citizens' privacy will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th, The New York Times published "Delay, Deny, and Deflect," a detailed, comprehensive investigative report based upon interviews with at least 50 people:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted Facebook's long-standing focus on revenue growth and its lack of focus on identifying and responding to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and politicians in both parties. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described Facebook's history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that having invested more equals doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported on 13 historical national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company that wants its social networking service to be a source of reliable, unbiased news. This news is not good for a company that wants its users to accept that it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.