
13 posts from September 2019

The New Target That Enables Ransomware Hackers to Paralyze Dozens of Towns and Businesses at Once

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which discusses trends in cyberattacks and data breaches. It is reprinted with permission.]

By Renee Dudley, ProPublica

On July 3, employees at Arbor Dental in Longview, Washington, noticed glitches in their computers and couldn’t view X-rays. Arbor was one of dozens of dental clinics in Oregon and Washington stymied by a ransomware attack that disrupted their business and blocked access to patients’ records.

But the hackers didn’t target the clinics directly. Instead, they infiltrated them by exploiting vulnerable cybersecurity at Portland-based PM Consultants Inc., which handled the dentists’ software updates, firewalls and data backups. Arbor’s frantic calls to PM went to voicemail, said Whitney Joy, the clinic’s office coordinator.

“The second it happened, they ghosted everybody,” she said. “They didn’t give us a heads up.”

A week later, PM sent an email to clients. “Due to the size and scale of the attack, we are not optimistic about the chances for a full or timely recovery,” it wrote. “At this time we must recommend you seek outside technical assistance with the recovery of your data.”

On July 22, PM notified clients in an email that it was shutting down, “in part due to this devastating event.” The contact phone number listed on PM's website is disconnected, and the couple that managed the firm did not respond to messages left on their cellphones.

The attack on the dental clinics illustrates a new and worrisome frontier in ransomware — the targeting of managed service providers, or MSPs, to which local governments, medical clinics, and other small- and medium-sized businesses outsource their IT needs. While many MSPs offer reliable support and data storage, others have proven inexperienced or understaffed, unable to defend their own computer systems or help clients salvage files. As a result, cybercriminals profit by infiltrating dozens of businesses or public agencies with a single attack, while the beleaguered MSPs and their incapacitated clients squabble over who should pay the ransom or recovery costs.

Cost savings are the chief appeal of MSPs. It’s often cheaper and more convenient for towns and small businesses with limited technical needs to rely on an MSP rather than hire full-time IT employees. But those benefits are sometimes illusory. This year, attacks on MSPs have paralyzed thousands of small businesses and public agencies. Huntress Labs, a Maryland-based cybersecurity and software firm, has worked with about three dozen MSPs struck by ransomware this year, its executives said. In one incident, 4,200 computers were infected by ransomware through a single MSP.

Last month, hackers infiltrated MSPs in Texas and Wisconsin. An attack on TSM Consulting Services Inc. of Rockwall, Texas, crippled 22 cities and towns, while one on PerCSoft of West Allis, Wisconsin, deprived 400 dental practices around the country of access to electronic files, the Wisconsin Dental Association said in a letter to members. PerCSoft, which hackers penetrated through its cloud remote management software, said in a letter to victims that it had obtained a key to decrypt the ransomware, indicating that it likely paid a ransom. PerCSoft did not return a message seeking comment.

TSM referred questions about the Texas attack to the state’s Department of Information Resources, which referred questions to the FBI, which confirmed that the ransomware struck the towns through TSM. One of the 22 Texas municipalities has been hit by ransomware twice in the past year while using TSM’s services.

FBI spokeswoman Melinda Urbina acknowledged that MSPs are profitable targets for hackers. “Those are the targets they’re going after because they know that those individuals would be more apt to pay because they want to get those services back online for the public,” she said.

Beyond the individual victims, the MSPs’ shortcomings have a larger consequence. They foster the spread of ransomware, one of the world’s most common cybercrimes. By failing to provide clients with reliable backups or to maintain their own cybersecurity, and in some cases paying ransoms when alternatives are available, they may in effect reward criminals and give them an incentive to strike again. This year, ProPublica has reported on other industries in the ransomware economy, such as data recovery and insurance, which also have enriched ransomware hackers.

To get inside MSPs, attackers have capitalized on security lapses such as weak passwords and failure to use two-factor authentication. In Wisconsin and elsewhere, they also have exploited vulnerabilities in “remote monitoring and management” software that the firms use to install computer updates and handle clients’ other IT needs. Even when patches for such vulnerabilities are available, MSPs sometimes haven’t installed them.

The remote management tools are like “golden keys to immediately distribute ransomware,” said Huntress CEO Kyle Hanslovan. “Just like how you’d want to push a patch at lightning speed, it turns out you can push out ransomware at lightning speed as well.”

Alternatively, a hacker may spread the ransomware manually, infecting computers one at a time using software that normally allows MSP technicians to remotely view and click around on a client’s screen to resolve an IT problem, Hanslovan said. One Huntress client had the “record session” feature of this software automatically enabled. By watching those recordings following the attack, Huntress was able to see exactly how the hacker installed and tracked ransomware on the machines.

In some cases, Hanslovan said, MSPs have failed to save and store backup files properly for clients who paid specifically for that service so that systems would be restored in the event of an attack. Instead, the MSPs may have relied on low-cost and insufficient backup solutions, he said. Last month, he said, Huntress worked with an MSP whose clients’ computers and backup files were encrypted in a ransomware attack. The only way to restore the files was to pay the ransom, Hanslovan said.

Even when backups are available, MSPs sometimes prefer to pay the ransom. Hackers have leverage in negotiations because the MSP — usually a small business itself — can’t handle the volume of work for dozens of affected clients who simultaneously demand attention, said Chris Bisnett, chief architect at Huntress.

“It increases the likelihood that someone will pay rather than just try to fix it themselves,” Bisnett said. “It’s one thing if I have 50 computers that are ransomed and encrypted and I can fix them. There’s no way I have time to go and do thousands of computers all at the same time when I’ve got all these customers calling and saying: ‘Hey, we can’t do any business, we’re losing money. We need to be back right now.’ So the likelihood of the MSP just saying, ‘Oh I can’t deal with this, let me just pay,’ goes up.”

Because there are so many victims, the hacker can make a larger ransom demand with greater confidence that it will be paid, Hanslovan said. Attacking the MSP “gives you hundreds or even thousands more computers for the same cost of infection,” he said. The “support cost of negotiating the ransom is low” since the attacker typically corresponds with the MSP rather than its individual clients.

Before this year’s ransomware spree, MSPs were susceptible to other kinds of cybercrime. Last October, the U.S. Department of Homeland Security warned in an alert about attacks on MSPs for “purposes of cyber espionage and intellectual property theft.” It added that “MSPs generally have direct and unfettered access to their customers’ networks,” and that “a compromise in one part of an MSP’s network can spread globally, affecting other customers and introducing risk.”

The first spate of ransomware attacks on MSPs, early this year, deployed a strain called GandCrab. In May, the hackers behind GandCrab announced their retirement in an online hacking forum. Soon after, another strain known as Sodinokibi sprang up and began targeting MSPs.

Sodinokibi ransom amounts are “scaled to the size of the organization and the perceived capacity to pay,” according to Connecticut-based Coveware, which negotiates ransoms for clients hit by ransomware. Sodinokibi will not run on systems that use languages including Russian, Romanian and Ukrainian, according to security firm Cylance, possibly because those are native languages for hackers who don’t want to draw the attention of local law enforcement.

Sodinokibi was the strain used in the attack on TSM Consulting Services that encrypted the computers of 22 Texas municipalities, leaving them unable to fulfill tasks such as accepting online payments for water bills, providing copies of birth and death certificates and responding to emails. Most of the towns have not been publicly identified. More than half have returned to normal operations, the Texas Department of Information Resources said in an update posted on its website. The hackers sought millions of dollars. The department is "unaware of any ransom being paid in this event," according to the update.

TSM began operations in 1997, and it provides equipment and support to more than 300 law enforcement agencies in Texas, according to its website. It is unclear why the 22 municipalities, and not TSM’s other clients, were affected by the August attack.

One of the 22 Texas municipalities hit last month was Kaufman, a city about 30 miles southeast of Dallas. An attack last November on Kaufman, which forced its police department to cease normal operations, was mentioned in a ProPublica article about two data recovery firms that purported to use proprietary technology to disable ransomware but in reality often just paid the attackers. TSM had enlisted one of the firms, Florida-based MonsterCloud, to help Kaufman recover from the November intrusion.

MonsterCloud waived its fee in exchange for a video testimonial featuring the Kaufman police chief, the president of TSM and the TSM technician who worked with Kaufman. In the testimonial, TSM technician Robby Pleasant said that the attackers had “reset everyone’s password, including the administrator,” and that the data “was locked up and not functioning.” Pleasant said in the video that MonsterCloud was able to “recover all the data” and “saved the day.”

“They can come in and recover even if someone does find a hole in our armor,” Pleasant said in the video.

Last month, attackers again found a hole in TSM’s armor. Using a third-party software vendor, rather than TSM, Kaufman had strengthened its backup system since the first attack, so it was able to restore much of the lost data, City Manager Michael Slye said. Kaufman’s computer systems were down for 24 hours, and the city handled municipal business such as writing tickets and taking payments on paper during that time, Slye said.

But backup safeguards were less effective for Kaufman’s police department, which uses a different type of software than other city offices, Slye said. The department’s dashcam video storage lost months of footage, and it still isn’t working, he said.

“It was not a fun experience to get this twice,” he said.

A TSM employee who declined to be named said the November attack may have been caused by “someone clicking on a bad email. We don’t have definitive information on that. We went into recovery mode immediately.”

PM Consultants, the Oregon provider of IT services to dental clinics, was run by a husband and wife, Charles Gosta Miller and Ava Piekarski, out of their home, according to state records. The firm didn’t employ enough technicians, said Cameron Willis, general manager of Dentech LLC in Eugene, Oregon, which took on many of PM’s former clients. Some former PM clients have complained to Willis that it was unresponsive to their requests for help, he said.

“A lot of dental office facilities don’t want to spend the money on IT infrastructure the way they should,” and they lack the technical know-how to vet providers, Willis said. They “don’t know any better. They don’t have the time to research. If you have someone who does provide some service, it’s very, very easy to see how some of the fly-by-nights would attract such a large clientele. ... When one office finds something that works, they scream it to the hills.”

In the July 22 email announcing its closure, PM said it had been “inundated with calls” on the morning of the ransomware attack, “and we immediately started investigating and trying to restore data. Throughout the next several days and into the weekend, we worked around the clock on recovery efforts. ... However, it was soon apparent the number of PC’s that needed restoration was too large for our small team to complete in any reasonable time frame.” The company was also “receiving hundreds of calls, emails and texts to which we were unable to respond.”

PM said that it had retained counsel to “assist with recovery of any available insurance, payment and billing proceeds,” and that it would be “sending out final invoices in the next two weeks.” Its formal dissolution, it continued, “will include an option to submit a claim” against the company.

Austin Covington, director of Lower Columbia Oral Health, a Longview, Washington, clinic affected by the attack, said it plans to take legal action against PM and declined to comment further. Other victims have not been publicly identified.

Some dentists “did not lose any data” because they had good backup files, Willis said. “Some clients lost some. Some lost a lot.” He doesn’t know whether clients paid ransoms, he said.

Dentech takes a different approach than PM did, Willis said. To prevent ransomware and other breaches, even its own staff has limited access to the remote management software favored by hackers, he said. It has 14 technicians, who often handle services such as software updates in person, he said. Dentech requires clients to use best practices, Willis said. If they decline, the firm requires them to sign a waiver releasing Dentech of liability in case of ransomware or other data loss.

Without such explicit terms, it’s often unclear whether the MSP or its clients are responsible for paying ransoms or recovery costs associated with an attack. Chris Loehr, executive vice president of Texas-based Solis Security, which helps victims negotiate ransom payments, was called in when GandCrab ransomware struck an MSP and encrypted some of its clients’ backup files several months ago. The MSP paid the ransom only for those that used its data backup service, which had failed, Loehr said. Clients who did not buy the backup service had to decide themselves whether to pay the ransom.

This summer, in a separate incident, Loehr negotiated with hackers on behalf of a New York-based MSP that was hit by Sodinokibi ransomware. The MSP didn’t want to pay the total ransom of about $2 million in bitcoin to unlock the files of all its clients, who were primarily architectural and engineering firms. Instead, each of the 200 affected clients was left to decide whether to pay about $10,000 in bitcoin. The MSP’s owner refused for legal reasons; he was worried that, if he was sued over the attack, a payment might be construed as an admission of fault, Loehr said.

The preponderance of low-quality MSPs has fostered the current ransomware onslaught, Loehr said. He noted that little experience or funding is needed to open an MSP; the barriers to entry are few.

“The startup costs are low,” Loehr said. “It doesn’t take much. The way the MSP world works, it’s not like you have to go out and buy $1 million of software. You can operate out of your house. These guys charge their clients up front. There is little cash flow to get this stuff off the ground.”

“Every IT guy thinks he can do this,” Loehr said. “‘Hey, I’m a technology guy.’

“No.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.

 


Survey Asked Americans Which They Consider Safer: Self-Driving Ride-Shares Or Solo Ride-Shares With Human Drivers

Many consumers use ride-sharing services, such as Lyft and Uber. We all have heard about self-driving cars. A polling firm asked consumers a very relevant question: "Which ride is trusted more? Would you rather take a rideshare alone or a self-driving car?" The results may surprise you.

The questions are relevant given news reports about sexual assaults and kidnappings by ride-sharing drivers and by imposters posing as drivers. A pedestrian death involving a self-driving ride-sharing car highlighted the ethical issues about whom machines should save when fatal crashes can't be avoided. Developers have admitted that self-driving cars can be hacked by bad actors, just like other computers and mobile devices. And new car buyers have stated clear preferences when considering self-driving (a/k/a autonomous) vehicles versus standard vehicles with self-driving modes.

Using Google Consumer Surveys, The Zebra surveyed 2,000 persons in the United States during August 2019 and found:

"53 percent of people felt safer taking a self-driving car than driver-operated rideshare alone; Baby Boomers (age 55-plus) were the only age group to prefer a solo Uber ride over a driverless car; Gen Z (ages 18–24) were most open to driverless rideshares: 40 percent said they were willing to hail a ride from one."

Founded seven years ago, The Zebra describes itself as "the nation's leading insurance comparison site." The survey also found:

"... Baby Boomers were the only group to trust solo ridesharing more than they would a ride in a self-driving car... despite women being subjected to higher rates of sexual violence, the poll found women were only slightly more likely than men to choose a self-driving car over ridesharing alone (53 percent of women compared to 52 percent of men).

It seems safe to assume: trust it or not, the tech is coming. Quickly. What are your opinions?


Global Study: New Car Buyers Still Prefer Standard Cars Over Self-Driving Cars

A recent worldwide study found that new car buyers continue to enjoy and prefer the experience of driving. When asked whether they would consider fully self-driving cars (a/k/a autonomous vehicles) or vehicles with autonomous modes, drivers stated their clear preferences. Key findings by Ipsos Mobility:

"1) Roughly half of new car buyers have some familiarity with autonomous mode; Familiarity highest in China and Japan; 2) On a global basis, 36% would consider a vehicle with autonomous mode however, only 12% would Definitely Consider; 3) If given the choice, only 6% of new car buyers would purchase a fully autonomous vehicle while the majority (57%) would purchase a vehicle with an autonomous mode and 37% would just purchase a standard vehicle..."

To summarize: it's the driving experience that matters.

Ipsos surveyed 20,000 drivers across ten countries: Brazil, China, France, Germany, Italy, Japan, Russia, Spain, the United Kingdom, and the United States. The 2019 "Global Mobility Navigator Syndicated Study" includes three modules: a) Autonomous and Advanced Features, b) Electric Vehicles (Needs & Intentions), and c) Shared Mobility (Car Sharing & Ride Hailing). The above findings are from the first module.

Secondary findings about autonomous features in cars:

"The auto industry is also battling an awareness issue with the new technology. Globally, only 15% said they knew a fair amount about Autonomous mode... while there are enjoyment factors to consider in the autonomous future, there are also safety concerns for consumers. The study revealed one is pedestrian safety as well as other vehicles, while the driver’s own safety is a slightly lower concern. Meanwhile, if a driver did use the autonomous mode, 44% state they would still remain focused on the road. This implies a tremendous lack of trust in the system’s ability to safely self-drive. Another big worry for consumers is the security of the vehicle’s data. A strong concern was the possibility of someone hacking into their self-driving system and causing an accident."

The report listed 16 features for "connected cars," including Predicting The Traffic, Advanced Drive Assist Systems, Search For Nearby Parking Lots, Automated Parking, Smart Refueling/Recharging, and more. Additional details about the report and features are available here.


Millions of Americans’ Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek.

[Editor's note: today's guest blog post, by reporters at ProPublica, explores data security issues within the healthcare industry and its outsourcing vendors. It is reprinted with permission.]

By Jack Gillum, Jeff Kao and Jeff Larson - ProPublica

Medical images and health data belonging to millions of Americans, including X-rays, MRIs and CT scans, are sitting unprotected on the internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the U.S. and millions more around the world. In some cases, a snoop could use free software programs — or just a typical web browser — to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers — computers that are used to store and retrieve medical data — in the U.S. that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers and mobile X-ray services.

The insecure servers we uncovered add to a growing list of medical records systems that have been compromised in recent years. Unlike some of the more infamous recent security breaches, in which hackers circumvented a company’s cyber defenses, these records were often stored on servers that lacked the security precautions that long ago became standard for businesses and government agencies.

"It’s not even hacking. It’s walking into an open door," said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them of what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of U.S. company MobilexUSA displayed the names of more than a million patients — all by typing in a simple data query. Their dates of birth, doctors and procedures were also included.

Alerted by ProPublica, MobilexUSA tightened its security earlier this month. The company takes mobile X-rays and provides imaging services to nursing homes, rehabilitation hospitals, hospice agencies and prisons. "We promptly mitigated the potential vulnerabilities identified by ProPublica and immediately began an ongoing, thorough investigation," MobilexUSA’s parent company said in a statement.

Another imaging system, tied to a physician in Los Angeles, allowed anyone on the internet to see his patients’ echocardiograms. (The doctor did not respond to inquiries from ProPublica.) All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates and, in some cases, Social Security numbers.

Experts say it’s hard to pinpoint who’s to blame for the failure to protect the privacy of medical images. Under U.S. law, health care providers and their business associates are legally accountable for securing the privacy of patient data. Several experts said such exposure of patient data could violate the Health Insurance Portability and Accountability Act, or HIPAA, the 1996 law that requires health care providers to keep Americans’ health data confidential and secure.

Although ProPublica found no evidence that patient data was copied from these systems and published elsewhere, the consequences of unauthorized access to such information could be devastating. "Medical records are one of the most important areas for privacy because they’re so sensitive. Medical knowledge can be used against you in malicious ways: to shame people, to blackmail people," said Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, a digital-rights group.

"This is so utterly irresponsible," he said.

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customer’s computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. "Suddenly, medical security has become a do-it-yourself project," Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the U.S.

Schrader found five servers in Germany and 187 in the U.S. that made patients’ records available without a password. ProPublica and Bayerischer Rundfunk also scanned Internet Protocol addresses and identified, when possible, which medical provider they belonged to.

ProPublica independently determined how many patients could be affected in America, and found some servers ran outdated operating systems with known security vulnerabilities. Schrader said that data from more than 13.7 million medical tests in the U.S. were available online, including more than 400,000 in which X-rays and other images could be downloaded.

The privacy problem traces back to the medical profession’s shift from analog to digital technology. Long gone are the days when film X-rays were displayed on fluorescent light boards. Today, imaging studies can be instantly uploaded to servers and viewed over the internet by doctors in their offices.

In the early days of this technology, as with much of the internet, little thought was given to security. The passage of HIPAA required patient information to be protected from unauthorized access. Three years later, the medical imaging industry published its first security standards.

Our reporting indicated that large hospital chains and academic medical centers did put security protections in place. Most of the cases of unprotected data we found involved independent radiologists, medical imaging centers or archiving services.

One German patient, Katharina Gaspari, got an MRI three years ago and said she normally trusts her doctors. But after Bayerischer Rundfunk showed Gaspari her images available online, she said: "Now, I am not sure if I still can." The German system that stored her records was locked down last week.

We found that some systems used to archive medical images also lacked security precautions. Denver-based Offsite Image left open the names and other details of more than 340,000 human and veterinary records, including those of a large cat named "Marshmellow," ProPublica found. An Offsite Image executive told ProPublica the company charges clients $50 for access to the site and then $1 per study. "Your data is safe and secure with us," Offsite Image’s website says.

The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.

"We were just never even aware that there was a possibility that could even happen," Nelms said.

In 1985, an industry group that included radiologists and makers of imaging equipment created a standard for medical imaging software. The standard, which is now called DICOM, spelled out how medical imaging devices talk to each other and share information.

We shared our findings with officials from the Medical Imaging & Technology Alliance, the group that oversees the standard. They acknowledged that there were hundreds of servers with an open connection on the internet, but suggested the blame lay with the people who were running them.

"Even though it is a comparatively small number," the organization said in a statement, "it may be possible that some of those systems may contain patient records. Those likely represent bad configuration choices on the part of those operating those systems."

Meeting minutes from 2017 show that a working group on security learned of Pianykh’s findings and suggested meeting with him to discuss them further. That “action item” was listed for several months, but Pianykh said he never was contacted. The medical imaging alliance told ProPublica last week that the group did not meet with Pianykh because the concerns that they had were sufficiently addressed in his article. They said the committee concluded its security standards were not flawed.

Pianykh said that misses the point. It’s not a lack of standards; it’s that medical device makers don’t follow them. “Medical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice,” Pianykh wrote in 2016.

ProPublica’s latest findings follow several other major breaches. In 2015, U.S. health insurer Anthem Inc. revealed that private data belonging to more than 78 million people was exposed in a hack. In the last two years, U.S. officials have reported that more than 40 million people have had their medical data compromised, according to an analysis of records from the U.S. Department of Health and Human Services.

Joy Pritts, a former HHS privacy official, said the government isn’t tough enough in policing patient privacy breaches. She cited an April announcement from HHS that lowered the maximum annual fine, from $1.5 million to $250,000, for what’s known as “corrected willful neglect” — the result of conscious failures or reckless indifference that a company tries to fix. She said that large firms would not only consider those fines as just the cost of doing business, but that they could also negotiate with the government to get them reduced. A ProPublica examination in 2015 found few consequences for repeat HIPAA offenders.

A spokeswoman for HHS’ Office for Civil Rights, which enforces HIPAA violations, said it wouldn’t comment on open or potential investigations.

"What we typically see in the health care industry is that there is Band-Aid upon Band-Aid applied" to legacy computer systems, said Singh, the cybersecurity expert. She said it’s a “shared responsibility” among manufacturers, standards makers and hospitals to ensure computer servers are secured.

"It’s 2019," she said. "There’s no reason for this."

How Do I Know if My Medical Imaging Data is Secure?

If you are a patient:

If you have had a medical imaging scan (X-ray, CT scan, MRI, ultrasound, etc.), ask the health care provider that did the scan — or your doctor — whether access to your images requires a login and password. Ask your doctor whether their office, or the medical imaging provider to which they refer patients, conducts a regular security assessment as required by HIPAA.

If you are a medical imaging provider or doctor’s office:

Researchers have found that picture archiving and communication systems (PACS) servers implementing the DICOM standard may be at risk if they are connected directly to the internet without a VPN or firewall, or if access to them does not require a secure password. You or your IT staff should make sure that your PACS server cannot be accessed via the internet without a VPN connection and password. If you know the IP address of your PACS server but are not sure whether it is (or has been) accessible via the internet, please reach out to us at [email protected].
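As a rough first check of the advice above, the sketch below simply tests whether a TCP connection to a host and port succeeds. It is a minimal, hypothetical example (the hostname is made up, and DICOM commonly uses port 104 or 11112): a reachable port is only a warning sign, and a real assessment must be run from outside your network and should also check DICOM-level authentication.

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A PACS server whose DICOM port (commonly 104 or 11112) accepts
    connections from outside your network may be exposed to the internet.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage: run this from an *outside* network against your
# PACS server's address; running it inside the LAN only tests local access.
# print(is_port_reachable("pacs.example-clinic.org", 104))
```

Note that this only detects an open port; it cannot tell you whether the server behind it requires credentials, which is why the HIPAA-mandated security assessment still matters.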

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Study: Anonymized Data Cannot Be Totally Anonymous. And 'Homomorphic Encryption' Explained

Many online users have encountered situations where companies collect data with the promise that it is safe because it has been anonymized: all personally identifiable data elements have been removed. How safe is that, really? A recent study reinforced earlier findings that it isn't as safe as promised. Anonymized data can be de-anonymized, or re-identified, and linked back to individual persons.

The Guardian UK reported:

"... data can be deanonymised in a number of ways. In 2008, an anonymised Netflix data set of film ratings was deanonymised by comparing the ratings with public scores on the IMDb film website in 2014; the home addresses of New York taxi drivers were uncovered from an anonymous data set of individual trips in the city; and an attempt by Australia’s health department to offer anonymous medical billing data could be reidentified by cross-referencing “mundane facts” such as the year of birth for older mothers and their children, or for mothers with many children. Now researchers from Belgium’s Université catholique de Louvain (UCLouvain) and Imperial College London have built a model to estimate how easy it would be to deanonymise any arbitrary dataset. A dataset with 15 demographic attributes, for instance, “would render 99.98% of people in Massachusetts unique”. And for smaller populations, it gets easier..."

According to the U.S. Census Bureau, the population of Massachusetts was about 6.9 million on July 1, 2018. How did this de-anonymization problem happen? Scientific American explained:

"Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model."

Data brokers, including credit-reporting agencies, have collected a massive number of demographic data attributes on nearly every person. According to this 2018 report, Acxiom has compiled about 5,000 data elements for each of 700 million persons worldwide.

It's reasonable to assume that credit-reporting agencies and other data brokers have similar capabilities. So, data brokers' massive databases can make it relatively easy to re-identify data that had supposedly been anonymized. This means consumers don't get the privacy they were promised.
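To make the mechanics concrete, here is a toy sketch, using entirely fabricated records, of how an "anonymized" dataset can be re-identified by joining it against a broker-style dataset on quasi-identifiers such as zip code, birth year, and sex. Any record whose combination of attributes is unique in the broker data is linked back to a name.

```python
# Fabricated example: the "anonymized" medical dataset has names removed
# but keeps quasi-identifiers; a broker dataset shares those attributes
# and also has names, enabling a join that re-identifies unique records.

anonymized = [
    {"zip": "02139", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1990, "sex": "F", "diagnosis": "asthma"},
]

broker = [
    {"name": "Alex Smith", "zip": "02139", "birth_year": 1985, "sex": "M"},
    {"name": "Jo Baker", "zip": "02139", "birth_year": 1990, "sex": "F"},
]

KEYS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, broker_rows, keys=KEYS):
    # Index broker records by their quasi-identifier tuple.
    index = {}
    for row in broker_rows:
        index.setdefault(tuple(row[k] for k in keys), []).append(row["name"])
    matches = {}
    for row in anon_rows:
        names = index.get(tuple(row[k] for k in keys), [])
        if len(names) == 1:  # a unique combination means re-identification
            matches[names[0]] = row["diagnosis"]
    return matches

print(reidentify(anonymized, broker))
# -> {'Alex Smith': 'diabetes', 'Jo Baker': 'asthma'}
```

The study's point is that with 15 such attributes, nearly every combination is unique, so this join succeeds for almost everyone in the dataset.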

What's the solution? Researchers suggest that data brokers must develop new anonymization methods, and rigorously test them to ensure anonymization truly works. And data brokers must be held to higher data security standards.

Any legislation serious about protecting consumers' privacy must address this, too. What do you think?


The Extortion Economy: How Insurance Companies Are Fueling a Rise in Ransomware Attacks

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which discusses the intersection of cyberattacks, ransomware, and the insurance industry. It is reprinted with permission.]

By Renee Dudley, ProPublica

On June 24, the mayor and council of Lake City, Florida, gathered in an emergency session to decide how to resolve a ransomware attack that had locked the city’s computer files for the preceding fortnight. Following the Pledge of Allegiance, Mayor Stephen Witt led an invocation. “Our heavenly father,” Witt said, “we ask for your guidance today, that we do what’s best for our city and our community.”

Witt and the council members also sought guidance from City Manager Joseph Helfenberger. He recommended that the city allow its cyber insurer, Beazley, an underwriter at Lloyd’s of London, to pay the ransom of 42 bitcoin, then worth about $460,000. Lake City, which was covered for ransomware under its cyber-insurance policy, would only be responsible for a $10,000 deductible. In exchange for the ransom, the hacker would provide a key to unlock the files.

“If this process works, it would save the city substantially in both time and money,” Helfenberger told them.

Without asking questions or deliberating, the mayor and the council unanimously approved paying the ransom. The six-figure payment, one of several that U.S. cities have handed over to hackers in recent months to retrieve files, made national headlines.

Left unmentioned in Helfenberger’s briefing was that the city’s IT staff, together with an outside vendor, had been pursuing an alternative approach. Since the attack, they had been attempting to recover backup files that were deleted during the incident. On Beazley’s recommendation, the city chose to pay the ransom because the cost of a prolonged recovery from backups would have exceeded its $1 million coverage limit, and because it wanted to resume normal services as quickly as possible.

“Our insurance company made [the decision] for us,” city spokesman Michael Lee, a sergeant in the Lake City Police Department, said. “At the end of the day, it really boils down to a business decision on the insurance side of things: them looking at how much is it going to cost to fix it ourselves and how much is it going to cost to pay the ransom.”

The mayor, Witt, said in an interview that he was aware of the efforts to recover backup files but preferred to have the insurer pay the ransom because it was less expensive for the city. “We pay a $10,000 deductible, and we get back to business, hopefully,” he said. “Or we go, ‘No, we’re not going to do that,’ then we spend money we don’t have to just get back up and running. And so to me, it wasn’t a pleasant decision, but it was the only decision.”

Ransomware is proliferating across America, disabling computer systems of corporations, city governments, schools and police departments. This month, attackers seeking millions of dollars encrypted the files of 22 Texas municipalities. Overlooked in the ransomware spree is the role of an industry that is both fueling and benefiting from it: insurance. In recent years, cyber insurance sold by domestic and foreign companies has grown into an estimated $7 billion to $8 billion-a-year market in the U.S. alone, according to Fred Eslami, an associate director at AM Best, a credit rating agency that focuses on the insurance industry. While insurers do not release information about ransom payments, ProPublica has found that they often accommodate attackers’ demands, even when alternatives such as saved backup files may be available.

The FBI and security researchers say paying ransoms contributes to the profitability and spread of cybercrime and in some cases may ultimately be funding terrorist regimes. But for insurers, it makes financial sense, industry insiders said. It holds down claim costs by avoiding expenses such as covering lost revenue from snarled services and ongoing fees for consultants aiding in data recovery. And, by rewarding hackers, it encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.

“The onus isn’t on the insurance company to stop the criminal, that’s not their mission. Their objective is to help you get back to business. But it does beg the question, when you pay out to these criminals, what happens in the future?” said Loretta Worters, spokeswoman for the Insurance Information Institute, a nonprofit industry group based in New York. Attackers “see the deep pockets. You’ve got the insurance industry that’s going to pay out, this is great.”

A spokesperson for Lloyd’s, which underwrites about one-third of the global cyber-insurance market, said that coverage is designed to mitigate losses and protect against future attacks, and that victims decide whether to pay ransoms. “Coverage is likely to include, in the event of an attack, access to experts who will help repair the damage caused by any cyberattack and ensure any weaknesses in a company’s cyberprotection are eliminated,” the spokesperson said. “A decision whether to pay a ransom will fall to the company or individual that has been attacked.” Beazley declined comment.

Fabian Wosar, chief technology officer for anti-virus provider Emsisoft, said he recently consulted for one U.S. corporation that was attacked by ransomware. After it was determined that restoring files from backups would take weeks, the company’s insurer pressured it to pay the ransom, he said. The insurer wanted to avoid having to reimburse the victim for revenues lost as a result of service interruptions during recovery of backup files, as its coverage required, Wosar said. The company agreed to have the insurer pay the approximately $100,000 ransom. But the decryptor obtained from the attacker in return didn’t work properly and Wosar was called in to fix it, which he did. He declined to identify the client and the insurer, which also covered his services.

“Paying the ransom was a lot cheaper for the insurer,” he said. “Cyber insurance is what’s keeping ransomware alive today. It’s a perverted relationship. They will pay anything, as long as it is cheaper than the loss of revenue they have to cover otherwise.”

Worters, the industry spokeswoman, said ransom payments aren’t the only example of insurers saving money by enriching criminals. For instance, the companies may pay fraudulent claims — for example, from a policyholder who sets a car on fire to collect auto insurance — when it’s cheaper than pursuing criminal charges. “You don’t want to perpetuate people committing fraud,” she said. “But there are some times, quite honestly, when companies say: ’This fraud is not a ton of money. We are better off paying this.’ ... It’s much like the ransomware, where you’re paying all these experts and lawyers, and it becomes this huge thing.”

Insurers approve or recommend paying a ransom when doing so is likely to minimize costs by restoring operations quickly, regulators said. As in Lake City, recovering files from backups can be arduous and time-consuming, potentially leaving insurers on the hook for costs ranging from employee overtime to crisis management public relations efforts, they said.

“They’re going to look at their overall claim and dollar exposure and try to minimize their losses,” said Eric Nordman, a former director of the regulatory services division of the National Association of Insurance Commissioners, or NAIC, the organization of state insurance regulators. “If it’s more expeditious to pay the ransom and get the key to unlock it, then that’s what they’ll do.”

As insurance companies have approved six- and seven-figure ransom payments over the past year, criminals’ demands have climbed. The average ransom payment among clients of Coveware, a Connecticut firm that specializes in ransomware cases, is about $36,000, according to its quarterly report released in July, up sixfold from last October. Josh Zelonis, a principal analyst for the Massachusetts-based research company Forrester, said the increase in payments by cyber insurers has correlated with a resurgence in ransomware after it had started to fall out of favor in the criminal world about two years ago.

One cybersecurity company executive said his firm has been told by the FBI that hackers are specifically extorting American companies that they know have cyber insurance. After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware, Wosar said. Hackers could also identify insured targets from public filings; the Securities and Exchange Commission suggests that public companies consider reporting “insurance coverage relating to cybersecurity incidents.”

Even when the attackers don’t know that insurers are footing the bill, the repeated capitulations to their demands give them confidence to ask for ever-higher sums, said Thomas Hofmann, vice president of intelligence at Flashpoint, a cyber-risk intelligence firm that works with ransomware victims.

Ransom demands used to be “a lot less,” said Worters, the industry spokeswoman. But if hackers think they can get more, “they’re going to ask for more. So that’s what’s happening. ... That’s certainly a concern.”

In the past year, dozens of public entities in the U.S. have been paralyzed by ransomware. Many have paid the ransoms, either from their own funds or through insurance, but others have refused on the grounds that it’s immoral to reward criminals. Rather than pay a $76,000 ransom in May, the city of Baltimore — which did not have cyber insurance — sacrificed more than $5.3 million to date in recovery expenses, a spokesman for the mayor said this month. Similarly, Atlanta, which did have a cyber policy, spurned a $51,000 ransom demand last year and has spent about $8.5 million responding to the attack and recovering files, a spokesman said this month. Spurred by those and other cities, the U.S. Conference of Mayors adopted a resolution this summer not to pay ransoms.

Still, many public agencies are delighted to have their insurers cover ransoms, especially when the ransomware has also encrypted backup files. Johannesburg-Lewiston Area Schools, a school district in Michigan, faced that predicament after being attacked in October. Beazley, the insurer handling the claim, helped the district conduct a cost-benefit analysis, which found that paying a ransom was preferable to rebuilding the systems from scratch, said Superintendent Kathleen Xenakis-Makowski.

“They sat down with our technology director and said, ‘This is what’s affected, and this is what it would take to re-create,’” said Xenakis-Makowski, who has since spoken at conferences for school officials about the importance of having cyber insurance. She said the district did not discuss the ransom decision publicly at the time in part to avoid a prolonged debate over the ethics of paying. “There’s just certain things you have to do to make things work,” she said.

Ransomware is one of the most common cybercrimes in the world. Although it is often cast as a foreign problem, because hacks tend to originate from countries such as Russia and Iran, ProPublica has found that American industries have fostered its proliferation. We reported in May on two ransomware data recovery firms that purported to use their own technology to disable ransomware but in reality often just paid the attackers. One of the firms, Proven Data, of Elmsford, New York, tells victims on its website that insurance is likely to cover the cost of ransomware recovery.

Lloyd’s of London, the world’s largest specialty insurance market, said it pioneered the first cyber liability policy in 1999. Today, it offers cyber coverage through 74 syndicates — formed by one or more Lloyd’s members such as Beazley joining together — that provide capital and accept and spread risk. Eighty percent of the cyber insurance written at Lloyd’s is for entities based in the U.S. The Lloyd’s market is famous for insuring complex, high-risk and unusual exposures, such as climate-change consequences, Arctic explorers and Bruce Springsteen’s voice.

Many insurers were initially reluctant to cover cyber disasters, in part because of the lack of reliable actuarial data. When they protect customers against traditional risks such as fires, floods and auto accidents, they price policies based on authoritative information from national and industry sources. But, as Lloyd’s noted in a 2017 report, “there are no equivalent sources for cyber-risk,” and the data used to set premiums is collected from the internet. Such publicly available data is likely to underestimate the potential financial impact of ransomware for an insurer. According to a report by global consulting firm PwC, both insurers and victimized companies are reluctant to disclose breaches because of concerns over loss of competitive advantage or reputational damage.

Despite the uncertainty over pricing, dozens of carriers eventually followed Lloyd’s in embracing cyber coverage. Other lines of insurance are expected to shrink in the coming decades, said Nordman, the former regulator. Self-driving cars, for example, are expected to lead to significantly fewer car accidents and a corresponding drop in premiums, according to estimates. Insurers are seeking new areas of opportunity, and “cyber is one of the small number of lines that is actually growing,” Nordman said.

Driven partly by the spread of ransomware, the cyber insurance market has grown rapidly. Between 2015 and 2017, total U.S. cyber premiums written by insurers that reported to the NAIC doubled to an estimated $3.1 billion, according to the most recent data available.

Cyber policies have been more profitable for insurers than other lines of insurance. The loss ratio for U.S. cyber policies was about 35% in 2018, according to a report by Aon, a London-based professional services firm. In other words, for every dollar in premiums collected from policyholders, insurers paid out roughly 35 cents in claims. That compares to a loss ratio of about 62% across all property and casualty insurance, according to data compiled by the NAIC of insurers that report to them. Besides ransomware, cyber insurance frequently covers costs for claims related to data breaches, identity theft and electronic financial scams.

During the underwriting process, insurers typically inquire about a prospective policyholder’s cyber security, such as the strength of its firewall or the viability of its backup files, Nordman said. If they believe the organization’s defenses are inadequate, they might decline to write a policy or charge more for it, he said. North Dakota Insurance Commissioner Jon Godfread, chairman of the NAIC’s innovation and technology task force, said some insurers suggest prospective policyholders hire outside firms to conduct “cyber audits” as a “risk mitigation tool” aimed to prevent attacks — and claims — by strengthening security.

“Ultimately, you’re going to see that prevention of the ransomware attack is likely going to come from the insurance carrier side,” Godfread said. “If they can prevent it, they don’t have to pay out a claim, it’s better for everybody.”

Not all cyber insurance policies cover ransom payments. After a ransomware attack on Jackson County, Georgia, last March, the county billed insurance for credit monitoring services and an attorney but had to pay the ransom of about $400,000, County Manager Kevin Poe said. Other victims have struggled to get insurers to pay cyber-related claims. Food company Mondelez International and pharmaceutical company Merck sued insurers last year in state courts after the carriers refused to reimburse costs associated with damage from NotPetya malware. The insurers cited “hostile or warlike action” or “act of war” exclusions because the malware was linked to the Russian military. The cases are pending.

The proliferation of cyber insurers willing to accommodate ransom demands has fostered an industry of data recovery and incident response firms that insurers hire to investigate attacks and negotiate with and pay hackers. This year, two FBI officials who recently retired from the bureau opened an incident response firm in Connecticut. The firm, The Aggeris Group, says on its website that it offers “an expedient response by providing cyber extortion negotiation services and support recovery from a ransomware attack.”

Ramarcus Baylor, a principal consultant for The Crypsis Group, a Virginia incident response firm, said he recently worked with two companies hit by ransomware. Although both clients had backup systems, insurers promised to cover the six-figure ransom payments rather than spend several days assessing whether the backups were working. Losing money every day the systems were down, the clients accepted the offer, he said.

Crypsis CEO Bret Padres said his company gets many of its clients from insurance referrals. There’s “really good money in ransomware” for the cyberattacker, recovery experts and insurers, he said. Routine ransom payments have created a “vicious circle,” he said. “It’s a hard cycle to break because everyone involved profits: We do, the insurance carriers do, the attackers do.”

Chris Loehr, executive vice president of Texas-based Solis Security, said there are “a lot of times” when backups are available but clients still pay ransoms. Everyone from the victim to the insurer wants the ransom paid and systems restored as fast as possible, Loehr said.

“They figure out that it’s going to take a month to restore from the cloud, and so even though they have the data backed up,” paying a ransom to obtain a decryption key is faster, he said.

“Let’s get it negotiated very quickly, let’s just get the keys, and get the customer decrypted to minimize business interruption loss,” he continued. “It makes the client happy, it makes the attorneys happy, it makes the insurance happy.”

If clients morally oppose ransom payments, Loehr said, he reminds them where their financial interests lie, and of the high stakes for their businesses and employees. “I’ll ask, ‘The situation you’re in, how long can you go on like this?’” he said. “They’ll say, ‘Well, not for long.’ Insurance is only going to cover you for up to X amount of dollars, which gets burned up fast.”

“I know it sucks having to pay off assholes, but that’s what you gotta do,” he said. “And they’re like, ‘Yeah, OK, let’s get it done.’ You gotta kind of take charge and tell them, ‘This is the way it’s going to be or you’re dead in the water.’”

Lloyd’s-backed CFC, a specialist insurance provider based in London, uses Solis for some of its U.S. clients hit by ransomware. Graeme Newman, chief innovation officer at CFC, said “we work relentlessly” to help victims improve their backup security. “Our primary objective is always to get our clients back up and running as quickly as possible,” he said. “We would never recommend that our clients pay ransoms. This would only ever be a very final course of action, and any decision to do so would be taken by our clients, not us as an insurance company.”

As ransomware has burgeoned, the incident response division of Solis has “taken off like a rocket,” Loehr said. Loehr’s need for a reliable way to pay ransoms, which typically are transacted in digital currencies such as Bitcoin, spawned Sentinel Crypto, a Florida-based money services business managed by his friend, Wesley Spencer. Sentinel’s business is paying ransoms on behalf of clients whose insurers reimburse them, Loehr and Spencer said.

New York-based Flashpoint also pays ransoms for insurance companies. Hofmann, the vice president, said insurers typically give policyholders a toll-free number to dial as soon as they realize they’ve been hit. The number connects to a lawyer who provides a list of incident response firms and other contractors. Insurers tightly control expenses, approving or denying coverage for the recovery efforts advised by the vendors they suggest.

“Carriers are absolutely involved in the decision making,” Hofmann said. On both sides of the attack, “insurance is going to transform this entire market,” he said.

On June 10, Lake City government officials noticed they couldn’t make calls or send emails. IT staff then discovered encrypted files on the city’s servers and disconnected the infected servers from the internet. The city soon learned it was struck by Ryuk ransomware. Over the past year, unknown attackers using the Ryuk strain have besieged small municipalities and technology and logistics companies, demanding ransoms up to $5 million, according to the FBI.

Shortly after realizing it had been attacked, Lake City contacted the Florida League of Cities, which provides insurance for more than 550 public entities in the state. Beazley is the league’s reinsurer for cyber coverage, and they share the risk. The league declined to comment.

Initially, the city had hoped to restore its systems without paying a ransom. IT staff was “plugging along” and had taken server drives to a local vendor who’d had “moderate success at getting the stuff off of it,” Lee said. However, the process was slow and more challenging than anticipated, he said.

As the local technicians worked on the backups, Beazley requested a sample encrypted file and the ransom note so its approved vendor, Coveware, could open negotiations with the hackers, said Steve Roberts, Lake City’s director of risk management. The initial ransom demand was 86 bitcoin, or about $700,000 at the time, Coveware CEO Bill Siegel said. “Beazley was not happy with it — it was way too high,” Roberts said. “So [Coveware] started negotiations with the perps and got it down to the 42 bitcoin. Insurance stood by with the final negotiation amount, waiting for our decision.”

Lee said Lake City may have been able to achieve a “majority recovery” of its files without paying the ransom, but it probably would have cost “three times as much money trying to get there.” The city fired its IT director, Brian Hawkins, in the midst of the recovery efforts. Hawkins, who is suing the city, said in an interview posted online by his new employer that he was made “the scapegoat” for the city’s unpreparedness. The “recovery process on the files was taking a long time” and “the lengthy process was a major factor in paying the ransom,” he said in the interview.

On June 25, the day after the council meeting, the city said in a press release that while its backup recovery efforts “were initially successful, many systems were determined to be unrecoverable.” Lake City fronted the ransom amount to Coveware, which converted the money to bitcoin, paid the attackers and received a fee for its services. The Florida League of Cities reimbursed the city, Roberts said.

Lee acknowledged that paying ransoms spurs more ransomware attacks. But as cyber insurance becomes ubiquitous, he said, he trusts the industry’s judgment.

“The insurer is the one who is going to get hit with most of this if it continues,” he said. “And if they’re the ones deciding it’s still better to pay out, knowing that means they’re more likely to have to do it again — if they still find that it’s the financially correct decision — it’s kind of hard to argue with them because they know the cost-benefit of that. I have a hard time saying it’s the right decision, but maybe it makes sense with a certain perspective.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.



51 Corporations Tell Congress: A Federal Privacy Law Is Needed. 145 Corporations Tell The U.S. Senate: Inaction On Gun Violence Is 'Simply Unacceptable'

Last week, several of the largest corporations petitioned the United States government for federal legislation on two key issues: consumer privacy and gun reform.

First, the Chief Executive Officers (CEOs) at 51 corporations sent a jointly signed letter to leaders in Congress asking for a federal privacy law to supersede laws emerging in several states. ZD Net reported:

"The open-letter was sent on behalf of Business Roundtable, an association made up of the CEOs of America's largest companies... CEOs blamed a patchwork of differing privacy regulations that are currently being passed in multiple US states, and by several US agencies, as one of the reasons why consumer privacy is a mess in the US. This patchwork of privacy regulations is creating problems for their companies, which have to comply with an ever-increasing number of laws across different states and jurisdictions. Instead, the 51 CEOs would like one law that governs all user privacy and data protection across the US, which would simplify product design, compliance, and data management."

The letter was sent to U.S. Senate Majority Leader Mitch McConnell, U.S. Senate Minority Leader Charles E. Schumer, Senator Roger F. Wicker (Chairman of the Committee on Commerce, Science and Transportation), Nancy Pelosi (Speaker of the U.S. House of Representatives), Kevin McCarthy (Minority Leader of the U.S. House of Representatives), Frank Pallone, Jr. (Chairman of the Committee on Energy and Commerce in the U.S. House of Representatives), and other ranking politicians.

The letter stated, in part:

"Consumers should not and cannot be expected to understand rules that may change depending upon the state in which they reside, the state in which they are accessing the internet, and the state in which the company’s operation is providing those resources or services. Now is the time for Congress to act and ensure that consumers are not faced with confusion about their rights and protections based on a patchwork of inconsistent state laws. Further, as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened. "

That sounds fair and noble enough. After writing this blog for more than 12 years, I have learned that details matter. Who writes the proposed legislation, and what that legislation says, matters. It is too early to tell whether the proposed legislation would be weaker or stronger than what some states have implemented.

Some of the notable companies which signed the joint letter included AT&T, Amazon, Comcast, Dell Technologies, FedEx, IBM, Qualcomm, Salesforce, SAP, Target, and Walmart. Signers from the financial services sector included American Express, Bank of America, Citigroup, JPMorgan Chase, MasterCard, State Farm Insurance, USAA, and Visa. Several notable companies did not sign the letter: Facebook, Google, Microsoft, and Verizon.

Second, The New York Times reported that executives from 145 companies sent a joint letter to members of the U.S. Senate demanding that they take action on gun violence. The letter stated, in part (emphasis added):

"... we are writing to you because we have a responsibility and obligation to stand up for the safety of our employees ,customers, and all Americans in the communities we serve across the country. Doing nothing about America's gun violence crisis is simply unacceptable and it is time to stand with the American public on gun safety. Gun violence in America is not inevitable; it's preventable. There are steps Congress can, and must take to prevent and reduce gun violence. We need our lawmakers to support common sense gun laws... we urge the Senate to stand with the American public and take action on gun safety by passing a bill to require background checks on all gun sales and a strong Red Flag law that would allow courts to issue life-saving extreme risk protection orders..."

Some of the notable companies which signed the letter included Airbnb, Bain Capital, Bloomberg LP, Conde Nast, DICK'S Sporting Goods, Gap Inc., Levi Strauss & Company, Lyft, Pinterest, Publicis Groupe, Reddit, Royal Caribbean Cruises Ltd., Twitter, Uber, and Yelp.

Earlier this year, the U.S. House of Representatives passed legislation to address gun violence. So far, the U.S. Senate has done nothing. Representative Kathy Castor (14th District in Florida), explained the actions the House took in 2019:

"The Bipartisan Background Checks Act that I championed is a commonsense step to address gun violence and establish measures that protect our community and families. America is suffering from a long-term epidemic of gun violence – each year, 120,000 Americans are injured and 35,000 die by firearms. This bill ensures that all gun sales or transfers are subject to a background check, stopping senseless violence by individuals to themselves and others... Additionally, the Democratic House passed H.R. 1112 – the Enhanced Background Checks Act of 2019 – which addresses the Charleston Loophole that currently allows gun dealers to sell a firearm to dangerous individuals if the FBI background check has not been completed within three business days. H.R. 1112 makes the commonsense and important change to extend the review period to 10 business days..."

Findings from a February 2018 Quinnipiac national poll:

"American voters support stricter gun laws 66 - 31 percent, the highest level of support ever measured by the independent Quinnipiac University National Poll, with 50 - 44 percent support among gun owners and 62 - 35 percent support from white voters with no college degree and 58 - 38 percent support among white men... Support for universal background checks is itself almost universal, 97 - 2 percent, including 97 - 3 percent among gun owners. Support for gun control on other questions is at its highest level since the Quinnipiac University Poll began focusing on this issue in the wake of the Sandy Hook massacre: i) 67 - 29 percent for a nationwide ban on the sale of assault weapons; ii) 83 - 14 percent for a mandatory waiting period for all gun purchases. It is too easy to buy a gun in the U.S. today..."


Court Okays 'Data Scraping' By Analytics Firm Of Users' Public LinkedIn Profiles. Lots Of Consequences

Earlier this week, a federal appeals court affirmed an August 2017 injunction which required LinkedIn, a professional networking platform owned by Microsoft Corporation, to allow hiQ Labs, Inc. to access members' profiles. This ruling has implications for everyone.

First, some background. The Naked Security blog by Sophos explained in December 2017:

"... hiQ is a company that makes its money by “scraping” LinkedIn’s public member profiles to feed two analytical systems, Keeper and Skill Mapper. Keeper can be used by employers to detect staff that might be thinking about leaving while Skill Mapper summarizes the skills and status of current and future employees. For several years, this presented no problems until, in 2016, LinkedIn decided to offer something similar, at which point it sent hiQ and others in the sector cease and desist letters and started blocking the bots reading its pages."

So, hiQ's apps use algorithms to predict for its clients (prospective or current employers) which employees are likely to stay or leave. Gizmodo explained the law LinkedIn invoked in court, arguing that the:

"... practice of scraping publicly available information from their platform violated the 1986 Computer Fraud and Abuse Act (CFAA). The CFAA is infamously vaguely written and makes it illegal to access a “protected computer” without or in excess of “authorization”—opening the door to sweeping interpretations that could be used to criminalize conduct not even close to what would traditionally be understood as hacking."

Second, the latest court ruling basically said two things: a) it is legal (and doesn't violate hacking laws) for companies to scrape information contained in publicly available profiles; and b) LinkedIn must allow hiQ (and potentially other firms) to continue with data-scraping. This has plenty of implications.
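Mechanically, "scraping" a public page is simple: a script fetches the same HTML anyone's browser would receive and pulls out the fields of interest. A minimal sketch using only Python's standard library -- note that the sample markup and field names here are invented for illustration, not LinkedIn's actual page structure:

```python
from html.parser import HTMLParser

# Invented stand-in for a "public profile" page; real markup differs.
SAMPLE_PROFILE_HTML = """
<html><body>
  <h1 class="name">Jane Example</h1>
  <li class="skill">Data Analysis</li>
  <li class="skill">Project Management</li>
</body></html>
"""

class ProfileScraper(HTMLParser):
    """Collects text from elements whose class attribute marks them as profile fields."""
    def __init__(self):
        super().__init__()
        self._capture = None          # field currently being read, if any
        self.fields = {"name": None, "skills": []}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "name":
            self._capture = "name"
        elif cls == "skill":
            self._capture = "skill"

    def handle_data(self, data):
        text = data.strip()
        if not text or self._capture is None:
            return
        if self._capture == "name":
            self.fields["name"] = text
        else:
            self.fields["skills"].append(text)
        self._capture = None

scraper = ProfileScraper()
scraper.feed(SAMPLE_PROFILE_HTML)
print(scraper.fields)  # {'name': 'Jane Example', 'skills': ['Data Analysis', 'Project Management']}
```

Run at scale across millions of profile URLs, scripts like this become the "bots" LinkedIn tried to block -- which is exactly the conduct the court ruled does not violate the CFAA when the pages are public.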

This recent ruling may surprise some people, since the issue of data scraping was supposedly settled law. MediaPost reported:

"Monday's ruling appears to effectively overrule a decision issued six years ago in a dispute between Craigslist and the data miner 3Taps, which also scraped publicly available listings. In that matter, 3Taps allegedly scraped real estate listings and made them available to the developers PadMapper and Lively. PadMapper allegedly meshed Craigslist's apartment listings with Google maps... U.S. District Court Judge Charles Breyer in the Northern District of California ruled in 2013 that 3Taps potentially violated the anti-hacking law by scraping listings from Craigslist after the company told it to stop doing so."

So, you can bet that both social media sites and data analytics firms closely watched and read the appeals court's ruling this week.

Third, in theory any company or agency could then legally scrape information from public profiles on the LinkedIn platform. This scraping could be done by industries and/or entities (e.g., spy agencies worldwide) which job seekers neither intended nor wanted.

Many consumers simply signed up for and use LinkedIn to build professional relationships and/or to find jobs, either full-time as employees or as contractors. The 2019 social media survey by Pew Research found that 27 percent of adults in the United States use LinkedIn, with higher usage among persons with college degrees (51 percent), persons making more than $75K annually (49 percent), persons ages 25 - 29 (44 percent), persons ages 30 - 49 (37 percent), and urban residents (33 percent).

I'll bet that many LinkedIn users never imagined that their profiles would be used against them by data analytics firms. Like it or not, that is how consumers' valuable, personal data is used (abused?) by social media sites and their clients.

Fourth, the practice of data scraping has divided tech companies. Again, from the Naked Security blog post in 2017:

"Data scraping, its seems, has become a booming tech sector that increasingly divides the industry ideologically. One side believes LinkedIn is simply trying to shut down a competitor wanting to access public data LinkedIn merely displays rather than owns..."

The Electronic Frontier Foundation (EFF), the DuckDuckGo search engine, and the Internet Archive had filed an amicus brief with the appeals court before its ruling. The EFF explained the group's reasoning and urged the:

"... Court of Appeals to reject LinkedIn’s request to transform the CFAA from a law meant to target serious computer break-ins into a tool for enforcing its computer use policies. The social networking giant wants violations of its corporate policy against using automated scripts to access public information on its website to count as felony “hacking” under the Computer Fraud and Abuse Act, a 1986 federal law meant to criminalize breaking into private computer systems to access non-public information. But using automated scripts to access publicly available data is not "hacking," and neither is violating a website’s terms of use. LinkedIn would have the court believe that all "bots" are bad, but they’re actually a common and necessary part of the Internet. "Good bots" were responsible for 23 percent of Web traffic in 2016..."

So, bots are here to stay. And, it's up to LinkedIn executives to find a solution to protect their users' information.

Fifth, according to the Reuters report, the judge suggested a solution for LinkedIn: "eliminating the public access option." Hmmmm. Public, or at least broad, access is what many job seekers desire. So, a balance needs to be struck between truly "public" access, where anyone, anywhere worldwide could view profiles, and access limited to intended audiences (e.g., hiring executives at potential employers in certain industries).

Sixth, what struck me about the court ruling this week was that nobody was in the court room representing the interests of LinkedIn users, of which I am one. MediaPost reported:

"The appellate court discounted LinkedIn's argument that hiQ was harming users' privacy by scraping data even when people used a "do not broadcast" setting. "There is no evidence in the record to suggest that most people who select the 'Do Not Broadcast' option do so to prevent their employers from being alerted to profile changes made in anticipation of a job search," the judges wrote. "As the district court noted, there are other reasons why users may choose that option -- most notably, many users may simply wish to avoid sending their connections annoying notifications each time there is a profile change." "

What? Really?! We LinkedIn users have a natural, vested interest in control over both our profiles and the sensitive, personal information that describes each of us in those profiles. Either somebody at LinkedIn failed to adequately represent the interests of its users, the court didn't listen closely nor seek out additional evidence, or both.

Maybe the "there is no evidence in the record" regarding the 'Do Not Broadcast' feature will be the basis of another appeal or lawsuit.

With this latest court ruling, we LinkedIn users have totally lost control (except for deleting or suspending our LinkedIn accounts). It makes me wonder how a court could reach its decision without hearing directly from somebody representing LinkedIn users.

Seventh, it seems that LinkedIn needs to modify its platform in three key ways:

  1. Allow its users to specify the only uses or applications (e.g., find full-time work, find contract work, build contacts in my industry or area of expertise, find/screen job candidates, advertise/promote a business, academic research, publish content, read news, dating, etc.) for which their profiles may be used. The 'Do Not Broadcast' feature is clearly not strong enough;
  2. Allow its users to specify or approve individual users -- other actual persons who are LinkedIn users and not bots nor corporate accounts -- who can access their full, detailed profiles; and
  3. Outline in the user agreement the list of applications or uses profiles may be accessed for, so that both prospective and current LinkedIn users can make informed decisions. 

This would give LinkedIn users some control over the sensitive, personal information in their profiles. Without control, the benefits of using LinkedIn quickly diminish. And, that's enough to cause me to rethink my use of LinkedIn, and either deactivate or delete my account.

What are your opinions of this ruling? If you currently use LinkedIn, will you continue using it? If you don't use LinkedIn and were considering it, will you still consider using it?


Mashable: 7 Privacy Settings iPhone Users Should Enable Today

Most people want to get the most from their smartphones. That includes using their devices wisely and with privacy. Mashable recommended seven privacy settings for Apple iPhone users. I found the recommendations very helpful, and thought that you would, too.

Three privacy settings stood out. First, many mobile apps have:

"... access to your camera. For some of these, the reasoning is a no-brainer. You want to be able to use Snapchat filters? Fine, the app needs access to your camera. That makes sense. Other apps' reasoning for having access to your camera might be less clear. Once again, head to Settings > Privacy > Camera and review what apps you've granted camera access. See anything in there that doesn't make sense? Go ahead and disable it."

A feature most consumers probably haven't considered:

"... which apps on your phone have requested microphone access. For example, do you want Drivetime to have access to your mic? No? Because if you've downloaded it, then it might. If an app doesn't have a clear reason for needing access to your microphone, don't give it that access."

And, perhaps most importantly:

"Did you forget about your voicemail? Hackers didn't. At the 2018 DEF CON, researchers demonstrated the ability to brute force voicemail accounts and use that access to reset victims' Google and PayPal accounts... Set a random 9-digit voicemail password. Go to Settings > Phone and scroll down to "Change Voicemail Password." You iPhone should let you choose a 9-digit code..."

The full list is a reminder for consumers not to assume that the default settings on mobile apps you install are right for your privacy needs. Wise consumers check and make adjustments.


Privacy Study Finds Consumers Less Likely To Share Several Key Data Elements

Last month, the Advertising Research Foundation (ARF) announced the results of its 2019 Privacy Study, which was conducted in March. The survey included 1,100 consumers in the United States, weighted by age, gender, and region. Key findings include device and internet usage:

"The key differences between 2018 and 2019 are: i) People are spending more time on their mobile devices and less time on their PCs; ii) People are spending more time checking email, banking, listening to music, buying things, playing games, and visiting social media via mobile apps; iii) In general, people are only slightly less likely to share their data than last year. iv) They are least likely to share their social security number; financial and medical information; and their home address and phone numbers; v) People seem to understand the benefits of personalized advertising, but do not value personalization highly and do not understand the technical approaches through which it is accomplished..."

Advertisers use these findings to adjust their advertising, offers, and pitches to maximize responses by consumers. More detail about the above privacy and data sharing findings:

"In general, people were slightly less likely to share their data in 2019 than they were in 2018. They were least likely to share their social security number; financial and medical information; their work address; and their home address and phone numbers in both years. They were most likely to share their gender, race, marital status, employment status, sexual orientation, religion, political affiliation, and citizenship... The biggest changes in respondents’ willingness to share their data from 2018 to 2019 were seen in their home address (-10 percentage points), spouse’s first and last name (-8 percentage points), personal email address (-7 percentage points), and first and last names (-6 percentage points)."

The researchers asked the data sharing question in two ways:

  1. "Which of the following types of information would you be willing to share with a website?"
  2. "Which of the following types of information would you be willing to share for a personalized experience?"

The survey included 20 information types for both questions. For the first question, survey respondents' willingness to share decreased for 15 of 20 information types, remained constant for two information types, and increased slightly for the remainder:

Which of the following types of information would you be willing to share with a website?

Information Type               2018 %   2019 %   Change
Birth Date                       71       68      (3)
Citizenship Status               82       79      (3)
Employment Status                84       82      (2)
Financial Information            23       20      (3)
First & Last Name                69       63      (6)
Gender                           93       93      --
Home Address                     41       31      (10)
Home Landline Phone Number       33       30      (3)
Marital Status                   89       85      (4)
Medical Information              29       26      (3)
Personal Email Address           61       54      (7)
Personal Mobile Phone Number     34       32      (2)
Place Of Birth                   62       58      (4)
Political Affiliation            76       77      1
Race or Ethnicity                90       91      1
Religious Preference             78       79      1
Sexual Orientation               80       79      (1)
Social Security Number           10       10      --
Spouse's First & Last Name       41       33      (8)
Work Address                     33       31      (2)

(Declines from 2018 to 2019 are shown in parentheses, in percentage points.)
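The change column above is just the 2019 figure minus the 2018 figure, with declines shown in parentheses. A quick way to recompute it, using a few of the surveyed items as examples:

```python
# (2018 %, 2019 %) willingness-to-share figures for a few information types,
# taken from the table above.
responses = {
    "Home Address": (41, 31),
    "Spouse's First & Last Name": (41, 33),
    "Personal Email Address": (61, 54),
    "Gender": (93, 93),
    "Political Affiliation": (76, 77),
}

changes = {}
for item, (y2018, y2019) in responses.items():
    delta = y2019 - y2018
    # Declines are shown in parentheses, no change as "--", gains as plain numbers.
    changes[item] = "--" if delta == 0 else (f"({-delta})" if delta < 0 else str(delta))
    print(f"{item}: {changes[item]}")
```

Running this reproduces the table's notation: Home Address shows "(10)", Gender shows "--", and Political Affiliation shows "1".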

The researchers asked about citizenship status due to controversy related to the upcoming 2020 Census. The researchers concluded:

"The survey finding most relevant to these proposals is that the public does not see the value of sharing data to improve personalization of advertising messages..."

Overall, it appears that consumers are getting wiser about their privacy. Consumers' willingness to share decreased for more items than it increased for. View the detailed ARF 2019 Privacy Survey (Adobe PDF).


Google And YouTube To Pay $170 Million In Proposed Settlement To Resolve Charges Of Children's Privacy Violations

Today's blog post contains information all current and future parents should know. On Tuesday, the U.S. Federal Trade Commission (FTC) announced a proposed settlement agreement where YouTube LLC, and its parent company, Google LLC, will pay a monetary fine of $170 million to resolve charges that the video-sharing service illegally collected the personal information of children without their parents' consent.

The proposed settlement agreement requires YouTube and Google to pay $136 million to the FTC and $34 million to New York State to resolve charges that the video sharing service violated the Children’s Online Privacy Protection Act (COPPA) Rule. The announcement explained the allegations:

"... that YouTube violated the COPPA Rule by collecting personal information—in the form of persistent identifiers that are used to track users across the Internet—from viewers of child-directed channels, without first notifying parents and getting their consent. YouTube earned millions of dollars by using the identifiers, commonly known as cookies, to deliver targeted ads to viewers of these channels, according to the complaint."

"The COPPA Rule requires that child-directed websites and online services provide notice of their information practices and obtain parental consent prior to collecting personal information from children under 13, including the use of persistent identifiers to track a user’s Internet browsing habits for targeted advertising. In addition, third parties, such as advertising networks, are also subject to COPPA where they have actual knowledge they are collecting personal information directly from users of child-directed websites and online services... the FTC and New York Attorney General allege that while YouTube claimed to be a general-audience site, some of YouTube’s individual channels—such as those operated by toy companies—are child-directed and therefore must comply with COPPA."

While $170 million is a lot of money, it is tiny compared to the $5 billion fine the FTC assessed against Facebook. The fine is also tiny compared to Google's earnings. Alphabet Inc., the holding company which owns Google, generated pretax net income of $34.91 billion during 2018 on revenues of $136.96 billion.

In February, the FTC concluded a settlement with Musical.ly, a video social networking app now operating as TikTok, where Musical.ly paid $5.7 million to resolve allegations of COPPA violations. Regarding the proposed settlement with YouTube, Education Week reported:

"YouTube has said its service is intended for ages 13 and older, although younger kids commonly watch videos on the site and many popular YouTube channels feature cartoons or sing-a-longs made for children. YouTube has its own app for children, called YouTube Kids; the company also launched a website version of the service in August. The site says it requires parental consent and uses simple math problems to ensure that kids aren't signing in on their own. YouTube Kids does not target ads based on viewer interests the way YouTube proper does. The children's version does track information about what kids are watching in order to recommend videos. It also collects personally identifying device information."

The proposed settlement also requires YouTube and Google:

"... to develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA. In addition, the companies must notify channel owners that their child-directed content may be subject to the COPPA Rule’s obligations and provide annual training about complying with COPPA for employees who deal with YouTube channel owners. The settlement also prohibits Google and YouTube from violating the COPPA Rule, and requires them to provide notice about their data collection practices and obtain verifiable parental consent before collecting personal information from children."

The complaint and proposed consent decree were filed in the U.S. District Court for the District of Columbia. After approval by a judge, the proposed settlement becomes final. Hopefully, the fine and additional requirements will be enough to deter future abuses.


Privacy Tips For The Smart Speakers In Your Home

Many consumers love the hands-free convenience of smart speakers in their homes. The appeal includes several applications: streaming music, planning travel, managing grocery lists, getting news briefings, buying movie tickets, hearing jokes, getting sports scores, and more. Like any other internet-connected device, it's wise to know and use the device's security settings if you value your privacy, and that of your children and guests.

In the August issue of its print magazine, Consumer Reports (CR) advises the following settings for your smart speakers:

"Protect Your Privacy
If keeping a speaker with a microphone in your home makes you uneasy, you have reason to be. Amazon, Apple, and Google all collect recorded snippets of consumers' commands to improve their voice-computing technology. But they also offer ways to mute the mic when it's not in use. The Amazon Echo has an On/Off button on top of the device. The Google Home's mute button is on the back. And Apple's HomePod requires a voice command: "Hey, Siri, stop listening." (You then use a button to turn the device back on.) For a third-party speaker, consult the owner's manual for instructions."

To learn more, the CR site offers several related resources.


Operating Issues Continue To Affect The Integrity Of Products Sold On Amazon Site

News reports last week described in detail the operating issues that affect the integrity and reliability of products sold on the Amazon site. The Verge reported that some sellers:

"... hop onto fast-selling listings with counterfeit goods, or frame their competitors with fake reviews. One common tactic is to find a once popular, but now abandoned product and hijack its listing, using the page’s old reviews to make whatever you’re selling appear trustworthy. Amazon’s marketplace is so chaotic that not even Amazon itself is safe from getting hijacked. In addition to being a retail platform, Amazon sells its own house-brand goods under names like AmazonBasics, Rivet furniture, Happy Belly food, and hundreds of other labels."

The hijacked product pages include photos, descriptions, reviews, and/or comments from other products -- a confusing mix of content. You might assume this isn't possible, but it happens. The Verge report explained:

"There are now more than 2 million sellers on the platform, and Amazon has struggled to maintain order. A recent Wall Street Journal investigation found thousands of items for sale on the site that were deceptively labeled or declared unsafe by federal regulators... A former Amazon employee who now works as a consultant for Amazon sellers, she’s worked with clients who have undergone similar hijackings. She says these listings were likely seized by a seller who contacted Amazon’s Seller Support team and asked them to push through a file containing the changes. The team is based mostly overseas, experiences high turnover, and is expected to work quickly, Greer says, and if you find the right person they won’t check what changes the file contains."

This directly affects online shoppers. The article also included this tip for shoppers:

"... the easiest way to detect a hijacking is to check that the reviews refer to the product being sold..."

What a mess! The burden should not fall upon shoppers. Amazon needs to clean up its mess -- quickly. What are your opinions?