
Facebook To Pay $40 Million To Advertisers To Resolve Allegations of Inflated Advertising Metrics

According to court papers filed last week, Facebook has entered into a proposed settlement agreement under which it will pay $40 million to advertisers to resolve allegations in a class-action lawsuit that the social networking platform inflated video advertising engagement metrics. Forbes explained:

"The metrics in question are critical for advertisers on video-based content platforms such as YouTube and Facebook because they show the average amount of time users spend watching their content before clicking away. During the 18 months between February of 2015 and September of 2016, Facebook was incorrectly calculating — and consequently, inflating — two key metrics of this type. Members of the class action are alleging that the faulty metrics led them to spend more money on Facebook ads than they otherwise would have..."

Metrics help advertisers determine whether the ads they paid for are delivering results. Reportedly, the lawsuit lasted three years, and Facebook denied any wrongdoing. The proposed settlement must be approved by a court. About $12 million of the $40 million total will be used to pay plaintiffs' attorney fees.

A brief supporting the proposed settlement provided more details:

" One metric—“Average Duration of Video Viewed”—depicted the average number of seconds users watched the video; another—–“Average Percentage of Video Viewed”—depicted the average percentage of the video ad that users watched... Starting in February 2015, Facebook incorrectly calculated Average Duration of Video Viewed... The Average View Duration error, in turn, led to the Average Percentage Viewed metric also being inflated... Because of the error, the average watch times of video ads were exaggerated for about 18 months... Facebook acknowledges there was an error. But Facebook has argued strenuously that the error was an innocent mistake that Facebook corrected shortly after discovering it. Facebook has also pointed out that some advertisers likely never viewed the erroneous metrics and that because Facebook does not set prices based on the impacted metrics, the error did not lead to overcharges... The settlement provides a $40 million cash fund from Facebook, which constitutes as much as 40% of what Plaintiffs estimate they may realistically have been able to recover had the case made it to trial and had Plaintiffs prevailed. Facebook’s $40 million payment will... also cover the costs of settlement administration, class notice, service awards, and Plaintiffs’ litigation costs24 and attorneys’ fees."

It seems that, besides a multitude of data breaches and privacy snafus, Facebook can't quite operate its core advertising business reliably. What do you think?


FTC To Distribute $31 Million In Refunds To Affected Lifelock Customers

The U.S. Federal Trade Commission (FTC) announced on Tuesday the distribution of about $31 million worth of refunds to certain customers of LifeLock, an identity protection service. The refunds are part of a previously announced settlement agreement to resolve allegations that the identity-theft protection service violated a 2010 consent order.

LifeLock has featured notable spokespersons, including radio talk-show host Rush Limbaugh, television personality Montel Williams, actress Angie Harmon, and former New York City Mayor Rudy Giuliani, who is now the personal attorney for President Trump.

The FTC announcement explained:

"The refunds stem from a 2015 settlement LifeLock reached with the Commission, which alleged that from 2012 to 2014 LifeLock violated an FTC order that required the company to secure consumers’ personal information and prohibited it from deceptive advertising. The FTC alleged, among other things, that LifeLock failed to establish and maintain a comprehensive information security program to protect users’ sensitive personal information, falsely advertised that it protected consumers’ sensitive data with the same high-level safeguards used by financial institutions, and falsely claimed it provided 24/7/365 alerts “as soon as” it received any indication a consumer’s identity was being used."

The 2015 settlement agreement with the FTC required LifeLock to pay $100 million to affected customers. About $68 million has been paid to customers who were part of a class-action lawsuit. The FTC is using the remaining money to provide refunds to consumers who were LifeLock members between 2012 and 2014 but did not receive a payment from the class-action settlement.

The FTC expects to mail about one million refund checks worth about $29 each.
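The check amount is consistent with the figures above. A back-of-the-envelope sketch, assuming (hypothetically; the FTC announcement does not break this out) that roughly $2 million of the $31 million fund goes to administration and mailing costs:

```python
total_fund = 31_000_000   # dollars in the FTC refund distribution
checks = 1_000_000        # approximate number of refund checks mailed
admin_costs = 2_000_000   # assumed overhead (not stated in the article)

# Divide what remains of the fund evenly across the mailed checks.
per_check = (total_fund - admin_costs) / checks
print(f"${per_check:.0f} per check")  # prints "$29 per check"
```

Under that assumption, the arithmetic lands on the roughly $29 per check the FTC announced.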

If you are a LifeLock customer and find this checkered history bothersome, Consumer Reports has some recommendations about what you can do instead. It might save you some money, too.


3 Countries Sent A Joint Letter Asking Facebook To Delay End-To-End Encryption Until Law Enforcement Has Back-Door Access. 58 Concerned Organizations Responded

Plenty of privacy and surveillance news recently. Last week, the governments of three countries sent a joint, open letter to Facebook asking the social media platform to delay implementation of end-to-end encryption in its messaging apps until back-door access can be provided for law enforcement.

BuzzFeed News published the joint, open letter by U.S. Attorney General William Barr, United Kingdom Home Secretary Priti Patel, acting U.S. Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton. The letter, dated October 4th, was sent to Mark Zuckerberg, the Chief Executive Officer of Facebook. It read in part:

"OPEN LETTER: FACEBOOK’S “PRIVACY FIRST” PROPOSALS

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

In your post of 6 March 2019, “A Privacy-Focused Vision for Social Networking,” you acknowledged that “there are real safety concerns to address before we can implement end-to-end encryption across all our messaging services.” You stated that “we have a responsibility to work with law enforcement and to help prevent” the use of Facebook for things like child sexual exploitation, terrorism, and extortion. We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.

We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications. We also respect promises made by technology companies to protect users’ data. Law abiding citizens have a legitimate expectation that their privacy will be protected. However, as your March blog post recognized, we must ensure that technology companies protect their users and others affected by their users’ online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world..."

The open, joint letter is also available on the United Kingdom government site. Mr. Zuckerberg's complete March 6, 2019 post is available here.

Earlier this year, the U.S. Federal Bureau of Investigation (FBI) issued a Request For Proposals (RFP) seeking quotes from technology companies to build a real-time social media monitoring tool. It seems such a tool would have limited utility without back-door access to encrypted social media accounts.

In 2016, the FBI went to court to force Apple Inc. to build "back door" software to unlock an attacker's iPhone. Apple refused, since back-door software would provide access to any iPhone, not only that particular smartphone. Ultimately, the FBI paid an outside company to unlock the phone. Later that year, then-FBI Director James Comey suggested a national discussion about encryption versus safety. It seems the country still hasn't had that conversation.

According to BuzzFeed News, Facebook's initial response to the joint letter was brief:

"In a three paragraph statement, Facebook said it strongly opposes government attempts to build backdoors."

We shall see if Facebook holds steady to that position. Privacy advocates quickly weighed in. The Electronic Frontier Foundation (EFF) wrote:

"This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people. Facebook should not comply. The letter comes in concert with the signing of a new agreement between the US and UK to provide access to allow law enforcement in one jurisdiction to more easily obtain electronic data stored in the other jurisdiction. But the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation... The letter focuses on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people—including journalists, human rights activists, and those at risk of abuse by intimate partners—use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations... What’s more, the backdoors into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, who might seek to spy on dissidents..."

The new agreement the EFF referred to was explained in this United Kingdom announcement:

"The world-first UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather than through governments, which can take years... The current process, which see requests for communications data from law enforcement agencies submitted and approved by central governments via Mutual Legal Assistance (MLA), can often take anywhere from six months to two years. Once in place, the Agreement will see the process reduced to a matter of weeks or even days."

The Agreement will each year accelerate dozens of complex investigations into suspected terrorists and paedophiles... The US will have reciprocal access, under a US court order, to data from UK communication service providers. The UK has obtained assurances which are in line with the government’s continued opposition to the death penalty in all circumstances..."

On Friday, a group of 58 privacy advocates and concerned organizations from several countries sent a joint letter to Facebook regarding its end-to-end encryption plans. The Center For Democracy & Technology (CDT) posted the group's letter:

"Given the remarkable reach of Facebook’s messaging services, ensuring default end-to-end security will provide a substantial boon to worldwide communications freedom, to public safety, and to democratic values, and we urge you to proceed with your plans to encrypt messaging through Facebook products and services. We encourage you to resist calls to create so-called “backdoors” or “exceptional access” to the content of users’ messages, which will fundamentally weaken encryption and the privacy and security of all users."

It seems wise to have a conversation that weighs all of the advantages and disadvantages, rather than selectively focusing upon some serious crimes while ignoring other significant risks, since back-door software can be abused like any other technology. What are your opinions?


Transcripts Of Internal Facebook Meetings Reveal True Views Of The Company And Its CEO

It's always good for consumers -- and customers -- to know a company's positions on key issues. Thanks to The Verge, we now know more about Facebook's views. Portions of the leaked transcripts included statements by Mr. Zuckerberg, Facebook's CEO, during internal business meetings. The Verge explained the transcripts:

"In two July meetings, Zuckerberg rallied his employees against critics, competitors, and Senator Elizabeth Warren, among others..."

Portions of statements by Mr. Zuckerberg included:

"I’m certainly more worried that someone is going to try to break up our company... So there might be a political movement where people are angry at the tech companies or are worried about concentration or worried about different issues and worried that they’re not being handled well. That doesn’t mean that, even if there’s anger and that you have someone like Elizabeth Warren who thinks that the right answer is to break up the companies... I mean, if she gets elected president, then I would bet that we will have a legal challenge, and I would bet that we will win the legal challenge... breaking up these companies, whether it’s Facebook or Google or Amazon, is not actually going to solve the issues. And, you know, it doesn’t make election interference less likely. It makes it more likely because now the companies can’t coordinate and work together. It doesn’t make any of the hate speech or issues like that less likely. It makes it more likely..."

An October 1st post by Mr. Zuckerberg confirmed the transcripts. Earlier this year, Mr. Zuckerberg called for more government regulation. Given his latest comments, we now know his true views.

Also, CNET reported:

"In an interview with the Today show that aired Wednesday, Instagram CEO Adam Mosseri said he generally agrees with the comments Zuckerberg made during the meetings, adding that the company's large size can help it tackle issues like hate speech and election interference on social media."

The claim by Mosseri, Zuckerberg, and others that their company needs to be even bigger to tackle these issues is, frankly, laughable. Consumers are concerned about several different issues: privacy, hacked and/or cloned social media accounts, costs, consumer choice, surveillance, data collection they can't opt out of, the inability to delete Facebook and other mobile apps, and election interference. A recent study found that consumers want social sites to collect less data.

Industry consolidation and monopolies/oligopolies usually result with reduced consumer choices and higher prices. Prior studies have documented this. The lack of ISP competition in key markets meant consumers in the United States pay more for broadband and get slower speeds compared to other countries. At the U.S. Federal Trade Commission's "Privacy, Big Data, And Competition" hearing last year, the developers of the Brave web browser submitted this feedback:

""First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate..."

Facebook is already huge, and its massive size still hasn't stopped multiple data breaches and privacy snafus. Rather, the snafus have demonstrated an inability (unwillingness?) by the company and its executives to effectively tackle and implement solutions to adequately and truly protect users' sensitive information. Mr. Zuckerberg has repeatedly apologized, but nothing ever seems to change. Given the statements in the transcripts, his apologies seem even less believable and less credible than before.

Alarmingly, Facebook has instead sought more ways to share users' sensitive data. In August of 2018, reports surfaced that Facebook had approached several major banks and asked them to share detailed financial information about their customers in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Also last year, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. Not good.

Plus, the larger problem is this: Facebook isn't just a social network. It is also an advertiser, publishing platform, dating service, and wannabe payments service. There are several antitrust investigations underway involving Facebook. Remember, Facebook tracks both users and non-users around the internet. So, claims about it needing to be bigger to solve problems are malarkey.

And Mr. Zuckerberg's statements seem to mischaracterize Senator Warren's positions by conflating, ignoring, or minimizing several issues. Here is what Senator Warren actually stated in March 2019:

"America’s big tech companies provide valuable products but also wield enormous power over our digital lives. Nearly half of all e-commerce goes through Amazon. More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook. As these companies have grown larger and more powerful, they have used their resources and control over the way we use the Internet to squash small businesses and innovation, and substitute their own financial interests for the broader interests of the American people... Weak antitrust enforcement has led to a dramatic reduction in competition and innovation in the tech sector. Venture capitalists are now hesitant to fund new startups to compete with these big tech companies because it’s so easy for the big companies to either snap up growing competitors or drive them out of business. The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012... To restore the balance of power in our democracy, to promote competition, and to ensure that the next generation of technology innovation is as vibrant as the last, it’s time to break up our biggest tech companies..."

Senator Warren listed several examples:

"Using Mergers to Limit Competition: Facebook has purchased potential competitors Instagram and WhatsApp. Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate. Google has snapped up the mapping company Waze and the ad company DoubleClick... Using Proprietary Marketplaces to Limit Competition: Many big tech companies own a marketplace — where buyers and sellers transact — while also participating on the marketplace. This can create a conflict of interest that undermines competition. Amazon crushes small companies by copying the goods they sell on the Amazon Marketplace and then selling its own branded version. Google allegedly snuffed out a competing small search engine by demoting its content on its search algorithm, and it has favored its own restaurant ratings over those of Yelp."

Mr. Zuckerberg would be credible if he addressed each of these examples. In the transcript from The Verge, he didn't.

And there is plenty of blame to spread around, both for executives at tech companies and for antitrust regulators in government. Readers wanting to learn more can read about hijacked product pages and other chaos among sellers on the Amazon platform. There's plenty to fault tech companies for, and it isn't a political attack.

Plenty of operational failures, data security failures, and willful sharing of collected consumer data. What are your opinions of the transcripts?


Vancouver, Canada Welcomed The "Tesla Of The Cruise Industry." Ports In France Consider Bans For Certain Cruise Ships

For drivers concerned about the environment and pollution, the automobile industry has offered hybrids (which run on both gasoline and electric battery power) and fully electric vehicles (which run solely on electric battery power). The same technology trend is underway within the cruise industry.

On September 26, the Port of Vancouver welcomed the MS Roald Amundsen. Some call this cruise ship the "Tesla of the cruise industry." The International Business Times explained:

"MS Roald Amundsen can be called Tesla of the cruise industry as it is similar to the electrically powered Tesla car that set off a revolution in the auto sector by running on batteries... The state of the art ship was unveiled earlier this year by Scandinavian cruise operator Hurtigruten. The cruise ship is one of the most sustainable cruise vessels with the distinction of being one of the two hybrid-electric cruise ships in the world. MS Roald Amundsen utilizes hybrid technology to save fuel and reduce carbon dioxide emissions by 20 percent."

With 15 cruise ships, Hurtigruten offers sailings to Norway, Iceland, Alaska, the Arctic, Antarctica, Europe, South America, and more. Named after Roald Amundsen, the first explorer to reach the South Pole, the MS Roald Amundsen carries about 530 passengers.

While some cruise ships already use onboard solar panels to reduce fuel consumption, this is the first hybrid-electric cruise ship. It is an important step forward to prove that large ships can be powered in this manner.

Several ships in Royal Caribbean Cruise Line's fleet, including the Oasis of the Seas, have been outfitted with solar panels. The image on the right provides a view of the solar panels on the Celebrity Solstice cruise ship while it was docked in Auckland, New Zealand in March 2019. The panels are small and let sunlight through.

The Vancouver Is Awesome site explained why the city gave the MS Roald Amundsen special attention:

"... the Vancouver Fraser Port Authority, the federal agency responsible for the stewardship of the port, has set its vision to be the world’s most sustainable port. As a part of this vision, the port authority works to ensure the highest level of environmental protection is met in and around the Port of Vancouver. This commitment resulted in the port authority being the first in Canada and third in the world to offer shore power, an emissions-reducing initiative, for cruise ships. That said, a shared commitment to sustainability isn’t the only thing Hurtigruten has in common with our awesome city... The hybrid-electric battery used in the MS Roald Amundsen was created by Vancouver company, Corvus Energy."

Reportedly, the MS Roald Amundsen can operate for brief periods on battery power alone, resulting in zero fuel usage and zero emissions. The Port of Vancouver's website explains its Approach to Sustainability policy:

"We are on a journey to meet our vision to become the world’s most sustainable port. In 2010 we embarked on a two-year scenario planning process with stakeholders called Port 2050, to improve our understanding of what the region may look like in the future... We believe a sustainable port delivers economic prosperity through trade, maintains a healthy environment, and enables thriving communities, through meaningful dialogue, shared aspirations and collective accountability. Our definition of sustainability includes 10 areas of focus and 22 statements of success..."

I encourage everyone to read the Port of Vancouver's 22 statements of success for a healthy environment and sustainable port. Selected statements from that list:

"Healthy ecosystems:
8) Takes a holistic approach to protecting and improving air, land and water quality to promote biodiversity and human health
9) Champions coordinated management programs to protect habitats and species. Climate action
10) Is a leader among ports in energy conservation and alternative energy to minimize greenhouse gas emissions..."

"Responsible practices:
12) Improves the environmental, social and economic performance of infrastructure through design, construction and operational practices
13) Supports responsible practices throughout the global supply chain..."

"Aboriginal relationships:
18) Respects First Nations’ traditional territories and values traditional knowledge
19) Embraces and celebrates Aboriginal culture and history
20) Understands and considers contemporary interests and aspirations..."

In separate but related news, government officials in the French Riviera city of Cannes are considering a ban of cruise ships to curb pollution. The Travel Pulse site reported:

"The ban would apply to passenger vessels that do not meet a 0.1 percent sulfur cap in their fuel emissions. Any cruise ship that attempted to enter the port that did not meet the higher standards would be turned away without allowing passengers to disembark."

During 2018, about 370,000 cruise ship passengers visited Cannes, making it the fourth busiest port in France. Officials are concerned about pollution. Other European ports are considering similar bans:

"Another French city, Saint-Raphael, has also instituted similar rules to curb the pollution of the water and air around the city. Other European ports such as Santorini and Venice have also cited cruise ships as a significant cause of over-tourism across the region."

If you live and/or work in a port city, it seems worthwhile to ask your local government or port authority what it is doing about sustainability and pollution. The video below explains some of the features in this new "expedition ship" with itineraries and activities that focus upon science:


Video courtesy of Hurtigruten

[Editor's note: this post was updated to include a photo of solar panels on the Celebrity Solstice cruise ship.]


Millions of Americans’ Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek.

[Editor's note: today's guest blog post, by reporters at ProPublica, explores data security issues within the healthcare industry and its outsourcing vendors. It is reprinted with permission.]

By Jack Gillum, Jeff Kao and Jeff Larson - ProPublica

Medical images and health data belonging to millions of Americans, including X-rays, MRIs and CT scans, are sitting unprotected on the internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the U.S. and millions more around the world. In some cases, a snoop could use free software programs — or just a typical web browser — to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers — computers that are used to store and retrieve medical data — in the U.S. that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers and mobile X-ray services.

The insecure servers we uncovered add to a growing list of medical records systems that have been compromised in recent years. Unlike some of the more infamous recent security breaches, in which hackers circumvented a company’s cyber defenses, these records were often stored on servers that lacked the security precautions that long ago became standard for businesses and government agencies.

"It’s not even hacking. It’s walking into an open door," said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them of what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of U.S. company MobilexUSA displayed the names of more than a million patients — all by typing in a simple data query. Their dates of birth, doctors and procedures were also included.

Alerted by ProPublica, MobilexUSA tightened its security earlier this month. The company takes mobile X-rays and provides imaging services to nursing homes, rehabilitation hospitals, hospice agencies and prisons. "We promptly mitigated the potential vulnerabilities identified by ProPublica and immediately began an ongoing, thorough investigation," MobilexUSA’s parent company said in a statement.

Another imaging system, tied to a physician in Los Angeles, allowed anyone on the internet to see his patients’ echocardiograms. (The doctor did not respond to inquiries from ProPublica.) All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates and, in some cases, Social Security numbers.

Experts say it’s hard to pinpoint who’s to blame for the failure to protect the privacy of medical images. Under U.S. law, health care providers and their business associates are legally accountable for securing the privacy of patient data. Several experts said such exposure of patient data could violate the Health Insurance Portability and Accountability Act, or HIPAA, the 1996 law that requires health care providers to keep Americans’ health data confidential and secure.

Although ProPublica found no evidence that patient data was copied from these systems and published elsewhere, the consequences of unauthorized access to such information could be devastating. "Medical records are one of the most important areas for privacy because they’re so sensitive. Medical knowledge can be used against you in malicious ways: to shame people, to blackmail people," said Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, a digital-rights group.

"This is so utterly irresponsible," he said.

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customer’s computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. "Suddenly, medical security has become a do-it-yourself project," Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the U.S.

Schrader found five servers in Germany and 187 in the U.S. that made patients’ records available without a password. ProPublica and Bayerischer Rundfunk also scanned Internet Protocol addresses and identified, when possible, which medical provider they belonged to.
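At its simplest level, a scan like Greenbone's starts by checking whether a host accepts TCP connections on the ports DICOM servers conventionally use (104 and 11112). The sketch below uses only Python's standard library; the host name is a placeholder, and a real assessment would also require a DICOM-level handshake — and, of course, authorization to scan:

```python
import socket

DICOM_PORTS = (104, 11112)  # conventional DICOM listening ports

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

def probe(host: str) -> list[int]:
    """List which conventional DICOM ports answer on a host."""
    return [p for p in DICOM_PORTS if port_open(host, p)]

# probe("pacs.example.org")  # placeholder host; never scan without permission
```

An open port alone does not prove records are exposed; the investigations described above went further and confirmed that patient data could be retrieved without any password.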

ProPublica independently determined how many patients could be affected in America, and found some servers ran outdated operating systems with known security vulnerabilities. Schrader said that data from more than 13.7 million medical tests in the U.S. were available online, including more than 400,000 in which X-rays and other images could be downloaded.

The privacy problem traces back to the medical profession’s shift from analog to digital technology. Long gone are the days when film X-rays were displayed on fluorescent light boards. Today, imaging studies can be instantly uploaded to servers and viewed over the internet by doctors in their offices.

In the early days of this technology, as with much of the internet, little thought was given to security. The passage of HIPAA required patient information to be protected from unauthorized access. Three years later, the medical imaging industry published its first security standards.

Our reporting indicated that large hospital chains and academic medical centers did put security protections in place. Most of the cases of unprotected data we found involved independent radiologists, medical imaging centers or archiving services.

One German patient, Katharina Gaspari, got an MRI three years ago and said she normally trusts her doctors. But after Bayerischer Rundfunk showed Gaspari her images available online, she said: "Now, I am not sure if I still can." The German system that stored her records was locked down last week.

We found that some systems used to archive medical images also lacked security precautions. Denver-based Offsite Image left open the names and other details of more than 340,000 human and veterinary records, including those of a large cat named "Marshmellow," ProPublica found. An Offsite Image executive told ProPublica the company charges clients $50 for access to the site and then $1 per study. "Your data is safe and secure with us," Offsite Image’s website says.

The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.

"We were just never even aware that there was a possibility that could even happen," Nelms said.

In 1985, an industry group that included radiologists and makers of imaging equipment created a standard for medical imaging software. The standard, which is now called DICOM, spelled out how medical imaging devices talk to each other and share information.

We shared our findings with officials from the Medical Imaging & Technology Alliance, the group that oversees the standard. They acknowledged that there were hundreds of servers with an open connection on the internet, but suggested the blame lay with the people who were running them.

"Even though it is a comparatively small number," the organization said in a statement, "it may be possible that some of those systems may contain patient records. Those likely represent bad configuration choices on the part of those operating those systems."

Meeting minutes from 2017 show that a working group on security learned of Pianykh’s findings and suggested meeting with him to discuss them further. That “action item” was listed for several months, but Pianykh said he was never contacted. The medical imaging alliance told ProPublica last week that the group did not meet with Pianykh because it felt the concerns he raised were sufficiently addressed in his article. It said the committee concluded its security standards were not flawed.

Pianykh said that misses the point. It’s not a lack of standards; it’s that medical device makers don’t follow them. “Medical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice,” Pianykh wrote in 2016.

ProPublica’s latest findings follow several other major breaches. In 2015, U.S. health insurer Anthem Inc. revealed that private data belonging to more than 78 million people was exposed in a hack. In the last two years, U.S. officials have reported that more than 40 million people have had their medical data compromised, according to an analysis of records from the U.S. Department of Health and Human Services.

Joy Pritts, a former HHS privacy official, said the government isn’t tough enough in policing patient privacy breaches. She cited an April announcement from HHS that lowered the maximum annual fine, from $1.5 million to $250,000, for what’s known as “corrected willful neglect” — the result of conscious failures or reckless indifference that a company tries to fix. She said that large firms would not only consider those fines as just the cost of doing business, but that they could also negotiate with the government to get them reduced. A ProPublica examination in 2015 found few consequences for repeat HIPAA offenders.

A spokeswoman for HHS’ Office for Civil Rights, which enforces HIPAA violations, said it wouldn’t comment on open or potential investigations.

"What we typically see in the health care industry is that there is Band-Aid upon Band-Aid applied" to legacy computer systems, said Singh, the cybersecurity expert. She said it’s a “shared responsibility” among manufacturers, standards makers and hospitals to ensure computer servers are secured.

"It’s 2019," she said. "There’s no reason for this."

How Do I Know if My Medical Imaging Data is Secure?

If you are a patient:

If you have had a medical imaging scan (e.g., X-ray, CT scan, MRI, ultrasound), ask the health care provider that did the scan, or your doctor, whether access to your images requires a login and password. Also ask your doctor whether their office, or the medical imaging provider to which they refer patients, conducts a regular security assessment as required by HIPAA.

If you are a medical imaging provider or doctor’s office:

Researchers have found that picture archiving and communication systems (PACS) servers implementing the DICOM standard may be at risk if they are connected directly to the internet without a VPN or firewall, or if access to them does not require a secure password. You or your IT staff should make sure that your PACS server cannot be accessed via the internet without a VPN connection and password. If you know the IP address of your PACS server but are not sure whether it is (or has been) accessible via the internet, please reach out to us at medicalimaging@propublica.org.
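For IT staff, the first check described above can be approximated in a few lines of Python: test whether a PACS server's DICOM port even accepts TCP connections from outside your network. This is a minimal, illustrative sketch, not a substitute for a proper security assessment; the hostname in the usage comment is hypothetical, and a successful connect only shows the port is reachable, not whether the DICOM service requires authentication.

```python
import socket

# Common DICOM ports used by PACS servers (defaults vary by vendor and configuration).
DICOM_PORTS = (104, 11112)

def dicom_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A DICOM port reachable from outside your network is a warning sign,
    not proof of exposure: the server may still require authentication.
    Run this from a machine OUTSIDE your VPN/firewall to approximate
    what an internet-based scanner would see.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, or DNS failure
        return False

# Example (hypothetical hostname -- substitute your PACS server's public address):
#   for port in DICOM_PORTS:
#       print(port, dicom_port_reachable("pacs.example.org", port))
```

If the port answers, a deeper check (for example, whether an unauthenticated DICOM C-ECHO or C-FIND succeeds) should be done by your security staff or an outside firm, as the article advises.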

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Study: Anonymized Data Cannot Be Totally Anonymous. And 'Homomorphic Encryption' Explained

Many online users have encountered situations where companies collect data with the promise that it is safe because the data has been anonymized -- all personally identifiable data elements have been removed. How safe is this, really? A recent study reinforced earlier findings that it isn't as safe as promised. Anonymized data can be de-anonymized, that is, re-identified to individual persons.

The Guardian UK reported:

"... data can be deanonymised in a number of ways. In 2008, an anonymised Netflix data set of film ratings was deanonymised by comparing the ratings with public scores on the IMDb film website in 2014; the home addresses of New York taxi drivers were uncovered from an anonymous data set of individual trips in the city; and an attempt by Australia’s health department to offer anonymous medical billing data could be reidentified by cross-referencing “mundane facts” such as the year of birth for older mothers and their children, or for mothers with many children. Now researchers from Belgium’s Université catholique de Louvain (UCLouvain) and Imperial College London have built a model to estimate how easy it would be to deanonymise any arbitrary dataset. A dataset with 15 demographic attributes, for instance, “would render 99.98% of people in Massachusetts unique”. And for smaller populations, it gets easier..."

According to the U.S. Census Bureau, the population of Massachusetts was about 6.9 million on July 1, 2018. How did this de-anonymization problem happen? Scientific American explained:

"Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model."

Data brokers, including credit-reporting agencies, have collected a massive number of demographic data attributes about nearly every person. According to this 2018 report, Acxiom has compiled about 5,000 data elements for each of 700 million persons worldwide.

It's reasonable to assume that credit-reporting agencies and other data brokers have similar capabilities. So, data brokers' massive databases can make it relatively easy to re-identify data that has supposedly been anonymized. This means consumers don't get the privacy promised.

What's the solution? Researchers suggest that data brokers must develop new anonymization methods, and rigorously test them to ensure anonymization truly works. And data brokers must be held to higher data security standards.

Any legislation serious about protecting consumers' privacy must address this, too. What do you think?


The Extortion Economy: How Insurance Companies Are Fueling a Rise in Ransomware Attacks

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which discusses the intersection of cyberattacks, ransomware, and the insurance industry. It is reprinted with permission.]

By Renee Dudley, ProPublica

On June 24, the mayor and council of Lake City, Florida, gathered in an emergency session to decide how to resolve a ransomware attack that had locked the city’s computer files for the preceding fortnight. Following the Pledge of Allegiance, Mayor Stephen Witt led an invocation. “Our heavenly father,” Witt said, “we ask for your guidance today, that we do what’s best for our city and our community.”

Witt and the council members also sought guidance from City Manager Joseph Helfenberger. He recommended that the city allow its cyber insurer, Beazley, an underwriter at Lloyd’s of London, to pay the ransom of 42 bitcoin, then worth about $460,000. Lake City, which was covered for ransomware under its cyber-insurance policy, would only be responsible for a $10,000 deductible. In exchange for the ransom, the hacker would provide a key to unlock the files.

“If this process works, it would save the city substantially in both time and money,” Helfenberger told them.

Without asking questions or deliberating, the mayor and the council unanimously approved paying the ransom. The six-figure payment, one of several that U.S. cities have handed over to hackers in recent months to retrieve files, made national headlines.

Left unmentioned in Helfenberger’s briefing was that the city’s IT staff, together with an outside vendor, had been pursuing an alternative approach. Since the attack, they had been attempting to recover backup files that were deleted during the incident. On Beazley’s recommendation, the city chose to pay the ransom because the cost of a prolonged recovery from backups would have exceeded its $1 million coverage limit, and because it wanted to resume normal services as quickly as possible.

“Our insurance company made [the decision] for us,” city spokesman Michael Lee, a sergeant in the Lake City Police Department, said. “At the end of the day, it really boils down to a business decision on the insurance side of things: them looking at how much is it going to cost to fix it ourselves and how much is it going to cost to pay the ransom.”

The mayor, Witt, said in an interview that he was aware of the efforts to recover backup files but preferred to have the insurer pay the ransom because it was less expensive for the city. “We pay a $10,000 deductible, and we get back to business, hopefully,” he said. “Or we go, ‘No, we’re not going to do that,’ then we spend money we don’t have to just get back up and running. And so to me, it wasn’t a pleasant decision, but it was the only decision.”

Ransomware is proliferating across America, disabling computer systems of corporations, city governments, schools and police departments. This month, attackers seeking millions of dollars encrypted the files of 22 Texas municipalities. Overlooked in the ransomware spree is the role of an industry that is both fueling and benefiting from it: insurance. In recent years, cyber insurance sold by domestic and foreign companies has grown into an estimated $7 billion to $8 billion-a-year market in the U.S. alone, according to Fred Eslami, an associate director at AM Best, a credit rating agency that focuses on the insurance industry. While insurers do not release information about ransom payments, ProPublica has found that they often accommodate attackers’ demands, even when alternatives such as saved backup files may be available.

The FBI and security researchers say paying ransoms contributes to the profitability and spread of cybercrime and in some cases may ultimately be funding terrorist regimes. But for insurers, it makes financial sense, industry insiders said. It holds down claim costs by avoiding expenses such as covering lost revenue from snarled services and ongoing fees for consultants aiding in data recovery. And, by rewarding hackers, it encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.

“The onus isn’t on the insurance company to stop the criminal, that’s not their mission. Their objective is to help you get back to business. But it does beg the question, when you pay out to these criminals, what happens in the future?” said Loretta Worters, spokeswoman for the Insurance Information Institute, a nonprofit industry group based in New York. Attackers “see the deep pockets. You’ve got the insurance industry that’s going to pay out, this is great.”

A spokesperson for Lloyd’s, which underwrites about one-third of the global cyber-insurance market, said that coverage is designed to mitigate losses and protect against future attacks, and that victims decide whether to pay ransoms. “Coverage is likely to include, in the event of an attack, access to experts who will help repair the damage caused by any cyberattack and ensure any weaknesses in a company’s cyberprotection are eliminated,” the spokesperson said. “A decision whether to pay a ransom will fall to the company or individual that has been attacked.” Beazley declined comment.

Fabian Wosar, chief technology officer for anti-virus provider Emsisoft, said he recently consulted for one U.S. corporation that was attacked by ransomware. After it was determined that restoring files from backups would take weeks, the company’s insurer pressured it to pay the ransom, he said. The insurer wanted to avoid having to reimburse the victim for revenues lost as a result of service interruptions during recovery of backup files, as its coverage required, Wosar said. The company agreed to have the insurer pay the approximately $100,000 ransom. But the decryptor obtained from the attacker in return didn’t work properly and Wosar was called in to fix it, which he did. He declined to identify the client and the insurer, which also covered his services.

“Paying the ransom was a lot cheaper for the insurer,” he said. “Cyber insurance is what’s keeping ransomware alive today. It’s a perverted relationship. They will pay anything, as long as it is cheaper than the loss of revenue they have to cover otherwise.”

Worters, the industry spokeswoman, said ransom payments aren’t the only example of insurers saving money by enriching criminals. For instance, the companies may pay fraudulent claims — for example, from a policyholder who sets a car on fire to collect auto insurance — when it’s cheaper than pursuing criminal charges. “You don’t want to perpetuate people committing fraud,” she said. “But there are some times, quite honestly, when companies say: ’This fraud is not a ton of money. We are better off paying this.’ ... It’s much like the ransomware, where you’re paying all these experts and lawyers, and it becomes this huge thing.”

Insurers approve or recommend paying a ransom when doing so is likely to minimize costs by restoring operations quickly, regulators said. As in Lake City, recovering files from backups can be arduous and time-consuming, potentially leaving insurers on the hook for costs ranging from employee overtime to crisis management public relations efforts, they said.

“They’re going to look at their overall claim and dollar exposure and try to minimize their losses,” said Eric Nordman, a former director of the regulatory services division of the National Association of Insurance Commissioners, or NAIC, the organization of state insurance regulators. “If it’s more expeditious to pay the ransom and get the key to unlock it, then that’s what they’ll do.”

As insurance companies have approved six- and seven-figure ransom payments over the past year, criminals’ demands have climbed. The average ransom payment among clients of Coveware, a Connecticut firm that specializes in ransomware cases, is about $36,000, according to its quarterly report released in July, up sixfold from last October. Josh Zelonis, a principal analyst for the Massachusetts-based research company Forrester, said the increase in payments by cyber insurers has correlated with a resurgence in ransomware after it had started to fall out of favor in the criminal world about two years ago.

One cybersecurity company executive said his firm has been told by the FBI that hackers are specifically extorting American companies that they know have cyber insurance. After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware, Wosar said. Hackers could also identify insured targets from public filings; the Securities and Exchange Commission suggests that public companies consider reporting “insurance coverage relating to cybersecurity incidents.”

Even when the attackers don’t know that insurers are footing the bill, the repeated capitulations to their demands give them confidence to ask for ever-higher sums, said Thomas Hofmann, vice president of intelligence at Flashpoint, a cyber-risk intelligence firm that works with ransomware victims.

Ransom demands used to be “a lot less,” said Worters, the industry spokeswoman. But if hackers think they can get more, “they’re going to ask for more. So that’s what’s happening. ... That’s certainly a concern.”

In the past year, dozens of public entities in the U.S. have been paralyzed by ransomware. Many have paid the ransoms, either from their own funds or through insurance, but others have refused on the grounds that it’s immoral to reward criminals. Rather than pay a $76,000 ransom in May, the city of Baltimore — which did not have cyber insurance — sacrificed more than $5.3 million to date in recovery expenses, a spokesman for the mayor said this month. Similarly, Atlanta, which did have a cyber policy, spurned a $51,000 ransom demand last year and has spent about $8.5 million responding to the attack and recovering files, a spokesman said this month. Spurred by those and other cities, the U.S. Conference of Mayors adopted a resolution this summer not to pay ransoms.

Still, many public agencies are delighted to have their insurers cover ransoms, especially when the ransomware has also encrypted backup files. Johannesburg-Lewiston Area Schools, a school district in Michigan, faced that predicament after being attacked in October. Beazley, the insurer handling the claim, helped the district conduct a cost-benefit analysis, which found that paying a ransom was preferable to rebuilding the systems from scratch, said Superintendent Kathleen Xenakis-Makowski.

“They sat down with our technology director and said, ‘This is what’s affected, and this is what it would take to re-create,’” said Xenakis-Makowski, who has since spoken at conferences for school officials about the importance of having cyber insurance. She said the district did not discuss the ransom decision publicly at the time in part to avoid a prolonged debate over the ethics of paying. “There’s just certain things you have to do to make things work,” she said.

Ransomware is one of the most common cybercrimes in the world. Although it is often cast as a foreign problem, because hacks tend to originate from countries such as Russia and Iran, ProPublica has found that American industries have fostered its proliferation. We reported in May on two ransomware data recovery firms that purported to use their own technology to disable ransomware but in reality often just paid the attackers. One of the firms, Proven Data, of Elmsford, New York, tells victims on its website that insurance is likely to cover the cost of ransomware recovery.

Lloyd’s of London, the world’s largest specialty insurance market, said it pioneered the first cyber liability policy in 1999. Today, it offers cyber coverage through 74 syndicates — formed by one or more Lloyd’s members such as Beazley joining together — that provide capital and accept and spread risk. Eighty percent of the cyber insurance written at Lloyd’s is for entities based in the U.S. The Lloyd’s market is famous for insuring complex, high-risk and unusual exposures, such as climate-change consequences, Arctic explorers and Bruce Springsteen’s voice.

Many insurers were initially reluctant to cover cyber disasters, in part because of the lack of reliable actuarial data. When they protect customers against traditional risks such as fires, floods and auto accidents, they price policies based on authoritative information from national and industry sources. But, as Lloyd’s noted in a 2017 report, “there are no equivalent sources for cyber-risk,” and the data used to set premiums is collected from the internet. Such publicly available data is likely to underestimate the potential financial impact of ransomware for an insurer. According to a report by global consulting firm PwC, both insurers and victimized companies are reluctant to disclose breaches because of concerns over loss of competitive advantage or reputational damage.

Despite the uncertainty over pricing, dozens of carriers eventually followed Lloyd’s in embracing cyber coverage. Other lines of insurance are expected to shrink in the coming decades, said Nordman, the former regulator. Self-driving cars, for example, are expected to lead to significantly fewer car accidents and a corresponding drop in premiums, according to estimates. Insurers are seeking new areas of opportunity, and “cyber is one of the small number of lines that is actually growing,” Nordman said.

Driven partly by the spread of ransomware, the cyber insurance market has grown rapidly. Between 2015 and 2017, total U.S. cyber premiums written by insurers that reported to the NAIC doubled to an estimated $3.1 billion, according to the most recent data available.

Cyber policies have been more profitable for insurers than other lines of insurance. The loss ratio for U.S. cyber policies was about 35% in 2018, according to a report by Aon, a London-based professional services firm. In other words, for every dollar in premiums collected from policyholders, insurers paid out roughly 35 cents in claims. That compares to a loss ratio of about 62% across all property and casualty insurance, according to data compiled by the NAIC of insurers that report to them. Besides ransomware, cyber insurance frequently covers costs for claims related to data breaches, identity theft and electronic financial scams.

During the underwriting process, insurers typically inquire about a prospective policyholder’s cyber security, such as the strength of its firewall or the viability of its backup files, Nordman said. If they believe the organization’s defenses are inadequate, they might decline to write a policy or charge more for it, he said. North Dakota Insurance Commissioner Jon Godfread, chairman of the NAIC’s innovation and technology task force, said some insurers suggest prospective policyholders hire outside firms to conduct “cyber audits” as a “risk mitigation tool” aimed to prevent attacks — and claims — by strengthening security.

“Ultimately, you’re going to see that prevention of the ransomware attack is likely going to come from the insurance carrier side,” Godfread said. “If they can prevent it, they don’t have to pay out a claim, it’s better for everybody.”

Not all cyber insurance policies cover ransom payments. After a ransomware attack on Jackson County, Georgia, last March, the county billed insurance for credit monitoring services and an attorney but had to pay the ransom of about $400,000, County Manager Kevin Poe said. Other victims have struggled to get insurers to pay cyber-related claims. Food company Mondelez International and pharmaceutical company Merck sued insurers last year in state courts after the carriers refused to reimburse costs associated with damage from NotPetya malware. The insurers cited “hostile or warlike action” or “act of war” exclusions because the malware was linked to the Russian military. The cases are pending.

The proliferation of cyber insurers willing to accommodate ransom demands has fostered an industry of data recovery and incident response firms that insurers hire to investigate attacks and negotiate with and pay hackers. This year, two FBI officials who recently retired from the bureau opened an incident response firm in Connecticut. The firm, The Aggeris Group, says on its website that it offers “an expedient response by providing cyber extortion negotiation services and support recovery from a ransomware attack.”

Ramarcus Baylor, a principal consultant for The Crypsis Group, a Virginia incident response firm, said he recently worked with two companies hit by ransomware. Although both clients had backup systems, insurers promised to cover the six-figure ransom payments rather than spend several days assessing whether the backups were working. Losing money every day the systems were down, the clients accepted the offer, he said.

Crypsis CEO Bret Padres said his company gets many of its clients from insurance referrals. There’s “really good money in ransomware” for the cyberattacker, recovery experts and insurers, he said. Routine ransom payments have created a “vicious circle,” he said. “It’s a hard cycle to break because everyone involved profits: We do, the insurance carriers do, the attackers do.”

Chris Loehr, executive vice president of Texas-based Solis Security, said there are “a lot of times” when backups are available but clients still pay ransoms. Everyone from the victim to the insurer wants the ransom paid and systems restored as fast as possible, Loehr said.

“They figure out that it’s going to take a month to restore from the cloud, and so even though they have the data backed up,” paying a ransom to obtain a decryption key is faster, he said.

“Let’s get it negotiated very quickly, let’s just get the keys, and get the customer decrypted to minimize business interruption loss,” he continued. “It makes the client happy, it makes the attorneys happy, it makes the insurance happy.”

If clients morally oppose ransom payments, Loehr said, he reminds them where their financial interests lie, and of the high stakes for their businesses and employees. “I’ll ask, ‘The situation you’re in, how long can you go on like this?’” he said. “They’ll say, ‘Well, not for long.’ Insurance is only going to cover you for up to X amount of dollars, which gets burned up fast.”

“I know it sucks having to pay off assholes, but that’s what you gotta do,” he said. “And they’re like, ‘Yeah, OK, let’s get it done.’ You gotta kind of take charge and tell them, ‘This is the way it’s going to be or you’re dead in the water.’”

Lloyd’s-backed CFC, a specialist insurance provider based in London, uses Solis for some of its U.S. clients hit by ransomware. Graeme Newman, chief innovation officer at CFC, said “we work relentlessly” to help victims improve their backup security. “Our primary objective is always to get our clients back up and running as quickly as possible,” he said. “We would never recommend that our clients pay ransoms. This would only ever be a very final course of action, and any decision to do so would be taken by our clients, not us as an insurance company.”

As ransomware has burgeoned, the incident response division of Solis has “taken off like a rocket,” Loehr said. Loehr’s need for a reliable way to pay ransoms, which typically are transacted in digital currencies such as Bitcoin, spawned Sentinel Crypto, a Florida-based money services business managed by his friend, Wesley Spencer. Sentinel’s business is paying ransoms on behalf of clients whose insurers reimburse them, Loehr and Spencer said.

New York-based Flashpoint also pays ransoms for insurance companies. Hofmann, the vice president, said insurers typically give policyholders a toll-free number to dial as soon as they realize they’ve been hit. The number connects to a lawyer who provides a list of incident response firms and other contractors. Insurers tightly control expenses, approving or denying coverage for the recovery efforts advised by the vendors they suggest.

“Carriers are absolutely involved in the decision making,” Hofmann said. On both sides of the attack, “insurance is going to transform this entire market,” he said.

On June 10, Lake City government officials noticed they couldn’t make calls or send emails. IT staff then discovered encrypted files on the city’s servers and disconnected the infected servers from the internet. The city soon learned it was struck by Ryuk ransomware. Over the past year, unknown attackers using the Ryuk strain have besieged small municipalities and technology and logistics companies, demanding ransoms up to $5 million, according to the FBI.

Shortly after realizing it had been attacked, Lake City contacted the Florida League of Cities, which provides insurance for more than 550 public entities in the state. Beazley is the league’s reinsurer for cyber coverage, and they share the risk. The league declined to comment.

Initially, the city had hoped to restore its systems without paying a ransom. IT staff was “plugging along” and had taken server drives to a local vendor who’d had “moderate success at getting the stuff off of it,” Lee said. However, the process was slow and more challenging than anticipated, he said.

As the local technicians worked on the backups, Beazley requested a sample encrypted file and the ransom note so its approved vendor, Coveware, could open negotiations with the hackers, said Steve Roberts, Lake City’s director of risk management. The initial ransom demand was 86 bitcoin, or about $700,000 at the time, Coveware CEO Bill Siegel said. “Beazley was not happy with it — it was way too high,” Roberts said. “So [Coveware] started negotiations with the perps and got it down to the 42 bitcoin. Insurance stood by with the final negotiation amount, waiting for our decision.”

Lee said Lake City may have been able to achieve a “majority recovery” of its files without paying the ransom, but it probably would have cost “three times as much money trying to get there.” The city fired its IT director, Brian Hawkins, in the midst of the recovery efforts. Hawkins, who is suing the city, said in an interview posted online by his new employer that he was made “the scapegoat” for the city’s unpreparedness. The “recovery process on the files was taking a long time” and “the lengthy process was a major factor in paying the ransom,” he said in the interview.

On June 25, the day after the council meeting, the city said in a press release that while its backup recovery efforts “were initially successful, many systems were determined to be unrecoverable.” Lake City fronted the ransom amount to Coveware, which converted the money to bitcoin, paid the attackers and received a fee for its services. The Florida League of Cities reimbursed the city, Roberts said.

Lee acknowledged that paying ransoms spurs more ransomware attacks. But as cyber insurance becomes ubiquitous, he said, he trusts the industry’s judgment.

“The insurer is the one who is going to get hit with most of this if it continues,” he said. “And if they’re the ones deciding it’s still better to pay out, knowing that means they’re more likely to have to do it again — if they still find that it’s the financially correct decision — it’s kind of hard to argue with them because they know the cost-benefit of that. I have a hard time saying it’s the right decision, but maybe it makes sense with a certain perspective.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.



51 Corporations Tell Congress: A Federal Privacy Law Is Needed. 145 Corporations Tell The U.S. Senate: Inaction On Gun Violence Is 'Simply Unacceptable'

Last week, several of the largest corporations petitioned the United States government for federal legislation in two key topics: consumer privacy and gun reform.

First, the Chief Executive Officers (CEOs) at 51 corporations sent a jointly signed letter to leaders in Congress asking for a federal privacy law to supersede laws emerging in several states. ZDNet reported:

"The open-letter was sent on behalf of Business Roundtable, an association made up of the CEOs of America's largest companies... CEOs blamed a patchwork of differing privacy regulations that are currently being passed in multiple US states, and by several US agencies, as one of the reasons why consumer privacy is a mess in the US. This patchwork of privacy regulations is creating problems for their companies, which have to comply with an ever-increasing number of laws across different states and jurisdictions. Instead, the 51 CEOs would like one law that governs all user privacy and data protection across the US, which would simplify product design, compliance, and data management."

The letter was sent to U.S. Senate Majority Leader Mitch McConnell, U.S. Senate Minority Leader Charles E. Schumer, Senator Roger F. Wicker (Chairman of the Committee on Commerce, Science and Transportation), Nancy Pelosi (Speaker of the U.S. House of Representatives), Kevin McCarthy (Minority Leader of the U.S. House of Representatives), Frank Pallone, Jr. (Chairman of the Committee on Energy and Commerce in the U.S. House of Representatives), and other ranking politicians.

The letter stated, in part:

"Consumers should not and cannot be expected to understand rules that may change depending upon the state in which they reside, the state in which they are accessing the internet, and the state in which the company’s operation is providing those resources or services. Now is the time for Congress to act and ensure that consumers are not faced with confusion about their rights and protections based on a patchwork of inconsistent state laws. Further, as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened. "

That sounds fair and noble enough. After writing this blog for more than 12 years, I have learned that details matter: who writes the proposed legislation and what that legislation contains. It is too early to tell whether the proposed legislation would be weaker or stronger than what some states have implemented.

Some of the notable companies which signed the joint letter included AT&T, Amazon, Comcast, Dell Technologies, FedEx, IBM, Qualcomm, Salesforce, SAP, Target, and Walmart. Signers from the financial services sector included American Express, Bank of America, Citigroup, JPMorgan Chase, MasterCard, State Farm Insurance, USAA, and Visa. Several notable companies did not sign the letter: Facebook, Google, Microsoft, and Verizon.

Second, The New York Times reported that executives from 145 companies sent a joint letter to members of the U.S. Senate demanding that they take action on gun violence. The letter stated, in part (emphasis added):

"... we are writing to you because we have a responsibility and obligation to stand up for the safety of our employees, customers, and all Americans in the communities we serve across the country. Doing nothing about America's gun violence crisis is simply unacceptable and it is time to stand with the American public on gun safety. Gun violence in America is not inevitable; it's preventable. There are steps Congress can, and must take to prevent and reduce gun violence. We need our lawmakers to support common sense gun laws... we urge the Senate to stand with the American public and take action on gun safety by passing a bill to require background checks on all gun sales and a strong Red Flag law that would allow courts to issue life-saving extreme risk protection orders..."

Some of the notable companies which signed the letter included Airbnb, Bain Capital, Bloomberg LP, Conde Nast, DICK'S Sporting Goods, Gap Inc., Levi Strauss & Company, Lyft, Pinterest, Publicis Groupe, Reddit, Royal Caribbean Cruises Ltd., Twitter, Uber, and Yelp.

Earlier this year, the U.S. House of Representatives passed legislation to address gun violence. So far, the U.S. Senate has done nothing. Representative Kathy Castor (Florida's 14th District) explained the actions the House took in 2019:

"The Bipartisan Background Checks Act that I championed is a commonsense step to address gun violence and establish measures that protect our community and families. America is suffering from a long-term epidemic of gun violence – each year, 120,000 Americans are injured and 35,000 die by firearms. This bill ensures that all gun sales or transfers are subject to a background check, stopping senseless violence by individuals to themselves and others... Additionally, the Democratic House passed H.R. 1112 – the Enhanced Background Checks Act of 2019 – which addresses the Charleston Loophole that currently allows gun dealers to sell a firearm to dangerous individuals if the FBI background check has not been completed within three business days. H.R. 1112 makes the commonsense and important change to extend the review period to 10 business days..."

Findings from a February 2018 Quinnipiac national poll:

"American voters support stricter gun laws 66 - 31 percent, the highest level of support ever measured by the independent Quinnipiac University National Poll, with 50 - 44 percent support among gun owners and 62 - 35 percent support from white voters with no college degree and 58 - 38 percent support among white men... Support for universal background checks is itself almost universal, 97 - 2 percent, including 97 - 3 percent among gun owners. Support for gun control on other questions is at its highest level since the Quinnipiac University Poll began focusing on this issue in the wake of the Sandy Hook massacre: i) 67 - 29 percent for a nationwide ban on the sale of assault weapons; ii) 83 - 14 percent for a mandatory waiting period for all gun purchases. It is too easy to buy a gun in the U.S. today..."


Court Okays 'Data Scraping' By Analytics Firm Of Users' Public LinkedIn Profiles. Lots Of Consequences

LinkedIn logo Earlier this week, a Federal appeals court affirmed an August 2017 injunction which required LinkedIn, a professional networking platform owned by Microsoft Corporation, to allow hiQ Labs, Inc. to access members' profiles. This ruling has implications for everyone.

hiQ Labs logo First, some background. The Naked Security blog by Sophos explained in December, 2017:

"... hiQ is a company that makes its money by “scraping” LinkedIn’s public member profiles to feed two analytical systems, Keeper and Skill Mapper. Keeper can be used by employers to detect staff that might be thinking about leaving while Skill Mapper summarizes the skills and status of current and future employees. For several years, this presented no problems until, in 2016, LinkedIn decided to offer something similar, at which point it sent hiQ and others in the sector cease and desist letters and started blocking the bots reading its pages."

So, hiQ's apps use algorithms to predict, for its clients (prospective or current employers), which employees will stay or go. Gizmodo explained the law LinkedIn invoked in its court arguments, namely that the:

"... practice of scraping publicly available information from their platform violated the 1986 Computer Fraud and Abuse Act (CFAA). The CFAA is infamously vaguely written and makes it illegal to access a “protected computer” without or in excess of “authorization”—opening the door to sweeping interpretations that could be used to criminalize conduct not even close to what would traditionally be understood as hacking."

Second, the latest court ruling basically said two things: a) it is legal (and doesn't violate hacking laws) for companies to scrape information contained in publicly available profiles; and b) LinkedIn must allow hiQ (and potentially other firms) to continue with data-scraping. This has plenty of implications.
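To make concrete what "scraping" public pages involves, here is a minimal, hypothetical sketch: a script that extracts fields from profile markup using only Python's standard library. The HTML and class names are invented for illustration; a real scraper would fetch live pages over HTTP with automated scripts, which is exactly the conduct at issue in this case.

```python
from html.parser import HTMLParser

# Hypothetical markup resembling a public profile page (invented for illustration).
SAMPLE_PROFILE_HTML = """
<html><body>
  <h1 class="profile-name">Jane Doe</h1>
  <p class="profile-title">Data Analyst at Example Corp</p>
</body></html>
"""

class ProfileScraper(HTMLParser):
    """Collects text from elements whose class attribute starts with 'profile-'."""
    def __init__(self):
        super().__init__()
        self._current_field = None
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls.startswith("profile-"):
            self._current_field = cls.removeprefix("profile-")

    def handle_data(self, data):
        if self._current_field:
            self.fields[self._current_field] = data.strip()
            self._current_field = None

scraper = ProfileScraper()
scraper.feed(SAMPLE_PROFILE_HTML)
print(scraper.fields)  # {'name': 'Jane Doe', 'title': 'Data Analyst at Example Corp'}
```

Nothing here involves breaking into anything, which is the crux of the appeals court's reasoning: the data was already public.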

This recent ruling may surprise some readers, since the legality of data scraping was supposedly settled previously. MediaPost reported:

"Monday's ruling appears to effectively overrule a decision issued six years ago in a dispute between Craigslist and the data miner 3Taps, which also scraped publicly available listings. In that matter, 3Taps allegedly scraped real estate listings and made them available to the developers PadMapper and Lively. PadMapper allegedly meshed Craigslist's apartment listings with Google maps... U.S. District Court Judge Charles Breyer in the Northern District of California ruled in 2013 that 3Taps potentially violated the anti-hacking law by scraping listings from Craigslist after the company told it to stop doing so."

So, you can bet that both social media sites and data analytics firms closely watched and read the appeal court's ruling this week.

Third, in theory any company or agency could now legally scrape information from public profiles on the LinkedIn platform, including industries and entities (e.g., spy agencies worldwide) that job seekers never intended nor wanted to reach.

Many consumers simply signed up for and use LinkedIn to build professional relationships and/or to find jobs, either full-time as employees or as contractors. The 2019 social media survey by Pew Research found that 27 percent of adults in the United States use LinkedIn, with higher usage among persons with college degrees (51 percent), persons making more than $75K annually (49 percent), persons ages 25 - 29 (44 percent), persons ages 30 - 49 (37 percent), and urban residents (33 percent).

I'll bet that many LinkedIn users never imagined that their profiles would be used against them by data analytics firms. Like it or not, that is how consumers' valuable, personal data is used (abused?) by social media sites and their clients.

Fourth, the practice of data scraping has divided tech companies. Again, from the Naked Security blog post in 2017:

"Data scraping, it seems, has become a booming tech sector that increasingly divides the industry ideologically. One side believes LinkedIn is simply trying to shut down a competitor wanting to access public data LinkedIn merely displays rather than owns..."

The Electronic Frontier Foundation (EFF), the DuckDuckGo search engine, and the Internet Archive filed an amicus brief with the appeals court before its ruling. The EFF explained the groups' reasoning and urged the:

"... Court of Appeals to reject LinkedIn’s request to transform the CFAA from a law meant to target serious computer break-ins into a tool for enforcing its computer use policies. The social networking giant wants violations of its corporate policy against using automated scripts to access public information on its website to count as felony “hacking” under the Computer Fraud and Abuse Act, a 1986 federal law meant to criminalize breaking into private computer systems to access non-public information. But using automated scripts to access publicly available data is not "hacking," and neither is violating a website’s terms of use. LinkedIn would have the court believe that all "bots" are bad, but they’re actually a common and necessary part of the Internet. "Good bots" were responsible for 23 percent of Web traffic in 2016..."

So, bots are here to stay. And, it's up to LinkedIn executives to find a solution to protect their users' information.

Fifth, according to a Reuters report, the judge suggested a solution for LinkedIn: "eliminating the public access option." Hmmmm. Public, or at least broad, access is what many job seekers desire. So, a balance needs to be struck between truly "public" profiles, which anyone anywhere worldwide can access, and access limited to intended audiences (e.g., hiring executives at potential employers in certain industries).

Sixth, what struck me about the court ruling this week was that nobody in the courtroom represented the interests of LinkedIn users, of whom I am one. MediaPost reported:

"The appellate court discounted LinkedIn's argument that hiQ was harming users' privacy by scraping data even when people used a "do not broadcast" setting. "There is no evidence in the record to suggest that most people who select the 'Do Not Broadcast' option do so to prevent their employers from being alerted to profile changes made in anticipation of a job search," the judges wrote. "As the district court noted, there are other reasons why users may choose that option -- most notably, many users may simply wish to avoid sending their connections annoying notifications each time there is a profile change." "

What? Really?! We LinkedIn users have a natural, vested interest in controlling both our profiles and the sensitive, personal information they contain. Either LinkedIn failed to adequately represent the interests of its users, or the court didn't listen closely nor seek out additional evidence, or both.

Maybe that "no evidence in the record" finding regarding the 'Do Not Broadcast' feature will be the basis of another appeal or lawsuit.

With this latest court ruling, we LinkedIn users have totally lost control (except for deleting or suspending our LinkedIn accounts). It makes me wonder how a court could reach its decision without hearing directly from somebody representing LinkedIn users.

Seventh, it seems that LinkedIn needs to modify its platform in three key ways:

  1. Allow its users to specify which uses or applications (e.g., find full-time work, find contract work, build contacts in my industry or area of expertise, find/screen job candidates, advertise/promote a business, academic research, publish content, read news, dating, etc.) their profiles may be used for. The 'Do Not Broadcast' feature is clearly not strong enough;
  2. Allow its users to specify or approve individual users -- other actual persons who are LinkedIn users and not bots nor corporate accounts -- who can access their full, detailed profiles; and
  3. Outline in the user agreement the list of applications or uses profiles may be accessed for, so that both prospective and current LinkedIn users can make informed decisions. 
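As a thought experiment, the kind of per-user control suggested in items 1 and 2 could be modeled as a simple access policy checked before any full-profile view. All of the names and values below are invented for illustration; nothing like this exists in LinkedIn's actual platform.

```python
from dataclasses import dataclass, field

@dataclass
class ProfileAccessPolicy:
    """Hypothetical per-user settings: who may view a full profile, and for what purpose."""
    allowed_uses: set = field(default_factory=set)      # e.g. {"find_fulltime_work"}
    approved_viewers: set = field(default_factory=set)  # individual user IDs, not bots

    def may_view(self, viewer_id: str, purpose: str) -> bool:
        # Both conditions must hold: an approved person, for an approved purpose.
        return viewer_id in self.approved_viewers and purpose in self.allowed_uses

policy = ProfileAccessPolicy(
    allowed_uses={"find_fulltime_work", "build_contacts"},
    approved_viewers={"recruiter_42"},
)

print(policy.may_view("recruiter_42", "find_fulltime_work"))  # True
print(policy.may_view("analytics_bot", "talent_analytics"))   # False
```

Under a scheme like this, a data-analytics bot would be denied by default, rather than users having to opt out after the fact.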

This would give LinkedIn users some control over the sensitive, personal information in their profiles. Without control, the benefits of using LinkedIn quickly diminish. And, that's enough to cause me to rethink my use of LinkedIn, and either deactivate or delete my account.

What are your opinions of this ruling? If you currently use LinkedIn, will you continue using it? If you don't use LinkedIn and were considering it, will you still consider using it?


Google And YouTube To Pay $170 Million In Proposed Settlement To Resolve Charges Of Children's Privacy Violations

Google logo Today's blog post contains information all current and future parents should know. On Tuesday, the U.S. Federal Trade Commission (FTC) announced a proposed settlement agreement where YouTube LLC, and its parent company, Google LLC, will pay a monetary fine of $170 million to resolve charges that the video-sharing service illegally collected the personal information of children without their parents' consent.

YouTube logo The proposed settlement agreement requires YouTube and Google to pay $136 million to the FTC and $34 million to New York State to resolve charges that the video sharing service violated the Children’s Online Privacy Protection Act (COPPA) Rule. The announcement explained the allegations:

"... that YouTube violated the COPPA Rule by collecting personal information—in the form of persistent identifiers that are used to track users across the Internet—from viewers of child-directed channels, without first notifying parents and getting their consent. YouTube earned millions of dollars by using the identifiers, commonly known as cookies, to deliver targeted ads to viewers of these channels, according to the complaint."

"The COPPA Rule requires that child-directed websites and online services provide notice of their information practices and obtain parental consent prior to collecting personal information from children under 13, including the use of persistent identifiers to track a user’s Internet browsing habits for targeted advertising. In addition, third parties, such as advertising networks, are also subject to COPPA where they have actual knowledge they are collecting personal information directly from users of child-directed websites and online services... the FTC and New York Attorney General allege that while YouTube claimed to be a general-audience site, some of YouTube’s individual channels—such as those operated by toy companies—are child-directed and therefore must comply with COPPA."
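The "persistent identifiers" at issue are simple in mechanism: a server assigns a random ID the first time a browser visits and the browser replays it on every later request via a cookie, letting an ad network link those visits into one viewing history. A minimal sketch, with an illustrative cookie name (not YouTube's actual implementation):

```python
import uuid

TRACKING_COOKIE = "viewer_id"  # illustrative name

def handle_request(cookies: dict) -> tuple[str, dict]:
    """Return (viewer_id, cookies_to_set). Reuses an existing ID if the browser sent one."""
    viewer_id = cookies.get(TRACKING_COOKIE)
    new_cookies = {}
    if viewer_id is None:
        viewer_id = uuid.uuid4().hex              # mint a fresh persistent identifier
        new_cookies[TRACKING_COOKIE] = viewer_id  # browser stores and replays it
    return viewer_id, new_cookies

# First visit: an ID is minted; later visits replay it, linking the views together.
vid, to_set = handle_request({})
vid2, _ = handle_request({TRACKING_COOKIE: vid})
print(vid == vid2)  # True
```

Under COPPA, collecting such an identifier from viewers of child-directed content requires verifiable parental consent first, which is the heart of the FTC's complaint.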

While $170 million is a lot of money, it is tiny compared to the $5 billion fine the FTC assessed against Facebook. The fine is also tiny compared to Google's earnings. Alphabet Inc., the holding company which owns Google, generated pretax net income of $34.91 billion during 2018 on revenues of $136.96 billion.

In February, the FTC concluded a settlement with Musical.ly, a video social networking app now operating as TikTok, where Musical.ly paid $5.7 million to resolve allegations of COPPA violations. Regarding the proposed settlement with YouTube, Education Week reported:

"YouTube has said its service is intended for ages 13 and older, although younger kids commonly watch videos on the site and many popular YouTube channels feature cartoons or sing-a-longs made for children. YouTube has its own app for children, called YouTube Kids; the company also launched a website version of the service in August. The site says it requires parental consent and uses simple math problems to ensure that kids aren't signing in on their own. YouTube Kids does not target ads based on viewer interests the way YouTube proper does. The children's version does track information about what kids are watching in order to recommend videos. It also collects personally identifying device information."

The proposed settlement also requires YouTube and Google:

"... to develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA. In addition, the companies must notify channel owners that their child-directed content may be subject to the COPPA Rule’s obligations and provide annual training about complying with COPPA for employees who deal with YouTube channel owners. The settlement also prohibits Google and YouTube from violating the COPPA Rule, and requires them to provide notice about their data collection practices and obtain verifiable parental consent before collecting personal information from children."

The complaint and proposed consent decree were filed in the U.S. District Court for the District of Columbia. After approval by a judge, the proposed settlement becomes final. Hopefully, the fine and additional requirements will be enough to deter future abuses.


Operating Issues Continue To Affect The Integrity Of Products Sold On Amazon Site

Amazon logo News reports last week described in detail the operating issues that affect the integrity and reliability of products sold on the Amazon site. The Verge reported that some sellers:

"... hop onto fast-selling listings with counterfeit goods, or frame their competitors with fake reviews. One common tactic is to find a once popular, but now abandoned product and hijack its listing, using the page’s old reviews to make whatever you’re selling appear trustworthy. Amazon’s marketplace is so chaotic that not even Amazon itself is safe from getting hijacked. In addition to being a retail platform, Amazon sells its own house-brand goods under names like AmazonBasics, Rivet furniture, Happy Belly food, and hundreds of other labels."

The hijacked product pages include photos, descriptions, reviews, and/or comments from other products -- a confusing mix of content. You probably assumed that it isn't possible for this to happen, but it does. The Verge report explained:

"There are now more than 2 million sellers on the platform, and Amazon has struggled to maintain order. A recent Wall Street Journal investigation found thousands of items for sale on the site that were deceptively labeled or declared unsafe by federal regulators... A former Amazon employee who now works as a consultant for Amazon sellers, [Greer] has worked with clients who have undergone similar hijackings. She says these listings were likely seized by a seller who contacted Amazon’s Seller Support team and asked them to push through a file containing the changes. The team is based mostly overseas, experiences high turnover, and is expected to work quickly, Greer says, and if you find the right person they won’t check what changes the file contains."

This directly affects online shoppers. The article also included this tip for shoppers:

"... the easiest way to detect a hijacking is to check that the reviews refer to the product being sold..."

What a mess! The burden should not fall upon shoppers. Amazon needs to clean up its mess -- quickly. What are your opinions?


Cloud Services Security Vendor Disclosed a 'Security Incident'

Imperva logo Imperva, a cloud-services security company, announced on Tuesday a data breach involving its Cloud Web Application Firewall (WAF) product, formerly known as Incapsula. The August 27th announcement stated:

"... this data exposure is limited to our Cloud WAF product. Here is what we know about the situation today: 1) On August 20, 2019, we learned from a third party of a data exposure that impacts a subset of customers of our Cloud WAF product who had accounts through September 15, 2017; 2) Elements of our Incapsula customer database through September 15, 2017 were exposed. These included: email addresses, hashed and salted passwords; 3) And for a subset of the Incapsula customers through September 15, 2017: API keys and customer-provided SSL certificates..."
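For context on one detail in that disclosure: "hashed and salted" means the stored values are one-way digests of each password combined with a random per-user salt, so a stolen database cannot simply be read back into plaintext (though weak passwords remain guessable offline). A minimal sketch of the general technique using Python's standard library; this is illustrative, not Imperva's actual scheme:

```python
import hashlib, hmac, os

def hash_password(password: str, salt=None):
    """Return (salt, digest) using PBKDF2, a standard salted key-derivation scheme."""
    salt = salt or os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Salting ensures that two users with the same password get different digests, defeating precomputed lookup tables. The exposed API keys and SSL certificates, by contrast, were presumably usable as-is, which is why experts flagged them as the more serious loss.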

Imperva provides firewall and security services to block cyberattacks by bad actors. These security services protect the information its clients (and clients' customers) store in cloud-storage databases. The home page of Imperva's site promotes the following clients: AARP, General Electric, Siemens, Xoom (A PayPal service), and Zillow. Many consumers use these clients' sites and services to store sensitive personal and payment information.

Imperva has informed the appropriate global regulatory agencies, hired forensic experts to help with the breach investigation, reset affected clients' passwords, and is informing affected clients. Security experts quickly weighed in about the data breach. The Krebs On Security blog reported:

"Rich Mogull, founder and vice president of product at Kansas City-based cloud security firm DisruptOps, said Imperva is among the top three Web-based firewall providers... an attacker in possession of a customer’s API keys and SSL certificates could use that access to significantly undermine the security of traffic flowing to and from a customer’s various Web sites. At a minimum, he said, an attacker in possession of these key assets could reduce the security of the WAF settings... A worst-case scenario could allow an attacker to intercept, view or modify traffic destined for an Incapsula client Web site, and even to divert all traffic for that site to or through a site owned by the attacker."

So, this breach and the data elements accessed by hackers were serious. It is another example indicating that hackers are persistent and attack where the money is.

Security experts said the cause of the breach is not yet known. Imperva is based in Redwood Shores, California.


Google Claims Blocking Cookies Is Bad For Privacy. Researchers: Nope. That Is 'Privacy Gaslighting'

Google logo The announcement by Google last week included some dubious claims, which received a fair amount of attention among privacy experts. First, a Senior Product Manager of User Privacy and Trust wrote in a post:

"Ads play a major role in sustaining the free and open web. They underwrite the great content and services that people enjoy... But the ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. The mission is clear: we need to ensure that people all around the world can continue to access ad supported content on the web while also feeling confident that their privacy is protected. As we shared in May, we believe the path to making this happen is also clear: increase transparency into how digital advertising works, offer users additional controls, and ensure that people’s choices about the use of their data are respected."

Okay, that is a fair assessment of today's internet. And, more transparency is good. Google executives are entitled to their opinions. The post also stated:

"The web ecosystem is complex... We’ve seen that approaches that don’t account for the whole ecosystem—or that aren’t supported by the whole ecosystem—will not succeed. For example, efforts by individual browsers to block cookies used for ads personalization without suitable, broadly accepted alternatives have fallen down on two accounts. First, blocking cookies materially reduces publisher revenue... Second, broad cookie restrictions have led some industry participants to use workarounds like fingerprinting, an opaque tracking technique that bypasses user choice and doesn’t allow reasonable transparency or control. Adoption of such workarounds represents a step back for user privacy, not a step forward."

So, Google claims that blocking cookies is bad for privacy. With a statement like that, the "User Privacy and Trust" title seems like an oxymoron. Maybe, that's the best one can expect from a company that gets 87 percent of its revenues from advertising.

Also on August 22nd, the Director of Chrome Engineering repeated this claim and proposed new internet privacy standards:

"... we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox. Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent... some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences. First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected... Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web..."
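To see why fingerprints are harder to escape than cookies, consider a toy sketch of the technique the quote describes: hashing attributes the browser exposes anyway into a stable pseudo-identifier. The attributes below are examples only; real fingerprinting scripts combine dozens of signals.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash browser/device attributes into a stable pseudo-identifier."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "America/New_York",
    "fonts": "Arial,Helvetica,Times",
}

# The same attributes always yield the same ID -- clearing cookies doesn't change it.
print(fingerprint(device) == fingerprint(dict(device)))  # True
```

There is no "delete fingerprint" button: unless the underlying attributes change, the identifier persists.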

Yes, fingerprinting is a nasty, privacy-busting technology. No argument with that. But, blocking cookies is bad for privacy? Really? Come on, let's be honest.

This dubious claim ignores corporate responsibility: some advertisers and website operators made conscious decisions to use more invasive technologies like fingerprinting to do an end-run around users' efforts to regain online privacy. Sites and advertisers made those choices when other options, such as subscription services to pay for their content, were available.

Plus, Google's claim also ignores the push by corporate internet service providers (ISPs) which resulted in the repeal of online privacy protections for consumers thanks to a compliant, GOP-led Federal Communications Commission (FCC), which seems happy to tilt the playing field further towards corporations and against consumers. So, users are simply trying to regain online privacy.

During the past few years, both privacy-friendly web browsers (e.g., Brave, Firefox) and search engines (e.g., DuckDuckGo) have emerged to meet consumers' online privacy needs. (Well, it's not only consumers that need online privacy. Attorneys and businesses need it, too, to protect their intellectual property and proprietary business methods.) Online users demanded choice, something advertisers need to remember and value.

Privacy experts weighed in about Google's blocking-cookies-is-bad-for-privacy claim. Jonathan Mayer and Arvind Narayanan explained:

"That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight. Our high-level points are: 1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting; 2) There is little trustworthy evidence on the comparative value of tracking-based advertising; 3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical; and 4) Google is attempting a punt to the web standardization process, which will at best result in years of delay."

The researchers debunked Google's claim with more details:

"Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true. If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today."

Moreover:

"... there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections. Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general."

Gaslighting, indeed. Online privacy is important. So, too, are consumers' choices and desires. Thanks to Mr. Mayer and Mr. Narayanan for the comprehensive response.
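The third-party distinction the researchers describe hinges on a simple comparison: does a subresource request (an ad, a tracking pixel) come from the same site as the page the user is visiting? The sketch below is a deliberately naive illustration of that comparison, not how any real browser implements it; real browsers consult the Public Suffix List rather than the two-label heuristic used here, and all the hostnames are invented for the example.

```python
def registrable_domain(host: str) -> str:
    """Naively take the last two labels of a hostname
    (e.g. 'ads.example.com' -> 'example.com'). Real browsers use the
    Public Suffix List instead of this heuristic."""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def is_third_party(page_host: str, request_host: str) -> bool:
    """True when a request crosses site boundaries, i.e. a cookie sent
    with it would be a third-party cookie."""
    return registrable_domain(page_host) != registrable_domain(request_host)

# A tracker embedded on a news site is third-party; the site's own
# subdomain is not.
print(is_third_party("www.news-site.example", "tracker.adnetwork.example"))  # True
print(is_third_party("www.news-site.example", "static.news-site.example"))   # False
```

Blocking third-party cookies means refusing to attach or store cookies on requests where a check like this returns true, which is exactly the behavior Safari and Firefox ship by default.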

What are your opinions of cookie blocking? Of Google's claims?


How Trump’s Political Appointees Overruled Tougher Settlements With Big Banks

[Editor's note: today's guest post, by reporters at ProPublica, discusses enforcement approaches by the United States government with the banking industry. It is reprinted with permission.]

By Jesse Eisinger, ProPublica, and Kevin Wack, American Banker

Since Donald Trump’s election, federal white-collar enforcement has taken a big hit. Fines and settlements against corporations have plummeted. Prosecutions of individuals are falling to record lows.

But just how these fines and settlements came to be slashed is less well understood. Two settlements with giant banks over financial crisis-era misdeeds provide a window into how the Trump administration has eased up on corporate wrongdoers.

In settlements last year with the two big U.K.-based banks, Barclays and Royal Bank of Scotland, political appointees at the Trump administration Justice Department took the unusual step of overruling staff prosecutors to reduce the settlements sought, leaving billions of dollars in potential recoveries on the table, according to four people familiar with the settlements.

In the case of RBS, then-Deputy Attorney General Rod Rosenstein decided that the charges should not be pursued as a criminal case, as the prosecutorial team advocated, but rather as a less serious civil one.

Both cases were developed by the Obama administration DOJ and involved accusations that the banks misled buyers of residential mortgage-backed securities before the 2008 financial crisis. Prosecutors seemingly found numerous examples of bankers knowingly selling lemons to their customers. The mortgages they were putting into securities were “total fucking garbage,” one RBS executive said in a phone call that was recorded and cited in a DOJ filing. A Barclays banker said a group of loans “scares the shit out of me.” Mortgages that went into the two banks’ securities lost a total of $73 billion, according to calculations used by the government.

In March 2018, the DOJ settled with Barclays for $2 billion, a sum dictated by Trump appointees that was far below what the staff prosecutors in the Eastern District of New York in Brooklyn had sought. The settlement with RBS occurred in August 2018, for $4.9 billion. After Rosenstein downgraded the case from criminal to civil, other Trump appointees concluded that the settlement amount should be about half of what staff prosecutors in the District of Massachusetts had sought.

DOJ spokeswoman Sarah Sutton said that the Barclays and RBS settlements held the banks accountable for serious misconduct, and that the penalties recovered from the banks were fair and proportionate compared with those previously obtained from other banks. She did not respond to detailed questions about how the two settlements were reached and why key decisions were dictated from Washington. “They were largely negotiated by career attorneys in the Department and U.S. Attorneys’ offices with the support and collaboration of Department leadership,” Sutton wrote in an email.

Aspects of how the DOJ came to settle the cases have been recounted. The New York Times reported on Rosenstein’s decision in the RBS case. But this is the first extensive account of how the banks secured the favorable outcomes.

The British banks employed an old playbook, one that proved effective with the Trump administration: Hire prominent former high-level DOJ officials who were now at major law firms. These attorneys won access to the top echelons of the Trump DOJ, where they found an audience receptive to their arguments that the staff prosecutors were unfairly singling out their clients for excess punishment.

The two cases stemmed from the Obama administration’s efforts to bring charges against banks for misdeeds that contributed to the financial crisis. Critics assailed the Obama DOJ for what they perceived as tardy and inadequate policing of financial crisis malfeasance. For example, the Obama DOJ did not prosecute any top bankers for actions related to the crisis. But it did belatedly bring civil charges, and it reached large settlements with numerous banks, including JPMorgan Chase, Citigroup and Bank of America. Moreover, the Obama-era DOJ consistently required the banks to acknowledge their bad acts, a practice that has ceased during the Trump administration.

As the Obama administration wound down in the fall of 2016, the DOJ had not completed all that it aspired to. It rushed to reach settlements with foreign banks that had shown less urgency to resolve the allegations than some of their U.S. counterparts.

Less than a week before Trump’s inauguration, the DOJ announced that Deutsche Bank had agreed to pay a $3.1 billion civil penalty, and that Credit Suisse would pay $2.48 billion. But there were holdouts, including Barclays and RBS.

Prosecutors in Brooklyn wanted Barclays to pay a sum in the high single-digit billions of dollars, according to two people familiar with the negotiations. Barclays balked, drawing a line at $2 billion, according to a Bloomberg News account.

Barclays hired an all-star team of defense lawyers. The roster included Karen Seymour, a partner at Sullivan & Cromwell who had previously served as chief of the criminal division in the U.S. attorney’s office in Manhattan and has since become general counsel at Goldman Sachs.

Also on Barclays’ legal team was Kannon Shanmugam, a former high-ranking official in the George W. Bush DOJ who was then a partner at Williams & Connolly.

With the two sides far apart in December 2016, the DOJ sued Barclays. Prosecutors also brought civil charges against two former executives at the bank who played key roles in its pre-crisis subprime mortgage operations.

Suing was an unusual step — cases against large corporations normally settle before a complaint is filed — and it was meant to send an implicit message to Barclays. Because the DOJ had been forced to go to court, the British bank could expect the price tag of an eventual settlement to be higher.

Barclays was making the opposite bet: that it would be able to negotiate a more favorable settlement once Trump appointees were in place at DOJ.

In a 192-page complaint, the DOJ alleged that Barclays engaged in fraud on a massive scale, deceiving investors about the characteristics of mortgages used to create securities that sold for tens of billions of dollars.

A Barclays employee commented during a 2006 phone call that one particular pool of mortgages was “about as bad as it can be,” but he did not abandon the loans or modify the bank’s standard disclosures to investors, according to the government’s complaint. In another example, when that same banker said that a particular pool of loans “scares the shit out of me,” because he believed the company that originated the mortgages was likely to go bankrupt soon, Barclays bought the loans anyway. The bank deliberately did not conduct due diligence on the mortgages and then packaged them into bonds, the complaint asserted, all the while falsely telling a rating agency that due diligence had been done on 100% of the loans.

“More than half of the underlying loans defaulted,” the complaint stated, causing huge losses for investors.

Barclays’ legal team argued that the bank should not pay higher penalties in a settlement than other banks had paid relative to their market share. Barclays had been a relatively small player in the residential mortgage-backed securities, or RMBS, market, and its settlement should be sized accordingly, they reasoned.

This was an argument that the DOJ had long rejected. In a 2014 speech, then-Associate Attorney General Tony West argued that a firm’s market share should not outweigh evidence of the extent of its wrongdoing. “The facts and evidence of a particular case — they are what will ultimately matter the most,” he said.

In the Barclays matter, prosecutors in Brooklyn believed they had a strong case. The judge assigned to the case, U.S. District Judge Kiyo Matsumoto, seemed to agree. “This complaint is probably one of the more fulsome complaints I’ve ever seen,” Matsumoto said at an April 2017 hearing.

But the view that ultimately mattered was the one held by a new crop of officials at Main Justice, the DOJ’s headquarters in Washington. Besides Rosenstein, who was not involved in the Barclays case, key players in the RMBS settlements included Trump administration political appointees in the associate attorney general’s office, according to people familiar with the talks.

Steve Cox, the deputy associate attorney general, oversaw the cases, reporting to Jesse Panuccio, the principal deputy associate attorney general. In February 2018, Panuccio became acting associate attorney general, the No. 3 position at the DOJ, after Rachel Brand resigned from the post.

Neither had much experience with federal prosecutions. Panuccio was a former lawyer to Florida Gov. Rick Scott, as well as the chief labor and land use official in Florida, and Cox was a onetime associate at WilmerHale who had spent six years as a corporate counsel at an oil company, Apache Corporation, before joining the DOJ.

Following communications with the Barclays legal team, DOJ officials in Washington conveyed a message to the staff prosecutors in Brooklyn: settle the case within a narrow range around $2 billion, or we will take the negotiations out of your hands. The instruction came via a spreadsheet that listed the dollar range.

For DOJ officials in Washington to dictate specific terms of a settlement was unusual. U.S. attorney offices generally have wide latitude in choosing what they investigate and in making prosecutorial decisions. “Involvement of DOJ in cases handled in the U.S. attorney’s offices is not common” but happens on big cases from time to time, said Harry Sandick, a former federal prosecutor who is now a partner at Patterson Belknap. During Obama-era negotiations, Main Justice had tried to show a united front with prosecutors who’d investigated the RMBS cases, according to former department officials.

At least one prosecutor acknowledged the internal rift between Brooklyn and Washington to Barclays’ defense team, according to a source familiar with the matter. Once prosecutors in Brooklyn learned Main Justice’s position, this prosecutor communicated to the Barclays side that the bank had prevailed. Recalling how the deal went down, one government official said: “It seemed like a defeat.”

The staff prosecutors weren’t just disappointed about settling for a fraction of what they had sought back in 2016. They had brought civil charges against two former Barclays employees, Paul Menefee and John Carroll, and in exchange for dismissal, the two men agreed to pay a combined $2 million. But the agreement did not include language that precluded Barclays from footing the bill. That meant that Menefee and Carroll, who did not admit wrongdoing, might not have to pay a dime out of their own pockets.

Lawyers for Menefee and Carroll did not respond to requests for comment. In a statement, U.S. Attorney Richard Donoghue said, “The substantial penalty Barclays and its executives had to pay was an important step in recognizing the harm that was caused to the national economy and to investors in RMBS.”

At Main Justice, at least one official also regretted the Barclays deal, but from the opposite perspective. Cox told a prosecutor that he wished the Barclays settlement had been even smaller, but he explained that it wasn’t feasible to go lower because it had been reported that the bank offered to pay $2 billion, according to a person familiar with the conversation.

Cox did not respond to requests for comment.

Panuccio, who stepped down from the DOJ in the spring, declined to answer specific questions, citing the confidentiality of the department’s process. In an email response, he said, “The general narrative the questions seem to suggest is belied by the facts — including the fact that DOJ recovered historically significant sums in its 2018 and 2019 FIRREA settlements, and the fact that DOJ filed a major FIRREA suit against UBS in November 2018.” (FIRREA is the Financial Institutions Reform, Recovery and Enforcement Act of 1989, a law dating from the savings and loan scandals of the late ’80s.)

Barclays declined to comment.

While Barclays had been in active negotiations with the DOJ during the Obama administration, the RBS defense team had not. RBS did not want to enter negotiations until the prosecutors dropped the criminal investigation.

Boston prosecutors declined to do so. Mortgages that went into RBS’ securities suffered about $54 billion in losses, ravaging their customers’ investments. The prosecutors believed they had compiled damning evidence that RBS officials knew what they were doing was wrong. In one example, RBS’ chief credit officer in the United States called the mortgages that were going into the securities “total fucking garbage” with “fraud [that] was so rampant … [and] all random,” according to calls the prosecutors later quoted in the statement of facts against the bank. He stated that “the loans are all disguised to, you know, look okay kind of … in a data file.”

In 2016, the RBS defense team, which included former Deputy Attorney General Jamie Gorelick, of WilmerHale, appealed to Stuart Delery, then the third-highest official at the DOJ. Delery knew Gorelick from their time at the DOJ. Despite that relationship, according to a person knowledgeable about the matter, Delery said he would not interfere with an ongoing investigation at a U.S. attorney’s office. (Delery did not respond to requests for comment. Gorelick directed questions to RBS.)

Then came November. A few months later, the Trump appointees arrived.

For a while, nothing changed. The Boston prosecutors continued their investigation, more convinced than ever that the RBS conduct merited a criminal charge. They wrote what’s known as a “prosecution memo,” which they had begun during the Obama administration, describing the underlying criminal acts under FIRREA.

Such a move would have been groundbreaking. The Obama DOJ had used FIRREA, but for civil charges. And the Boston prosecutors did not want to stop there. They argued for first charging the bank criminally, and then moving on to seek criminal charges against individual bankers. Those would have been the first of their kind.

They never got that far.

In time, Trump political appointees such as Panuccio and Cox began to figure out their way around the department. The RBS defense team, including Gorelick, requested meetings with top officials. Gorelick again had a connection with a key DOJ official. She had worked with Cox, earlier in his career when he was an associate at WilmerHale, defending BP in investigations of the Deepwater Horizon spill.

The defense group now also included Mark Filip of Kirkland & Ellis, representing the British government’s interest in RBS. Filip, who did not respond to requests for comment, has a special stature. During his tenure as deputy attorney general, he had codified the conditions prosecutors had to assess in bringing cases against corporations, which are today known as the “Filip Factors.” Prosecutors are supposed to weigh a variety of issues, such as how serious offenses are and whether the company has cooperated with investigators. Now a private-sector heavy hitter, Filip is hired, in the view of prosecutors, to explain why his own factors are not met in a given case.

The RBS team was able to meet with the No. 2 at the DOJ, Rosenstein. It’s unusual, though not unprecedented, for a defense team to get access to such a high-level official. The RBS team persuaded Rosenstein.

In the spring of 2018, Rosenstein informed Andrew Lelling, the U.S. attorney for the District of Massachusetts, that his office couldn’t pursue criminal charges against RBS. Rosenstein said he didn’t want the DOJ treating RBS differently from other banks, which faced only civil investigations. (The Massachusetts U.S. attorney’s office declined to comment on the details of the RBS settlement. Rosenstein, who left the DOJ in May, did not respond to inquiries.)

RBS spokeswoman Linda Harper confirmed that the Boston U.S. attorney’s office had recommended criminal prosecution and that the bank had met first with Delery and then with Rosenstein.

“The argument we made was for fairness and parity,” she said. The bank’s defense team, she added, argued that “Main Justice should ensure that like cases are treated alike.”

The Boston team was disappointed and angry. It argued that prosecutors charge people when they have the necessary evidence, even if they cannot charge all people who committed the same crime. And it maintained that the decision went against department policy. In May 2017, then-Attorney General Jeff Sessions issued a memo directing prosecutors to charge defendants with the most serious provable crimes carrying the highest penalties.

“It calls into question whether the memo meant what it says when it came to white-collar prosecutions,” a person familiar with the decision said.

Once the case was downgraded, the Boston team turned to deciding on the monetary settlement. Internally, prosecutors had discussed seeking a settlement in the $9 billion to $10 billion range, reflecting their belief that the RBS conduct was especially egregious.

At one point in the spring after the Rosenstein meetings, Main Justice sent Boston a spreadsheet similar to the ones it sent to other U.S. attorney’s offices concerning their open cases, including those against Barclays, Japanese bank Nomura and Swiss bank UBS. For RBS, the range ran from around $4.5 billion to about $6.6 billion.

The Boston prosecutors tried to get the settlement as close to the top of the range as they could. But they were thwarted even in that attempt. Cox told the Boston team that the DOJ would “call the bluff” of RBS and tell the bank it would take $4.9 billion.

Prosecutors thought the DOJ had caved. They complained to Cox that Main Justice had authorized the office to seek as much as $6.6 billion. Cox’s reply: But RBS won’t go that high.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Waymo To Test Driverless Cars In Rainy Florida. Expanded Data Set Available

Waymo, formerly the Google self-driving car project, announced it will test vehicles in rainy Florida:

"... we’re bringing both Waymo vehicles — including our Chrysler Pacificas and a Jaguar I-Pace — to the state to begin heavy rain testing. During the summer months of Hurricane Season, Miami is one of the wettest cities in the U.S., averaging an annual 61.9 inches of rain and experiencing some of the most intense weather conditions in the country. Heavy rain can create a lot of noise for our sensors. Wet roads also may result in other road users behaving differently. Testing allows us to understand the unique driving conditions, and get a better handle on how rain affects our own vehicle movements, too."

"First, we’re spending several weeks driving on a closed course in Naples where we will rigorously test our sensor suite — which includes lidar, cameras, and radar — during the rainiest season in the south. Later in the month, we’ll bring our vehicles to public roads in Miami. They’ll be manually operated by our trained test drivers which will give us the opportunity to collect data of real-world driving situations in heavy rain. Additionally, Florida residents will start seeing a few of our vehicles on highways between Orlando, Tampa, Fort Myers and Miami as we learn about Florida roads."

Prior test locations included: a) Novi, Michigan; b) Kirkland, Washington; c) San Francisco, California; and d) Phoenix, Arizona.

In related news, Waymo announced the availability of an expanded dataset for academic researchers. TechCrunch reported:

"Waymo is opening up its significant stores of autonomous driving data with a new Open Data Set it’s making available for the purposes of research. The data set isn’t for commercial use, but its definition of “research” is fairly broad, and includes researchers at other companies as well as academics... The Waymo Open Data set tries to fill in some of these gaps for their research peers by providing data collected from 1,000 driving segments done by its autonomous vehicles on roads, with each segment representing 20 seconds of continuous driving. It includes a range of different driving conditions, including at night, during rain, at dusk and more. The segments include data collected from five of Waymo’s own proprietary lidars, as well as five standard cameras that face front and to the sides, providing a 360-degree view captured in high resolution, as well as synchronization Waymo uses to fuse lidar and imaging data. Objects, including vehicles, pedestrians, cyclists and signage is all labeled."
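The reported structure of the dataset (1,000 segments of 20 seconds each, five lidars, five cameras, varied driving conditions, labeled objects) can be modeled to get a feel for its scale. The class and field names below are hypothetical stand-ins, not the actual Waymo Open Dataset API, which ships in its own format with its own tooling.

```python
from dataclasses import dataclass, field

@dataclass
class DrivingSegment:
    """Illustrative model of one driving segment as described in the report."""
    segment_id: int
    duration_s: float = 20.0          # each segment is 20 seconds of continuous driving
    lidar_channels: int = 5           # five proprietary lidars, per the report
    camera_channels: int = 5          # five cameras providing a 360-degree view
    condition: str = "day"            # e.g. "night", "rain", "dusk"
    labels: list = field(default_factory=list)  # vehicles, pedestrians, cyclists, signage

# 1,000 segments, as reported, works out to about five and a half hours of driving.
segments = [DrivingSegment(segment_id=i) for i in range(1000)]
total_minutes = sum(s.duration_s for s in segments) / 60
print(f"{len(segments)} segments, {total_minutes:.0f} minutes of driving")  # 1000 segments, 333 minutes
```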


New York State Strengthens Its Data Breach Laws

To help its residents, the State of New York has improved its existing data breach law. Governor Andrew Cuomo signed two bills on July 25th:

"The Governor signed the Stop Hacks and Improve Electronic Data Security - or SHIELD - Act (S.5575B/A.5635), which imposes stronger obligations on businesses handling private data to provide proper notification to affected consumers when there is a security breach. The Governor also signed legislation (A.2374/S.3582) requiring consumer credit reporting agencies to offer identity theft prevention and mitigation services to consumers who have been affected by a security breach of the agency's system."

The Governor's announcement emphasized the importance of the state's laws keeping pace with rapid advances in technology. To address new technologies, the SHIELD Act will provide stronger protections by:

"1) Broadening the scope of information covered under the notification law to include biometric information and email addresses with their corresponding passwords or security questions and answers; 2) Updating the notification requirements and procedures that companies and state entities must follow when there has been a breach of private information; 3) Extending the notification requirement to any person or entity with private information of a New York resident, not just those who conduct business in New York State; 4) Expanding the definition of a data breach to include unauthorized access to private information; and 5) Creating reasonable data security requirements tailored to the size of a business."

The full text of the SHIELD Act legislation is available online. The SHIELD Act will go into effect on March 21, 2020. The announcement also mentioned Equifax:

"In late July 2017, one of the three main credit reporting agencies, Equifax Inc., experienced a major data breach involving personal information, including social security numbers... the company's response was insufficient and it is unacceptable that consumers were left to bear the burden to protect their own identities even though their information was stolen at no fault of their own. On July 22, 2019, Governor Cuomo, the State Department of Financial Services and State Attorney General James announced a $19.2 million settlement with Equifax over the data breach. As part of that settlement, Equifax agreed to provide New York consumers with credit monitoring services and free annual credit reports, and the company will pay restitution to consumers affected by the breach..."

So, it seems that Equifax's breach and data security failures factored into the new legislation. The announcement also explained the new Identity Theft Prevention and Mitigation Services (A.2374/S.3582) legislation:

"This legislation establishes the minimal amount of long-term protections to consumers who are affected by a data breach from a credit reporting agency. It requires a credit reporting agency that suffers a breach of information containing consumer social security numbers to provide five-year identity theft prevention services, and if applicable, identity theft mitigation services to affected customers. Additionally, the legislation requires credit reporting agencies to inform consumers on credit freezes of a breach of data involving a social security number, and provides consumers with the right to freeze their credit at no cost. The bill... applies to any breach of the security of a consumer credit reporting agency that occurred no more than three years prior to the effective date of this act."

The A.2374/S.3582 bill will go into effect on September 23, 2019. The retroactive coverage of three years is good as it ensures credit reporting agencies with recent data breaches cannot escape responsibility.

Consumer reporting agencies enjoy a unique position: consumers cannot opt out of having their credit reports compiled by Experian, Equifax, and TransUnion. Some people would call that corporate welfare. It would be great if consumers had the right to remove their credit reports from credit reporting agencies that practice poor data security with repeated data breaches. Consumers have that right with retail stores -- you can stop shopping at stores with poor data security and multiple data breaches.

In related news, JD Supra reported about proposed legislation:

"... New York City lawmakers have proposed a bill that would make it unlawful for a mobile app developer or telecommunications carrier to share a customer’s location data without an authorized purpose if the data was collected from the customer’s device within the city. The bill broadly defines the term “share” as making “location data available to another person, whether for a fee or otherwise,” suggesting that selling information is unlawful without an authorized purpose such as customer consent. The bill allows for a private right of action, including penalties for violations of $1,000 per violation, with a maximum penalty of $10,000 per day per person whose location data was unlawfully shared, as well as attorney’s fees."
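The penalty figures quoted above ($1,000 per violation, capped at $10,000 per day per person whose data was shared) imply a simple computation. The sketch below is only one plausible reading of those reported numbers, not the statute's actual formula; the person identifiers are invented.

```python
# Reported penalty structure: $1,000 per unlawful share, capped at
# $10,000 per day for each person whose location data was shared.
PER_VIOLATION = 1_000
DAILY_CAP_PER_PERSON = 10_000

def daily_penalty(shares_per_person: dict) -> int:
    """shares_per_person maps a person ID to the number of unlawful
    shares of that person's location data in one day."""
    return sum(
        min(n * PER_VIOLATION, DAILY_CAP_PER_PERSON)
        for n in shares_per_person.values()
    )

# Three shares of one customer's data: $3,000.
print(daily_penalty({"customer_a": 3}))   # 3000
# Twenty-five shares hits that customer's daily cap: $10,000.
print(daily_penalty({"customer_a": 25}))  # 10000
# The cap applies per person, so two affected customers can exceed
# $10,000 in a single day.
print(daily_penalty({"customer_a": 25, "customer_b": 2}))  # 12000
```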

To learn more, read about new data breach legislation in other states this year.


What Can Be Done Right Now to Stop a Basic Source of Health Care Fraud

[Editor's note: today's post, by reporters at ProPublica, discusses fixes for the security issues discussed in a prior post. It is reprinted with permission.]

By Marshall Allen, ProPublica

In our story about the convicted health care con man David Williams, we detailed how the Texas personal trainer made off with millions by billing some of the nation’s largest health insurers as if he were a doctor providing medical services.

Williams cannily exploited gaping loopholes in the health insurance system that allowed him almost unfettered entry. Taking commonsense steps to close those loopholes, experts say, could block other fraudsters from entry.

1. No one checks to see whether people getting federal ID numbers that allow them to bill insurers have valid licenses. They could.

Anyone billing an insurance company needs a National Provider Identifier, or NPI number. The number is obtained through Medicare, a federal agency that covers people over 65 as well as those with disabilities. But Medicare doesn’t verify that NPI applicants who claim to be licensed are, indeed, licensed by their state’s regulators. The agency could do a license check in less than a minute online or in milliseconds if the process is automated.

Medicare said federal regulations do not allow it to verify NPI applicants’ credentials, so the Department of Health and Human Services might need to revise the regulations. Congress could also order the reform.
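The automated check the article says would take milliseconds amounts to a registry lookup. The sketch below is hypothetical: the registry is a stand-in dictionary and the license numbers are invented, where a real system would query each state licensing board's records before issuing an NPI number.

```python
# Hypothetical state licensing records, keyed by (state, license number).
STATE_LICENSE_REGISTRY = {
    ("TX", "MD-12345"): {"status": "active", "type": "physician"},
    ("TX", "RN-67890"): {"status": "active", "type": "nurse"},
}

def verify_license(state: str, license_number: str) -> bool:
    """Return True only if the claimed license exists and is active --
    the check the article suggests could run before an NPI is issued."""
    record = STATE_LICENSE_REGISTRY.get((state, license_number))
    return record is not None and record["status"] == "active"

# A licensed physician passes; an applicant with a made-up number does not.
print(verify_license("TX", "MD-12345"))  # True
print(verify_license("TX", "MD-99999"))  # False
```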

2. Insurance companies don’t always verify that the people they are paying are licensed medical providers. They could.

Williams avoided scrutiny from insurers by billing as an out-of-network provider, so he didn’t have a contract with them and didn’t have his credentials verified before receiving payments. At Williams’ trial on federal fraud charges, representatives from the insurance companies testified that it’s not cost effective to review every claim. Almost all are automatically paid.

At a minimum, insurers could ensure that anyone billing them has the proper licensing before a payment is made. Again, this screening would take seconds or less.

Regulators could also require that insurers verify the licenses of those they pay. Some experts say it may take state and federal legislation to mandate it. Officials from America’s Health Insurance Plans, the trade group for the insurers, declined to comment on this suggestion.

3. Insurance companies aren’t reporting most cases of suspected fraud to state and federal regulators. They could.

Many states have a law in place that requires insurers to report suspected cases of fraud to state regulators. This allows regulators to spot serial fraudsters and trends, and it helps officials build criminal and civil cases. But the states have a mishmash of requirements, and many don’t do audits to make sure cases are being reported.

At least three insurance companies caught Williams committing fraud. But the Texas Department of Insurance only received one referral about the case, according to internal documents. If all three insurers that Williams defrauded had referred him, his case could have been prioritized and stopped sooner.

The existing state laws don’t apply to self-funded plans where employers pay for the health benefits. Those are overseen by the federal government. And no federal law requires insurers who administer self-funded plans to report suspected cases of fraud.

State and federal laws would need to be changed to require the consistent reporting of suspected fraud. Experts say audits, and the potential for fines, may also be needed to spur the insurers to file the reports.




Emotion Recognition: Facial Recognition Software Based Upon Valid Science or Malarkey?

The American Civil Liberties Union (ACLU) reported:

"Emotion recognition is a hot new area, with numerous companies peddling products that claim to be able to read people’s internal emotional states, and artificial intelligence (A.I.) researchers looking to improve computers’ ability to do so. This is done through voice analysis, body language analysis, gait analysis, eye tracking, and remote measurement of physiological signs like pulse and breathing rates. Most of all, though, it’s done through analysis of facial expressions.

A new study, however, strongly suggests that these products are built on a bed of intellectual quicksand... after reviewing over 1,000 scientific papers in the psychological literature, these experts came to a unanimous conclusion: there is no scientific support for the common assumption “that a person’s emotional state can be readily inferred from his or her facial movements.” The scientists conclude that there are three specific misunderstandings “about how emotions are expressed and perceived in facial movements.” The link between facial expressions and emotions is not reliable (i.e., the same emotions are not always expressed in the same way), specific (the same facial expressions do not reliably indicate the same emotions), or generalizable (the effects of different cultures and contexts has not been sufficiently documented)."

Another reason why this is important:

"... an entire industry of automated purported emotion-reading technologies is quickly emerging. As we wrote in our recent paper on “Robot Surveillance,” the market for emotion recognition software is forecast to reach at least $3.8 billion by 2025. Emotion recognition (aka “affect recognition” or “affective computing”) is already being incorporated into products for purposes such as marketing, robotics, driver safety, and audio “aggression detectors.”

Regular readers of this blog are familiar with aggression detectors and the variety of industries where the technology is already deployed. And one police body-cam maker has said it won't deploy facial recognition in its products due to reliability problems with the technology.

Yes, reliability matters -- especially when the technology is used for surveillance. Nobody wants law enforcement making decisions about persons based upon software built on unreliable or fake science masquerading as valid science. Nobody wants education and school officials making decisions about students using unreliable software. Nobody wants hospital administrators and physicians making decisions about patients based upon unreliable software.

What are your opinions?


White Hat Hacker: Social Media Is a 'Goldmine For Details' For Cyberattacks Targeting Companies

Many employees are their own worst enemies when they start a new job. In this Fast Company article, a white hat hacker explains the security fails by employees that compromise their employer's data security.

Stephanie “Snow” Carruthers, Chief People Hacker at IBM, explained that hackers troll:

"... social media for photos, videos, and other clues that can help them better target your company in an attack. I know this because I’m one of them... I’m part of an elite team of hackers within IBM known as X-Force Red. Companies hire us to find gaps in their security – before the real bad guys do... Social media posts are a goldmine for details that aid in our “attacks.” What you find in the background of photos is particularly revealing... The first thing you may be surprised to know is that 75% of the time, the information I’m finding is coming from interns or new hires. Younger generations entering the workforce today have grown up on social media, and internships or new jobs are exciting updates to share. Add in the fact that companies often delay security training for new hires until weeks or months after they’ve started, and you’ve got a recipe for disaster..."

The obvious security fails include selfies by interns or new hires wearing their security badges, selfies showing log-in credentials on computer screens, and selfies showing passwords written on post-it notes attached to computer monitors. Less obvious security fails include group photos by interns or new hires with their work team. Group photos help hackers identify team members so they can craft personalized, more effective phishing e-mails and text messages using co-workers' names -- tricking recipients into opening attachments containing malware.

This highlights one business practice interns and new hires should understand: your immediate boss or supervisor won't scour your social media accounts looking for security fails. Your employer will outsource that job to another company, which will.

If you just started a new job, don't be that clueless employee posting security fails to your social media accounts. Read and understand your employer's social media policy. If you are a manager, schedule security training for your interns and new hires ASAP.