
The Obscure Charges That Utility Companies Add to Your Bills

[Editor's note: today's guest post by reporters at ProPublica explores billing practices within the utility industry. Everyone uses electricity, so these new billing practices can negatively impact all consumers. The post is reprinted with permission.]

By Talia Buford, ProPublica

New Jersey was reeling from the Great Recession, and Gov. Jon S. Corzine had a plan. Infrastructure projects, he decided, would help the state shake off the country’s worst economic downturn in generations. In April 2009, the state utility regulator approved nearly $1 billion in projects to install energy-efficient streetlights and replace aging gas lines, and in the process create thousands of jobs across the state.

Utilities wouldn’t have to worry about the cost. Instead of tapping their annual budgets, they were given the green light to impose a surcharge on the gas and electric bills of every customer in the state.

Until then, such surcharges had been rare. They were used, for example, in the 1970s, when Arab oil-producing countries restricted exports to countries, including the United States, that supported Israel, causing oil prices to quadruple; surcharges gave utilities some relief from the volatile oil price swings. But instead of being a one-off, the surcharge championed by the Corzine administration a decade ago helped usher in a new era in the economics of energy.

Across the nation, local and state governments have turned to utilities to address acute and pervasive infrastructure needs, while utility companies have looked to surcharges as a way to finance those projects — and ensure steady profits. Sometimes, utilities have used revenue from surcharges to pay for things other than infrastructure, many of which customers might expect are already included in their rates: tree trimming (Kansas), smart meters (Colorado) and pension costs (Massachusetts).

In New Jersey, gas and electric bills are packed with add-ons that pay for everything from installing solar panels to putting substations on platforms above flood levels. For residential customers, a single charge, added to bills in increments as tiny as a thousandth of a cent per kilowatt hour, can add $35 to $45 a year to costs; for industrial and commercial customers, the charges can add up to tens of thousands of dollars annually. And it’s all on top of the price that regulators have agreed customers should pay for their electricity service.
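To see how per-kilowatt-hour increments that small can add tens of dollars to an annual bill, here is a back-of-the-envelope sketch. The rates and usage figures below are illustrative assumptions, not PSE&G's actual tariff:

```python
# Illustrative only: the surcharge rates and monthly usage below are
# hypothetical, not figures from any actual PSE&G tariff.

def annual_surcharge_cost(rate_per_kwh: float, monthly_kwh: float) -> float:
    """Yearly cost of one per-kWh surcharge at a given monthly usage."""
    return rate_per_kwh * monthly_kwh * 12

# A bundle of small surcharges, each a fraction of a cent per kWh.
surcharges = {
    "societal_benefits": 0.0031,  # $/kWh (hypothetical)
    "solar_program":     0.0012,
    "infrastructure":    0.0009,
}

monthly_usage = 680  # kWh, roughly a typical household

total = sum(annual_surcharge_cost(rate, monthly_usage)
            for rate in surcharges.values())
print(f"Added cost per year: ${total:.2f}")
```

With these made-up rates, the bundle comes to a little over $42 a year, squarely in the $35-to-$45 range the article describes for residential customers.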

The use of surcharges has proliferated over the last decade as the energy landscape has changed substantially. The price of oil and gas has dropped as domestic supplies have increased, and residential energy use has plummeted as appliances and lighting have become more efficient. Still, the national average price of electricity has increased slightly over the last decade, with additional surcharges counteracting any potential savings. That means, at the end of the day, many customers have likely noticed little, if any, change in their final bills.

That remains true in New Jersey, where residential bills last year averaged about $106.28 per month, according to the federal Energy Information Administration. Garden State residents consume less energy than residents of almost all other states, but they have the 12th highest price per kilowatt hour in the nation, at about 15 cents in 2018. Some critics say surcharges have made energy costs more opaque and made it harder for customers to know enough about what they’re paying for to push back.

“Some of these costs might be for important projects and initiatives,” said Evelyn Liebman, advocacy director for AARP New Jersey. “But the question is: How do you evaluate whether or not the price that you’re asking people to pay is fair and that the benefits outweigh the costs?”

To see how surcharges have affected electricity bills, ProPublica examined the charges assessed over the last decade by PSE&G, the utility arm of New Jersey’s largest energy company, PSEG. For PSE&G, adding surcharges has proved to be easier for financing projects than raising rates on its 2.2 million electric customers. The state Board of Public Utilities, which approves rate increases, has to approve surcharges, too, but the waiting period between when the utility spends the money and when it recovers it from customer bills is shorter.

PSE&G went eight years before seeking its most recent rate increase — a lengthy, rigorous process intended to ensure that utilities are reasonable in their charges and prudent in their spending. By October 2018, when its most recent “rate case” was completed, the number of surcharges on PSE&G customer bills had grown to 14, from five in 2009. (Of those, three charges are included in the “societal benefits” charge paid by every utility customer in the state and were created by legislation.)

This year, PSE&G has added two more surcharges to customer bills, bringing the current total to 16. Most notably, one surcharge, the Zero Emissions Certificate Recovery Charge, raises $300 million to prop up PSE&G’s three nuclear power plants. That charge applies to all New Jersey customers, regardless of who supplies their power.

Nationally, the average price of electricity has slightly increased over the last decade, according to data from the Energy Information Administration. But PSE&G said that over the last decade, its customer bills have decreased even with the surcharges, which have financed investments in solar power, energy efficiency and infrastructure upgrades.

The company said the spending has helped keep electricity service reliable, created jobs and reduced emissions. “Programs have costs,” Scott S. Jennings, a PSEG senior vice president, said in an interview. “We totally recognize that. But customers are paying far less than was paid in the past.”

PSE&G said the median monthly bill for customers who only receive electricity was $102 in 2019, down slightly from 2008 when it was $105. The median bill for customers who receive electricity and gas dropped to $176 per month in 2019 from $249 in 2008. Some of those savings can be attributed to lower fuel costs.

“We see that as a win for customers, the economy and the environment,” PSE&G said in a statement.

No federal entity tracks utility surcharges nationwide, but they have been followed for years by consumer advocates and regulatory groups. The National Regulatory Research Institute, the research arm of the association for utility regulators, has cautioned states to consider the potential impacts of surcharges before approving them, with a 2009 paper recommending that the fees be approved “only in special situations.” A review of the fees conducted for the AARP in 2012 found that at least 30 states add surcharges to customer bills for an array of purposes.

In New Jersey, the BPU energy director, Stacy Peterson, said the infrastructure work financed through surcharges needs to be done. Surcharges allow work to be completed more quickly, she said, and the BPU ensures the surcharge revenue is spent properly.

“We always have the ability to step in,” she said. “We’re not just approving these blindly.”

But some critics say utility regulators have lost sight of their mission when it comes to approving surcharges, particularly for what amount to routine business costs.

Regulators “need to remember that the public interest does not mean serving the utilities,” said David Nickel, state consumer counsel in Kansas. “It means serving the public. And sometimes that means looking at the utility and telling them ‘no.’”

Chances are, you give little thought to how your electricity bill is calculated. Surcharges capitalize on that.

“I don’t half look,” said Michael Denning, a 66-year-old retiree from Kearny, New Jersey, who had come to a PSE&G customer service center in Newark on a recent Friday to pay his bill. “They’re on there, but you can’t do anything.”

Other customers said they had not seen the charges and, when approached by a reporter, spent a few minutes shuffling through their bills to decipher what was what.

Each cycle, electricity bills are broken up into two buckets: supply and delivery. Supply charges cover the cost of producing power at a plant or buying it from another producer. Delivery charges cover the cost of bringing that power over transmission lines and ultimately to your light switch. Surcharges — also known in the industry as “trackers” or “clauses” — are included in the delivery bucket and are usually assessed as a fee per kilowatt hour of electricity used.
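A simplified model of that bill structure makes the mechanics concrete. All rates below are hypothetical; the point is only that surcharges ride inside the delivery bucket as per-kWh fees:

```python
# Hypothetical bill model: supply (generation) charges plus a delivery
# bucket whose surcharges are assessed per kilowatt hour.

def monthly_bill(kwh: float, supply_rate: float, delivery_rate: float,
                 surcharge_rates: list[float]) -> dict:
    supply = kwh * supply_rate
    base_delivery = kwh * delivery_rate
    surcharges = kwh * sum(surcharge_rates)  # each tracker is $/kWh
    return {
        "supply": round(supply, 2),
        "delivery": round(base_delivery + surcharges, 2),
        "of_which_surcharges": round(surcharges, 2),
        "total": round(supply + base_delivery + surcharges, 2),
    }

bill = monthly_bill(kwh=680, supply_rate=0.10, delivery_rate=0.045,
                    surcharge_rates=[0.0031, 0.0012, 0.0009])
print(bill)
```

In this sketch the surcharges are only a few dollars of a roughly $100 bill, which is why, as customers quoted in the article suggest, they are easy to overlook line by line.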

For a utility, how it seeks to recover expenses comes down to risk.

If a utility chooses to apply for a rate increase, regulators will weigh not only the costs the utility projects for the coming years but also any expenses the utility has made that were not part of its previous rate case. If regulators don’t think the expenses were necessary, they could reject the proposal, leaving the utility on the hook for those outlays.

Surcharges sidestep that risk. Where rate cases entail a fuller review of a utility’s operations, the analysis of a surcharge focuses on a single program. Before any money is spent, that single program is given the blessing of regulators, along with a means to collect the cost from customers up front.

In New Jersey, PSE&G has made surcharges a critical part of its business strategy. In investor materials from as early as 2009, the company notes that its regulatory strategy is to earn all authorized returns on investments and minimize regulatory lag — the time between when a change in costs for the utility is reflected in the customer’s rates.

PSE&G is allowed to earn a profit on some of its investments, and with each program announced came the promise of immediate payback. In a 2011 investor meeting presentation about future investments, the company touted its growth in the solar and energy efficiency arenas alongside receiving approval for immediate repayment through surcharges.

Fitch Ratings, one of the major credit rating agencies, raised the utility’s credit rating in 2012, increasing it one notch from BBB+ to A-, its current rating, citing New Jersey’s “constructive” regulatory environment. At the time, PSE&G had recently added a weather normalization surcharge to gas bills that helped guarantee cash flow even when customers saw a mild winter and used less energy. The BPU’s willingness to allow utilities to recover costs in a “timely manner” meant there was a predictable cash flow even in uncertain outside conditions, the credit agency said at the time.

In a 2014 presentation to industry executives and investors, the company said that it expected to use surcharges to recover 12% of the $11.3 billion invested in solar and energy efficiency programs and an infrastructure hardening program, dubbed “Energy Strong,” which targeted substations that flooded during Superstorm Sandy in 2012.

During another presentation, PSE&G said consumers ultimately wouldn’t feel the surcharge for solar and energy efficiency programs because it would replace an expiring surcharge of equal amount. The move, the company noted, would “fully offset the impact to customer bills,” which wouldn’t go up. Of course, bills wouldn’t go down, either, despite lower fuel costs.

“We can debate the merits of what we should and shouldn’t do,” said Jennings of PSEG. “And different people will have different perspectives. It comes down to affordability and where you draw the line.”

Critics of the charges, however, say projects billed as protecting infrastructure from climate change or increasing reliability are less about improving service and more about ensuring profits.

“If you’re a utility and demand is flat, and you get a return on capital, how can you make a capital investment if no one is buying more electricity?” said David Dismukes, executive director of the Center for Energy Studies at Louisiana State University, who testified against PSE&G’s Energy Strong program. “You say that we need to build in ‘resiliency,’ that’s how you do it.”

PSEG projected roughly $1.6 billion in earnings for 2019. The company has also paid shareholders increasing dividends every year over the past decade.

In New Jersey, surcharges appear to have found a welcoming regulatory environment, especially as the state seeks to ensure its progressive climate policies don’t alienate businesses. It’s a balancing act the state has struggled to pull off. New Jersey has been on the cutting edge of environmental protection legislation, but such efforts were spurred in part by lax enforcement that allowed industrial pollution to do lasting harm to the state’s waterways.

For the most part, utility surcharges and the projects they finance attract only fleeting attention — an article in which residents called PSE&G’s utility-pole-mounted solar panels an “eyesore,” or others describing work done to help the utility recover after Superstorm Sandy.

“I don’t even pay attention,” Anthony Boone, a 48-year-old artist, said as he ran errands in Newark. “I just pay it. I guess I should be more in tune, but that’s pretty low on the totem pole.”

Some customers did start to pay attention this year after the utility’s parent company, PSEG, sought to impose the surcharge to subsidize its three aging nuclear plants. Without the subsidy, the company said it would have to close the plants, costing the state hundreds of jobs and a key source of clean energy.

Suddenly, surcharges were big news, as officials, executives and legislators sparred over PSEG’s demands and the $300 million price tag.

State experts said the plants were still relatively efficient and not in danger of closing. But a law enacted in May 2018 to compensate nuclear plants for being a cleaner energy source seemed to tie the hands of the BPU. In April, the board voted to impose the surcharge, even as some of the commissioners expressed misgivings, with one likening PSEG’s threats to extortion. The New Jersey rate counsel, Stefanie Brand, whose office advocates on behalf of customers, recently challenged the subsidy in court. In a brief filed this month, Brand said that if PSEG’s threat was all it took to secure the subsidy, then “the ratepayers of this state truly are being held captive.”

As a part of any surcharge agreement, the utility must come back to regulators at a specified point in the project and provide an accounting showing that the money is being spent as stipulated, the BPU said.

Regulators say they also review surcharges as part of a utility’s next application for a rate increase. But until a change made last year, utilities could go as long as they wanted without seeking a rate increase and undergoing the requisite review. A new rule, established by the BPU in January 2018, requires any utility with an infrastructure-related surcharge to submit to a full rate review within five years of the surcharge’s approval. (PSE&G is scheduled to file its next rate case by the end of 2023.)

“Any expense in a rate case has to be prudent,” said Paul Flanagan, executive director of the BPU. “When they’re spending money on building things, one of the issues is: ‘Is it prudent? Is it gold plated? Are they just spending money to earn money?’”

The agency can step in if it believes a charge is being misused, but it almost never does. The BPU doesn’t track such interventions, but of the roughly 1,500 matters that come before the agency annually, Flanagan said interventions have been “fairly rare.”

At core, utilities and regulators see surcharges differently.

Jennings, the PSEG executive, said surcharges help the company invest wisely, ensuring regulators support a project before any money is spent.

“We want to make sure that the other stakeholders, like BPU staff and ultimately the BPU, rate counsel and other key parties agree that it is worthwhile doing,” he said. “They will, through that process, agree that the type of work and basic program is prudent.”

However, the BPU’s Flanagan said surcharges are only a way to make sure that necessary upgrades are made quickly, and he rejected the idea that they are a tacit way for regulators to weigh in on how a company makes investments.

“The utilities run their companies,” he said. “The board doesn’t run the companies. If the utility feels the need to upgrade the system, they’re capable of doing that.”

Garden State residents pay among the highest prices per kilowatt hour in the nation for their electricity. Brand, the state-appointed advocate for customers, said she is concerned about the proliferation of surcharges.

“That kind of surcharge really should be left for extraordinary circumstances and the run-of-the-mill work the utilities should be doing through rates,” Brand said. “If they’re not making enough money to do the work, they always have the ability to come in for a rate case.”

While energy costs may not drive decisions about where to live, for big businesses, energy costs can be a significant factor in locating — or relocating — a facility.

Major commercial customers, such as chemical plants and large retailers, can buy energy from a third party or generate their own electricity, but the power still has to travel over a utility’s distribution and transmission lines, which is where the surcharges are applied. That leaves them with no way to avoid the not-so-small impact of the surcharges.

“We have gotten to the point that more money is probably collected at this point through these mechanisms than through base rates,” said Steve Goldenberg, a lawyer for the New Jersey Large Energy Users Coalition, which represents retailers, manufacturers, food chains and pharmaceutical companies. “And that’s the problem.”

For the Kuehne Company, which uses electricity to manufacture industrial-grade bleach at plants in New Jersey, Delaware and Connecticut, surcharges have a significant impact on the company’s bottom line.

“We live and die by energy,” said Bill Paulin, the company’s co-president, noting that electricity makes up 40% of the company’s production costs.

“Our energy costs are in the millions,” he said. “We spend more on electricity than we do on medical insurance for our employees.”

The company, which employs 150 people across its three locations, has been in New Jersey since 1919, and it recently built a new manufacturing facility in Kearny, on the industrial peninsula between Newark and Jersey City. Paulin said the company made the decision to stay because of New Jersey’s access to the Northeast markets, and because of the employees who live in the state.

“We decided to take a chance and do what we needed to do to stay,” Paulin said. Still, the new facility, which was built on the site of the company’s older plant, can be dismantled and moved if costs — such as utility bills — continue to rise, he said.

“It wouldn’t be easy or cheap, but we can do it if things get out of whack.”

For now, the bills are holding steady, and not by accident.

A surcharge, imposed five years ago to cover improvements to the utility’s resilience after Hurricane Irene and Superstorm Sandy, was expiring. The program, which collected on average about $4 a month from residential customers and substantially more from commercial customers, would soon be history.

But PSE&G had already asked to impose a new surcharge, which would raise $1.5 billion to elevate or close old substations in flood zones. It would be part of Energy Strong II — an extension of the Sandy recovery program.

During discussions with the BPU and rate counsel this summer, PSE&G scaled back its proposal, and in September, the BPU approved the next phase of the program. The cost to residential customers will be about $3 each month — almost the same amount as the expiring surcharge for the previous round of the recovery program.

For more coverage, read ProPublica’s previous reporting on the environment.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Google Has Started Home Deliveries Of Packages By Drones

MediaPost reported:

"The first drone home deliveries of packages from Walgreens have started from Wing, the Alphabet subsidiary. Wing recently received an expanded Air Carrier Certificate from the Federal Aviation Administration allowing the first commercial air delivery service by drone directly to homes in the U.S. The FAA permissions are the first allowing multiple pilots to oversee multiple unmanned aircraft making commercial deliveries to the general public simultaneously. Collaborating with Federal Express and Virginia retailer Sugar Magnolia, Wing began delivering over-the-counter medication, gifts and snacks to residents of Christiansburg, Virginia. FedEx completed the first scheduled ecommerce drone delivery on Friday [October 18th]..."


UPS Announces Expansion Of Its Drone Delivery Program

Last week, UPS announced an expansion of its B2B drone delivery program, UPS Flight Forward. The expansion included three items focused on the healthcare industry. First, UPS began a:

"... new drone delivery service in support of the University of Utah Health hospital campuses, in partnership with Matternet. The University of Utah campus program will involve drone deliveries of samples and other cargo, similar to the program originally introduced at WakeMed Hospital in North Carolina."

The second item included an agreement:

"... with CVS Health to develop a variety of drone delivery use cases for business-to consumer applications. The program will include evaluation of delivery of prescriptions and retail products to the homes of CVS customers."

The third item included a partnership:

"... with wholesale pharmaceutical distributor AmerisourceBergen... The collaboration will initially deploy the UPS Flight Forward drone airline to transport certain pharmaceuticals, supplies and records to qualifying medical campuses served by AmerisourceBergen across the United States, with plans to then expand its use to other sites of care."

UPS Chief Strategy and Transformation Officer Scott Price said:

“When we launched UPS Flight Forward, we said we would move quickly to scale this business – now the country’s first and only fully-certified drone airline... We started with a hospital campus environment and are now expanding scale and use-cases. UPS Flight Forward will work with new customers in other industries to design additional solutions for a wide array of last-mile and urgent delivery challenges.”


VPN Service Provider Announced A Data Breach Incident Which Occurred in 2018

Consumers in the United States lost both control and privacy protections when the U.S. Federal Communications Commission (FCC), led by President Trump appointee Ajit Pai, a former Verizon lawyer, repealed in 2017 both broadband privacy and net neutrality protections for consumers. Since then, many people have subscribed to Virtual Private Network (VPN) services to regain protections of their sensitive personal information and online activities.

NordVPN, a provider of VPN services, announced a data breach on Monday:

"1) One server was affected in March 2018 in Finland. The rest of our service was not affected. No other servers of any type were put at risk. This was an attack on our server, not our entire service; 2) The breach was made possible by poor configuration on a third-party datacenter’s part that we were never notified of. Evidence suggests that when the datacenter became aware of the intrusion, they deleted the accounts that had caused the vulnerabilities rather than notify us of their mistake. As soon as we learned of the breach, the server and our contract with the provider were terminated and we began an extensive audit of our service; 3) No user credentials were affected; 4) There are no signs that the intruder attempted to monitor user traffic in any way. Even if they had, they would not have had access to those users’ credentials..."

In 2018, NordVPN operated about 3,000 servers. It now operates about 5,000 servers. The NordVPN announcement includes more information including technical details.

Earlier this month, CNET and PC Magazine published their lists of the best VPN services of 2019. PC Magazine's list, which was published before the breach announcement, included NordVPN. So it is always wise for consumers to do their research before switching to a VPN service.

What to make of this breach? We don't know who performed the attack. My impression: the attack seemed targeted, since few people probably used the single server in Finland. And this cyberattack seemed very different from the massive retail attacks in which hackers seek to steal the payment information (e.g., credit/debit card numbers) of thousands of consumers.

This cyberattack may have targeted a specific person. Perhaps the attacker was a competitor, or a government agency of a country NordVPN has refused to do business with. Hopefully, investigative journalists with more resources than this solo blogger will probe deeper.

Several things seem clear: a) cybercriminals have added VPN services to their list of high-value targets, b) hackers have identified the outsourcing vendors used by VPN service providers, and c) cyber attacks like this will probably continue. You might say this breach was a warning shot across the bow of the entire VPN industry. Seems like there is lots more news to come.


Court Says Biometric Privacy Lawsuit Against Facebook Can Proceed

MediaPost reported:

"A federal appellate court has rejected Facebook's request for a new hearing over an Illinois biometric privacy law. Unless the Supreme Court steps in, Illinois Facebook users can now proceed with a class-action alleging that Facebook violated Illinois residents' rights by compiling a database of their faceprints... The legal battle, which dates to 2015, when several Illinois residents alleged that Facebook violated the Illinois Biometric Privacy Information Act, which requires companies to obtain written releases from people before collecting “face geometry” and other biometric data, including retinal scans and voiceprints... The fight centers on Facebook's photo-tagging function, which draws on a vast trove of photos to recognize users' faces and suggest their names when they appear in photos uploaded by their friends..."


The National Auto Surveillance Database You Haven't Heard About Has Plenty Of Privacy Issues

Some consumers have heard of Automated License Plate Recognition (ALPR) cameras, the high-speed, computer-controlled technology that automatically reads and records vehicle license plates. Local governments have installed ALPR cameras on stationary objects such as street-light poles, traffic lights, overpasses, highway exit ramps, and electronic toll collection (ETC) gantries.

Mobile ALPR cameras have been installed on police cars and/or police surveillance vans. The Houston Police Department explained in this 2016 video how it uses the technology. Last year, a blog post discussed ALPR usage in San Diego and its data-sharing with Vigilant Solutions.

What you probably don't know: the auto repossession industry also uses the technology. Many "repo men" have ALPR cameras installed on their vehicles. The data they collect is fed into a massive, nationwide, and privately-owned database which archives license-plate images. Reporters at Motherboard obtained a private demo of the database tool to understand its capabilities.

The demo included tracking a license plate with the vehicle owner's consent. Vice reported:

"This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN's database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network... DRN has more than 600 of these "affiliates" collecting data, according to the contract. These affiliates are paid a monthly bonus for gathering the data..."

[Image: ALPR camera financing offer from the DRN site, September 20, 2019.] Affiliates are repo men and others who both use the database tool and upload images to it. DRN even offers financing to help affiliates buy ALPR cameras.

When consumers fail to pay their bills, lenders and insurance companies have valid reasons to retrieve (or repossess) the unpaid assets. Lenders hire repo men, who then use the DRN database to find vehicles they've been hired to repossess. Those applications are valid, but there are plenty of privacy issues and opportunities for abuse.

Plenty.

First, the data collection is indiscriminate and broad. As repo men (and women) drive through cities and towns to retrieve wanted vehicles, the ALPR cameras mounted on their cars scan all nearby vehicles: both moving and parked vehicles. Scans are not limited solely to vehicles they've been hired to repossess, nor to vehicles of known/suspected criminals. So, innocent consumers are caught in the massive data collection. According to Vice:

"... in fact, the vast majority of vehicles captured are connected to innocent people. DRN claims to have more than 9 billion license plate scans, according to a DRN contract obtained by Motherboard..."

Second, the data is archived forever. That can provide a very detailed history of a vehicle's (or a person's) movements:

"The results popped up: dozens of sightings, spanning years. The system could see photos of the car parked outside the owner's house; the car in another state as its driver went to visit family; and the car parked in other spots in the owner's city... Some showed the car's location as recently as a few weeks before."

Third, to facilitate searches, metadata is automatically attached to the images: GPS coordinates, date, time, day of week, and more. The metadata helps provide a pretty detailed history of each vehicle's -- or person's -- movements: where and when a vehicle (or person) travels, patterns such as which days of the week certain locations are visited, and how long the vehicle (or person) is parked at specific locations. Vice explained:

"The data is easy to query, according to a DRN training video obtained by Motherboard. The system adds a 'tag' to each result, categorising what sort of location the vehicle was likely spotted at, such as 'workplace' or 'home.'"

So, DRN can help users associate specific addresses (work, home, school, doctors, etc.) with specific vehicles. How accurate might those tags be? While they might help repo men and insurance companies spot fraud via out-of-state registered vehicles whose owners are trying to avoid detection and/or higher premiums, they raise other concerns.
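Based on Vice's description, a plate-scan record plus a naive location-tagging heuristic might be sketched as follows. The field names and the tagging rules are my assumptions for illustration, not DRN's actual schema or algorithm:

```python
from dataclasses import dataclass
from datetime import datetime
from collections import Counter

@dataclass
class PlateScan:
    """One sighting uploaded by an affiliate's camera (hypothetical schema)."""
    plate: str
    lat: float
    lon: float
    seen_at: datetime

def tag_location(scan: PlateScan) -> str:
    """Naive heuristic: overnight sightings suggest 'home';
    weekday business hours suggest 'workplace'."""
    hour, weekday = scan.seen_at.hour, scan.seen_at.weekday()
    if hour < 6 or hour >= 22:
        return "home"
    if weekday < 5 and 9 <= hour < 17:
        return "workplace"
    return "other"

scans = [
    PlateScan("ABC123", 40.72, -74.17, datetime(2019, 9, 16, 23, 40)),
    PlateScan("ABC123", 40.74, -74.15, datetime(2019, 9, 17, 10, 5)),
    PlateScan("ABC123", 40.72, -74.17, datetime(2019, 9, 21, 2, 15)),
]
print(Counter(tag_location(s) for s in scans))
```

Even this crude sketch shows why the accuracy question matters: a handful of timestamped sightings is enough to label an address "home" or "workplace," correctly or not.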

Fourth, consumers -- vehicle owners -- have no control over the data describing them. Vehicle owners cannot opt out of the data collection, and they cannot review or correct errors in their DRN profiles.

That sounds out of control to me.

The people whom the archived data directly describes have no say. None. That's a huge concern.

Also, I wonder about single women -- victims of domestic violence -- who have protective orders for their safety. Some states, such as Massachusetts, have Address Confidentiality Programs (ACPs) to protect victims of domestic violence, sexual assault, and stalking. Does DRN accommodate ACP programs? If so, how? If not, why not? How does DRN prevent perpetrators from using its database tool? (Yes, DRN access is an issue. Keep reading.) The Vice report didn't say. Hopefully, future reporting will discuss this.

Fifth, DRN is robust. It can be used to track vehicles in near-real time or in real time:

"DRN charges $20 to look up a license plate, or $70 for a "live alert", according to the contract. With a live alert, a user can enter a license plate they wish to receive updates on; when the DRN system spots the vehicle, it'll send an email to the user with the newly discovered location."

That makes DRN highly appealing to both valid users (e.g., police, repo men, insurance companies, private investigators) and bad actors posing as valid users. Who might those bad actors be? The Electronic Frontier Foundation (EFF) warned:

"Taken in the aggregate, ALPR data can paint an intimate portrait of a driver’s life and even chill First Amendment protected activity. ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of religious worship."

Sixth is the problem of access. Anybody can use DRN. According to Vice:

"... a private investigator, or a repo man, or an insurance company does not need a warrant to search for someone's movements over years; they just need to pay to access the DRN system, or find someone willing to share or leverage their access..."

Users simply need to comply with DRN's policies. The company says that a) users can use its database tool only for certain applications, and b) its contract prohibits users from sharing search results with third parties. We consumers have only DRN's word that it enforces its policies and that users comply. As we have seen with Facebook data breaches, it is easy for bad actors to pose as valid users in order to do end runs around such policies.

What are your opinions of ALPR cameras and DRN?


Facebook To Pay $40 Million To Advertisers To Resolve Allegations of Inflated Advertising Metrics

According to court papers filed last week, Facebook has entered a proposed settlement agreement under which it will pay $40 million to advertisers to resolve allegations in a class-action lawsuit that the social networking platform inflated video advertising engagement metrics. Forbes explained:

"The metrics in question are critical for advertisers on video-based content platforms such as YouTube and Facebook because they show the average amount of time users spend watching their content before clicking away. During the 18 months between February of 2015 and September of 2016, Facebook was incorrectly calculating — and consequently, inflating — two key metrics of this type. Members of the class action are alleging that the faulty metrics led them to spend more money on Facebook ads than they otherwise would have..."

Metrics help advertisers determine whether the ads they paid for are delivering results. Reportedly, the lawsuit took three years, and Facebook denied any wrongdoing. The proposed settlement must be approved by a court. About $12 million of the $40 million total will be used to pay plaintiffs' attorneys' fees.

A brief supporting the proposed settlement provided more details:

" One metric—“Average Duration of Video Viewed”—depicted the average number of seconds users watched the video; another—–“Average Percentage of Video Viewed”—depicted the average percentage of the video ad that users watched... Starting in February 2015, Facebook incorrectly calculated Average Duration of Video Viewed... The Average View Duration error, in turn, led to the Average Percentage Viewed metric also being inflated... Because of the error, the average watch times of video ads were exaggerated for about 18 months... Facebook acknowledges there was an error. But Facebook has argued strenuously that the error was an innocent mistake that Facebook corrected shortly after discovering it. Facebook has also pointed out that some advertisers likely never viewed the erroneous metrics and that because Facebook does not set prices based on the impacted metrics, the error did not lead to overcharges... The settlement provides a $40 million cash fund from Facebook, which constitutes as much as 40% of what Plaintiffs estimate they may realistically have been able to recover had the case made it to trial and had Plaintiffs prevailed. Facebook’s $40 million payment will... also cover the costs of settlement administration, class notice, service awards, and Plaintiffs’ litigation costs24 and attorneys’ fees."

It seems that, besides a multitude of data breaches and privacy snafus, Facebook can't quite operate its core advertising business reliably. What do you think?


FTC To Distribute $31 Million In Refunds To Affected Lifelock Customers

The U.S. Federal Trade Commission (FTC) announced on Tuesday the distribution of about $31 million worth of refunds to certain customers of LifeLock, an identity protection service. The refunds are part of a previously announced settlement agreement to resolve allegations that the identity-theft service violated a 2010 consent order.

LifeLock has featured notable spokespersons, including radio talk-show host Rush Limbaugh, television personality Montel Williams, actress Angie Harmon, and former New York City Mayor Rudy Giuliani, who is now the personal attorney for President Trump.

The FTC announcement explained:

"The refunds stem from a 2015 settlement LifeLock reached with the Commission, which alleged that from 2012 to 2014 LifeLock violated an FTC order that required the company to secure consumers’ personal information and prohibited it from deceptive advertising. The FTC alleged, among other things, that LifeLock failed to establish and maintain a comprehensive information security program to protect users’ sensitive personal information, falsely advertised that it protected consumers’ sensitive data with the same high-level safeguards used by financial institutions, and falsely claimed it provided 24/7/365 alerts “as soon as” it received any indication a consumer’s identity was being used."

The 2015 settlement agreement with the FTC required LifeLock to pay $100 million to affected customers. About $68 million has been paid to customers who were part of a class-action lawsuit. The FTC is using the remaining money to provide refunds to consumers who were LifeLock members between 2012 and 2014 but did not receive a payment from the class-action settlement.

The FTC expects to mail about one million refund checks worth about $29 each.

If you are a LifeLock customer and find this checkered history bothersome, Consumer Reports has some recommendations about what you can do instead. It might save you some money, too.


3 Countries Sent A Joint Letter Asking Facebook To Delay End-To-End Encryption Until Law Enforcement Has Back-Door Access. 58 Concerned Organizations Responded

Plenty of privacy and surveillance news recently. Last week, the governments of three countries sent a joint, open letter to Facebook asking the social media platform to delay implementation of end-to-end encryption in its messaging apps until back-door access can be provided for law enforcement.

Buzzfeed News published the joint, open letter by U.S. Attorney General William Barr, United Kingdom Home Secretary Priti Patel, acting U.S. Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton. The letter, dated October 4th, was sent to Mark Zuckerberg, the Chief Executive Officer of Facebook. It read in part:

"OPEN LETTER: FACEBOOK’S “PRIVACY FIRST” PROPOSALS

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

In your post of 6 March 2019, “A Privacy-Focused Vision for Social Networking,” you acknowledged that “there are real safety concerns to address before we can implement end-to-end encryption across all our messaging services.” You stated that “we have a responsibility to work with law enforcement and to help prevent” the use of Facebook for things like child sexual exploitation, terrorism, and extortion. We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.

We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications. We also respect promises made by technology companies to protect users’ data. Law abiding citizens have a legitimate expectation that their privacy will be protected. However, as your March blog post recognized, we must ensure that technology companies protect their users and others affected by their users’ online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world..."

The open, joint letter is also available on the United Kingdom government site. Mr. Zuckerberg's complete March 6, 2019 post is available here.

Earlier this year, the U.S. Federal Bureau of Investigation (FBI) issued a Request For Proposals (RFP) seeking quotes from technology companies to build a real-time social media monitoring tool. It seems such a tool would have limited utility without back-door access to encrypted social media accounts.

In 2016, the FBI went to court to force Apple Inc. to build "back door" software to unlock an attacker's iPhone. Apple refused, since such software would provide access to any iPhone, not only that particular smartphone. Ultimately, the FBI found an offshore tech company to unlock the device. Later that year, then-FBI Director James Comey suggested a national discussion about encryption versus safety. It seems the country still hasn't had that conversation.

According to BuzzFeed, Facebook's initial response to the joint letter:

"In a three paragraph statement, Facebook said it strongly opposes government attempts to build backdoors."

We shall see if Facebook holds steady to that position. Privacy advocates quickly weighed in. The Electronic Frontier Foundation (EFF) wrote:

"This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people. Facebook should not comply. The letter comes in concert with the signing of a new agreement between the US and UK to provide access to allow law enforcement in one jurisdiction to more easily obtain electronic data stored in the other jurisdiction. But the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation... The letter focuses on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people—including journalists, human rights activists, and those at risk of abuse by intimate partners—use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations... What’s more, the backdoors into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, who might seek to spy on dissidents..."

The new agreement the EFF referred to was explained in this United Kingdom announcement:

"The world-first UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather than through governments, which can take years... The current process, which see requests for communications data from law enforcement agencies submitted and approved by central governments via Mutual Legal Assistance (MLA), can often take anywhere from six months to two years. Once in place, the Agreement will see the process reduced to a matter of weeks or even days."

"The Agreement will each year accelerate dozens of complex investigations into suspected terrorists and paedophiles... The US will have reciprocal access, under a US court order, to data from UK communication service providers. The UK has obtained assurances which are in line with the government’s continued opposition to the death penalty in all circumstances..."

On Friday, a group of 58 privacy advocates and concerned organizations from several countries sent a joint letter to Facebook regarding its end-to-end encryption plans. The Center For Democracy & Technology (CDT) posted the group's letter:

"Given the remarkable reach of Facebook’s messaging services, ensuring default end-to-end security will provide a substantial boon to worldwide communications freedom, to public safety, and to democratic values, and we urge you to proceed with your plans to encrypt messaging through Facebook products and services. We encourage you to resist calls to create so-called “backdoors” or “exceptional access” to the content of users’ messages, which will fundamentally weaken encryption and the privacy and security of all users."

It seems wise to have a conversation discussing all of the advantages and disadvantages, and not to focus selectively upon some serious crimes while ignoring other significant risks, since back-door software can be abused like any other technology. What are your opinions?


Transcripts Of Internal Facebook Meetings Reveal True Views Of The Company And Its CEO

It's always good for consumers -- and customers -- to know a company's positions on key issues. Thanks to The Verge, we now know more about Facebook's views. Portions of the leaked transcripts included statements by Mr. Zuckerberg, Facebook's CEO, during internal business meetings. The Verge explained the transcripts:

"In two July meetings, Zuckerberg rallied his employees against critics, competitors, and Senator Elizabeth Warren, among others..."

Portions of statements by Mr. Zuckerberg included:

"I’m certainly more worried that someone is going to try to break up our company... So there might be a political movement where people are angry at the tech companies or are worried about concentration or worried about different issues and worried that they’re not being handled well. That doesn’t mean that, even if there’s anger and that you have someone like Elizabeth Warren who thinks that the right answer is to break up the companies... I mean, if she gets elected president, then I would bet that we will have a legal challenge, and I would bet that we will win the legal challenge... breaking up these companies, whether it’s Facebook or Google or Amazon, is not actually going to solve the issues. And, you know, it doesn’t make election interference less likely. It makes it more likely because now the companies can’t coordinate and work together. It doesn’t make any of the hate speech or issues like that less likely. It makes it more likely..."

An October 1st post by Mr. Zuckerberg confirmed the transcripts. Earlier this year, Mr. Zuckerberg called for more government regulation. Given his latest comments, we now know his true views.

Also, C/Net reported:

"In an interview with the Today show that aired Wednesday, Instagram CEO Adam Mosseri said he generally agrees with the comments Zuckerberg made during the meetings, adding that the company's large size can help it tackle issues like hate speech and election interference on social media."

The claim by Mosseri, Zuckerberg and others that their company needs to be even bigger to tackle these issues is, frankly, laughable. Consumers are concerned about several different issues: privacy, hacked and/or cloned social media accounts, costs, consumer choice, surveillance, data collection we can't opt out of, the inability to delete Facebook and other mobile apps, and election interference. A recent study found that consumers want social sites to collect less data.

Industry consolidation and monopolies/oligopolies usually result in reduced consumer choice and higher prices. Prior studies have documented this. The lack of ISP competition in key markets means consumers in the United States pay more for broadband and get slower speeds compared to other countries. At the U.S. Federal Trade Commission's "Privacy, Big Data, And Competition" hearing last year, the developers of the Brave web browser submitted this feedback:

""First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate..."

Facebook is already huge, and its massive size hasn't stopped multiple data breaches and privacy snafus. Rather, the snafus have demonstrated an inability (or unwillingness?) by the company and its executives to implement solutions that truly protect users' sensitive information. Mr. Zuckerberg has repeatedly apologized, but nothing ever seems to change. Given the statements in the transcripts, his apologies seem even less believable and less credible than before.

Alarmingly, Facebook has instead sought more ways to share users' sensitive data. In August of 2018, reports surfaced that Facebook had approached several major banks about sharing detailed financial information about consumers in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Also last year, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. Not good.

Plus, the larger problem is this: Facebook isn't just a social network. It is also an advertiser, publishing platform, dating service, and wannabe payments service. There are several antitrust investigations underway involving Facebook. Remember, Facebook tracks both users and non-users around the internet. So, claims about it needing to be bigger to solve problems are malarkey.

And Mr. Zuckerberg's statements seem to mischaracterize Senator Warren's positions by conflating some issues and ignoring (or minimizing) others. Here is what Senator Warren actually stated in March 2019:

"America’s big tech companies provide valuable products but also wield enormous power over our digital lives. Nearly half of all e-commerce goes through Amazon. More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook. As these companies have grown larger and more powerful, they have used their resources and control over the way we use the Internet to squash small businesses and innovation, and substitute their own financial interests for the broader interests of the American people... Weak antitrust enforcement has led to a dramatic reduction in competition and innovation in the tech sector. Venture capitalists are now hesitant to fund new startups to compete with these big tech companies because it’s so easy for the big companies to either snap up growing competitors or drive them out of business. The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012... To restore the balance of power in our democracy, to promote competition, and to ensure that the next generation of technology innovation is as vibrant as the last, it’s time to break up our biggest tech companies..."

Senator Warren listed several examples:

"Using Mergers to Limit Competition: Facebook has purchased potential competitors Instagram and WhatsApp. Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate. Google has snapped up the mapping company Waze and the ad company DoubleClick... Using Proprietary Marketplaces to Limit Competition: Many big tech companies own a marketplace — where buyers and sellers transact — while also participating on the marketplace. This can create a conflict of interest that undermines competition. Amazon crushes small companies by copying the goods they sell on the Amazon Marketplace and then selling its own branded version. Google allegedly snuffed out a competing small search engine by demoting its content on its search algorithm, and it has favored its own restaurant ratings over those of Yelp."

Mr. Zuckerberg would be more credible if he addressed each of these examples. In the transcripts from The Verge, he didn't.

And there is plenty of blame to spread around, both for executives at tech companies and for antitrust regulators in government. Readers wanting to learn more can read about hijacked product pages and other chaos among sellers on the Amazon platform. There's plenty to fault tech companies for, and it isn't a political attack.

There have been plenty of operational failures, data-security failures, and willful sharing of the consumer data collected. What are your opinions of the transcripts?


Vancouver, Canada Welcomed The 'Tesla Of The Cruise Industry.' Ports In France Consider Bans For Certain Cruise Ships

For drivers concerned about the environment and pollution, the automobile industry has offered hybrids (which run on both gasoline and electric battery power) and fully electric vehicles (which run solely on electric battery power). The same technology trend is underway within the cruise industry.

On September 26, the Port of Vancouver welcomed the MS Roald Amundsen. Some call this cruise ship the "Tesla of the cruise industry." The International Business Times explained:

"MS Roald Amundsen can be called Tesla of the cruise industry as it is similar to the electrically powered Tesla car that set off a revolution in the auto sector by running on batteries... The state of the art ship was unveiled earlier this year by Scandinavian cruise operator Hurtigruten. The cruise ship is one of the most sustainable cruise vessels with the distinction of being one of the two hybrid-electric cruise ships in the world. MS Roald Amundsen utilizes hybrid technology to save fuel and reduce carbon dioxide emissions by 20 percent."

With 15 cruise ships, Hurtigruten offers sailings to Norway, Iceland, Alaska, the Arctic, Antarctica, Europe, South America, and more. Named after the first explorer to reach the South Pole, the MS Roald Amundsen carries about 530 passengers.

While some cruise ships already use onboard solar panels to reduce fuel consumption, this is the first hybrid-electric cruise ship. It is an important step forward, proving that large ships can be powered in this manner.

Several ships in Royal Caribbean Cruise Line's fleet, including the Oasis of the Seas, have been outfitted with solar panels. The image on the right provides a view of the solar panels on the Celebrity Solstice cruise ship while it was docked in Auckland, New Zealand in March 2019. The panels are small and let sunlight through.

The Vancouver Is Awesome site explained why the city gave the MS Roald Amundsen special attention:

"... the Vancouver Fraser Port Authority, the federal agency responsible for the stewardship of the port, has set its vision to be the world’s most sustainable port. As a part of this vision, the port authority works to ensure the highest level of environmental protection is met in and around the Port of Vancouver. This commitment resulted in the port authority being the first in Canada and third in the world to offer shore power, an emissions-reducing initiative, for cruise ships. That said, a shared commitment to sustainability isn’t the only thing Hurtigruten has in common with our awesome city... The hybrid-electric battery used in the MS Roald Amundsen was created by Vancouver company, Corvus Energy."

Reportedly, the MS Roald Amundsen can operate for brief periods on battery power alone, resulting in zero fuel usage and zero emissions. The Port of Vancouver's website explains its Approach to Sustainability policy:

"We are on a journey to meet our vision to become the world’s most sustainable port. In 2010 we embarked on a two-year scenario planning process with stakeholders called Port 2050, to improve our understanding of what the region may look like in the future... We believe a sustainable port delivers economic prosperity through trade, maintains a healthy environment, and enables thriving communities, through meaningful dialogue, shared aspirations and collective accountability. Our definition of sustainability includes 10 areas of focus and 22 statements of success..."

I encourage everyone to read the Port of Vancouver's 22 statements of success for a healthy environment and sustainable port. Selected statements from that list:

"Healthy ecosystems:
8) Takes a holistic approach to protecting and improving air, land and water quality to promote biodiversity and human health
9) Champions coordinated management programs to protect habitats and species. Climate action
10) Is a leader among ports in energy conservation and alternative energy to minimize greenhouse gas emissions..."

"Responsible practices:
12) Improves the environmental, social and economic performance of infrastructure through design, construction and operational practices
13) Supports responsible practices throughout the global supply chain..."

"Aboriginal relationships:
18) Respects First Nations’ traditional territories and value traditional knowledge
19) Embraces and celebrates Aboriginal culture and history
20) Understands and considers contemporary interests and aspirations..."

In separate but related news, government officials in the French Riviera city of Cannes are considering a ban of cruise ships to curb pollution. The Travel Pulse site reported:

"The ban would apply to passenger vessels that do not meet a 0.1 percent sulfur cap in their fuel emissions. Any cruise ship that attempted to enter the port that did not meet the higher standards would be turned away without allowing passengers to disembark."

During 2018, about 370,000 cruise ship passengers visited Cannes, making it the fourth busiest port in France. Officials are concerned about pollution. Other European ports are considering similar bans:

"Another French city, Saint-Raphael, has also instituted similar rules to curb the pollution of the water and air around the city. Other European ports such as Santorini and Venice have also cited cruise ships as a significant cause of over-tourism across the region."

If you live and/or work in a port city, it seems worthwhile to ask your local government or port authority what it is doing about sustainability and pollution. The video below explains some of the features in this new "expedition ship" with itineraries and activities that focus upon science:


Video courtesy of Hurtigruten

[Editor's note: this post was updated to include a photo of solar panels on the Celebrity Solstice cruise ship.]


Millions of Americans’ Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek.

[Editor's note: today's guest blog post, by reporters at ProPublica, explores data security issues within the healthcare industry and its outsourcing vendors. It is reprinted with permission.]

By Jack Gillum, Jeff Kao and Jeff Larson - ProPublica

Medical images and health data belonging to millions of Americans, including X-rays, MRIs and CT scans, are sitting unprotected on the internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the U.S. and millions more around the world. In some cases, a snoop could use free software programs — or just a typical web browser — to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers — computers that are used to store and retrieve medical data — in the U.S. that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers and mobile X-ray services.

The insecure servers we uncovered add to a growing list of medical records systems that have been compromised in recent years. Unlike some of the more infamous recent security breaches, in which hackers circumvented a company’s cyber defenses, these records were often stored on servers that lacked the security precautions that long ago became standard for businesses and government agencies.

"It’s not even hacking. It’s walking into an open door," said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them of what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of U.S. company MobilexUSA displayed the names of more than a million patients — all by typing in a simple data query. Their dates of birth, doctors and procedures were also included.

Alerted by ProPublica, MobilexUSA tightened its security earlier this month. The company takes mobile X-rays and provides imaging services to nursing homes, rehabilitation hospitals, hospice agencies and prisons. "We promptly mitigated the potential vulnerabilities identified by ProPublica and immediately began an ongoing, thorough investigation," MobilexUSA’s parent company said in a statement.

Another imaging system, tied to a physician in Los Angeles, allowed anyone on the internet to see his patients’ echocardiograms. (The doctor did not respond to inquiries from ProPublica.) All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates and, in some cases, Social Security numbers.

Experts say it’s hard to pinpoint who’s to blame for the failure to protect the privacy of medical images. Under U.S. law, health care providers and their business associates are legally accountable for securing the privacy of patient data. Several experts said such exposure of patient data could violate the Health Insurance Portability and Accountability Act, or HIPAA, the 1996 law that requires health care providers to keep Americans’ health data confidential and secure.

Although ProPublica found no evidence that patient data was copied from these systems and published elsewhere, the consequences of unauthorized access to such information could be devastating. "Medical records are one of the most important areas for privacy because they’re so sensitive. Medical knowledge can be used against you in malicious ways: to shame people, to blackmail people," said Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, a digital-rights group.

"This is so utterly irresponsible," he said.

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customer’s computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. "Suddenly, medical security has become a do-it-yourself project," Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk, a German public broadcaster, after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the U.S.

Schrader found five servers in Germany and 187 in the U.S. that made patients’ records available without a password. ProPublica and Bayerischer Rundfunk also scanned Internet Protocol addresses and identified, when possible, which medical provider they belonged to.

ProPublica independently determined how many patients could be affected in America, and found some servers ran outdated operating systems with known security vulnerabilities. Schrader said that data from more than 13.7 million medical tests in the U.S. were available online, including more than 400,000 in which X-rays and other images could be downloaded.

The privacy problem traces back to the medical profession’s shift from analog to digital technology. Long gone are the days when film X-rays were displayed on fluorescent light boards. Today, imaging studies can be instantly uploaded to servers and viewed over the internet by doctors in their offices.

In the early days of this technology, as with much of the internet, little thought was given to security. The passage of HIPAA required patient information to be protected from unauthorized access. Three years later, the medical imaging industry published its first security standards.

Our reporting indicated that large hospital chains and academic medical centers did put security protections in place. Most of the cases of unprotected data we found involved independent radiologists, medical imaging centers or archiving services.

One German patient, Katharina Gaspari, got an MRI three years ago and said she normally trusts her doctors. But after Bayerischer Rundfunk showed Gaspari her images available online, she said: "Now, I am not sure if I still can." The German system that stored her records was locked down last week.

We found that some systems used to archive medical images also lacked security precautions. Denver-based Offsite Image left open the names and other details of more than 340,000 human and veterinary records, including those of a large cat named "Marshmellow," ProPublica found. An Offsite Image executive told ProPublica the company charges clients $50 for access to the site and then $1 per study. "Your data is safe and secure with us," Offsite Image’s website says.

The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.

"We were just never even aware that there was a possibility that could even happen," Nelms said.

In 1985, an industry group that included radiologists and makers of imaging equipment created a standard for medical imaging software. The standard, which is now called DICOM, spelled out how medical imaging devices talk to each other and share information.

We shared our findings with officials from the Medical Imaging & Technology Alliance, the group that oversees the standard. They acknowledged that there were hundreds of servers with an open connection on the internet, but suggested the blame lay with the people who were running them.

"Even though it is a comparatively small number," the organization said in a statement, "it may be possible that some of those systems may contain patient records. Those likely represent bad configuration choices on the part of those operating those systems."

Meeting minutes from 2017 show that a working group on security learned of Pianykh’s findings and suggested meeting with him to discuss them further. That “action item” was listed for several months, but Pianykh said he was never contacted. The medical imaging alliance told ProPublica last week that the group did not meet with Pianykh because the concerns he raised were sufficiently addressed in his article. They said the committee concluded its security standards were not flawed.

Pianykh said that misses the point. It’s not a lack of standards; it’s that medical device makers don’t follow them. “Medical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice,” Pianykh wrote in 2016.

ProPublica’s latest findings follow several other major breaches. In 2015, U.S. health insurer Anthem Inc. revealed that private data belonging to more than 78 million people was exposed in a hack. In the last two years, U.S. officials have reported that more than 40 million people have had their medical data compromised, according to an analysis of records from the U.S. Department of Health and Human Services.

Joy Pritts, a former HHS privacy official, said the government isn’t tough enough in policing patient privacy breaches. She cited an April announcement from HHS that lowered the maximum annual fine, from $1.5 million to $250,000, for what’s known as “corrected willful neglect” — the result of conscious failures or reckless indifference that a company tries to fix. She said that large firms would not only consider those fines as just the cost of doing business, but that they could also negotiate with the government to get them reduced. A ProPublica examination in 2015 found few consequences for repeat HIPAA offenders.

A spokeswoman for HHS’ Office for Civil Rights, which enforces HIPAA violations, said it wouldn’t comment on open or potential investigations.

"What we typically see in the health care industry is that there is Band-Aid upon Band-Aid applied" to legacy computer systems, said Singh, the cybersecurity expert. She said it’s a “shared responsibility” among manufacturers, standards makers and hospitals to ensure computer servers are secured.

"It’s 2019," she said. "There’s no reason for this."

How Do I Know if My Medical Imaging Data is Secure?

If you are a patient:

If you have had a medical imaging scan (e.g., X-ray, CT scan, MRI, ultrasound), ask the health care provider that did the scan, or your doctor, whether access to your images requires a login and password. Ask your doctor whether their office, or the medical imaging provider to which they refer patients, conducts a regular security assessment as required by HIPAA.

If you are a medical imaging provider or doctor’s office:

Researchers have found that picture archiving and communication systems (PACS) servers implementing the DICOM standard may be at risk if they are connected directly to the internet without a VPN or firewall, or if access to them does not require a secure password. You or your IT staff should make sure that your PACS server cannot be accessed via the internet without a VPN connection and password. If you know the IP address of your PACS server but are not sure whether it is (or has been) accessible via the internet, please reach out to us at medicalimaging@propublica.org.
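As a rough first check (not a substitute for a proper security assessment), a short script can tell you whether a host accepts connections on the ports PACS servers commonly use. The port list and the `check_pacs_exposure` helper below are illustrative assumptions, not part of any standard tooling; a reachable port does not by itself prove DICOM data is exposed (a full test would attempt a DICOM association, for example with a library such as pynetdicom), but an unreachable one is a good sign.

```python
import socket

# Ports commonly associated with DICOM/PACS traffic (assumption: your
# server may be configured to use different ports).
DICOM_PORTS = (104, 11112)

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_pacs_exposure(host: str, timeout: float = 3.0) -> dict:
    """Map each common DICOM port to whether `host` accepts connections on it.

    Run this from OUTSIDE your network (e.g., a home connection) against
    your PACS server's public IP; any True value means the machine is
    reachable from the internet and belongs behind a firewall or VPN.
    """
    return {port: is_port_reachable(host, port, timeout) for port in DICOM_PORTS}
```

If every port comes back unreachable from outside your network, the server is at least not directly exposed; if any comes back reachable, involve your IT staff before assuming the worst, since the result only signals reachability, not data access.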

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Study: Anonymized Data Cannot Be Totally Anonymous. And 'Homomorphic Encryption' Explained

Many online users have encountered situations where companies collect data with the promise that it is safe because it has been anonymized: all personally identifiable data elements have been removed. How safe is that really? A recent study reinforced earlier findings that it isn't as safe as promised. Anonymized data can be de-anonymized, that is, re-identified and linked back to individual persons.

The Guardian UK reported:

"... data can be deanonymised in a number of ways. In 2008, an anonymised Netflix data set of film ratings was deanonymised by comparing the ratings with public scores on the IMDb film website; in 2014, the home addresses of New York taxi drivers were uncovered from an anonymous data set of individual trips in the city; and an attempt by Australia’s health department to offer anonymous medical billing data could be reidentified by cross-referencing “mundane facts” such as the year of birth for older mothers and their children, or for mothers with many children. Now researchers from Belgium’s Université catholique de Louvain (UCLouvain) and Imperial College London have built a model to estimate how easy it would be to deanonymise any arbitrary dataset. A dataset with 15 demographic attributes, for instance, “would render 99.98% of people in Massachusetts unique”. And for smaller populations, it gets easier..."

According to the U.S. Census Bureau, the population of Massachusetts was about 6.9 million on July 1, 2018. How did this de-anonymization problem happen? Scientific American explained:

"Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model."
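The detective scenario above is, at bottom, a database join. A toy sketch (all records and field names fabricated for illustration) shows how an "anonymized" table is linked back to names by joining it against an auxiliary public dataset on shared attributes:

```python
# Fabricated example: re-identifying an "anonymized" medical dataset by
# joining it against a public dataset on quasi-identifiers.

anonymized = [  # names stripped, but quasi-identifiers retained
    {"zip": "10001", "birth_year": 1988, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "10002", "birth_year": 1990, "sex": "F", "diagnosis": "asthma"},
]

public = [  # e.g., a voter roll or a scraped social-media profile dump
    {"name": "John Doe", "zip": "10001", "birth_year": 1988, "sex": "M"},
    {"name": "Jane Roe", "zip": "10002", "birth_year": 1990, "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, public_rows, keys=QUASI_IDENTIFIERS):
    """Join the two datasets on quasi-identifiers.

    A combination of attributes that matches exactly one public record
    confidently names the person behind the "anonymous" row.
    """
    index = {}
    for row in public_rows:
        index.setdefault(tuple(row[k] for k in keys), []).append(row["name"])
    results = []
    for row in anon_rows:
        matches = index.get(tuple(row[k] for k in keys), [])
        name = matches[0] if len(matches) == 1 else None  # unique match only
        results.append((name, row["diagnosis"]))
    return results
```

The attack needs no cryptography and no special access: any party holding both datasets can run the join, which is why stripping names alone does not anonymize data.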

Data brokers, including credit-reporting agencies, have collected massive numbers of demographic data attributes about nearly every person. According to this 2018 report, Acxiom has compiled about 5,000 data elements for each of 700 million persons worldwide.

It's reasonable to assume that credit-reporting agencies and other data brokers have similar capabilities. So, data brokers' massive databases can make it relatively easy to re-identify data that had supposedly been anonymized. This means consumers don't have the privacy they were promised.

What's the solution? Researchers suggest that data brokers must develop new anonymization methods, and rigorously test them to ensure anonymization truly works. And data brokers must be held to higher data security standards.
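One baseline for the rigorous testing researchers call for is a k-anonymity check: count how many records share each combination of quasi-identifiers, since any group of size one is trivially re-identifiable. A minimal sketch (the function name and field names are illustrative, not from the study):

```python
from collections import Counter

def k_anonymity_report(rows, quasi_identifiers):
    """Return (k, unique_fraction) for a dataset.

    k is the size of the smallest group of records sharing a
    quasi-identifier combination (a dataset is "k-anonymous" only if
    every group has at least k members); unique_fraction is the share
    of records that are one-of-a-kind and thus trivially linkable.
    """
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    k = min(counts.values())
    unique_fraction = sum(1 for c in counts.values() if c == 1) / len(rows)
    return k, unique_fraction

# Fabricated example: two records share a combination, one stands alone.
sample = [
    {"zip": "02139", "birth_year": 1985},
    {"zip": "02139", "birth_year": 1985},
    {"zip": "02140", "birth_year": 1990},
]
```

Here `k_anonymity_report(sample, ("zip", "birth_year"))` reports k = 1 because the third record is unique, which is exactly the signal the study found at scale: with 15 attributes, nearly every record stands alone.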

Any legislation serious about protecting consumers' privacy must address this, too. What do you think?


The Extortion Economy: How Insurance Companies Are Fueling a Rise in Ransomware Attacks

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which discusses the intersection of cyberattacks, ransomware, and the insurance industry. It is reprinted with permission.]

By Renee Dudley, ProPublica

On June 24, the mayor and council of Lake City, Florida, gathered in an emergency session to decide how to resolve a ransomware attack that had locked the city’s computer files for the preceding fortnight. Following the Pledge of Allegiance, Mayor Stephen Witt led an invocation. “Our heavenly father,” Witt said, “we ask for your guidance today, that we do what’s best for our city and our community.”

Witt and the council members also sought guidance from City Manager Joseph Helfenberger. He recommended that the city allow its cyber insurer, Beazley, an underwriter at Lloyd’s of London, to pay the ransom of 42 bitcoin, then worth about $460,000. Lake City, which was covered for ransomware under its cyber-insurance policy, would only be responsible for a $10,000 deductible. In exchange for the ransom, the hacker would provide a key to unlock the files.

“If this process works, it would save the city substantially in both time and money,” Helfenberger told them.

Without asking questions or deliberating, the mayor and the council unanimously approved paying the ransom. The six-figure payment, one of several that U.S. cities have handed over to hackers in recent months to retrieve files, made national headlines.

Left unmentioned in Helfenberger’s briefing was that the city’s IT staff, together with an outside vendor, had been pursuing an alternative approach. Since the attack, they had been attempting to recover backup files that were deleted during the incident. On Beazley’s recommendation, the city chose to pay the ransom because the cost of a prolonged recovery from backups would have exceeded its $1 million coverage limit, and because it wanted to resume normal services as quickly as possible.

“Our insurance company made [the decision] for us,” city spokesman Michael Lee, a sergeant in the Lake City Police Department, said. “At the end of the day, it really boils down to a business decision on the insurance side of things: them looking at how much is it going to cost to fix it ourselves and how much is it going to cost to pay the ransom.”

The mayor, Witt, said in an interview that he was aware of the efforts to recover backup files but preferred to have the insurer pay the ransom because it was less expensive for the city. “We pay a $10,000 deductible, and we get back to business, hopefully,” he said. “Or we go, ‘No, we’re not going to do that,’ then we spend money we don’t have to just get back up and running. And so to me, it wasn’t a pleasant decision, but it was the only decision.”

Ransomware is proliferating across America, disabling computer systems of corporations, city governments, schools and police departments. This month, attackers seeking millions of dollars encrypted the files of 22 Texas municipalities. Overlooked in the ransomware spree is the role of an industry that is both fueling and benefiting from it: insurance. In recent years, cyber insurance sold by domestic and foreign companies has grown into a market estimated at $7 billion to $8 billion a year in the U.S. alone, according to Fred Eslami, an associate director at AM Best, a credit rating agency that focuses on the insurance industry. While insurers do not release information about ransom payments, ProPublica has found that they often accommodate attackers’ demands, even when alternatives such as saved backup files may be available.

The FBI and security researchers say paying ransoms contributes to the profitability and spread of cybercrime and in some cases may ultimately be funding terrorist regimes. But for insurers, it makes financial sense, industry insiders said. It holds down claim costs by avoiding expenses such as covering lost revenue from snarled services and ongoing fees for consultants aiding in data recovery. And, by rewarding hackers, it encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.

“The onus isn’t on the insurance company to stop the criminal, that’s not their mission. Their objective is to help you get back to business. But it does beg the question, when you pay out to these criminals, what happens in the future?” said Loretta Worters, spokeswoman for the Insurance Information Institute, a nonprofit industry group based in New York. Attackers “see the deep pockets. You’ve got the insurance industry that’s going to pay out, this is great.”

A spokesperson for Lloyd’s, which underwrites about one-third of the global cyber-insurance market, said that coverage is designed to mitigate losses and protect against future attacks, and that victims decide whether to pay ransoms. “Coverage is likely to include, in the event of an attack, access to experts who will help repair the damage caused by any cyberattack and ensure any weaknesses in a company’s cyberprotection are eliminated,” the spokesperson said. “A decision whether to pay a ransom will fall to the company or individual that has been attacked.” Beazley declined comment.

Fabian Wosar, chief technology officer for anti-virus provider Emsisoft, said he recently consulted for one U.S. corporation that was attacked by ransomware. After it was determined that restoring files from backups would take weeks, the company’s insurer pressured it to pay the ransom, he said. The insurer wanted to avoid having to reimburse the victim for revenues lost as a result of service interruptions during recovery of backup files, as its coverage required, Wosar said. The company agreed to have the insurer pay the approximately $100,000 ransom. But the decryptor obtained from the attacker in return didn’t work properly and Wosar was called in to fix it, which he did. He declined to identify the client and the insurer, which also covered his services.

“Paying the ransom was a lot cheaper for the insurer,” he said. “Cyber insurance is what’s keeping ransomware alive today. It’s a perverted relationship. They will pay anything, as long as it is cheaper than the loss of revenue they have to cover otherwise.”

Worters, the industry spokeswoman, said ransom payments aren’t the only example of insurers saving money by enriching criminals. For instance, the companies may pay fraudulent claims — for example, from a policyholder who sets a car on fire to collect auto insurance — when it’s cheaper than pursuing criminal charges. “You don’t want to perpetuate people committing fraud,” she said. “But there are some times, quite honestly, when companies say: ’This fraud is not a ton of money. We are better off paying this.’ ... It’s much like the ransomware, where you’re paying all these experts and lawyers, and it becomes this huge thing.”

Insurers approve or recommend paying a ransom when doing so is likely to minimize costs by restoring operations quickly, regulators said. As in Lake City, recovering files from backups can be arduous and time-consuming, potentially leaving insurers on the hook for costs ranging from employee overtime to crisis management public relations efforts, they said.

“They’re going to look at their overall claim and dollar exposure and try to minimize their losses,” said Eric Nordman, a former director of the regulatory services division of the National Association of Insurance Commissioners, or NAIC, the organization of state insurance regulators. “If it’s more expeditious to pay the ransom and get the key to unlock it, then that’s what they’ll do.”

As insurance companies have approved six- and seven-figure ransom payments over the past year, criminals’ demands have climbed. The average ransom payment among clients of Coveware, a Connecticut firm that specializes in ransomware cases, is about $36,000, according to its quarterly report released in July, up sixfold from last October. Josh Zelonis, a principal analyst for the Massachusetts-based research company Forrester, said the increase in payments by cyber insurers has correlated with a resurgence in ransomware after it had started to fall out of favor in the criminal world about two years ago.

One cybersecurity company executive said his firm has been told by the FBI that hackers are specifically extorting American companies that they know have cyber insurance. After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware, Wosar said. Hackers could also identify insured targets from public filings; the Securities and Exchange Commission suggests that public companies consider reporting “insurance coverage relating to cybersecurity incidents.”

Even when the attackers don’t know that insurers are footing the bill, the repeated capitulations to their demands give them confidence to ask for ever-higher sums, said Thomas Hofmann, vice president of intelligence at Flashpoint, a cyber-risk intelligence firm that works with ransomware victims.

Ransom demands used to be “a lot less,” said Worters, the industry spokeswoman. But if hackers think they can get more, “they’re going to ask for more. So that’s what’s happening. ... That’s certainly a concern.”

In the past year, dozens of public entities in the U.S. have been paralyzed by ransomware. Many have paid the ransoms, either from their own funds or through insurance, but others have refused on the grounds that it’s immoral to reward criminals. Rather than pay a $76,000 ransom in May, the city of Baltimore — which did not have cyber insurance — sacrificed more than $5.3 million to date in recovery expenses, a spokesman for the mayor said this month. Similarly, Atlanta, which did have a cyber policy, spurned a $51,000 ransom demand last year and has spent about $8.5 million responding to the attack and recovering files, a spokesman said this month. Spurred by those and other cities, the U.S. Conference of Mayors adopted a resolution this summer not to pay ransoms.

Still, many public agencies are delighted to have their insurers cover ransoms, especially when the ransomware has also encrypted backup files. Johannesburg-Lewiston Area Schools, a school district in Michigan, faced that predicament after being attacked in October. Beazley, the insurer handling the claim, helped the district conduct a cost-benefit analysis, which found that paying a ransom was preferable to rebuilding the systems from scratch, said Superintendent Kathleen Xenakis-Makowski.

“They sat down with our technology director and said, ‘This is what’s affected, and this is what it would take to re-create,’” said Xenakis-Makowski, who has since spoken at conferences for school officials about the importance of having cyber insurance. She said the district did not discuss the ransom decision publicly at the time in part to avoid a prolonged debate over the ethics of paying. “There’s just certain things you have to do to make things work,” she said.

Ransomware is one of the most common cybercrimes in the world. Although it is often cast as a foreign problem, because hacks tend to originate from countries such as Russia and Iran, ProPublica has found that American industries have fostered its proliferation. We reported in May on two ransomware data recovery firms that purported to use their own technology to disable ransomware but in reality often just paid the attackers. One of the firms, Proven Data, of Elmsford, New York, tells victims on its website that insurance is likely to cover the cost of ransomware recovery.

Lloyd’s of London, the world’s largest specialty insurance market, said it pioneered the first cyber liability policy in 1999. Today, it offers cyber coverage through 74 syndicates — formed by one or more Lloyd’s members such as Beazley joining together — that provide capital and accept and spread risk. Eighty percent of the cyber insurance written at Lloyd’s is for entities based in the U.S. The Lloyd’s market is famous for insuring complex, high-risk and unusual exposures, such as climate-change consequences, Arctic explorers and Bruce Springsteen’s voice.

Many insurers were initially reluctant to cover cyber disasters, in part because of the lack of reliable actuarial data. When they protect customers against traditional risks such as fires, floods and auto accidents, they price policies based on authoritative information from national and industry sources. But, as Lloyd’s noted in a 2017 report, “there are no equivalent sources for cyber-risk,” and the data used to set premiums is collected from the internet. Such publicly available data is likely to underestimate the potential financial impact of ransomware for an insurer. According to a report by global consulting firm PwC, both insurers and victimized companies are reluctant to disclose breaches because of concerns over loss of competitive advantage or reputational damage.

Despite the uncertainty over pricing, dozens of carriers eventually followed Lloyd’s in embracing cyber coverage. Other lines of insurance are expected to shrink in the coming decades, said Nordman, the former regulator. Self-driving cars, for example, are expected to lead to significantly fewer car accidents and a corresponding drop in premiums, according to estimates. Insurers are seeking new areas of opportunity, and “cyber is one of the small number of lines that is actually growing,” Nordman said.

Driven partly by the spread of ransomware, the cyber insurance market has grown rapidly. Between 2015 and 2017, total U.S. cyber premiums written by insurers that reported to the NAIC doubled to an estimated $3.1 billion, according to the most recent data available.

Cyber policies have been more profitable for insurers than other lines of insurance. The loss ratio for U.S. cyber policies was about 35% in 2018, according to a report by Aon, a London-based professional services firm. In other words, for every dollar in premiums collected from policyholders, insurers paid out roughly 35 cents in claims. That compares to a loss ratio of about 62% across all property and casualty insurance, according to data compiled by the NAIC of insurers that report to them. Besides ransomware, cyber insurance frequently covers costs for claims related to data breaches, identity theft and electronic financial scams.

During the underwriting process, insurers typically inquire about a prospective policyholder’s cyber security, such as the strength of its firewall or the viability of its backup files, Nordman said. If they believe the organization’s defenses are inadequate, they might decline to write a policy or charge more for it, he said. North Dakota Insurance Commissioner Jon Godfread, chairman of the NAIC’s innovation and technology task force, said some insurers suggest prospective policyholders hire outside firms to conduct “cyber audits” as a “risk mitigation tool” aimed to prevent attacks — and claims — by strengthening security.

“Ultimately, you’re going to see that prevention of the ransomware attack is likely going to come from the insurance carrier side,” Godfread said. “If they can prevent it, they don’t have to pay out a claim, it’s better for everybody.”

Not all cyber insurance policies cover ransom payments. After a ransomware attack on Jackson County, Georgia, last March, the county billed insurance for credit monitoring services and an attorney but had to pay the ransom of about $400,000, County Manager Kevin Poe said. Other victims have struggled to get insurers to pay cyber-related claims. Food company Mondelez International and pharmaceutical company Merck sued insurers last year in state courts after the carriers refused to reimburse costs associated with damage from NotPetya malware. The insurers cited “hostile or warlike action” or “act of war” exclusions because the malware was linked to the Russian military. The cases are pending.

The proliferation of cyber insurers willing to accommodate ransom demands has fostered an industry of data recovery and incident response firms that insurers hire to investigate attacks and negotiate with and pay hackers. This year, two FBI officials who recently retired from the bureau opened an incident response firm in Connecticut. The firm, The Aggeris Group, says on its website that it offers “an expedient response by providing cyber extortion negotiation services and support recovery from a ransomware attack.”

Ramarcus Baylor, a principal consultant for The Crypsis Group, a Virginia incident response firm, said he recently worked with two companies hit by ransomware. Although both clients had backup systems, insurers promised to cover the six-figure ransom payments rather than spend several days assessing whether the backups were working. Losing money every day the systems were down, the clients accepted the offer, he said.

Crypsis CEO Bret Padres said his company gets many of its clients from insurance referrals. There’s “really good money in ransomware” for the cyberattacker, recovery experts and insurers, he said. Routine ransom payments have created a “vicious circle,” he said. “It’s a hard cycle to break because everyone involved profits: We do, the insurance carriers do, the attackers do.”

Chris Loehr, executive vice president of Texas-based Solis Security, said there are “a lot of times” when backups are available but clients still pay ransoms. Everyone from the victim to the insurer wants the ransom paid and systems restored as fast as possible, Loehr said.

“They figure out that it’s going to take a month to restore from the cloud, and so even though they have the data backed up,” paying a ransom to obtain a decryption key is faster, he said.

“Let’s get it negotiated very quickly, let’s just get the keys, and get the customer decrypted to minimize business interruption loss,” he continued. “It makes the client happy, it makes the attorneys happy, it makes the insurance happy.”

If clients morally oppose ransom payments, Loehr said, he reminds them where their financial interests lie, and of the high stakes for their businesses and employees. “I’ll ask, ‘The situation you’re in, how long can you go on like this?’” he said. “They’ll say, ‘Well, not for long.’ Insurance is only going to cover you for up to X amount of dollars, which gets burned up fast.”

“I know it sucks having to pay off assholes, but that’s what you gotta do,” he said. “And they’re like, ‘Yeah, OK, let’s get it done.’ You gotta kind of take charge and tell them, ‘This is the way it’s going to be or you’re dead in the water.’”

Lloyd’s-backed CFC, a specialist insurance provider based in London, uses Solis for some of its U.S. clients hit by ransomware. Graeme Newman, chief innovation officer at CFC, said “we work relentlessly” to help victims improve their backup security. “Our primary objective is always to get our clients back up and running as quickly as possible,” he said. “We would never recommend that our clients pay ransoms. This would only ever be a very final course of action, and any decision to do so would be taken by our clients, not us as an insurance company.”

As ransomware has burgeoned, the incident response division of Solis has “taken off like a rocket,” Loehr said. Loehr’s need for a reliable way to pay ransoms, which typically are transacted in digital currencies such as Bitcoin, spawned Sentinel Crypto, a Florida-based money services business managed by his friend, Wesley Spencer. Sentinel’s business is paying ransoms on behalf of clients whose insurers reimburse them, Loehr and Spencer said.

New York-based Flashpoint also pays ransoms for insurance companies. Hofmann, the vice president, said insurers typically give policyholders a toll-free number to dial as soon as they realize they’ve been hit. The number connects to a lawyer who provides a list of incident response firms and other contractors. Insurers tightly control expenses, approving or denying coverage for the recovery efforts advised by the vendors they suggest.

“Carriers are absolutely involved in the decision making,” Hofmann said. On both sides of the attack, “insurance is going to transform this entire market,” he said.

On June 10, Lake City government officials noticed they couldn’t make calls or send emails. IT staff then discovered encrypted files on the city’s servers and disconnected the infected servers from the internet. The city soon learned it was struck by Ryuk ransomware. Over the past year, unknown attackers using the Ryuk strain have besieged small municipalities and technology and logistics companies, demanding ransoms up to $5 million, according to the FBI.

Shortly after realizing it had been attacked, Lake City contacted the Florida League of Cities, which provides insurance for more than 550 public entities in the state. Beazley is the league’s reinsurer for cyber coverage, and they share the risk. The league declined to comment.

Initially, the city had hoped to restore its systems without paying a ransom. IT staff was “plugging along” and had taken server drives to a local vendor who’d had “moderate success at getting the stuff off of it,” Lee said. However, the process was slow and more challenging than anticipated, he said.

As the local technicians worked on the backups, Beazley requested a sample encrypted file and the ransom note so its approved vendor, Coveware, could open negotiations with the hackers, said Steve Roberts, Lake City’s director of risk management. The initial ransom demand was 86 bitcoin, or about $700,000 at the time, Coveware CEO Bill Siegel said. “Beazley was not happy with it — it was way too high,” Roberts said. “So [Coveware] started negotiations with the perps and got it down to the 42 bitcoin. Insurance stood by with the final negotiation amount, waiting for our decision.”

Lee said Lake City may have been able to achieve a “majority recovery” of its files without paying the ransom, but it probably would have cost “three times as much money trying to get there.” The city fired its IT director, Brian Hawkins, in the midst of the recovery efforts. Hawkins, who is suing the city, said in an interview posted online by his new employer that he was made “the scapegoat” for the city’s unpreparedness. The “recovery process on the files was taking a long time” and “the lengthy process was a major factor in paying the ransom,” he said in the interview.

On June 25, the day after the council meeting, the city said in a press release that while its backup recovery efforts “were initially successful, many systems were determined to be unrecoverable.” Lake City fronted the ransom amount to Coveware, which converted the money to bitcoin, paid the attackers and received a fee for its services. The Florida League of Cities reimbursed the city, Roberts said.

Lee acknowledged that paying ransoms spurs more ransomware attacks. But as cyber insurance becomes ubiquitous, he said, he trusts the industry’s judgment.

“The insurer is the one who is going to get hit with most of this if it continues,” he said. “And if they’re the ones deciding it’s still better to pay out, knowing that means they’re more likely to have to do it again — if they still find that it’s the financially correct decision — it’s kind of hard to argue with them because they know the cost-benefit of that. I have a hard time saying it’s the right decision, but maybe it makes sense with a certain perspective.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.

 


51 Corporations Tell Congress: A Federal Privacy Law Is Needed. 145 Corporations Tell The U.S. Senate: Inaction On Gun Violence Is 'Simply Unacceptable'

Last week, several of the largest corporations petitioned the United States government for federal legislation on two key topics: consumer privacy and gun reform.

First, the Chief Executive Officers (CEOs) at 51 corporations sent a jointly signed letter to leaders in Congress asking for a federal privacy law to supersede laws emerging in several states. ZD Net reported:

"The open-letter was sent on behalf of Business Roundtable, an association made up of the CEOs of America's largest companies... CEOs blamed a patchwork of differing privacy regulations that are currently being passed in multiple US states, and by several US agencies, as one of the reasons why consumer privacy is a mess in the US. This patchwork of privacy regulations is creating problems for their companies, which have to comply with an ever-increasing number of laws across different states and jurisdictions. Instead, the 51 CEOs would like one law that governs all user privacy and data protection across the US, which would simplify product design, compliance, and data management."

The letter was sent to U.S. Senate Majority Leader Mitch McConnell, U.S. Senate Minority Leader Charles E. Schumer, Senator Roger F. Wicker (Chairman of the Committee on Commerce, Science and Transportation), Nancy Pelosi (Speaker of the U.S. House of Representatives), Kevin McCarthy (Minority Leader of the U.S. House of Representatives), Frank Pallone, Jr. (Chairman of the Committee on Energy and Commerce in the U.S. House of Representatives), and other ranking politicians.

The letter stated, in part:

"Consumers should not and cannot be expected to understand rules that may change depending upon the state in which they reside, the state in which they are accessing the internet, and the state in which the company’s operation is providing those resources or services. Now is the time for Congress to act and ensure that consumers are not faced with confusion about their rights and protections based on a patchwork of inconsistent state laws. Further, as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened. "

That sounds fair and noble enough. After writing this blog for more than 12 years, I have learned that details matter. Who writes the proposed legislation, and what that legislation says, both matter. It is too early to tell whether the proposed legislation would be weaker or stronger than what some states have already implemented.

Some of the notable companies which signed the joint letter included AT&T, Amazon, Comcast, Dell Technologies, FedEx, IBM, Qualcomm, Salesforce, SAP, Target, and Walmart. Signers from the financial services sector included American Express, Bank of America, Citigroup, JPMorgan Chase, MasterCard, State Farm Insurance, USAA, and Visa. Several notable companies did not sign the letter: Facebook, Google, Microsoft, and Verizon.

Second, The New York Times reported that executives from 145 companies sent a joint letter to members of the U.S. Senate demanding that they take action on gun violence. The letter stated, in part (emphasis added):

"... we are writing to you because we have a responsibility and obligation to stand up for the safety of our employees, customers, and all Americans in the communities we serve across the country. Doing nothing about America's gun violence crisis is simply unacceptable and it is time to stand with the American public on gun safety. Gun violence in America is not inevitable; it's preventable. There are steps Congress can, and must take to prevent and reduce gun violence. We need our lawmakers to support common sense gun laws... we urge the Senate to stand with the American public and take action on gun safety by passing a bill to require background checks on all gun sales and a strong Red Flag law that would allow courts to issue life-saving extreme risk protection orders..."

Some of the notable companies which signed the letter included Airbnb, Bain Capital, Bloomberg LP, Conde Nast, DICK'S Sporting Goods, Gap Inc., Levi Strauss & Company, Lyft, Pinterest, Publicis Groupe, Reddit, Royal Caribbean Cruises Ltd., Twitter, Uber, and Yelp.

Earlier this year, the U.S. House of Representatives passed legislation to address gun violence. So far, the U.S. Senate has done nothing. Representative Kathy Castor (14th District in Florida), explained the actions the House took in 2019:

"The Bipartisan Background Checks Act that I championed is a commonsense step to address gun violence and establish measures that protect our community and families. America is suffering from a long-term epidemic of gun violence – each year, 120,000 Americans are injured and 35,000 die by firearms. This bill ensures that all gun sales or transfers are subject to a background check, stopping senseless violence by individuals to themselves and others... Additionally, the Democratic House passed H.R. 1112 – the Enhanced Background Checks Act of 2019 – which addresses the Charleston Loophole that currently allows gun dealers to sell a firearm to dangerous individuals if the FBI background check has not been completed within three business days. H.R. 1112 makes the commonsense and important change to extend the review period to 10 business days..."

Findings from a February 2018 Quinnipiac national poll:

"American voters support stricter gun laws 66 - 31 percent, the highest level of support ever measured by the independent Quinnipiac University National Poll, with 50 - 44 percent support among gun owners and 62 - 35 percent support from white voters with no college degree and 58 - 38 percent support among white men... Support for universal background checks is itself almost universal, 97 - 2 percent, including 97 - 3 percent among gun owners. Support for gun control on other questions is at its highest level since the Quinnipiac University Poll began focusing on this issue in the wake of the Sandy Hook massacre: i) 67 - 29 percent for a nationwide ban on the sale of assault weapons; ii) 83 - 14 percent for a mandatory waiting period for all gun purchases. It is too easy to buy a gun in the U.S. today..."


Court Okays 'Data Scraping' By Analytics Firm Of Users' Public LinkedIn Profiles. Lots Of Consequences

Earlier this week, a Federal appeals court affirmed an August 2017 injunction which required LinkedIn, a professional networking platform owned by Microsoft Corporation, to allow hiQ Labs, Inc. to access members' profiles. This ruling has implications for everyone.

First, some background. The Naked Security blog by Sophos explained in December, 2017:

"... hiQ is a company that makes its money by “scraping” LinkedIn’s public member profiles to feed two analytical systems, Keeper and Skill Mapper. Keeper can be used by employers to detect staff that might be thinking about leaving while Skill Mapper summarizes the skills and status of current and future employees. For several years, this presented no problems until, in 2016, LinkedIn decided to offer something similar, at which point it sent hiQ and others in the sector cease and desist letters and started blocking the bots reading its pages."

So, hiQ's apps use algorithms to predict, for its clients (prospective or current employers), which employees will stay or go. Gizmodo explained the law LinkedIn cited in its court arguments, namely how the:

"... practice of scraping publicly available information from their platform violated the 1986 Computer Fraud and Abuse Act (CFAA). The CFAA is infamously vaguely written and makes it illegal to access a “protected computer” without or in excess of “authorization”—opening the door to sweeping interpretations that could be used to criminalize conduct not even close to what would traditionally be understood as hacking."

Second, the latest court ruling basically said two things: a) it is legal (and doesn't violate hacking laws) for companies to scrape information contained in publicly available profiles; and b) LinkedIn must allow hiQ (and potentially other firms) to continue with data-scraping. This has plenty of implications.

This recent ruling may surprise some persons, since the issue of data scraping was supposedly settled law previously. MediaPost reported:

"Monday's ruling appears to effectively overrule a decision issued six years ago in a dispute between Craigslist and the data miner 3Taps, which also scraped publicly available listings. In that matter, 3Taps allegedly scraped real estate listings and made them available to the developers PadMapper and Lively. PadMapper allegedly meshed Craigslist's apartment listings with Google maps... U.S. District Court Judge Charles Breyer in the Northern District of California ruled in 2013 that 3Taps potentially violated the anti-hacking law by scraping listings from Craigslist after the company told it to stop doing so."

So, you can bet that both social media sites and data analytics firms closely watched and read the appeal court's ruling this week.

Third, in theory any company or agency could then legally scrape information from public profiles on the LinkedIn platform. This scraping could be done by industries and/or entities (e.g., spy agencies worldwide) that job seekers never intended nor wanted to reach.

Many consumers simply signed up for LinkedIn to build professional relationships and/or to find jobs, either full-time as employees or as contractors. The 2019 social media survey by Pew Research found that 27 percent of adults in the United States use LinkedIn, with higher usage among persons with college degrees (51 percent), persons making more than $75K annually (49 percent), persons ages 25 - 29 (44 percent), persons ages 30 - 49 (37 percent), and urban residents (33 percent).

I'll bet that many LinkedIn users never imagined that their profiles would be used against them by data analytics firms. Like it or not, that is how consumers' valuable, personal data is used (abused?) by social media sites and their clients.

Fourth, the practice of data scraping has divided tech companies. Again, from the Naked Security blog post in 2017:

"Data scraping, it seems, has become a booming tech sector that increasingly divides the industry ideologically. One side believes LinkedIn is simply trying to shut down a competitor wanting to access public data LinkedIn merely displays rather than owns..."

The Electronic Frontier Foundation (EFF), the DuckDuckGo search engine, and the Internet Archive had filed an amicus brief with the appeals court before its ruling. The EFF explained the group's reasoning and urged the:

"... Court of Appeals to reject LinkedIn’s request to transform the CFAA from a law meant to target serious computer break-ins into a tool for enforcing its computer use policies. The social networking giant wants violations of its corporate policy against using automated scripts to access public information on its website to count as felony “hacking” under the Computer Fraud and Abuse Act, a 1986 federal law meant to criminalize breaking into private computer systems to access non-public information. But using automated scripts to access publicly available data is not "hacking," and neither is violating a website’s terms of use. LinkedIn would have the court believe that all "bots" are bad, but they’re actually a common and necessary part of the Internet. "Good bots" were responsible for 23 percent of Web traffic in 2016..."

So, bots are here to stay. And, it's up to LinkedIn executives to find a solution to protect their users' information.
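As a rough illustration of what separates a "good bot" from a bad one, well-behaved crawlers consult a site's robots.txt policy before fetching pages. The sketch below is hypothetical (the policy text and bot name are made up, and real scrapers would also throttle requests and identify themselves honestly), but it shows the basic mechanics using Python's standard library:

```python
# A minimal sketch of a "good bot" honoring a site's robots.txt policy.
# The policy text and user-agent name below are hypothetical examples.
from urllib import robotparser

def may_fetch(robots_txt, user_agent, page_url):
    """Return True if the given robots.txt text permits this agent to fetch the URL."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, page_url)

# Hypothetical policy: all agents are barred from /private/, allowed elsewhere.
policy = """
User-agent: *
Disallow: /private/
"""

print(may_fetch(policy, "example-bot", "https://example.com/profiles/jane"))  # True
print(may_fetch(policy, "example-bot", "https://example.com/private/notes"))  # False
```

Note that robots.txt is a voluntary convention, not an access control: nothing technically stops a bot from ignoring it, which is part of why disputes like LinkedIn v. hiQ end up in court.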

Fifth, according to a Reuters report, the judge suggested a solution for LinkedIn: "eliminating the public access option." Hmmmm. Public, or at least broad, access is what many job seekers desire. So, a balance needs to be struck between a truly "public" profile, which anyone anywhere in the world can access, and access limited to intended audiences (e.g., hiring executives at potential employers in certain industries).

Sixth, what struck me about the court ruling this week was that nobody was in the court room representing the interests of LinkedIn users, of which I am one. MediaPost reported:

"The appellate court discounted LinkedIn's argument that hiQ was harming users' privacy by scraping data even when people used a "do not broadcast" setting. "There is no evidence in the record to suggest that most people who select the 'Do Not Broadcast' option do so to prevent their employers from being alerted to profile changes made in anticipation of a job search," the judges wrote. "As the district court noted, there are other reasons why users may choose that option -- most notably, many users may simply wish to avoid sending their connections annoying notifications each time there is a profile change." "

What? Really?! We LinkedIn users have a natural, vested interest in controlling both our profiles and the sensitive, personal information in them. Either somebody at LinkedIn failed to adequately represent its users' interests, the court didn't listen closely nor seek out additional evidence, or both.

Maybe the finding that "there is no evidence in the record" regarding the 'Do Not Broadcast' feature will be the basis of another appeal or lawsuit.

With this latest court ruling, we LinkedIn users have totally lost control (except for deleting or suspending our LinkedIn accounts). It makes me wonder how a court could reach its decision without hearing directly from somebody representing LinkedIn users.

Seventh, it seems that LinkedIn needs to modify its platform in three key ways:

  1. Allow its users to specify the only uses or applications (e.g., find full-time work, find contract work, build contacts in my industry or area of expertise, find/screen job candidates, advertise/promote a business, academic research, publish content, read news, dating, etc.) for which their profiles may be accessed. The 'Do Not Broadcast' feature is clearly not strong enough;
  2. Allow its users to specify or approve individual users -- other actual persons who are LinkedIn users and not bots nor corporate accounts -- who can access their full, detailed profiles; and
  3. Outline in the user agreement the list of applications or uses profiles may be accessed for, so that both prospective and current LinkedIn users can make informed decisions. 

This would give LinkedIn users some control over the sensitive, personal information in their profiles. Without control, the benefits of using LinkedIn quickly diminish. And, that's enough to cause me to rethink my use of LinkedIn, and either deactivate or delete my account.

What are your opinions of this ruling? If you currently use LinkedIn, will you continue using it? If you don't use LinkedIn and were considering it, will you still consider using it?


Google And YouTube To Pay $170 Million In Proposed Settlement To Resolve Charges Of Children's Privacy Violations

Today's blog post contains information all current and future parents should know. On Tuesday, the U.S. Federal Trade Commission (FTC) announced a proposed settlement agreement under which YouTube LLC and its parent company, Google LLC, will pay a fine of $170 million to resolve charges that the video-sharing service illegally collected the personal information of children without their parents' consent.

The proposed settlement agreement requires YouTube and Google to pay $136 million to the FTC and $34 million to New York State to resolve charges that the video sharing service violated the Children’s Online Privacy Protection Act (COPPA) Rule. The announcement explained the allegations:

"... that YouTube violated the COPPA Rule by collecting personal information—in the form of persistent identifiers that are used to track users across the Internet—from viewers of child-directed channels, without first notifying parents and getting their consent. YouTube earned millions of dollars by using the identifiers, commonly known as cookies, to deliver targeted ads to viewers of these channels, according to the complaint."

"The COPPA Rule requires that child-directed websites and online services provide notice of their information practices and obtain parental consent prior to collecting personal information from children under 13, including the use of persistent identifiers to track a user’s Internet browsing habits for targeted advertising. In addition, third parties, such as advertising networks, are also subject to COPPA where they have actual knowledge they are collecting personal information directly from users of child-directed websites and online services... the FTC and New York Attorney General allege that while YouTube claimed to be a general-audience site, some of YouTube’s individual channels—such as those operated by toy companies—are child-directed and therefore must comply with COPPA."

While $170 million is a lot of money, it is tiny compared to the $5 billion fine the FTC assessed against Facebook. It is also tiny compared to Google's earnings. Alphabet Inc., the holding company which owns Google, generated pretax net income of $34.91 billion during 2018 on revenues of $136.96 billion.

In February, the FTC concluded a settlement with Musical.ly, a video social networking app now operating as TikTok, where Musical.ly paid $5.7 million to resolve allegations of COPPA violations. Regarding the proposed settlement with YouTube, Education Week reported:

"YouTube has said its service is intended for ages 13 and older, although younger kids commonly watch videos on the site and many popular YouTube channels feature cartoons or sing-a-longs made for children. YouTube has its own app for children, called YouTube Kids; the company also launched a website version of the service in August. The site says it requires parental consent and uses simple math problems to ensure that kids aren't signing in on their own. YouTube Kids does not target ads based on viewer interests the way YouTube proper does. The children's version does track information about what kids are watching in order to recommend videos. It also collects personally identifying device information."

The proposed settlement also requires YouTube and Google:

"... to develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA. In addition, the companies must notify channel owners that their child-directed content may be subject to the COPPA Rule’s obligations and provide annual training about complying with COPPA for employees who deal with YouTube channel owners. The settlement also prohibits Google and YouTube from violating the COPPA Rule, and requires them to provide notice about their data collection practices and obtain verifiable parental consent before collecting personal information from children."

The complaint and proposed consent decree were filed in the U.S. District Court for the District of Columbia. Once approved by a judge, the proposed settlement becomes final. Hopefully, the fine and additional requirements will be enough to deter future abuses.


Operating Issues Continue To Affect The Integrity Of Products Sold On Amazon Site

News reports last week described in detail the operating issues that affect the integrity and reliability of products sold on the Amazon site. The Verge reported that some sellers:

"... hop onto fast-selling listings with counterfeit goods, or frame their competitors with fake reviews. One common tactic is to find a once popular, but now abandoned product and hijack its listing, using the page’s old reviews to make whatever you’re selling appear trustworthy. Amazon’s marketplace is so chaotic that not even Amazon itself is safe from getting hijacked. In addition to being a retail platform, Amazon sells its own house-brand goods under names like AmazonBasics, Rivet furniture, Happy Belly food, and hundreds of other labels."

The hijacked product pages include photos, descriptions, reviews, and/or comments from other products -- a confusing mix of content. You might assume this isn't possible, but it happens. The Verge report explained:

"There are now more than 2 million sellers on the platform, and Amazon has struggled to maintain order. A recent Wall Street Journal investigation found thousands of items for sale on the site that were deceptively labeled or declared unsafe by federal regulators... Greer, a former Amazon employee who now works as a consultant for Amazon sellers, has worked with clients who have undergone similar hijackings. She says these listings were likely seized by a seller who contacted Amazon’s Seller Support team and asked them to push through a file containing the changes. The team is based mostly overseas, experiences high turnover, and is expected to work quickly, Greer says, and if you find the right person they won’t check what changes the file contains."

This directly affects online shoppers. The article also included this tip for shoppers:

"... the easiest way to detect a hijacking is to check that the reviews refer to the product being sold..."

What a mess! The burden should not fall upon shoppers. Amazon needs to clean up its mess -- quickly. What are your opinions?


Cloud Services Security Vendor Disclosed a 'Security Incident'

Imperva, a cloud-services security company, announced on Tuesday a data breach involving its Cloud Web Application Firewall (WAF) product, formerly known as Incapsula. The August 27th announcement stated:

"... this data exposure is limited to our Cloud WAF product. Here is what we know about the situation today: 1) On August 20, 2019, we learned from a third party of a data exposure that impacts a subset of customers of our Cloud WAF product who had accounts through September 15, 2017; 2) Elements of our Incapsula customer database through September 15, 2017 were exposed. These included: email addresses, hashed and salted passwords; 3) And for a subset of the Incapsula customers through September 15, 2017: API keys and customer-provided SSL certificates..."
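For readers unfamiliar with the "hashed and salted passwords" mentioned in the announcement, the idea is that a service stores only a one-way digest of each password, combined with a unique random salt, so a database leak does not directly reveal the passwords themselves. Below is a minimal, generic sketch of this practice using Python's standard library (this is an illustration of the general technique, not Imperva's actual scheme):

```python
# Minimal sketch of salted password hashing with PBKDF2-HMAC-SHA256.
# Illustrative only -- parameter choices here are not a security recommendation.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberately slow to resist brute-force guessing

def hash_password(password, salt=None):
    """Return (salt, derived_key). The salt is unique per user."""
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password, salt, key):
    """Re-derive the key from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False
```

Because each stored record contains only the salt and the derived key, attackers who obtain the database must still guess passwords one slow derivation at a time, which is why "hashed and salted" is meaningfully better than storing passwords outright -- though, as the breach shows, exposed API keys and SSL certificates carry risks that hashing cannot mitigate.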

Imperva provides firewall and security services to block cyberattacks by bad actors. These security services protect the information its clients (and clients' customers) store in cloud-storage databases. The home page of Imperva's site promotes the following clients: AARP, General Electric, Siemens, Xoom (A PayPal service), and Zillow. Many consumers use these clients' sites and services to store sensitive personal and payment information.

Imperva has informed the appropriate global regulatory agencies, hired forensic experts to help with the breach investigation, reset affected clients' passwords, and is informing affected clients. Security experts quickly weighed in about the data breach. The Krebs On Security blog reported:

"Rich Mogull, founder and vice president of product at Kansas City-based cloud security firm DisruptOps, said Imperva is among the top three Web-based firewall providers... an attacker in possession of a customer’s API keys and SSL certificates could use that access to significantly undermine the security of traffic flowing to and from a customer’s various Web sites. At a minimum, he said, an attacker in possession of these key assets could reduce the security of the WAF settings... A worst-case scenario could allow an attacker to intercept, view or modify traffic destined for an Incapsula client Web site, and even to divert all traffic for that site to or through a site owned by the attacker."

So, this breach and the data elements accessed by hackers were serious. It is another example indicating that hackers are persistent and attack where the money is.

Security experts said the cause of the breach is not yet known. Imperva is based in Redwood Shores, California.


Google Claims Blocking Cookies Is Bad For Privacy. Researchers: Nope. That Is 'Privacy Gaslighting'

The announcement by Google last week included some dubious claims, which received a fair amount of attention among privacy experts. First, a Senior Product Manager of User Privacy and Trust wrote in a post:

"Ads play a major role in sustaining the free and open web. They underwrite the great content and services that people enjoy... But the ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. The mission is clear: we need to ensure that people all around the world can continue to access ad supported content on the web while also feeling confident that their privacy is protected. As we shared in May, we believe the path to making this happen is also clear: increase transparency into how digital advertising works, offer users additional controls, and ensure that people’s choices about the use of their data are respected."

Okay, that is a fair assessment of today's internet. And, more transparency is good. Google executives are entitled to their opinions. The post also stated:

"The web ecosystem is complex... We’ve seen that approaches that don’t account for the whole ecosystem—or that aren’t supported by the whole ecosystem—will not succeed. For example, efforts by individual browsers to block cookies used for ads personalization without suitable, broadly accepted alternatives have fallen down on two accounts. First, blocking cookies materially reduces publisher revenue... Second, broad cookie restrictions have led some industry participants to use workarounds like fingerprinting, an opaque tracking technique that bypasses user choice and doesn’t allow reasonable transparency or control. Adoption of such workarounds represents a step back for user privacy, not a step forward."

So, Google claims that blocking cookies is bad for privacy. With a statement like that, the "User Privacy and Trust" title seems like an oxymoron. Maybe, that's the best one can expect from a company that gets 87 percent of its revenues from advertising.

Also on August 22nd, the Director of Chrome Engineering repeated this claim and proposed new internet privacy standards:

"... we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox. Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent... some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences. First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected... Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web..."
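To make concrete why fingerprinting is so hard for users to escape, here is a rough, hypothetical sketch of the technique the quote describes: combining individually innocuous browser attributes into a stable identifier. The attribute values below are invented examples, and real fingerprinting scripts gather far more signals (canvas rendering, audio stack, plugin lists), but the core idea is just deterministic hashing:

```python
# Hypothetical sketch of browser fingerprinting: hash a sorted set of
# device/browser attributes into a stable identifier. Unlike a cookie,
# the user cannot "clear" this -- it is re-derivable on every visit.
import hashlib

def fingerprint(attributes):
    """Derive a stable short identifier from a dict of attribute strings."""
    canonical = "|".join("%s=%s" % (k, attributes[k]) for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Invented example values for one visitor.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080x24",
    "timezone": "America/New_York",
    "fonts": "Arial,Helvetica,Times New Roman",
}

print(fingerprint(visitor))  # same attributes always yield the same identifier
```

The same combination of attributes produces the same identifier on every site that runs the script, which is what lets trackers match one user across websites without ever setting a cookie.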

Yes, fingerprinting is a nasty, privacy-busting technology. No argument with that. But, blocking cookies is bad for privacy? Really? Come on, let's be honest.

This dubious claim ignores corporate responsibility: some advertisers and website operators made conscious decisions to use more invasive technologies, like fingerprinting, to do an end-run around users' desire and actions to regain online privacy. Sites and advertisers made those invasive-tech choices when other options were available, such as subscription services to pay for their content.

Plus, Google's claim also ignores the push by corporate internet service providers (ISPs) which resulted in the repeal of online privacy protections for consumers thanks to a compliant, GOP-led Federal Communications Commission (FCC), which seems happy to tilt the playing field further towards corporations and against consumers. So, users are simply trying to regain online privacy.

During the past few years, both privacy-friendly web browsers (e.g., Brave, Firefox) and search engines (e.g., DuckDuckGo) have emerged to meet consumers' online privacy needs. (Well, it's not only consumers that need online privacy. Attorneys and businesses need it, too, to protect their intellectual property and proprietary business methods.) Online users demanded choice, something advertisers need to remember and value.

Privacy experts weighed in about Google's blocking-cookies-is-bad-for-privacy claim. Jonathan Mayer and Arvind Narayanan explained:

"That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight. Our high-level points are: 1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting; 2) There is little trustworthy evidence on the comparative value of tracking-based advertising; 3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical; and 4) Google is attempting a punt to the web standardization process, which will at best result in years of delay."

The researchers debunked Google's claim with more details:

"Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true. If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today."

Moreover:

"... there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections. Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general."

Gaslighting, indeed. Online privacy is important. So, too, are consumers' choices and desires. Thanks to Mr. Mayer and Mr. Narayanan for the comprehensive response.

What are your opinions of cookie blocking? Of Google's claims?