
11 posts from November 2018

Massive Data Breach At U.S. Postal Service Affects 60 Million Users

The United States Postal Service (USPS) experienced a massive data breach due to a vulnerable component of its website. An "application programming interface" (API) allowed unauthorized users to access and download details about other users of the Informed Visibility service.

Security researcher Brian Krebs explained:

"In addition to exposing near real-time data about packages and mail being sent by USPS commercial customers, the flaw let any logged-in usps.com user query the system for account details belonging to any other users, such as email address, username, user ID, account number, street address, phone number, authorized users, mailing campaign data and other information.

Many of the API’s features accepted “wildcard” search parameters, meaning they could be made to return all records for a given data set without the need to search for specific terms. No special hacking tools were needed to pull this data, other than knowledge of how to view and modify data elements processed by a regular Web browser like Chrome or Firefox."

Geez! The USPS has since fixed the API vulnerability. Regardless, this is bad, very bad, for several reasons. Not only did the vulnerable API fail to prevent one user from viewing details about another, it also allowed changes to some data elements. Krebs added:

"A cursory review by KrebsOnSecurity indicates the promiscuous API let any user request account changes for any other user, such as email address, phone number or other key details. Fortunately, the USPS appears to have included a validation step to prevent unauthorized changes — at least with some data fields... The ability to modify database entries related to Informed Visibility user accounts could create problems for the USPS’s largest customers — think companies like Netflix and others that get discounted rates for high volumes. For instance, the API allowed any user to convert regular usps.com accounts to Informed Visibility business accounts, and vice versa."

About 13 million Informed Delivery users were also affected, since the vulnerable API affected all USPS.com users. A vulnerability like this makes package theft easier, since criminals could determine when certain types of mail (e.g., debit cards, credit cards, etc.) arrive at users' addresses. The vulnerable API probably existed for more than a year: a security researcher first alerted the USPS about it over a year before the fix.
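The behavior Krebs describes, in which any authenticated user can read or modify any other account and wildcard searches dump entire data sets, is a textbook broken access control flaw. Below is a minimal Python sketch (names and data are hypothetical, not USPS code) of the vulnerable pattern alongside the ownership check that prevents it:

```python
# Hypothetical illustration of the flaw: the API trusts the caller-supplied
# account name instead of the session's own identity.

ACCOUNTS = {
    "alice": {"email": "alice@example.com", "phone": "555-0100"},
    "bob":   {"email": "bob@example.com",   "phone": "555-0199"},
}

def get_account_vulnerable(session_user, requested_user):
    """Broken: any logged-in user can read any other user's record."""
    return ACCOUNTS.get(requested_user)        # no ownership check at all

def search_vulnerable(pattern):
    """Broken: a wildcard pattern returns every record in the data set."""
    if pattern == "*":
        return list(ACCOUNTS.values())         # dumps all records
    return [v for k, v in ACCOUNTS.items() if k == pattern]

def get_account_fixed(session_user, requested_user):
    """Fixed: callers may only read their own record."""
    if session_user != requested_user:
        raise PermissionError("not authorized")
    return ACCOUNTS[requested_user]
```

Note that exploiting the vulnerable version requires no special tools, which matches Krebs' observation that an ordinary web browser sufficed: the attacker simply supplies someone else's identifier.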

While the USPS provided a response to Krebs on Security, a check at press time of the Newsroom and blog sections of About.USPS.com failed to find any mention of the data breach. Not good. Transparency matters.

If the USPS is serious about data security, then it should issue a public statement. When will users receive breach notification letters, if they haven't been sent? Who fixed the vulnerable API? How long was it broken? What post-breach investigation is underway? What types of changes (e.g., employee training, software testing, outsource vendor management, etc.) are being implemented so this won't happen again?

Trust matters. The lack of a public statement makes it difficult for consumers to judge the seriousness of the breach and the seriousness of the fix by USPS. We probably will hear more about this breach.


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

On Friday, November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of an investigation by the DPC of the LinkedIn.com social networking site, after a 2017 complaint by a person who didn't use the social networking service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without instruction from the data controller (LinkedIn Ireland), as required by law. Not good.
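For context on the mechanics: hashed-email ad targeting typically works by the advertiser normalizing and hashing customer email addresses (Facebook's Custom Audiences product uses SHA-256), after which the platform matches those hashes against hashes of its own users' emails. A minimal sketch of the preparation step (illustrative only, not LinkedIn's actual pipeline):

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize (trim whitespace, lowercase) then SHA-256 hash an email
    address -- the common preparation step for hashed-audience targeting."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Because hashing is deterministic, two parties can match records without exchanging plaintext addresses. But as this case shows, hashing alone does not make the processing lawful: the hashed addresses are still personal data, and the processor still needs authorization from the data controller.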

The DPC report covered the time frame from January 1 through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests that the social networking service's activity violated data protection laws. The order came as the European Union (EU) implemented stronger privacy rules, known as the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

GDPR applies both to companies operating within the EU and to companies located outside the EU that offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international users are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but currently have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistently with any promises they made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC didn't fine LinkedIn, probably because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied to violations occurring on or after May 25, 2018, and
  • People in different areas of the world view privacy and data protection differently - as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize regional differences, find ways to sell products and services across areas without degraded customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Amazon Said Its Data Breach Was Due To A "Technical Error" And Discloses Few Breach Details

Amazon.com, the online retail giant, confirmed that it experienced a data breach last Wednesday. CBS News reported:

"Amazon said a technical error on its website exposed the names and email addresses of some customers. The online retail giant said its website and systems weren't hacked. "We have fixed the issue and informed customers who may have been impacted," said an Amazon spokesperson. An Amazon spokesman didn't answer additional questions, like how many people were affected or whether any of the information was stolen."

A check of the press center and blog sections of the Amazon.com site failed to find any mentions of the data breach. The Ars Technica blog posted the text of the breach notification email Amazon sent to affected users:

"From: Amazon.com
Sent: 21 November 2018 10:53
To: [email protected]
Subject: Important Information about your Amazon.com Account

Hello,
We’re contacting you to let you know that our website inadvertently disclosed your name and email address due to a technical error. The issue has been fixed. This is not a result of anything you have done, and there is no need for you to change your password or take any other action.

Sincerely,
Customer Service
http://Amazon.com"

What? That's all? No link to a site or to a page for customers with questions?

This incident is a reminder that several things can cause data breaches. It's not only when cyber-criminals break into an organization's computers or systems. Human error causes data breaches, too. In some breaches, employees collude with criminals. In some cases, sloppy data security by outsource vendors causes data breaches. Details matter.

Typically, organizations affected by data breaches hire external security agencies to conduct independent, post-breach investigations to learn important details: when the breach started, how exactly the breach happened, the list of data elements unauthorized users accessed/stole, what else may have happened that wasn't readily apparent when the incident was discovered, and key causal events leading up to the breach -- all so that a complete fix can be implemented, and so that it doesn't happen again.

Who made the "technical error?" Who discovered it? What caused it? How long did the error exist? Who fixed it? Were specialized skills or tools necessary? What changes were made so that it won't happen again? Amazon isn't saying. If management decided to skip a post-breach investigation, consumers deserve to know that and why, too.

Often, the breach starts long before it is discovered by the company, or by a security researcher. Often, the fix includes several improvements: software changes, employee training, and/or improved security processes with contractors.

So, all we know is that names and email addresses were accessed by unauthorized persons. If stolen, that is sufficient to do damage: spam or phishing email messages that trick victims into revealing sensitive personal information (e.g., usernames, passwords) and payment information (e.g., bank account numbers, credit card numbers). It is not too much to ask Amazon to share both breach details and the results of a post-breach investigation.

Executives at Amazon know all of this, so maybe it was a management decision not to share breach details nor a post-breach investigation -- perhaps not wanting to risk huge Black Friday holiday sales. Then again, the lack of details could imply the breach was far worse than management wants to admit.

Either way, this is troublesome. It's all about trust. When details are shared, consumers can judge the severity of the breach, the completeness of the company's post-breach response, and ideally feel better about continuing to shop at the site. What do you think?


Aging Machines, Crowds, Humidity: Problems at the Polls Were Mundane but Widespread

[Editor's Note: today's guest blog post, by Reporters at ProPublica, discusses widespread problems many voters encountered earlier this month. The data below was compiled before the runoffs in Florida, Georgia and other states. It is reprinted with permission.]

By Ian MacDougall, Jessica Huseman, and Isaac Arnsdorf - ProPublica

If the defining risk of Election Day 2016 was foreign meddling, 2018’s seems to have been a domestic overload. High turnout across the country threw existing problems — aging machines, poorly trained poll workers and a hot political landscape — into sharp relief.

Michael McDonald, a political science professor at the University of Florida who studies turnout, says early numbers indicate Tuesday’s midterm saw the highest percentage turnout since the mid-’60s. “All signs indicate that everyone is now engaged in this country — Republicans and Democrats,” he said, adding that he expects 2020 to also be a year of high turnout. “Election officials need to start planning for that now, and hopefully elected officials who hold the purse strings will be responsive to those needs.”

Aging Technology

Electionland monitored problems across the country on Election Day, supporting the work of 250 local journalists in more than 120 local newsrooms. Thousands of voters reported issues at the polls, and Electionland sought to report on as many as possible. The most striking problem of the night was perhaps the most predictable — aged or ineffective voting equipment caused hours-long lines across the country.

American voting hasn’t had a major technology refresh since the early 2000s, in the aftermath of the Florida recount and the passage of the 2002 Help America Vote Act, which infused billions of dollars into American elections. More recent upgrades, such as poll books that could be accessed via computer, were supposed to reduce bottlenecks at check-ins — but they repeatedly failed on Tuesday, worsening waits in Georgia, South Carolina and Indiana.

While aging infrastructure was already a well-known problem to election administrators, the surge of voters experiencing ordinary glitches led to extraordinarily long waits, sometimes stretching over hours. From Pennsylvania to Georgia to Arizona and Michigan, polling places started the day with broken machines leading to long lines, and never recovered.

“In 2016, we learned the technology has security vulnerabilities. Today was a wake-up call to performance vulnerabilities,” said Trey Grayson, the former president of the National Association of Secretaries of State and a member of the 2013 Presidential Commission on Election Administration. Tuesday, Grayson said, showed “the implications of turnout, stressing the system, revealing planning failures, feel impact of limited resources. If you had more resources, you’d have had more paper ballots, more machines, more polling places.”

The election hotline from the Lawyers’ Committee for Civil Rights Under Law clocked 24,000 calls by 6 p.m., twice the rate in the 2014 midterm election. “People were not able to vote because of technical issues that are completely avoidable,” Ryan Snow, of the Lawyers’ Committee, said. “People who came to vote — registered to vote, showed up to vote — were not able to vote.”

“We think we can solve all of these voting problems by adding technology, but you have to have a contingency plan for when each of these pieces fail,” said Joseph Lorenzo Hall, the chief technologist at the Center for Democracy & Technology in Washington, D.C. It appears many of the places that saw electronic poll book failures had no viable backup system.

Hall said that problems with machines and computers force election administrators to become technicians on the spot, despite their lack of training. This exacerbates problems: Poll workers aren’t able to accurately or efficiently report issues to their central offices, leading to delays in dispatches of appropriate equipment or staff.

Perhaps the most embarrassing technological faceplant was in New York City, where the machines used to scan ballots proved no match for wet weather. Humidity caused the scanners to malfunction, leading to outages and long lines.

The breakdowns proliferated up and down the East Coast. Humidity also roiled scanners in North Carolina. In Charleston, South Carolina, an interminable delay driven by a downed voting system drove one person to leave for work before she could cast her ballot. “It felt like a type of disenfranchisement,” she told ProPublica. Voting machine outages in some Georgia precincts stranded voters in hours-long lines. In predominantly black sections of St. Petersburg, Florida, wait times ballooned as voting machines froze.

Some of the pressure on the aging technology was relieved by early and mail-in voting, so that everyone didn’t have to vote on the same day, Grayson said. But many states still require people to cast their ballots on Election Day, and others have added time-consuming procedures such as strict ID requirements.

Those sorts of security measures add their own layers of confusion. Many voters reported never receiving their ballots in the mail. Georgia voter Shelley Martin couldn’t vote because her ballot was mailed to the wrong address — even though she filled out her address correctly, the county election office accidentally changed a 9 to a 0. In Ohio, some in-person voters were incorrectly told they had already received an absentee ballot, because of a computer error.

When people show up at the wrong polling place or have problems with their registration, they are usually entitled to cast a provisional ballot that will be counted once it’s verified. But these problems were so common on Tuesday that some locations ran out of provisional ballots and turned people away, according to North Carolina voters’ reports to ProPublica. In Arizona, some voters were told they couldn’t have provisional ballots because of broken printers. In Pennsylvania, some college students encountered glitches with their registration and said poll workers wouldn’t give them provisional ballots.

A newly implemented law in North Dakota left a handful of college students — many of whom had voted in previous elections — confused and unable to vote. “I was so frustrated because I’ve voted in North Dakota before,” said Alissa Maesse, a student at the University of North Dakota who came to the polls with a Minnesota driver’s license and bank statement with a North Dakota address, but needed a North Dakota driver’s license, identification card or tribal ID. “I can’t participate at all and I wanted to.”

Administrative Error

Administrative stumbling blocks and unhelpful election officials left some voters throughout the day scrambling to figure out where or how they were supposed to vote. Across the country, confusion over new laws and poll worker error forced voters to work with attorneys or drive long distances in an attempt to solve problems.

In Missouri, a last-minute court ruling resulted in chaos across the state. Less than a month before, a judge radically altered the state’s voter ID law to allow more valid forms of identification. By then, poll workers had already been trained. Many enforced the incorrect version of the law.

In St. Charles County, northwest of St. Louis, voters across the county reported that poll workers openly argued with voters who showed identification allowed under the new ruling, demanding old forms of ID. By the end of the evening, the county had ignored demand letters from attorneys at Advancement Project, a civil rights group. Denise Lieberman, an attorney with the group, said it is considering legal remedies due to the county’s “flagrant disregard” for the judge’s ruling.

Rich Chrismer, the director of elections for the county, said he never saw the letters — he was at polling places all day. By late morning, he’d been made aware of 12 different polling locations where poll workers were giving incorrect instructions. He utilized the local police to distribute memos to all 121 polling locations, correcting poll worker instructions. They were distributed by the late morning, and complaints dropped off after that, he said.

Chrismer said training had already happened by the time the judge issued his ruling, but that he’d put new instructions in “four different places” in the packet mailed to poll workers ahead of the election. “They were either ignoring me or they didn’t know how to read, which upsets me,” he said.

Dallas County Clerk Stephanie Hendricks expressed similar frustration at the short window of time allowed by the court to retrain poll workers, update signs and ensure voter understanding.

Hendricks said the small county had to “scrape the bottom of the barrel” for poll workers, who only received 90 minutes of training. This, combined with the very short notice for the legal change, made it difficult to help poll workers understand the law. “The last few elections it’s been photo ID, photo ID, photo ID, and now all of a sudden the brakes have been thrown on. It’s confusing for people,” she said.

The frustrations for Chris Sears began on Friday, when he turned up to cast an early ballot at Cinco Ranch Public Library, a brick building abutting a duck pond in the suburbs west of Houston. Sears, a 43-year-old Texan who works in real estate, had voted at the library in the 2016 election, after moving to the area from adjoining Harris County a year earlier. Now, at the library, poll workers couldn’t find him in their rolls. His only recourse, they told him, was to drive the half hour or so to the Fort Bend County election office. Sears, realizing he wouldn’t make it there and back before early voting closed, decided to go first thing Tuesday morning.

After he explained his situation and presented his driver’s license, which had a local address, the clerk at the election office had a terse message for him. She slid a fresh voter registration application across the counter and told him: “Fill this out, and you’ll be eligible to vote in the next election.” Sears told the clerk he hadn’t moved, and that he’d voted in the last election.

The clerk was unmoved. “What you can do,” the clerk repeated, pointing at the registration form, “is fill this out, and vote in the next election.”

Sears wasn’t alone. As he went back and forth with the clerk, three other men who, like Sears, had moved recently from other Texas counties, came in with near-identical complaints. The clerk gave them the same response she had given Sears. County officials told ProPublica they all should have been offered provisional ballots — not sent across town or told to register again.

Ultimately, Sears would cast a provisional ballot, but he didn’t discover this option until he’d done hours of research to try and hunt down the cause of his problems.

“I finally got to vote,” he said. “But that was after driving across two counties and spending five or six hours of my time trying to determine whether there was a way I could do it.”

Some administrative problems were a bit more bizarre — a polling place in Chandler, Arizona, was foreclosed upon overnight. Voter Joann Swain arrived at the Golf Academy of America, which housed the poll, to find TV news crews and a crowd of people in the parking lot of the type of faux Spanish Mission Revival shopping centers that fleck the desert around Phoenix. Voting booths were arrayed along the sidewalk.

A sign affixed to the building’s locked front door indicated that the landlord had foreclosed on the Golf Academy for failing to pay rent. While poll workers had set up the voting booths the night before, that didn’t appear to matter to the landlord. The sign read: “UNAUTHORIZED ENTRY UPON THESE PREMISES OR THE REMOVAL OF PROPERTY HEREFROM MAY RESULT IN CRIMINAL AND/OR CIVIL PROSECUTION.”

The timing struck Swain as suspect. “Were they trying to make it more difficult for people to vote?” she asked Wednesday. Election officials had provided no answers. “It’s just fishy.”

Swain, who is 47, waited in line for two hours as poll workers promised the machines necessary for voters to print and cast their ballot were on their way from Phoenix. She didn’t want to cast a provisional ballot, for fear it wouldn’t be counted. One man in line who took poll workers up on an alternative to waiting — voting at Chandler City Hall — returned not long after he left. With polling site difficulties cropping up throughout the Phoenix area, he hadn’t been able to vote there either.

To the puzzlement of voters waiting in line, Maricopa County Recorder Adrian Fontes tweeted that the Golf Academy polling place was open. “No it’s not. I’m here,” an Arizonan named Gary Taylor shot back.

Other voters reacted to the situation more volubly. “I got things to do. I can’t stand around all day waiting because these guys can’t do their job,” a voter named Thomas Wood told reporters. “It’s ridiculous. It’s absolutely ridiculous.”

Swain ultimately left at 8:30 a.m. By the time she returned, later in the day, poll workers had set up the voting machines delivered from Phoenix in another storefront in the shopping center. The original machines remained locked in the Golf Academy, she said.

Electioneering

Back East, reports of potentially improper political messages at polling sites had begun to crop up, and the response from election officials highlighted the at times flimsy nature of electioneering laws. On Tuesday morning, a handwritten sign appeared on the door of a polling station near downtown Pittsburgh, which read “Vote Straight Democrat.” County election officials were alerted to the sign in the early afternoon, but by then the sign had been removed, Amie Downs, an Allegheny County spokeswoman, said in a statement.

An official in the county election office, who declined to give her name, blamed the sign on a member of the local Democratic Party committee. “He said he does that every year but never had problems till this year,” she said. Pennsylvania law prohibits electioneering within 10 feet of a polling place, and Downs said it wasn’t clear whether the sign violated the law.

Down the coast, in New Port Richey — a politically mixed cluster of strip malls northwest of Tampa, Florida — Pastor Al Carlisle triggered upward of 75 complaints to Pasco County election officials after he put up a handwritten sign reading “Don’t Vote for Democrats on Tuesday and Sing ‘Oh How I Love Jesus’ on Sunday” outside his church. That wouldn’t be a problem, except that on Election Day, his church doubles as a polling place. Carlisle remained unrepentant. He continued Wednesday to trumpet the sign on his Facebook page, mixed among posts conflating religious faith with support for President Donald Trump.

Local election officials, however, stopped at mild censure. Pasco County election chief Brian Corley told the Tampa Bay Times the sign was “not appropriate” but legal, since Carlisle had placed it only just more than 100 feet away from where voters were casting their ballot.

Later in the day, some voters complained about large posters opposing abortion — an example: “God Doesn’t Make Mistakes, Choose Life” — plastered on the walls of a church gymnasium in Holts Summit, Missouri, used as a polling place. Despite the political implications, election officials told local radio station KBIA that the posters were legal because there were no abortion-related issues on the ballot.

Behind the scenes, officials nationwide were addressing gaps in website reliability and security. In Kentucky, a handful of county websites that provided information to voters flickered offline for parts of the day. State officials said the issue was likely a technical problem not caused by a malicious attack.

But several states meanwhile alerted U.S. election-security officials to efforts of hackers scanning their computer systems for software vulnerabilities. Days before the election, a county clerk’s office said its email account was compromised and its messages forwarded to a private Gmail address, according to a person familiar with the matter who was not authorized to discuss it publicly.

As polls closed Tuesday evening, back in New York, the crowds and ballot scanner failures remained. At one school in Brooklyn that had seen long lines in the morning, the wait to vote at 7 p.m. was no better — still upward of two hours, Emily Chen told ProPublica. By the end of the day, the New York City Council speaker had called for the elections director’s resignation, and the mayor had denounced the technical snags as “absolutely unacceptable.”

Down the coast, in Broward County, Florida, just north of Miami, election officials were struggling with a technical failure of a different sort. Seven precincts were unable to transmit vote tallies electronically. This time, it would force election officials to internalize what voters had suffered throughout much of the day. Around 11 p.m., they walked out into the balmy South Florida night, got into their cars and drove the voter files to the county election office.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

Facebook logo November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."
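The triage workflow described in the quote above -- score each post, auto-remove only at very high confidence, and queue the rest for reviewers by descending score -- can be sketched in a few lines. This is a minimal illustration with hypothetical thresholds and names; Facebook does not publish its actual model or cutoffs.

```python
import heapq

# Hypothetical cutoffs -- the real values are not public.
AUTO_REMOVE_THRESHOLD = 0.98  # "very high confidence" removals
REVIEW_THRESHOLD = 0.50       # below this, no action is taken

def triage(posts, score_fn):
    """Split posts into automatic removals and a reviewer queue ordered
    so the highest-scoring (most likely violating) posts come first."""
    removed, heap = [], []
    for post in posts:
        score = score_fn(post)
        if score >= AUTO_REMOVE_THRESHOLD:
            removed.append(post)                  # automatic removal
        elif score >= REVIEW_THRESHOLD:
            heapq.heappush(heap, (-score, post))  # max-heap via negation
    review_order = [heapq.heappop(heap)[1] for _ in range(len(heap))]
    return removed, review_order
```

The priority queue is the key design point: reviewers never work through posts in arrival order, they always see the model's most confident flags first.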

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators identify terrorism threats -- not to remove them automatically, but to identify and prioritize them -- and the news item also mentioned an appeals process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leaned heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts removed in two quarters, by a variety of bad actors. Hmmm... sounds good, but it makes one wonder about the ongoing digital arms race. If the bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, then not good.

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey due to:

"... a $10,000 [donation] to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries that value citizens' privacy will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published a detailed, comprehensive "Delay, Deny, and Deflect" investigative report based upon interviews of at least 50 persons:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth and its failure to identify and respond to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and politicians of both parties. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described a history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that it has invested more = doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported on 13 historical national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company wanting its social networking service to be a reliable, unbiased news source. This news is not good for a company wanting its users to accept that it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.


ABA Updates Guidance For Attorneys' Data Security And Data Breach Obligations. What Their Clients Can Expect

To provide the best representation, attorneys often process and archive sensitive information about their clients. Consumers hire attorneys to complete a variety of transactions: buy (or sell) a home, start (or operate) a business, file a complaint against a company, insurer, or website for unsatisfactory service, file a complaint against a former employer, and more. What are attorneys' obligations regarding data security to protect their clients' sensitive information, intellectual property, and proprietary business methods?

What can consumers expect when the attorney or law firm they've hired experiences a data breach? Yes, law firms experience data breaches. The National Law Review reported last year:

"2016 was the year that law firm data breaches landed and stayed squarely in both the national and international headlines. There have been numerous law firm data breaches involving incidents ranging from lost or stolen laptops and other portable media to deep intrusions... In March, the FBI issued a warning that a cybercrime insider-trading scheme was targeting international law firms to gain non-public information to be used for financial gain. In April, perhaps the largest volume data breach of all time involved law firm Mossack Fonesca in Panama... Finally, Chicago law firm, Johnson & Bell Ltd., was in the news in December when a proposed class action accusing them of failing to protect client data was unsealed."

So, what can clients expect regarding data security and data breaches? A post in the Lexology site reported:

"Lawyers don’t get a free pass when it comes to data security... In a significant ethics opinion issued last month, Formal Opinion 483, Lawyers’ Obligations After an Electronic Data Breach or Cyberattack, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility provides a detailed roadmap to a lawyer’s obligations to current and former clients when they learn that they – or their firm – have been the subject of a data breach... a lawyer’s compliance with state or federal data security laws does "not necessarily achieve compliance with ethics obligations," and identifies six ABA Model Rules that might be implicated in the breach of client information."

Readers of this blog are familiar with the common definition of a data breach: unauthorized persons have accessed, stolen, altered, and/or destroyed information they shouldn't have. Attorneys have an obligation to use technology competently. The post by Patterson Belknap Webb & Tyler LLP also stated:

"... lawyers have an obligation to take “reasonable steps” to monitor for data breaches... When a breach is detected, a lawyer must act “reasonably and promptly” to stop the breach and mitigate damages resulting from the breach... A lawyer must make reasonable efforts to assess whether any electronic files were, in fact, accessed and, if so, identify them. This requires a post-breach investigation... Lawyers must then provide notice to their affected clients of the breach..."

I read the ABA Formal Opinion 483. A follow-up post this week by the National Law Review listed 10 best practices to stop cyberattacks and breaches. Since many law firms outsource some back-office functions, this might be the most important best-practice item:

"4. Evaluate Your Vendors’ Security: Ask to see your vendor’s security certificate. Review the vendor’s security system as you would your own, making sure they exercise the same or stronger security systems than your own law firm..."


Some Surprising Facts About Facebook And Its Users

Facebook logo The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (as of March/April 2018) from 71% during 2014 to 2015. More teens use other social media services: YouTube (85%), Instagram (72%) and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays items it thinks users will engage with = click the "Like" or other emotion buttons. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.

That's like living life in an online bubble. Sadly, there is more.
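The filtering described above can be made concrete with a small sketch. The features and weights here are made up for illustration -- Facebook's actual ranking model is proprietary -- but the structure is the point: every candidate post gets a predicted-engagement score, and only the top few ever reach the feed.

```python
# A minimal sketch of engagement-ranked feed selection, with
# hypothetical features and weights.
def rank_feed(candidates, top_n):
    """Score each candidate post by predicted engagement and keep only
    the top_n; everything else simply never appears in the feed."""
    def predicted_engagement(post):
        return (3.0 * post["past_likes_of_author"]   # affinity
                + 2.0 * post["comment_rate"]         # popularity
                + 1.0 * post["recency"])             # freshness
    ranked = sorted(candidates, key=predicted_engagement, reverse=True)
    return ranked[:top_n]
```

Notice that nothing in the score rewards accuracy or importance -- only the likelihood that you will click, which is exactly why a feed like this makes a poor news source.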

If you haven't watched it, PBS has broadcast a two-part Frontline documentary titled "The Facebook Dilemma," which arguably could have been titled "The Dark Side of Sharing." The documentary rightly discusses Facebook's approaches to news and privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for greater transparency and control claims by Mr. Zuckerberg. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means, if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. This means Facebook acts more like a massive advertising network than simply a social media service.
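The contact-matching mechanism described above can be sketched simply. The field names and data here are hypothetical; the point is that an uploaded address book splits into contacts matched to existing profiles and leftover entries describing people who never signed up -- the raw material of a shadow profile.

```python
# A sketch of address-book matching, with hypothetical field names.
def match_contacts(uploaded_contacts, known_profiles):
    """Return (matched, nonusers): contacts attached to existing
    profiles vs. leftover data about people who never signed up."""
    known_emails = {profile["email"] for profile in known_profiles}
    matched, nonusers = [], []
    for contact in uploaded_contacts:
        if contact["email"] in known_emails:
            matched.append(contact)   # enriches an existing profile
        else:
            nonusers.append(contact)  # data about a nonuser
    return matched, nonusers
```

Note that the nonuser never consented to, or even knows about, either bucket.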

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


Federal Reserve Released Its Non-cash Payments Fraud Report. Have Chip Cards Helped?

Many consumers prefer to pay for products and services using methods other than cash. How secure are these non-cash payment methods? The Federal Reserve Board (FRB) analyzed the payments landscape within the United States. Its October 2018 report found good and bad news. The good news: non-cash payments fraud is small. The bad news:

  • Overall, non-cash payments fraud is growing; and
  • Card payments fraud drove that growth.

Non-Cash Payment Activity And Fraud
Payment Type | 2012 | 2015 | Increase (Decrease)
Card payments & ATM withdrawal fraud | $4 billion | $6.5 billion | 62.5 percent
Check fraud | $1.1 billion | $710 million | (35) percent
Non-cash payments fraud | $6.1 billion | $8.3 billion | 37 percent
Total non-cash payments | $161.2 trillion | $180.3 trillion | 12 percent
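The table's growth rates can be recomputed directly from its dollar amounts. Small differences from the published percentages simply reflect rounding in the underlying figures.

```python
# Recompute each row's percent change from the (rounded) amounts.
def pct_change(old, new):
    return round((new - old) / old * 100, 1)

rows = {
    "Card payments & ATM withdrawal fraud": (4.0, 6.5),  # $ billions
    "Check fraud": (1.1, 0.71),                          # $ billions
    "Non-cash payments fraud": (6.1, 8.3),               # $ billions
    "Total non-cash payments": (161.2, 180.3),           # $ trillions
}
for name, (y2012, y2015) in rows.items():
    print(f"{name}: {pct_change(y2012, y2015)}%")
```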

The FRB report included:

"... fraud totals and rates for payments processed over general-purpose credit and debit card networks, including non-prepaid and prepaid debit card networks, the automated clearinghouse (ACH) transfer system, and the check clearing system. These payment systems form the core of the noncash payment and settlement systems used to clear and settle everyday payments made by consumers and businesses in the United States. The fraud data were collected as part of Federal Reserve surveys of depository institutions in 2012 and 2015 and payment card networks in 2015 and 2016. The types of fraudulent payments covered in the study are those made by an unauthorized third party."

Data from the card network survey included general-purpose credit and debit (non-prepaid and prepaid) card payments, but did not include ATM withdrawals. The card networks include Visa, MasterCard, Discover and others. Additional findings:

"... the rate of card fraud, by value, was nearly flat from 2015 to 2016, with the rate of in-person card fraud decreasing notably and the rate of remote card fraud increasing significantly..."

The industry defines several categories of card fraud:

  1. "Counterfeit card. Fraud is perpetrated using an altered or cloned card;
  2. Lost or stolen card. Fraud is undertaken using a legitimate card, but without the cardholder’s consent;
  3. Card issued but not received. A newly issued card sent to a cardholder is intercepted and used to commit fraud;
  4. Fraudulent application. A new card is issued based on a fake identity or on someone else’s identity;
  5. Fraudulent use of account number. Fraud is perpetrated without using a physical card. This type of fraud is typically remote, with the card number being provided through an online web form or a mailed paper form, or given orally over the telephone; and
  6. Other. Fraud including fraud from account take-over and any other types of fraud not covered above."
Card Fraud By Category
Fraud Category | 2015 | 2016 | Increase/(Decrease)
Fraudulent use of account number | $2.88 billion | $3.46 billion | 20 percent
Counterfeit card fraud | $3.05 billion | $2.62 billion | (14) percent
Lost or stolen card fraud | $730 million | $810 million | 11 percent
Fraudulent application | $210 million | $360 million | 71 percent

The increase in fraudulent applications suggests that criminals find it easy to intercept pre-screened credit card offers sent via postal mail. It is easy for consumers to opt out of pre-screened offers. There is also the National Do Not Call Registry. Do both today if you haven't.

The report also covered EMV chip cards, which were introduced to stop counterfeit card fraud. Card networks distributed both chip cards to consumers, and chip-reader terminals to retailers. The banking industry had set an October 1, 2015 deadline to switch to chip cards. The FRB report:

[Chart: EMV chip card fraud and payments. Federal Reserve Board, October 2018]

The FRB concluded:

"Card systems brought EMV processing online, and a liability shift, beginning in October 2015, created an incentive for merchants to accept chip cards. By value, the share of non-fraudulent in-person payments made with [chip cards] shifted dramatically between 2015 and 2016, with chip-authenticated payments increasing from 3.2 percent to 26.4 percent. The share of fraudulent in-person payments made with [chip cards] also increased from 4.1 percent in 2015 to 22.8 percent in 2016. As [chip cards] are more secure, this growth in the share of fraudulent in-person chip payments may seem counter-intuitive; however, it reflects the overall increase in use. Note that in 2015, the share of fraudulent in-person payments with [chip cards] (4.1 percent) was greater than the share of non-fraudulent in-person payments with [chip cards] (3.2 percent), a relationship that reversed in 2016."
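The reversal the FRB notes becomes obvious if you compare chip cards' share of fraudulent payments to their share of all in-person payments, using the report's published figures:

```python
# Percent of in-person payment value made with chip cards (FRB figures).
shares = {
    2015: {"fraudulent": 4.1, "non_fraudulent": 3.2},
    2016: {"fraudulent": 22.8, "non_fraudulent": 26.4},
}
for year, s in shares.items():
    ratio = s["fraudulent"] / s["non_fraudulent"]
    tag = "over-represented" if ratio > 1 else "under-represented"
    print(f"{year}: fraud share / payment share = {ratio:.2f} ({tag})")
```

A ratio above 1 means chip cards appeared in fraud more often than their overall use would predict (2015); below 1 means less often (2016), consistent with EMV being the more secure technology even as its raw fraud volume grew with adoption.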


Senator Wyden Introduces Bill To Help Consumers Regain Online Privacy And Control Over Sensitive Data

Late last week, Senator Ron Wyden (Dem - Oregon) introduced a "discussion draft" of legislation to help consumers recover online privacy and control over their sensitive personal data. Senator Wyden said:

"Today’s economy is a giant vacuum for your personal information – Everything you read, everywhere you go, everything you buy and everyone you talk to is sucked up in a corporation’s database. But individual Americans know far too little about how their data is collected, how it’s used and how it’s shared... It’s time for some sunshine on this shadowy network of information sharing. My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules with real teeth to punish companies that abuse Americans’ most private information.”

The press release by Senator Wyden's office explained the need for new legislation:

"The government has failed to respond to these new threats: a) Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities; b) Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data; c) Consumers have no effective way to control companies’ use and sharing of their data."

Consumers in the United States lost both control and privacy protections last year. Congress repealed the FCC's broadband privacy rules (President Trump signed the rollback legislation in April 2017), and the FCC, led by Trump appointee Ajit Pai, a former Verizon lawyer, repealed net neutrality protections. A December 2017 study of 1,077 voters found that most want net neutrality protections. A prior blog post listed many historical abuses of consumers by some internet service providers (ISPs).

With broadband privacy repealed, ISPs are free to collect and archive as much data about consumers as they like, without notifying consumers, obtaining their approval, or disclosing with whom the archived data is shared. That's 100 percent freedom for ISPs and zero freedom for consumers.

By repealing online privacy and net neutrality protections for consumers, the FCC essentially punted responsibility to the U.S. Federal Trade Commission (FTC). According to Senator Wyden's press release:

"The FTC, the nation’s main privacy and data security regulator, currently lacks the authority and resources to address and prevent threats to consumers’ privacy: 1) The FTC cannot fine first-time corporate offenders. Fines for subsequent violations of the law are tiny, and not a credible deterrent; 2) The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money; 3) The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator; and 4) The FTC does not have enough staff, especially skilled technology experts. Currently about 50 people at the FTC police the entire technology sector and credit agencies."

This means consumers have no protections nor legal options unless the company or website violates its published terms of service and privacy policies. To close the above gaps, Senator Wyden's new legislation, titled the Consumer Data Protection Act (CDPA), contains several new and stronger protections. It:

"... allows consumers to control the sale and sharing of their data, gives the FTC the authority to be an effective cop on the beat, and will spur a new market for privacy-protecting services. The bill empowers the FTC to: i) Establish minimum privacy and cybersecurity standards; ii) Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives; iii) Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information. It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized; iv) Give consumers a way to review what personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it; v) Hire 175 more staff to police the largely unregulated market for private data; and vi) Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy, and security."

Permitting companies to charge consumers who opt out of data collection and sharing is a good thing. Why? Monthly payments by consumers are leverage -- a strong incentive for companies to provide better cybersecurity.

Business as usual -- cybersecurity methods by corporate executives and government enforcement -- isn't enough. The tsunami of data breaches, both during October alone and in several notable breach events from earlier this year, is an indication.

The status quo is unacceptable. Executives' behavior won't change without stronger consequences like jail time, since companies perform cost-benefit analyses weighing how much to spend on cybersecurity against the probability of breaches and fines. Opt-outs of data collection and sharing by consumers, steeper fines, and criminal penalties could change those cost-benefit calculations.

Four former chief technologists at the FCC support Senator Wyden's legislation. Gabriel Weinberg, the Chief Executive Officer of DuckDuckGo, also supports it:

"Senator Wyden’s proposed consumer privacy bill creates needed privacy protections for consumers, mandating easy opt-outs from hidden tracking. By forcing companies that sell and monetize user data to be more transparent about their data practices, the bill will also empower consumers to make better-informed privacy decisions online, enabling companies like ours to compete on a more level playing field."

Regular readers of this blog know that the DuckDuckGo search engine (unlike the Google, Bing, and Yahoo search engines) doesn't track users, doesn't collect or archive data about users and their devices, and doesn't collect or store users' search criteria. So, DuckDuckGo users can search knowing their data isn't being sold to advertisers, data brokers, and others.

Lastly, Wyden's proposed legislation includes several key definitions (emphasis added):

"... The term "automated decision system" means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers... The term "automated decision system impact assessment" means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision 14 system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security that includes... The term "data protection impact assessment" means a study evaluating the extent to which an information system protects the privacy and security of personal information the system processes... "

The draft legislation requires companies to perform both automated decision system impact assessments and data protection impact assessments, and requires the FTC to set the frequency and conditions for both. A copy of the CDPA draft is also available here (Adobe PDF; 67.7 k bytes).

This is a good start. It is important... critical... to hold accountable both corporate executives and the automated decision systems they approve and deploy. Based upon history, outsourcing has been one corporate tactic to manage liability by shifting it to providers. It is good to close now any loopholes through which executives could abuse artificial intelligence and related technologies to avoid responsibility.

What are your thoughts and opinions of the proposed legislation?


Mail-Only Voting In Oregon: Easy, Simple, And Secure. Why Not In All 50 States?

Hopefully, you voted today. A democracy works best when citizens participate. And voting is one way to participate.

If you already stood in line to vote, or if your state was one which closed some polling places, know that it doesn't have to be this way. Consider Oregon. Not only is the process there easier and simpler, but elections officials in Oregon don't have to worry as much as officials in other states about hacks and tampering. Why? They don't have voting machines. Yes, that's correct. No voting machines. No polling places either.

NBC News explained:

"Twenty years ago, Oregon became the first state in the nation to conduct all statewide elections entirely by mail. Three weeks before each election, all of Oregon's nearly 2.7 million registered voters are sent a ballot by the U.S. Postal Service. Then they mark and sign their ballots and send them in. You don't have to ask for the ballot, it just arrives. There are no forms to fill out, no voter ID, no technology except paper and stamps. If you don't want to pay for a stamp, you can drop your ballot in a box at one of the state's hundreds of collection sites."

Reportedly, Washington and Colorado also have mail-only voting. Perhaps most importantly, Oregon gets higher voter participation:

"In the 2014 election, records showed that 45 percent of registered voters 34 and under marked a ballot — twice the level of many other states."

State and local governments across the United States use a variety of voting technologies. The two dominant technologies are optical-scan ballots and direct-recording electronic (DRE) devices. Optical-scan ballots are paper ballots on which voters fill in bubbles or other machine-readable marks. DRE devices include touch-screen devices that store votes in computer memory. A study in 2016 found that about half of registered voters (47%) live in areas that use only optical-scan as their standard voting system, about 28% live in DRE-only areas, 19% live in areas with both optical-scan and DRE systems, and about 5% live in areas that conduct elections entirely by mail.

Some voters and many experts worry about areas using old, obsolete DRE devices that lack software and security upgrades. An analysis earlier this year found that the USA has made little progress since the 2016 election in replacing antiquated, vulnerable voting machines, and has done even less to improve capabilities to recover from cyberattacks.

Last week, the Pew Research Center released results of its latest survey. Key findings: while nearly nine-in-ten (89%) Americans have confidence in poll workers in their community to do a good job, 67% of Americans say it is very or somewhat likely that Russia (or other foreign governments) will try to influence the midterm elections, and less than half (45%) are very or somewhat confident that election systems are secure from hacking. The survey also found that younger voters (ages 18 - 29) are less likely to view voting as convenient, compared to older voters.

Oregon's process is more secure. There are no local, electronic DRE devices scattered across towns and cities that can be hacked or tampered with, and which don't provide paper backups. The paper ballots are stored in a secure place after the election, so if there is a question about the count, elections officials can perform recounts for specific communities when needed. According to the NBC News report, Oregon's Secretary of State, Dennis Richardson, said:

"You can't hack paper"

Oregon posts results online at results.oregonvotes.gov starting at 8:00 pm on Tuesday. Residents of Oregon can use the oregonvotes.gov site to check their voter record, track their ballot, find an official drop box, check election results, and find other relevant information.

Oregon's process sounds simple, comprehensive, more secure, and easy for voters. Voters don't have to stand in long lines, nor take time off from work to vote. If online retailers can reliably fulfill consumers' online purchases via package delivery, then elections officials in local towns and cities can -- and should -- do the same with paper ballots. Many states already provide absentee ballots via postal mail, so a mail-only process isn't a huge stretch.


When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020. Any outstanding issues need to be resolved before then. One such issue is the "trolley problem" - a situation where a fatal vehicle crash cannot be avoided and the self-driving car must decide whether to save the passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new. There are related issues, and some experts have called for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must be programmed with moral preferences which dictate which person to sacrifice.

The answer is tricky. You might assume: always save the passenger, since nobody would buy a self-driving car that would kill its owner. What if the pedestrian is crossing against a 'do not cross' signal within a crosswalk? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elderly, or pregnant? Or a doctor? Does it matter if the passenger is older than the pedestrians?

To understand what the public wants -- and expects -- in self-driving cars, also known as autonomous vehicles (AVs), researchers from MIT asked consumers in a massive, online global survey of 2 million people from 233 countries and territories. The survey presented 13 accident scenarios with nine varying factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."

Besides recording the accident choices, the researchers also collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants, in order to identify clusters: groups, areas, countries, territories, or regions containing people with similar "moral preferences."
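The clustering step described above can be sketched with a toy example. This is a hypothetical illustration, not the researchers' actual method or data: the country names, the nine preference scores, and the simple nearest-neighbor grouping are all invented for demonstration.

```python
# Hypothetical sketch: grouping country-level "moral preference"
# vectors by similarity, in the spirit of the study's clustering
# step. All numbers below are invented for illustration.
import math

# Each row: one country's average preference scores across the nine
# factors (sparing humans over pets, the young over the elderly, etc.)
prefs = {
    "US":     [0.9, 0.4, 0.6, 0.8, 0.5, 0.7, 0.6, 0.5, 0.4],
    "France": [0.9, 0.4, 0.6, 0.8, 0.6, 0.7, 0.6, 0.5, 0.4],
    "Japan":  [0.8, 0.6, 0.5, 0.7, 0.5, 0.3, 0.8, 0.5, 0.5],
    "Brazil": [0.9, 0.5, 0.7, 0.8, 0.7, 0.6, 0.4, 0.6, 0.5],
}

def distance(a, b):
    """Euclidean distance between two preference vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(country):
    """Return the country whose preference vector is closest."""
    others = [c for c in prefs if c != country]
    return min(others, key=lambda c: distance(prefs[country], prefs[c]))

for c in prefs:
    print(c, "->", nearest_neighbor(c))
```

The actual study used far richer data and formal hierarchical clustering, but the underlying idea is the same: countries whose preference vectors sit close together land in the same cluster.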

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."

This is huge. It makes one question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in some clusters may resent an AV maker forcing its moral preferences upon them. Some clusters or countries may demand vehicles matching their own moral preferences.

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

Several important limitations to remember. And there are more. The study didn't address self-driving trucks. Should an AV tractor-trailer semi -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other vehicles are motorcycles, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the pleasure craft matter? What if the other craft is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? What happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. AV makers (e.g., of autos, ships, and trucks) need to state explicitly what their vehicles will (and won't) do. Don't hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries or regions have varying preferences about online privacy, sharing information, and corporate surveillance. What are your opinions?