
How to Wrestle Your Data From Data Brokers, Silicon Valley — and Cambridge Analytica

[Editor's note: today's guest post, by reporters at ProPublica, discusses data brokers you may not know, the data collected and archived about consumers, and options for consumers to (re)gain as much privacy as possible. It is reprinted with permission.]

By Jeremy B. Merrill, ProPublica

Cambridge Analytica thinks that I’m a "Very Unlikely Republican." Another political data firm, ALC Digital, has concluded I’m a "Socially Conservative," Republican, "Boomer Voter." In fact, I’m a 27-year-old millennial with no set party allegiance.

For all the fanfare, the burgeoning field of mining our personal data remains an inexact art.

One thing is certain: My personal data, and likely yours, is in more hands than ever. Tech firms, data brokers and political consultants build profiles of what they know — or think they can reasonably guess — about your purchasing habits, personality, hobbies and even what political issues you care about.

You can find out what those companies know about you, but be prepared to be stubborn. Very stubborn. To demonstrate how this works, we’ve chosen a couple of representative companies from three major categories: data brokers, big tech firms and political data consultants.

Few of them make it easy. Some will show you your data on their websites; others will make you ask for your digital profile via U.S. mail. And then there’s Cambridge Analytica, the controversial Trump campaign vendor that has come under intense fire in light of reports in the British newspaper The Observer and in The New York Times that the company used improperly obtained data from Facebook to help build voter profiles.

To find out what the chaps at the British data firm have on you, you’re going to need both stamps and a "cheque."

Once you see your data, you’ll have a much better understanding of how this shadowy corner of the new economy works. You’ll see what seemingly personal information they know about you … and you’ll probably have some hypotheses about where this data is coming from. You’ll also probably see some predictions about who you are that are hilariously wrong.

And if you do obtain your data from any of these companies, please let us know your thoughts at [email protected]. We won’t share or publish what you say (unless you tell us that it’s OK).

Cambridge Analytica and Other Political Consultants

Making statistically informed guesses about Americans’ political beliefs and pet issues is a common business these days, with dozens of firms selling data to candidates and issue groups about the purported leanings of individual American voters.

Few of these firms have to give you your data. But Cambridge Analytica is required to do so by an obscure European rule.

Cambridge Analytica:

Around the time of the 2016 election, Paul-Olivier Dehaye, a Belgian mathematician and founder of PersonalData.IO, a website that helps people exercise their data protection rights, approached me with an idea for a story. He flagged some of Cambridge Analytica’s claims about the power of its "psychographic" targeting capabilities and suggested that I demand my data from them.

So I sent off a request, following Dehaye’s coaching and citing the UK Data Protection Act 1998, the British implementation of a little-known European Union data-protection law that grants individuals (even Americans) the right to see the data European companies compile about them.

It worked. I got back a spreadsheet of data about me. But it took months, cost ten pounds — and I had to give them a photo ID and two utility bills. Presumably they didn’t want my personal data falling into the wrong hands.

How You Can Request Your Data From Cambridge Analytica:

  1. Visit Cambridge Analytica’s website and fill out its data request web form.
  2. After you submit the form, the page will immediately request that you email a photo ID and copies of two utility bills or bank statements to [email protected], to prove your identity. This page will also include the company’s bank account details.
  3. Find a way to send them 10 GBP. You can try wiring this from your bank, though it may cost you an additional $25 or so — or ask a friend in the UK to go to their bank and get a cashier’s check. Your American bank probably won’t let you write a GBP-denominated check. Two services I tried, Xoom and TransferWise, weren’t able to do it.
  4. Eventually, Cambridge Analytica will email you a small Excel spreadsheet of information and a letter. You might have to wait a few weeks. Celeste LeCompte, ProPublica’s vice president of business development, requested her data on March 27 and still hasn’t received it.

Because the company is based in the United Kingdom, it had no choice but to fulfill my request. In recent weeks, the firm has come under intense fire after The New York Times and the British paper The Observer disclosed that it had used improperly obtained data from Facebook to build profiles of American voters. Facebook told me that data about me was likely transmitted to Cambridge Analytica because a person with whom I am "friends" on the social network had taken the now-infamous "This Is Your Digital Life" quiz. For what it’s worth, my data shows no sign of anything derived from Facebook.

What You Might Get Back From Cambridge Analytica:

Cambridge Analytica had generated 13 data points about my views: 10 political issues, ranked by importance; two guesses at my partisan leanings (one blank); and a guess at whether I would turn out in the 2016 general election.

They told me that the lower the rank, the higher the predicted importance of the issue to me.

Alongside that data labeled "models" were two other types of data that are run-of-the-mill and widely used by political consultants. One sheet of "core data" — that is, personal info, sliced and diced a few different ways, perhaps to be used more easily as parameters for a statistical model — included my address, my electoral district, the census tract I live in and my date of birth.

The spreadsheet included a few rows of "election returns" — previous elections in New York State in which I had voted. (Intriguingly, Cambridge Analytica missed that I had voted in 2015’s snoozefest of a vote-for-five-of-these-five judicial election. It also didn’t know about elections in which I had voted in North Carolina, where I lived before I lived in New York.)

ALC Digital

ALC Digital is another data broker; it says its "audiences are built from multi-sourced, verified information about an individual." Its data is distributed via Oracle Data Cloud, a service that lets advertisers target specific audiences of people — like, perhaps, people who are Boomer Voters and also Republicans.

The firm brags in an Oracle document posted online about how hard it is to avoid their data collection efforts, saying, "It has no cookies to erase and can’t be ‘cleared.’ ALC Real World Data is rooted in reality, and doesn’t rely on inferences or faulty models."

How You Can Request Your Data From ALC Digital:

Here’s how to find the predictions about your political beliefs in Oracle Data Cloud:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Click on the Partner Segments tab.
  3. Scroll on through until you find ALC Digital.

You may have to scroll for a while before you find it.

And not everyone appears to have data from ALC Digital, so don’t be shocked if you can’t find it. If you don’t, there may be other fascinating companies with data about who you are in your Oracle file.

What You Might Get Back From ALC Digital:

When I downloaded the data last year, it said I was "Socially Conservative," "Boomer Voter" — as well as a female voter and a tax reform supporter.

Recently, when I checked again, those categories had disappeared entirely from my profile. I had nothing from ALC Digital.

ALC Digital is not required to release this data. It is disclosed via the Oracle Data Cloud. Fran Green, the company’s president, said that Aristotle, a longtime political data company, “provides us with consumer data that populates these audiences.” She also said that “we do not claim to know people’s ‘beliefs.’”

Big Tech

Big tech firms like Google and Facebook tend to make their money by selling ads, so they build extensive profiles of their users’ interests and activities. They also depend on their users’ goodwill to keep us voluntarily giving them our locations, our browsing histories and plain ol’ lists of our friends and interests. (So far, these popular companies have not faced much regulation.) Both make it easy to download the data that they keep on you.

Firms like Google and Facebook don’t sell your data — because it’s their competitive advantage. Google’s privacy page screams in 72-point type: "We do not sell your personal information to anyone." As websites that we visit frequently, they sell access to our attention, so companies that want to reach you in particular can do so on these companies’ sites or other sites that feature their ads.

Facebook

How You Can Request Your Data From Facebook:

You of course have to have a Facebook account and be logged in:

  1. Visit https://www.facebook.com/settings on your computer.
  2. Click the “Download a copy of your Facebook data” link.
  3. On the next page, click “Start My Archive.”
  4. Enter your password, then click “Start My Archive” again.
  5. You’ll get an email immediately, and another one saying “Your Facebook download is ready” when your data is ready to be downloaded. You’ll get a notification on Facebook, too. Mine took just a few minutes.
  6. Once you get that email, click the link, then click Download Archive. Then reenter your password, which will start a zip file downloading.
  7. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.” You’ll get a folder called something like “facebook-jeremybmerrill,” but, of course, with your username instead of mine.
  8. Open the folder and double-click “index.htm” to open it in your web browser.
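
If you’d rather poke around programmatically, below is a minimal Python sketch that lists the archive’s contents without unzipping it. The file name is the hypothetical example from step 7; substitute your own download.

    import zipfile

    # Hypothetical archive name from step 7 -- replace with your own download.
    ARCHIVE = "facebook-jeremybmerrill.zip"

    with zipfile.ZipFile(ARCHIVE) as zf:
        # Print each file's size and path to see what Facebook included.
        for info in zf.infolist():
            print(f"{info.file_size:>10}  {info.filename}")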

What You Might Get Back From Facebook:

Facebook designed its archive to first show you your profile information. That’s all information you typed into Facebook and that you probably intended to be shared with your friends. It’s no surprise that Facebook knows what city I live in or what my AIM screen name was — I told Facebook those things so that my friends would know.

But it’s a bit of a surprise that they decided to feature a list of my ex-girlfriends — what they blandly termed "Previous Relationships" — so prominently.

As you dig deeper in your archive, you’ll find more information that you gave Facebook, but that you might not have expected the social network to keep hold of for years: if you’re me, that’s the Nickelback concert I apparently RSVPed to, posts about switching high schools and instant messages from my freshman year in college.

But finally, you’ll find the creepier information: what Facebook knows about you that you didn’t tell it, on the "Ads" page. You’ll find "Ads Topics" that Facebook decided you were interested in, like Housing, ESPN or the town of Ellijay, Georgia. And, you’ll find a list of advertisers who have obtained your contact information and uploaded it to Facebook, as part of a so-called Custom Audience of specific people to whom they want to show their ads.

You’ll find more of that creepy information on your Ads Preferences page. Despite Mark Zuckerberg telling Rep. Jerry McNerney, D-Calif., in a hearing earlier this month that “all of your information is included in your ‘download your information,’” my archive didn’t include that list of ad categories that can be used to target ads to me. (Some other types of information aren’t included in the download, like other people’s posts you’ve liked. Those are listed here, along with where to find them — which, for most, is in your Activity Log.)

This area may include Facebook’s guesses about who you are, boiled down from some of your activities. Most Americans will have a guess about their politics — Facebook says I’m a "moderate" about U.S. Politics — and some will have a guess about so-called "multicultural affinity," which Facebook insists is not a guess about your ethnicity, but rather what sorts of content "you are interested in or will respond well to." For instance, Facebook recently added that I have a "Multicultural Affinity: African American." (I’m white — though, because Facebook’s definition of "multicultural affinity" is so strange, it’s hard to tell if this is an error on Facebook’s part.)

Facebook also doesn’t include your browsing history — the subject of back-and-forths between Mark Zuckerberg and several members of Congress. The company says it keeps that data just long enough to boil it down into those “Ad Topics.”

For people without Facebook accounts, Facebook says to email [email protected] or fill out an online form to download what Facebook knows about you. One puzzle here is how Facebook gathers data on people whose identities it may not know. It may know that a person using a phone from Atlanta, Georgia, has accessed a Facebook site and that the same person was last week in Austin, Texas, and before that Cincinnati, but it may not know that that person is me. It is, in principle, difficult for the company to give you the data it collects about logged-out users if it doesn’t know exactly who they are.

Google

Like Facebook, Google will give you a zip archive of your data. Google’s can be much bigger, because you might have stored gigabytes of files in Google Drive or years of emails in Gmail.

But like Facebook, Google does not provide its guesses about your interests, which it uses to target ads. Those guesses are available elsewhere.

How You Can Request Your Data From Google:

  1. Visit https://takeout.google.com/settings/takeout/ to use Google’s cutely named Takeout service.
  2. You’ll have to pick which data you want to download and examine. You should definitely select My Activity, Location History and Searches. You may not want to download gigabytes of emails, if you use Gmail, since that uses a lot of space and may take a while. (That’s also information you shouldn’t be surprised that Google keeps — you left it with Gmail so that you could use Google’s search expertise to hold on to your emails.)
  3. Google will present you with a few options for how to get your archive. The defaults are fine.
  4. Within a few hours, you should get an email with the subject "Your Google data archive is ready." Click Download Archive and log in again. That should start the download of a file named something like "takeout-20180412T193535.zip."
  5. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.”
  6. You’ll get a folder called Takeout. Open the file inside it called "index.html" in your web browser to explore your archive.

What You Might Get Back From Google:

Once you open the index.html file, you’ll see icons for the data you chose in step 2. Try exploring "Ads" under "My Activity" — you’ll see a list of times you saw Google Ads, including on apps on your phone.

Google also includes your search history, under "Searches" — in my case, going back to 2013. Google knows what I had forgotten: I Googled a bunch of dinosaurs around Valentine’s Day that year… And it’s not just web searches: the Sound Search history reminded me that at some point, I used that service to identify Natalie Imbruglia’s song "Torn."

Android phone users might want to check the "Android" folder: Google keeps a list of each app you’ve used on your phone.

Most of the data contained here are records of ways you’ve directly interacted with Google — and the company really does use those records to improve how its services work for me. I’m glad to see my searches auto-completed, for instance.

But the company also creates data about you: Visit the company’s Ads Settings page to see some of the “topics” Google guesses you’re interested in, and which it uses to personalize the ads you see. Those topics are fairly general — it knows I’m interested in “Politics” — but the company says it has more granular classifications that it doesn’t include on the list. Those more granular, hidden classifications are on various topics, from sports to vacations to politics, where Google does generate a guess whether some people are politically “left-leaning” or “right-leaning.”

Data Brokers

Here’s who really does sell your data: data brokers like the credit reporting agency Experian and a firm named Epsilon.

These sometimes-shady firms are middlemen who buy your data from tracking firms, survey marketers and retailers, slice and dice the data into “segments,” then sell those on to advertisers.

Experian

Experian is best known as a credit reporting firm, but your credit cards aren’t all they keep track of. They told me that they “firmly believe people should be made aware of how their data is being used” — so if you print and mail them a form, they’ll tell you what data they have on you.

“Educated consumers,” they said, “are better equipped to be effective, successful participants in a world that increasingly relies on the exchange of information to efficiently deliver the products and services consumers demand.”

How You Can Request Your Data From Experian:

  1. Visit Experian’s Marketing Data Request site and print the Marketing Data Report Request form.
  2. Print a copy of your ID and proof of address.
  3. Mail it all to: Experian Marketing Services, PO Box 40, Allen, TX 75013
  4. Wait for them to mail you something back.

What You Might Get Back From Experian:

Expect to wait a while. I’ve been waiting almost a month.

They also come up with a guess about your political views that’s integrated with Facebook — our Facebook Political Ad Collector project has found that many political candidates use Experian’s data to target their Facebook ads to likely supporters.

You should hope to find a guess about your political views that’d be useful to those candidates — as well as categories derived from your purchasing data.

Experian told me they generate the data they have about you from a long list of sources, including public records and “historical catalog purchase information” — as well as calculating it from predictive models.

Epsilon

How You Can Request Your Data From Epsilon:

  1. Visit Epsilon’s Marketing Data Summary Request form.
  2. After you enter your name and address, Epsilon will ask some of those identity-verification questions that quiz you about your old addresses and cars. If your identity can’t be verified with those, Epsilon will ask you to mail in a form.
  3. Wait for Epsilon to mail you your data; it took about a week for me.

What You Might Get Back From Epsilon:

Epsilon has information on “demographics” and “lifestyle interests” — at the household level. It also includes a list of “household purchases.”

It also has data that political candidates use to target their Facebook ads, including Randy Bryce, a Wisconsin Democrat who’s seeking his party’s nomination to run for retiring Speaker Paul Ryan’s seat, and Rep. Tulsi Gabbard, D-Hawaii.

In my case, Epsilon knows I buy clothes, books and home office supplies, among other things — but isn’t any more specific. They didn’t tell me what political beliefs they believe I hold. The company didn’t respond to a request for comment.

Oracle

Oracle’s Data Cloud aggregates data about you from Oracle itself, as well as so-called third-party data from other companies.

How You Can Request Your Data From Oracle:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Explore each tab, from “Basic Info” to “Hobbies & Interests” and “Partner Segments.”

Not fun scrolling through all those pages? I have 84 pages of four pieces of data each.

You can’t search. All the text is actually images of text. Oracle declined to say why it chose to make its site so hard to use.

What You Might Get Back From Oracle:

My Oracle profile includes nearly 1,500 data points, covering all aspects of my life, from my age to my car to how old my children are to whether I buy eggs. These profiles can even say if you’re likely to dress your pet in a costume for Halloween. But many of them are off-base or contradictory.

Many companies in Oracle’s data, besides ALC Digital, offer guesses about my political views: Data from one company uploaded by AcquireWeb says that my political affiliations are as a Democrat and an Independent … but also that I’m a “Mild Republican.” Another company, an Oracle subsidiary called AddThis, says that I’m a “Liberal.” Cuebiq, which calls itself a “location intelligence” company, says I’m in a subset of “Democrats” called “Liberal Professions.”

If an advertiser wants to show an ad to Spring Break Enthusiasts, Oracle can enable that. I’m apparently a Spring Break Enthusiast. Do I buy eggs? I sure do. Data on Oracle’s site associated with AcquireWeb says I’m a cat owner …

But it also “knows” I’m a dog owner, which I’m not.

Al Gadbut, the CEO of AcquireWeb, explained that the guesses associated with his company weren’t based on my personal data, but rather the tendencies of people in my geographical area — hence the seemingly contradictory political guesses. He said his firm doesn’t generate the data, but rather uploaded it on behalf of other companies. Cuebiq’s guess was a “probabilistic inference” they drew from location data submitted to them by some app on my phone. Valentina Marastoni-Bieser, Cuebiq’s senior vice president of marketing, wouldn’t tell me which app it was, though.

Data for sale here includes a long list of TV shows I — supposedly — watch.

But it’s not all wrong. AddThis can tell that I’m “Young & Hip.”

Takeaways:

The above list is just a sampling of the firms that collect your data and try to draw conclusions about who you are — not just sites you visit like Facebook and controversial firms like Cambridge Analytica.

You can make some guesses as to where this data comes from — especially the more granular consumer data from Oracle. For each data point, it’s worth considering: Who’d be in a position to sell a list of what TV shows I watch, or, at least, a list of what TV shows people demographically like me watch? Who’d be in a position to sell a list of what groceries I, or people similar to me in my area, buy? Some of those companies — companies you’re likely paying, and for whom the internet adage that “if you’re not paying, you’re the product” doesn’t hold — are likely selling data about you without your knowledge. Other data points, like the location data used by Cuebiq, can come from any number of apps or websites, so it may be difficult to figure out exactly which one has passed it on.

Companies like Google and Facebook often say that they’ll let you “correct” the data that they hold on you — tacitly acknowledging that they sometimes get it wrong. But if receiving relevant ads is not important to you, they’ll let you opt out entirely — or, presumably, “correct” your data to something false.

An upcoming European Union rule called the General Data Protection Regulation portends a dramatic change to how data is collected and used on the web — if only for Europeans. No such law seems likely to be passed in the U.S. in the near future.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook.com has dominated the news during the past three weeks. The news media have reported about many issues, but there are more -- whether or not you use Facebook. Things began in mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."

So, data that should have been deleted wasn't. Then, Facebook relied upon certifications from entities that had lied previously. Not good. Then, Facebook posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether or not data the perpetrators said was destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder what other developers and corporate and academic users have violated Facebook's rules by sharing sensitive members' data they shouldn't have.

Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. Facebook needs to do more... perhaps, revise its Terms Of Use to include large fines for violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure sensitive members' information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics with political campaigns, it seems wise to assume that the practice will continue with improvements. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. I don't. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions: who gets what and where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice-sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect users' data it collects. Some of its privacy snafus: settlement of a lawsuit resulting from alleged privacy abuses by its Beacon advertising program, changed members' ad settings without notice or consent, an advertising platform which allegedly facilitates abuses of older workers, health and privacy concerns about a new service for children ages 6 to 13, transparency concerns about political ads, and new lawsuits about the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now, we have yet another apology.

In a press release this afternoon, Facebook revised upward the number affected by the Facebook/CA breach from 50 to 87 million persons. Most, about 70.6 million, are in the United States. The breakdown by country:

Chart: Number of affected persons by country in the Facebook-Cambridge Analytica breach.

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on, specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too; especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg, the CEO of Facebook, will testify before the U.S. Congress on April 11th. His upcoming testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to what's available in the United Kingdom. And, there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation?" Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And, that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, it refuses to provide protections equal to those in the United Kingdom, or if it backtracks later on its promises. Thoughts? Comments?


Airlines Want To Extend 'Dynamic Pricing' Capabilities To Set Ticket Prices By Each Person

In the near future, what you post on social media sites (e.g., Facebook, Instagram, Pinterest, etc.) could affect the price you pay for airline tickets. How's that?

First, airlines already use what the travel industry calls "dynamic pricing" to vary prices by date, time of day, and season. We've all seen higher ticket prices during the holidays and peak travel times. The Telegraph UK reported that airlines want to extend dynamic pricing to set fares by person:

"... the advent of setting fares by the person, rather than the flight, are fast approaching. According to John McBride, director of product management for PROS, a software provider that works with airlines including Lufthansa, Emirates and Southwest, a number of operators have already introduced dynamic pricing on some ticket searches. "2018 will be a very phenomenal year in terms of traction," he told Travel Weekly..."

And, there was a preliminary industry study about how to do it:

" "The introduction of a Dynamic Pricing Engine will allow an airline to take a base published fare that has already been calculated based on journey characteristics and broad segmentation, and further adjust the fare after evaluating details about the travelers and current market conditions," explains a white paper on pricing written by the Airline Tariff Publishing Company (ATPCO), which counts British Airways, Delta and KLM among its 430 airline customers... An ATPCO working group met [in late February] to discuss dynamic pricing, but it is likely that any roll out to its customers would be incremental."

What's "incremental" mean? Experts say first step would be to vary ticket prices in search results at the airline's site, or at an intermediary's site. There's virtually no way for each traveler to know they'd see a personal price that's higher (or lower) from prices presented to others.

With dynamic pricing per person, business travelers would pay more. And, an airline could automatically bundle several fees (e.g., priority boarding, luggage, meals, etc.) for its loyalty program members into each person's ticket price, reducing transparency and making fair price comparisons harder. Of course, airlines would pitch this as convenience, but alert consumers know that any convenience always has its price.

Thankfully, some politicians in the United States are paying attention. The Shear Social Media Law & Technology blog summarized the situation very well:

"[Dynamic pricing by person] demonstrates why technology companies and the data collection industry needs greater regulation to protect the personal privacy and free speech rights of Americans. Until Silicon Valley and data brokers are properly regulated Americans will continue to be discriminated against based upon the information that technology companies are collecting about us."

Just because something can be done with technology, doesn't mean it should be done. What do you think?


Security Experts: Artificial Intelligence Is Ripe For Misuse By Bad Actors

Over the years, bad actors (e.g., criminals, terrorists, rogue states, ethically-challenged business executives) have used a variety of online technologies to remotely hack computers, track users online without consent or notice, and circumvent consumers' privacy settings on their internet-connected devices. During the past year or two, reports surfaced about bad actors using advertising and social networking technologies to sway public opinion.

Security researchers and experts have warned in a new report that two of the newest technologies can also be used maliciously:

"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis... Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."

Companies currently use or test artificial intelligence (A.I.) to automate mundane tasks, upgrade and improve existing automated processes, and/or personalize employee (and customer) experiences in a variety of applications and business functions, including sales, customer service, and human resources. "Machine learning" refers to the development of digital systems to improve the performance of a task using experience. Both are part of a business trend often referred to as "digital transformation" or the "intelligent workplace." The CXO Talk site, featuring interviews with business leaders and innovators, is a good resource to learn more about A.I. and digital transformation.

A survey last year of employees in the USA, France, Germany, and the United Kingdom found that they "see A.I. as the technology that will cause the most disruption to the workplace." The survey also found: 70 percent of employees surveyed expect A.I. to impact their jobs during the next ten years, half expect impacts within the next three years, and about a third see A.I. as a job creator.

This new report was authored by 26 security experts from a variety of educational institutions including American University, Stanford University, Yale University, the University of Cambridge, the University of Oxford, and others. The report cited three general ways bad actors could misuse A.I.:

"1. Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.

2. Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.

3. Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems."

So, A.I. could make it easier for the bad guys to automate labor-intensive cyber-attacks such as spear-phishing. The bad guys could also create new cyber-attacks by combining A.I. with speech synthesis. The authors of the report cited examples of more threats:

"The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones)... The use of AI to automate tasks involved in surveillance (e.g. analyzing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation..."

BBC News reported even more possible threats:

"Technologies such as AlphaGo - an AI developed by Google's DeepMind and able to outwit human Go players - could be used by hackers to find patterns in data and new exploits in code. A malicious individual could buy a drone and train it with facial recognition software to target a certain individual. Bots could be automated or "fake" lifelike videos for political manipulation. Hackers could use speech synthesis to impersonate targets."

From all of this, one can conclude that the 2016 election interference cited by intelligence officials is probably mild compared to what will come: more serious, sophisticated, and numerous attacks. The report included four high-level recommendations:

"1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges."

Download the 101-page report titled "The Malicious Use Of Artificial Intelligence: Forecasting, Prevention, And Mitigation." A copy of the report is also available here (Adobe PDF; 1,400 KB).

To prepare, corporate and government executives would be wise to both harden their computer networks and (re)train their employees to recognize and guard against cyber attacks. What do you think?


Citigroup Promises To Close Pay Gaps For Female And Minority Workers

USA Today reported that Citigroup:

"... will boost job compensation for women and minorities in a bid to close pay gaps in the U.S., United Kingdom, and Germany, becoming the first U.S. bank to respond to shareholder pressure about the inequalities. The New York-based financial company announced the effort Monday, saying it came after a Citigroup compensation assessment in the three countries found that women on average were paid 99% of what men got and minorities on average received 99% of what non-minorities were paid... Citigroup's action prompted investment advisory company Arjuna Capital to withdraw the 2018 gender pay shareholder proposal it had filed in an effort to force an investor vote that would require the bank to address pay inequality."

So, the bank made changes only after a major investor forced it to. The news report cited other banks (text links added):

"No other U.S. bank has taken similar action, Arjuna said. Along with Citigroup, Arjuna said it had filed gender pay shareholder proposals this year with U.S. banks JPMorgan Chase, Wells Fargo, Bank of America and Bank of New York Mellon. The investment adviser said it had filed similar proposals with American Express, Mastercard, Reinsurance Group, and Progressive Insurance. If approved by shareholders, the proposals would require the companies to publish their policies and goals to reduce gender pay gaps."

JP Morgan Chase promised in 2016 to raise the pay of 18,000 tellers and branch workers. It seems that the banking industry, kicking and screaming, has been forced to confront its pay-gap issues for employees. What do you think?


Uber's Ripley Program To Thwart Law Enforcement

Uber is in the news again, and not in a good way. TechCrunch reported:

"Between spring 2015 until late 2016 the ride-hailing giant routinely used a system designed to thwart police raids in foreign countries, according to Bloomberg, citing three people with knowledge of the system. It reports that Uber’s San Francisco office used the protocol — which apparently came to be referred to internally as ‘Ripley’ — at least two dozen times. The system enabled staff to remotely change passwords and “otherwise lock up data on company-owned smartphones, laptops, and desktops as well as shut down the devices”, it reports. We’ve also been told — via our own sources — about multiple programs at Uber intended to prevent company data from being accessed by oversight authorities... according to Bloomberg Uber created the system in response to raids on its offices in Europe: Specifically following a March 2015 raid on its Brussel’s office in which police gained access to its payments system and financial documents as well as driver and employee information; and after a raid on its Paris office in the same week."

In November of last year, reports emerged that the popular ride-sharing service experienced a data breach affecting 57 million users. Regulators said then that Uber tried to cover it up.

In March of last year, reports surfaced about Greyball, a worldwide program within Uber to thwart code enforcement inspections by governments. TechCrunch also described uLocker:

"We’ve also heard of the existence of a program at Uber called uLocker, although one source with knowledge of the program told us that the intention was to utilize a ransomware cryptolocker exploit and randomize the tokens — with the idea being that if Uber got raided it would cryptolocker its own devices in order to render data inaccessible to oversight authorities. The source said uLocker was being written in-house by Uber’s eng-sec and Marketplace Analytics divisions..."

Geez. First Greyball. Then Ripley and uLocker. And these are the known programs. This raises the question: how many programs are there?

Earlier today, Wired reported:

"The engineer at the heart of the upcoming Waymo vs Uber trial is facing dramatic new allegations of commercial wrongdoing, this time from a former nanny. Erika Wong, who says she cared for Anthony Levandowski’s two children from December 2016 to June 2017, filed a lawsuit in California this month accusing him of breaking a long list of employment laws. The complaint alleges the failure to pay wages, labor and health code violations... In her complaint, Wong alleges that Levandowski was paying a Tesla engineer for updates on its electric truck program, selling microchips abroad, and creating new startups using stolen trade secrets. Her complaint also describes Levandowski reacting to the arrival of the Waymo lawsuit against Uber, strategizing with then-Uber CEO Travis Kalanick, and discussing fleeing to Canada to escape prosecution... Levandowski’s outside dealings while employed at Google and Uber have been central themes in Waymo’s trade secrets case. Waymo says that Levandowski took 14,000 technical files related to laser-ranging lidar and other self-driving technologies with him when he left Google to work at Uber..."

Is this a corporation or organized crime? It seems difficult to tell the difference. What do you think?


Report: Air Travel Globally During 2017 Was The Safest Year On Record

The Independent UK newspaper reported:

"The Dutch-based aviation consultancy, To70, has released its Civil Aviation Safety Review for 2017. It reports only two fatal accidents, both involving small turbo-prop aircraft, with a total of 13 lives lost. No jets crashed in passenger service anywhere in the world... The chances of a plane being involved in a fatal accident is now one in 16 million, according to the lead researcher, Adrian Young... The report warns that electronic devices in checked-in bags pose a growing potential danger: “The increasing use of lithium-ion batteries in electronics creates a fire risk on board aeroplanes as such batteries are difficult to extinguish if they catch fire... The UK has the best air-safety record of any major country. No fatal accidents involving a British airline have happened since the 1980s. The last was on 10 January 1989... In contrast, sub-Saharan Africa has an accident rate 44 per cent worse than the global average, according to the International Air Transport Association (IATA)..."

Read the full 2017 aviation safety report by To70. Below is a chart from the report.

Accident Data Chart from To70 Air Safety Review for 2017.


Report: Several Impacts From Technology Changes Within The Financial Services Industry

For better or worse, the type of smart device you use can identify you in ways you may not expect. First, a report by London-based Privacy International highlighted the changes within the financial services industry:

"Financial services are changing, with technology being a key driver. It is affecting the nature of financial services from credit and lending through to insurance and even the future of money itself. The field known as “fintech” is where the attention and investment is flowing. Within it, new sources of data are being used by existing institutions and new entrants. They are using new forms of data analysis. These changes are significant to this sector and the lives of the people it serves. We are seeing dramatic changes in the ways that financial products make decisions. The nature of the decision-making is changing, transforming the products in the market and impacting on end results and bottom lines. However, this also means that treatment of individuals will change. This changing terrain of finance has implications for human rights, privacy and identity... Data that people would consider as having nothing to do with the financial sphere, such as their text-messages, is being used at an increasing rate by the financial sector...  Yet protections are weak or absent... It is essential that these innovations are subject to scrutiny... Fintech covers a broad array of sectors and technologies. A non-exhaustive list includes:

  • Alternative credit scoring (new data sources for credit scoring)
  • Payments (new ways of paying for goods and services that often have implications for the data generated)
  • Insurtech (the use of technology in the insurance sector)
  • Regtech (the use of technology to meet regulatory requirements)."

"Similarly, a breadth of technologies are used in the sector, including: Artificial Intelligence; Blockchain; the Internet of Things; Telematics and connected cars..."

While the study focused upon India and Kenya, it has implications for consumers worldwide. More observations and concerns:

"Social media is another source of data for companies in the fintech space. However, decisions are made not on just on the content of posts, but rather social media is being used in other ways: to authenticate customers via facial recognition, for instance... blockchain, or distributed ledger technology, is still best known for cryptocurrencies like BitCoin. However, the technology is being used more broadly, such as the World Bank-backed initiative in Kenya for blockchain-backed bonds10. Yet it is also used in other fields, like the push in digital identities11. A controversial example of this was a very small-scale scheme in the UK to pay benefits using blockchain technology, via an app developed by the fintech GovCoin12 (since renamed DISC). The trial raised concerns, with the BBC reporting a former member of the Government Digital Service describing this as "a potentially efficient way for Department of Work and Pensions to restrict, audit and control exactly what each benefits payment is actually spent on, without the government being perceived as a big brother13..."

Many consumers know that you can buy a wide variety of internet-connected devices for your home. That includes both devices you'd expect (e.g., televisions, printers, smart speakers and assistants, security systems, door locks and cameras, utility meters, hot water heaters, thermostats, refrigerators, robotic vacuum cleaners, lawn mowers) and devices you might not expect (e.g., sex toys, smart watches for children, mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins). Add your car or truck to the list:

"With an increasing number of sensors being built into cars, they are increasingly “connected” and communicating with actors including manufacturers, insurers and other vehicles15. Insurers are making use of this data to make decisions about the pricing of insurance, looking for features like sharp acceleration and braking and time of day16. This raises privacy concerns: movements can be tracked, and much about the driver’s life derived from their car use patterns..."

And, there are hidden prices for the convenience of making payments with your favorite smart device:

"The payments sector is a key area of growth in the fintech sector: in 2016, this sector received 40% of the total investment in fintech22. Transactions paid by most electronic means can be tracked, even those in physical shops. In the US, Google has access to 70% of credit and debit card transactions—through Google’s "third-party partnerships", the details of which have not been confirmed23. The growth of alternatives to cash can be seen all over the world... There is a concerted effort against cash from elements of the development community... A disturbing aspect of the cashless debate is the emphasis on the immorality of cash—and, by extension, the immorality of anonymity. A UK Treasury minister, in 2012, said that paying tradesman by cash was "morally wrong"26, as it facilitated tax avoidance... MasterCard states: "Contrary to transactions made with a MasterCard product, the anonymity of digital currency transactions enables any party to facilitate the purchase of illegal goods or services; to launder money or finance terrorism; and to pursue other activity that introduces consumer and social harm without detection by regulatory or police authority."27"

The report cited a loss of control by consumers over their personal information. Going forward, the report included general and actor-specific recommendations. General recommendations:

  • "Protecting the human right to privacy should be an essential element of fintech.
  • Current national and international privacy regulations should be applicable to fintech.
  • Customers should be at the centre of fintech, not their product.
  • Fintech is not a single technology or business model. Any attempt to implement or regulate fintech should take these differences into account, and be based on the type of activities they perform, rather than the type of institutions involved."

Want to learn more? Follow Privacy International on Facebook, on Twitter, or read about 10 ways of "Invisible Manipulation" of consumers.


German Regulator Bans Smartwatches For Children

VTech Kidizoom DX smartwatch for children. Parents: considering a smartwatch for your children or grandchildren? Consider the privacy implications first. Bleeping Computer reported on Friday:

"Germany's Federal Network Agency (Bundesnetzagentur), the country's telecommunications agency, has banned the sale of children's smartwatches after it classified such devices as "prohibited listening devices." The ban was announced earlier today... parents are using their children's smartwatches to listen to teachers in the classroom. Recording or listening to private conversations is against the law in Germany without the permission of all recorded persons."

Some smartwatches are designed for children as young as four years of age. Several brands are available at online retailers, such as Amazon and Best Buy.

Why the ban? Gizmodo explained:

"Saying the technology more closely resembles a “spying device” than a toy... Last month, the European Consumer Organization (BEUC) warned that smartwatches marketed to kids were a serious threat to children’s privacy. A report published by the Norwegian Consumer Council in mid-October revealed serious flaws in several of the devices that could easily allow hackers to seize control. "

Clearly, this is another opportunity for parents to carefully research and consider smart device purchases for their family, to teach their children about privacy, and to not record persons without their permission.


Security Experts: Massive Botnet Forming. A 'Botnet Storm' Coming

Online security experts have detected a massive botnet -- a network of hijacked "zombie" devices -- forming. Its operator and purpose are both unknown. Check Point Software Technologies, a cyber security firm, warned in a blog post that its researchers:

"... had discovered of a brand new Botnet evolving and recruiting IoT devices at a far greater pace and with more potential damage than the Mirai botnet of 2016... Ominous signs were first picked up via Check Point’s Intrusion Prevention System (IPS) in the last few days of September. An increasing number of attempts were being made by hackers to exploit a combination of vulnerabilities found in various IoT devices.

With each passing day the malware was evolving to exploit an increasing number of vulnerabilities in Wireless IP Camera devices such as GoAhead, D-Link, TP-Link, AVTECH, NETGEAR, MikroTik, Linksys, Synology and others..."

Reportedly, the botnet has been named either "Reaper" or "IoTroop." The McClatchy news wire reported:

"A Chinese cybersecurity firm, Qihoo 360, says the botnet is swelling by 10,000 devices a day..."

Criminals use malware or computer viruses to add weakly protected or insecure Internet-connected devices (commonly referred to as the Internet of Things, or IoT) in homes and businesses to the botnet. Then, criminals use botnets to overwhelm a targeted website with page requests. This type of attack, called a Distributed Denial of Service (DDoS) attack, prevents valid users from accessing the targeted site, knocking it offline. If the attack is large enough, it can disable large portions of the Internet.
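Some back-of-the-envelope arithmetic shows why such attacks succeed. The per-device request rate and the site capacity below are assumptions for illustration; the device count is the size later claimed for Mirai.

    # Why a large botnet overwhelms a site: rough, illustrative numbers.
    site_capacity_rps = 10_000   # requests/second the site can serve (assumed)
    bots = 380_000               # devices, the size claimed for Mirai
    per_bot_rps = 10             # modest request rate per device (assumed)

    attack_rps = bots * per_bot_rps
    print(attack_rps)                      # -> 3800000 requests/second
    print(attack_rps / site_capacity_rps)  # -> 380x the site's capacity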

A version of the attack could also include a ransom demand, where the criminals will stop the attack only after a large cash payment by the targeted company or website. With multiple sites targeted, either version of cyber attack could have huge, negative impacts upon businesses and users.

How bad was the Mirai botnet? According to the US-CERT unit within the U.S. Department of Homeland Security:

"On September 20, 2016, Brian Krebs’ security blog was targeted by a massive DDoS attack, one of the largest on record... The Mirai malware continuously scans the Internet for vulnerable IoT devices, which are then infected and used in botnet attacks. The Mirai bot uses a short list of 62 common default usernames and passwords to scan for vulnerable devices... The purported Mirai author claimed that over 380,000 IoT devices were enslaved by the Mirai malware in the attack..."

Wired reported last year that after the attack on Krebs' blog, the Mirai botnet:

"... managed to make much of the internet unavailable for millions of people by overwhelming Dyn, a company that provides a significant portion of the US internet's backbone... Mirai disrupted internet service for more than 900,000 Deutsche Telekom customers in Germany, and infected almost 2,400 TalkTalk routers in the UK. This week, researchers published evidence that 80 models of Sony cameras are vulnerable to a Mirai takeover..."

The Wired report also explained the difficulty with identifying and cleaning infected devices:

"One reason Mirai is so difficult to contain is that it lurks on devices, and generally doesn't noticeably affect their performance. There's no reason the average user would ever think that their webcam—or more likely, a small business's—is potentially part of an active botnet. And even if it were, there's not much they could do about it, having no direct way to interface with the infected product."

If this seems scary, it is. The coming botnet storm has the potential to do lots of damage.

So, a word to the wise. Experts advise consumers to:

  • Disconnect the device from your network and reboot it before re-connecting it to the internet;
  • Buy internet-connected devices that support security software updates;
  • Change the passwords on your devices from the defaults to strong passwords (a quick sketch for generating one appears below);
  • Update the operating system (OS) software on your devices with security patches as soon as they are available;
  • Keep the anti-virus software on your devices current; and
  • Regularly back up the data on your devices.
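For changing default passwords to strong ones (the third bullet above), a strong random password is easy to generate with Python's standard library:

    # Generate a 20-character random password with the secrets module,
    # which (unlike the random module) is designed for security use.
    import secrets
    import string

    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    print(password)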

US-CERT also advised consumers to:

"Disable Universal Plug and Play (UPnP) on routers unless absolutely necessary. Purchase IoT devices from companies with a reputation for providing secure devices... Understand the capabilities of any medical devices intended for at-home use. If the device transmits data or can be operated remotely, it has the potential to be infected."


Equifax Reported 15.2 Million Records Of U.K. Persons Exposed

Yesterday, Equifax's United Kingdom (UK) unit issued a press release about the credit reporting agency's massive data breach and the number of breach victims. A portion of the statement:

"It has always been Equifax’s intention to write to those consumers whose information had been illegally compromised, but it would have been inappropriate and irresponsible of us to do so before we had absolute clarity on what data had been accessed. Following the completion of an independent investigation into the attack, and with agreement from appropriate investigatory authorities, Equifax has begun corresponding with affected consumers.

We would like to take this opportunity to emphasize that Equifax correspondence will never ask consumers for money or cite personal details to seek financial information, and if they receive such correspondence they should not respond. For security reasons, we will not be making any outbound telephone calls to consumers. However, customers can call our Freephone number on 0800 587 1584 for more information.

Today Equifax can confirm that a file containing 15.2m UK records dating from between 2011 and 2016 was attacked in this incident. Regrettably this file contained data relating to actual consumers as well as sizeable test data-sets, duplicates and spurious fields... we have been able to place consumers into specific risk categories and define the services to offer them in order to protect against those risks and send letters to offer them Equifax and third-party safeguards with instructions on how to get started. This work has enabled us to confirm that we will need to contact 693,665 consumers by post... The balance of the 14.5m records potentially compromised may contain the name and date of birth of certain UK consumers. Whilst this does not introduce any significant risk to these people Equifax is sorry that this data may have been accessed."

Below is the tabular information of risk categories from the Equifax UK announcement:

Consumer groups and remedial action:

  • 12,086 consumers who had an email address associated with their Equifax.co.uk account in 2014 accessed;
  • 14,961 consumers who had portions of their Equifax.co.uk membership details (such as username, password, secret questions and answers, and partial credit card details) from 2014 accessed; and
  • 29,188 consumers who had their driving license number accessed.

Remedial action for the three groups above: Equifax will offer Equifax Protect for free. This is an identity protection service which monitors personal data. Products and services from third-party organizations will also be offered at no cost to consumers. In addition to these services, further information will be outlined in the correspondence.

  • 637,430 consumers who had their phone numbers accessed.

Remedial action: these consumers will be offered a leading identity monitoring service for free.

Some observations seem warranted.

First, the announcement was vague about whether the 15.2 million U.K. persons affected were included in the prior breach total, or in addition to it. Second, the U.K. unit will send written breach notices to all affected consumers via postal mail, while the U.S. unit refused. The U.K. unit did the right thing: its users aren't confused by, and don't have to access, a hastily built site to see if they were affected.

Third, given the data elements stolen, some U.K. breach victims are vulnerable to the same additional frauds and threats as breach victims in the USA.

Kudos to the Equifax U.K. unit for the postal breach notices and for clearly stating the above risk categories.


Equifax: 2.5 Million More Persons Affected By Massive Data Breach

Equifax disclosed on Monday, October 2, that 2.5 million more persons than originally announced were affected by its massive data breach earlier this year. According to the Equifax breach website:

"... cybersecurity firm Mandiant has completed the forensic portion of its investigation of the cybersecurity incident disclosed on September 7 to finalize the consumers potentially impacted... The completed review determined that approximately 2.5 million additional U.S. consumers were potentially impacted, for a total of 145.5 million. Mandiant did not identify any evidence of additional or new attacker activity or any access to new databases or tables. Instead, this additional population of consumers was confirmed during Mandiant’s completion of the remaining investigative tasks and quality assurance procedures built into the investigative process."

The September breach announcement said that persons outside the United States may have been affected. The October 2nd update addressed that, too:

"The completed review also has concluded that there is no evidence the attackers accessed databases located outside of the United States. With respect to potentially impacted Canadian citizens, the company previously had stated that there may have been up to 100,000 Canadian citizens impacted... The completed review subsequently determined that personal information of approximately 8,000 Canadian consumers was impacted. In addition, it also was determined that some of the consumers with affected credit cards announced in the company’s initial statement are Canadian. The company will mail written notice to all of the potentially impacted Canadian citizens."

So, things are worse than originally announced in September: more United States citizens affected, fewer Canadian citizens affected overall but more Canadians' credit card information exposed, and we still don't know the number of United Kingdom residents affected:

"The forensic investigation related to United Kingdom consumers has been completed and the resulting information is now being analyzed in the United Kingdom. Equifax is continuing discussions with regulators in the United Kingdom regarding the scope of the company’s consumer notifications...

And, there's this statement by Paulino do Rego Barros, Jr., the newly appointed interim CEO (after former CEO Richard Smith resigned):

"... As this important phase of our work is now completed, we continue to take numerous steps to review and enhance our cybersecurity practices. We also continue to work closely with our internal team and outside advisors to implement and accelerate long-term security improvements..."

To review? That means Equifax has not yet finished the job of securing its systems and websites: applying fixes based upon how the attackers broke in, detecting attacks earlier, and preventing future breaches. As bad as this sounds, the reality is probably worse.

After testimony before Congress by former Equifax CEO Richard Smith, Wired documented "six fresh horrors" about the breach and the leisurely approach by the credit reporting agency's executives. First, this about the former CEO:

"... during Tuesday's hearing, former CEO Smith added that he first heard about "suspicious activity" in a customer-dispute portal, where Equifax tracks customer complaints and efforts to correct mistakes in their credit reports, on July 31. He moved to hire cybersecurity experts from the law firm King & Spalding to start investigating the issue on August 2. Smith claimed that, at that time, there was no indication that any customer's personally identifying information had been compromised. As it turns out, after repeated questions from lawmakers, Smith admitted he never asked at the time whether PII being affected was even a possibility. Smith further testified that he didn't ask for a briefing about the "suspicious activity" until August 15, almost two weeks after the special investigation began and 18 days after the initial red flag."

Didn't ask about PII? Geez! PII is the set of data elements containing the most sensitive information about consumers; collecting and protecting it is the core business of a credit reporting agency. Waited two weeks for a briefing? Not good either. And that is a most generous description, since some experts question whether the breach actually started in March -- about four months before the July event.

Wired reported the following about Smith's Congressional testimony and the March breach:

"Attackers initially got into the affected customer-dispute portal through a vulnerability in the Apache Struts platform, an open-source web application service popular with corporate clients. Apache disclosed and patched the relevant vulnerability on March 6... Smith said there are two reasons the customer-dispute portal didn't receive that patch, known to be critical, in time to prevent the breach. The first excuse Smith gave was "human error." He says there was a particular (unnamed) individual who knew that the portal needed to be patched but failed to notify the appropriate IT team. Second, Smith blamed a scanning system used to spot this sort of oversight that did not identify the customer-dispute portal as vulnerable. Smith said forensic investigators are still looking into why the scanner failed."

Geez! Sounds like a managerial failure, too. Nobody followed up with the unnamed individual responsible for patching the portal? And Equifax executives took a leisurely (and perhaps lackadaisical) approach to protecting sensitive information about consumers:

"When asked by representative Adam Kinzinger of Illinois about what data Equifax encrypts in its systems, Smith admitted that the data compromised in the customer-dispute portal was stored in plaintext and would have been easily readable by attackers... It’s unclear exactly what of the pilfered data resided in the portal versus other parts of Equifax’s system, but it turns out that also didn’t matter much, given Equifax's attitude toward encryption overall. “OK, so this wasn’t [encrypted], but your core is?” Kinzinger asked. “Some, not all," Smith replied. "There are varying levels of security techniques that the team deploys in different environments around the business."

Geez! So, we now have confirmation that the "core" information -- the most sensitive data about consumers -- in Equifax's databases is only partially encrypted.
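Field-level encryption at rest is not exotic. Below is a minimal sketch using the open-source Python "cryptography" library; key management, the genuinely hard part in production, is omitted here.

    # Encrypt a sensitive field before it lands in the database.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in production, fetch from a key vault
    fernet = Fernet(key)

    stored = fernet.encrypt(b"123-45-6789")   # ciphertext for the database
    print(stored)                             # unreadable without the key
    print(fernet.decrypt(stored).decode())    # -> 123-45-6789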

Context matters. In January of this year, the Consumer Financial Protection Bureau (CFPB) took punitive action against TransUnion and Equifax for deceptive marketing practices involving credit scores and related subscription services. That action included $23.1 million in fines and penalties.

Thanks to the members of Congress for asking the tough questions. No thanks to Equifax executives for taking lackadaisical approaches to data security. (TransUnion, Innovis, and Experian executives: are you watching? Learning what mistakes not to repeat?) Equifax has lost my trust.

Until Equifax hardens its systems (I prefer NSA-level hardness), it shouldn't be entrusted with consumers' sensitive personal and payment information. Consumers should be able to totally opt out of credit reporting agencies that fail with data security. This would allow the marketplace to govern things and stop the corporate socialism benefiting credit reporting agencies.

What are your opinions?

[Editor's note: this post was amended on October 7 with information about the CFPB fines.]


Experts Call For Ban of Killer Robotic Weapons

116 robotics and artificial intelligence experts from 26 countries sent a letter to the United Nations (UN) warning against the deployment of lethal autonomous weapons. The Guardian reported:

"The UN recently voted to begin formal discussions on such weapons which include drones, tanks and automated machine guns... In their letter, the [experts] warn the review conference of the convention on conventional weapons that this arms race threatens to usher in the “third revolution in warfare” after gunpowder and nuclear arms... The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, has the backing of high-profile figures in the robotics field and strongly stresses the need for urgent action..."

The letter stated in part:

"Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

"We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."

This is not science fiction. Autonomous weapons are already deployed:

"Samsung’s SGR-A1 sentry gun, which is reportedly technically capable of firing autonomously but is disputed whether it is deployed as such, is in use along the South Korean border of the 2.5m-wide Korean Demilitarized Zone. The fixed-place sentry gun, developed on behalf of the South Korean government, was the first of its kind with an autonomous system capable of performing surveillance, voice-recognition, tracking and firing with mounted machine gun or grenade launcher... The UK’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and incorporating full autonomy..."

Ban, indeed. Your thoughts? Opinions? Reaction?


Russian Malware Targets Hotels In Europe And Middle East

FireEye, a security firm, has issued a warning about malware targeting the hotel industry within both Europe and the Middle East. The warning:

"... a campaign targeting the hospitality sector is attributed to Russian actor APT28. We believe this activity, which dates back to at least July 2017, was intended to target travelers to hotels throughout Europe and the Middle East. The actor has used several notable techniques in these incidents such as sniffing passwords from Wi-Fi traffic... Once inside the network of a hospitality company, APT28 sought out machines that controlled both guest and internal Wi-Fi networks... in a separate incident that occurred in Fall 2016, APT28 gained initial access to a victim’s network via credentials likely stolen from a hotel Wi-Fi network..."

The key takeaway: criminals use malware to infiltrate hotel WiFi networks in order to steal the login credentials (IDs, passwords) of traveling business and government executives. The criminals know that executives conduct business while traveling, logging into their employers' computer networks. Stealing those login credentials gives criminals access to the computer networks operated by corporations and governments. Once inside those networks, the criminals can steal whatever of value they can access: proprietary information, trade secrets, customer lists, executives' and organizations' payment information, money, and more.

A variety of organizations in both the public and private sectors use software by FireEye to detect intrusions into their computer networks by unauthorized persons. FireEye software detected the breach at Target (alerts which Target employees later ignored). Security researchers at FireEye discovered vulnerabilities in HTC smartphones which failed to adequately protect users' fingerprint data for unlocking phones.

Security warnings earlier this year mentioned malware by the APT28 group targeting Apple Mac users. The latest warning by FireEye also described the 2016 hack in more detail:

"... the victim was compromised after connecting to a hotel Wi-Fi network. Twelve hours after the victim initially connected to the publicly available Wi-Fi network, APT28 logged into the machine with stolen credentials. These 12 hours could have been used to crack a hashed password offline. After successfully accessing the machine, the attacker deployed tools on the machine, spread laterally through the victim's network, and accessed the victim's OWA account. The login originated from a computer on the same subnet, indicating that the attacker machine was physically close to the victim and on the same Wi-Fi network..."

So, travelers aren't safe even when they use strong passwords. How should travelers protect themselves and their sensitive information? FireEye warned:

"Travelers must be aware of the threats posed when traveling – especially to foreign countries – and take extra precautions to secure their systems and data. Publicly accessible Wi-Fi networks present a significant threat and should be avoided whenever possible."


Wisconsin Employer To Offer Its Employees ID Microchip Implants

Microchip implant to be used by Three Square Market. A Wisconsin company said it will offer its employees, starting August 1, the option of having microchip identification implants. The company, Three Square Market (32M), will allow employees with the microchip implants to make purchases in the employee break room, open locked doors, log in to computers, use the copy machine, and perform related office tasks.

Each microchip, about the size of a grain of rice (see photo on the right), would be implanted under the skin in an employee's hand. The microchips use radio-frequency identification (RFID), a technology that has existed for a while and has been used in a variety of devices: employee badges, payment cards, passports, package tracking, and more. Each microchip electronically stores identification information about the user and uses near-field communication (NFC). Instead of swiping a payment card, employee badge, or smartphone, the employee can unlock a device by waving their hand near a chip reader attached to that device. Purchases in the employee break room can be made by waving a hand near a self-serve kiosk.
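Conceptually, the reader-side check is simple. Here is a minimal, hypothetical sketch; the enrolled UIDs and the flow are illustrative, not 32M's or BioHax's actual system.

    # Hypothetical reader-side logic for an NFC/RFID unlock.
    AUTHORIZED_UIDS = {"04A1B2C3D4E5F6"}  # chips enrolled for this door

    def on_tag_scanned(uid):
        """Unlock if the scanned chip's UID is enrolled."""
        if uid in AUTHORIZED_UIDS:
            print(f"UID {uid}: access granted")
            return True
        print(f"UID {uid}: access denied")
        return False

    on_tag_scanned("04A1B2C3D4E5F6")  # granted
    on_tag_scanned("DEADBEEF000000")  # denied

Note what the sketch implies: if a chip merely emits a static identifier, anyone with a nearby reader can skim and replay it. That cloning risk is discussed below.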

Reportedly, 32M would be the first employer in the USA to microchip its employees. CBS News reported in April about Epicenter, a startup based in Sweden:

"The [implant] injections have become so popular that workers at Epicenter hold parties for those willing to get implanted... Epicenter, which is home to more than 100 companies and some 2,000 workers, began implanting workers in January 2015. Now, about 150 workers have [chip implants]... as with most new technologies, it raises security and privacy issues. While biologically safe, the data generated by the chips can show how often an employee comes to work or what they buy. Unlike company swipe cards or smartphones, which can generate the same data, a person cannot easily separate themselves from the chip."

In an interview with Saint Paul-based KSTP, Todd Westby, the Chief Executive Officer at 32M described the optional microchip program as:

"... the next thing that's inevitably going to happen, and we want to be a part of it..."

To implement its microchip implant program, 32M has partnered with Sweden-based BioHax International. Westby explained in a company announcement:

"Eventually, this technology will become standardized allowing you to use this as your passport, public transit, all purchasing opportunities... We see chip technology as the next evolution in payment systems, much like micro markets have steadily replaced vending machines... it is important that 32M continues leading the way with advancements such as chip implants..."

"Mico markets" are small stores located within employers' offices; typically the break rooms where employees relax and/or purchase food. 32M estimates 20,000 micro markets nationwide in the USA. According to its website, the company serves markets in North America, Europe, Asia, and Australia. 32M believes that micro markets, aided by chip implants and self-serve kiosk, offer employers greater employee productivity with lower costs.

Yes, the chip implants are similar to the chip implants many pet owners have inserted to identify their dogs or cats. 32M expects 50 employees to enroll in its chip implant program.

Reportedly, companies in Belgium and Sweden already use chip implants to identify employees. 32M's announcement did not list the data elements each employee's microchip would contain, nor whether the data in the microchips would be encrypted. Historically, unencrypted data stored by RFID technology has been vulnerable to skimming attacks by criminals using portable or hand-held RFID readers. Stolen information could be used to clone devices and commit identity theft and fraud.

Some states, such as Washington and California, passed anti-skimming laws. Prior government-industry workshops about RFID usage focused upon consumer products, and not employment concerns. Earlier this year, lawmakers in Nevada introduced legislation making it illegal to require employees to accept microchip implants.

A BBC News reporter discussed in 2015 what it is like to be "chipped." And as CBS News reported:

"... hackers could conceivably gain huge swathes of information from embedded microchips. The ethical dilemmas will become bigger the more sophisticated the microchips become. The data that you could possibly get from a chip that is embedded in your body is a lot different from the data that you can get from a smartphone..."

Example: if employers install RFID readers so employees can unlock bathrooms, employers can track when, where, how often, and for how long employees use bathrooms. How does that sound?

Hopefully, future announcements by 32M will discuss the security features and protections. What are your opinions? Are you willing to be an office cyborg? Should employees have a choice, or should employers be able to force their employees to accept microchip implants? How do you feel about your employer tracking what you eat and drink via purchases with your chip implant?

Many employers publish social media policies covering what employees should (shouldn't, or can't) publish online. Should employers have microchip implant policies, too? If so, what should these policies state?


Microsoft Fights Foreign Cyber Criminals And Spies

The Daily Beast explained how Microsoft fights cyber criminals and spies, some with alleged ties to the Kremlin:

"Last year attorneys for the software maker quietly sued the hacker group known as Fancy Bear in a federal court outside Washington DC, accusing it of computer intrusion, cybersquatting, and infringing on Microsoft’s trademarks. The action, though, is not about dragging the hackers into court. The lawsuit is a tool for Microsoft to target what it calls “the most vulnerable point” in Fancy Bear’s espionage operations: the command-and-control servers the hackers use to covertly direct malware on victim computers. These servers can be thought of as the spymasters in Russia's cyber espionage, waiting patiently for contact from their malware agents in the field, then issuing encrypted instructions and accepting stolen documents.

Since August, Microsoft has used the lawsuit to wrest control of 70 different command-and-control points from Fancy Bear. The company’s approach is indirect, but effective. Rather than getting physical custody of the servers, which Fancy Bear rents from data centers around the world, Microsoft has been taking over the Internet domain names that route to them. These are addresses like “livemicrosoft[.]net” or “rsshotmail[.]com” that Fancy Bear registers under aliases for about $10 each. Once under Microsoft’s control, the domains get redirected from Russia’s servers to the company’s, cutting off the hackers from their victims, and giving Microsoft an omniscient view of that server’s network of automated spies."

Kudos to Microsoft and its attorneys.


Facebook's Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children

[Editor's Note: today's guest post, by the reporters at ProPublica, explores how social networking firms practice censorship to combat violence and hate speech, plus related practices such as "geo-blocking." It is reprinted with permission.]

By Julia Angwin, ProPublica, and Hannes Grassegger, special to ProPublica

In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of "radicalized" Muslims. "Hunt them, identify them, and kill them," declared U.S. Rep. Clay Higgins, a Louisiana Republican. "Kill them all. For the sake of all that is good and righteous. Kill them all."

Higgins' plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

"All white people are racist. Start from this reference point, or you've already failed," Delgado wrote. The post was removed and her Facebook account was disabled for seven days.

A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook's censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins' incitement to violence passed muster because it targeted a specific sub-group of Muslims -- those that are "radicalized" -- while Delgado's post was deleted for attacking whites in general.

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn't be allowed, in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of "fake news" -- fabricated stories that circulated on Facebook like "Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement" -- and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 "Arab Spring" with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company's hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of "violence to resist occupation of an internationally recognized state." The company's workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.

One document trains content reviewers on how to apply the company's global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at "protected categories" -- based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about "subsets" of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. (The exact rules are in the slide show below.)
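In code, the distinction reduces to a single set test. Here is a simplified sketch; the trait labels are illustrative, and Facebook's actual rules are more detailed.

    # Simplified sketch of the "subset" rule: an attacked group is
    # protected only if every one of its traits is a protected category.
    PROTECTED = {"race", "sex", "gender identity", "religious affiliation",
                 "national origin", "ethnicity", "sexual orientation",
                 "serious disability/disease"}

    def is_protected_group(traits):
        return set(traits) <= PROTECTED

    print(is_protected_group({"race", "sex"}))        # white men: True
    print(is_protected_group({"sex", "occupation"}))  # female drivers: False
    print(is_protected_group({"race", "age"}))        # black children: False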

The Facebook Rules

Facebook has used these rules to train its "content reviewers" to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions. ProPublica recreated the slides.

Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook's algorithm is designed to defend all races and genders equally.

"Sadly," the rules are "incorporating this color-blindness idea which is not in the spirit of why we have equal protection," said Danielle Citron, a law professor and expert on information privacy at the University of Maryland. This approach, she added, will "protect the people who least need it and take it away from those who really need it."

But Facebook says its goal is different -- to apply consistent standards worldwide. "The policies do not always lead to perfect outcomes," said Monika Bickert, head of global policy management at Facebook. "That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share."

Facebook's rules constitute a legal world of their own. They stand in sharp contrast to the United States' First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company's hate speech algorithm. But they also differ -- for example, in permitting postings that deny the Holocaust -- from more restrictive European standards.

The company has long had programs to remove obviously offensive material like child pornography from its stream of images and commentary. Recent articles in the Guardian and Süddeutsche Zeitung have detailed the difficult choices that Facebook faces regarding whether to delete posts containing graphic violence, child abuse, revenge porn and self-mutilation.

The challenge of policing political expression is even more complex. The documents reviewed by ProPublica indicate, for example, that Donald Trump's posts about his campaign proposal to ban Muslim immigration to the United States violated the company's written policies against "calls for exclusion" of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump's statements from its policies at the order of Mark Zuckerberg, the company's founder and chief executive.

The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.

Users whose posts are removed are not usually told what rule they have broken, and they cannot generally appeal Facebook's decision. Appeals are currently only available to people whose profile, group or page is removed.

The company has begun exploring adding an appeals process for people who have individual pieces of content deleted, according to Bickert. "I'll be the first to say that we're not perfect every time," she said.

Facebook is not required by U.S. law to censor content. A 1996 federal law gave most tech companies, including Facebook, legal immunity for the content users post on their services. The law, section 230 of the Telecommunications Act, was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.

The law freed up online publishers to host online forums without having to legally vet each piece of content before posting it, the way that a news outlet would evaluate an article before publishing it. But early tech companies soon realized that they still needed to supervise their chat rooms to prevent bullying and abuse that could drive away users.

America Online convinced thousands of volunteers to police its chat rooms in exchange for free access to its service. But as more of the world connected to the internet, the job of policing became more difficult and companies started hiring workers to focus on it exclusively. Thus the job of content moderator -- now often called content reviewer -- was born.

In 2004, attorney Nicole Wong joined Google and persuaded the company to hire its first-ever team of reviewers, who responded to complaints and reported to the legal department. Google needed "a rational set of policies and people who were trained to handle requests," for its online forum called Groups, she said.

Google's purchase of YouTube in 2006 made deciding what content was appropriate even more urgent. "Because it was visual, it was universal," Wong said.

While Google wanted to be as permissive as possible, she said, it soon had to contend with controversies such as a video mocking the King of Thailand, which violated Thailand's laws against insulting the king. Wong visited Thailand and was impressed by the nation's reverence for its monarch, so she reluctantly agreed to block the video -- but only for computers located in Thailand.

Since then, selectively banning content by geography -- called "geo-blocking" -- has become a more common request from governments. "I don't love traveling this road of geo-blocking," Wong said, but "it's ended up being a decision that allows companies like Google to operate in a lot of different places."

For social networks like Facebook, however, geo-blocking is difficult because of the way posts are shared with friends across national boundaries. If Facebook geo-blocks a user's post, it would only appear in the news feeds of friends who live in countries where the geo-blocking prohibition doesn't apply. That can make international conversations frustrating, with bits of the exchange hidden from some participants.
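A minimal sketch of that feed-assembly behavior; illustrative only, not Facebook's implementation.

    # Geo-blocking at feed time: each post carries the set of countries
    # where a legal restriction applies; viewers there never see it.
    posts = [
        {"id": 1, "author": "alice", "blocked_in": set()},
        {"id": 2, "author": "bob", "blocked_in": {"FR"}},
    ]

    def visible_feed(posts, viewer_country):
        return [p for p in posts if viewer_country not in p["blocked_in"]]

    print([p["id"] for p in visible_feed(posts, "FR")])  # [1]
    print([p["id"] for p in visible_feed(posts, "DE")])  # [1, 2]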

As a result, Facebook has long tried to avoid using geography-specific rules when possible, according to people familiar with the company's thinking. However, it does geo-block in some instances, such as when it complied with a request from France to restrict access within its borders to a photo taken after the Nov. 13, 2015, terrorist attack at the Bataclan concert hall in Paris.

Bickert said Facebook takes into consideration the laws in countries where it operates, but doesn't always remove content at a government's request. "If there is something that violates a country's law but does not violate our standards," Bickert said, "we look at who is making that request: Is it the appropriate authority? Then we check to see if it actually violates the law. Sometimes we will make that content unavailable in that country only."

Facebook's goal is to create global rules. "We want to make sure that people are able to communicate in a borderless way," Bickert said.

Founded in 2004, Facebook began as a social network for college students. As it spread beyond campus, Facebook began to use content moderation as a way to compete with the other leading social network of that era, MySpace.

MySpace had positioned itself as the nightclub of the social networking world, offering profile pages that users could decorate with online glitter, colorful layouts and streaming music. It didn't require members to provide their real names and was home to plenty of nude and scantily clad photographs. And it was being investigated by law-enforcement agents across the country who worried it was being used by sexual predators to prey on children. (In a settlement with 49 state attorneys general, MySpace later agreed to strengthen protections for younger users.)

By comparison, Facebook was the buttoned-down Ivy League social network -- all cool grays and blues. Real names and university affiliations were required. Chris Kelly, who joined Facebook in 2005 and was its first general counsel, said he wanted to make sure Facebook didn't end up in law enforcement's crosshairs, like MySpace.

"We were really aggressive about saying we are a no-nudity platform," he said.

The company also began to tackle hate speech. "We drew some difficult lines while I was there -- Holocaust denial being the most prominent," Kelly said. After an internal debate, the company decided to allow Holocaust denials but reaffirmed its ban on group-based bias, which included anti-Semitism. Since Holocaust denial and anti-Semitism frequently went together, he said, the perpetrators were often suspended regardless.

"I've always been a pragmatist on this stuff," said Kelly, who left Facebook in 2010. "Even if you take the most extreme First Amendment positions, there are still limits on speech."

By 2008, the company had begun expanding internationally but its censorship rulebook was still just a single page with a list of material to be excised, such as images of nudity and Hitler. "At the bottom of the page it said, 'Take down anything else that makes you feel uncomfortable,'" said Dave Willner, who joined Facebook's content team that year.

Willner, who reviewed about 15,000 photos a day, soon found the rules were not rigorous enough. He and some colleagues worked to develop a coherent philosophy underpinning the rules, while refining the rules themselves. Soon he was promoted to head the content policy team.

By the time he left Facebook in 2013, Willner had shepherded a 15,000-word rulebook that remains the basis for many of Facebook's content standards today.

"There is no path that makes people happy," Willner said. "All the rules are mildly upsetting." Because of the volume of decisions -- many millions per day -- the approach is "more utilitarian than we are used to in our justice system," he said. "It's fundamentally not rights-oriented."

Willner's then-boss, Jud Hoffman, who has since left Facebook, said that the rules were based on Facebook's mission of "making the world more open and connected." Openness implies a bias toward allowing people to write or post what they want, he said.

But Hoffman said the team also relied on the principle of harm articulated by John Stuart Mill, a 19th-century English political philosopher. It states "that the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others." That led to the development of Facebook's "credible threat" standard, which bans posts that describe specific actions that could threaten others, but allows threats that are not likely to be carried out.

Eventually, however, Hoffman said "we found that limiting it to physical harm wasn't sufficient, so we started exploring how free expression societies deal with this."

The rules developed considerable nuance. There is a ban against pictures of Pepe the Frog, a cartoon character often used by "alt-right" white supremacists to perpetrate racist memes, but swastikas are allowed under a rule that permits the "display [of] hate symbols for political messaging." In the documents examined by ProPublica, which are used to train content reviewers, this rule is illustrated with a picture of Facebook founder Mark Zuckerberg that has been manipulated to apply a swastika to his sleeve.

The documents state that Facebook relies, in part, on the U.S. State Department's list of designated terrorist organizations, which includes groups such as al-Qaida, the Taliban and Boko Haram. But not all groups deemed terrorist by one country or another are included: A recent investigation by the Pakistan newspaper Dawn found that 41 of the 64 terrorist groups banned in Pakistan were operational on Facebook.

There is also a secret list, referred to but not included in the documents, of groups designated as hate organizations that are banned from Facebook. That list apparently doesn't include many Holocaust denial and white supremacist sites that are up on Facebook to this day, such as a group called "Alt-Reich Nation." A member of that group was recently charged with murdering a black college student in Maryland.

As the rules have multiplied, so have exceptions to them. Facebook's decision not to protect subsets of protected groups arose because some subgroups such as "female drivers" didn't seem especially sensitive. The default position was to allow free speech, according to a person familiar with the decision-making.

After the wave of Syrian immigrants began arriving in Europe, Facebook added a special "quasi-protected" category for migrants, according to the documents. They are only protected against calls for violence and dehumanizing generalizations, but not against calls for exclusion and degrading generalizations that are not dehumanizing. So, according to one document, migrants can be referred to as "filthy" but not called "filth." They cannot be likened to filth or disease "when the comparison is in the noun form," the document explains.

Facebook also added an exception to its ban against advocating for anyone to be sent to a concentration camp. "Nazis should be sent to a concentration camp," is allowed, the documents state, because Nazis themselves are a hate group.

The rule against posts that support violent resistance against a foreign occupier was developed because "we didn't want to be in a position of deciding who is a freedom fighter," Willner said. Facebook has since dropped the provision and revised its definition of terrorism to include nongovernmental organizations that carry out premeditated violence "to achieve a political, religious or ideological aim," according to a person familiar with the rules.

The Facebook policy appears to have had repercussions in many of the at least two dozen disputed territories around the world. When Russia occupied Crimea in March 2014, many Ukrainians experienced a surge in Facebook banning posts and suspending profiles. Facebook's director of policy for the region, Thomas Myrup Kristensen, acknowledged at the time that it "found a small number of accounts where we had incorrectly removed content. In each case, this was due to language that appeared to be hate speech but was being used in an ironic way. In these cases, we have restored the content."

Katerina Zolotareva, 34, a Kiev-based Ukrainian working in communications, has been blocked so often that she runs four accounts under her name. Although she supported the "Euromaidan" protests in February 2014 that antagonized Russia, spurring its military intervention in Crimea, she doesn't believe that Facebook took sides in the conflict. "There is war in almost every field of Ukrainian life," she says, "and when war starts, it also starts on Facebook."

In Western Sahara, a disputed territory occupied by Morocco, a group of journalists called Equipe Media say their account was disabled by Facebook, their primary way to reach the outside world. They had to open a new account, which remains active.

"We feel we have never posted anything against any law," said Mohammed Mayarah, the group's general coordinator. "We are a group of media activists. We have the aim to break the Moroccan media blockade imposed since it invaded and occupied Western Sahara."

In Israel, which captured territory from its neighbors in a 1967 war and has occupied it since, Palestinian groups are blocked so often that they have their own hashtag, #FbCensorsPalestine, for it. Last year, for instance, Facebook blocked the accounts of several editors for two leading Palestinian media outlets from the West Bank -- Quds News Network and Sheebab News Agency. After a couple of days, Facebook apologized and un-blocked the journalists' accounts. Earlier this year, Facebook blocked the account of Fatah, the Palestinian Authority's ruling party -- then un-blocked it and apologized.

Last year India cracked down on protesters in Kashmir, shooting pellet guns at them and shutting off cellphone service. Local insurgents are seeking autonomy for Kashmir, which is also caught in a territorial tussle between India and Pakistan. Posts of Kashmir activists were being deleted, and members of a group called the Kashmir Solidarity Network found that all of their Facebook accounts had been blocked on the same day.

Ather Zia, a member of the network and a professor of anthropology at the University of Northern Colorado, said that Facebook restored her account without explanation after two weeks. "We do not trust Facebook any more," she said. "I use Facebook, but it's almost this idea that we will be able to create awareness but then we might not be on it for long."

The rules are one thing. How they're applied is another. Bickert said Facebook conducts weekly audits of every single content reviewer's work to ensure that its rules are being followed consistently. But critics say that reviewers, who have to decide on each post within seconds, may vary in both interpretation and vigilance.

Facebook users who don't mince words in criticizing racism and police killings of racial minorities say that their posts are often taken down. Two years ago, Stacey Patton, a journalism professor at historically black Morgan State University in Baltimore, posed a provocative question on her Facebook page. She asked why "it's not a crime when White freelance vigilantes and agents of 'the state' are serial killers of unarmed Black people, but when Black people kill each other then we are 'animals' or 'criminals.'"

Although it doesn't appear to violate Facebook's policies against hate speech, her post was immediately removed, and her account was disabled for three days. Facebook didn't tell her why. "My posts get deleted about once a month," said Patton, who often writes about racial issues. She said she also is frequently put in Facebook "jail" -- locked out of her account for a period of time after a posting that breaks the rules.

"It's such emotional violence," Patton said. "Particularly as a black person, we're always have these discussions about mass incarceration, and then here's this fiber-optic space where you can express yourself. Then you say something that some anonymous person doesn't like and then you're in 'jail.'"

Didi Delgado, whose post stating that "white people are racist" was deleted, has been banned from Facebook so often that she has set up an account on another service called Patreon, where she posts the content that Facebook suppressed. In May, she deplored the increasingly common Facebook censorship of black activists in an article for Medium titled "Mark Zuckerberg Hates Black People."

Facebook also locked out Leslie Mac, a Michigan resident who runs a service called SafetyPinBox where subscribers contribute financially to "the fight for black liberation," according to her site. Her offense was writing a post stating "White folks. When racism happens in public -- YOUR SILENCE IS VIOLENCE."

The post does not appear to violate Facebook's policies. Facebook apologized and restored her account after TechCrunch wrote an article about Mac's punishment. Since then, Mac has written many other outspoken posts. But, "I have not had a single peep from Facebook," she said, while "not a single one of my black female friends who write about race or social justice have not been banned."

"My takeaway from the whole thing is: If you get publicity, they clean it right up," Mac said. Even so, like most of her friends, she maintains a separate Facebook account in case her main account gets blocked again.

Negative publicity has spurred other Facebook turnabouts as well. Consider the example of the iconic news photograph of a young naked girl running from a napalm bomb during the Vietnam War. Kate Klonick, a Ph.D. candidate at Yale Law School who has spent two years studying censorship operations at tech companies, said the photo had likely been deleted by Facebook thousands of times for violating its ban on nudity.

But last year, Facebook reversed itself after Norway's leading newspaper published a front-page open letter to Zuckerberg accusing him of "abusing his power" by deleting the photo from the newspaper's Facebook account.

Klonick said that while she admires Facebook's dedication to policing content on its website, she fears it is evolving into a place where celebrities, world leaders and other important people "are disproportionately the people who have the power to update the rules."

In December 2015, a month after terrorist attacks in Paris killed 130 people, the European Union began pressuring tech companies to work harder to prevent the spread of violent extremism online.

After a year of negotiations, Facebook, Microsoft, Twitter and YouTube agreed to the European Union's hate speech code of conduct, which commits them to review and remove the majority of valid complaints about illegal content within 24 hours and to be audited by European regulators. The first audit, in December, found that the companies were only reviewing 40 percent of hate speech within 24 hours, and only removing 28 percent of it. Since then, the tech companies have shortened their response times to reports of hate speech and increased the amount of content they are deleting, prompting criticism from free-speech advocates that too much is being censored.

Now the German government is considering legislation that would allow social networks such as Facebook to be fined up to 50 million euros if they don't remove hate speech and fake news quickly enough. Facebook recently posted an article assuring German lawmakers that it is deleting about 15,000 hate speech posts a month. Worldwide, over the last two months, Facebook deleted about 66,000 hate speech posts per week, vice president Richard Allan said in a statement Tuesday on the company's site.

Among posts that Facebook didn't delete were Donald Trump's comments on Muslims. Days after the Paris attacks, Trump, then running for president, posted on Facebook "calling for a total and complete shutdown of Muslims entering the United States until our country's representatives can figure out what is going on."

Candidate Trump's posting -- which has come back to haunt him in court decisions voiding his proposed travel ban -- appeared to violate Facebook's rules against "calls for exclusion" of a protected religious group. Zuckerberg decided to allow it because it was part of the political discourse, according to people familiar with the situation.

However, one person close to Facebook's decision-making said Trump may also have benefited from the exception for sub-groups. A Muslim ban could be interpreted as being directed against a sub-group, Muslim immigrants, and thus might not qualify as hate speech against a protected category.

Hannes Grassegger is a reporter for Das Magazin and Reportagen Magazine based in Zurich.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Russian Cyber Attacks Against US Voting Systems Wider Than First Thought

Cyber attacks upon electoral systems in the United States were wider than originally thought, occurring in at least 39 states according to a Bloomberg report. That report described the online attacks in Illinois as an example:

"... investigators found evidence that cyber intruders tried to delete or alter voter data. The hackers accessed software designed to be used by poll workers on Election Day, and in at least one state accessed a campaign finance database. Details of the wave of attacks, in the summer and fall of 2016... In early July 2016, a contractor who works two or three days a week at the state board of elections detected unauthorized data leaving the network, according to Ken Menzel, general counsel for the Illinois board of elections. The hackers had gained access to the state’s voter database, which contained information such as names, dates of birth, genders, driver’s licenses and partial Social Security numbers on 15 million people, half of whom were active voters. As many as 90,000 records were ultimately compromised..."

Politicians have emphasized that the point of the disclosures isn't to embarrass any specific state, but to alert the public to past activities and to the ongoing threat. The Intercept reported:

"Russian military intelligence executed a cyberattack on at least one U.S. voting software supplier and sent spear-phishing emails to more than 100 local election officials just days before last November’s presidential election, according to a highly classified intelligence report obtained by The Intercept.

The top-secret National Security Agency document, which was provided anonymously to The Intercept and independently authenticated, analyzes intelligence very recently acquired by the agency about a months-long Russian intelligence cyber effort against elements of the U.S. election and voting infrastructure. The report, dated May 5, 2017, is the most detailed U.S. government account of Russian interference in the election that has yet come to light."

Spear-phishing is a tactic in which criminals send malware-laden e-mail messages to targeted individuals, whose names and demographic details may have been collected from social networking sites and other sources. The message uses those details to pose as valid e-mail from a coworker, business associate, or friend. When the target opens the attachment, their computer and network are often infected with malware that collects and transmits log-in credentials to the criminals, or that remotely takes over the target's computer and demands payment (e.g., ransomware). Stolen log-in credentials are how criminals break into online bank accounts and steal consumers' money.
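One classic tell of these campaigns is a sender domain that almost, but not quite, matches a brand the target trusts. Below is a minimal, purely illustrative Python sketch of that one heuristic -- the allow-list and addresses are hypothetical, and real mail filters also verify SPF/DKIM records and much more:

```python
import difflib

# Hypothetical allow-list: domains this organization expects mail from.
TRUSTED_DOMAINS = ["google.com", "vrsystems.com"]

def looks_like_spoof(sender_address: str) -> bool:
    """Flag senders whose domain nearly matches, but does not equal, a trusted domain."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: not a look-alike
    # A close-but-inexact match (e.g., an extra letter) is a classic spoofing tell.
    return bool(difflib.get_close_matches(domain, TRUSTED_DOMAINS, n=1, cutoff=0.8))

print(looks_like_spoof("security@gooogle.com"))  # True: one letter off from google.com
print(looks_like_spoof("security@google.com"))   # False: exact trusted domain
```

A filter this crude would miss most attacks -- including ones sent from compromised genuine accounts -- but it shows why look-alike domains, like the faux-Google address described below, fool readers who only glance at the sender line.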

The Intercept report explained how the elections systems hackers adopted this tactic:

"... the Russian plan was simple: pose as an e-voting vendor and trick local government employees into opening Microsoft Word documents invisibly tainted with potent malware that could give hackers full control over the infected computers. But in order to dupe the local officials, the hackers needed access to an election software vendor’s internal systems to put together a convincing disguise. So on August 24, 2016, the Russian hackers sent spoofed emails purporting to be from Google to employees of an unnamed U.S. election software company... The spear-phishing email contained a link directing the employees to a malicious, faux-Google website that would request their login credentials and then hand them over to the hackers. The NSA identified seven “potential victims” at the company. While malicious emails targeting three of the potential victims were rejected by an email server, at least one of the employee accounts was likely compromised, the agency concluded..."

Experts believe the voting equipment company targeted was VR Systems, based in Florida. Reportedly, its electronic voting services and equipment are used in eight states. VR Systems posted online a Frequently Asked Questions document (Adobe PDF) about the cyber attacks against elections systems:

"Recent reports indicate that cyber actors impersonated VR Systems and other elections companies. Cyber actors sent an email from a fake account to election officials in an unknown number of districts just days before the 2016 general election. The fraudulent email asked recipients to open an attachment, which would then infect their computer, providing a gateway for more mischief... Because the spear-phishing email did not originate from VR Systems, we do not know how many jurisdictions were potentially impacted. Many election offices report that they never received the email or it was caught by their spam filters before it could reach recipients. It is our understanding that all jurisdictions, including VR Systems customers, have been notified by law enforcement agencies if they were a target of this spear-phishing attack... In August, a small number of phishing emails were sent to VR Systems. These emails were captured by our security protocols and the threat was neutralized. No VR Systems employee’s email was compromised. This prevented the cyber actors from accessing a genuine VR Systems email account. As such, the cyber actors, as part of their late October spear-phishing attack, resorted to creating a fake account to use in that spear-phishing campaign."

It is good news that VR Systems protected its employees' e-mail accounts. Let's hope that those employees were equally diligent about protecting their personal e-mail accounts, home computers, networks, and phones. After all, many employees often work from home.

The Intercept report also highlighted a fact about life on the internet that all users should know: stolen log-in credentials are highly valued by criminals:

"Jake Williams, founder of computer security firm Rendition Infosec and formerly of the NSA’s Tailored Access Operations hacking team, said stolen logins can be even more dangerous than an infected computer. “I’ll take credentials most days over malware,” he said, since an employee’s login information can be used to penetrate “corporate VPNs, email, or cloud services,” allowing access to internal corporate data. The risk is particularly heightened given how common it is to use the same password for multiple services. Phishing, as the name implies, doesn’t require everyone to take the bait in order to be a success — though Williams stressed that hackers “never want just one” set of stolen credentials."

So, a word to the wise for all internet users: don't use the same log-in credentials at multiple sites, and don't open e-mail attachments from strangers. If you weren't expecting an attachment from a coworker, friend, or business associate, call them first and verify that they actually sent it. The internet has become a dangerous place.
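On the credential-reuse point: one practical way to check whether a password has already appeared in known breaches is the free Pwned Passwords "range" API from the Have I Been Pwned service. It is designed around k-anonymity, so the full password hash never leaves your machine. A minimal Python sketch (the function name is my own):

```python
import hashlib
import urllib.request

def password_appears_in_breaches(password: str) -> bool:
    """Check a password against the Pwned Passwords corpus via its k-anonymity API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the first five hex characters of the hash are ever sent over the network.
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # The API returns "SUFFIX:COUNT" lines for every breached hash sharing that prefix.
    return any(line.split(":", 1)[0] == suffix for line in body.splitlines())

if __name__ == "__main__":
    print(password_appears_in_breaches("password123"))  # True: found in many breaches
```

If a password you use anywhere turns up in that corpus, change it everywhere -- and a password manager makes unique credentials per site painless.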


Hacking Group Reported Security Issues With Samsung Galaxy S8 Phone's Iris Recognition

The Chaos Computer Club (CCC), a German hacking group founded in 1981, posted the following report on Monday:

"The iris recognition system of the new Samsung Galaxy S8 was successfully defeated by hackers... The Samsung Galaxy S8 is the first flagship smartphone with iris recognition. The manufacturer of the biometric solution is the company Princeton Identity Inc. The system promises secure individual user authentication by using the unique pattern of the human iris.

A new test conducted by CCC hackers shows that this promise cannot be kept: With a simple to make dummy-eye the phone can be fooled into believing that it sees the eye of the legitimate owner. A video shows the simplicity of the method."

The Samsung Galaxy S8 runs the Android operating system, claims a talk time of up to 30 hours, has a screen optimized for virtual reality (VR) apps, and features Bixby, an "... intelligent interface that is built into the Galaxy S8. With every interaction, Bixby can learn, evolve and adapt to you. Whether it's through touch, type or voice, Bixby will seamlessly help you get things done. (Voice coming soon)"

The CCC report also explained:

"Iris recognition may be barely sufficient to protect a phone against complete strangers unlocking it. But whoever has a photo of the legitimate owner can trivially unlock the phone. "If you value the data on your phone – and possibly want to even use it for payment – using the traditional PIN-protection is a safer approach than using body features for authentication," says Dirk Engling, spokesperson for the CCC."

Phys.org reported that Samsung executives are investigating the CCC report. Samsung views the Galaxy S8 as critical to the company's performance given the Note 7 battery issues and fires last year.

Some consumers might conclude from the CCC report that the best defense against iris hacks is to stop posting selfies. That conclusion would be wrong, and an insufficient defense:

"The easiest way for a thief to capture iris pictures is with a digital camera in night-shot mode or the infrared filter removed... Starbug was able to demonstrate that a good digital camera with 200mm-lens at a distance of up to five meters is sufficient to capture suitably good pictures to fool iris recognition systems."

So, photos other than selfies could reveal your iris details. The CCC report also reminded consumers of the security issues with using fingerprints to protect their devices:

"CCC member and biometrics security researcher starbug has demonstrated time and again how easily biometrics can be defeated with his hacks on fingerprint authentication systems – most recently with his successful defeat of the fingerprint sensor "Touch ID" on Apple’s iPhone. "The security risk to the user from iris recognition is even bigger than with fingerprints as we expose our irises a lot. Under some circumstances, a high-resolution picture from the internet is sufficient to capture an iris," Dirk Engling remarked."

What are your opinions of the CCC report?


The Guardian Site Reviews Documents Used By Facebook Executives To Moderate Content

The Guardian news site in the United Kingdom (UK) published the findings of its review of "The Facebook Files" -- a collection of documents which comprise the rules used by executives at the social site to moderate (e.g., review, approve, and delete) content posted by the site's members. Reporters at The Guardian reviewed:

"... more than 100 internal training manuals, spreadsheets and flowcharts that give unprecedented insight into the blueprints Facebook has used to moderate issues such as violence, hate speech, terrorism, pornography, racism and self-harm. There are even guidelines on match-fixing and cannibalism.

The Facebook Files give the first view of the codes and rules formulated by the site, which is under huge political pressure in Europe and the US. They illustrate difficulties faced by executives scrabbling to react to new challenges such as “revenge porn” – and the challenges for moderators, who say they are overwhelmed by the volume of work, which means they often have “just 10 seconds” to make a decision..."

The Guardian summarized what it learned about Facebook's revenge porn rules for moderators:

Revenge porn content rules found by The Guardian's review of Facebook documents

Reportedly, Facebook moderators reviewed as many as 54,000 cases in a single month related to revenge porn and "sextortion." In January of 2017, the site disabled 14,000 accounts due to this form of sexual violence. Previously, these rules were not available publicly. Findings about other rules are available at The Guardian site.

Other key findings from The Guardian's document review:

"One document says Facebook reviews more than 6.5m reports a week relating to potentially fake accounts – known as FNRP (fake, not real person)... Many moderators are said to have concerns about the inconsistency and peculiar nature of some of the policies. Those on sexual content, for example, are said to be the most complex and confusing... Anyone with more than 100,000 followers on a social media platform is designated as a public figure – which denies them the full protections given to private individuals..."

The social site struggles with how to handle violent language:

"Facebook’s leaked policies on subjects including violent death, images of non-sexual physical child abuse and animal cruelty show how the site tries to navigate a minefield... In one of the leaked documents, Facebook acknowledges “people use violent language to express frustration online” and feel “safe to do so” on the site. It says: “They feel that the issue won’t come back to them and they feel indifferent towards the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face..."

Some industry watchers in Europe doubt that Facebook can accomplish what it has set out to do, arguing that it lacks sufficient staff to effectively moderate content posted by almost 2 billion users and that its management should be more transparent about its content moderation rules. Others believe that Facebook and other social sites should be heavily fined "for failing to remove extremist and hate-crime material."

To learn more, see The Guardian site, which includes at least nine articles about its review of The Facebook Files:

Collection of articles by The Guardian which review Facebook's content policies