New Technologies Will Soon Make It More Difficult For Consumers To Spot Fake News

We've all heard the old saying: seeing is believing. Right? Not necessarily anymore.

New technologies will soon make it very easy for bad actors to manipulate videos of people -- politicians, law enforcement officials, celebrities, or anyone -- to make them appear to say things they never said. This will cause many problems, one of which will be the increasing difficulty, or impossibility, for consumers to spot fake news. CBS News explained:

"It starts with a selfie. Using that simple image, Hao Li, CEO of Los Angeles-based Pinscreen, can manipulate someone's face. You can literally put words in someone else's mouth. Li said it's all part of building a new virtual chat room world, but this type of advanced artificial intelligence technology is raising real eyebrows... For example, someone could take an image of President Trump and make him say something he didn't really say. Li said these kind of things are already possible in some ways. Comedian Jordan Peele used lip sync technology in a public service announcement (PSA) out Tuesday, warning against the dangers of fake news..."

Peele's PSA has already gotten more than 2.3 million views.

This is more confirmation that artificial intelligence is ripe for misuse by bad actors. The CBS News report also described some of the efforts by software developers to quickly create tools to spot manipulated images and video. Here's why:

"... at Pinscreen, Li said it won't take long before the line between what's real or not is erased. "It might be a year actually." "

Watch the entire CBS News report. These new image/video detection tools can't come soon enough. Consumers will need them. Journalists, military and intelligence agencies, government watchdog agencies, and corporate executives will need them, too. One can easily imagine bad actors using A.I. and other new technologies to create fake celebrity endorsements of products, services, or politicians. What are your opinions?


How Facebook Tracks Its Users, And Non-Users, Around the Internet

Many Facebook users wrongly believe that the social networking service doesn't track them around the internet when they aren't signed in. Also, many non-users of Facebook wrongly believe that they are not tracked.

Earlier this month, Consumer Reports explained the tracking:

"As you travel through the web, you’re likely to encounter Facebook Like or Share buttons, which the company calls Social Plugins, on all sorts of pages, from news outlets to shopping sites. Click on a Like button and you can see the number on the page’s counter increase by one; click on a Share button and a box opens up to let you post a link to your Facebook account.

But that’s just what’s happening on the surface. "If those buttons are on the page, regardless of whether you touch them or not, Facebook is collecting data," said Casey Oppenheim, co-founder of data security firm Disconnect."

This blog discussed social plugins back in 2010. However, the tracking now involves more technologies:

"... every web page contains little bits of code that request the pictures, videos, and text that browsers need to display each item on the page. These requests typically go out to a wide swath of corporate servers—including Facebook—in addition to the website’s owner. And such requests can transmit data about the site you’re on, the browser you are using, and more. Useful data gets sent to Facebook whether you click on one of its buttons or not. If you click, Facebook finds out about that, too. And it learns a bit more about your interests.

In addition to the buttons, many websites also incorporate a Facebook Pixel, a tiny, transparent image file the size of just one of the millions of pixels on a typical computer screen. The web page makes a request for a Facebook Pixel, just as it would request a Like button. No user will ever notice the picture, but the request to get it is packaged with information... Facebook explains what data can be collected using a Pixel, such as products you’ve clicked on or added to a shopping cart, in its documentation for advertisers. Web developers can control what data is collected and when it is transmitted... Even if you’re not logged in, the company can still associate the data with your IP address and all the websites you’ve been to that contain Facebook code."
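To make the quoted mechanism concrete, here is a minimal sketch of how a tracking pixel carries data. It is written in TypeScript, with a hypothetical endpoint and hypothetical parameter names -- not Facebook's actual API, just the general pattern such trackers follow:

```typescript
// Minimal sketch of a tracking pixel. The endpoint and parameter
// names below are hypothetical, for illustration only. The principle:
// the browser fetches a tiny image, and the request itself carries data.

function buildPixelUrl(endpoint: string, data: Record<string, string>): string {
  // Encode the tracking data as query-string parameters.
  const params = new URLSearchParams(data);
  return `${endpoint}?${params.toString()}`;
}

// A page embeds the pixel roughly like: <img src="..." width="1" height="1">.
// Beyond the parameters below, the server also learns whatever any HTTP
// request reveals on its own: the visitor's IP address, user agent,
// referring page, and any cookies previously set for the tracker's domain.
const pixelUrl = buildPixelUrl("https://tracker.example.com/px", {
  event: "ViewContent",           // hypothetical event name
  page: window.location.href,     // the page the visitor is viewing
  item: "sku-12345",              // hypothetical product identifier
});

// Creating the image is enough to fire the request; no click is needed.
const pixel = new Image(1, 1);
pixel.src = pixelUrl;
```

Note that merely loading the page fires the request. The visitor never has to click anything, which is why the buttons and pixels can track non-users, too.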

The article also explains "re-targeting" and how consumers who don't purchase anything at an online retail site will see advertisements later -- around the internet and not solely on the Facebook site -- about the items they viewed but didn't purchase. Then, there is the database Facebook assembles:

"In materials written for its advertisers, Facebook explains that it sorts consumers into a wide variety of buckets based on factors such as age, gender, language, and geographic location. Facebook also sorts its users based on their online activities—from buying dog food, to reading recipes, to tagging images of kitchen remodeling projects, to using particular mobile devices. The company explains that it can even analyze its database to build “look-alike” audiences that are similar... Facebook can show ads to consumers on other websites and apps as well through the company’s Audience Network."

So, several technologies are used to track both Facebook users and non-users, and assemble a robust, descriptive database. And, some website operators collaborate to facilitate the tracking, which is invisible to most users. Neat, eh?

Like it or not, internet users are automatically included in the tracking and data collection. Can you opt out? Consumer Reports also warns:

"The biggest tech companies don’t give you strong tools for opting out of data collection, though. For instance, privacy settings may let you control whether you see targeted ads, but that doesn’t affect whether a company collects and stores information about you."

Given this, one can conclude that Facebook is really a massive advertising network masquerading as a social networking service.

To minimize the tracking, consumers can: disable the Facebook API platform on their Facebook accounts, use Facebook's new tools (e.g., see these step-by-step instructions) to review and disable apps with access to their data, use ad-blocking software (e.g., Adblock Plus, Ghostery), use the opt-out mechanisms offered by the major data brokers, use the OptOutPrescreen.com site to stop pre-approved credit offers, and use VPN software and services.

If you use the Firefox web browser, configure it for Private Browsing and install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? Several web browsers offer Incognito Mode. And, you might try the Privacy Badger add-on instead. I've used it happily for years.

To combat "canvas fingerprinting" (e.g., tracking users by identifying the unique attributes of your computer, browser, and software), security experts have advised consumers to use different web browsers. For example, you'd use one browser only for online banking, and a different web browser for surfing the internet. However,  this security method may not work much longer given the rise of cross-browser fingerprinting.

It seems that an arms race is underway between software for users to maintain privacy online versus technologies by advertisers to defeat users' privacy. Would Facebook and its affiliates/partners use cross-browser fingerprinting? My guess: yes it would, just like any other advertising network.

What do you think?


Backpage Executive Pled Guilty In Three States. Several Other Executives Indicted

Late last week, the Washington Post reported:

"Carl Ferrer, the chief executive of Backpage.com whose name was conspicuously absent from an indictment of seven other Backpage officials unsealed Monday, has pleaded guilty in state courts in California and Texas and federal court in Arizona to charges of money laundering and conspiracy to facilitate prostitution. In addition, he agreed to testify against the men who co-founded Backpage with him, Michael Lacey and James Larkin, who remained in jail Thursday in Arizona on facilitating prostitution charges. Backpage, in addition to hosting thinly veiled ads for prostitution since 2004, was accused of hosting child sex trafficking ads on its site... Court records show that Ferrer pleaded guilty to conspiracy to facilitate prostitution and money laundering in federal court in Phoenix on April 5, with the hearing and documents sealed. Backpage.com also pleaded guilty, by Ferrer as the CEO, to a money laundering conspiracy in Phoenix, where Backpage was created. Ferrer then on Monday appeared in state court in Corpus Christi, Texas, where he personally pleaded guilty to money laundering..."


How To View The List Of Advertisers Tracking You On Facebook. Any Surprises On Your List?

The massive privacy and data security breach at Facebook.com involving Cambridge Analytica has heightened many users' sensitivity to the advertising practices of the social networking service. Many Facebook users want to know the exact list of advertisers tracking them.

How To View The List Of Advertisers Tracking You

How to view this list? It's easy. Sign into Facebook.com and navigate to Settings > Ads > Advertisers You've Interacted With. (When using a web browser, you'll have to click on the tiny arrow in the upper right portion of the page to access the drop-down menu.) Within the Ad Preferences page, click on the "Advertisers You've Interacted With" headline to open that module. When opened, it displays several lists of advertisers:

  1. Who've added their contact list to Facebook,
  2. Whose website or app you've used,
  3. Whom you've visited, and
  4. More.

The default view of list #1 displays 12 advertisers tracking you. There are probably many more on your list. Select "Show More" to view additional advertisers. Facebook doesn't make it easy: the module lacks a "Show All" button, which forces users to repeatedly select "Show More." Not good. Come on, Facebook! You can do better.

List #1 includes important explanatory text:

"These advertisers are running ads using a contact list they uploaded that includes your contact info. This info was collected by the advertiser, typically after you shared your email address with them or another business they've partnered with."

The key phrase to remember: or another business they've partnered with. So, list #1 includes not only advertisers but also their affiliates and business partners. Not good. More Facebook being Facebook.

I selected "Show More" about two dozen times to view my complete list: 235 advertisers tracking me, and collecting data about me. 235 advertisers even though I never used the Facebook mobile app, and had already disabled the Facebook API platform on my account years ago! Not good.

Your mileage will vary. There may be fewer or more advertisers on your list.

My list #1 included both advertisers I expected and many I didn't. The advertisers I expected to see included brands I currently do business with (e.g., Marriott Rewards, ACLU), brands I no longer do business with (e.g., Bank of America, AT&T), and brands whose Facebook pages I "Liked" or left comments on. The advertisers I didn't expect to see included politicians in other states I've neither visited nor lived in, brands I've never purchased from nor interacted with in any manner, brands I have never "Liked," and more.

Who's on your list? A friend shared:

"I looked at my list and it's crazy. Will follow the opt-out links tomorrow and clear them out. Cardi B was in my list of FB advertisers."

A rapper? That's too funny. I guess that's to be expected if you stream and share music online via Facebook. Me? I don't stream music online because that is another way to be tracked. Instead, I enjoy listening to CDs privately in my home. I prefer to keep my home a truly private place.

What's really going on here? Why the crazy long list? Popular Science explained:

"You, can thank the "data providers" for this mess. Mark Zuckerberg spent roughly 11 hours testifying in front of Congressional committees... One thing that got very little attention was the concept of “data brokers,” middleman businesses that collect consumer information and sell it to companies. Facebook stopped using them just last month. However, that long string of companies, personalities, and alternative rock bands is a result of Facebook’s old program... after the Cambridge Analytica scandal broke, but before Mark Zuckerberg’s marathon testimony in front of Congress, Facebook announced that it was ending a program called Partner Categories, canceling a long-standing relationship between the social network and data brokers. The change was announced in a short statement, but it has big implications for your personal information and the agencies that collect and sell it."

"The ability to target advertising is what makes Facebook its money—roughly $40 billion last year... while you provide lots of user information to Facebook, advertisers typically want even more... and that’s where data brokers come in. Facebook calls on brokers like Acxiom, Epsilon, and TransUnion to act as a conduit between Facebook and individual advertisers looking to reach targeted audiences..."

Readers of this blog may recognize TransUnion, one of the three major credit reporting agencies. So, the "advertisers" on Facebook tracking you (and data harvesting) include a variety of entities: traditional advertisers, business partners, affiliates, data brokers, and their intermediaries.

It's called "surveillance capitalism" for good reasons. Many companies besides Facebook do it.

What To Do Next

It's not easy to opt out or delete items from your advertising list. For those brands and entities you have "Liked," you can visit their Facebook page and "Unlike" them. However, that won't stop them or other "advertisers" from re-targeting (and tracking) you in the future. The "Ad Preferences" page for your profile also includes the "Your Information" module where you can toggle on or off advertising based upon certain profile elements:

Your Information module within Ad Preferences on Facebook.

The above image is from 2017, when I disabled all of the active toggles you see. Deactivating these toggles might minimize the number of ads displayed, but it won't stop the tracking and data collection. The Popular Science article includes links to several opt-out mechanisms for major data brokers. You could (and should) use those. However, two key problems remain.

First, these opt-out links should be easily accessible within Facebook. They aren't. This forces consumers to waste time hunting for the opt-out mechanisms, when Facebook has the expertise to provide them. Facebook probably knows that many consumers will give up and quit, rather than hunt for opt-out links. It's great that Popular Science did a lot of the work for consumers.

Second, the opt-out mechanisms offered by some data brokers are unnecessarily complex. Example: see the opt-out mechanisms offered by Experian, another credit reporting agency:

Experian opt-out site pages.

Didn't know that Experian plays in both ponds: credit reporting and data brokerage? Most people probably don't. Experian's site lacks a single, unified opt-out mechanism, which forces consumers to wade through seven different mechanisms and methods, some of which are paper-based and lack an online option. Not good!

TransUnion's opt-out mechanism isn't much better. And, it raises more questions than it answers. It links to the OptOutPrescreen.com site, which I completed way back in 2007. Did my Facebook membership undo that? Or is there some other data sharing at work which OptOutPrescreen doesn't cover? TransUnion's page doesn't explain, and neither does Facebook's. Not good.

Some people choose to use ad-blocking software (e.g., Adblock Plus, Ghostery) to suppress the display of online ads, but that probably won't stop the tracking and data collection internal to Facebook. There's no substitute for Facebook giving its users internal tools to completely disable and opt out of the tracking and data collection.

That highlights another problem: users are automatically included, so the burden is upon users to (continually) opt out. This is Facebook's business model. The reverse should be the default: users should not be tracked, nor their data harvested, unless they register and opt into the program. Given the social media site's business model, even if you opt out today, there's nothing stopping Facebook from re-subscribing you in the future with any updates to its system or terms of use.

How many advertisers are on your list? 200 or more? 300? 400? Any surprises on your list?


How To Check If Your Information Was Collected By Cambridge Analytica In The Facebook Breach

You've probably heard about the massive privacy and data security breach at Facebook.com, where users' information, plus their friends' information, was captured and shared with Cambridge Analytica by an app created by a university professor. Now, you want to know if your information was harvested.

How To Check

It's easy to check. Visit this Facebook Help Center page. If you are not signed into your Facebook account, then the page displays as:

Default version of the Facebook Help page for users to determine if their information was collected by Cambridge Analytica.

If you have already signed into your Facebook account and your information was not harvested, then the main column of the page displays:

Signed-in version of the Facebook Help page, displayed when your information was not collected by Cambridge Analytica.

If your information was harvested, then the content under "Was My Information Shared?" will be different. It may display this:

"Based upon our investigation, you don't appear to have logged into "This Is Your Digital Life" with Facebook before we removed it from our platform in 2015. However, a friend of yours did log in. As a result, the following information was likely shared with "This Is Your Digital Life": Your public profile, page likes, date of birth, and current city"

Of course, if you logged into the "This Is Your Digital Life" app yourself, then the page content will say so, and list the data elements harvested. Reportedly, about 270,000 Facebook users logged into the app/quiz, which then collected information on an estimated 87 million people, mostly those users' Facebook friends.

What To Do Next

There's not a lot you can do immediately. CNN Tech advised:

"Even if you delete your Facebook account, or remove third-party apps connected to your profile, the third-party apps will still have access to data they previously collected. Users have to contact the app individually to have the data be removed... According to a notice on affected accounts, the "small number of people" who accessed the app also shared their News Feed, timeline, posts and messages. A Facebook spokesperson confirmed that 1,500 users who logged into the app granted explicit access to their private message inbox... For now, the platform is directing people to their Settings page to see which apps are connected to their accounts, such as Uber and Netflix. Users can also disconnect those apps... Walt Mossberg, a veteran tech reporter and cofounder of tech website Recode, urged Facebook to let users know which friends accessed the app and when..."

Yeah, that! Facebook should inform affected users which of their friends contributed to the data leakage.

Of course, Facebook wants its users to keep using the service. Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information, and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. What can affected users do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protections as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Whatever you do, remember that lots of advertising networks and tech companies besides Facebook want to track your movements around the web. Some of those companies include internet service providers (ISPs), since the U.S. Federal Communications Commission (FCC) killed both broadband privacy and net neutrality in 2017.

A windfall for broadband providers, and terrible for consumers. You might contact your elected officials and demand that the FCC put broadband privacy and net neutrality protections back into place.


Security Experts: Breach At Panera Bread Affected Millions. Questions Linger About Vulnerability Fix

Apparently, Panera Bread experienced a massive data breach, which the restaurant chain's management allegedly ignored for months. CSO Online reported:

"Panera Bread’s website leaked millions of customer records in plain text for at least eight months, which is how long the company blew off the issues reported by security researcher Dylan Houlihan... Houlihan shared copies of email exchanges with Panera Bread CIO John Meister – who at first accused Houlihan of trying to run a scam when he first reported the security vulnerability back in August 2017... Exactly eight months after reporting the issue to Panera Bread, Houlihan turned to KrebsOnSecurity. Krebs spoke to Meister, and the website was briefly taken offline. Less than two hours later, Panera said it had fixed the problem."

Reportedly, the sensitive customer information leaked included usernames, first and last names, email addresses, phone numbers, home addresses, birthdays, the last four digits of saved credit card numbers, dietary restrictions, food preferences, and "social account integration information."

Security experts disagree about two key issues: a) whether or not the vulnerability was fixed, and b) the number of affected consumers. Panera Bread claimed about 10,000 customers were affected. Then, that number went up:

"After some more poking, Hold Security reported to Krebs that Panera didn’t just leak plain text records of 7 million customers; “the vulnerabilities also appear to have extended to Panera’s commercial division, which serves countless catering companies. At last count, the number of customer records exposed in this breach appears to exceed 37 million.”

A check earlier today of the public-facing pages at Panera's website failed to find a breach notice, which companies usually provide after a data breach. Not good. Shoppers need to know. Many states have breach notification laws.

Panera's behavior doesn't inspire much confidence. Its internal breach-detection mechanisms seem to have failed, and its post-breach response seemed unprepared, unfocused, and uninterested. What do you think?


Apple And Facebook Executives Exchange Criticisms

Chief executives at Apple and Facebook recently exchanged criticisms. During a lengthy interview by Recode's Kara Swisher and MSNBC’s Chris Hayes, Apple CEO Tim Cook responded to questions about Facebook's recent data security and privacy incident. The interview was conducted in Chicago, Illinois on Tuesday, March 27. It was broadcast on MSNBC on Friday, April 6, 2018. The relevant section of the interview:

"Hayes: We are back with Apple CEO Tim Cook. In the wake of the news about data scraping by Cambridge Analytica and Facebook, you had this to say recently, and I thought it was quite interesting. You said, "It’s clear to me that something, some large profound change, is needed. I’m personally not a big fan of regulation because sometimes regulation can have unexpected consequences to it. However, I think this certain situation is so dire, has become so large, that probably some well-crafted regulation is necessary." What’d you mean?

Cook: Yeah. Look, we’ve never believed that these detailed profiles of people — that has incredibly deep personal information that is patched together from several sources — should exist. That the connection of all of these dots, that you could use them in such devious ways if someone wanted to do that, that this was one of the things that were possible in life but shouldn’t exist.

Swisher: Right.

Cook: Shouldn’t be allowed to exist. And so I think the best regulation is no regulation, is self regulation. That is the best regulation, because regulation can have unexpected consequences, right? However, I think we’re beyond that here, and I do think that it’s time for a set of people to think deeply about what can be done here.

Hayes: Now, the cynic in me says, you’ve got other tech companies that are much more dependent on that kind of thing than Apple is. And so, yes, you want regulation here because that would essentially be a comparative advantage, that if regulation were to come in on this privacy question, the people it’s going to hit harder aren’t Apple. It’s places like Facebook and Google.

Cook: Well, the skeptic in you would be wrong. (laughter) The truth is we could make a ton of money if we monetized our customer. If our customer was our product, we could make a ton of money. We’ve elected not to do that. (applause) Because we don’t... our products are iPhones and iPads and Macs and HomePods and the Watch, etc., and if we can convince you to buy one, we’ll make a little bit of money, right? But you are not our product."

The comments about regulation are relevant since Mr. Zuckerberg will testify before Congress this week about Facebook's privacy and data security incident involving Cambridge Analytica. Mr. Cook's comments highlight the radically different business models.

Mr. Cook's comments didn't sit well with Mr. Zuckerberg. Vox's Ezra Klein interviewed Zuckerberg on Monday, April 2. The relevant portion of that interview:

"Ezra Klein: One of the things that has been coming up a lot in the conversation is whether the business model of monetizing user attention is what is letting in a lot of these problems. Tim Cook, the CEO of Apple, gave an interview the other day and he was asked what he would do if he was in your shoes. He said, “I wouldn’t be in this situation,” and argued that Apple sells products to users, it doesn’t sell users to advertisers, and so it’s a sounder business model that doesn’t open itself to these problems.

Do you think part of the problem here is the business model where attention ends up dominating above all else, and so anything that can engage has powerful value within the ecosystem?

Mark Zuckerberg: You know, I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.

That doesn’t mean that we’re not primarily focused on serving people. I think probably to the dissatisfaction of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.

But if you want to build a service which is not just serving rich people, then you need to have something that people can afford. I thought Jeff Bezos had an excellent saying on this in one of his Kindle launches a number of years back. He said, “There are companies that work hard to charge you more, and there are companies that work hard to charge you less.” And at Facebook, we are squarely in the camp of the companies that work hard to charge you less and provide a free service that everyone can use.

I don’t think at all that that means that we don’t care about people. To the contrary, I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me."

What to make of this? While Mr. Zuckerberg is entitled to his opinions, an old saying seems to apply: people in glass houses shouldn't throw stones.

There seems no question that Facebook built a platform which collected users' intimate and sensitive information, tracked users around the internet, allowed "advertisers" to collect information about both users who interacted with a quiz app and those users' friends (without the friends' knowledge), allowed "advertisers" to target groups of users (regardless of the law and/or consequences), and made it easier for "advertisers" to combine collected data with information from other sources. You may remember Facebook's "friction-less sharing" program in 2011, where apps automatically posted content in users' timelines without users' active involvement. And there's Facebook's history of a convoluted, often confusing interface for users to change their privacy settings.

You may remember, it was Apple which fought to protect its customers' sensitive information by resisting demands by federal law enforcement officials to build back-door hacks into its devices. I don't think Facebook can make a similar claim about protecting users' information. Actions speak louder than words.

Nobody forced Facebook to build the platform it built. Its executives made choices. And now, Mr. Zuckerberg is apologizing (again) for his company's behavior. You may remember Mr. Zuckerberg's admission of problems, and promises to do better, in January. Facebook COO Sheryl Sandberg also apologized last week for the executive failures in 2015. You might call it the #Facebookapologytour.

Mr. Zuckerberg's "an advertising-supported model is the only rational model" comment deserves attention. The only model? Mr. Zuckerberg and Facebook made the decision not to charge monthly fees. Would some users pay a monthly fee for guaranteed privacy? I imagine many users would gladly pay. I would. (An Apple co-founder is willing to pay, too.) It seems a more accurate statement would be: an advertising-supported model is the profit-maximizing model.

Also, Mr. Zuckerberg's "advertising-supported" description of his company's business model seems disingenuous. It gives the impression that traditional advertisers pay money to passively display ads, while the reality involves much more. More types of companies than traditional advertisers used the social networking service's sophisticated software tools (e.g., Facebook's API platform) to target groups and then collect data about Facebook users and their connected friends.

This makes one wonder how many other companies like Cambridge Analytica have harvested information -- either directly or indirectly via intermediaries. Facebook has suspended the account of Cubeyou, another alleged data harvester, while it investigates.

If there are more companies and Facebook executives know it, then they must admit it. Its March 21st press release promising to investigate all apps that had access to large amounts of information, and to conduct full audits of any apps with suspicious activity suggests that Facebook doesn't know. I'm not sure which is worse: knowing and not saying, or not knowing.

According to news reports, Cambridge Analytica paid sizeable amounts -- US $0.75 to $5.00 per voter -- for profiles crafted from Facebook users' information. Do that math... that could be amounts ranging from $1.5 million to $10 million, allegedly based upon 2 million users from 11 states: Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North Carolina, Oregon, South Carolina, and West Virginia. Nobody pays that amount of money without expecting satisfactory results.
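(The arithmetic: 2,000,000 voters × $0.75 = $1.5 million at the low end, and 2,000,000 × $5.00 = $10 million at the high end.)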

Later today, Facebook will inform users whose information may have been harvested by Cambridge Analytica. What are your opinions?


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said that a similar enforcement arm that would work with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — or about 50 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is that a company’s terms of service contain a blanket statement of consent that users never read or agree to.

James Grimmelman, a law professor and computer scientist, argued in a 2015 paper that the technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelman said in an interview, “we should ask for appropriate supervisions for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook.com has dominated the news during the past three weeks. The news media have reported about many issues, but there are more -- whether or not you use Facebook. Things began about mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."

So, data that should have been deleted wasn't. Then, Facebook relied upon certifications from entities that had lied previously. Not good. Then, Facebook posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether or not the data the perpetrators said was destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder which other developers and corporate or academic users have violated Facebook's rules and shared sensitive members' data they shouldn't have.

Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information, and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. Facebook needs to do more... perhaps, revise its Terms Of Use to include large fines for violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure members' sensitive information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics with political campaigns, it seems wise to assume that the practice will continue with improvements. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. I don't. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions, nor who gets which protections where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice-sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect the users' data it collects. Some of its privacy snafus: a settlement of a lawsuit over alleged privacy abuses by its Beacon advertising program, changes to members' ad settings without notice or consent, an advertising platform which allegedly facilitates age discrimination against older workers, health and privacy concerns about a new service for children ages 6 to 13, transparency concerns about political ads, and new lawsuits about the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now, we have yet another apology.

In a press release this afternoon, Facebook revised the number affected by the Facebook/CA breach upward, from 50 million to 87 million persons. Most, about 70.6 million, are in the United States. The breakdown by country:

Number of affected persons by country in the Facebook - Cambridge Analytica breach.

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on, specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too; especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg will testify before the U.S. Congress on April 11th. His testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to those available in the United Kingdom. And, there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation?" Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And, that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, if it refuses to provide protections equal to those in the United Kingdom, or if it backtracks later on its promises. Thoughts? Comments?


Fair Housing Groups Sue Facebook for Allowing Discrimination in Housing Ads

[Editor's Note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking services. It is reprinted with permission.]

By Julia Angwin and Ariana Tobin, ProPublica

In February 2017, in response to a ProPublica investigation, Facebook pledged to crack down on efforts by advertisers of rental housing to discriminate against tenants based on race, disability, gender and other characteristics.

But a new lawsuit, filed Tuesday by the National Fair Housing Alliance in U.S. District Court in the Southern District of New York, alleges that the world’s largest social network still allows advertisers to discriminate against legally protected groups, including mothers, the disabled and Spanish-language speakers.

Since 2018 marks the 50th anniversary of the Fair Housing Act, "it is all the more egregious and shocking" that "Facebook continues to enable landlords and real estate brokers to bar families with children, women and others from receiving rental and sales ads or housing," the lawsuit states. It asks the court, among other things, to declare that Facebook’s policies violate fair housing laws, to bar the company from publishing discriminatory ads, and to require it to develop and make public a written fair housing policy for advertising.

Diane Houk, lead counsel for the alliance, said this type of discrimination is especially difficult to uncover and combat. "The person who is being discriminated against has no way to know" it, because the technology "keeps the discrimination hidden in hopes that it will not be caught," she said.

Facebook disputes the housing groups’ allegations. "There is absolutely no place for discrimination on Facebook. We believe this lawsuit is without merit, and we will defend ourselves vigorously," said Facebook spokesman Joe Osborne.

The lawsuit adds to Facebook’s woes, which are mounting on multiple fronts. The company’s stock plunged last week on the news that it had allowed a voter-profiling outfit, Cambridge Analytica, to obtain data on 50 million of its users without their knowledge or consent. The news came after a troubling year in which, among other things, Facebook admitted that it unwittingly allowed a Russian disinformation operation on its platform and had been promoting fake news in its News Feed algorithm. As a result, lawmakers and regulators around the world have launched investigations into Facebook.

Discrimination in housing advertising has been a persistent problem for Facebook. In October 2016, we described how Facebook let advertisers use what it called "ethnic affinities" to exclude specific groups, including blacks and Hispanics, from seeing ads. Although Facebook responded by announcing it had built a system to flag and reject these ads, we bought dozens of rental housing ads in November 2017 that we specified would not be shown to blacks, Jews, people interested in wheelchair ramps and other groups.

It wasn’t until ProPublica brought the issue of advertising discrimination on Facebook to light, Houk said, that fair housing advocates learned of it. Emulating ProPublica’s technique, the Washington, D.C.-based national fair housing group, along with member groups in New York, San Antonio and Miami, created fake housing companies and placed discriminatory ads on Facebook. The ads were approved by Facebook over a period of a few months, with the most recent buys occurring on Feb. 23.

Using Facebook’s dropdown "exclusion" menu, they were able to buy housing ads that blocked groups such as "trendy moms," "soccer moms," "parents with teenagers," people interested in a disabled parking permit and people interested in Telemundo, the Spanish-language television network.

The Fair Housing Act makes it illegal to publish any advertisement "with respect to the sale or rental of a dwelling that indicates any preference, limitation or discrimination based on race, color, religion, sex, handicap, familial status or national origin." Violators may face tens of thousands of dollars in fines.

After ProPublica’s investigation, Facebook added a self-certification option, which asks housing advertisers to certify that their advertisement is not discriminatory. In some cases, Houk said, the housing groups encountered the self-certification option and stopped there, not submitting those ads for approval and publication. But that screen appeared in only some of the ad buys, she said.

Since advertisers can falsely attest to fairness, the self-certification screens don’t "seem like a whole-hearted commitment to trying to change the advertising platform to comply with the Fair Housing Act and local fair housing laws," Houk said.

A couple of weeks after the groups bought housing ads, so did ProPublica (independently) — and we excluded some of the same categories, such as "soccer moms." In most of those tests, we encountered self-certification screens. However, when we bought another housing ad this week, we were able to exclude people interested in Telemundo.

Houk said there were so many possible explanations for the difference in results — such as the number of categories excluded or the types of exclusions sought — that it was impossible to speculate about what caused many of her clients’ ad purchases to be approved but not ProPublica’s.

Both the fair housing groups and ProPublica found that Facebook has blocked the use of race as an exclusion category — as it promised to do in November. Facebook rejected a ProPublica housing ad that was specifically aimed at African Americans. It also denied our attempts to buy employment ads targeted by race, and removed a job listing with a question designed to filter by race. However, the housing groups’ and ProPublica’s ability to exclude people interested in Telemundo suggests that advertisers could still discriminate by using proxies for race or ethnicity.

In a separate federal case in California, challenging Facebook’s use of racial exclusions in ad targeting, Facebook has argued that it has immunity against liability for such discrimination. It cited Section 230 of the 1996 federal Communications Decency Act, which protects internet companies from liability for third-party content.

"Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any," Facebook contended.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


How the Crowd Led ProPublica to Investigate IBM

[Editor's note: today's guest post, by the reporters at ProPublica, discusses employment practices at a major corporation in the United States. The investigation is as interesting as the "Cutting 'Old Heads' At IBM" report. This also caught my attention because a data breach at IBM in 2007 led to the creation of this blog. Today's article is reprinted with permission.]

By Ariana Tobin and Peter Gosselin, ProPublica

On March 22, we reported that over the past five years IBM has been removing older U.S. employees from their jobs, replacing some with younger, less experienced, lower-paid American workers and moving many other jobs overseas.

We’ve got documentation and details — most of which are the direct result of a questionnaire filled out by over 1,100 former IBMers.

We’ve gone to the company with our findings. IBM did not answer the specific questions we sent. Spokesman Edward Barbini said: “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years.”

We don’t know the exact size of the problem. Our questionnaire isn’t a scientific sample, nor did all the participants tell us they experienced age discrimination. But the hundreds of similar stories show a pattern of older employees being pushed out even when the company itself says they were doing a good job.

This project wasn’t inspired by a high-level leak or an errant line in secret documents. It came to us through reader engagement. Our investigation took us beyond some of our usual reporting techniques. We’d like to elaborate on this because:

  • We know readers will wonder how we sourced some pretty serious claims.
  • Many ex-employees trusted us with their stories and spent many hours in conversation with us. We think it’s good practice to let them know how we’ve used their information.
  • This is probably the first time we’ve been pointed to a big project by a community of people we found through digital outreach. We hope that by sharing our experiences, we can help others build on our work.

IBMers found us

This project started as a conversation between the two of us, both reporters at ProPublica. Peter had taken on the age discrimination beat for reasons both personal and professional. Ariana was newly minted into a job called “engagement reporter.”

Ariana suggested that Peter write up a short essay on his own experiences of being laid off at 63 and searching for a job in the aftermath. We attached a short questionnaire to the bottom and headlined it: “Over 50 and looking for a job? We’d like to hear from you.”

Dozens of people responded within the first couple of weeks. As we looked through this first round of questionnaires, we noticed a theme: a whole lot of information technology workers told us they were struggling to stay employed. And those who had lost their jobs? They were having a really hard time finding new work.

Of those IT workers, several mentioned IBM right off the bat. One woman wrote that she and her coworkers were working together to find new jobs in order to “ward off the dreaded old person layoff from IBM.”

Another wrote: “I can probably help you get a lot more stories, contact me if you want to discuss this possibility.”

Another wrote: “Part of the separation agreement was that I not seek collective action against IBM for age discrimination. I was not going to sign as a law firm was planning to file a grievance. However they needed 10 people to agree and they could not get the numbers.”

… and then they connected us with more IBMers

We started making some calls. One of the first people we talked to was Brian Paulson, a 57-year-old senior manager with 18 years at IBM, who was fired for “performance reasons” that the company refused to explain. He was still job-hunting two years later.

Another ex-IBM employee told us that she had seen examples of older workers laid off from many parts of the company on a public Facebook page called WatchingIBM. Ariana spent a day looking through the posts, which were, as promised, crawling with stories, questions, and calls for support from workers of all kinds.

We decided to reach out to the page’s administrator, who was a longtime IBM workplace activist, Lee Conrad. He shared our age discrimination questionnaire in the group and more responses poured in.

With dozens of interviews already on the books, we decided to launch a second, more specific questionnaire — this time about IBM

We realized that we had been pointed toward an angry, sad and motivated group. The older ex-IBM workers we called were trying to figure out whether their own layoffs were unique or part of a larger trend. And if they were part of a larger trend... how many people were affected?

A major frustration we saw in comment after comment: These workers couldn’t get information on how many others had been forced out with them.

This was an information gap that immediately struck Peter, because that information is exactly what the law requires employers to disclose at the time of a layoff.

On top of that, many of these sources mentioned having been forced to sign agreements that kept them from going to court or even talking about what had happened to them. They were scared to do anything in violation of those agreements, a fear that kept them from finding out the answers to some big open questions: Why would IBM have stopped releasing the ages and positions of those let go, as they had done before 2014 to comply with federal law? How many workers out there believed they had been “retired” against their will? What did managers really tell their subordinates when the time came to let them go? Who was left to do all of their work?

So we wrote up another questionnaire asking those specific questions.

We learned from the responses, and also the response rate

We contacted people on listservs, found them on open petitions, joined closed LinkedIn networks, and followed each posting on ex-IBM groups. We tweeted the questionnaire out on days that IBM reported its earnings, including the company’s ticker symbol. We talked to trade magazines and IBM historians and organizers who still work at IBM. We bought ads on Facebook and aimed them toward cities and towns where we knew IBM had been cutting its workforce.

As the responses came in, we tried to figure out where most of them were coming from. To identify any meaningful trends, we needed to know who was answering, what was working, and why. We also realized that we needed to introduce ourselves in order to persuade anyone it was worth participating.

When something worked, we’d double down.

We know what worked best: when people filled out the questionnaire, they’d also share their contact information with us. So we asked them to forward the questionnaire around within their own networks.

And we got more leads

We read through all of the responses and identified themes: 183 respondents said the company recorded them as having retired by choice even though they had no desire to retire or flat-out objected to the idea. Forty-five people were told they’d have to uproot their lives and move sometimes thousands of miles from the communities where they had worked for years, or else resign. Fifty-three said their jobs had been moved overseas. Some were happy they’d left. Some were company luminaries, given top ratings throughout their career. Some were still fighting over benefits and health care. Some were worried about finding work ever again.

Inevitably, this categorization process led us to identify new patterns as we went along, and as new responses accumulated. For each new pattern, we would go back and see how many people fit.

One of the first and most interesting such categories was people who had received emails congratulating them on their retirement at the same time as they were informed of their layoff. We realized there would be power in numbers there, so we set up a SecureDrop for people who were willing to send us their paperwork.

Eventually, we also created a category called “legal action.” We’d stumbled upon support groups of ex-IBM employees who had filed formal complaints with the Equal Employment Opportunity Commission. Some sent us the company’s responses to their individual complaints, giving us insight into the way the company responded to allegations of discrimination. These seemed, of course, very useful.

In other words, we sent some rather complicated mass emails and were surprised over and over again by the specificity of the responses.

IBM undoubtedly has information that would shed light on the documents, its layoff practices or the overall extent and nature of its job cuts. The company chose not to respond to our questions about those issues.

So we tried to answer ex-IBMers’ questions ourselves, including one of the most basic: How many employees ages 40 and over were let go or left in recent years?

IBM won’t say. In fact, over the years, the company has stopped releasing almost all information about its U.S. workforce. In 2009, it stopped publishing its American employment total. In 2014, it stopped disclosing the numbers and ages of older employees it was laying off, a requirement of the nation’s basic anti-age bias law, the Age Discrimination in Employment Act (ADEA).

So we’ve sought to estimate the number, relying on one of the few remaining bits of company-provided information — a technique developed by a veteran financial analyst who follows IBM for investors — as well as patterns we spotted in internal company documents.

We began with a line in the company’s quarterly and annual filings with the U.S. Securities and Exchange Commission for “workforce rebalancing,” a company term for layoffs, firings and other non-retirement departures. It’s a gauge of what IBM spends to let people go. In the past five years, workforce rebalancing charges have totaled $4.3 billion.

The technique was used by veteran IBM analyst Toni Sacconaghi of Bernstein Research. Sacconaghi is a respected Wall Street analyst who has been named to Institutional Investor’s All-America Research Team every year since 2001. His technique and layoff estimates have been widely cited by news organizations including The Wall Street Journal and Fortune.

Some years ago, Sacconaghi estimated that IBM’s average per-employee cost for laying off a worker was $70,000.

Dividing $4.3 billion by $70,000 suggests that during the past five years IBM’s worldwide job cuts totaled about 62,000. If anything, that number is low, given IBM executives’ comments at a recent investor conference. Internal company documents we reviewed suggest that 50 to 60 percent of cuts were made in the U.S., with older workers representing roughly 60 percent of those. That translates to about 20,000 older American workers let go.
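
The arithmetic behind those figures is easy to reproduce. A minimal sketch, assuming the midpoint (55 percent) of the 50-to-60-percent U.S. share cited above:

```python
# Back-of-the-envelope layoff estimate, reconstructed from the figures above.
rebalancing_charges = 4.3e9    # IBM's 5-year "workforce rebalancing" charges
cost_per_employee = 70_000     # Sacconaghi's estimated per-layoff cost

worldwide_cuts = rebalancing_charges / cost_per_employee
us_cuts = worldwide_cuts * 0.55     # assumed midpoint of the 50-60% US share
older_us_cuts = us_cuts * 0.60      # ~60% of U.S. cuts hit older workers

print(f"worldwide cuts:  {worldwide_cuts:,.0f}")   # about 62,000
print(f"older U.S. cuts: {older_us_cuts:,.0f}")    # about 20,000
```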

Our analysis suggests the total of U.S. layoffs is almost certainly higher.

First, as Sacconaghi said in a recent interview, IBM’s per-employee rebalancing costs are likely much lower now because, starting in 2016, the company reduced severance payments to departing employees from six months to just 30 days. That means IBM can lay off or fire more people for the same or lower overall costs.

Second, because, as those ex-IBMers told us, the company often converts their layoffs into retirements, the workforce rebalancing numbers don’t tell the whole story.

Right below the line for “workforce rebalancing” in its SEC filings, IBM adds another line for “retirement-related costs,” which reflects how much the company spends each year retiring people out. Some — perhaps a substantial amount of that — went to retirements that were less than fully voluntary. This could add up to thousands more people.

By coming up with answers and investigating in the open, we’ve gotten more sources

Many of the conversations we’ve had during our reporting didn’t make it into the final story. People allowed us to review internal company documents. They let us see long email exchanges with their managers. They dug back through closets and garages to find memos they had saved out of frustration or fatigue or just plain anger.

We can’t go into detail about all of the ways the community helped us report out this story, because we also promised many of our sources that we would protect their confidentiality. The beauty is that they talked to us anyway. They knew where to find us, because our contact information had been spread far and wide.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Report: Social Media Use in 2018

There has been plenty of controversy recently surrounding social media: job advertisements which exclude older workers, concerns that social media threaten democracies, transparency concerns about political advertisements, censorship applied inconsistently, politicians blocking constituents, promises by Facebook to do better, and more. Given these issues, it's reasonable to ask: who uses social media? Which sites? Has this changed over time? Would any users stop using social media?

The Pew Research Center recently released its latest report, "Social Media Use in 2018." Key findings:

"Facebook remains the primary platform for most Americans. Roughly two-thirds of U.S. adults (68%) now report that they are Facebook users, and roughly three-quarters of those users access Facebook on a daily basis. With the exception of those 65 and older, a majority of Americans across a wide range of demographic groups now use Facebook... The video-sharing site YouTube – which contains many social elements, even if it is not a traditional social media platform – is now used by nearly three-quarters of U.S. adults and 94% of 18- to 24-year-olds... Some 78% of 18- to 24-year-olds use Snapchat, and a sizeable majority of these users (71%) visit the platform multiple times per day. Similarly, 71% of Americans in this age group now use Instagram and close to half (45%) are Twitter users... Pinterest remains substantially more popular with women (41% of whom say they use the site) than with men (16%). LinkedIn remains especially popular among college graduates and those in high-income households. Some 50% of Americans with a college degree use LinkedIn, compared with just 9% of those with a high school diploma or less. The messaging service WhatsApp is popular in Latin America, and this popularity also extends to Latinos in the United States – 49% of Hispanics report that they are WhatsApp users, compared with 14% of whites and 21% of blacks."

The report was based on telephone interviews of 2,002 adults (18 years of age or older) living in the United States. The interviews were conducted during Jan. 3 - 10, 2018, and included 500 respondents via landline telephones, and 1,502 respondents via mobile phones. The survey was conducted by interviewers under the direction of Abt Associates.
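
For context on the survey's precision, a quick calculation: treating the sample as a simple random one gives a maximum sampling error of roughly plus or minus 2.2 percentage points. Pew's published margin will differ somewhat because of weighting and design effects; the sketch below is only the textbook approximation.

```python
# Rough maximum sampling error for a survey of n = 2,002 adults, assuming a
# simple random sample. Pew's actual margin differs due to design effects.
import math

n = 2002                                  # adults interviewed
p = 0.5                                   # worst-case proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)   # 95% confidence level
print(f"+/- {moe * 100:.1f} percentage points")   # about +/- 2.2
```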

A couple of charts highlight the key findings:

Pew Research Center. Social Media Use in 2018. Site use by age groups.

Pew Research Center. Social Media Use in 2018. Reciprocity usage.

Pew Research also found:

"... the share of social media users who say these platforms would be hard to give up has increased by 12 percentage points compared with a survey conducted in early 2014. But by the same token, a majority of users (59%) say it would not be hard to stop using these sites, including 29% who say it would not be hard at all to give up social media."

View more information and details in the full report at the Pew Research Center site.