
Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service, whether or not they are signed in. Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the use of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. And in 2018, smart mannequins have appeared in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows the users it identifies to accept or decline name labels for each photo. Each Facebook user can adjust their privacy settings to enable or disable the addition of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos and videos with names attached, so it can perform the matching while suppressing name labels for users who have declined or disabled them. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
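Facebook has not published its matching code, so the sketch below is only a rough illustration of the flow its spokeswoman describes (scan, encode, compare against templates of enrolled users, discard when there is no match). The type names, embedding vectors, and distance threshold are all invented for illustration:

```typescript
// Illustrative sketch only -- not Facebook's actual system.
// Assumes faces have already been encoded into numeric vectors.

type FaceTemplate = { name: string; embedding: number[] };

// Euclidean distance between two facial-feature vectors.
function distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// Compare an unknown face against templates of users who have the
// facial recognition setting turned ON. A below-threshold distance
// counts as a match; the threshold here is an arbitrary placeholder.
function matchFace(
  unknown: number[],
  enrolled: FaceTemplate[],
  threshold = 0.6
): string | null {
  let best: { name: string; d: number } | null = null;
  for (const t of enrolled) {
    const d = distance(unknown, t.embedding);
    if (best === null || d < best.d) best = { name: t.name, d };
  }
  // Per Facebook's description: on no match, the facial data is
  // deleted rather than stored.
  return best !== null && best.d < threshold ? best.name : null;
}
```

Note that even this toy version requires scanning every face in a photo before deciding whether a match exists, which is the behavior critics object to.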

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in late 2017, Apple launched its iPhone X with the Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a store -- just as websites present privacy and terms-of-use policies. Otherwise, shoppers at brick-and-mortar stores have neither notice nor informed consent.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


Federal Investigation Into Facebook Widens. Company Stock Price Drops

The Boston Globe reported on Tuesday (links added):

"A federal investigation into Facebook’s sharing of data with political consultancy Cambridge Analytica has broadened to focus on the actions and statements of the tech giant and now involves three agencies, including the Securities and Exchange Commission, according to people familiar with the official inquiries.

Representatives for the FBI, the SEC, and the Federal Trade Commission have joined the Justice Department in its inquiries about the two companies and the sharing of personal information of 71 million Americans... The Justice Department and the other federal agencies declined to comment. The FTC in March disclosed that it was investigating Facebook over possible privacy violations..."

About 87 million people worldwide were affected by the Facebook breach involving Cambridge Analytica, roughly 71 million of them Americans. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement against tech companies like Google and Facebook.

After news broke about the wider probe, shares of Facebook stock fell about 18 percent and then recovered somewhat, for a net drop of 2 percent. That 2 percent drop represents about $12 billion in market valuation. Clearly, there will be more news (and stock price fluctuations) to come.

During the last few months, there has been plenty of news about Facebook.


Facebook’s Screening for Political Ads Nabs News Sites Instead of Politicians

[Editor's note: today's post, by reporters at ProPublica, discusses new advertising rules at the Facebook.com social networking service. It is reprinted with permission.]

By Jeremy B. Merrill and Ariana Tobin, ProPublica

One ad couldn’t have been more obviously political. Targeted to people aged 18 and older, it urged them to “vote YES” on June 5 on a ballot proposition to issue bonds for schools in a district near San Francisco. Yet it showed up in users’ news feeds without the “paid for by” disclaimer required for political ads under Facebook’s new policy designed to prevent a repeat of Russian meddling in the 2016 presidential election. Nor does it appear, as it should, in Facebook’s new archive of political ads.

The other ad was from The Hechinger Report, a nonprofit news outlet, promoting one of its articles about financial aid for college students. Yet Facebook’s screening system flagged it as political. For the ad to run, The Hechinger Report would have to undergo the multi-step authorization and authentication process of submitting Social Security numbers and identification that Facebook now requires for anyone running “electoral ads” or “issue ads.”

When The Hechinger Report appealed, Facebook acknowledged that its system should have allowed the ad to run. But Facebook then blocked another ad from The Hechinger Report, about an article headlined, “DACA students persevere, enrolling at, remaining in, and graduating from college.” This time, Facebook rejected The Hechinger Report’s appeal, maintaining that the text or imagery was political.

As these examples suggest, Facebook’s new screening policies to deter manipulation of political ads are creating their own problems. The company’s human reviewers and software algorithms are catching paid posts from legitimate news organizations that mention issues or candidates, while overlooking straightforwardly political posts from candidates and advocacy groups. Participants in ProPublica’s Facebook Political Ad Collector project have submitted 40 ads that should have carried disclaimers under the social network’s policy, but didn’t. Facebook may have underestimated the difficulty of distinguishing between political messages and political news coverage — and the consternation that failing to do so would stir among news organizations.

The rules require anyone running ads that mention candidates for public office, are about elections, or that discuss any of 20 “national issues of public importance” to verify their personal Facebook accounts and add a "paid for by" disclosure to their ads, which are to be preserved in a public archive for seven years. Advertisers who don’t comply will have their ads taken down until they undergo an "authorization" process, submitting a Social Security number, driver’s license photo, and home address, to which Facebook sends a letter with a code to confirm that anyone running ads about American political issues has an American home address. The complication is that the 20 hot-button issues — environment, guns, immigration, values, foreign policy, civil rights and the like — are likely to pop up in posts from news organizations as well.
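The complication is easy to demonstrate. Facebook keeps its real parameters secret, so the toy sketch below is only a guess at the general shape of a keyword-based screen; the term list is a sample drawn from the published 20 issues, and the examples mirror the two ads described above:

```typescript
// Toy illustration only -- Facebook's actual classifier and keyword
// parameters are secret. A screen keyed on issue words flags a news
// headline as readily as an advocacy ad, and misses a plainly
// political ad that avoids the listed terms.
const issueTerms = ["abortion", "taxes", "guns", "immigration", "health", "values"];

function looksPolitical(adText: string): boolean {
  const text = adText.toLowerCase();
  return issueTerms.some((term) => text.includes(term));
}

// A ballot-measure ad slips through (no issue term appears)...
console.log(looksPolitical("Vote YES on June 5 on the school bond measure")); // false
// ...while a news story about financial aid gets flagged ("health").
console.log(looksPolitical("New state health data on college financial aid")); // true
```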

"This could be really confusing to consumers because it’s labeling news content as political ad content," said Stefanie Murray, director of the Center for Cooperative Media at Montclair State University.

The Hechinger Report joined trade organizations representing thousands of publishers earlier this month in protesting this policy, arguing that the filter lumps their stories in with the very organizations and issues they are covering, thus confusing readers already wary of "fake news." Some publishers — including larger outlets like New York Media, which owns New York Magazine — have stopped buying ads on political content they expect would be subject to Facebook’s ad archive disclosure requirement.

"When it comes to news, Facebook still doesn’t get it. In its efforts to clear up one bad mess, it seems set on joining those who want blur the line between reality-based journalism and propaganda," Mark Thompson, chief executive officer of The New York Times, said in prepared remarks at the Open Markets Institute on Tuesday, June 12th.

In a statement Wednesday June 13th, Campbell Brown, Facebook’s head of global news partnerships, said the company recognized "that news content was different from political and issue advertising," and promised to create a "differentiated space within our archive to separate news content from political and issue ads." But Brown rejected the publishers’ request for a "whitelist" of legitimate news organizations whose ads would not be considered political.

"Removing an entire group of advertisers, in this case publishers, would go against our transparency efforts and the work we’re doing to shore up election integrity on Facebook," she wrote."“We don’t want to be in a position where a bad actor obfuscates its identity by claiming to be a news publisher." Many of the foreign agents that bought ads to sway the 2016 presidential election, the company has said, posed as journalistic outlets.

Her response didn’t satisfy news organizations. Facebook "continues to characterize professional news and opinion as ‘advertising’ — which is both misguided and dangerous," said David Chavern, chief executive of the News Media Alliance — a trade association representing 2,000 news organizations in the U.S. and Canada — and co-author of an open letter to Facebook on June 11.

ProPublica asked Facebook to explain its decision to block 14 advertisements shared with us by news outlets. Of those, 12 were ultimately rejected as political content, one was overturned on appeal, and one Facebook could not locate in its records. Most of these publications, including The Hechinger Report, are affiliated with the Institute for Nonprofit News, a consortium of mostly small nonprofit newsrooms that produce primarily investigative journalism (ProPublica is a member).

Here are a few examples of news organization ads that were rejected as political:

  • Voice of Monterey Bay tried to boost an interview with labor leader Dolores Huerta headlined "She Still Can." After the ad ran for about a day, Facebook sent an alert that the ad had been turned off. The outlet is refusing to seek approval for political ads, “since we are a news organization,” said Julie Martinez, co-founder of the nonprofit news site.
  • Ensia tried to advertise an article headlined: "Opinion: We need to talk about how logging in the Southern U.S. is harming local residents." It was rejected as political. Ensia will not appeal or buy new ads until Facebook addresses the issue, said senior editor David Doody.
  • inewsource tried to promote a post about a local candidate, headlined: "Scott Peters’ Plea to Get San Diego Unified Homeless Funding Rejected." The ad was rejected as political. inewsource appealed successfully, but then Facebook changed its mind and rejected it again, a spokeswoman for the social network said.
  • BirminghamWatch tried to boost a post about a story headlined, "‘That is Crazy:’ 17 Steps to Cutting Checks for Birmingham Neighborhood Projects." The ad was rejected as political and rejected again on appeal. A little while later, BirminghamWatch’s advertiser on the account received a message from Facebook: "Finish boosting your post for $15, up to 15,000 people will see it in NewsFeed and it can get more likes, comments, and shares." The nonprofit news site appealed again, and the ad was rejected again.

For most of its history, Facebook treated political ads like any other ads. Last October, a month after disclosing that "inauthentic accounts… operated out of Russia" had spent $100,000 on 3,000 ads that "appeared to focus on amplifying divisive social and political messages," the company announced it would implement new rules for election ads. Then in April, it said the rules would also apply to issue-related ads.

The policy took effect last month, at a time when Facebook’s relationship with the news industry was already rocky. A recent algorithm change reduced the number of posts from news organizations that users see in their news feed, thus decreasing the amount of traffic many media outlets can bring in without paying for wider exposure, and frustrating publishers who had come to rely on Facebook as a way to reach a broader audience.

Facebook has pledged to assign 3,000-4,000 "content moderators" to monitor political ads, but hasn’t reached that staffing level yet. The company told ProPublica that it is committed to meeting the goal by the U.S. midterm elections this fall.

To ward off "bad actors who try to game our enforcement system," Facebook has kept secret its specific parameters and keywords for determining if an ad is political. It has published only the list of 20 national issues, which it says is based in part on a data-coding system developed by a network of political scientists called the Comparative Agendas Project. A director on that project, Frank Baumgartner, said the lack of transparency is problematic.

"I think [filtering for political speech] is a puzzle that can be solved by algorithms and big data, but it has to be done right and the code needs to be transparent and publicly available. You can’t have proprietary algorithms determining what we see," Baumgartner said.

However Facebook’s algorithms work, they are missing overtly political ads. Incumbent members of Congress, national advocacy groups and advocates of local ballot initiatives have all run ads on Facebook without the social network’s promised transparency measures, after they were supposed to be implemented.

Ads from Senator Jeff Merkley, Democrat-Oregon; Representative Don Norcross, Democrat-New Jersey; and Representative Pramila Jayapal, Democrat-Washington, all ran without disclaimers as recently as this past Monday. So did an ad from Alliance Defending Freedom, a right-wing group that represented a Christian baker whose refusal for religious reasons to make a wedding cake for a gay couple was upheld by the Supreme Court this month. And ads from NORML, the marijuana legalization advocacy group, and MoveOn, the liberal organization, ran for weeks before being taken down.

ProPublica asked Facebook why these ads weren’t considered political. The company said it is reviewing them. "Enforcement is never perfect at launch," it said.

Clarification, June 15, 2018: This article has been updated to include more specific information about the kinds of advertising New York Media has stopped buying on Facebook’s platform.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


What Facebook’s New Political Ad System Misses

[Editor's Note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

By Jeremy B. Merrill, Ariana Tobin, and Madeleine Varner, ProPublica

Facebook’s long-awaited change in how it handles political advertisements is only a first step toward addressing a problem intrinsic to a social network built on the viral sharing of user posts.

The company’s approach, a searchable database of political ads and their sponsors, depends on the company’s ability to sort through huge quantities of ads and identify which ones are political. Facebook is betting that a combination of voluntary disclosure and review by both people and automated systems will close a vulnerability that was famously exploited by Russian meddlers in the 2016 election.

The company is doubling down on tactics that so far have not prevented the proliferation of hate-filled posts or ads that use Facebook’s capability to target particular groups.

If the policy works as Facebook hopes, users will learn who has paid for the ads they see. But the company is not revealing details about one significant aspect of how political advertisers use its platform: the specific attributes ad buyers use to target a particular person for an ad.

Facebook’s new system is the company’s most ambitious response thus far to the now-documented efforts by Russian agents to circulate items that would boost Donald Trump’s chances or suppress Democratic turnout. The new policies announced Thursday will, in several ways, make it harder for somebody to exploit the precise vulnerabilities in Facebook’s system that the Russians used in 2016:

First, political ads that you see on Facebook will now include the name of the organization or person who paid for them, reminiscent of disclaimers required on political mailers and TV ads. (The ads Facebook identified as placed by Russians carried no such tags.)

The Federal Election Commission requires political ads to carry such clear disclosures, but as we have reported, many candidates and groups on Facebook haven’t been following that rule.

Second, all political ads will be published in a searchable database.

Finally, the company will now require that anyone buying a political ad in their system confirm that they’re a U.S. resident. Facebook will even mail advertisers a postcard to make certain they’re in the U.S. Facebook says ads by advertisers whose identities aren’t verified under this process will be taken down starting in about a week, and they will be blocked from buying new ads until they have verified themselves.

While the new system can still be gamed, the specific tactics used by the Russian Internet Research Agency, such as an overseas purchase of ads promoting a Black Lives Matter rally under the name “Blacktivist,” will become harder — or at least harder to do without getting caught.

The company has also pledged to devote more employees to the issue, including 3,000-4,000 more content moderators. But Facebook says these will not be additional hires — they will be included in the 20,000 already promised to tackle various moderation issues in the coming months.

What Is Facebook Missing?

The most obvious flaw in Facebook’s new system is that it misses ads it should catch. Right now, it’s easy to find political ads that are missing from their archive. Take this one, from the Washington State Democratic Party. Just minutes after Facebook finished announcing its launch of the tool, a participant in ProPublica’s Facebook Political Ad Collector project saw this ad, criticizing Republican congresswoman Cathy McMorris Rodgers… but it wasn’t in the database.

And there are others.

The company acknowledged that the process is still a work in progress, reiterating its request that users pitch in by reporting the political ads that lack disclosures.

Even as Facebook’s system gets better at identifying political ads, the company is withholding a critical piece of information in the ads it’s publishing. While we’ll see some demographic information about who saw a given ad, Facebook is not indicating which audiences the advertiser intended to target — categories that often include racial or political characteristics and which have been controversial in the past.

This information is critical to researchers and journalists trying to make sense of political advertising on Facebook. Take, for instance, this ad promoting the environmental benefits of nuclear power, from a group called Nuclear Matters: the group chose specifically to show it to people interested in veganism — a fact we wouldn’t know from looking at the demographics of the users who saw the ad.

Facebook said it considers the information about who saw an ad — age, gender and location — sufficient. Rob Leathern, Facebook’s Director of Product Management, said that the limited demographics-only breakdown “offers more transparency than the intent, in terms of showing the targeting.”

The company is also promising to launch an API, a technical tool which will allow outsiders to write software that would look for patterns in the new ad database. The company says it will launch an API “later this summer” but hasn’t said what data it will contain or who will have access to it.

ProPublica’s own Facebook Ad Collector tool, which also collects political ads spotted on Facebook, has an API that can be accessed by anyone. It also includes the targeting information — which users can also see on each ad that they view.
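For readers who want to try pattern-hunting themselves, the sketch below shows roughly what querying an ad archive over HTTP could look like. Everything here is a placeholder: the endpoint, query parameter, and response shape are assumptions for illustration, not ProPublica's documented API.

```typescript
// Hypothetical sketch of polling an ad-archive API for ads about a
// topic and filtering for ones missing a "paid for by" disclosure.
// The URL and JSON shape below are invented placeholders.
interface ArchivedAd {
  id: string;
  message: string;
  paid_for_by: string | null; // null = ran without a disclosure
  targeting: string[] | null;
}

async function adsMissingDisclosures(topic: string): Promise<ArchivedAd[]> {
  const url = `https://ad-archive.example.org/api/ads?q=${encodeURIComponent(topic)}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`request failed: ${response.status}`);
  const body = (await response.json()) as { ads: ArchivedAd[] };
  // Ads that ran without a "paid for by" tag are the interesting ones.
  return body.ads.filter((ad) => ad.paid_for_by === null);
}
```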

Facebook said it would not release data about ads flagged by users as political and then rejected by the system. We’re curious about those, and we know firsthand that their software can be imperfect. We’ve attempted to buy ads specifically about our journalism that were flagged as problematic — because the ads “contained profanity,” or were misclassified as discriminatory ads for “employment, credit or housing opportunities” by mistake.

Facebook’s track record on initiatives aimed at improving the transparency of its massively profitable advertising system is spotty. The company has said it’s going to rely in part on artificial intelligence to review ads — the same sort of technology that the company said in the past it would use to block discriminatory ads for housing, employment and credit opportunities.

When we tested the system almost a year after a ProPublica story showed Facebook was allowing advertisers to target housing ads in a way that violated Fair Housing Act protections, we found that the company was still approving housing ads that excluded African-Americans and other “multicultural affinities” from seeing them. The company was pressured to implement several changes to its ad portal and a Fair Housing group filed a lawsuit against the company.

Facebook also plans to rely in part on users to find and report political ads that get through the system without the required disclosures.

But its track record of moderating user-flagged content — when it comes to both hate speech and advertising — has been uneven. Last December, ProPublica brought 49 cases of user-flagged offensive speech to Facebook, and the company acknowledged that its moderators had made the wrong call in 22 of them.

The company admits it's playing a “cat and mouse game” with people trying to pass political ads through their system unnoticed. Just last month, Ohio Democratic gubernatorial candidate Richard Cordray’s campaign ran Facebook ads criticizing his opponent — but from a page called “Ohio Primary Info.”

The need for ad transparency goes way beyond Russian bad actors. Our tool has already caught scams and malware disguised as politics, which users raised as a problem years before Facebook made any meaningful change.

If you flag an ad to Facebook, please report it to us as well by sending an email to political.ads@propublica.org. We will be watching to see how well Facebook responds when users flag ads.

How Will They Enforce the New Rules?

It’s one thing to create a set of rules, and another to enforce them consistently and on a large scale.

Facebook, which kept its content moderation and hate speech policies secret until they were revealed by ProPublica, won’t share the specific rules governing political ad content or details about the instructions moderators receive.

Leathern said the company is keeping the rules secret to frustrate the efforts of “bad actors who try to game our enforcement systems.”

Facebook has said it’s looking to flag both electoral ads and those that take a position on its list of twenty “national legislative issues of public importance”. These range from the concrete, like “abortion” and “taxes,” to broad topics like “health” and “values.”

Facebook acknowledges its system will make mistakes and says it will improve over time. Ads for specific candidates are relatively easy to detect. “We’ll likely miss ads when they aim to persuade,” said Katie Harbath, Facebook’s Global Politics and Government Outreach Director.

We plan to keep an eye out for ads that don’t make it into the archive. We’ll be looking for ads that our Political Ad Collector tool finds that aren’t in Facebook’s database.

Want to Help?

We need your help building out our independent database of political ads! If you’re still reading this article, we’re giving you permission to stop and install the Political Ad Collector extension. Here’s what you need to know about how it works.

You can also help us find other people who can install the tool. We are especially in need of people who aren’t ProPublica readers already. We need people from a diverse set of backgrounds, and with different perspectives and political beliefs. Please encourage your friends and relatives — especially the ones you avoid talking politics with — to install it.

Do You Work at a News Outlet and Want to Partner With Us on This?

Awesome. We’re already working with quite a few newsrooms all over the world, including the CBC in Canada, Bridge Magazine in Michigan, The Guardian in Australia and more.

In the U.S., we’re trying to get eyes and ears on the ground in as many local elections as possible. If your readers would be interested in joining our transparency effort, please reach out. We’re happy to send more information about this and our larger Electionland project.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that this approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of the Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators, allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he writes.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many other social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."
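To make the EFF's description concrete, here is a rough sketch of the hover-swap trick in browser-style TypeScript. This is illustrative only, not Facebook's actual code; the attribute name and the unwrapping helper are assumptions based on the shim URL format quoted above:

```typescript
// Hypothetical helper: unwrap the real destination from a shim link
// of the form https://l.facebook.com/l.php?u=<encoded-real-url>...
function realDestination(shim: string): string {
  const u = new URL(shim).searchParams.get("u");
  return u ?? shim;
}

// Sketch of the hover-swap: the page loads with shim hrefs; on hover
// the visible href is swapped for the clean URL (so the link looks
// innocuous), while the shim is parked in a hidden data attribute.
function deShimOnHover(link: HTMLAnchorElement): void {
  link.addEventListener("mouseover", () => {
    link.dataset.shimHref = link.href;      // stash the l.facebook.com shim
    link.href = realDestination(link.href); // show the real destination
  });
  link.addEventListener("click", () => {
    // On click, a background request still pings the shim host,
    // tracking the click just the same.
    if (link.dataset.shimHref) {
      navigator.sendBeacon(link.dataset.shimHref);
    }
  });
}
```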

Lovely. And Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."
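Stripping the Referer header is something any browser extension can do with the WebExtensions webRequest API. Below is a minimal sketch in the same spirit as, but not identical to, Privacy Badger's update; it assumes a background script with the "webRequest" and "webRequestBlocking" permissions declared in the extension manifest:

```typescript
// Background-script sketch: drop any Referer header that would
// reveal a facebook.com page as the source of a navigation.
// Simplified illustration, not Privacy Badger's actual code.
chrome.webRequest.onBeforeSendHeaders.addListener(
  (details) => ({
    requestHeaders: (details.requestHeaders ?? []).filter(
      (header) =>
        !(
          header.name.toLowerCase() === "referer" &&
          (header.value ?? "").includes("facebook.com")
        )
    ),
  }),
  { urls: ["<all_urls>"] },
  ["blocking", "requestHeaders"]
);
```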

Thanks to the EFF for focusing on online privacy and delivering effective solutions.


Twitter Advised Its Users To Change Their Passwords After Security Blunder

Yesterday, Twitter.com advised all of its users to change their passwords after a security blunder left users' passwords stored internally in an unprotected format. The social networking service released a statement on May 3rd:

"We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password."

Security experts advise consumers not to use the same password at several sites or services. Repeated use of the same password makes it easy for criminals to hack into multiple sites or services.

The statement by Twitter.com also explained that it masks users' passwords:

"... through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
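The bug class Twitter describes is easy to picture: in the correct flow, the password is hashed before anything reaches a log; in the buggy flow, the plaintext is written to a log first. A minimal sketch, assuming the bcryptjs npm package (Twitter's real stack surely differs):

```typescript
import * as bcrypt from "bcryptjs"; // assumed library, for illustration

// Correct flow: hash first, then log only non-sensitive fields.
function storePassword(username: string, password: string): string {
  const hash = bcrypt.hashSync(password, 10); // salted, one-way bcrypt hash
  console.log(`credential updated for ${username}`); // no secret in the log
  return hash; // only the hash is stored
}

// The bug class Twitter described: the plaintext password reaches an
// internal log before the hashing step runs.
function storePasswordBuggy(username: string, password: string): string {
  console.log(`credential updated for ${username}: ${password}`); // BUG: plaintext logged
  return bcrypt.hashSync(password, 10);
}

// Later verification compares against the stored hash, so the
// plaintext never needs to be kept anywhere.
function checkPassword(password: string, storedHash: string): boolean {
  return bcrypt.compareSync(password, storedHash);
}
```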

The good news: Twitter found the bug by itself. The not-so-good news: the statement was short on details. It did not explain the fixes that will prevent this blunder from happening again, nor did it say how many users were affected. Twitter has about 330 million users, so it is prudent to assume that all users were affected.


How to Wrestle Your Data From Data Brokers, Silicon Valley — and Cambridge Analytica

[Editor's note: today's guest post, by reporters at ProPublica, discusses data brokers you may not know, the data collected and archived about consumers, and options for consumers to (re)gain as much privacy as possible. It is reprinted with permission.]

By Jeremy B. Merrill, ProPublica

Cambridge Analytica thinks that I’m a "Very Unlikely Republican." Another political data firm, ALC Digital, has concluded I’m a "Socially Conservative," Republican, "Boomer Voter." In fact, I’m a 27-year-old millennial with no set party allegiance.

For all the fanfare, the burgeoning field of mining our personal data remains an inexact art.

One thing is certain: My personal data, and likely yours, is in more hands than ever. Tech firms, data brokers and political consultants build profiles of what they know — or think they can reasonably guess — about your purchasing habits, personality, hobbies and even what political issues you care about.

You can find out what those companies know about you, but be prepared to be stubborn. Very stubborn. To demonstrate how this works, we’ve chosen a couple of representative companies from three major categories: data brokers, big tech firms and political data consultants.

Few of them make it easy. Some will show you on their websites; others will make you ask for your digital profile via the U.S. mail. And then there’s Cambridge Analytica, the controversial Trump campaign vendor that has come under intense fire in light of a report in the British newspaper The Observer and in The New York Times that the company used improperly obtained data from Facebook to help build voter profiles.

To find out what the chaps at the British data firm have on you, you’re going to need both stamps and a "cheque."

Once you see your data, you’ll have a much better understanding of how this shadowy corner of the new economy works. You’ll see what seemingly personal information they know about you … and you’ll probably have some hypotheses about where this data is coming from. You’ll also probably see some predictions about who you are that are hilariously wrong.

And if you do obtain your data from any of these companies, please let us know your thoughts at politicaldata@propublica.org. We won’t share or publish what you say (unless you tell us that it’s OK).

Cambridge Analytica and Other Political Consultants

Making statistically informed guesses about Americans’ political beliefs and pet issues is a common business these days, with dozens of firms selling data to candidates and issue groups about the purported leanings of individual American voters.

Few of these firms have to give you your data. But Cambridge Analytica is required to do so by an obscure European rule.

Cambridge Analytica:

Around the time of the 2016 election, Paul-Olivier Dehaye, a Belgian mathematician and founder of a website that helps people exercise their data protection rights called PersonalData.IO, approached me with an idea for a story. He flagged some of Cambridge Analytica’s claims about the power of its "psychographic" targeting capabilities and suggested that I demand my data from them.

So I sent off a request, following Dehaye’s coaching, and citing the UK Data Protection Act 1998, the British implementation of a little-known European Union data-protection law that grants individuals (even Americans) the right to see the data European companies compile about them.

It worked. I got back a spreadsheet of data about me. But it took months, cost ten pounds — and I had to give them a photo ID and two utility bills. Presumably they didn’t want my personal data falling into the wrong hands.

How You Can Request Your Data From Cambridge Analytica:

  1. Visit Cambridge Analytica’s website here and fill out this web form.
  2. After you submit the form, the page will immediately request that you email to data.compliance@cambridgeanalytica.org a photo ID and two copies of your utility bills or bank statements, to prove your identity. This page will also include the company’s bank account details.
  3. Find a way to send them 10 GBP. You can try wiring this from your bank, though it may cost you an additional $25 or so — or ask a friend in the UK to go to their bank and get a cashier’s check. Your American bank probably won’t let you write a GBP-denominated check. Two services I tried, Xoom and TransferWise, weren’t able to do it.
  4. Eventually, Cambridge Analytica will email you a small Excel spreadsheet of information and a letter. You might have to wait a few weeks. Celeste LeCompte, ProPublica’s vice president of business development, requested her data on March 27 and still hasn’t received it.

Because the company is based in the United Kingdom, it had no choice but to fulfill my request. In recent weeks, the firm has come under intense fire after The New York Times and the British paper The Observer disclosed that it had used improperly obtained data from Facebook to build profiles of American voters. Facebook told me that data about me was likely transmitted to Cambridge Analytica because a person with whom I am "friends" on the social network had taken the now-infamous "This Is Your Digital Life" quiz. For what it’s worth, my data shows no sign of anything derived from Facebook.

What You Might Get Back From Cambridge Analytica:

Cambridge Analytica had generated 13 data points about my views: 10 political issues, ranked by importance; two guesses at my partisan leanings (one blank); and a guess at whether I would turn out in the 2016 general election.

They told me that the lower the rank, the higher the predicted importance of the issue to me.

Alongside that data labeled "models" were two other types of data that are run-of-the-mill and widely used by political consultants. One sheet contained "core data" — that is, personal info, sliced and diced a few different ways, perhaps to be used more easily as parameters for a statistical model. It included my address, my electoral district, the census tract I live in and my date of birth.

The spreadsheet included a few rows of "election returns" — previous elections in New York State in which I had voted. (Intriguingly, Cambridge Analytica missed that I had voted in 2015’s snoozefest of a vote-for-five-of-these-five judicial election. It also didn’t know about elections in which I had voted in North Carolina, where I lived before I lived in New York.)

ALC Digital

ALC Digital is another data broker, which says its "audiences are built from multi-sourced, verified information about an individual." Their data is distributed via Oracle Data Cloud, a service that lets advertisers target specific audiences of people — like, perhaps, people who are Boomer Voters and also Republicans.

The firm brags in an Oracle document posted online about how hard it is to avoid their data collection efforts, saying, "It has no cookies to erase and can’t be ‘cleared.’ ALC Real World Data is rooted in reality, and doesn’t rely on inferences or faulty models."

How You Can Request Your Data From ALC Digital:

Here’s how to find the predictions about your political beliefs data in Oracle Data Cloud:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Click on the Partner Segments tab.
  3. Scroll on through until you find ALC Digital.

You may have to scroll for a while before you find it.

And not everyone appears to have data from ALC Digital, so don’t be shocked if you can’t find it. If you don’t, there may be other fascinating companies with data about who you are in your Oracle file.

What You Might Get Back From ALC Digital:

When I downloaded the data last year, it said I was "Socially Conservative," "Boomer Voter" — as well as a female voter and a tax reform supporter.

Recently, when I checked again, those categories had disappeared entirely. I had nothing from ALC Digital.

ALC Digital is not required to release this data. It is disclosed via the Oracle Data Cloud. Fran Green, the company’s president, said that Aristotle, a longtime political data company, “provides us with consumer data that populates these audiences.” She also said that “we do not claim to know people’s ‘beliefs.’”

Big Tech

Big tech firms like Google and Facebook tend to make their money by selling ads, so they build extensive profiles of their users’ interests and activities. They also depend on their users’ goodwill to keep us voluntarily giving them our locations, our browsing histories and plain ol’ lists of our friends and interests. (So far, these popular companies have not faced much regulation.) Both make it easy to download the data that they keep on you.

Firms like Google and Facebook don’t sell your data — because it’s their competitive advantage. Google’s privacy page screams in 72 point type: "We do not sell your personal information to anyone." As websites that we visit frequently, they sell access to our attention, so companies that want to reach you in particular can do so with these companies’ sites or other sites that feature their ads.

Facebook

How You Can Request Your Data From Facebook:

You of course have to have a Facebook account and be logged in:

  1. Visit https://www.facebook.com/settings on your computer.
  2. Click the “Download a copy of your Facebook data” link.
  3. On the next page, click “Start My Archive.”
  4. Enter your password, then click “Start My Archive” again.
  5. You’ll get an email immediately, and another one saying “Your Facebook download is ready” when your data is ready to be downloaded. You’ll get a notification on Facebook, too. Mine took just a few minutes.
  6. Once you get that email, click the link, then click Download Archive. Then reenter your password, which will start a zip file downloading.
  7. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.” You’ll get a folder called something like “facebook-jeremybmerrill,” but, of course, with your username instead of mine.
  8. Open the folder and double-click “index.htm” to open it in your web browser.

What You Might Get Back From Facebook

Facebook designed its archive to first show you your profile information. That’s all information you typed into Facebook and that you probably intended to be shared with your friends. It’s no surprise that Facebook knows what city I live in or what my AIM screen name was — I told Facebook those things so that my friends would know.

But it’s a bit of a surprise that they decided to feature a list of my ex-girlfriends — what they blandly termed "Previous Relationships" — so prominently.

As you dig deeper in your archive, you’ll find more information that you gave Facebook, but that you might not have expected the social network to keep hold of for years: if you’re me, that’s the Nickelback concert I apparently RSVPed to, posts about switching high schools and instant messages from my freshman year in college.

But finally, you’ll find the creepier information: what Facebook knows about you that you didn’t tell it, on the "Ads" page. You’ll find "Ads Topics" that Facebook decided you were interested in, like Housing, ESPN or the town of Ellijay, Georgia. And, you’ll find a list of advertisers who have obtained your contact information and uploaded it to Facebook, as part of a so-called Custom Audience of specific people to whom they want to show their ads.

You’ll find more of that creepy information on your Ads Preferences page. Despite Mark Zuckerberg telling Rep. Jerry McNerney, D-Calif., in a hearing earlier this month that “all of your information is included in your ‘download your information,’” my archive didn’t include that list of ad categories that can be used to target ads to me. (Some other types of information aren’t included in the download, like other people’s posts you’ve liked. Those are listed here, along with where to find them — which, for most, is in your Activity Log.)

This area may include Facebook’s guesses about who you are, boiled down from some of your activities. Most Americans will have a guess about their politics — Facebook says I’m a "moderate" about U.S. Politics — and some will have a guess about so-called "multicultural affinity," which Facebook insists is not a guess about your ethnicity, but rather what sorts of content "you are interested in or will respond well to." For instance, Facebook recently added that I have a "Multicultural Affinity: African American." (I’m white — though, because Facebook’s definition of "multicultural affinity" is so strange, it’s hard to tell if this is an error on Facebook’s part.)

Facebook also doesn’t include your browsing history — the subject of back-and-forths between Mark Zuckerberg and several members of Congress. It says it keeps that just long enough to boil it down into those “Ad Topics.”

For people without Facebook accounts, Facebook says to email datarequests@support.facebook.com or fill out an online form to download what Facebook knows about you. One puzzle here is how Facebook gathers data on people whose identities it may not know. It may know that a person using a phone from Atlanta, Georgia, has accessed a Facebook site and that the same person was last week in Austin, Texas, and before that Cincinnati, but it may not know that that person is me. It’s in principle difficult for the company to give the data it collects about logged-out users if it doesn’t know exactly who they are.

Google

Like Facebook, Google will give you a zip archive of your data. Google’s can be much bigger, because you might have stored gigabytes of files in Google Drive or years of emails in Gmail.

But like Facebook, Google does not provide its guesses about your interests, which it uses to target ads. Those guesses are available elsewhere.

How You Can Request Your Data From Google:

  1. Visit https://takeout.google.com/settings/takeout/ to use Google’s cutely named Takeout service.
  2. You’ll have to pick which data you want to download and examine. You should definitely select My Activity, Location History and Searches. You may not want to download gigabytes of emails, if you use Gmail, since that uses a lot of space and may take a while. (That’s also information you shouldn’t be surprised that Google keeps — you left it with Gmail so that you could use Google’s search expertise to hold on to your emails.)
  3. Google will present you with a few options for how to get your archive. The defaults are fine.
  4. Within a few hours, you should get an email with the subject "Your Google data archive is ready." Click Download Archive and log in again. That should start the download of a file named something like "takeout-20180412T193535.zip."
  5. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.”
  6. You’ll get a folder called Takeout. Open the file inside it called "index.html" in your web browser to explore your archive.

What You Might Get Back From Google:

Once you open the index.html file, you’ll see icons for the data you chose in step 2. Try exploring "Ads" under "My Activity" — you’ll see a list of times you saw Google Ads, including on apps on your phone.

Google also includes your search history, under "Searches" — in my case, going back to 2013. Google knows what I had forgotten: I Googled a bunch of dinosaurs around Valentine’s Day that year… And it’s not just web searches: the Sound Search history reminded me that at some point, I used that service to identify Natalie Imbruglia’s song "Torn."

Android phone users might want to check the "Android" folder: Google keeps a list of each app you’ve used on your phone.

Most of the data contained here are records of ways you’ve directly interacted with Google — and the company really does use those records to improve how its services work. I’m glad to see my searches auto-completed, for instance.

But the company also creates data about you: Visit the company’s Ads Settings page to see some of the “topics” Google guesses you’re interested in, and which it uses to personalize the ads you see. Those topics are fairly general — it knows I’m interested in “Politics” — but the company says it has more granular classifications that it doesn’t include on the list. Those more granular, hidden classifications are on various topics, from sports to vacations to politics, where Google does generate a guess whether some people are politically “left-leaning” or “right-leaning.”

Data Brokers

Here’s who really does sell your data: data brokers like the credit reporting agency Experian and a firm named Epsilon.

These sometimes-shady firms are middlemen who buy your data from tracking firms, survey marketers and retailers, slice and dice the data into “segments,” then sell those on to advertisers.

Experian

Experian is best known as a credit reporting firm, but your credit cards aren’t all they keep track of. They told me that they “firmly believe people should be made aware of how their data is being used” — so if you print and mail them a form, they’ll tell you what data they have on you.

“Educated consumers,” they said, “are better equipped to be effective, successful participants in a world that increasingly relies on the exchange of information to efficiently deliver the products and services consumers demand.”

How You Can Request Your Data From Experian:

  1. Visit Experian’s Marketing Data Request site and print the Marketing Data Report Request form.
  2. Print a copy of your ID and proof of address.
  3. Mail it all to: Experian Marketing Services, PO Box 40, Allen, TX 75013.
  4. Wait for them to mail you something back.

What You Might Get Back From Experian:

Expect to wait a while. I’ve been waiting almost a month.

They also come up with a guess about your political views that’s integrated with Facebook — our Facebook Political Ad Collector project has found that many political candidates use Experian’s data to target their Facebook ads to likely supporters.

You should expect to find a guess about your political views that would be useful to those candidates — as well as categories derived from your purchasing data.

Experian told me they generate the data they have about you from a long list of sources, including public records and “historical catalog purchase information” — as well as calculating it from predictive models.

Epsilon

How You Can Request Your Data From Epsilon:

  1. Visit Epsilon’s Marketing Data Summary Request form.
  2. After entering your name and address, Epsilon will ask some of those identity-verification questions that quiz you about your old addresses and cars. If your identity can’t be verified with those, Epsilon will ask you to mail in a form.
  3. Wait for Epsilon to mail you your data; it took about a week for me.

What You Might Get Back From Epsilon:

Epsilon has information on “demographics” and “lifestyle interests” — at the household level. It also includes a list of “household purchases.”

It also has data that political candidates — including Randy Bryce, a Wisconsin Democrat who’s seeking his party’s nomination to run for retiring Speaker Paul Ryan’s seat, and Rep. Tulsi Gabbard, D-Hawaii — use to target their Facebook ads.

In my case, Epsilon knows I buy clothes, books and home office supplies, among other things — but isn’t any more specific. They didn’t tell me what political beliefs they believe I hold. The company didn’t respond to a request for comment.

Oracle

Oracle’s Data Cloud aggregates data about you from Oracle’s own services, along with so-called third-party data from other companies.

How You Can Request Your Data From Oracle:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Explore each tab, from “Basic Info” to “Hobbies & Interests” and “Partner Segments.”

Not fun scrolling through all those pages? I have 84 pages of four pieces of data each.

You can’t search, and all the text is actually images of text. Oracle declined to say why it chose to make its site so hard to use.

What You Might Get Back From Oracle:

My Oracle profile includes nearly 1500 data points, covering all aspects of my life, from my age to my car to how old my children are to whether I buy eggs. These profiles can even say if you’re likely to dress your pet in a costume for Halloween. But many of them are off-base or contradictory.

Many companies in Oracle’s data, besides ALC Digital, offer guesses about my political views: Data from one company, uploaded by AcquireWeb, says that my political affiliations are Democrat and Independent … but also that I’m a “Mild Republican.” Another company, an Oracle subsidiary called AddThis, says that I’m a “Liberal.” Cuebiq, which calls itself a “location intelligence” company, says I’m in a subset of “Democrats” called “Liberal Professions.”

If an advertiser wants to show an ad to Spring Break Enthusiasts, Oracle can enable that. I’m apparently a Spring Break Enthusiast. Do I buy eggs? I sure do. Data on Oracle’s site associated with AcquireWeb says I’m a cat owner …

But it also “knows” I’m a dog owner, which I’m not.

Al Gadbut, the CEO of AcquireWeb, explained that the guesses associated with his company weren’t based on my personal data, but rather the tendencies of people in my geographical area — hence the seemingly contradictory political guesses. He said his firm doesn’t generate the data, but rather uploaded it on behalf of other companies. Cuebiq’s guess was a “probabilistic inference” they drew from location data submitted to them by some app on my phone. Valentina Marastoni-Bieser, Cuebiq’s senior vice president of marketing, wouldn’t tell me which app it was, though.

Data for sale here includes a long list of what TV shows I — supposedly — watch.

But it’s not all wrong. AddThis can tell that I’m “Young & Hip.”

Takeaways:

The above list is just a sampling of the firms that collect your data and try to draw conclusions about who you are — not just sites you visit like Facebook and controversial firms like Cambridge Analytica.

You can make some guesses as to where this data comes from — especially the more granular consumer data from Oracle. For each data point, it’s worth considering: Who’d be in a position to sell a list of what TV shows I watch, or, at least, a list of what TV shows people demographically like me watch? Who’d be in a position to sell a list of what groceries I, or people similar to me in my area, buy? Some of those companies — companies you’re likely paying, and for whom the internet adage that “if you’re not paying, you’re the product” doesn’t hold — are likely selling data about you without your knowledge. Other data points, like the location data used by Cuebiq, can come from any number of apps or websites, so it may be difficult to figure out exactly which one has passed it on.

Companies like Google and Facebook often say that they’ll let you “correct” the data that they hold on you — tacitly acknowledging that they sometimes get it wrong. But if receiving relevant ads is not important to you, they’ll let you opt out entirely — or, presumably, “correct” your data to something false.

An upcoming European Union rule called the General Data Protection Regulation portends a dramatic change to how data is collected and used on the web — if only for Europeans. No such law seems likely to be passed in the U.S. in the near future.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


News Media Alliance Challenges Tech Companies To 'Accept Accountability' And Responsibility For Filtering News In Their Platforms

Last week, David Chavern, the President and CEO of News Media Alliance (NMA), testified before the House Judiciary Committee. The NMA is a nonprofit trade association representing over 2,000 news organizations across the United States. Mr. Chavern's testimony focused upon the problem of fake news, often aided by social networking platforms.

His comments first described current conditions:

"... Quality journalism is essential to a healthy and functioning democracy -- and my members are united in their desire to fight for its future.

Too often in today’s information-driven environment, news is included in the broad term "digital content." It’s actually much more important than that. While some low-quality entertainment or posts by friends can be disappointing, inaccurate information about world events can be immediately destructive. Civil society depends upon the availability of real, accurate news.

The internet represents an extraordinary opportunity for broader understanding and education. We have never been more interconnected or had easier and quicker means of communication. However, as currently structured, the digital ecosystem gives tremendous viewpoint control and economic power to a very small number of companies – the tech platforms that distribute online content. That control and power must come with new responsibilities... Historically, newspapers controlled the distribution of their product; the news. They invested in the journalism required to deliver it, and then printed it in a form that could be handed directly to readers. No other party decided who got access to the information, or on what terms. The distribution of online news is now dominated by the major technology platforms. They decide what news is delivered and to whom – and they control the economics of digital news..."

Last month, a survey found that roughly two-thirds of U.S. adults (68%) use Facebook.com, and about three-quarters of those use the social networking site daily. In 2016, a survey found that 62 percent of adults in the United States get their news from social networking sites. The corresponding statistic in 2012 was 49 percent. That 2016 survey also found that fewer social media users get their news from other platforms: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites/apps (28 percent), radio (25 percent), and print newspapers (20 percent).

Mr. Chavern then described the problems with two specific tech companies:

"The First Amendment prohibits the government from regulating the press. But it doesn’t prevent Facebook and Google from acting as de facto regulators of the news business.

Neither Google nor Facebook are – or have ever been – "neutral pipes." To the contrary, their businesses depend upon their ability to make nuanced decisions through sophisticated algorithms about how and when content is delivered to users. The term “algorithm” makes these decisions seem scientific and neutral. The fact is that, while their decision processes may be highly-automated, both companies make extensive editorial judgments about accuracy, relevance, newsworthiness and many other criteria.

The business models of Facebook and Google are complex and varied. However, we do know that they are both immense advertising platforms that sell people’s time and attention. Their "secret algorithms" are used to cultivate that time and attention. We have seen many examples of the types of content favored by these systems – namely, click-bait and anything that can generate outrage, disgust and passion. Their systems also favor giving users information like that which they previously consumed, thereby generating intense filter bubbles and undermining common understandings of issues and challenges.

All of these things are antithetical to a healthy news business – and a healthy democracy..."

Earlier this month, Apple Computer and Facebook executives exchanged criticisms about each other's business models and privacy. Mr. Chavern's testimony before Congress also described more problems and threats:

"Good journalism is factual, verified and takes into account multiple points of view. It can take a lot of time and investment. Most particularly, it requires someone to take responsibility for what is published. Whether or not one agrees with a particular piece of journalism, my members put their names on their product and stand behind it. Readers know where to send complaints. The same cannot be said of the sea of bad information that is delivered by the platforms in paid priority over my members’ quality information. The major platforms’ control over distribution also threatens the quality of news for another reason: it results in the “commoditization” of news. Many news publishers have spent decades – often more than a century – establishing their brands. Readers know the brands that they can trust — publishers whose reporting demonstrates the principles of verification, accuracy and fidelity to facts. The major platforms, however, work hard to erase these distinctions. Publishers are forced to squeeze their content into uniform, homogeneous formats. The result is that every digital publication starts to look the same. This is reinforced by things like the Google News Carousel, which encourages users to flick back and forth through articles on the same topic without ever noticing the publisher. This erosion of news publishers’ brands has played no small part in the rise of "fake news." When hard news sources and tabloids all look the same, how is a customer supposed to tell the difference? The bottom line is that while Facebook and Google claim that they do not want to be "arbiters of truth," they are continually making huge decisions on how and to whom news content is delivered. These decisions too often favor free and commoditized junk over quality journalism. The platforms created by both companies could be wonderful means for distributing important and high-quality information about the world. But, for that to happen, they must accept accountability for the power they have and the ultimate impacts their decisions have on our economic, social and political systems..."

Download Mr. Chavern's complete testimony. Industry watchers argue that recent changes by Facebook have hurt local news organizations. MediaPost reported:

"When Facebook changed its algorithm earlier this year to focus on “meaningful” interactions, publishers across the board were hit hard. However, local news seemed particularly vulnerable to the alterations. To assuage this issue, the company announced that it would prioritize news related to local towns and metro areas where a user resided... To determine how positively that tweak affected local news outlets, the Tow Center measured interactions for posts from publications coming from 13 metro areas... The survey found that 11 out of those 13 have consistently seen a drop in traffic between January 1 and April 1 of 2018, allowing the results to show how outlets are faring nine weeks after the algorithm change. According to the Tow Center study, three outlets saw interactions on their pages decrease by a dramatic 50%. These include The Dallas Morning News, The Denver Post, and The San Francisco Chronicle. The Atlanta Journal-Constitution saw interactions drop by 46%."

So, huge problems persist.

Early in my business career, I had the opportunity to develop and market an online service using content from Dow Jones News/Retrieval. That experience taught me that the news - hard news - includes who, what, when, and where something happened. Everything else is either opinion, commentary, analysis, an advertisement, or fiction. And, it is critical to know the differences and/or learn to spot each type. Otherwise, you are likely to be misled, misinformed, or fooled.


Many People Are Concerned About Facebook. Do Any Other Tech Companies Pose Privacy Threats?

The massive data breach involving Facebook and Cambridge Analytica focused attention and privacy concerns on the social networking giant. Reports about extensive tracking of users and non-users, testimony by its CEO before the U.S. Congress, and online tools allegedly allowing advertisers to violate federal housing laws have also focused attention on Facebook.

Are there any other tech or advertising companies which consumers should have privacy concerns about? What other companies collect massive amounts of information about consumers? It seems wise to look beyond Facebook to avoid missing significant threats.

Google logo To answer these questions, the Wall Street Journal compared Facebook and Google:

"... Alphabet Inc.’s Google is a far bigger threat by many measures: the volume of information it gathers, the reach of its tracking and the time people spend on its sites and apps... It’s likely that Google has shadow profiles on at least as many people as Facebook does, says Chandler Givens, chief executive of TrackOff, which develops software to fight identity theft. Google allows everyone, whether they have a Google account or not, to opt out of its ad targeting. Yet, like Facebook, it continues to gather your data... Google Analytics is far and away the web’s most dominant analytics platform. Used on the sites of about half of the biggest companies in the U.S., it has a total reach of 30 million to 50 million sites. Google Analytics tracks you whether or not you are logged in... Google uses, among other things, our browsing and search history, apps we’ve installed, demographics such as age and gender and, from its own analytics and other sources, where we’ve shopped in the real world. Google says it doesn’t use information from “sensitive categories” such as race, religion, sexual orientation or health..."

There's plenty more, so read the entire WSJ article. A good review worthy of further discussion.

However, more companies pose privacy threats. Equifax, one of three major credit reporting agencies, easily makes my list. Its massive data breach affected half the population in the USA, plus persons worldwide. An investigation discovered several data security failures at Equifax.

Also on my list would be the U.S. Federal Communications Commission (FCC). Using some "light touch" legal ju-jitsu and vague promises of enabling infrastructure investments, the Republican-majority Commissioners and Trump appointee Ajit Pai at the FCC revoked broadband privacy protections for consumers last year... and punted broadband oversight responsibility to the U.S. Federal Trade Commission (FTC). This allowed corporate internet service providers (ISPs) to freely track and collect sensitive data about internet users without providing notice or opt-out mechanisms.

Uber logo Uber also makes my list, given its massive data breach affecting 57 million persons. Earlier this month, the FTC announced a revised settlement agreement where Uber:

"... failed to disclose a significant breach of consumer data that occurred in 2016 -- in the midst of the FTC’s investigation that led to the August 2017 settlement announcement... the revised settlement could subject Uber to civil penalties if it fails to notify the FTC of certain future incidents involving unauthorized access of consumer information... In announcing the original proposed settlement with Uber in August 2017, the FTC charged that the company had failed to live up to its claims that it closely monitored employee access to rider and driver data and that it deployed reasonable measures to secure personal information stored on a third-party cloud provider’s servers.

In the revised complaint, the FTC alleges that Uber learned in November 2016 that intruders had again accessed consumer data the company stored on its third-party cloud provider’s servers by using an access key an Uber engineer had posted on a code-sharing website... the intruders used the access key to download from Uber’s cloud storage unencrypted files that contained more than 25 million names and email addresses, 22 million names and mobile phone numbers, and 600,000 names and driver’s license numbers of U.S. Uber drivers and riders... Uber paid the intruders $100,000 through its third-party “bug bounty” program and failed to disclose the breach to consumers or the Commission until November 2017... the new provisions in the revised proposed order include requirements for Uber to submit to the Commission all the reports from the required third-party audits of Uber’s privacy program rather than only the initial such report..."

Yes, Wells Fargo bank makes my list, too. This blog post explains why. Who is on your list of the biggest privacy threats to consumers?


How Facebook Tracks Its Users, And Non-Users, Around the Internet

Facebook logo Many Facebook users wrongly believe that the social networking service doesn't track them around the internet when they aren't signed in. Also, many non-users of Facebook wrongly believe that they are not tracked.

Earlier this month, Consumer Reports explained the tracking:

"As you travel through the web, you’re likely to encounter Facebook Like or Share buttons, which the company calls Social Plugins, on all sorts of pages, from news outlets to shopping sites. Click on a Like button and you can see the number on the page’s counter increase by one; click on a Share button and a box opens up to let you post a link to your Facebook account.

But that’s just what’s happening on the surface. "If those buttons are on the page, regardless of whether you touch them or not, Facebook is collecting data," said Casey Oppenheim, co-founder of data security firm Disconnect."

This blog discussed social plugins back in 2010. However, the tracking includes more technologies:

"... every web page contains little bits of code that request the pictures, videos, and text that browsers need to display each item on the page. These requests typically go out to a wide swath of corporate servers—including Facebook—in addition to the website’s owner. And such requests can transmit data about the site you’re on, the browser you are using, and more. Useful data gets sent to Facebook whether you click on one of its buttons or not. If you click, Facebook finds out about that, too. And it learns a bit more about your interests.

In addition to the buttons, many websites also incorporate a Facebook Pixel, a tiny, transparent image file the size of just one of the millions of pixels on a typical computer screen. The web page makes a request for a Facebook Pixel, just as it would request a Like button. No user will ever notice the picture, but the request to get it is packaged with information... Facebook explains what data can be collected using a Pixel, such as products you’ve clicked on or added to a shopping cart, in its documentation for advertisers. Web developers can control what data is collected and when it is transmitted... Even if you’re not logged in, the company can still associate the data with your IP address and all the websites you’ve been to that contain Facebook code."
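To make that mechanism concrete, here is a minimal sketch in TypeScript of how a tracking pixel of this kind can work. Everything in it is a stand-in: the endpoint, parameter names, and payload are hypothetical, for illustration only, and are not Facebook's actual code or API.

```typescript
// Minimal tracking-pixel sketch. The endpoint and parameter names are
// hypothetical; real trackers use their own, richer vocabularies.
function firePixel(eventName: string, payload: Record<string, string>): void {
  const params = new URLSearchParams({
    ev: eventName,               // e.g., "PageView" or "AddToCart"
    dl: document.location.href,  // the page the visitor is viewing
    ua: navigator.userAgent,     // browser and platform details
    ...payload,                  // e.g., products clicked or carted
  });
  // Requesting an invisible 1x1 image delivers the data to the tracker's
  // server; the browser also attaches any cookies it holds for that domain.
  const img = new Image(1, 1);
  img.src = "https://tracker.example.com/pixel?" + params.toString();
}

// Example: a retail page reporting which product you just viewed.
firePixel("ViewContent", { product_id: "sku-12345" });
```

The point of the sketch: a request for a one-pixel image is still an HTTP request, and an HTTP request can carry as much descriptive data as the page's developer chooses to attach to it.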

The article also explains "re-targeting" and how consumers who don't purchase anything at an online retail site will see advertisements later -- around the internet and not solely on the Facebook site -- about the items they viewed but didn't purchase. Then, there is the database it assembles:

"In materials written for its advertisers, Facebook explains that it sorts consumers into a wide variety of buckets based on factors such as age, gender, language, and geographic location. Facebook also sorts its users based on their online activities—from buying dog food, to reading recipes, to tagging images of kitchen remodeling projects, to using particular mobile devices. The company explains that it can even analyze its database to build “look-alike” audiences that are similar... Facebook can show ads to consumers on other websites and apps as well through the company’s Audience Network."

So, several technologies are used to track both Facebook users and non-users, and assemble a robust, descriptive database. And, some website operators collaborate to facilitate the tracking, which is invisible to most users. Neat, eh?
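The "look-alike" idea in that quote is worth unpacking. Here is a deliberately simplified TypeScript sketch of how any ad platform could score a candidate user against a seed audience. The attributes, threshold, and similarity measure are all hypothetical; Facebook has not published its actual method.

```typescript
// Toy "look-alike audience" scoring. Real systems use far richer features
// and machine-learned models; this only illustrates the basic idea.
type Profile = Record<string, number>; // attribute -> strength, 0 to 1

function cosineSimilarity(a: Profile, b: Profile): number {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (const k of keys) {
    const x = a[k] ?? 0;
    const y = b[k] ?? 0;
    dot += x * y;
    normA += x * x;
    normB += y * y;
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}

// A user joins the look-alike segment when their profile scores close
// enough to a seed audience, such as an advertiser's existing customers.
const seed: Profile = { buysDogFood: 1, readsRecipes: 0.8, tagsKitchenRemodels: 0.6 };
const candidate: Profile = { buysDogFood: 0.9, readsRecipes: 0.7 };
console.log(cosineSimilarity(seed, candidate) > 0.8 ? "look-alike" : "not similar");
```

The design point is that no single damning fact is needed: a pile of mundane attributes (dog food, recipes, remodeling photos) is enough to place you next to people an advertiser already knows how to sell to.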

Like it or not, internet users are automatically included in the tracking and data collection. Can you opt out? Consumer Reports also warns:

"The biggest tech companies don’t give you strong tools for opting out of data collection, though. For instance, privacy settings may let you control whether you see targeted ads, but that doesn’t affect whether a company collects and stores information about you."

Given this, one can conclude that Facebook is really a massive advertising network masquerading as a social networking service.

To minimize the tracking, consumers can: disable the Facebook API platform on their Facebook accounts, use the new tools (e.g., see these step-by-step instructions) by Facebook to review and disable the apps with access to their data, use ad-blocking software (e.g., Adblock Plus, Ghostery), use the opt-out mechanisms offered by the major data brokers, use the OptOutPrescreen.com site to stop pre-approved credit offers, and use VPN software and services.

If you use the Firefox web browser, configure it for Private Browsing and install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? Several web browsers offer Incognito Mode. And, you might try the Privacy Badger add-on instead. I've used it happily for years.

To combat "canvas fingerprinting" (e.g., tracking users by identifying the unique attributes of your computer, browser, and software), security experts have advised consumers to use different web browsers. For example, you'd use one browser only for online banking, and a different web browser for surfing the internet. However,  this security method may not work much longer given the rise of cross-browser fingerprinting.

It seems that an arms race is underway between software that helps users maintain privacy online and technologies advertisers use to defeat that privacy. Would Facebook and its affiliates/partners use cross-browser fingerprinting? My guess: yes it would, just like any other advertising network.

What do you think?


How To View The List Of Advertisers Tracking You On Facebook. Any Surprises On Your List?

The massive privacy and data security breach at Facebook.com involving Cambridge Analytica has heightened many users' sensitivity to the advertising practices of the social networking service. Many Facebook users want to know the exact list of advertisers tracking them.

How To View The List Of Advertisers Tracking You

Facebook Ad Preferences page

How to view this list? It's easy. Sign into Facebook.com and navigate to Settings > Ads > Advertisers You've Interacted With. (When using a web browser, you'll have to click on the tiny arrow in the upper right portion of the page to access the drop-down menu.) Within the Ad Preferences page, click on the "Advertisers You've Interacted With" headline to open that module. When opened, it displays several lists of advertisers:

  1. Who've added their contact list to Facebook,
  2. Whose website or app you've used,
  3. Whom you've visited, and
  4. More.

The default view of list #1 displays 12 advertisers tracking you. There probably are many more on your list. Select "Show More" to view more advertisers. Facebook doesn't make it easy. The module lacks a "Show All" button, which forces users to repeatedly select "Show More." Not good. Come on, Facebook! You can do better.

List #1 includes important explanatory text:

"These advertisers are running ads using a contact list they uploaded that includes your contact info. This info was collected by the advertiser, typically after you shared your email address with them or another business they've partnered with."

The key phrase to remember: "or another business they've partnered with." So, list #1 includes not only advertisers but also affiliates or business partners. Not good. More Facebook being Facebook.

I selected "Show More" about two dozen times to view my complete list: 235 advertisers tracking me, and collecting data about me. 235 advertisers even though I never used the Facebook mobile app, and had already disabled the Facebook API platform on my account years ago! Not good.

Your mileage will vary. There may be fewer or more advertisers on your list.

My list #1 included both advertisers I expected and many I didn't expect. The advertisers I expected to see included brands I currently do business with (e.g., Marriott Rewards, ACLU), brands I no longer do business with (e.g., Bank of America, AT&T), and/or brands whose Facebook pages I "Liked" or left comments on. The advertisers I didn't expect to see included politicians in other states I've neither visited nor lived in, brands I've never purchased from nor interacted with in any manner, brands I have never "Liked," and more.

Who's on your list? A friend shared:

"I looked at my list and it's crazy. Will follow the opt-out links tomorrow and clear them out. Cardi B was in my list of FB advertisers."

A rapper? That's too funny. I guess that's to be expected if you stream and share music online via Facebook. Me? I don't stream music online because that is another way to be tracked. Instead, I enjoy listening to CDs privately in my home. I prefer to keep my home a truly private place.

What's really going on here? Why the crazy long list? Popular Science explained:

"You, can thank the "data providers" for this mess. Mark Zuckerberg spent roughly 11 hours testifying in front of Congressional committees... One thing that got very little attention was the concept of “data brokers,” middleman businesses that collect consumer information and sell it to companies. Facebook stopped using them just last month. However, that long string of companies, personalities, and alternative rock bands is a result of Facebook’s old program... after the Cambridge Analytica scandal broke, but before Mark Zuckerberg’s marathon testimony in front of Congress, Facebook announced that it was ending a program called Partner Categories, canceling a long-standing relationship between the social network and data brokers. The change was announced in a short statement, but it has big implications for your personal information and the agencies that collect and sell it."

"The ability to target advertising is what makes Facebook its money—roughly $40 billion last year... while you provide lots of user information to Facebook, advertisers typically want even more... and that’s where data brokers come in. Facebook calls on brokers like Acxiom, Epsilon, and TransUnion to act as a conduit between Facebook and individual advertisers looking to reach targeted audiences..."

Readers of this blog may recognize TransUnion, one of the three major credit reporting agencies. So, the "advertisers" tracking you (and harvesting your data) on Facebook include a variety of entities: traditional advertisers, business partners, affiliates, data brokers, and their intermediaries.

It's called "surveillance capitalism" for good reasons. Many companies besides Facebook do it.

What To Do Next

It's not easy to opt out or delete items from your advertising list. For those brands and entities you have "Liked," you can visit their Facebook page and "Unlike" them. However, that won't stop them or other "advertisers" from re-targeting (and tracking) you in the future. The "Ad Preferences" page for your profile also includes the "Your Information" module where you can toggle on or off advertising based upon certain profile elements:

Your Information module within Ad Preferences (Facebook)

The above image is from 2017. Back then, I disabled all of the active toggles you see. Deactivating these toggles might minimize the number of ads displayed, but it won't stop the tracking and data collection. The Popular Science article includes links to several opt-out mechanisms for major data brokers. You could (and should) use those. However, two key problems remain.

First, these opt-out links should be easily accessible within Facebook. They aren't. This forces consumers to waste time hunting for the opt-out mechanisms, when Facebook has the expertise to provide them. Facebook probably knows that many consumers will give up and quit, rather than hunt for opt-out links. It's great that Popular Science did a lot of the work for consumers.

Second, the opt-out mechanisms offered by some data brokers are unnecessarily complex. Example: see the opt-out mechanisms offered by Experian, another credit reporting agency:

Experian opt-out site pages

Didn't know that Experian plays in both ponds: credit reporting and data brokerage? Most people probably don't. Experian's site lacks a unified, single opt-out mechanism, which forces consumers to wade through seven different mechanisms and methods, some of which are paper-based and lack an online option. Not good!

TransUnion's opt-out mechanism isn't much better. And, it raises more questions than it answers. It links to the OptOutPrescreen.com site, which I completed way back in 2007. Did my Facebook membership undo that? Or is there some other data sharing at work, which OptOutPrescreen doesn't cover? TransUnion's page doesn't explain, and neither does Facebook's page. Not good.

Some people choose to use ad-blocking software (e.g., Adblock Plus, Ghostery) to suppress the display of online ads, but that probably won't stop the tracking and data collection internal to Facebook. There's no substitute for Facebook giving its users internal tools to completely disable and opt out of the tracking and data collection.

That highlights another problem: users are automatically included, so the burden is upon users to (continually) opt out. This is Facebook's business model. The reverse should be the default: users should not be tracked, nor their data harvested, unless they register and opt into the program. Given the social media site's business model, even if you opt out today, there's nothing stopping Facebook from re-subscribing you in the future with any updates to its system or terms of use.

How many advertisers are on your list? 200 or more? 300? 400? Any surprises on your list?


How To Check If Your Information Was Collected By Cambridge Analytica In The Facebook Breach

You've probably heard about the massive privacy and data security breach at Facebook.com, where users' information, plus their friends' information, was captured and shared with Cambridge Analytica by an app created by an academic professor. Now, you want to know if your information was harvested.

How To Check

It's easy to check. Visit this Facebook Help Center page. If you are not signed into your Facebook account, then the page displays as:

Default version of Facebook Help page for users to determine if their information was collected by Cambridge Analytica

If you have already signed into your Facebook account and your information was not harvested, then the main column of the page displays:

Signed-in version of the Facebook Help page, shown when your information was not collected by Cambridge Analytica

If your information was harvested, then the content under "Was My Information Shared?" will be different. It may display this:

"Based upon our investigation, you don't appear to have logged into "This Is Your Digital Life" with Facebook before we removed it from our platform in 2015. However, a friend of yours did log in. As a result, the following information was likely shared with "This Is Your Digital Life": Your public profile, page likes, date of birth, and current city"

Of course, if you logged into the "This Is Your Digital Life" app yourself, then the page content will say so, and list the data elements harvested. Reportedly, about 270,000 Facebook users logged into the app/quiz, which then collected information on an estimated 87 million people, mostly those users' Facebook friends.

What To Do Next

There's not a lot you can do immediately. CNN Tech advised:

"Even if you delete your Facebook account, or remove third-party apps connected to your profile, the third-party apps will still have access to data they previously collected. Users have to contact the app individually to have the data be removed... According to a notice on affected accounts, the "small number of people" who accessed the app also shared their News Feed, timeline, posts and messages. A Facebook spokesperson confirmed that 1,500 users who logged into the app granted explicit access to their private message inbox... For now, the platform is directing people to their Settings page to see which apps are connected to their accounts, such as Uber and Netflix. Users can also disconnect those apps... Walt Mossberg, a veteran tech reporter and cofounder of tech website Recode, urged Facebook to let users know which friends accessed the app and when..."

Yeah, that! Facebook should inform affected users which of their friends contributed to the data leakage.

Of course, Facebook wants its users to keep using the service. Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. What can affected users do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Whatever you do, remember that lots of advertising networks and tech companies besides Facebook want to track your movements around the web. Some of those companies include internet service providers (ISPs), since the U.S. Federal Communications Commission (FCC) killed both broadband privacy and net neutrality in 2017.

A windfall for broadband providers, and terrible for consumers. You might contact your elected officials and demand that the FCC put broadband privacy and net neutrality protections back into place.


Apple Computer And Facebook Executives Exchange Criticisms

Chief executives at Apple Computer and Facebook recently exchanged criticisms. During a lengthy interview by Recode's Kara Swisher and MSNBC’s Chris Hayes, Apple CEO Tim Cook responded to questions about Facebook's recent data security and privacy incident. The interview was conducted in Chicago, Illinois on Tuesday, March 27. It was broadcast on MSNBC on Friday, April 6, 2018. The relevant section of the interview:

"Hayes: We are back with Apple CEO Tim Cook. In the wake of the news about data scraping by Cambridge Analytica and Facebook, you had this to say recently, and I thought it was quite interesting. You said, "It’s clear to me that something, some large profound change, is needed. I’m personally not a big fan of regulation because sometimes regulation can have unexpected consequences to it. However, I think this certain situation is so dire, has become so large, that probably some well-crafted regulation is necessary." What’d you mean?

Cook: Yeah. Look, we’ve never believed that these detailed profiles of people — that has incredibly deep personal information that is patched together from several sources — should exist. That the connection of all of these dots, that you could use them in such devious ways if someone wanted to do that, that this was one of the things that were possible in life but shouldn’t exist.

Swisher: Right.

Cook: Shouldn’t be allowed to exist. And so I think the best regulation is no regulation, is self regulation. That is the best regulation, because regulation can have unexpected consequences, right? However, I think we’re beyond that here, and I do think that it’s time for a set of people to think deeply about what can be done here.

Hayes: Now, the cynic in me says, you’ve got other tech companies that are much more dependent on that kind of thing than Apple is. And so, yes, you want regulation here because that would essentially be a comparative advantage, that if regulation were to come in on this privacy question, the people it’s going to hit harder aren’t Apple. It’s places like Facebook and Google.

Cook: Well, the skeptic in you would be wrong. (laughter) The truth is we could make a ton of money if we monetized our customer. If our customer was our product, we could make a ton of money. We’ve elected not to do that. (applause) Because we don’t... our products are iPhones and iPads and Macs and HomePods and the Watch, etc., and if we can convince you to buy one, we’ll make a little bit of money, right? But you are not our product."

The comments about regulation are relevant since Mr. Zuckerberg will testify before Congress this week about Facebook's privacy and data security incident involving Cambridge Analytica. Mr. Cook's comments highlight the radically different business models.

Mr. Cook's comments didn't sit well with Mr. Zuckerberg. Vox's Ezra Klein interviewed Zuckerberg on Monday, April 2. The relevant portion of that interview:

"Ezra Klein: One of the things that has been coming up a lot in the conversation is whether the business model of monetizing user attention is what is letting in a lot of these problems. Tim Cook, the CEO of Apple, gave an interview the other day and he was asked what he would do if he was in your shoes. He said, “I wouldn’t be in this situation,” and argued that Apple sells products to users, it doesn’t sell users to advertisers, and so it’s a sounder business model that doesn’t open itself to these problems.

Do you think part of the problem here is the business model where attention ends up dominating above all else, and so anything that can engage has powerful value within the ecosystem?

Mark Zuckerberg: You know, I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.

That doesn’t mean that we’re not primarily focused on serving people. I think probably to the dissatisfaction of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.

But if you want to build a service which is not just serving rich people, then you need to have something that people can afford. I thought Jeff Bezos had an excellent saying on this in one of his Kindle launches a number of years back. He said, “There are companies that work hard to charge you more, and there are companies that work hard to charge you less.” And at Facebook, we are squarely in the camp of the companies that work hard to charge you less and provide a free service that everyone can use.

I don’t think at all that that means that we don’t care about people. To the contrary, I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me."

What to make of all this? While Mr. Zuckerberg is entitled to his opinions, an old saying seems to apply: people in glass houses shouldn't throw stones.

There seems no question that Facebook built a platform which collected users' intimate and sensitive information, tracked users around the internet, allowed "advertisers" to collect information about both users who interacted with a quiz app and those users' friends (without the friends' knowledge), allowed "advertisers" to target groups of users (regardless of the law and/or consequences), and made it easier for "advertisers" to combine data collected with information from other sources. You may remember Facebook's "friction-less sharing" program in 2011, where apps automatically posted content in users' timelines without users' active involvement. And, Facebook's history with a convoluted and often confusing interface for users to change their privacy settings.

You may remember that it was Apple which fought to protect its customers' sensitive information by resisting demands from federal law enforcement officials to build back-door hacks into its devices. I don't think Facebook can make a similar claim about protecting users' information. Actions speak louder than words.

Nobody forced Facebook to build the platform it built. Its executives made choices. And now, Mr. Zuckerberg is apologizing (again) for his company's behavior. You may remember an admission of problems and promises to do better by Mr. Zuckerberg in January. Facebook COO Sheryl Sandberg also apologized last week about the executive failures in 2015. You might call it the #Facebookapologytour.

Mr. Zuckerberg's "an advertising-supported model is the only rational model" comment deserves attention. The only model? Mr. Zuckerberg and Facebook made the decision not to charge monthly fees. Would some users pay a monthly fee for guaranteed privacy? I imagine many users would gladly pay. I would. (An Apple co-founder is willing to pay, too.) It seems, a more accurate statement would be: an advertising-supported model is the profit-maximizing model.

Also, Mr. Zuckerberg's "advertising-supported" description of his company's business model seems disingenuous. It gives the impression that traditional advertisers pay money to passively display ads, while the reality is much more than that. More types of companies than traditional advertisers used the social networking service's sophisticated software tools (e.g., Facebook's API platform) to target groups and then collect data about Facebook users and their connected friends.

This makes one wonder how many other companies like Cambridge Analytica have harvested information -- either directly or indirectly via intermediaries. Facebook has suspended the account of Cubeyou, another alleged data harvester, while it investigates.

If there are more companies and Facebook executives know it, then they must admit it. Its March 21st press release promising to investigate all apps that had access to large amounts of information, and to conduct full audits of any apps with suspicious activity suggests that Facebook doesn't know. I'm not sure which is worse: knowing and not saying, or not knowing.

According to news reports, Cambridge Analytica paid sizeable amounts - US $0.75 to $5.00 per voter - for profiles crafted from Facebook users' information. Do the math: with roughly 2 million users, that works out to somewhere between $1.5 million (2 million x $0.75) and $10 million (2 million x $5.00). Those users allegedly came from 11 states: Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North Carolina, Oregon, South Carolina, and West Virginia. Nobody pays that amount of money without expecting satisfactory results.

Later today, Facebook will inform users whose information may have been harvested by Cambridge Analytica. What are your opinions?


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

Facebook logo These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said a similar enforcement arm working with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — or about 50 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is that a company’s terms of service contain a blanket statement of consent that users never read or agree to.

James Grimmelmann, a law professor and computer scientist, argued in a 2015 paper that technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelmann said in an interview, “we should ask for appropriate supervision for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook logo Facebook.com has dominated the news during the past three weeks. The news media have reported on many issues, but there are more -- whether or not you use Facebook. Things began about mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."
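For readers wondering what “requested and gained access” looked like in practice, here is an illustrative reconstruction of a Facebook Login authorization URL from that era. The friends_* permission names belong to the old Graph API v1.0, which Facebook retired in 2014-2015; the app ID and redirect address are placeholders, not Kogan’s actual values:

```python
# Illustrative reconstruction of a pre-2015 Facebook Login request.
# The friends_* scopes are retired Graph API v1.0 permissions that let
# an app read data about a user's friends; placeholders throughout.
from urllib.parse import urlencode

params = {
    "client_id": "APP_ID_PLACEHOLDER",
    "redirect_uri": "https://example.com/quiz/callback",
    # One consent screen granted data on the user AND their friends:
    "scope": "user_likes,user_location,friends_likes,friends_location",
}
login_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
print(login_url)
```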

So, data that should have been deleted wasn't. Worse, Facebook relied upon certifications from entities that had already lied to it. Not good. Then, Facebook posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, the standard has been consistent: whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Cambridge Analytica logo Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether or not data the perpetrators claimed to have destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder how many other developers -- corporate and academic users alike -- have violated Facebook's rules and shared members' sensitive data they shouldn't have.

Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Facebook Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.
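The third item is simple to picture as logic. Below is a minimal sketch of an inactivity-based revocation check, with a hypothetical data model; Facebook has not published its actual implementation:

```python
# Minimal sketch of remediation step 3: revoke an app's access to a
# member's data after three months of inactivity. Hypothetical data model.
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)
now = datetime(2018, 4, 4)

member_app_last_used = {               # app name -> member's last use
    "thisisyourdigitallife": datetime(2015, 5, 1),
    "some_photo_editor": datetime(2018, 3, 20),
}

for app, last_used in member_app_last_used.items():
    if now - last_used > INACTIVITY_LIMIT:
        print(f"revoke data access: {app}")   # the grant has gone stale
```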

Those actions seem good, but too little too late. Facebook needs to do more... perhaps, revise its Terms Of Use to include large fines for violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure members' sensitive information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics in political campaigns, it seems wise to assume that the practice will continue, with improvements. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. I don't. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions -- who gets what, and where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice-sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect the user data it collects. Some of its privacy snafus: the settlement of a lawsuit over alleged privacy abuses by its Beacon advertising program, changing members' ad settings without notice or consent, an advertising platform that allegedly facilitates age discrimination against older workers, health and privacy concerns about a new service for children ages 6 to 13, transparency concerns about political ads, and new lawsuits about the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now, we have yet another apology.

In a press release this afternoon, Facebook revised upward the number affected by the Facebook/CA breach from 50 to 87 million persons. Most, about 70.6 million, are in the United States. The breakdown by country:

Number of affected persons by country in the Facebook - Cambridge Analytica breach. Click to view larger version

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protections as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on, specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too, especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg, the CEO of Facebook, will testify before the U.S. Congress on April 11th. His upcoming testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to what's available in the United Kingdom. And, there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation?" Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And, that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, it refuses to provide protections equal to those in the United Kingdom, or if it backtracks later on its promises. Thoughts? Comments?


Fair Housing Groups Sue Facebook for Allowing Discrimination in Housing Ads

[Editor's Note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking services. It is reprinted with permission.]

Facebook logo By Julia Angwin and Ariana Tobin, ProPublica

In February 2017, in response to a ProPublica investigation, Facebook pledged to crack down on efforts by advertisers of rental housing to discriminate against tenants based on race, disability, gender and other characteristics.

But a new lawsuit, filed Tuesday by the National Fair Housing Alliance in U.S. District Court in the Southern District of New York, alleges that the world’s largest social network still allows advertisers to discriminate against legally protected groups, including mothers, the disabled and Spanish-language speakers.

Since 2018 marks the 50th anniversary of the Fair Housing Act, "it is all the more egregious and shocking" that "Facebook continues to enable landlords and real estate brokers to bar families with children, women and others from receiving rental and sales ads or housing," the lawsuit states. It asks the court, among other things, to declare that Facebook’s policies violate fair housing laws, to bar the company from publishing discriminatory ads, and to require it to develop and make public a written fair housing policy for advertising.

Diane Houk, lead counsel for the alliance, said this type of discrimination is especially difficult to uncover and combat. "The person who is being discriminated against has no way to know" it, because the technology "keeps the discrimination hidden in hopes that it will not be caught," she said.

Facebook disputes the housing groups’ allegations. "There is absolutely no place for discrimination on Facebook. We believe this lawsuit is without merit, and we will defend ourselves vigorously," said Facebook spokesman Joe Osborne.

The lawsuit adds to Facebook’s woes, which are mounting on multiple fronts. The company’s stock plunged last week on the news that it had allowed a voter-profiling outfit, Cambridge Analytica, to obtain data on 50 million of its users without their knowledge or consent. The news came after a troubling year in which, among other things, Facebook admitted that it unwittingly allowed a Russian disinformation operation on its platform and had been promoting fake news in its News Feed algorithm. As a result, lawmakers and regulators around the world have launched investigations into Facebook.

Discrimination in housing advertising has been a persistent problem for Facebook. In October 2016, we described how Facebook let advertisers exclude specific groups with what it called "ethnic affinities," including blacks and Hispanics, from seeing ads. Although Facebook responded by announcing it had built a system to flag and reject these ads, we bought dozens of rental housing ads in November 2017 that we specified would not be shown to blacks, Jews, people interested in wheelchair ramps and other groups.

It wasn’t until ProPublica brought the issue of advertising discrimination on Facebook to light, Houk said, that fair housing advocates learned of it. Emulating ProPublica’s technique, the Washington, D.C.-based national fair housing group, along with member groups in New York, San Antonio and Miami, created fake housing companies and placed discriminatory ads on Facebook. The ads were approved by Facebook over a period of a few months, with the most recent buys occurring on Feb. 23.

Using Facebook’s dropdown "exclusion" menu, they were able to buy housing ads that blocked groups such as "trendy moms," "soccer moms," "parents with teenagers," people interested in a disabled parking permit and people interested in Telemundo, the Spanish-language television network.
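Mechanically, exclusion targeting amounts to set subtraction, which is why innocuous-sounding interest categories can quietly function as proxies for protected classes. A toy sketch, with invented user IDs and labels:

```python
# Toy sketch of exclusion targeting as set subtraction. The interest
# categories and user IDs are invented; the point is that excluding an
# interest removes everyone tagged with it from ad delivery.
all_matching_users = {"u1", "u2", "u3", "u4", "u5"}
interested_in_telemundo = {"u2", "u4"}   # can proxy for Spanish speakers
trendy_moms = {"u3"}                     # can proxy for familial status

excluded = interested_in_telemundo | trendy_moms
delivered_audience = all_matching_users - excluded
print(sorted(delivered_audience))        # ['u1', 'u5'] ever see the ad
```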

The Fair Housing Act makes it illegal to publish any advertisement "with respect to the sale or rental of a dwelling that indicates any preference, limitation or discrimination based on race, color, religion, sex, handicap, familial status or national origin." Violators may face tens of thousands of dollars in fines.

After ProPublica’s investigation, Facebook added a self-certification option, which asks housing advertisers to certify that their advertisement is not discriminatory. In some cases, Houk said, the housing groups encountered the self-certification option and did not submit those ads to Facebook for approval and publication. But that screen appeared in only some of the ad buys, she said.

Since advertisers can falsely attest to fairness, the self-certification screens don’t "seem like a whole-hearted commitment to trying to change the advertising platform to comply with the Fair Housing Act and local fair housing laws," Houk said.

A couple of weeks after the groups bought housing ads, so did ProPublica (independently) — and we excluded some of the same categories, such as "soccer moms." In most of those tests, we encountered self-certification screens. However, when we bought another housing ad this week, we were able to exclude people interested in Telemundo.

Houk said there were so many possible explanations for the difference in results — such as the number of categories excluded or the types of exclusions sought — that it was impossible to speculate about what caused many of her clients’ ad purchases to be approved but not ProPublica’s.

Both the fair housing groups and ProPublica found that Facebook has blocked the use of race as an exclusion category — as it promised to do in November. Facebook rejected a ProPublica housing ad that was specifically aimed at African Americans. It also denied our attempts to buy employment ads targeted by race, and removed a job listing with a question designed to filter by race. However, the housing groups’ and ProPublica’s ability to exclude people interested in Telemundo suggests that advertisers could still discriminate by using proxies for race or ethnicity.

In a separate federal case in California, challenging Facebook’s use of racial exclusions in ad targeting, Facebook has argued that it has immunity against liability for such discrimination. It cited Section 230 of the 1996 federal Communications Decency Act, which protects internet companies from liability for third-party content.

"Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any," Facebook contended.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


How the Crowd Led ProPublica to Investigate IBM

[Editor's note: today's guest post, by the reporters at ProPublica, discusses employment practices at a major corporation in the United States. The investigation is as interesting as the "Cutting 'Old Heads' At IBM" report. This also caught my attention because a data breach at IBM in 2007 led to the creation of this blog. Today's article is reprinted with permission.]

IBM logo By Ariana Tobin and Peter Gosselin, ProPublica

On March 22, we reported that over the past five years IBM has been removing older U.S. employees from their jobs, replacing some with younger, less experienced, lower-paid American workers and moving many other jobs overseas.

We’ve got documentation and details — most of which are the direct result of a questionnaire filled out by over 1,100 former IBMers.

We’ve gone to the company with our findings. IBM did not answer the specific questions we sent. Spokesman Edward Barbini said: “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years.”

We don’t know the exact size of the problem. Our questionnaire isn’t a scientific sample, nor did all the participants tell us they experienced age discrimination. But the hundreds of similar stories show a pattern of older employees being pushed out even when the company itself says they were doing a good job.

This project wasn’t inspired by a high-level leak or an errant line in secret documents. It came to us through reader engagement. Our investigation took us beyond some of our usual reporting techniques. We’d like to elaborate on this because:

  • We know readers will wonder how we sourced some pretty serious claims.
  • Many ex-employees trusted us with their stories and spent many hours in conversation with us. We think it’s good practice to let them know how we’ve used their information.
  • This is probably the first time we’ve been pointed to a big project by a community of people we found through digital outreach. We hope that by sharing our experiences, we can help others build on our work.

IBMers found us

This project started as a conversation between the two of us, both reporters at ProPublica. Peter had taken on the age discrimination beat for reasons both personal and professional. Ariana was newly minted into a job called “engagement reporter.”

Ariana suggested that Peter write up a short essay on his own experiences of being laid off at 63 and searching for a job in the aftermath. We attached a short questionnaire to the bottom and headlined it: “Over 50 and looking for a job? We’d like to hear from you.”

Dozens of people responded within the first couple of weeks. As we looked through this first round of questionnaires, we noticed a theme: a whole lot of information technology workers told us they were struggling to stay employed. And those who had lost their jobs? They were having a really hard time finding new work.

Of those IT workers, several mentioned IBM right off the bat. One woman wrote that she and her coworkers were working together to find new jobs in order to “ward off the dreaded old person layoff from IBM.”

Another wrote: “I can probably help you get a lot more stories, contact me if you want to discuss this possibility.”

Another wrote: “Part of the separation agreement was that I not seek collective action against IBM for age discrimination. I was not going to sign as a law firm was planning to file a grievance. However they needed 10 people to agree and they could not get the numbers.”

… and then they connected us with more IBMers

We started making some calls. One of the first people we talked to was Brian Paulson, a 57-year-old senior manager with 18 years at IBM, who was fired for “performance reasons” that the company refused to explain. He was still job-hunting two years later.

Another ex-IBM employee told us that she had seen examples of older workers laid off from many parts of the company on a public Facebook page called WatchingIBM. Ariana spent a day looking through the posts, which were, as promised, crawling with stories, questions, and calls for support from workers of all kinds.

We decided to reach out to the page’s administrator, who was a longtime IBM workplace activist, Lee Conrad. He shared our age discrimination questionnaire in the group and more responses poured in.

With dozens of interviews already on the books, we decided to launch a second, more specific questionnaire — this time about IBM

We realized that we had been pointed toward an angry, sad and motivated group. The older ex-IBM workers we called were trying to figure out whether their own layoffs were unique or part of a larger trend. And if they were part of a larger trend... how many people were affected?

A major frustration we saw in comment after comment: These workers couldn’t get information on how many others had been forced out with them.

This was an information gap that immediately struck Peter, because that information is exactly what the law requires employers to disclose at the time of a layoff.

On top of that, many of these sources mentioned having been forced to sign agreements that kept them from going to court or even talking about what had happened to them. They were scared to do anything in violation of those agreements, a fear that kept them from finding out the answers to some big open questions: Why would IBM have stopped releasing the ages and positions of those let go, as they had done before 2014 to comply with federal law? How many workers out there believed they had been “retired” against their will? What did managers really tell their subordinates when the time came to let them go? Who was left to do all of their work?

So we wrote up another questionnaire asking those specific questions.

We learned from the responses, and also the response rate

We contacted people on listservs, found them on open petitions, joined closed LinkedIn networks, and followed each posting on ex-IBM groups. We tweeted the questionnaire, tagged with the company’s ticker symbol, on days that IBM reported its earnings. We talked to trade magazines and IBM historians and organizers who still work at IBM. We bought ads on Facebook and aimed them toward cities and towns where we knew IBM had been cutting its workforce.

As the responses came in, we tried to figure out where most of them were coming from. To identify any meaningful trends, we needed to know who was answering, what was working, and why. We also realized that we needed to introduce ourselves in order to persuade anyone that participating was worthwhile.

When something worked, we’d double down:

We know what worked best: when people filled out the questionnaire, they’d also share their contact information with us. So we asked them to forward the questionnaire around within their own networks.

And we got more leads

We read through all of the responses and identified themes: 183 respondents said the company recorded them as having retired by choice even though they had no desire to retire or flat-out objected to the idea. Forty-five people were told they’d have to uproot their lives and move sometimes thousands of miles from the communities where they had worked for years, or else resign. Fifty-three said their jobs had been moved overseas. Some were happy they’d left. Some were company luminaries, given top ratings throughout their career. Some were still fighting over benefits and health care. Some were worried about finding work ever again.

Inevitably, this categorization process led us to identify new patterns as we went along, and as new responses accumulated. For each new pattern, we would go back and see how many people fit.

One of the first and most interesting such categories was people who had received emails congratulating them on their retirement at the same time they were informed of their layoff. We realized there would be power in numbers there, so we set up a SecureDrop for people who were willing to send us their paperwork.

Eventually, we also created a category called “legal action.” We’d stumbled upon support groups of ex-IBM employees who had filed formal complaints with the Equal Employment Opportunity Commission. Some sent us the company’s responses to their individual complaints, giving us insight into the way the company responded to allegations of discrimination. These seemed, of course, very useful.

In other words: we sent some rather complicated mass emails and were surprised over and over again by the specificity of the responses.

IBM undoubtedly has information that would shed light on the documents, its layoff practices or the overall extent and nature of its job cuts. The company chose not to respond to our questions about those issues.

So we tried to answer ex-IBMers’ questions ourselves, including one of the most basic: How many employees ages 40 and over were let go or left in recent years?

IBM won’t say. In fact, over the years, the company has stopped releasing almost all information about its U.S. workforce. In 2009, it stopped publishing its American employment total. In 2014, it stopped disclosing the numbers and ages of older employees it was laying off, a requirement of the nation’s basic anti-age bias law, the Age Discrimination in Employment Act (ADEA).

So we’ve sought to estimate the number, relying on one of the few remaining bits of company-provided information — a technique developed by a veteran financial analyst who follows IBM for investors — as well as patterns we spotted in internal company documents.

We began with a line in the company’s quarterly and annual filings with the U.S. Securities and Exchange Commission for “workforce rebalancing,” a company term for layoffs, firings and other non-retirement departures. It’s a gauge of what IBM spends to let people go. In the past five years, workforce rebalancing charges have totaled $4.3 billion.

The technique was used by veteran IBM analyst Toni Sacconaghi of Bernstein Research. Sacconaghi is a respected Wall Street analyst who has been named to Institutional Investor’s All-America Research Team every year since 2001. His technique and layoff estimates have been widely cited by news organizations including The Wall Street Journal and Fortune.

Some years ago, Sacconaghi estimated that IBM’s average per-employee cost for laying off a worker was $70,000.

Dividing $4.3 billion by $70,000 suggests that during the past five years IBM’s worldwide job cuts totaled about 62,000. If anything, that number is low, given IBM executives’ comments at a recent investor conference. Internal company documents we reviewed suggest that 50 to 60 percent of cuts were made in the U.S., with older workers representing roughly 60 percent of those. That translates to about 20,000 older American workers let go.
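The arithmetic behind those estimates is easy to reproduce. A short script using the article’s figures, taking the midpoint of the 50-to-60-percent range:

```python
# Reproducing the layoff estimate from the article's figures.
rebalancing_charges = 4_300_000_000   # five-year total, in dollars
cost_per_departure = 70_000           # Sacconaghi's per-employee estimate

worldwide_cuts = rebalancing_charges / cost_per_departure
us_share = 0.55          # midpoint of the 50-60% range in IBM documents
older_share = 0.60       # older workers among U.S. cuts

older_us_cuts = worldwide_cuts * us_share * older_share
print(f"worldwide cuts: {worldwide_cuts:,.0f}")     # 61,429 ("about 62,000")
print(f"older U.S. workers: {older_us_cuts:,.0f}")  # 20,271 ("about 20,000")
```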

Our analysis suggests the total of U.S. layoffs is almost certainly higher.

First, as Sacconaghi said in a recent interview, IBM’s per-employee rebalancing costs are likely much lower now because, starting in 2016, the company reduced severance payments to departing employees from six months to just 30 days. That means IBM can lay off or fire more people for the same or lower overall costs.

Second, as those ex-IBMers told us, the company often converts layoffs into retirements, so the workforce rebalancing numbers don’t tell the whole story.

Right below the line for “workforce rebalancing” in its SEC filings, IBM adds another line for “retirement-related costs,” which reflects how much the company spends each year retiring people out. Some — perhaps a substantial amount of that — went to retirements that were less than fully voluntary. This could add up to thousands more people.

By coming up with answers and investigating in the open, we’ve gotten more sources

Many of the conversations we’ve had during our reporting didn’t make it into the final story. People allowed us to review internal company documents. They let us see long email exchanges with their managers. They dug back through closets and garages to find memos they had saved out of frustration or fatigue or just plain anger.

We can’t go into detail about all of the ways the community helped us report out this story, because we also promised many of our sources that we would protect their confidentiality. The beauty is that they talked to us anyway. They knew where to find us, because our contact information had been spread far and wide.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Report: Social Media Use in 2018

There has been plenty of controversy recently surrounding social media: job advertisements which exclude older workers, concerns that social media threaten democracies, transparency concerns about political advertisements, censorship applied inconsistently, politicians blocking constituents, promises to do better by Facebook, and more. Given these issues, it's reasonable to ask: who uses social media? Which sites? Has this changed over time? Would any users stop using social media?

The Pew Research Center recently released its latest report, "Social Media Use in 2018." Key findings:

"Facebook remains the primary platform for most Americans. Roughly two-thirds of U.S. adults (68%) now report that they are Facebook users, and roughly three-quarters of those users access Facebook on a daily basis. With the exception of those 65 and older, a majority of Americans across a wide range of demographic groups now use Facebook... The video-sharing site YouTube – which contains many social elements, even if it is not a traditional social media platform – is now used by nearly three-quarters of U.S. adults and 94% of 18- to 24-year-olds... Some 78% of 18- to 24-year-olds use Snapchat, and a sizeable majority of these users (71%) visit the platform multiple times per day. Similarly, 71% of Americans in this age group now use Instagram and close to half (45%) are Twitter users... Pinterest remains substantially more popular with women (41% of whom say they use the site) than with men (16%). LinkedIn remains especially popular among college graduates and those in high-income households. Some 50% of Americans with a college degree use LinkedIn, compared with just 9% of those with a high school diploma or less. The messaging service WhatsApp is popular in Latin America, and this popularity also extends to Latinos in the United States – 49% of Hispanics report that they are WhatsApp users, compared with 14% of whites and 21% of blacks."

The report was based on telephone interviews of 2,002 adults (18 years of age or older) living in the United States. The interviews were conducted during Jan. 3 - 10, 2018, and included 500 respondents via landline telephones, and 1,502 respondents via mobile phones. The survey was conducted by interviewers under the direction of Abt Associates.

A couple of charts highlight the key findings:

Pew Research Center. Social Media use in 2018. Site use by age groups. Click to view larger version

Pew Research Center. Social Media Use in 2018. Reciprocity usage. Click to view larger version

Pew Research also found:

"... the share of social media users who say these platforms would be hard to give up has increased by 12 percentage points compared with a survey conducted in early 2014. But by the same token, a majority of users (59%) say it would not be hard to stop using these sites, including 29% who say it would not be hard at all to give up social media."

View more information and details in the full report at the Pew Research Center site.


Airlines Want To Extend 'Dynamic Pricing' Capabilities To Set Ticket Prices By Each Person

In the near future, what you post on social media sites (e.g., Facebook, Instagram, Pinterest, etc.) could affect the price you pay for airline tickets. How's that?

First, airlines already use what the travel industry calls "dynamic pricing" to vary prices by date, time of day, and season. We've all seen higher ticket prices during the holidays and peak travel times. The Telegraph UK reported that airlines want to extend dynamic pricing to set fares by person:

"... the advent of setting fares by the person, rather than the flight, are fast approaching. According to John McBride, director of product management for PROS, a software provider that works with airlines including Lufthansa, Emirates and Southwest, a number of operators have already introduced dynamic pricing on some ticket searches. "2018 will be a very phenomenal year in terms of traction," he told Travel Weekly..."

And, there was a preliminary industry study about how to do it:

" "The introduction of a Dynamic Pricing Engine will allow an airline to take a base published fare that has already been calculated based on journey characteristics and broad segmentation, and further adjust the fare after evaluating details about the travelers and current market conditions," explains a white paper on pricing written by the Airline Tariff Publishing Company (ATPCO), which counts British Airways, Delta and KLM among its 430 airline customers... An ATPCO working group met [in late February] to discuss dynamic pricing, but it is likely that any roll out to its customers would be incremental."

What's "incremental" mean? Experts say first step would be to vary ticket prices in search results at the airline's site, or at an intermediary's site. There's virtually no way for each traveler to know they'd see a personal price that's higher (or lower) from prices presented to others.

With dynamic pricing per person, business travelers would pay more. And, an airline could automatically bundle several fees (e.g., priority boarding, luggage, meals, etc.) for its loyalty program members into each person's ticket price, reducing transparency and undermining fairness. Of course, airlines would pitch this as convenience, but alert consumers know that any convenience always has its price.

Thankfully, some politicians in the United States are paying attention. The Shear Social Media Law & Technology blog summarized the situation very well:

"[Dynamic pricing by person] demonstrates why technology companies and the data collection industry needs greater regulation to protect the personal privacy and free speech rights of Americans. Until Silicon Valley and data brokers are properly regulated Americans will continue to be discriminated against based upon the information that technology companies are collecting about us."

Just because something can be done with technology, doesn't mean it should be done. What do you think?