131 posts categorized "Behavioral Advertising"

Besieged Facebook Says New Ad Limits Aren’t Response to Lawsuits

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series monitoring Facebook's attempts to clean up its advertising systems and tools. It is reprinted with permission.]

By Ariana Tobin and Jeremy B. Merrill, ProPublica

Facebook’s move to eliminate 5,000 options that enable advertisers on its platform to limit their audiences is unrelated to lawsuits accusing it of fostering housing and employment discrimination, the company said Wednesday.

“We’ve been building these tools for a long time and collecting input from different outside groups,” Facebook spokesman Joe Osborne told ProPublica.

Tuesday’s blog post announcing the elimination of categories that the company has described as “sensitive personal attributes” came four days after the Department of Justice joined a lawsuit brought by fair housing groups against Facebook in federal court in New York City. The suit contends that advertisers could use Facebook’s options to prevent racial and religious minorities and other protected groups from seeing housing ads.

Raising the prospect of tighter regulation, the Justice Department said that the Communications Decency Act of 1996, which gives immunity to internet companies from liability for content on their platforms, did not apply to Facebook’s advertising portal. Facebook has repeatedly cited the act in legal proceedings in claiming immunity from anti-discrimination law. Congress restricted the law’s scope in March by making internet companies more liable for ads and posts related to child sex-trafficking.

Around the same time the Justice Department intervened in the lawsuit, the Department of Housing and Urban Development (HUD) filed a formal complaint against Facebook, signaling that it had found enough evidence during an initial investigation to raise the possibility of legal action against the social media giant for housing discrimination. Facebook has said that its policies strictly prohibit discrimination, that over the past year it has strengthened its systems to protect against misuse, and that it will work with HUD to address the concerns.

“The Fair Housing Act prohibits housing discrimination including those who might limit or deny housing options with a click of a mouse,” Anna María Farías, HUD’s assistant secretary for fair housing and equal opportunity, said in a statement accompanying the complaint. “When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it’s the same as slamming the door in someone’s face.”

Regulators in at least one state are also scrutinizing Facebook. Last month, the state of Washington imposed legally binding compliance requirements on the company, barring it from offering advertisers the option of excluding protected groups from seeing ads about housing, credit, employment, insurance or “public accommodations of any kind.”

Advertising is the primary source of revenue for the social media giant, which is under siege on several fronts. A recent study and media coverage have highlighted how hate speech and false rumors on Facebook have spurred anti-refugee discrimination in Germany and violence against minority ethnic groups such as the Rohingya in Myanmar. This week, Facebook said it had found evidence of Russian and Iranian efforts to influence elections in the U.S. and around the world through fake accounts and targeted advertising. It also said it had suspended more than 400 apps “due to concerns around the developers who built them or how the information people chose to share with the app may have been used.”

Facebook declined to identify most of the 5,000 options being removed, saying that the information might help bad actors game the system. It did say that the categories could enable advertisers to exclude racial and religious minorities, and it provided four examples that it deleted: “Native American culture,” “Passover,” “Evangelicalism” and “Buddhism.” It said the changes will be completed next month.

According to Facebook, these categories have not been widely used by advertisers to discriminate, and their removal is intended to be proactive. In some cases, advertisers legitimately use these categories to reach key audiences. According to targeting data from ads submitted to ProPublica’s Political Ad Collector project, Jewish groups used the “Passover” category to promote Jewish cultural events, and the Michael J. Fox Foundation used it to find people of Ashkenazi Jewish ancestry for medical research on Parkinson’s disease.

Facebook is not limiting advertisers’ options for narrowing audiences by age or sex. The company has defended age-based targeting in employment ads as beneficial for employers and job seekers. Advertisers may also still target or exclude by ZIP code — which critics have described as “digital red-lining” but Facebook says is standard industry practice.

A pending suit in federal court in San Francisco alleges that, by allowing employers to target audiences by age, Facebook is enabling employment discrimination against older job applicants. Peter Romer-Friedman, a lawyer representing the plaintiffs in that case, said that Facebook’s removal of the 5,000 options “is a modest step in the right direction.” But allowing employers to sift job seekers by age, he added, “shows what Facebook cares about: its bottom line. There is real money in age-restricted discrimination.”

Senators Bob Casey of Pennsylvania and Susan Collins of Maine have asked Facebook for more information on what steps it is taking to prevent age discrimination on the site.

The issue of discriminatory advertising on Facebook arose in October 2016 when ProPublica revealed that advertisers on the platform could narrow their audiences by excluding so-called “ethnic affinity” categories such as African-Americans and Spanish-speaking Hispanics. At the time, Facebook promised to build a system to flag and reject such ads. However, a year later, we bought dozens of rental housing ads that excluded protected categories. They were approved within seconds. So were ads that excluded older job seekers, as well as ads aimed at anti-Semitic categories such as “Jew hater.”

The removal of the 5,000 options isn’t Facebook’s first change to its advertising portal in response to such criticism. Last November, it added a self-certification option, which asks housing advertisers to check a box agreeing that their advertisement is not discriminatory. The company also plans to require advertisers to read educational material on the site about ethical practices.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Whirlpool's Online Product Registration: Confidentiality and Privacy Concerns

Earlier this month, my wife and I relocated to a different city within the same state to live closer to our new, 14-month-old grandson. During the move, we bought new home appliances -- a clothes washer and dryer, both made by Whirlpool -- which prompted today's blog post.

The packaging and operation instructions included two registration postcards with the model and serial numbers printed on the form. Nothing controversial about that. The registration cards included a section titled "Other Easy Ways To Register," which listed registration websites for both the United States and Canada. I tried the online registration to see what improvements or benefits Whirlpool's United States registration site might offer over the old-school snail-mail method besides speed.

The landing page includes a form for the customer's contact information, product purchased information, and future purchase plans. Pretty standard stuff. Nothing alarming there. Near the bottom of the form and just above the "Complete Registration" button are links to Whirlpool's Terms & Conditions and Privacy policies. I read both and found some surprises.

First, the site uses inconsistent nomenclature: two different policy titles. The link says "Terms & Conditions" while the title of the actual policy page states, "Terms Of Use." Which is it? Inconsistent nomenclature can confuse users. Not good. Come on, Whirlpool! This is not hard. Good website usability includes consistent use of the same page title, so users know where they are going when they select a link, and know they've arrived at the expected destination.

Second, the Terms Of Use (well, I had to pick a title so it would be clear for you) policy page lacks a date. This is confusing, making it difficult, if not impossible, for consumers to know and reference the exact document they read, and to determine what, if any, changes were posted since the prior version. Not good. Come on, Whirlpool! Add a publication date. It's not hard.

Third, the Terms Of Use policy contained this clause:

"Whirlpool Corporation welcomes your submissions; however, any information submitted, other than your personal information (for example, your name and e-mail address), to Whirlpool Corporation through this site is the exclusive property of Whirlpool Corporation and is considered NOT to be confidential. Whirlpool Corporation does not receive the submission in confidence or under any confidential or fiduciary relationship. Whirlpool Corporation may use the submission for any purpose without restriction or compensation."

So, the Terms Of Use policy is both vague and clear at the same time. It is vague because it doesn't list the exact data elements considered "personal information." Not good. This leaves consumers to guess. The policy lists only two data elements. What about the rest? Are all confidential, or only some? And if some, which ones? Here's the list I consider confidential: name, street address, country, phone number, e-mail address, IP address, device type, device model, device operating system, payment card information, billing address, and online credentials (should I create a profile at the Whirlpool site). Come on, Whirlpool! Get it together and provide the complete list of data elements you consider "personal information." It's not hard.

Fourth, the Terms Of Use policy is also clear, because the sentences quoted above make Whirlpool's intentions plain: submissions to the site other than "personal information" are not confidential, and Whirlpool can do with them whatever it wants. Since the policy doesn't specify which data elements are personal, consumers can't tell which of their submissions are protected. Not good.

Next, I read Whirlpool's Privacy policy, and hoped that it would clarify things. Thankfully, a little good news. First, the Privacy policy listed a date: May 31, 2018. Second, more inconsistent site nomenclature: the page-bottom links across the site say "Privacy Policy" while the policy page title says "Privacy Statement." I selected the "Expand All" button to view the entire policy. Third, Whirlpool's Privacy Statement listed the items considered personal information:

"- Your contact information, such as your name, email address, mailing address, and phone number
- Your billing information, such as your credit card number and billing address
- Your Whirlpool account information, including your user name, account number, and a password
- Your product and ownership information
- Your preferences, such as product wish lists, order history, and marketing preferences"

This list is a good start. A simple link to this section from the Terms Of Use policy would do wonders to clarify things. However, Whirlpool also collects key data that it treats far more freely than "personal information." The Privacy Statement contains this clause:

"Whirlpool and its business partners and service providers may use a variety of technologies that automatically or passively collect information about how you interact with our Websites ("Usage Information"). Usage Information may include: (i) your IP address, which is a unique set of numbers assigned to your computer by your Internet Service Provider (ISP) (which, depending on your ISP, may be a different number every time you connect to the Internet); (ii) the type of browser and operating system you use; and (iii) other information about your online session, such as the URL you came from to get to our Websites and the date and time you visited our Websites."

And, the Privacy Statement mentions the use of several online tracking technologies:

"We use Local Shared Objects (LSOs) such as HTML5 or Flash on our Websites to store content information and preferences. Third parties with whom we partner to provide certain features on our Websites or to display advertising based upon your web browsing activity use LSOs such as HTML5 or Flash to collect and store information... Web beacons are tiny electronic image files that can be embedded within a web page or included in an e-mail message, and are usually invisible to the human eye. When we use web beacons within our web pages, the web beacons (also known as “clear GIFs” or “tracking pixels”) may tell us such things as: how many people are coming to our Websites, whether they are one-time or repeat visitors, which pages they viewed and for how long, how well certain online advertising campaigns are converting, and other similar Website usage data. When used in our e-mail communications, web beacons can tell us the time an e-mail was opened, if and how many times it was forwarded, and what links users click on from within the e- mail message."

While the "EU-US Privacy Shield" section of the privacy policy lists Whirlpool's European subsidiaries, and contains a Privacy Shield link to an external site listing the companies that are probably some of Whirlpool's service and advertising partners, the privacy policy really does not disclose all of the "third parties," "business partners," "service vendors," advertising partners, and affiliates Whirlpool shares data with. Consumers are left in the dark.

Last, the "Your Rights: Choice & Access" section of the privacy policy mentions the opt-out mechanism for consumers. While consumers can opt out of receiving marketing (e.g., promotional) messages from Whirlpool, they cannot opt out of the data collection and archival. So, choice is limited.

Given this and the above concerns, I abandoned the product registration form. Yep. Didn't complete it. Maybe I will in the future after Whirlpool fixes things. Perhaps most importantly, today's blog post is a reminder for all consumers: always read companies' privacy and terms-of-use policies. Always. You never know what you'll find that is irksome. And, if you don't know how to read online policies, this blog has some tips and suggestions.


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships and trucks, or preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.
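To illustrate what auditing for such bias looks like in practice, here is a minimal sketch, with entirely made-up data, of the kind of per-group accuracy comparison studies like Buolamwini's perform:

```python
# Toy disparity audit: compare a classifier's accuracy across subgroups.
# The records below are fabricated for illustration only.
from collections import defaultdict

# (group, predicted_label, true_label) records from an evaluation set.
results = [
    ("lighter-skinned men",  "male",   "male"),
    ("lighter-skinned men",  "male",   "male"),
    ("darker-skinned women", "male",   "female"),  # misclassification
    ("darker-skinned women", "female", "female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    correct[group] += (predicted == actual)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy")
# A large accuracy gap between groups is the signal such audits look for.
```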

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important take-away. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's in a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about retail stores tracking shoppers via WiFi connections. By 2018, smart mannequins were also being used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows the identified users to accept or decline name labels for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching while suppressing name labels for users who declined or disabled them. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
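As a rough illustration of the matching step described above, here is a hedged sketch using made-up, low-dimensional "facial pattern codes." Real systems use learned, high-dimensional embeddings and tuned thresholds; nothing here is Facebook's actual code:

```python
# Toy face matching: compare an unknown face's code against a database
# of named codes. All vectors and the threshold are fabricated examples.
from typing import Optional
import numpy as np

named_db = {                         # enrolled users with face recognition ON
    "alice": np.array([0.11, 0.82, 0.40]),
    "bob":   np.array([0.90, 0.15, 0.33]),
}

def match(unknown: np.ndarray, threshold: float = 0.95) -> Optional[str]:
    best_name, best_score = None, -1.0
    for name, code in named_db.items():
        # Cosine similarity between facial-pattern codes.
        score = float(code @ unknown /
                      (np.linalg.norm(code) * np.linalg.norm(unknown)))
        if score > best_score:
            best_name, best_score = name, score
    # Per Facebook's description: no match means the facial data is discarded.
    return best_name if best_score >= threshold else None

print(match(np.array([0.12, 0.80, 0.41])))  # -> "alice"
print(match(np.array([0.50, 0.50, 0.50])))  # -> None (deleted, per the stated policy)
```

Note the asymmetry the critics point to: performing this comparison at all requires scanning every face, including those of people who turned the setting off.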

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in 2017 Apple launched its iPhone X with the Face ID feature for users to unlock their phones. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a brick-and-mortar store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent by shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


Researchers Find Mobile Apps Can Easily Record Screenshots And Videos of Users' Activities

New academic research highlights how easy it is for mobile apps to both spy upon consumers and violate our privacy. During a recent study to determine whether or not smartphones record users' conversations, researchers at Northeastern University (NU) found:

"... that some companies were sending screenshots and videos of user phone activities to third parties. Although these privacy breaches appeared to be benign, they emphasized how easily a phone’s privacy window could be exploited for profit."

The NU researchers tested 17,260 of the most popular mobile apps running on smartphones using the Android operating system. About 9,000 of the 17,260 apps had the ability to take screenshots. The vulnerability: screenshot and video captures could easily be used to record users' keystrokes, passwords, and related sensitive information:

"This opening will almost certainly be used for malicious purposes," said Christo Wilson, another computer science professor on the research team. "It’s simple to install and collect this information. And what’s most disturbing is that this occurs with no notification to or permission by users."

The NU researchers found one app already recording video of users' screen activity (links added):

"That app was GoPuff, a fast-food delivery service, which sent the screenshots to Appsee, a data analytics firm for mobile devices. All this was done without the awareness of app users. [The researchers] emphasized that neither company appeared to have any nefarious intent. They said that web developers commonly use this type of information to debug their apps... GoPuff has changed its terms of service agreement to alert users that the company may take screenshots of their use patterns. Google issued a statement emphasizing that its policy requires developers to disclose to users how their information will be collected."

May? A brief review of the Appsee site seems to confirm that video recordings of the screens on app users' mobile devices are integral to the service:

"RECORDING: Watch every user action and understand exactly how they use your app, which problems they're experiencing, and how to fix them.​ See the app through your users' eyes to pinpoint usability, UX and performance issues... TOUCH HEAT MAPS: View aggregated touch heatmaps of all the gestures performed in each​ ​screen in your app.​ Discover user navigation and interaction preferences... REALTIME ANALYTICS & ALERTS:Get insightful analytics on user behavior without pre-defining any events. Obtain single-user and aggregate insights in real-time..."

Sounds like a version of "surveillance capitalism" to me. According to the Appsee site, a variety of companies use the service, including eBay, Samsung, Virgin airlines, The Weather Network, and several advertising networks. Plus, the Appsee Privacy Policy dated May 23, 2018 stated:

"The Appsee SDK allows Subscribers to record session replays of their end-users' use of Subscribers' mobile applications ("End User Data") and to upload such End User Data to Appsee’s secured cloud servers."

In this scenario, GoPuff is a subscriber and consumers using the GoPuff mobile app are end users. The Appsee SDK is software code embedded within the GoPuff mobile app. The researchers said that this vulnerability, "will not be closed until the phone companies redesign their operating systems..."
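For the technically curious, researchers typically detect this kind of leak by intercepting an app's network traffic and looking for media payloads headed to third-party hosts. Below is a rough sketch in that spirit, written as a mitmproxy addon; the analytics host name is a hypothetical placeholder, and this is not the study's actual code:

```python
# flag_media.py -- run with: mitmdump -s flag_media.py
# Flags intercepted app requests that upload image/video payloads to
# third-party analytics hosts. Host list is a made-up example.
from mitmproxy import http

ANALYTICS_HOSTS = {"api.analytics.example"}  # hypothetical placeholder
MEDIA_TYPES = ("image/", "video/")

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    content_type = flow.request.headers.get("content-type", "")
    if host in ANALYTICS_HOSTS and any(content_type.startswith(t) for t in MEDIA_TYPES):
        size = len(flow.request.content or b"")
        print(f"possible screen-capture upload: {flow.request.method} "
              f"{host}{flow.request.path} ({size} bytes, {content_type})")
```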

Data-analytics services like Appsee raise several issues. First, there seems to be little need for digital agencies to conduct traditional eye-tracking and usability test sessions, since companies can now record, upload, and archive what, when, where, and how often users swipe and select in-app content. Previously, users were invited to participate in such testing sessions and paid for their time.

Second, this in-app tracking and data collection amounts to perpetual, unannounced user testing. Previously, companies have gotten into plenty of trouble with their customers by performing secret user testing; especially when the service varies from the standard, expected configuration and the policies (e.g., privacy, terms of service) don't disclose it. Nobody wants to be a lab rat or crash-test dummy.

Third, surveillance agencies within several governments must be thrilled to learn of these new in-app tracking and spy tools, if they aren't already using them. A reasonable assumption is that Appsee also provides data to law enforcement upon demand.

Fourth, two of the researchers at NU are undergraduate students. Another startling disclosure:

"Coming into this project, I didn’t think much about phone privacy and neither did my friends," said Elleen Pan, who is the first author on the paper. "This has definitely sparked my interest in research, and I will consider going back to graduate school."

Given the tsunami of data breaches, privacy legislation in Europe, and demands by law enforcement for tech firms to build "back door" hacks into their mobile devices and smartphones, it is alarming that some college students "don't think much about phone privacy." This means that Pan and her classmates probably haven't read the privacy and terms-of-service policies for the apps and sites they've used. Maybe they will now.

Let's hope so.

Consumers interested in GoPuff should closely read the service's privacy and Terms of Service policies, since the latter includes dispute resolution via binding arbitration and prevents class-action lawsuits.

Hopefully, future studies about privacy and mobile apps will explore further the findings by Pan and her co-researchers. Download the study titled, "Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications" (Adobe PDF) by Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes.


Facebook’s Screening for Political Ads Nabs News Sites Instead of Politicians

[Editor's note: today's post, by reporters at ProPublica, discusses new advertising rules at the Facebook.com social networking service. It is reprinted with permission.]

By Jeremy B. Merrill and Ariana Tobin, ProPublica

One ad couldn’t have been more obviously political. Targeted to people aged 18 and older, it urged them to “vote YES” on June 5 on a ballot proposition to issue bonds for schools in a district near San Francisco. Yet it showed up in users’ news feeds without the “paid for by” disclaimer required for political ads under Facebook’s new policy designed to prevent a repeat of Russian meddling in the 2016 presidential election. Nor does it appear, as it should, in Facebook’s new archive of political ads.

The other ad was from The Hechinger Report, a nonprofit news outlet, promoting one of its articles about financial aid for college students. Yet Facebook’s screening system flagged it as political. For the ad to run, The Hechinger Report would have to undergo the multi-step authorization and authentication process of submitting Social Security numbers and identification that Facebook now requires for anyone running “electoral ads” or “issue ads.”

When The Hechinger Report appealed, Facebook acknowledged that its system should have allowed the ad to run. But Facebook then blocked another ad from The Hechinger Report, about an article headlined, “DACA students persevere, enrolling at, remaining in, and graduating from college.” This time, Facebook rejected The Hechinger Report’s appeal, maintaining that the text or imagery was political.

As these examples suggest, Facebook’s new screening policies to deter manipulation of political ads are creating their own problems. The company’s human reviewers and software algorithms are catching paid posts from legitimate news organizations that mention issues or candidates, while overlooking straightforwardly political posts from candidates and advocacy groups. Participants in ProPublica’s Facebook Political Ad Collector project have submitted 40 ads that should have carried disclaimers under the social network’s policy, but didn’t. Facebook may have underestimated the difficulty of distinguishing between political messages and political news coverage — and the consternation that failing to do so would stir among news organizations.

The rules require anyone running ads that mention candidates for public office, are about elections, or that discuss any of 20 “national issues of public importance” to verify their personal Facebook accounts and add a "paid for by" disclosure to their ads, which are to be preserved in a public archive for seven years. Advertisers who don’t comply will have their ads taken down until they undergo an "authorization" process, submitting a Social Security number, driver’s license photo, and home address, to which Facebook sends a letter with a code to confirm that anyone running ads about American political issues has an American home address. The complication is that the 20 hot-button issues — environment, guns, immigration, values, foreign policy, civil rights and the like — are likely to pop up in posts from news organizations as well.
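To see why such screening sweeps up news coverage, consider a toy keyword classifier. Facebook's real parameters are secret, so this sketch is purely illustrative of the failure mode, not the company's method:

```python
# Toy issue-keyword screen: any mention of a listed topic flags the ad.
# Issue list excerpted from the 20 topics reported in this article.
POLITICAL_ISSUES = {"education", "immigration", "guns", "environment",
                    "health", "values", "taxes", "abortion"}

def looks_political(ad_text: str) -> bool:
    words = set(ad_text.lower().replace(",", " ").split())
    return bool(words & POLITICAL_ISSUES)

# A news outlet's ad about its education reporting gets flagged...
print(looks_political("New reporting: how education financial aid falls short"))  # True
# ...while an overtly political ad that avoids the keywords slips through.
print(looks_political("Vote YES on June 5 on the school bond proposition"))       # False
```

The same mechanism produces both kinds of error seen here: false positives on journalism that mentions an issue, and false negatives on campaign messages phrased around the keyword list.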

"This could be really confusing to consumers because it’s labeling news content as political ad content," said Stefanie Murray, director of the Center for Cooperative Media at Montclair State University.

The Hechinger Report joined trade organizations representing thousands of publishers earlier this month in protesting this policy, arguing that the filter lumps their stories in with the very organizations and issues they are covering, thus confusing readers already wary of "fake news." Some publishers — including larger outlets like New York Media, which owns New York Magazine — have stopped buying ads on political content they expect would be subject to Facebook’s ad archive disclosure requirement.

"When it comes to news, Facebook still doesn’t get it. In its efforts to clear up one bad mess, it seems set on joining those who want blur the line between reality-based journalism and propaganda," Mark Thompson, chief executive officer of The New York Times, said in prepared remarks at the Open Markets Institute on Tuesday, June 12th.

In a statement Wednesday June 13th, Campbell Brown, Facebook’s head of global news partnerships, said the company recognized "that news content was different from political and issue advertising," and promised to create a "differentiated space within our archive to separate news content from political and issue ads." But Brown rejected the publishers’ request for a "whitelist" of legitimate news organizations whose ads would not be considered political.

"Removing an entire group of advertisers, in this case publishers, would go against our transparency efforts and the work we’re doing to shore up election integrity on Facebook," she wrote."“We don’t want to be in a position where a bad actor obfuscates its identity by claiming to be a news publisher." Many of the foreign agents that bought ads to sway the 2016 presidential election, the company has said, posed as journalistic outlets.

Her response didn’t satisfy news organizations. Facebook "continues to characterize professional news and opinion as ‘advertising’ — which is both misguided and dangerous," said David Chavern, chief executive of the News Media Alliance — a trade association representing 2,000 news organizations in the U.S. and Canada — and co-author of an open letter to Facebook on June 11.

ProPublica asked Facebook to explain its decision to block 14 advertisements shared with us by news outlets. Of those, 12 were ultimately rejected as political content, one was overturned on appeal, and one Facebook could not locate in its records. Most of these publications, including The Hechinger Report, are affiliated with the Institute for Nonprofit News, a consortium of mostly small nonprofit newsrooms that produce primarily investigative journalism (ProPublica is a member).

Here are a few examples of news organization ads that were rejected as political:

  • Voice of Monterey Bay tried to boost an interview with labor leader Dolores Huerta headlined "She Still Can." After the ad ran for about a day, Facebook sent an alert that the ad had been turned off. The outlet is refusing to seek approval for political ads, “since we are a news organization,” said Julie Martinez, co-founder of the nonprofit news site.
  • Ensia tried to advertise an article headlined: "Opinion: We need to talk about how logging in the Southern U.S. is harming local residents." It was rejected as political. Ensia will not appeal or buy new ads until Facebook addresses the issue, said senior editor David Doody.
  • inewsource tried to promote a post about a local candidate, headlined: "Scott Peters’ Plea to Get San Diego Unified Homeless Funding Rejected." The ad was rejected as political. inewsource appealed successfully, but then Facebook changed its mind and rejected it again, a spokeswoman for the social network said.
  • BirminghamWatch tried to boost a post about a story headlined, "‘That is Crazy:’ 17 Steps to Cutting Checks for Birmingham Neighborhood Projects." The ad was rejected as political and rejected again on appeal. A little while later, BirminghamWatch’s advertiser on the account received a message from Facebook: "Finish boosting your post for $15, up to 15,000 people will see it in NewsFeed and it can get more likes, comments, and shares." The nonprofit news site appealed again, and the ad was rejected again.

For most of its history, Facebook treated political ads like any other ads. Last October, a month after disclosing that "inauthentic accounts… operated out of Russia" had spent $100,000 on 3,000 ads that "appeared to focus on amplifying divisive social and political messages," the company announced it would implement new rules for election ads. Then in April, it said the rules would also apply to issue-related ads.

The policy took effect last month, at a time when Facebook’s relationship with the news industry was already rocky. A recent algorithm change reduced the number of posts from news organizations that users see in their news feed, thus decreasing the amount of traffic many media outlets can bring in without paying for wider exposure, and frustrating publishers who had come to rely on Facebook as a way to reach a broader audience.

Facebook has pledged to assign 3,000-4,000 "content moderators" to monitor political ads, but hasn’t reached that staffing level yet. The company told ProPublica that it is committed to meeting the goal by the U.S. midterm elections this fall.

To ward off "bad actors who try to game our enforcement system," Facebook has kept secret its specific parameters and keywords for determining if an ad is political. It has published only the list of 20 national issues, which it says is based in part on a data-coding system developed by a network of political scientists called the Comparative Agendas Project. A director on that project, Frank Baumgartner, said the lack of transparency is problematic.

"I think [filtering for political speech] is a puzzle that can be solved by algorithms and big data, but it has to be done right and the code needs to be transparent and publicly available. You can’t have proprietary algorithms determining what we see," Baumgartner said.

However Facebook’s algorithms work, they are missing overtly political ads. Incumbent members of Congress, national advocacy groups and advocates of local ballot initiatives have all run ads on Facebook without the social network’s promised transparency measures, after they were supposed to be implemented.

Ads from Senator Jeff Merkley, Democrat-Oregon, Representative Don Norcross, Democrat-New Jersey, and Representative Pramila Jayapal, Democrat-Washington, all ran without disclaimers as recently as this past Monday. So did an ad from Alliance Defending Freedom, a right-wing group that represented a Christian baker whose refusal for religious reasons to make a wedding cake for a gay couple was upheld by the Supreme Court this month. And ads from NORML, the marijuana legalization advocacy group, and MoveOn, the liberal organization, ran for weeks before being taken down.

ProPublica asked Facebook why these ads weren’t considered political. The company said it is reviewing them. "Enforcement is never perfect at launch," it said.

Clarification, June 15, 2018: This article has been updated to include more specific information about the kinds of advertising New York Media has stopped buying on Facebook’s platform.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


What Facebook’s New Political Ad System Misses

[Editor's Note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

By Jeremy B. Merrill, Ariana Tobin, and Madeleine Varner, ProPublica

Facebook’s long-awaited change in how it handles political advertisements is only a first step toward addressing a problem intrinsic to a social network built on the viral sharing of user posts.

The company’s approach, a searchable database of political ads and their sponsors, depends on the company’s ability to sort through huge quantities of ads and identify which ones are political. Facebook is betting that a combination of voluntary disclosure and review by both people and automated systems will close a vulnerability that was famously exploited by Russian meddlers in the 2016 election.

The company is doubling down on tactics that so far have not prevented the proliferation of hate-filled posts, or of ads that use Facebook’s capability to target ads to particular groups.

If the policy works as Facebook hopes, users will learn who has paid for the ads they see. But the company is not revealing details about a significant aspect of how political advertisers use its platform — the specific attributes the ad buyers used to target a particular person for an ad.

Facebook’s new system is the company’s most ambitious response thus far to the now-documented efforts by Russian agents to circulate items that would boost Donald Trump’s chances or suppress Democratic turnout. The new policies announced Thursday will make it harder for somebody trying to exploit the precise vulnerabilities in Facebook’s system exploited by the Russians in 2016 in several ways:

First, political ads that you see on Facebook will now include the name of the organization or person who paid for it, reminiscent of disclaimers required on political mailers and TV ads. (The ads Facebook identified as placed by Russians carried no such tags.)

The Federal Election Commission requires political ads to carry such clear disclosures, but, as we have reported, many candidates and groups on Facebook haven’t been following that rule.

Second, all political ads will be published in a searchable database.

Finally, the company will now require that anyone buying a political ad in their system confirm that they’re a U.S. resident. Facebook will even mail advertisers a postcard to make certain they’re in the U.S. Facebook says ads by advertisers whose identities aren’t verified under this process will be taken down starting in about a week, and they will be blocked from buying new ads until they have verified themselves.

While the new system can still be gamed, the specific tactics used by the Russian Internet Research Agency, such as an overseas purchase of ads promoting a Black Lives Matter rally under the name “Blacktivist,” will become harder — or at least harder to do without getting caught.

The company has also pledged to devote more employees to the issue, including 3,000-4,000 more content moderators. But Facebook says these will not be additional hires — they will be included in the 20,000 already promised to tackle various moderation issues in the coming months.

What Is Facebook Missing?

The most obvious flaw in Facebook’s new system is that it misses ads it should catch. Right now, it’s easy to find political ads that are missing from their archive. Take this one, from the Washington State Democratic Party. Just minutes after Facebook finished announcing its launch of the tool, a participant in ProPublica’s Facebook Political Ad Collector project saw this ad, criticizing Republican congresswoman Cathy McMorris Rodgers… but it wasn’t in the database.

And there are others.

The company acknowledged that the process is still a work in progress, reiterating its request that users pitch in by reporting the political ads that lack disclosures.

Even as Facebook’s system gets better at identifying political ads, the company is withholding a critical piece of information in the ads it’s publishing. While we’ll see some demographic information about who saw a given ad, Facebook is not indicating which audiences the advertiser intended to target — categories that often include racial or political characteristics and which have been controversial in the past.

This information is critical to researchers and journalists trying to make sense of political advertising on Facebook. Take, for instance, this ad promoting the environmental benefits of nuclear power, from a group called Nuclear Matters: the group chose specifically to show it to people interested in veganism — a fact we wouldn’t know from looking at the demographics of the users who saw the ad.

Facebook said it considers the information about who saw an ad — age, gender and location — sufficient. Rob Leathern, Facebook’s Director of Product Management, said that the limited demographics-only breakdown “offers more transparency than the intent, in terms of showing the targeting.”

The company is also promising to launch an API, a technical tool which will allow outsiders to write software that would look for patterns in the new ad database. The company says it will launch an API “later this summer” but hasn’t said what data it will contain or who will have access to it.

ProPublica’s own Facebook Ad Collector tool, which also collects political ads spotted on Facebook, has an API that can be accessed by anyone. It also includes the targeting information — which users can also see on each ad that they view.

Facebook said it would not release data about ads flagged by users as political and then rejected by the system. We’re curious about those, and we know firsthand that their software can be imperfect. We’ve attempted to buy ads specifically about our journalism that were flagged as problematic — because the ads “contained profanity,” or were misclassified as discriminatory ads for “employment, credit or housing opportunities” by mistake.

Facebook’s track record on initiatives aimed at improving the transparency of its massively profitable advertising system is spotty. The company has said it’s going to rely in part on artificial intelligence to review ads — the same sort of technology that the company said in the past it would use to block discriminatory ads for housing, employment and credit opportunities.

When we tested the system almost a year after a ProPublica story showed Facebook was allowing advertisers to target housing ads in a way that violated Fair Housing Act protections, we found that the company was still approving housing ads that excluded African-Americans and other “multicultural affinities” from seeing them. The company was pressured to implement several changes to its ad portal and a Fair Housing group filed a lawsuit against the company.

Facebook also plans to rely in part on users to find and report political ads that get through the system without the required disclosures.

But its track record of moderating user-flagged content — when it comes to both hate speech and advertising — has been uneven. Last December, ProPublica brought 49 cases of user-flagged offensive speech to Facebook, and the company acknowledged that its moderators had made the wrong call in 22 of them.

The company admits it's playing a “cat and mouse game” with people trying to pass political ads through their system unnoticed. Just last month, Ohio Democratic gubernatorial candidate Richard Cordray’s campaign ran Facebook ads criticizing his opponent — but from a page called “Ohio Primary Info.”

The need for ad transparency goes way beyond Russian bad actors. Our tool has already caught scams and malware disguised as politics, which users raised as a problem years before Facebook made any meaningful change.

If you flag an ad to Facebook, please report it to us as well by sending an email to [email protected]. We will be watching to see how well Facebook responds when users flag an ad.

How Will They Enforce the New Rules?

It’s one thing to create a set of rules, and another to enforce them consistently and on a large scale.

Facebook, which kept its content moderation and hate speech policies secret until they were revealed by ProPublica, won’t share the specific rules governing political ad content or details about the instructions moderators receive.

Leathern said the company is keeping the rules secret to frustrate the efforts of “bad actors who try to game our enforcement systems.”

Facebook has said it’s looking to flag both electoral ads and those that take a position on its list of twenty “national legislative issues of public importance.” These range from the concrete, like “abortion” and “taxes,” to broad topics like “health” and “values.”

Facebook acknowledges its system will make mistakes and says it will improve over time. Ads for specific candidates are relatively easy to detect. “We’ll likely miss ads when they aim to persuade,” said Katie Harbath, Facebook’s Global Politics and Government Outreach Director.

We plan to keep an eye out for ads that don’t make it into the archive. We’ll be looking for ads that our Political Ad Collector tool finds that aren’t in Facebook’s database.

Want to Help?

We need your help building out our independent database of political ads! If you’re still reading this article, we’re giving you permission to stop and install the Political Ad Collector extension. Here’s what you need to know about how it works.

You can also help us find other people who can install the tool. We are especially in need of people who aren’t ProPublica readers already. We need people from a diverse set of backgrounds, and with different perspectives and political beliefs. Please encourage your friends and relatives — especially the ones you avoid talking politics with — to install it.

Do You Work at a News Outlet and Want to Partner With Us on This?

Awesome. We’re already working with quite a few newsrooms all over the world, including the CBC in Canada, Bridge Magazine in Michigan, The Guardian in Australia and more.

In the U.S., we’re trying to get eyes and ears on the ground in as many local elections as possible. If your readers would be interested in joining our transparency effort, please reach out. We’re happy to send more information about this and our larger Electionland project.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, the infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


News Media Alliance Challenges Tech Companies To 'Accept Accountability' And Responsibility For Filtering News In Their Platforms

Last week, David Chavern, the President and CEO of the News Media Alliance (NMA), testified before the House Judiciary Committee. The NMA is a nonprofit trade association representing over 2,000 news organizations across the United States. Mr. Chavern's testimony focused upon the problem of fake news, often aided by social networking platforms.

His comments first described current conditions:

"... Quality journalism is essential to a healthy and functioning democracy -- and my members are united in their desire to fight for its future.

Too often in today’s information-driven environment, news is included in the broad term "digital content." It’s actually much more important than that. While some low-quality entertainment or posts by friends can be disappointing, inaccurate information about world events can be immediately destructive. Civil society depends upon the availability of real, accurate news.

The internet represents an extraordinary opportunity for broader understanding and education. We have never been more interconnected or had easier and quicker means of communication. However, as currently structured, the digital ecosystem gives tremendous viewpoint control and economic power to a very small number of companies – the tech platforms that distribute online content. That control and power must come with new responsibilities... Historically, newspapers controlled the distribution of their product; the news. They invested in the journalism required to deliver it, and then printed it in a form that could be handed directly to readers. No other party decided who got access to the information, or on what terms. The distribution of online news is now dominated by the major technology platforms. They decide what news is delivered and to whom – and they control the economics of digital news..."

Last month, a survey found that roughly two-thirds of U.S. adults (68%) use Facebook.com, and about three-quarters of those use the social networking site daily. In 2016, a survey found that 62 percent of adults in the United States get their news from social networking sites. The corresponding statistic in 2012 was 49 percent. That 2016 survey also found that fewer social media users get their news from other platforms: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites/apps (28 percent), radio (25 percent), and print newspapers (20 percent).

Mr. Chavern then described the problems with two specific tech companies:

"The First Amendment prohibits the government from regulating the press. But it doesn’t prevent Facebook and Google from acting as de facto regulators of the news business.

Neither Google nor Facebook are – or have ever been – "neutral pipes." To the contrary, their businesses depend upon their ability to make nuanced decisions through sophisticated algorithms about how and when content is delivered to users. The term “algorithm” makes these decisions seem scientific and neutral. The fact is that, while their decision processes may be highly-automated, both companies make extensive editorial judgments about accuracy, relevance, newsworthiness and many other criteria.

The business models of Facebook and Google are complex and varied. However, we do know that they are both immense advertising platforms that sell people’s time and attention. Their "secret algorithms" are used to cultivate that time and attention. We have seen many examples of the types of content favored by these systems – namely, click-bait and anything that can generate outrage, disgust and passion. Their systems also favor giving users information like that which they previously consumed, thereby generating intense filter bubbles and undermining common understandings of issues and challenges.

All of these things are antithetical to a healthy news business – and a healthy democracy..."
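
Mr. Chavern's point about algorithmic favoritism can be made concrete. The toy sketch below, in TypeScript, is not any platform's actual ranking code; it simply scores feed items by engagement potential plus similarity to what a user already consumed. Even this crude rule is enough to produce the filter-bubble effect he describes. All field names and weights are illustrative assumptions.

```typescript
// Toy sketch of engagement-first feed ranking -- not any platform's real
// algorithm. Items are scored by "outrage" (a proxy for click-bait appeal)
// plus similarity to what the user already consumed, which is enough to
// reproduce the filter-bubble effect described above.

interface Item {
  topic: string;
  outrageScore: number; // hypothetical engagement signal
}

function rankFeed(items: Item[], history: string[]): Item[] {
  const score = (item: Item): number => {
    // Count how often the user already consumed this topic.
    const familiarity = history.filter((t) => t === item.topic).length;
    return item.outrageScore + 2 * familiarity; // familiar, outrage-heavy items win
  };
  return [...items].sort((a, b) => score(b) - score(a));
}

const feed = rankFeed(
  [
    { topic: "local-news", outrageScore: 1 },
    { topic: "celebrity-feud", outrageScore: 5 },
    { topic: "politics-team-a", outrageScore: 4 },
  ],
  ["politics-team-a", "politics-team-a"], // this user's reading history
);

console.log(feed.map((i) => i.topic));
// -> ["politics-team-a", "celebrity-feud", "local-news"]
```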

Earlier this month, Apple and Facebook executives exchanged criticisms about each other's business models and privacy practices. Mr. Chavern's testimony before Congress also described more problems and threats:

"Good journalism is factual, verified and takes into account multiple points of view. It can take a lot of time and investment. Most particularly, it requires someone to take responsibility for what is published. Whether or not one agrees with a particular piece of journalism, my members put their names on their product and stand behind it. Readers know where to send complaints. The same cannot be said of the sea of bad information that is delivered by the platforms in paid priority over my members’ quality information. The major platforms’ control over distribution also threatens the quality of news for another reason: it results in the “commoditization” of news. Many news publishers have spent decades – often more than a century – establishing their brands. Readers know the brands that they can trust — publishers whose reporting demonstrates the principles of verification, accuracy and fidelity to facts. The major platforms, however, work hard to erase these distinctions. Publishers are forced to squeeze their content into uniform, homogeneous formats. The result is that every digital publication starts to look the same. This is reinforced by things like the Google News Carousel, which encourages users to flick back and forth through articles on the same topic without ever noticing the publisher. This erosion of news publishers’ brands has played no small part in the rise of "fake news." When hard news sources and tabloids all look the same, how is a customer supposed to tell the difference? The bottom line is that while Facebook and Google claim that they do not want to be "arbiters of truth," they are continually making huge decisions on how and to whom news content is delivered. These decisions too often favor free and commoditized junk over quality journalism. The platforms created by both companies could be wonderful means for distributing important and high-quality information about the world. But, for that to happen, they must accept accountability for the power they have and the ultimate impacts their decisions have on our economic, social and political systems..."

Download Mr. Chavern's complete testimony. Industry watchers argue that recent changes by Facebook have hurt local news organizations. MediaPost reported:

"When Facebook changed its algorithm earlier this year to focus on “meaningful” interactions, publishers across the board were hit hard. However, local news seemed particularly vulnerable to the alterations. To assuage this issue, the company announced that it would prioritize news related to local towns and metro areas where a user resided... To determine how positively that tweak affected local news outlets, the Tow Center measured interactions for posts from publications coming from 13 metro areas... The survey found that 11 out of those 13 have consistently seen a drop in traffic between January 1 and April 1 of 2018, allowing the results to show how outlets are faring nine weeks after the algorithm change. According to the Tow Center study, three outlets saw interactions on their pages decrease by a dramatic 50%. These include The Dallas Morning News, The Denver Post, and The San Francisco Chronicle. The Atlanta Journal-Constitution saw interactions drop by 46%."

So, huge problems persist.

Early in my business career, I had the opportunity to develop and market an online service using content from Dow Jones News/Retrieval. That experience taught me that the news - hard news - reports who, what, when, and where something happened. Everything else is either opinion, commentary, analysis, an advertisement, or fiction. It is critical to know the differences and learn to spot each type. Otherwise, you are likely to be misled, misinformed, or fooled.


Many People Are Concerned About Facebook. Do Any Other Tech Companies Pose Privacy Threats?

The massive data breach involving Facebook and Cambridge Analytica focused attention and privacy concerns on the social networking giant. Reports about extensive tracking of users and non-users, testimony by its CEO before the U.S. Congress, and online tools allegedly allowing advertisers to violate federal housing laws have also focused attention on Facebook.

Are there any other tech or advertising companies about which consumers should have privacy concerns? What other companies collect massive amounts of information about consumers? It seems wise to look beyond Facebook in order to avoid missing significant threats.

To answer these questions, the Wall Street Journal compared Facebook and Google:

"... Alphabet Inc.’s Google is a far bigger threat by many measures: the volume of information it gathers, the reach of its tracking and the time people spend on its sites and apps... It’s likely that Google has shadow profiles on at least as many people as Facebook does, says Chandler Givens, chief executive of TrackOff, which develops software to fight identity theft. Google allows everyone, whether they have a Google account or not, to opt out of its ad targeting. Yet, like Facebook, it continues to gather your data... Google Analytics is far and away the web’s most dominant analytics platform. Used on the sites of about half of the biggest companies in the U.S., it has a total reach of 30 million to 50 million sites. Google Analytics tracks you whether or not you are logged in... Google uses, among other things, our browsing and search history, apps we’ve installed, demographics such as age and gender and, from its own analytics and other sources, where we’ve shopped in the real world. Google says it doesn’t use information from “sensitive categories” such as race, religion, sexual orientation or health..."

There's plenty more, so read the entire WSJ article. It's a good review, worthy of further discussion.

However, more companies pose privacy threats. Equifax, one of three major credit reporting agencies, easily makes my list. Its massive data breach affected half the population in the USA, plus persons worldwide. An investigation discovered several data security failures at Equifax.

Also on my list would be the U.S. Federal Communications Commission (FCC). Using some "light touch" legal ju-jitsu and vague promises of enabling infrastructure investments, the Republican-majority Commissioners and Trump appointee Ajit Pai at the FCC revoked broadband privacy protections for consumers last year... and punted broadband oversight responsibility to the U.S. Federal Trade Commission (FTC). This allowed corporate internet service providers (ISPs) to freely track and collect sensitive data about internet users without providing notice or opt-out mechanisms.

Uber also makes my list, given its massive data breach affecting 57 million persons. Earlier this month, the FTC announced a revised settlement agreement in which Uber:

"... failed to disclose a significant breach of consumer data that occurred in 2016 -- in the midst of the FTC’s investigation that led to the August 2017 settlement announcement... the revised settlement could subject Uber to civil penalties if it fails to notify the FTC of certain future incidents involving unauthorized access of consumer information... In announcing the original proposed settlement with Uber in August 2017, the FTC charged that the company had failed to live up to its claims that it closely monitored employee access to rider and driver data and that it deployed reasonable measures to secure personal information stored on a third-party cloud provider’s servers.

In the revised complaint, the FTC alleges that Uber learned in November 2016 that intruders had again accessed consumer data the company stored on its third-party cloud provider’s servers by using an access key an Uber engineer had posted on a code-sharing website... the intruders used the access key to download from Uber’s cloud storage unencrypted files that contained more than 25 million names and email addresses, 22 million names and mobile phone numbers, and 600,000 names and driver’s license numbers of U.S. Uber drivers and riders... Uber paid the intruders $100,000 through its third-party “bug bounty” program and failed to disclose the breach to consumers or the Commission until November 2017... the new provisions in the revised proposed order include requirements for Uber to submit to the Commission all the reports from the required third-party audits of Uber’s privacy program rather than only the initial such report..."

Yes, Wells Fargo bank makes my list, too. This blog post explains why. Who is on your list of the biggest privacy threats to consumers?


How Facebook Tracks Its Users, And Non-Users, Around the Internet

Many Facebook users wrongly believe that the social networking service doesn't track them around the internet when they aren't signed in. Also, many non-users of Facebook wrongly believe that they are not tracked.

Earlier this month, Consumer Reports explained the tracking:

"As you travel through the web, you’re likely to encounter Facebook Like or Share buttons, which the company calls Social Plugins, on all sorts of pages, from news outlets to shopping sites. Click on a Like button and you can see the number on the page’s counter increase by one; click on a Share button and a box opens up to let you post a link to your Facebook account.

But that’s just what’s happening on the surface. "If those buttons are on the page, regardless of whether you touch them or not, Facebook is collecting data," said Casey Oppenheim, co-founder of data security firm Disconnect."

This blog discussed social plugins back in 2010. However, the tracking includes more technologies:

"... every web page contains little bits of code that request the pictures, videos, and text that browsers need to display each item on the page. These requests typically go out to a wide swath of corporate servers—including Facebook—in addition to the website’s owner. And such requests can transmit data about the site you’re on, the browser you are using, and more. Useful data gets sent to Facebook whether you click on one of its buttons or not. If you click, Facebook finds out about that, too. And it learns a bit more about your interests.

In addition to the buttons, many websites also incorporate a Facebook Pixel, a tiny, transparent image file the size of just one of the millions of pixels on a typical computer screen. The web page makes a request for a Facebook Pixel, just as it would request a Like button. No user will ever notice the picture, but the request to get it is packaged with information... Facebook explains what data can be collected using a Pixel, such as products you’ve clicked on or added to a shopping cart, in its documentation for advertisers. Web developers can control what data is collected and when it is transmitted... Even if you’re not logged in, the company can still associate the data with your IP address and all the websites you’ve been to that contain Facebook code."
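
To see why merely loading a page can inform Facebook, consider what a tracking-pixel request carries. The sketch below is a minimal TypeScript illustration; the host and parameter names are hypothetical, not Facebook's actual Pixel API, but the mechanism is the one Consumer Reports describes: a one-pixel image request that carries page, referrer, and browser details, plus any cookies the browser holds for the tracker's domain.

```typescript
// Minimal sketch of a tracking-pixel request. The host and parameter names
// below are hypothetical, not Facebook's actual Pixel API; the mechanism is
// what matters. Runs in a browser context.

function firePixel(trackerHost: string, eventName: string): void {
  // A 1x1 image the user will never notice.
  const img = new Image(1, 1);

  const params = new URLSearchParams({
    ev: eventName,             // e.g., "PageView" or "AddToCart"
    url: window.location.href, // the page being read right now
    ref: document.referrer,    // the page the visitor came from
    ua: navigator.userAgent,   // browser and operating system details
  });

  // The browser automatically attaches any cookies it holds for
  // trackerHost, letting the tracker tie this visit to a cross-site
  // profile -- or fall back on the IP address if the visitor has no account.
  img.src = `https://${trackerHost}/tr?${params.toString()}`;
}

firePixel("tracker.example.com", "PageView");
```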

The article also explains "re-targeting" and how consumers who don't purchase anything at an online retail site will see advertisements later -- around the internet and not solely on the Facebook site -- for the items they viewed but did not purchase. Then, there is the database it assembles:

"In materials written for its advertisers, Facebook explains that it sorts consumers into a wide variety of buckets based on factors such as age, gender, language, and geographic location. Facebook also sorts its users based on their online activities—from buying dog food, to reading recipes, to tagging images of kitchen remodeling projects, to using particular mobile devices. The company explains that it can even analyze its database to build “look-alike” audiences that are similar... Facebook can show ads to consumers on other websites and apps as well through the company’s Audience Network."

So, several technologies are used to track both Facebook users and non-users, and assemble a robust, descriptive database. And, some website operators collaborate to facilitate the tracking, which is invisible to most users. Neat, eh?

Like it or not, internet users are automatically included in the tracking and data collection. Can you opt out? Consumer Reports also warns:

"The biggest tech companies don’t give you strong tools for opting out of data collection, though. For instance, privacy settings may let you control whether you see targeted ads, but that doesn’t affect whether a company collects and stores information about you."

Given this, one can conclude that Facebook is really a massive advertising network masquerading as a social networking service.

To minimize the tracking, consumers can:

  1. Disable the Facebook API platform on their Facebook accounts,
  2. Use the new tools by Facebook (e.g., see these step-by-step instructions) to review and disable the apps with access to their data,
  3. Use ad-blocking software (e.g., Adblock Plus, Ghostery),
  4. Use the opt-out mechanisms offered by the major data brokers,
  5. Use the OptOutPrescreen.com site to stop pre-approved credit offers, and
  6. Use VPN software and services.

If you use the Firefox web browser, configure it for Private Browsing and install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? Several web browsers offer Incognito Mode. And, you might try the Privacy Badger add-on instead. I've used it happily for years.

To combat "canvas fingerprinting" (i.e., tracking users by identifying the unique attributes of their computer, browser, and software), security experts have advised consumers to use different web browsers. For example, you'd use one browser only for online banking, and a different web browser for surfing the internet. However, this security method may not work much longer given the rise of cross-browser fingerprinting.
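
For readers curious about the mechanics, here is a simplified canvas-fingerprinting sketch in TypeScript. It is not any vendor's actual code: it draws a small scene and hashes the rendered pixels, which differ slightly across machines because of fonts, graphics hardware, and anti-aliasing, yielding a fairly stable per-device identifier.

```typescript
// Simplified canvas-fingerprinting sketch -- not any vendor's actual code.
// The same drawing commands render with tiny machine-specific differences
// (fonts, GPU, anti-aliasing), so hashing the pixels yields an identifier
// that is fairly stable for one device. Runs in a browser context.

async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d")!;

  // Draw text and shapes whose exact pixel output varies by machine.
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(100, 5, 80, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint-test", 4, 20);

  // Hash the rendered pixels into a short, stable identifier.
  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

canvasFingerprint().then((id) => console.log("canvas fingerprint:", id));
```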

It seems that an arms race is underway between software that helps users maintain privacy online and technologies that advertisers use to defeat that privacy. Would Facebook and its affiliates/partners use cross-browser fingerprinting? My guess: yes it would, just like any other advertising network.

What do you think?


Backpage Executive Pled Guilty In Three States. Several Other Executives Indicted

Late last week, the Washington Post reported:

"Carl Ferrer, the chief executive of Backpage.com whose name was conspicuously absent from an indictment of seven other Backpage officials unsealed Monday, has pleaded guilty in state courts in California and Texas and federal court in Arizona to charges of money laundering and conspiracy to facilitate prostitution. In addition, he agreed to testify against the men who co-founded Backpage with him, Michael Lacey and James Larkin, who remained in jail Thursday in Arizona on facilitating prostitution charges. Backpage, in addition to hosting thinly veiled ads for prostitution since 2004, was accused of hosting child sex trafficking ads on its site... Court records show that Ferrer pleaded guilty to conspiracy to facilitate prostitution and money laundering in federal court in Phoenix on April 5, with the hearing and documents sealed. Backpage.com also pleaded guilty, by Ferrer as the CEO, to a money laundering conspiracy in Phoenix, where Backpage was created. Ferrer then on Monday appeared in state court in Corpus Christi, Texas, where he personally pleaded guilty to money laundering..."


How To View The List Of Advertisers Tracking You On Facebook. Any Surprises On Your List?

The massive privacy and data security breach at Facebook.com involving Cambridge Analytica has heightened many users' sensitivity to the advertising practices of the social networking service. Many Facebook users want to know the exact list of advertisers tracking them.

How To View The List Of Advertisers Tracking You

How to view this list? It's easy. Sign into Facebook.com and navigate to Settings > Ads > Advertisers You've Interacted With. (When using a web browser, you'll have to click on the tiny arrow in the upper right portion of the page to access the drop-down menu.) Within the Ad Preferences page, click on the "Advertisers You've Interacted With" headline to open that module. When opened, it displays several lists of advertisers:

  1. Who've added their contact list to Facebook
  2. Whose website or app you've used
  3. Whom you've visited
  4. More

The default view of list #1 displays 12 advertisers tracking you. There probably are many more in your list. Select "Show More" to view more advertisers. Facebook doesn't make it easy. The module lacks a "Show All" button, which forces users to repeatedly select "Show More." Not good. Come on Facebook! You can do better.

List #1 includes important explanatory text:

"These advertisers are running ads using a contact list they uploaded that includes your contact info. This info was collected by the advertiser, typically after you shared your email address with them or another business they've partnered with."

The key phrase to remember: or another business they've partnered with. So, list #1 includes not only advertisers but also their affiliates and business partners. Not good. More Facebook being Facebook.

I selected "Show More" about two dozen times to view my complete list: 235 advertisers tracking me, and collecting data about me. 235 advertisers even though I never used the Facebook mobile app, and had already disabled the Facebook API platform on my account years ago! Not good.

Your mileage will vary. There may be fewer or more advertisers on your list.

My list #1 included both advertisers I expected and many I didn't expect. The advertisers I expected to see included brands I currently do business with (e.g., Marriott Rewards, ACLU), brands I no longer do business with (e.g., Bank of America, AT&T), and brands whose Facebook pages I "Liked" or left comments on. The advertisers I didn't expect to see included politicians in other states I've neither visited nor lived in, brands I've never purchased from nor interacted with in any manner, brands I have never "Liked," and more.

Who's on your list? A friend shared:

"I looked at my list and it's crazy. Will follow the opt-out links tomorrow and clear them out. Cardi B was in my list of FB advertisers."

A rapper? That's too funny. I guess that's to be expected if you stream and share music online via Facebook. Me? I don't stream music online because that is another way to be tracked. Instead, I enjoy listening to CDs privately in my home. I prefer to keep my home a truly private place.

What's really going on here? Why the crazy long list? Popular Science explained:

"You, can thank the "data providers" for this mess. Mark Zuckerberg spent roughly 11 hours testifying in front of Congressional committees... One thing that got very little attention was the concept of “data brokers,” middleman businesses that collect consumer information and sell it to companies. Facebook stopped using them just last month. However, that long string of companies, personalities, and alternative rock bands is a result of Facebook’s old program... after the Cambridge Analytica scandal broke, but before Mark Zuckerberg’s marathon testimony in front of Congress, Facebook announced that it was ending a program called Partner Categories, canceling a long-standing relationship between the social network and data brokers. The change was announced in a short statement, but it has big implications for your personal information and the agencies that collect and sell it."

"The ability to target advertising is what makes Facebook its money—roughly $40 billion last year... while you provide lots of user information to Facebook, advertisers typically want even more... and that’s where data brokers come in. Facebook calls on brokers like Acxiom, Epsilon, and TransUnion to act as a conduit between Facebook and individual advertisers looking to reach targeted audiences..."

Readers of this blog may recognize TransUnion, one of the three major credit reporting agencies. So, the "advertisers" on Facebook tracking you (and harvesting your data) include a variety of entities: traditional advertisers, business partners, affiliates, data brokers, and their intermediaries.

It's called "surveillance capitalism" for good reasons. Many companies besides Facebook do it.

What To Do Next

It's not easy to opt out or delete items from your advertising list. For those brands and entities you have "Liked," you can visit their Facebook page and "Unlike" them. However, that won't stop them or other "advertisers" from re-targeting (and tracking) you in the future. The "Ad Preferences" page for your profile also includes the "Your Information" module where you can toggle on or off advertising based upon certain profile elements:

The "Your Information" module within Facebook's Ad Preferences page

The above image is from 2017. Back then, I disabled all of the active toggles you see. Deactivating these toggles might minimize the number of ads displayed, but it won't stop the tracking and data collection. The Popular Science article includes links to several opt-out mechanisms for major data brokers. You could (and should) use those. However, two key problems remain.

First, these opt-out links should be easily accessible within Facebook. They aren't. This forces consumers to waste time hunting for the opt-out mechanisms, when Facebook has the expertise to provide them. Facebook probably knows that many consumers will give up and quit, rather than hunt for opt-out links. It's great that Popular Science did a lot of the work for consumers.

Second, the opt-out mechanisms offered by some data brokers are unnecessarily complex. Example: see the opt-out mechanisms offered by Experian, another credit reporting agency:

Experian opt-out site pages

Didn't know that Experian plays in both ponds: credit reporting and data brokerage? Most people probably don't. Experian's site lacks a unified, single opt-out mechanism, which forces consumers to wade through seven different mechanisms and methods, some of which are paper-based and lack an online option. Not good!

TransUnion's opt-out mechanism isn't much better, and it raises more questions than it answers. It links to the OptOutPrescreen.com site, which I completed way back in 2007. Did my Facebook membership undo that? Or is there some other data sharing at work which OptOutPrescreen doesn't cover? TransUnion's page doesn't explain, and neither does Facebook's. Not good.

Some people choose to use ad-blocking software (e.g., Adblock Plus, Ghostery) to suppress the display of online ads, but that probably won't stop the tracking and data collection internal to Facebook. There's no substitute for Facebook giving its users internal tools to completely disable and opt out of the tracking and data collection.

That highlights another problem: users are automatically included, so the burden is upon users to (continually) opt out. This is Facebook's business model. The reverse should be the default: users should not be tracked, nor their data harvested, unless they register and opt into the program. And given the social media site's business model, even if you opt out today, there's nothing stopping Facebook from re-subscribing you in the future with any updates to its system or terms of use.

How many advertisers are on your list? 200 or more? 300? 400? Any surprises on your list?


How To Check If Your Information Was Collected By Cambridge Analytica In The Facebook Breach

You've probably heard about the massive privacy and data security breach at Facebook.com, where users' information, plus their friends' information, was captured by an app created by an academic researcher and shared with Cambridge Analytica. Now, you want to know if your information was harvested.

How To Check

It's easy to check. Visit this Facebook Help Center page. If you are not signed into your Facebook account, then the page displays as:

Default version of the Facebook Help Center page for users to determine if their information was collected by Cambridge Analytica

If you have already signed into your Facebook account and your information was not harvested, then the main column of the page displays:

Facebook Help Center page indicating whether your information was collected by Cambridge Analytica (signed-in view)

If your information was harvested, then the content under "Was My Information Shared?" will be different. It may display this:

"Based upon our investigation, you don't appear to have logged into "This Is Your Digital Life" with Facebook before we removed it from our platform in 2015. However, a friend of yours did log in. As a result, the following information was likely shared with "This Is Your Digital Life": Your public profile, page likes, date of birth, and current city"

Of course, if you logged into the "This Is Your Digital Life" app yourself, then the page content will say so, and list the data elements harvested. Reportedly, about 270,000 Facebook users logged into the app/quiz, which then collected information on an estimated 87 million people, most of them those users' Facebook friends.

What To Do Next

There's not a lot you can do immediately. CNN Tech advised:

"Even if you delete your Facebook account, or remove third-party apps connected to your profile, the third-party apps will still have access to data they previously collected. Users have to contact the app individually to have the data be removed... According to a notice on affected accounts, the "small number of people" who accessed the app also shared their News Feed, timeline, posts and messages. A Facebook spokesperson confirmed that 1,500 users who logged into the app granted explicit access to their private message inbox... For now, the platform is directing people to their Settings page to see which apps are connected to their accounts, such as Uber and Netflix. Users can also disconnect those apps... Walt Mossberg, a veteran tech reporter and cofounder of tech website Recode, urged Facebook to let users know which friends accessed the app and when..."

Yeah, that! Facebook should inform affected users which of their friends contributed to the data leakage.

Of course, Facebook wants its users to keep using the service. Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. What can affected users do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protections as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Whatever you do, remember that lots of advertising networks and tech companies besides Facebook want to track your movements around the web. Some of those companies include internet service providers (ISPs), since the U.S. Federal Communications Commission (FCC) killed both broadband privacy and net neutrality in 2017.

A windfall for broadband providers, and terrible for consumers. You might contact your elected officials and demand that the FCC put broadband privacy and net neutrality protections back into place.


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.
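
For a sense of scale: it is an open legal question what would count as a "violation," but a back-of-the-envelope calculation (purely illustrative, assuming each affected user counted as a single violation) shows why observers expected a heavy penalty:

```typescript
// Back-of-the-envelope only: what counts as a "violation" is an open legal
// question, but if each of the reported 87 million affected users counted
// as one violation, the theoretical daily exposure would be enormous.

const finePerViolationPerDay = 16_000; // USD, per the 2011 consent decree
const affectedUsers = 87_000_000;      // reported Cambridge Analytica figure

const exposurePerDay = finePerViolationPerDay * affectedUsers;
console.log(`Theoretical exposure: $${exposurePerDay.toLocaleString()} per day`);
// -> Theoretical exposure: $1,392,000,000,000 per day
```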

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said that a similar enforcement arm that would work with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — about 87 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is that a company’s terms of service contain a blanket statement of consent that users never read or agree to.

James Grimmelman, a law professor and computer scientist, argued in a 2015 paper that the technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelman said in an interview, “we should ask for appropriate supervisions for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Fair Housing Groups Sue Facebook for Allowing Discrimination in Housing Ads

[Editor's Note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking services. It is reprinted with permission.]

By Julia Angwin and Ariana Tobin, ProPublica

In February 2017, in response to a ProPublica investigation, Facebook pledged to crack down on efforts by advertisers of rental housing to discriminate against tenants based on race, disability, gender and other characteristics.

But a new lawsuit, filed Tuesday by the National Fair Housing Alliance in U.S. District Court in the Southern District of New York, alleges that the world’s largest social network still allows advertisers to discriminate against legally protected groups, including mothers, the disabled and Spanish-language speakers.

Since 2018 marks the 50th anniversary of the Fair Housing Act, "it is all the more egregious and shocking" that "Facebook continues to enable landlords and real estate brokers to bar families with children, women and others from receiving rental and sales ads or housing," the lawsuit states. It asks the court, among other things, to declare that Facebook’s policies violate fair housing laws, to bar the company from publishing discriminatory ads, and to require it to develop and make public a written fair housing policy for advertising.

Diane Houk, lead counsel for the alliance, said this type of discrimination is especially difficult to uncover and combat. "The person who is being discriminated against has no way to know" it, because the technology "keeps the discrimination hidden in hopes that it will not be caught," she said.

Facebook disputes the housing groups’ allegations. "There is absolutely no place for discrimination on Facebook. We believe this lawsuit is without merit, and we will defend ourselves vigorously," said Facebook spokesman Joe Osborne.

The lawsuit adds to Facebook’s woes, which are mounting on multiple fronts. The company’s stock plunged last week on the news that it had allowed a voter-profiling outfit, Cambridge Analytica, to obtain data on 50 million of its users without their knowledge or consent. The news came after a troubling year in which, among other things, Facebook admitted that it unwittingly allowed a Russian disinformation operation on its platform and had been promoting fake news in its News Feed algorithm. As a result, lawmakers and regulators around the world have launched investigations into Facebook.

Discrimination in housing advertising has been a persistent problem for Facebook. In October 2016, we described how Facebook let advertisers exclude specific groups with what it called "ethnic affinities," including blacks and Hispanics, from seeing ads. Although Facebook responded by announcing it had built a system to flag and reject these ads, we bought dozens of rental housing ads in November 2017 that we specified would not be shown to blacks, Jews, people interested in wheelchair ramps and other groups.

It wasn’t until ProPublica brought the issue of advertising discrimination on Facebook to light, Houk said, that fair housing advocates learned of it. Emulating ProPublica’s technique, the Washington, D.C.-based national fair housing group, along with member groups in New York, San Antonio and Miami, created fake housing companies and placed discriminatory ads on Facebook. The ads were approved by Facebook over a period of a few months, with the most recent buys occurring on Feb. 23.

Using Facebook’s dropdown "exclusion" menu, they were able to buy housing ads that blocked groups such as "trendy moms," "soccer moms," "parents with teenagers," people interested in a disabled parking permit and people interested in Telemundo, the Spanish-language television network.

The Fair Housing Act makes it illegal to publish any advertisement "with respect to the sale or rental of a dwelling that indicates any preference, limitation or discrimination based on race, color, religion, sex, handicap, familial status or national origin." Violators may face tens of thousands of dollars in fines.

After ProPublica’s investigation, Facebook added a self-certification option, which asks housing advertisers to certify that their advertisement is not discriminatory. In some cases, Houk said, the housing groups encountered the self-certification option, and did not submit the ads to Facebook for approval and publication. But that only happened in some of the ad buys, she said.

Since advertisers can falsely attest to fairness, the self-certification screens don’t "seem like a whole-hearted commitment to trying to change the advertising platform to comply with the Fair Housing Act and local fair housing laws," Houk said.

A couple of weeks after the groups bought housing ads, so did ProPublica (independently) — and we excluded some of the same categories, such as "soccer moms." In most of those tests, we encountered self-certification screens. However, when we bought another housing ad this week, we were able to exclude people interested in Telemundo.

Houk said there were so many possible explanations for the difference in results — such as the number of categories excluded or the types of exclusions sought — that it was impossible to speculate about what caused many of her clients’ ad purchases to be approved but not ProPublica’s.

Both the fair housing groups and ProPublica found that Facebook has blocked the use of race as an exclusion category — as it promised to do in November. Facebook rejected a ProPublica housing ad that was specifically aimed at African Americans. It also denied our attempts to buy employment ads targeted by race, and removed a job listing with a question designed to filter by race. However, the housing groups’ and ProPublica’s ability to exclude people interested in Telemundo suggests that advertisers could still discriminate by using proxies for race or ethnicity.

In a separate federal case in California, challenging Facebook’s use of racial exclusions in ad targeting, Facebook has argued that it has immunity against liability for such discrimination. It cited Section 230 of the 1996 federal Communications Decency Act, which protects internet companies from liability for third-party content.

"Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any," Facebook contended.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


I Approved This Facebook Message — But You Don’t Know That

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking sites. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Hundreds of federal political ads — including those from major players such as the Democratic National Committee and the Donald Trump 2020 campaign — are running on Facebook without adequate disclaimer language, likely violating Federal Election Commission (FEC) rules, a review by ProPublica has found.

An FEC opinion in December clarified that the requirement for political ads to say who paid for and approved them, which has long applied to print and broadcast outlets, extends to ads on Facebook. So we checked more than 300 ads that had run on the world’s largest social network since the opinion, and that election-law experts told us met the criteria for a disclaimer. Fewer than 40 had disclosures that appeared to satisfy FEC rules.

“I’m totally shocked,” said David Keating, president of the nonprofit Institute for Free Speech in Alexandria, Virginia, which usually opposes restrictions on political advertising. “There’s no excuse,” he said, looking through our database of ads.

The FEC can investigate possible violations of the law and fine people up to thousands of dollars for breaking it — fines double if the violation was “knowing and willful,” according to the regulations. Under the law, it’s up to advertisers, not Facebook, to ensure they have the right disclaimers. The FEC has not imposed penalties on any Facebook advertiser for failing to disclose.

An FEC spokeswoman declined to say whether the commission has any recent complaints about lack of disclosure on Facebook ads. Enforcement matters are confidential until they are resolved, she said.

None of the individuals or groups we contacted whose ads appeared to have inadequate disclaimers, including the Democratic National Committee and the Trump campaign, responded to requests for comment. Facebook declined to comment on ProPublica’s findings or the December opinion. In public documents, the company has urged the FEC to be “flexible” in what it allows online, and to develop a policy for all digital advertising rather than focusing on Facebook.

Insufficient disclaimers can be minor technicalities, not necessarily evidence of intent to deceive. But the pervasiveness of the lapses ProPublica found suggests a larger problem that may raise concerns about the upcoming midterm elections — that political advertising on the world’s largest social network isn’t playing by rules intended to protect the public.

Unease about political ads on Facebook and other social networking sites has intensified since internet companies acknowledged that organizations associated with the Russian government bought ads to influence U.S. voters during the 2016 election. Foreign contributions to campaigns for U.S. federal office are illegal. Online, advertisers can target ads to relatively small groups of people. Once the marketing campaign is over, the ads disappear. This makes it difficult for the public to scrutinize them.

The FEC opinion is part of a push toward more transparency in online political advertising that has come in response to these concerns. In addition to handing down the opinion in a specific case, the FEC is preparing new rules to address ads on social media more broadly. Three senators are sponsoring a bill called the Honest Ads Act, which would require internet companies to provide more information on who is buying political ads. And earlier this month, the election authority in Seattle said Facebook was violating a city law on election-ad disclosures, marking a milestone in municipal attempts to enforce such transparency.

Facebook itself has promised more transparency about political ads in the coming months, including “paid for by” disclosures. Since late October it has been conducting tests in Canada that publish ads on an advertiser’s Facebook page, where people can see them even without being part of the advertiser’s target audience. Those ads are only up while the ad campaign is running, but Facebook says it will create a searchable archive for federal election advertising in the U.S. starting this summer.

ProPublica found the ads using a tool called the Political Ad Collector, which allows Facebook users to automatically send us the political ads that were displayed on their news feeds. Because they reflect what users of the tool are seeing, the ads in our database aren’t a representative sample.

The disclaimers required by the FEC are familiar to anyone who has seen a print or television political ad — think of a candidate saying, “I’m ____, and I approved this message,” at the end of a TV commercial, or a “paid for by” box at the bottom of a newspaper advertisement. They’re intended to make sure the public knows who is paying to support a candidate, and to prevent people from falsely claiming to speak on a candidate’s behalf.

The system does have limitations, reflecting concerns that overuse of disclaimers could inhibit free speech. For starters, the rules apply only to certain types of political ads. Political committees and candidates have to include disclaimers, as do people seeking donations or conducting “express advocacy.” To count as express advocacy, an ad typically must mention a candidate and use certain words clearly campaigning for or against a candidate — such as “vote for,” “reject” or “re-elect.” And the regulations only apply to federal elections, not state and local ones.

The rules also don’t address so-called “issue” ads that advocate a policy stance. These ads may include a candidate’s name without a disclaimer, as long as they aren’t funded by a political committee or candidate and don’t use express-advocacy language. Many of the political ads purchased by Russian groups in 2016 attempted to influence public opinion without mentioning candidates at all — and would not require disclosure even today.

Enforcement of the law often relies on political opponents or a member of the public complaining to the FEC. If only supporters see an ad, as might be the case online, a complaint may never come.

The disclaimer law was last amended in 2002, but online advertising has changed so rapidly that several experts said the FEC has had trouble keeping up. In 2002, the commission found that paid text message ads were exempt from disclosure under the “small-items exception” originally intended for buttons, pins and the like. What counts as small depends on the situation and is up to the FEC.

In 2010, the FEC considered ads on Google that had no graphics or photos and were limited to 95 characters of text. Google proposed that disclaimers not be part of the ads themselves but be included on the web pages that users would go to after clicking on the ads; the FEC agreed.

In 2011, Facebook asked the FEC to allow political ads on the social network to run without disclosures. At the time, Facebook limited all ads on its platform to small, “thumbnail” photos and brief text of only 100 or 160 characters, depending on the type of ad. In that case, the six-person FEC couldn’t muster the four votes needed to issue an opinion, with three commissioners saying only limited disclosure was required and three saying the ads needed no disclosure at all, because it would be “impracticable” for political ads on Facebook to contain more text than other ads. The result was that political ads on Facebook ran without the disclaimers seen on other types of election advertising.

Since then, though, ads on Facebook have expanded. They can now include much more text, as well as graphics or photos that take up a large part of the news feed’s width. Video ads can run for many minutes, giving advertisers plenty of time to show the disclaimer as text or play it in a voiceover.

Last October, a group called Take Back Action Fund decided to test whether these Facebook ads should still be exempt from the rules.

“For years now, people have said, ‘Oh, don’t worry about the rules, because the FEC doesn’t enforce anything on Facebook,’” said John Pudner, president of Take Back Action Fund, which advocates for campaign finance reform. Many political consultants “didn’t think you ever needed a disclaimer on a Facebook ad,” said Pudner, a longtime campaign consultant to conservative candidates.

Take Back Action Fund came up with a plan: Ask the FEC whether it should include disclosures on ads that the group thought clearly needed them.

The group told the FEC it planned to buy “express advocacy” ads on Facebook that included large images or videos on the news feed. In its filing, Take Back Action Fund provided some sample text it said it was thinking of using: “While [Candidate Name] accuses the Russians of helping President Trump get elected, [s/he] refuses to call out [his/her] own Democrat Party for paying to create fake documents that slandered Trump during his presidential campaign. [Name] is unfit to serve.”

In a comment filed with the FEC in the matter, the Internet Association trade group, of which Facebook is a member, asked the commission to follow the precedent of the 2010 Google case and allow a “one-click” disclosure that didn’t need to be on the ad itself but could be on the web page the ad led to.

The FEC didn’t follow that recommendation. It said unanimously that the ads needed full disclaimers.

The opinion, handed down Dec. 15, was narrow, saying that if any of the “facts or assumptions” presented in another case were different in a “material” way, the opinion could not be relied upon. But several legal experts who spoke with ProPublica said the opinion means anyone who would have to include disclaimers in traditional advertising should now do so on large Facebook image ads or video ads — including candidates, political committees and anyone using express advocacy.

“The functionality and capabilities of today’s Facebook Video and Image ads can accommodate the information without the same constrictions imposed by the character-limited ads that Facebook presented to the Commission in 2011,” three commissioners wrote in a concurring statement. A fourth commissioner went further, saying the commission’s earlier decision in the text messaging case should now be completely superseded. The remaining two commissioners didn’t comment beyond the published opinion.

“We are overjoyed at the decision and hope it will have the effect of stopping anonymous attacks,” said Pudner, of Take Back Action Fund. “We think that this is a matter of the voter’s right to know.” He added that the group doesn’t intend to purchase the ads.

This year, the FEC plans to tackle concerns about digital political advertising more generally. Facebook favors such an industry-wide approach, partly for competitive reasons, according to a comment it submitted to the commission.

“Facebook strongly supports the Commission providing further guidance to committees and other advertisers regarding their disclaimer obligations when running election-related Internet communications on any digital platform,” Facebook General Counsel Colin Stretch wrote to the FEC.

Facebook was concerned that its own transparency efforts “will apply only to advertising on Facebook’s platform, which could have the unintended consequence of pushing purchasers who wish to avoid disclosure to use other, less transparent platforms,” Stretch wrote.

He urged the FEC to adopt a “flexible” approach, on the grounds that there are many different types of online ads. “For example, allowing ads to include an icon or other obvious indicator that more information about an ad is available via quick navigation (like a single click) would give clear guidance.”

To test whether political advertisers were following the FEC guidelines, we searched for large U.S. political ads that our tool gathered between Dec. 20 — five days after the opinion — and Feb. 1. We excluded the small ads that run on the right column of Facebook’s website. To find ads that were most likely to fall under the purview of the FEC regulations, we searched for terms like “committee,” “donate” and “chip in.” We also searched for ads that used express-advocacy language such as “for Congress,” “vote against,” “elect” or “defeat.” We left out ads with state and local terms such as “governor” or “mayor,” as well as ads from groups such as the White House Historical Association or National Audubon Society that were obviously not election-oriented. Then we examined the ads, including the text and photos or graphics.
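
As a rough illustration of that screening step, here is a minimal sketch in Python. The include and exclude term lists are the ones named above; the function name and the sample ads are assumptions made for the example, not ProPublica's actual code or data.

    # Hypothetical sketch of the keyword screening described above.
    FEC_TERMS = ["committee", "donate", "chip in"]
    EXPRESS_ADVOCACY = ["for congress", "vote against", "elect", "defeat"]
    STATE_LOCAL = ["governor", "mayor"]  # likely outside FEC purview

    def likely_federal_election_ad(ad_text: str) -> bool:
        """Flag ads that use FEC-related or express-advocacy terms,
        excluding those that mention state or local offices."""
        text = ad_text.lower()
        has_political_term = any(t in text for t in FEC_TERMS + EXPRESS_ADVOCACY)
        has_state_local_term = any(t in text for t in STATE_LOCAL)
        return has_political_term and not has_state_local_term

    ads = [
        "Chip in $5 today to elect Jane Doe for Congress!",
        "Vote against the mayor's rezoning plan.",
    ]
    print([likely_federal_election_ad(a) for a in ads])  # [True, False]

Keyword matches alone can't rule out groups that merely use similar language, which is why the flagged ads still went through the manual review described above.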

Of nearly 70 entities that ran ads with a large photo or graphic in addition to text, only two used all of the required disclaimer language. About 20 correctly indicated in some fashion the name of the committee associated with the ad but omitted other language, such as whether the ad was endorsed by a candidate. The rest had more significant shortcomings. Many of those that didn’t include disclosures were for relatively inexperienced candidates for Congress, but plenty of seasoned lawmakers and major groups failed to use the proper language as well.

For example, one ad said, “It’s time for Donald Trump, his family, his campaign, and all of his cronies to come clean about their collusion with Russia.” A photo of Donald Trump appeared over a black and red map of Russia, overlaid by the text, “Stop the Lies.” The ad urged people to “Demand Answers Today” and “Sign Up.”

At the top, the ad identified the Democratic Party as the sponsor, and linked to the party’s Facebook page. But, under FEC rules, it should have named the funder, the Democratic National Committee, and given the committee’s address or website. It should also have said whether the ad was endorsed by any candidate. It didn’t. The only nod to the national committee was a link to my.democrats.org, which is paid for by the DNC, at the bottom of the ad. As on all Facebook ads, the word “Sponsored” was included at the top.

Advertisers seemed more likely to put the proper disclaimers on video ads, especially when those ads appeared to have been created for television, where disclaimers have been mandatory for years. Videos that didn’t look made for TV were less likely to include a disclaimer.

One ad that said it was from Donald J. Trump consisted of 20 seconds of video with an American flag background and stirring music. The words “Donate Now! And Enter for a Chance To Win Dinner With Trump!” materialized on the screen with dramatic thuds and crashes. The ad linked to Trump’s Facebook page, and a “Donate” button at the bottom of the ad linked to a website that identified the president’s re-election committee, Donald J. Trump for President, Inc., as its funder. It wasn’t clear on the ad whether Trump himself or his committee paid for it, which should have been specified under FEC rules.

The large majority of advertisements we collected — both those that used disclosures and those that didn’t — were for liberal groups and politicians, possibly reflecting the allegiances of the ProPublica readers who installed our ad-collection tool. There were only four Republican advertisers among the ads we analyzed.

It’s not clear why advertisers aren’t following the FEC regulations. Keating, of the Institute for Free Speech, suggested that advertisers might think the word “Sponsored” and a link to their Facebook page are enough and that reasonable people would know they had paid for the ad.

Others said social media marketers may simply be slow in adjusting to the FEC opinion.

“It’s entirely possible that because disclaimers haven’t been included for years now, candidates and committees just aren’t used to putting them on there,” said Brendan Fischer, director of the Federal and FEC Reform Program at the Campaign Legal Center, the group that provided legal services to Take Back Action Fund. “But they should be on notice,” he added.

There were only two advertisers we saw that included the full, clear disclosures required by the FEC on their large image ads. One was Amy Klobuchar, a Democratic senator from Minnesota who is a co-sponsor of the Honest Ads Act. The other was John Moser, an IT security professional and Democratic primary candidate in Maryland’s 7th Congressional District who received $190 in contributions last year, according to his FEC filings.

Reached by Facebook Messenger, Moser said he is running because he has a plan for ending poverty in the U.S. by restructuring Social Security into a “universal dividend” that gives everyone over age 18 a portion of the country’s per capita income. He complained that Facebook doesn’t make it easy for political advertisers to include the required disclosures. “You have to wedge it in there somewhere,” said Moser, who faces an uphill battle against longtime U.S. Rep. Elijah Cummings. “They need to add specific support for that, honestly.”

Asked why he went to the trouble to put the words on his ad, Moser’s answer was simple: “I included a disclosure because you're supposed to.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Unilever To Social Networking Sites: Drain The Online Swamp Or Lose Business

Unilever has placed tech companies and social networking sites on notice... chiefly Facebook and Google. Adweek reported:

"Unilever CMO Keith Weed put the advertising community on notice Monday during a keynote speech at the Interactive Advertising Bureau’s Annual Leadership Meeting in Palm Desert, Calif. Weed called for tech platforms—namely Facebook and YouTube—to step up their efforts in combating divisive content, hate speech and fake news. “I don’t think for a second where the internet right now is how the platforms dreamt it would be,” Weed told Adweek in an interview at the event."

Facebook has promised to improve the transparency of advertising on its platform, but its program hasn't proceeded smoothly. Unilever spends about $9 billion annually on advertising for its more than 140 brands globally, which span several categories including food and drink (e.g., Ben & Jerry's, Breyers, Country Crock, Hellmann's, Knorr, Lipton, Promise), home care, and personal care products (e.g., Axe, Caress, Degree, Dove, Sunsilk, TRESemme, Vaseline). Adweek also reported:

"Much like Procter & Gamble CMO Marc Pritchard—who spoke at the IAB’s 2017 event and outlined a multipronged, yearlong plan—Weed is looking to pressure tech companies to increase their resources on cleaning up the platforms..."

BBC News reported:

"Unilever has pledged to: a) Not invest in platforms that do not protect children or create division in society; b) Only invest in platforms that make a positive contribution to society; c) Tackle gender stereotypes in advertising; and d) Only partner with companies creating a responsible digital infrastructure... At the World Economic Forum in Davos last month Prime Minister Theresa May called on investors to put pressure on tech firms to tackle the problem much more quickly. In December, the European Commission warned the likes of Facebook, Google, YouTube, Twitter and other firms that it was considering legislation if self-regulation continued to fail."

That's great. It'll be interesting to see which other corporate marketers, if any, make pledges similar to Unilever's. Susan Wojcicki, the CEO of Google's YouTube, issued a brief response. MediaPost reported:

"We want to do the right set of things to build [Unilever’s] trust. They are building brands on YouTube, and we want to be sure that our brand is the right place to build their brand."She added that "based on the feedback we had from them," YouTube changed its rules for what channels could be monetized, and began to have humans review all videos uploaded to Google Preferred..."

In December 2017, YouTube pledged a staff of 10,000 in 2018 to root out divisive video content. We'll see whether tech companies keep their promises. Consumers don't want to wade through social sites filled with divisive content, hate speech, and fake news.


Facebook’s Experiment in Ad Transparency Is Like Playing Hide And Seek

[Editor's note: today's guest post, by the reporters at ProPublica, explores a new global program Facebook introduced in Canada. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Shortly before a Toronto City Council vote in December on whether to tighten regulation of short-term rental companies, an entity called Airbnb Citizen ran an ad on the Facebook news feeds of a selected audience, including Toronto residents over the age of 26 who listen to Canadian public radio. The ad featured a photo of a laughing couple from downtown Toronto, with the caption, “Airbnb hosts from the many wards of Toronto raise their voices in support of home sharing. Will you?”

Placed by an interested party to influence a political debate, this is exactly the sort of ad on Facebook that has attracted intense scrutiny. Facebook has acknowledged that a group with ties to the Russian government placed more than 3,000 such ads to influence voters during the 2016 U.S. presidential campaign.

Facebook has also said it plans to avoid a repeat of the Russia fiasco by improving transparency. An approach it’s rolling out in Canada now, and plans to expand to other countries this summer, enables Facebook users outside an advertiser’s targeted audience to see ads. The hope is that enhanced scrutiny will keep advertisers honest and make it easier to detect foreign interference in politics. So we used a remote connection, called a virtual private network, to log into Facebook from Canada and see how this experiment is working.

The answer: It’s an improvement, but nowhere near the openness sought by critics who say online political advertising is a Wild West compared with the tightly regulated worlds of print and broadcast.

The new strategy — which Facebook announced in October, just days before a U.S. Senate hearing on the Russian online manipulation efforts — requires every advertiser to have a Facebook page. Whenever the advertiser is running an ad, the post is automatically placed in a new “Ads” section of the Facebook page, where any users in Canada can view it even if they aren’t part of the intended audience.

Facebook has said that the Canada experiment, which has been running since late October, is the first step toward a more robust setup that will let users know which group or company placed an ad and what other ads it’s running. “Transparency helps everyone, especially political watchdog groups and reporters, keep advertisers accountable for who they say they are and what they say to different groups,” Rob Goldman, Facebook’s vice president of ads, wrote before the launch.

While the new approach makes ads more accessible, they’re only available temporarily, can be hard to find, and can still mislead users about the advertiser’s identity, according to ProPublica’s review. The Airbnb Citizen ad — which we discovered via a ProPublica tool called the Political Ad Collector — is a case in point. Airbnb Citizen professed on its Facebook page to be a “community of hosts, guests and other believers in the power of home sharing to help tackle economic, environmental and social challenges around the world.” Its Facebook page didn’t mention that it is actually a marketing and public policy arm of Airbnb, a for-profit company.

The ad was part of an effort by the company to drum up support as it fought rental restrictions in Toronto. “These ads were one of the many ways that we engaged in the process before the vote,” Airbnb said. However, anyone who looked on Airbnb’s own Facebook page wouldn’t have found it.

Airbnb told ProPublica that it is clear about its connection to Airbnb Citizen. Airbnb’s webpage links to Airbnb Citizen’s webpage, and Airbnb Citizen’s webpage is copyrighted by Airbnb and uses part of the Airbnb logo. Airbnb said Airbnb Citizen provides information on local home-sharing rules to people who rent out their homes through Airbnb. “Airbnb has always been transparent about our advertising and public engagement efforts,” the statement said.

Political parties in Canada are already using the test to investigate ads from rival groups, said Nader Mohamed, digital director of Canada’s New Democratic Party, which has the third-largest representation in Canada’s Parliament. “You’re going to be more careful with what you put out now, because you could get called on it at any time,” he said. Mohamed said he still expects heavy spending on digital advertising in upcoming campaigns.

After launching the test, Facebook demonstrated its new process to Elections Canada, the independent agency responsible for conducting federal elections there. Elections Canada recommended adding an archive function, so that ads no longer running could still be viewed, said Melanie Wise, the agency’s assistant director for media relations and issues management. The initiative is “helpful” but should go further, Wise said.

Some experts were more critical. Facebook’s new test is “useless,” said Ben Scott, a senior advisor at the think tank New America and a fellow at the Brookfield Institute for Innovation + Entrepreneurship in Toronto who specializes in technology policy. “If an advertiser is inclined to do something unethical, this level of disclosure is not going to stop them. You would have to have an army of people checking pages constantly.”

More effective ways of policing ads, several experts said, might involve making more information about advertisers and their targeting strategies readily available to users from links on ads and in permanent archives. But such tactics could alienate advertisers reluctant to share information with competitors, cutting into Facebook’s revenue. Instead, in Canada, Facebook automatically puts ads up on the advertiser’s Facebook page, and doesn’t indicate the target audience there.

Facebook’s test represents the least the company can do and still avoid stricter regulation on political ads, particularly in the U.S., said Mark Surman, a Toronto resident and executive director of Mozilla, a nonprofit Internet advocacy group that makes the Firefox web browser. “There are lots of people in the company who are trying to do good work. But it’s obvious if you’re Facebook that you’re trying not to get into a long conversation with Congress,” Surman said.

Facebook said it’s listening to its critics. “We’re talking to advertisers, industry folks and watchdog groups and are taking this kind of feedback seriously,” Rob Leathern, Facebook director of product management for ads, said in an email. “We look forward to continue working with lawmakers on the right solution, but we also aren’t waiting for legislation to start getting solutions in place,” he added. The company declined to provide data on how many people in Canada were using the test tools.

Facebook is not the only internet company facing questions about transparency in advertising. Twitter also pledged in October before the Senate hearing that “in the coming weeks” it would build a platform that would “offer everyone visibility into who is advertising on Twitter, details behind those ads, and tools to share your feedback.” So far, nothing has been launched.

Facebook has more than 23 million monthly users in Canada, according to the company. That’s more than 60 percent of Canada’s population but only about 1 percent of Facebook’s user base. The company has said it is launching its new ad-transparency plan in Canada because it already has a program there called the Canadian Election Integrity Initiative. That initiative was in response to a Canadian federal government report, “Cyber Threats to Canada’s Democratic Process,” which warned that “multiple hacktivist groups will very likely deploy cyber capabilities in an attempt to influence the democratic process during the 2019 federal election.” The election integrity plan promotes news literacy and offers a guide for politicians and political parties to avoid getting hacked.

Compared to the U.S., Canada’s laws allow for much stricter government regulation of political advertising, said Michael Pal, a law professor at the University of Ottawa. He said Facebook’s transparency initiative was a good first step but that he saw the extension of strong campaign rules into internet advertising as inevitable in Canada. “This is the sort of question that, in Canada, is going to be handled by regulation,” Pal said.

Several Canadian technology policy experts who spoke with ProPublica said Facebook’s new system was too inconvenient for the average user. There’s no central place where people can search the millions of ads on Facebook to see what ads are running about a certain subject, so unless users are part of the target audience, they wouldn’t necessarily know that a group is even running an ad. If users somehow hear about an ad or simply want to check whether a company or group is running one, they must first navigate to the group’s Facebook page and then click a small tab on the side labeled “Ads” that runs alongside other tabs such as “Videos” and “Community.” Once the user clicks the “Ads” tab, a page opens showing every ad that the page owner is running at that time, one after another.

The group’s Facebook page isn’t always linked from the text of the ad. If it isn’t, users can still find the Facebook page by navigating to the “Why am I seeing this?” link in a drop-down menu at the top right of each ad in their news feed.

As soon as a marketing campaign is over, an ad can no longer be found on the “Ads” page at all. When ProPublica checked the Airbnb Citizen Facebook page a week after collecting the ad, it was no longer there.

Because the “Ads” page also doesn’t disclose the demographics of the advertiser’s target audience, people can only see that data on ads that were aimed at them and were on their own Facebook news feed. Without this information, people outside an ad’s selected audience can’t see to whom companies or politicians are tailoring their messages. ProPublica reported last year that dozens of major companies directed recruitment ads on Facebook only to younger people — information that would likely interest older workers, but would still be concealed from them under the new policy. One recent ad by Prime Minister Justin Trudeau was directed at “people who may be similar to” his supporters, according to the Political Ad Collector data. Under the new system, people who don’t support Trudeau could see the ad on his Facebook page, but wouldn’t know why it was excluded from their news feeds.

Facebook has promised new measures to make political ads more accessible. When it expands the initiative to the U.S., it will start building a searchable electronic archive of ads related to U.S. federal elections. This archive will include details on the amount of money spent and demographic information about the people the ads reached. Facebook will initially limit its definition of political ads to those that “refer to or discuss a political figure” in a federal election, the company said.

The company hasn’t said what, if any, archive will be created for ads for state and local contests, or for political ads in other countries. It has said it will eventually require political advertisers in other countries, and in state elections in the U.S., to provide more documentation, but it’s not clear when that will happen.

Ads that aren’t political will be available under the same system being tested in Canada now.

Even an archive of the sort Facebook envisions wouldn’t solve the problems of misleading advertising on Facebook, Surman said. “It would be interesting to journalists and researchers trying to track this issue. But it won’t help users make informed choices about what ads they see,” he said. That’s because users need more information alongside the ads they are seeing on their news feeds, not in a separate location, he said.

The Airbnb Citizen ad wasn’t the only tactic that Airbnb adopted in an apparent attempt to sway the Toronto City Council. It also packed the council galleries with supporters on the morning of the vote, according to The Globe and Mail. Still, its efforts appear to have been unsuccessful.

On Dec. 6, two days after a reader sent us the ad, the City Council voted to keep people from renting a space that wasn’t their primary residence and stop homeowners from listing units such as basement apartments.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Advertising Agency Paid $2 Million To Settle Deceptive Advertising Charges

The U.S. Federal Trade Commission (FTC) announced that Minneapolis-based Marketing Architects, Inc. (MAI):

"... an advertising agency that created and disseminated allegedly deceptive radio ads for weight-loss products marketed by its client, Direct Alternatives, has agreed to pay $2 million to the Federal Trade Commission and State of Maine Attorney General’s Office to settle their complaint..."

First, some background. According to the FTC, MAI created advertising for several Direct Alternatives products (e.g., Puranol, Pur-Hoodia Plus, Acai Fresh, AF Plus, and Final Trim) from 2006 through February 2015. Then, in 2016, the FTC and the State of Maine settled allegations against Direct Alternatives with an order that required the company to halt deceptive advertising and illegal billing practices.

Additional background according to the FTC: MAI previously created weight-loss ads for Sensa Products, LLC between March 2009 and May 2011. The FTC filed a complaint against Sensa in 2014, and subsequently Sensa agreed to refund $26.5 million to defrauded consumers. So, there's important, relevant history.

In the latest action, the joint complaint alleged that MAI created and disseminated radio ads with false or unsubstantiated weight-loss claims for AF Plus and Final Trim. Besides:

"... receiving FTC’s Sensa order, MAI was previously made aware of the need to have competent and reliable scientific evidence to back up health claims. Among other things, the complaint alleges that Direct Alternatives provided MAI with documents indicating that some of the weight-loss claims later challenged by the FTC needed to be supported by scientific evidence.

The complaint further charges that MAI developed and disseminated fictitious weight-loss testimonials and created radio ads for weight-loss products falsely disguised as news stories. Finally, the complaint charges MAI with creating inbound call scripts that failed to adequately disclose that consumers would be automatically enrolled in negative-option (auto-ship) continuity plans."

The latest action includes a proposed court order that bans MAI from making the weight-loss claims the FTC has already identified as false, and:

"... requires MAI to have competent and reliable scientific evidence to support any other claims about the health benefits or efficacy of weight-loss products, and prohibits it from misrepresenting the existence or outcome of tests or studies. In addition, the order prohibits MAI from misrepresenting the experience of consumer testimonialists or that paid commercial advertising is independent programming."

This action is a reminder to advertising and digital agency executives everywhere: ensure that claims are supported by competent, reliable scientific evidence.

Good. Kudos to the FTC for these enforcement actions and for protecting consumers.