95 posts categorized "Behavioral Advertising"

I Approved This Facebook Message — But You Don’t Know That

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking sites. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Hundreds of federal political ads — including those from major players such as the Democratic National Committee and the Donald Trump 2020 campaign — are running on Facebook without adequate disclaimer language, likely violating Federal Election Commission (FEC) rules, a review by ProPublica has found.

An FEC opinion in December clarified that the requirement for political ads to say who paid for and approved them, which has long applied to print and broadcast outlets, extends to ads on Facebook. So we checked more than 300 ads that had run on the world’s largest social network since the opinion, and that election-law experts told us met the criteria for a disclaimer. Fewer than 40 had disclosures that appeared to satisfy FEC rules.

“I’m totally shocked,” said David Keating, president of the nonprofit Institute for Free Speech in Alexandria, Virginia, which usually opposes restrictions on political advertising. “There’s no excuse,” he said, looking through our database of ads.

The FEC can investigate possible violations of the law and fine people up to thousands of dollars for breaking it — fines double if the violation was “knowing and willful,” according to the regulations. Under the law, it’s up to advertisers, not Facebook, to ensure they have the right disclaimers. The FEC has not imposed penalties on any Facebook advertiser for failing to disclose.

An FEC spokeswoman declined to say whether the commission has any recent complaints about lack of disclosure on Facebook ads. Enforcement matters are confidential until they are resolved, she said.

None of the individuals or groups we contacted whose ads appeared to have inadequate disclaimers, including the Democratic National Committee and the Trump campaign, responded to requests for comment. Facebook declined to comment on ProPublica’s findings or the December opinion. In public documents, the company has urged the FEC to be “flexible” in what it allows online, and to develop a policy for all digital advertising rather than focusing on Facebook.

Insufficient disclaimers can be minor technicalities, not necessarily evidence of intent to deceive. But the pervasiveness of the lapses ProPublica found suggests a larger problem that may raise concerns about the upcoming midterm elections — that political advertising on the world’s largest social network isn’t playing by rules intended to protect the public.

Unease about political ads on Facebook and other social networking sites has intensified since internet companies acknowledged that organizations associated with the Russian government bought ads to influence U.S. voters during the 2016 election. Foreign contributions to campaigns for U.S. federal office are illegal. Online, advertisers can target ads to relatively small groups of people. Once the marketing campaign is over, the ads disappear. This makes it difficult for the public to scrutinize them.

The FEC opinion is part of a push toward more transparency in online political advertising that has come in response to these concerns. In addition to handing down the opinion in a specific case, the FEC is preparing new rules to address ads on social media more broadly. Three senators are sponsoring a bill called the Honest Ads Act, which would require internet companies to provide more information on who is buying political ads. And earlier this month, the election authority in Seattle said Facebook was violating a city law on election-ad disclosures, marking a milestone in municipal attempts to enforce such transparency.

Facebook itself has promised more transparency about political ads in the coming months, including “paid for by” disclosures. Since late October it has been conducting tests in Canada that publish ads on an advertiser’s Facebook page, where people can see them even without being part of the advertiser’s target audience. Those ads are only up while the ad campaign is running, but Facebook says it will create a searchable archive for federal election advertising in the U.S. starting this summer.

ProPublica found the ads using a tool called the Political Ad Collector, which allows Facebook users to automatically send us the political ads that were displayed on their news feeds. Because they reflect what users of the tool are seeing, the ads in our database aren’t a representative sample.

The disclaimers required by the FEC are familiar to anyone who has seen a print or television political ad — think of a candidate saying, “I’m ____, and I approved this message,” at the end of a TV commercial, or a “paid for by” box at the bottom of a newspaper advertisement. They’re intended to make sure the public knows who is paying to support a candidate, and to prevent people from falsely claiming to speak on a candidate’s behalf.

The system does have limitations, reflecting concerns that overuse of disclaimers could inhibit free speech. For starters, the rules apply only to certain types of political ads. Political committees and candidates have to include disclaimers, as do people seeking donations or conducting “express advocacy.” To count as express advocacy, an ad typically must mention a candidate and use certain words clearly campaigning for or against a candidate — such as “vote for,” “reject” or “re-elect.” And the regulations only apply to federal elections, not state and local ones.

The rules also don’t address so-called “issue” ads that advocate a policy stance. These ads may include a candidate’s name without a disclaimer, as long as they aren’t funded by a political committee or candidate and don’t use express-advocacy language. Many of the political ads purchased by Russian groups in 2016 attempted to influence public opinion without mentioning candidates at all — and would not require disclosure even today.

Enforcement of the law often relies on political opponents or a member of the public complaining to the FEC. If only supporters see an ad, as might be the case online, a complaint may never come.

The disclaimer law was last amended in 2002, but online advertising has changed so rapidly that several experts said the FEC has had trouble keeping up. In 2002, the commission found that paid text message ads were exempt from disclosure under the “small-items exception” originally intended for buttons, pins and the like. What counts as small depends on the situation and is up to the FEC.

In 2010, the FEC considered ads on Google that had no graphics or photos and were limited to 95 characters of text. Google proposed that disclaimers not be part of the ads themselves but be included on the web pages that users would go to after clicking on the ads; the FEC agreed.

In 2011, Facebook asked the FEC to allow political ads on the social network to run without disclosures. At the time, Facebook limited all ads on its platform to small, “thumbnail” photos and brief text of only 100 or 160 characters, depending on the type of ad. In that case, the six-person FEC couldn’t muster the four votes needed to issue an opinion, with three commissioners saying only limited disclosure was required and three saying the ads needed no disclosure at all, because it would be “impracticable” for political ads on Facebook to contain more text than other ads. The result was that political ads on Facebook ran without the disclaimers seen on other types of election advertising.

Since then, though, ads on Facebook have expanded. They can now include much more text, as well as graphics or photos that take up a large part of the news feed’s width. Video ads can run for many minutes, giving advertisers plenty of time to show the disclaimer as text or play it in a voiceover.

Last October, a group called Take Back Action Fund decided to test whether these Facebook ads should still be exempt from the rules.

“For years now, people have said, ‘Oh, don’t worry about the rules, because the FEC doesn’t enforce anything on Facebook,’” said John Pudner, president of Take Back Action Fund, which advocates for campaign finance reform. Many political consultants “didn’t think you ever needed a disclaimer on a Facebook ad,” said Pudner, a longtime campaign consultant to conservative candidates.

Take Back Action Fund came up with a plan: Ask the FEC whether it should include disclosures on ads that the group thought clearly needed them.

The group told the FEC it planned to buy “express advocacy” ads on Facebook that included large images or videos on the news feed. In its filing, Take Back Action Fund provided some sample text it said it was thinking of using: “While [Candidate Name] accuses the Russians of helping President Trump get elected, [s/he] refuses to call out [his/her] own Democrat Party for paying to create fake documents that slandered Trump during his presidential campaign. [Name] is unfit to serve.”

In a comment filed with the FEC in the matter, the Internet Association trade group, of which Facebook is a member, asked the commission to follow the precedent of the 2010 Google case and allow a “one-click” disclosure that didn’t need to be on the ad itself but could be on the web page the ad led to.

The FEC didn’t follow that recommendation. It said unanimously that the ads needed full disclaimers.

The opinion, handed down Dec. 15, was narrow, saying that if any of the “facts or assumptions” presented in another case were different in a “material” way, the opinion could not be relied upon. But several legal experts who spoke with ProPublica said the opinion means anyone who would have to include disclaimers in traditional advertising should now do so on large Facebook image ads or video ads — including candidates, political committees and anyone using express advocacy.

“The functionality and capabilities of today’s Facebook Video and Image ads can accommodate the information without the same constrictions imposed by the character-limited ads that Facebook presented to the Commission in 2011,” three commissioners wrote in a concurring statement. A fourth commissioner went further, saying the commission’s earlier decision in the text messaging case should now be completely superseded. The remaining two commissioners didn’t comment beyond the published opinion.

“We are overjoyed at the decision and hope it will have the effect of stopping anonymous attacks,” said Pudner, of Take Back Action Fund. “We think that this is a matter of the voter’s right to know.” He added that the group doesn’t intend to purchase the ads.

This year, the FEC plans to tackle concerns about digital political advertising more generally. Facebook favors such an industry-wide approach, partly for competitive reasons, according to a comment it submitted to the commission.

“Facebook strongly supports the Commission providing further guidance to committees and other advertisers regarding their disclaimer obligations when running election-related Internet communications on any digital platform,” Facebook General Counsel Colin Stretch wrote to the FEC.

Facebook was concerned that its own transparency efforts “will apply only to advertising on Facebook’s platform, which could have the unintended consequence of pushing purchasers who wish to avoid disclosure to use other, less transparent platforms,” Stretch wrote.

He urged the FEC to adopt a “flexible” approach, on the grounds that there are many different types of online ads. “For example, allowing ads to include an icon or other obvious indicator that more information about an ad is available via quick navigation (like a single click) would give clear guidance.”

To test whether political advertisers were following the FEC guidelines, we searched for large U.S. political ads that our tool gathered between Dec. 20 — five days after the opinion — and Feb. 1. We excluded the small ads that run on the right column of Facebook’s website. To find ads that were most likely to fall under the purview of the FEC regulations, we searched for terms like “committee,” “donate” and “chip in.” We also searched for ads that used express advocacy language such as, “for Congress,” “vote against,” “elect” or “defeat.” We left out ads with state and local terms such as “governor” or “mayor,” as well as ads from groups such as the White House Historical Association or National Audubon Society that were obviously not election-oriented. Then we examined the ads, including the text and photos or graphics.
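The screening described above amounts to a simple keyword filter. The sketch below illustrates the general approach; it is not ProPublica's actual code, and the term lists and field handling are assumptions for demonstration only.

```python
# Illustrative keyword screen for federal-election ads.
# NOT ProPublica's actual code; term lists are assumptions.

FEC_TERMS = ["committee", "donate", "chip in"]           # likely FEC-regulated
ADVOCACY_TERMS = ["for congress", "vote against", "elect", "defeat"]
EXCLUDE_TERMS = ["governor", "mayor"]                    # state/local races

def needs_review(ad_text: str) -> bool:
    """Return True if an ad's text matches the federal-election screen."""
    text = ad_text.lower()
    # Drop ads about state and local contests first.
    if any(term in text for term in EXCLUDE_TERMS):
        return False
    # Flag ads that solicit donations or use express-advocacy language.
    return any(term in text for term in FEC_TERMS + ADVOCACY_TERMS)

ads = [
    "Chip in $5 to our committee today!",
    "Re-elect Jane Smith for Governor",
    "Smith for Congress: vote against the status quo",
]
flagged = [ad for ad in ads if needs_review(ad)]
```

A screen like this only narrows the pool; each flagged ad would still need manual review of its text and imagery, as the article describes.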

Of nearly 70 entities that ran ads with a large photo or graphic in addition to text, only two used all of the required disclaimer language. About 20 correctly indicated in some fashion the name of the committee associated with the ad but omitted other language, such as whether the ad was endorsed by a candidate. The rest had more significant shortcomings. Many of those that didn’t include disclosures were for relatively inexperienced candidates for Congress, but plenty of seasoned lawmakers and major groups failed to use the proper language as well.

For example, one ad said, “It’s time for Donald Trump, his family, his campaign, and all of his cronies to come clean about their collusion with Russia.” A photo of Donald Trump appeared over a black and red map of Russia, overlaid by the text, “Stop the Lies.” The ad urged people to “Demand Answers Today” and “Sign Up.”

At the top, the ad identified the Democratic Party as the sponsor, and linked to the party’s Facebook page. But, under FEC rules, it should have named the funder, the Democratic National Committee, and given the committee’s address or website. It should also have said whether the ad was endorsed by any candidate. It didn’t. The only nod to the national committee was a link to my.democrats.org, which is paid for by the DNC, at the bottom of the ad. As on all Facebook ads, the word “Sponsored” was included at the top.

Advertisers seemed more likely to put the proper disclaimers on video ads, especially when those ads appeared to have been created for television, where disclaimers have been mandatory for years. Videos that didn’t look made for TV were less likely to include a disclaimer.

One ad that said it was from Donald J. Trump consisted of 20 seconds of video with an American flag background and stirring music. The words “Donate Now! And Enter for a Chance To Win Dinner With Trump!” materialized on the screen with dramatic thuds and crashes. The ad linked to Trump’s Facebook page, and a “Donate” button at the bottom of the ad linked to a website that identified the president’s re-election committee, Donald J. Trump for President, Inc., as its funder. It wasn’t clear on the ad whether Trump himself or his committee paid for it, which should have been specified under FEC rules.

The large majority of advertisements we collected — both those that used disclosures and those that didn’t — were for liberal groups and politicians, possibly reflecting the allegiances of the ProPublica readers who installed our ad-collection tool. There were only four Republican advertisers among the ads we analyzed.

It’s not clear why advertisers aren’t following the FEC regulations. Keating, of the Institute for Free Speech, suggested that advertisers might think the word “Sponsored” and a link to their Facebook page are enough and that reasonable people would know they had paid for the ad.

Others said social media marketers may simply be slow in adjusting to the FEC opinion.

“It’s entirely possible that because disclaimers haven’t been included for years now, candidates and committees just aren’t used to putting them on there,” said Brendan Fischer, director of the Federal and FEC Reform Program at the Campaign Legal Center, the group that provided legal services to Take Back Action Fund. “But they should be on notice,” he added.

There were only two advertisers we saw that included the full, clear disclosures required by the FEC on their large image ads. One was Amy Klobuchar, a Democratic senator from Minnesota who is a co-sponsor of the Honest Ads Act. The other was John Moser, an IT security professional and Democratic primary candidate in Maryland’s 7th Congressional District who received $190 in contributions last year, according to his FEC filings.

Reached by Facebook Messenger, Moser said he is running because he has a plan for ending poverty in the U.S. by restructuring Social Security into a “universal dividend” that gives everyone over age 18 a portion of the country’s per capita income. He complained that Facebook doesn’t make it easy for political advertisers to include the required disclosures. “You have to wedge it in there somewhere,” said Moser, who faces an uphill battle against longtime U.S. Rep. Elijah Cummings. “They need to add specific support for that, honestly.”

Asked why he went to the trouble to put the words on his ad, Moser’s answer was simple: “I included a disclosure because you're supposed to.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Unilever To Social Networking Sites: Drain The Online Swamp Or Lose Business

Unilever has placed tech companies and social networking sites on notice... chiefly Facebook and Google. Adweek reported:

"Unilever CMO Keith Weed put the advertising community on notice Monday during a keynote speech at the Interactive Advertising Bureau’s Annual Leadership Meeting in Palm Desert, Calif. Weed called for tech platforms—namely Facebook and YouTube—to step up their efforts in combating divisive content, hate speech and fake news. “I don’t think for a second where the internet right now is how the platforms dreamt it would be,” Weed told Adweek in an interview at the event."

Despite Facebook's promise to improve the transparency of advertising on its platform, its program hasn't proceeded smoothly. Unilever spends about $9 billion annually on advertising for more than 140 brands globally, spanning several categories including food and drink (e.g., Ben & Jerry's, Breyers, Country Crock, Hellmann's, Mazola, Knorr, Lipton, Promise), home care, and personal care products (e.g., Axe, Caress, Degree, Dove, Sunsilk, TRESemme, Vaseline). Adweek also reported:

"Much like Procter & Gamble CMO Marc Pritchard—who spoke at the IAB’s 2017 event and outlined a multipronged, yearlong plan—Weed is looking to pressure tech companies to increase their resources on cleaning up the platforms..."

BBC News reported:

"Unilever has pledged to: a) Not invest in platforms that do not protect children or create division in society; b) Only invest in platforms that make a positive contribution to society; c) Tackle gender stereotypes in advertising; and d) Only partner with companies creating a responsible digital infrastructure... At the World Economic Forum in Davos last month Prime Minister Theresa May called on investors to put pressure on tech firms to tackle the problem much more quickly. In December, the European Commission warned the likes of Facebook, Google, YouTube, Twitter and other firms that it was considering legislation if self-regulation continued to fail."

That's great. It will be interesting to see which other corporate marketers, if any, make pledges similar to Unilever's. Susan Wojcicki, the CEO of Google's YouTube, issued a brief response. MediaPost reported:

"We want to do the right set of things to build [Unilever’s] trust. They are building brands on YouTube, and we want to be sure that our brand is the right place to build their brand." She added that "based on the feedback we had from them," YouTube changed its rules for what channels could be monetized, and began to have humans review all videos uploaded to Google Preferred..."

In December 2017, YouTube pledged a staff of 10,000 to root out divisive video content in 2018. We'll see whether tech companies keep their promises. Consumers don't want to wade through social sites filled with divisive content, hate speech, and fake news.


Facebook’s Experiment in Ad Transparency Is Like Playing Hide And Seek

[Editor's note: today's guest post, by the reporters at ProPublica, explores a new global program Facebook introduced in Canada. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Shortly before a Toronto City Council vote in December on whether to tighten regulation of short-term rental companies, an entity called Airbnb Citizen ran an ad on the Facebook news feeds of a selected audience, including Toronto residents over the age of 26 who listen to Canadian public radio. The ad featured a photo of a laughing couple from downtown Toronto, with the caption, “Airbnb hosts from the many wards of Toronto raise their voices in support of home sharing. Will you?”

Placed by an interested party to influence a political debate, this is exactly the sort of ad on Facebook that has attracted intense scrutiny. Facebook has acknowledged that a group with ties to the Russian government placed more than 3,000 such ads to influence voters during the 2016 U.S. presidential campaign.

Facebook has also said it plans to avoid a repeat of the Russia fiasco by improving transparency. An approach it’s rolling out in Canada now, and plans to expand to other countries this summer, enables Facebook users outside an advertiser’s targeted audience to see ads. The hope is that enhanced scrutiny will keep advertisers honest and make it easier to detect foreign interference in politics. So we used a remote connection, called a virtual private network, to log into Facebook from Canada and see how this experiment is working.

The answer: It’s an improvement, but nowhere near the openness sought by critics who say online political advertising is a Wild West compared with the tightly regulated worlds of print and broadcast.

The new strategy — which Facebook announced in October, just days before a U.S. Senate hearing on the Russian online manipulation efforts — requires every advertiser to have a Facebook page. Whenever the advertiser is running an ad, the post is automatically placed in a new “Ads” section of the Facebook page, where any users in Canada can view it even if they aren’t part of the intended audience.

Facebook has said that the Canada experiment, which has been running since late October, is the first step toward a more robust setup that will let users know which group or company placed an ad and what other ads it’s running. “Transparency helps everyone, especially political watchdog groups and reporters, keep advertisers accountable for who they say they are and what they say to different groups,” Rob Goldman, Facebook’s vice president of ads, wrote before the launch.

While the new approach makes ads more accessible, they’re only available temporarily, can be hard to find, and can still mislead users about the advertiser’s identity, according to ProPublica’s review. The Airbnb Citizen ad — which we discovered via a ProPublica tool called the Political Ad Collector — is a case in point. Airbnb Citizen professed on its Facebook page to be a “community of hosts, guests and other believers in the power of home sharing to help tackle economic, environmental and social challenges around the world.” Its Facebook page didn’t mention that it is actually a marketing and public policy arm of Airbnb, a for-profit company.

The ad was part of an effort by the company to drum up support as it fought rental restrictions in Toronto. “These ads were one of the many ways that we engaged in the process before the vote,” Airbnb said. However, anyone who looked on Airbnb’s own Facebook page wouldn’t have found it.

Airbnb told ProPublica that it is clear about its connection to Airbnb Citizen. Airbnb’s webpage links to Airbnb Citizen’s webpage, and Airbnb Citizen’s webpage is copyrighted by Airbnb and uses part of the Airbnb logo. Airbnb said Airbnb Citizen provides information on local home-sharing rules to people who rent out their homes through Airbnb. “Airbnb has always been transparent about our advertising and public engagement efforts,” the statement said.

Political parties in Canada are already using the test to investigate ads from rival groups, said Nader Mohamed, digital director of Canada’s New Democratic Party, which has the third largest representation in Canada’s Parliament. “You’re going to be more careful with what you put out now, because you could get called on it at any time,” he said. Mohamed said he still expects heavy spending on digital advertising in upcoming campaigns.

After launching the test, Facebook demonstrated its new process to Elections Canada, the independent agency responsible for conducting federal elections there. Elections Canada recommended adding an archive function, so that ads no longer running could still be viewed, said Melanie Wise, the agency’s assistant director for media relations and issues management. The initiative is “helpful” but should go further, Wise said.

Some experts were more critical. Facebook’s new test is “useless,” said Ben Scott, a senior advisor at the think tank New America and a fellow at the Brookfield Institute for Innovation + Entrepreneurship in Toronto who specializes in technology policy. “If an advertiser is inclined to do something unethical, this level of disclosure is not going to stop them. You would have to have an army of people checking pages constantly.”

More effective ways of policing ads, several experts said, might involve making more information about advertisers and their targeting strategies readily available to users from links on ads and in permanent archives. But such tactics could alienate advertisers reluctant to share information with competitors, cutting into Facebook’s revenue. Instead, in Canada, Facebook automatically puts ads up on the advertiser’s Facebook page, and doesn’t indicate the target audience there.

Facebook’s test represents the least the company can do and still avoid stricter regulation on political ads, particularly in the U.S., said Mark Surman, a Toronto resident and executive director of Mozilla, a nonprofit Internet advocacy group that makes the Firefox web browser. “There are lots of people in the company who are trying to do good work. But it’s obvious if you’re Facebook that you’re trying not to get into a long conversation with Congress,” Surman said.

Facebook said it’s listening to its critics. “We’re talking to advertisers, industry folks and watchdog groups and are taking this kind of feedback seriously,” Rob Leathern, Facebook director of product management for ads, said in an email. “We look forward to continue working with lawmakers on the right solution, but we also aren’t waiting for legislation to start getting solutions in place,” he added. The company declined to provide data on how many people in Canada were using the test tools.

Facebook is not the only internet company facing questions about transparency in advertising. Twitter also pledged in October before the Senate hearing that “in the coming weeks” it would build a platform that would “offer everyone visibility into who is advertising on Twitter, details behind those ads, and tools to share your feedback.” So far, nothing has been launched.

Facebook has more than 23 million monthly users in Canada, according to the company. That’s more than 60 percent of Canada’s population but only about 1 percent of Facebook’s user base. The company has said it is launching its new ad-transparency plan in Canada because it already has a program there called the Canadian Election Integrity Initiative. That initiative was in response to a Canadian federal government report, “Cyber Threats to Canada’s Democratic Process,” which warned that “multiple hacktivist groups will very likely deploy cyber capabilities in an attempt to influence the democratic process during the 2019 federal election.” The election integrity plan promotes news literacy and offers a guide for politicians and political parties to avoid getting hacked.

Compared to the U.S., Canada’s laws allow for much stricter government regulation of political advertising, said Michael Pal, a law professor at the University of Ottawa. He said Facebook’s transparency initiative was a good first step but that he saw the extension of strong campaign rules into internet advertising as inevitable in Canada. “This is the sort of question that, in Canada, is going to be handled by regulation,” Pal said.

Several Canadian technology policy experts who spoke with ProPublica said Facebook’s new system was too inconvenient for the average user. There’s no central place where people can search the millions of ads on Facebook to see what ads are running about a certain subject, so unless users are part of the target audience, they wouldn’t necessarily know that a group is even running an ad. If users somehow hear about an ad or simply want to check whether a company or group is running one, they must first navigate to the group’s Facebook page and then click a small tab on the side labeled “Ads” that runs alongside other tabs such as “Videos” and “Community.” Once the user clicks the “Ads” tab, a page opens showing every ad that the page owner is running at that time, one after another.

The group’s Facebook page isn’t always linked from the text of the ad. If it isn’t, users can still find the Facebook page by navigating to the “Why am I seeing this?” link in a drop-down menu at the top right of each ad in their news feed.

As soon as a marketing campaign is over, an ad can no longer be found on the “Ads” page at all. When ProPublica checked the Airbnb Citizen Facebook page a week after collecting the ad, it was no longer there.

Because the “Ads” page also doesn’t disclose the demographics of the advertiser’s target audience, people can only see that data on ads that were aimed at them and were on their own Facebook news feed. Without this information, people outside an ad’s selected audience can’t see to whom companies or politicians are tailoring their messages. ProPublica reported last year that dozens of major companies directed recruitment ads on Facebook only to younger people — information that would likely interest older workers, but would still be concealed from them under the new policy. One recent ad by Prime Minister Justin Trudeau was directed at “people who may be similar to” his supporters, according to the Political Ad Collector data. Under the new system, people who don’t support Trudeau could see the ad on his Facebook page, but wouldn’t know why it was excluded from their news feeds.

Facebook has promised new measures to make political ads more accessible. When it expands the initiative to the U.S., it will start building a searchable electronic archive of ads related to U.S. federal elections. This archive will include details on the amount of money spent and demographic information about the people the ads reached. Facebook will initially limit its definition of political ads to those that “refer to or discuss a political figure” in a federal election, the company said.

The company hasn’t said what, if any, archive will be created for ads for state and local contests, or for political ads in other countries. It has said it will eventually require political advertisers in other countries, and in state elections in the U.S., to provide more documentation, but it’s not clear when that will happen.

Ads that aren’t political will be available under the same system being tested in Canada now.

Even an archive of the sort Facebook envisions wouldn’t solve the problems of misleading advertising on Facebook, Surman said. “It would be interesting to journalists and researchers trying to track this issue. But it won’t help users make informed choices about what ads they see,” he said. That’s because users need more information alongside the ads they are seeing on their news feeds, not in a separate location, he said.

The Airbnb Citizen ad wasn’t the only tactic that Airbnb adopted in an apparent attempt to sway the Toronto City Council. It also packed the council galleries with supporters on the morning of the vote, according to The Globe and Mail. Still, its efforts appear to have been unsuccessful.

On Dec. 6, two days after a reader sent us the ad, the City Council voted to keep people from renting a space that wasn’t their primary residence and stop homeowners from listing units such as basement apartments.

Filed under: Technology

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Advertising Agency Paid $2 Million To Settle Deceptive Advertising Charges

Marketing Architects Inc. The U.S. Federal Trade Commission (FTC) announced that Minneapolis-based Marketing Architects, Inc. (MAI):

"... an advertising agency that created and disseminated allegedly deceptive radio ads for weight-loss products marketed by its client, Direct Alternatives, has agreed to pay $2 million to the Federal Trade Commission and State of Maine Attorney General’s Office to settle their complaint..."

First, some background. According to the FTC, MAI created advertising for several products (e.g., Puranol, Pur-Hoodia Plus, Acai Fresh, AF Plus, and Final Trim) by Direct Alternatives from 2006 through February 2015. Then, in 2016 the FTC and the State of Maine settled allegations against Direct Alternatives, which required the company to halt deceptive advertising and illegal billing practices.

Additional background according to the FTC: MAI previously created weight-loss ads for Sensa Products, LLC between March 2009 and May 2011. The FTC filed a complaint against Sensa in 2014, and subsequently Sensa agreed to refund $26.5 million to defrauded consumers. So, there's important, relevant history.

In the latest action, the joint complaint alleged that MAI created and disseminated radio ads with false or unsubstantiated weight-loss claims for AF Plus and Final Trim. Besides:

"... receiving FTC’s Sensa order, MAI was previously made aware of the need to have competent and reliable scientific evidence to back up health claims. Among other things, the complaint alleges that Direct Alternatives provided MAI with documents indicating that some of the weight-loss claims later challenged by the FTC needed to be supported by scientific evidence.

The complaint further charges that MAI developed and disseminated fictitious weight-loss testimonials and created radio ads for weight-loss products falsely disguised as news stories. Finally, the complaint charges MAI with creating inbound call scripts that failed to adequately disclose that consumers would be automatically enrolled in negative-option (auto-ship) continuity plans."

The latest action includes a proposed court order to ban MAI from making weight-loss claims the FTC has already identified as false, and:

"... requires MAI to have competent and reliable scientific evidence to support any other claims about the health benefits or efficacy of weight-loss products, and prohibits it from misrepresenting the existence or outcome of tests or studies. In addition, the order prohibits MAI from misrepresenting the experience of consumer testimonialists or that paid commercial advertising is independent programming."

This action is a reminder to advertising and digital agency executives everywhere: ensure that claims are supported by competent, reliable scientific evidence.

Good. Kudos to the FTC for these enforcement actions and for protecting consumers.


Burger King's Whopper Neutrality Ad. Sincere 'Net Neutrality' Support Or Slick Corporate Advertising?

If you haven't seen it, there is a Whopper Neutrality ad online by Burger King that explains net neutrality in a very easy-to-understand way. Blog post continues after the video:

A November 2017 poll found that 52 percent of registered voters supported the current net neutrality rules, including 55 percent of Democrats and 53 percent of Republicans. After that poll, the Commissioners at the FCC voted to kill net neutrality protections for consumers.

Some have questioned whether the ad is sincere support of an issue consumers care about, or slick corporate advertising that capitalizes on a hot topic. I like the ad. Anything that helps more consumers understand the issue, and what we've lost, is a good thing.

Another view of the ad by The Young Turks. Share your opinions below after the video:



Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads

[Editor's note: everyone looks for a new job during their life. Today's guest blog post, by the reporters at ProPublica, explores an advertising practice by recruiters using social networking sites. Today's post is reprinted with permission.]

By Julia Angwin and Ariana Tobin of ProPublica, with Noam Scheiber, of The New York Times

A few weeks ago, Verizon placed an ad on Facebook to recruit applicants for a unit focused on financial planning and analysis. The ad showed a smiling, millennial-aged woman seated at a computer and promised that new hires could look forward to a rewarding career in which they would be "more than just a number."

Some relevant numbers were not immediately evident. The promotion was set to run on the Facebook feeds of users 25 to 36 years old who lived in the nation’s capital, or had recently visited there, and had demonstrated an interest in finance. For a vast majority of the hundreds of millions of people who check Facebook every day, the ad did not exist.

Verizon is among dozens of the nation's leading employers — including Amazon, Goldman Sachs, Target and Facebook itself — that placed recruitment ads limited to particular age groups, an investigation by ProPublica and The New York Times has found.

The ability of advertisers to deliver their message to the precise audience most likely to respond is the cornerstone of Facebook’s business model. But using the system to expose job opportunities only to certain age groups has raised concerns about fairness to older workers.

Several experts questioned whether the practice is in keeping with the federal Age Discrimination in Employment Act of 1967, which prohibits bias against people 40 or older in hiring or employment. Many jurisdictions make it a crime to “aid” or “abet” age discrimination, a provision that could apply to companies like Facebook that distribute job ads.

"It’s blatantly unlawful," said Debra Katz, a Washington employment lawyer who represents victims of discrimination.

Facebook defended the practice. "Used responsibly, age-based targeting for employment purposes is an accepted industry practice and for good reason: it helps employers recruit and people of all ages find work," said Rob Goldman, a Facebook vice president.

The revelations come at a time when the unregulated power of the tech companies is under increased scrutiny, and Congress is weighing whether to limit the immunity that it granted to tech companies in 1996 for third-party content on their platforms.

Facebook has argued in court filings that the law, the Communications Decency Act, makes it immune from liability for discriminatory ads.

Although Facebook is a relatively new entrant into the recruiting arena, it is rapidly gaining popularity with employers. Earlier this year, the social network launched a section of its site devoted to job ads. Facebook allows advertisers to select their audience, and then Facebook finds the chosen users with the extensive data it collects about its members.

The use of age targets emerged in a review of data originally compiled by ProPublica readers for a project about political ad placement on Facebook. Many of the ads include a disclosure by Facebook about why the user is seeing the ad, which can be anything from their age to their affinity for folk music.

The precision of Facebook’s ad delivery has helped it dominate an industry once in the hands of print and broadcast outlets. The system, called microtargeting, allows advertisers to reach essentially whomever they prefer, including the people their analysis suggests are the most plausible hires or consumers, lowering the costs and vastly increasing efficiency.

Targeted Facebook ads were an important tool in Russia’s efforts to influence the 2016 election. The social media giant has acknowledged that 126 million people saw Russia-linked content, some of which was aimed at particular demographic groups and regions. Facebook has also come under criticism for the disclosure that it accepted ads aimed at "Jew-haters" as well as housing ads that discriminated by race, gender, disability and other factors.

Other tech companies also offer employers opportunities to discriminate by age. ProPublica bought job ads on Google and LinkedIn that excluded audiences older than 40 — and the ads were instantly approved. Google said it does not prevent advertisers from displaying ads based on the user’s age. After being contacted by ProPublica, LinkedIn changed its system to prevent such targeting in employment ads.

The practice has begun to attract legal challenges. On Wednesday, a class-action complaint alleging age discrimination was filed in federal court in San Francisco on behalf of the Communications Workers of America and its members — as well as all Facebook users 40 or older who may have been denied the chance to learn about job openings. The plaintiffs’ lawyers said the complaint was based on ads for dozens of companies that they had discovered on Facebook.

The database of Facebook ads collected by ProPublica shows how often and precisely employers recruit by age. In a search for “part-time package handlers,” United Parcel Service ran an ad aimed at people 18 to 24. State Farm pitched its hiring promotion to those 19 to 35.

Some companies, including Target, State Farm and UPS, defended their targeting as a part of a broader recruitment strategy that reached candidates of all ages. The group of companies making this case included Facebook itself, which ran career ads on its own platform, many aimed at people 25 to 60. "We completely reject the allegation that these advertisements are discriminatory," said Goldman of Facebook.

After being contacted by ProPublica and the Times, other employers, including Amazon, Northwestern Mutual and the New York City Department of Education, said they had changed or were changing their recruiting strategies.

"We recently audited our recruiting ads on Facebook and discovered some had targeting that was inconsistent with our approach of searching for any candidate over the age of 18," said Nina Lindsey, a spokeswoman for Amazon, which targeted some ads for workers at its distribution centers between the ages of 18 and 50. "We have corrected those ads."

Verizon did not respond to requests for comment.

Several companies argued that targeted recruiting on Facebook was comparable to advertising opportunities in publications like the AARP magazine or Teen Vogue, which are aimed at particular age groups. But this obscures an important distinction. Anyone can buy Teen Vogue and see an ad. Online, however, people outside the targeted age groups can be excluded in ways they will never learn about.

"What happens with Facebook is you don’t know what you don’t know," said David Lopez, a former general counsel for the Equal Employment Opportunity Commission who is one of the lawyers at the firm Outten & Golden bringing the age-discrimination case on behalf of the communication workers union.

‘They Know I’m Dead’

Age discrimination on digital platforms is something that many workers suspect is happening to them, but that is often difficult to prove.

Mark Edelstein, a fitfully employed social-media marketing strategist who is 58 and legally blind, doesn’t pretend to know what he doesn’t know, but he has his suspicions.

Edelstein, who lives in St. Louis, says he never had serious trouble finding a job until he turned 50. “Once you reach your 50s, you may as well be dead,” he said. "I’ve gone into interviews, with my head of gray hair and my receding hairline, and they know I’m dead."

Edelstein spends most of his days scouring sites like LinkedIn and Indeed and pitching hiring managers with personalized appeals. When he scrolled through his Facebook ads on a Wednesday in December, he saw a variety of ads reflecting his interest in social media marketing: ads for the marketing software HubSpot ("15 free infographic templates!") and TripIt, which he used to book a trip to visit his mother in Florida.

What he didn’t see was a single ad for a job in his profession, including one identified by ProPublica that was being shown to younger users: a posting for a social media director job at HubSpot. The company asked that the ad be shown to people aged 27 to 40 who live or were recently living in the United States.

"Hypothetically, had I seen a job for a social media director at HubSpot, even if it involved relocation, I ABSOLUTELY would have applied for it," Edelstein said by email when told about the ad.

A HubSpot spokeswoman, Ellie Botelho, said that the job was posted on many sites, including LinkedIn, The Ladders and Built in Boston, and was open to anyone meeting the qualifications regardless of age or any other demographic characteristic.

She added that “the use of the targeted age-range selection on the Facebook ad was frankly a mistake on our part given our lack of experience using that platform for job postings and not a feature we will use again.”

For his part, Edelstein says he understands why marketers wouldn’t want to target ads at him: "It doesn’t surprise me a bit. Why would they want a 58-year-old white guy who’s disabled?"

Looking for ‘Younger Blood’

Although LinkedIn is the leading online recruitment platform, according to an annual survey by SourceCon, an industry website, Facebook is rapidly increasing in popularity with employers.

One reason is that Facebook’s sheer size — two billion monthly active users, versus LinkedIn’s 530 million total members — gives recruiters access to types of workers they can’t find elsewhere.

Consider nurses, whom hospitals are desperate to hire. “They’re less likely to use LinkedIn,” said Josh Rock, a recruiter at a large hospital system in Minnesota who has expertise in digital media. "Nurses are predominantly female, there’s a larger volume of Facebook users. That’s what they use."

There are also millions of hourly workers who have never visited LinkedIn, and may not even have a résumé, but who check Facebook obsessively.

Deb Andrychuk, chief executive of the Arland Group, which helps employers place recruitment ads, said clients sometimes asked her firm to target ads by age, saying they needed “to start bringing younger blood” into their organizations. “It’s not necessarily that we wouldn’t take someone older,” these clients say, according to Andrychuk, “but if you could bring in a younger set of applicants, it would definitely work out better.”

Andrychuk said that “we coach clients to be open and not discriminate” and that after being contacted by The Times, her team updated all their ads to ensure they didn’t exclude any age groups.

But some companies contend that there are permissible reasons to filter audiences by age, as with an ad for entry-level analyst positions at Goldman Sachs that was distributed to people 18 to 64. A Goldman Sachs spokesman, Andrew Williams, said showing it to people above that age range would have wasted money: roughly 25 percent of those who typically click on the firm’s untargeted ads are 65 or older, but people that age almost never apply for the analyst job.

"We welcome and actively recruit applicants of all ages," Williams said. "For some of our social-media ads, we look to get the content to the people most likely to be interested, but do not exclude anyone from our recruiting activity."

Pauline Kim, a professor of employment law at Washington University in St. Louis, said the Age Discrimination in Employment Act, unlike the federal anti-discrimination statute that covers race and gender, allows an employer to take into account “reasonable factors” that may be highly correlated with the protected characteristic, such as cost, as long as they don’t rely on the characteristic explicitly.

The Question of Liability

In various ways, Facebook and LinkedIn have acknowledged at least a modest obligation to police their ad platforms against abuse.

Earlier this year, Facebook said it would require advertisers to "self-certify" that their housing, employment and credit ads were compliant with anti-discrimination laws, but that it would not block marketers from purchasing age-restricted ads.

Still, Facebook didn’t promise to monitor those certifications for accuracy. And Facebook said the self-certification system, announced in February, was still being rolled out to all advertisers.

LinkedIn, in response to inquiries by ProPublica, added a self-certification step that prevents employers from using age ranges once they confirm that they are placing an employment ad.

With these efforts evolving, legal experts say it is unclear how much liability the tech platforms could have. Some civil rights laws, like the Fair Housing Act, explicitly require publishers to assume liability for discriminatory ads.

But the Age Discrimination in Employment Act assigns liability only to employers or employment agencies, like recruiters and advertising firms.

The lawsuit filed against Facebook on behalf of the communications workers argues that the company essentially plays the role of an employment agency — collecting and providing data that helps employers locate candidates, effectively coordinating with the employer to develop the advertising strategies, informing employers about the performance of the ads, and so forth.

Regardless of whether courts accept that argument, the tech companies could also face liability under certain state or local anti-discrimination statutes. For example, California’s Fair Employment and Housing Act makes it unlawful to "aid, abet, incite, compel or coerce the doing" of discriminatory acts proscribed by the statute.

"They may have an obligation there not to aid and abet an ad that enables discrimination," said Cliff Palefsky, an employment lawyer based in San Francisco.

The question may hinge on Section 230 of the federal Communications Decency Act, which protects internet companies from liability for third-party content.

Tech companies have successfully invoked this law to avoid liability for offensive or criminal content — including sex trafficking, revenge porn and calls for violence against Jews. Facebook is currently arguing in federal court that Section 230 immunizes it against liability for ad placement that blocks members of certain racial and ethnic groups from seeing the ads.

"Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any," Facebook argued in its motion to dismiss allegations that its ads violated a host of civil rights laws. The case does not allege age discrimination.

Eric Goldman, professor and co-director of the High Tech Law Institute at the Santa Clara University School of Law, who has written extensively about Section 230, says it is hard to predict how courts would treat Facebook’s age-targeting of employment ads.

Goldman said the law covered the content of ads, and that courts have made clear that Facebook would not be liable for an advertisement in which an employer wrote, say, “no one over 55 need apply.” But it is not clear how the courts would treat Facebook’s offering of age-targeted customization.

According to a federal appellate court decision in a fair-housing case, a platform can be considered to have helped “develop unlawful content” that users play a role in generating, which would negate the immunity.

"Depending on how the targeting is happening, you can make potentially different sorts of arguments about whether or not Google or Facebook or LinkedIn is contributing to the development" of the ad, said Deirdre K. Mulligan, a faculty director of the Berkeley Center for Law and Technology.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook to Temporarily Block Advertisers From Excluding Audiences by Race

[Editor's note: today's guest blog post, by the reporters at ProPublica, discusses advertising practices by both Facebook, a popular social networking site, and some advertisers using the site. Today's post is reprinted with permission.]

By Julia Angwin, ProPublica

Facebook said it would temporarily stop advertisers from being able to exclude viewers by race while it studies the use of its ad targeting system.

“Until we can better ensure that our tools will not be used inappropriately, we are disabling the option that permits advertisers to exclude multicultural affinity segments from the audience for their ads,” Facebook Chief Operating Officer Sheryl Sandberg wrote in a letter to the Congressional Black Caucus.

ProPublica disclosed last week that Facebook was still allowing advertisers to buy housing ads that excluded audiences by race, despite its promises earlier this year to reject such ads. ProPublica also found that Facebook was not asking housing advertisers that blocked other sensitive audience categories — by religion, gender, or disability — to “self-certify” that their ads were compliant with anti-discrimination laws.

Under the Fair Housing Act of 1968, it’s illegal to “to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.” Violators face tens of thousands of dollars in fines.

In her letter, Sandberg said the company will examine how advertisers are using its exclusion tool — “focusing particularly on potentially sensitive segments” such as ads that exclude LGBTQ communities or people with disabilities. “During this review, no advertisers will be able to create ads that exclude multicultural affinity groups,” Facebook Vice President Rob Goldman said in an e-mailed statement.

Goldman said the results of the audit would be shared with “groups focused on discrimination in ads,” and that Facebook would work with them to identify further improvements and publish the steps it will take.

Sandberg’s letter to the Congressional Black Caucus is the outgrowth of a dialogue that has been ongoing since last year when ProPublica published its first article revealing Facebook was allowing advertisers to exclude people with an “ethnic affinity” for various minority groups, including African Americans, Asian Americans and Hispanics, from viewing their ads.

At that time, four members of the Congressional Black Caucus reached out to Facebook for an explanation. “This is in direct violation of the Fair Housing Act of 1968, and it is our strong desire to see Facebook address this issue immediately,” wrote the lawmakers.

The U.S. Department of Housing and Urban Development, which enforces the nation’s fair housing laws, opened an inquiry into Facebook’s practices.

But in February, Facebook said it had solved the problem — by building an algorithm that would allow it to spot and reject housing, employment and credit ads that discriminated using racial categories. For audiences not selected by race, Facebook said it would require advertisers to “self-certify” that their ads were compliant with the law.

HUD closed its inquiry. But last week, ProPublica successfully purchased dozens of racist, sexist and otherwise discriminatory ads for a fictional housing company advertising a rental. None of the ads were rejected and none required a self-certification. Facebook said it was a “technical failure” and vowed to fix the problem.

U.S. Rep. Robin Kelly, D-Ill., said that Facebook’s actions to disable the feature are “an appropriate action.” “When I first raised this issue with Facebook, I was disappointed. When it became necessary to raise the issue again, I was irritated,” she said. “I will continue watching this issue very closely to ensure these issues do not arise again.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Do Social Media Pose Threats To Democracies?

November 4th cover of The Economist magazine The November 4th issue of The Economist magazine discussed whether social networking sites threaten democracy in the United States and elsewhere. Social media were supposed to better connect us with accurate and reliable information. What we know so far (links added):

"... Facebook acknowledged that before and after last year’s American election, between January 2015 and August this year, 146m users may have seen Russian misinformation on its platform. Google’s YouTube admitted to 1,108 Russian-linked videos and Twitter to 36,746 accounts. Far from bringing enlightenment, social media have been spreading poison. Russia’s trouble-making is only the start. From South Africa to Spain, politics is getting uglier... by spreading untruth and outrage, corroding voters’ judgment and aggravating partisanship, social media erode the conditions..."

You can browse some of the ads Russia bought on Facebook during 2016. (Hopefully, you weren't tricked by any of them.) We also know from this United Press International (UPI) report about social media companies' testimony before Congress:

"Senator Patrick Leahy (D-Vt) said Facebook still has many pages that appear to have been created by the Internet Research Agency, a pro-Kremlin group that bought advertising during the campaign. Senator Al Franken (D-Minn.) said some Russian-backed advertisers even paid for the ads in Russian currency.

"How could you not connect those two dots?" he asked Facebook general counsel Colin Stretch. "It's a signal we should have been alert to and, in hindsight, one we missed," Stretch answered."

Google logo And during the Congressional testimony:

"Google attorney Richard Salgado said his company's platform is not a newspaper, which has legal responsibilities different from technology platforms. "We are not a newspaper. We are a platform that shares information," he said. "This is a platform from which news can be read from many sources."

Separate from the Congressional testimony, Kent Walker, a Senior Vice President and General Counsel at Google, released a statement which read in part:

"... like other internet platforms, we have found some evidence of efforts to misuse our platforms during the 2016 U.S. election by actors linked to the Internet Research Agency in Russia... We have been conducting a thorough investigation related to the U.S. election across our products drawing on the work of our information security team, research into misinformation campaigns from our teams, and leads provided by other companies. Today, we are sharing results from that investigation... We will be launching several new initiatives to provide more transparency and enhance security, which we also detail in these information sheets: what we found, steps against phishing and hacking, and our work going forward..."

This matters greatly. Why? The Economist explained that the disinformation distributed via social media and other websites:

"... aggravates the politics of contempt that took hold, in the United States at least, in the 1990s. Because different sides see different facts, they share no empirical basis for reaching a compromise. Because each side hears time and again that the other lot are good for nothing but lying, bad faith and slander, the system has even less room for empathy. Because people are sucked into a maelstrom of pettiness, scandal and outrage, they lose sight of what matters for the society they share. This tends to discredit the compromises and subtleties of liberal democracy, and to boost the politicians who feed off conspiracy and nativism..."

When citizens (via their elected representatives) can't agree or compromise, government gridlock results. Nothing gets done. Frustration builds among voters.

What solutions could fix these problems? The Economist article discussed several remedies: better critical-thinking skills by social media users, holding social-media companies accountable, more transparency around ads, better fact checking, anti-trust actions, and/or disallowing bots (automated accounts). It will take time for social media users to improve their critical-thinking skills. Considerations about fact checking:

"When Facebook farms out items to independent outfits for fact-checking, the evidence that it moderates behavior is mixed. Moreover, politics is not like other kinds of speech; it is dangerous to ask a handful of big firms to deem what is healthy for society."

Considerations about anti-trust actions:

"Breaking up social-media giants might make sense in antitrust terms, but it would not help with political speech—indeed, by multiplying the number of platforms, it could make the industry harder to manage."

All of the solutions have advantages and disadvantages. It seems the problems will be with us for a long while. Social media have been abused... and will continue to be abused. Comments? What solutions do you think would be best?


What We Do and Don’t Know About Facebook’s New Political Ad Transparency Initiative

[Editor's note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

The short answer: It leaves the company some wiggle room.

Facebook logo By Julia Angwin, ProPublica

On Thursday September 21, Facebook Chief Executive Mark Zuckerberg announced several steps to make political ads on the world’s largest social network more transparent. The changes follow Facebook’s acknowledgment in September that $100,000 worth of political ads were placed during the 2016 election cycle by “inauthentic accounts” linked to Russia.

The changes also follow ProPublica’s launch of a crowdsourcing effort during September to collect political advertising from Facebook. Our goal was to ensure that political ads on Facebook, which until now have largely avoided scrutiny, receive the same level of fact-checking by journalists, advocacy groups and political opponents as do print, broadcast and radio political ads. We hope to have some results to share soon.

In the meantime, here’s what we do and don’t know about how Facebook’s changes could play out.

How does Facebook plan to increase disclosure of funders of political ads?
In his statement, Zuckerberg said that Facebook will start requiring political advertisers to disclose “which page paid for an ad.”

This is a reversal for Facebook. In 2011, the company argued to the Federal Election Commission that it would be “inconvenient and impracticable” to include disclaimers in political ads because the ads are so small in size.

While the commission was too divided to make a decision on Facebook’s request for an advisory ruling, the deadlock effectively allowed the company to continue omitting disclosures. (The commission has just reopened discussion of whether to require disclosure for internet advertising).

Now Facebook appears to have dropped its objections to adding disclosures. However, the problem with Facebook’s plan of only revealing which page purchased the ad is that the source of the money behind the page is not always clear.

What is Facebook doing to make political ads more transparent to the public?
Zuckerberg also said that Facebook will start to require political advertisers to place on their pages all the ads they are “currently running to any audience on Facebook.”

This requirement could mean the end of the so-called “dark posts” on Facebook — political ads whose origins were not easily traced. Now, theoretically, each Facebook political ad would be associated with and published on a Facebook page — either for candidates, political action committees or interest groups.

However, the word “currently” suggests that such disclosure could be fleeting. After all, ads can run on Facebook for as little as a few minutes or a few hours. And since campaigns can run dozens, hundreds or even thousands of variations of a single ad — to test which one gets the best response — it will be interesting to see whether and how they manage to display all those ads on their pages simultaneously.

“It would require a lot of vigilance on the part of users and voters to be on those pages at the exact time” that campaigns posted all of their ads, said Brendan Fischer, a lawyer at the Campaign Legal Center, a campaign finance reform watchdog group.

How will Facebook decide which ads are political?
It’s not clear how Facebook will decide which ads are political and which aren’t. There are several existing definitions they could choose from.

The Federal Communications Commission defines political advertising as anything that “communicates a message relating to any political matter of national importance,” but those rules only apply to television and radio broadcasters. FCC rules require extensive disclosure, including the amount paid for the ads, the audiences targeted and how many times the ads run.

The Federal Election Commission has traditionally defined two major types of campaign ads. “Independent expenditures” are ads that expressly advocate the election or defeat of a “clearly identified candidate.” A slightly broader definition, “electioneering communications,” encompasses so-called “issue ads” that mention a candidate but may not directly advocate for his or her election or defeat.

The FEC only requires spending on electioneering ads to be disclosed in the 60 days leading up to a general election or the 30 days leading up to a primary election. And the electioneering communications rule does not apply to online advertising.

Of course, Facebook doesn’t have to choose any of the existing definitions of political advertising. It could do what it did with hate speech — and make up its own rules.

How will Facebook catch future political ads secretly placed by foreigners?
The law prohibits a foreign national from making any contribution or expenditure in any U.S. election. That means the Russians who bought the ads may have broken the law — and so may any American who “knowingly provided substantial assistance.”

In mid-September, when Facebook disclosed the Russian ad purchase, the company said it was increasing its technical efforts to identify fake and inauthentic pages and to prevent them from running ads.

Zuckerberg said the company would “strengthen our ad review process for political ads” but didn’t specify exactly how. (Separately, Facebook Chief Operating Officer Sheryl Sandberg said in September that the company is adding more human review to its ad-buying categories, after ProPublica revealed that it allowed advertisers to target ads toward “Jew haters.”)

Zuckerberg also said Facebook will work with other tech companies and governments to share information about online risks during elections.

Will ProPublica continue crowd-sourcing Facebook political ads?
Yes, we plan to keep using our tool to monitor political advertising. In September, we worked with news outlets in Germany — Spiegel Online, Süddeutsche Zeitung and Tagesschau — to collect more than 600 political ads during the parliamentary elections.

We believe there is value to creating a permanent database of political ads that can be inspected by the public, and we intend to track whether Facebook lives up to its promises. If you want to help us, download our tool for Firefox or Chrome web browsers.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook Enabled Advertisers to Reach ‘Jew Haters’

[Editor's note: today's guest post, by the reporters at ProPublica, is part of its Machine Bias series. After being contacted by ProPublica, Facebook removed several anti-Semitic ad categories and it no longer allows advertisers to target groups based upon self-reported information. Today's post is reprinted with permission.]

By Julia Angwin, Madeleine Varner, and Ariana Tobin - ProPublica

Want to market Nazi memorabilia, or recruit marchers for a far-right rally? Facebook’s self-service ad-buying platform had the right audience for you.

Until last week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”

To test if these ad categories were real, we paid $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.

After we contacted Facebook, it removed the anti-Semitic categories — which were created by an algorithm rather than by people — and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.

“There are times where content is surfaced on our platform that violates our standards,” said Rob Leathern, product management director at Facebook. “In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Facebook’s advertising has become a focus of national attention since it disclosed last week that it had discovered $100,000 worth of ads placed during the 2016 presidential election season by “inauthentic” accounts that appeared to be affiliated with Russia.

Like many tech companies, Facebook has long taken a hands-off approach to its advertising business. Unlike traditional media companies that select the audiences they offer advertisers, Facebook generates its ad categories automatically, based both on what users explicitly share with Facebook and what they implicitly convey through their online activity.

Traditionally, tech companies have contended that it’s not their role to censor the Internet or to discourage legitimate political expression. In the wake of the violent protests in Charlottesville by right-wing groups that included self-described Nazis, Facebook and other tech companies vowed to strengthen their monitoring of hate speech.

Facebook CEO Mark Zuckerberg wrote at the time that “there is no place for hate in our community,” and pledged to keep a closer eye on hateful posts and threats of violence on Facebook. “It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious,” he wrote.

But Facebook apparently did not intensify its scrutiny of its ad buying platform. In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a “field of study.” Facebook’s algorithm automatically transforms people’s declared interests into advertising categories.
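To make the mechanism described above concrete, here is a minimal, hypothetical sketch of how self-reported profile fields could be aggregated into targeting categories with audience counts. This is an illustration of the general technique only — ProPublica did not publish Facebook's actual algorithm, and every name and data structure below is an assumption.

```python
from collections import defaultdict

def build_ad_categories(profiles):
    """Aggregate self-reported profile fields (interests, employers,
    fields of study) into ad-targeting categories with audience sizes.
    A simplified illustration -- NOT Facebook's actual algorithm."""
    categories = defaultdict(set)
    for user_id, profile in profiles.items():
        for field in ("interests", "employers", "fields_of_study"):
            for value in profile.get(field, []):
                # Each declared value becomes a category name;
                # the audience is the set of distinct users declaring it.
                categories[value.strip().lower()].add(user_id)
    return {name: len(users) for name, users in categories.items()}

# Hypothetical profiles: two users declare the same interest.
profiles = {
    1: {"interests": ["Hungarian sausages"]},
    2: {"fields_of_study": ["Hungarian sausages"], "employers": ["Acme"]},
}
print(build_ad_categories(profiles))
```

The point of the sketch is that no human reviews the category names: whatever users type into their profiles — including hateful phrases — becomes a purchasable audience label.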

Here is a screenshot of our ad buying process on the company’s advertising portal:

Screenshot of Facebook ad buying process

This is not the first controversy over Facebook’s ad categories. Last year, ProPublica was able to block an ad that we bought in Facebook’s housing categories from being shown to African-Americans, Hispanics and Asian-Americans, raising the question of whether such ad targeting violated laws against discrimination in housing advertising. After ProPublica’s article appeared, Facebook built a system that it said would prevent such ads from being approved.

Last year, ProPublica also collected a list of the advertising categories Facebook was providing to advertisers. We downloaded more than 29,000 ad categories from Facebook’s ad system — and found categories ranging from an interest in “Hungarian sausages” to “People in households that have an estimated household income of between $100K and $125K.”

At that time, we did not find any anti-Semitic categories, but we do not know if we captured all of Facebook’s possible ad categories, or if these categories were added later. A Facebook spokesman didn’t respond to a question about when the categories were introduced.

Two weeks ago, acting on a tip, we logged into Facebook’s automated ad system to see if “Jew hater” was really an ad category. We found it, but discovered that the category — with only 2,274 people in it — was too small for Facebook to allow us to buy an ad pegged only to Jew haters.

Facebook’s automated system suggested “Second Amendment” as an additional category that would boost our audience size to 119,000 people, presumably because its system had correlated gun enthusiasts with anti-Semites.

Instead, we chose additional categories that popped up when we typed in “jew h”: “How to burn Jews,” and “History of ‘why jews ruin the world.’” Then we added a category that Facebook suggested when we typed in “Hitler”: a category called “Hitler did nothing wrong.” All were described as “fields of study.”

These ad categories were tiny. Only two people were listed as the audience size for “how to burn jews,” and just one for “History of ‘why jews ruin the world.’” Another 15 people comprised the viewership for “Hitler did nothing wrong.”

Facebook’s automated system told us that we still didn’t have a large enough audience to make a purchase. So we added “German Schutzstaffel,” commonly known as the Nazi SS, and the “Nazi Party,” which were both described to advertisers as groups of “employers.” Their audiences were larger: 3,194 for the SS and 2,449 for Nazi Party.

Still, Facebook said we needed more — so we added people with an interest in the National Democratic Party of Germany, a far-right, ultranationalist political party, with its much larger viewership of 194,600.

Once we had our audience, we submitted our ad — which promoted an unrelated ProPublica news article. Within 15 minutes, Facebook approved our ad, with one change. In its approval screen, Facebook described the ad targeting category “Jew hater” as “Antysemityzm,” the Polish word for anti-Semitism. Just to make sure it was referring to the same category, we bought two additional ads using the term “Jew hater” in combination with other terms. Both times, Facebook changed the ad targeting category “Jew hater” to “Antysemityzm” in its approval.

Here is one of our approved ads from Facebook:

Screenshot of approved Facebook ad for ProPublica

A few days later, Facebook sent us the results of our campaigns. Our three ads reached 5,897 people, generating 101 clicks and 13 “engagements” — which could be a “like,” a “share,” or a comment on a post.

Since we contacted Facebook, most of the anti-Semitic categories have disappeared.

Facebook spokesman Joe Osborne said that they didn’t appear to have been widely used. “We have looked at the use of these audiences and campaigns and it’s not common or widespread,” he said.

We looked for analogous advertising categories for other religions, such as “Muslim haters.” Facebook didn’t have them.

Update, Sept. 14, 2017: This story has been updated to include the Facebook spokesman's name.



Despite Disavowals, Leading Tech Companies Help Extremist Sites Monetize Hate

[Editor's note: today's guest post, by reporters at ProPublica, explores how hate sites maintain an online presence. It is reprinted with permission.]

By Julia Angwin, Jeff Larson, Madeleine Varner and Lauren Kirchner. ProPublica

Because of its "extreme hostility toward Muslims," the website Jihadwatch.org is considered an active hate group by the Southern Poverty Law Center and the Anti-Defamation League. The views of the site's director, Robert Spencer, on Islam led the British Home Office to ban him from entering the country in 2013.

But its designation as a hate site hasn't stopped tech companies -- including PayPal, Amazon and Newsmax -- from maintaining partnerships with Jihad Watch that help to sustain it financially. PayPal facilitates donations to the site. Newsmax -- the online news network run by President Donald Trump's close friend Chris Ruddy -- pays Jihad Watch in return for users clicking on its headlines. Until recently, Amazon allowed Jihad Watch to participate in a program that promised a cut of any book sales that the site generated. All three companies have policies that say they don't do business with hate groups.

Jihad Watch is one of many sites that monetize their extremist views through relationships with technology companies. ProPublica surveyed the most visited websites of groups designated as extremist by either the SPLC or the Anti-Defamation League. We found that more than half of them -- 39 out of 69 -- made money from ads, donations or other revenue streams facilitated by technology companies. At least 10 tech companies played a role directly or indirectly in supporting these sites.

Traditionally, tech companies have justified such relationships by contending that it's not their role to censor the Internet or to discourage legitimate political expression. Also, their management wasn't necessarily aware that they were doing business with hate sites because tech services tend to be automated and based on algorithms tied to demographics.

In the wake of last week's violent protest by alt-right groups in Charlottesville, more tech companies have disavowed relationships with extremist groups. During just the last week, six of the sites on our list were shut down. Even the web services company Cloudflare, which had long defended its laissez-faire approach to political expression, finally ended its relationship with the neo-Nazi site The Daily Stormer last week.

"I can't recall a time where the tech industry was so in step in their response to hate on their platforms," said Oren Segal, director of the ADL's Center on Extremism. "Stopping financial support to hate sites seems like a win-win for everyone."

But ProPublica's findings indicate that some tech companies with anti-hate policies may have failed to establish the monitoring processes needed to weed out hate sites. PayPal, the payment processor, has a policy against working with sites that use its service for "the promotion of hate, violence, [or] racial intolerance." Yet it was by far the top tech provider to the hate sites with donation links on 23 sites, or about one-third of those surveyed by ProPublica. In response to ProPublica's inquiries, PayPal spokesman Justin Higgs said in a statement that the company "strives to conscientiously assess activity and review accounts reported to us."

After Charlottesville, PayPal stopped accepting payments or donations for several high-profile white nationalist groups that participated in the march. It posted a statement that it would remain "vigilant on hate, violence & intolerance." It addresses each case individually, and "strives to navigate the balance between freedom of expression" and the "limiting and closing" of hate sites, it said.

After being contacted by ProPublica, Newsmax said it was unaware that the three sites that it had relationships with were considered hateful. "We will review the content of these sites and make any necessary changes after that review," said Andy Brown, chief operating officer of Newsmax.

Amazon spokeswoman Angie Newman said the company had previously removed Jihad Watch and three other sites identified by ProPublica from its program sharing revenue for book sales, which is called Amazon Associates. When ProPublica pointed out that the sites still carried working links to the program, she said that it was their responsibility to remove the code. "They are no longer paid as an Associate regardless of what links are on their site once we remove them from the Associates Program," she said.

Where to set the boundaries between hate speech and legitimate advocacy for perspectives on the edge of the political spectrum, and who should set them, are complex and difficult questions. Like other media outlets, we relied in part on the Southern Poverty Law Center's public list of "Active Hate Groups 2016." This list is controversial in some circles, with critics questioning whether the SPLC is too quick to brand organizations on the right as hate groups.

Still, the center does provide detailed explanations for many of its designations. For instance, the SPLC documents its decision to include the Family Research Council by citing the evangelical lobbying group's promotion of discredited science and unsubstantiated attacks on gay and lesbian people. We also consulted a list from ADL, which is not public and that was provided to us for research purposes. See our methodology here.

The sites that we identified from the ADL and SPLC lists vehemently denied that they are hate sites.

"It is not hateful, racist or extremist to oppose jihad terror," said Spencer, the director of Jihad Watch. He added that the true extremism was displayed by groups that seek to censor the Internet and that by asking questions about the tech platforms on his site, we were "aiding and abetting a quintessentially fascist enterprise."

Spencer made these comments in response to questions emailed by ProPublica reporter Lauren Kirchner. Afterwards, Spencer posted an item on Jihad Watch alleging that "leftist 'journalist'" Kirchner had threatened the site. He also posted Kirchner's photo and email, as well as his correspondence with her. After being contacted by ProPublica, another anti-Islam activist, Pamela Geller, also posted an attack on Kirchner, calling her a "senior reporting troll." Like Spencer, Geller was banned by the British Home Office; her eponymous site is on the SPLC and ADL lists.

Donations -- and the ability to accept them online through PayPal and similar companies -- are a lifeline for sites like Jihad Watch. In 2015, the nonprofit website disclosed that three quarters of its roughly $100,000 in revenues came from donations, according to publicly available tax records.

In recent weeks, PayPal has been working to shut down donations to extremist sites. This week, it pulled the plug on VDARE.com, an anti-immigration website designated as "white nationalist" by the SPLC and as a hate site by the ADL. VDARE, which denies being white nationalist, immediately switched to its backup system, Stripe.

Stripe, a private company recently described by Bloomberg Businessweek as a $9 billion startup, is unusual in not having a policy against working with hate sites. It does, however, prohibit financial transactions that support drugs, pornography and "psychic services." Stripe provided donation links for 10 sites, second only to PayPal on our list. Stripe did not respond to a request for comment.

VDARE editor Peter Brimelow declared on his site that the PayPal shutdown was likely part of a purge by the "authoritarian Communist Left to punish anyone who disagrees with their anti-American violence against patriotic people." He urged his readers to donate through other channels such as Bitcoins. "We need your help desperately," he wrote. "We must have the resources to defend ourselves and our people."

In 2015, VDARE received nearly all of its revenue -- $267,038 out of total $293,663 -- from donations, according to publicly available tax return forms that the Internal Revenue Service requires nonprofits to disclose.

Brimelow did not respond to our questions, instead characterizing ProPublica as the "Totalitarian Left."

Some sites also supplement their donations with revenue from online advertising. For instance, SonsofLibertyMedia.com, which is on the SPLC list, generated about 10 percent of its revenue -- $37,828 -- from advertising in 2015, according to its tax documents.

The site, which describes itself as promoting a "Judeo-Christian ethic," and recently posted an article declaring that a black activist protesting Confederate statues needed "a serious beat down," does not appear to attract advertisers directly.

Instead, Sons of Liberty benefits from a type of ad-piggybacking arrangement that is becoming more common in the tech industry. The website runs sponsored news articles from a company called Taboola, which shares ad revenues with it. Known for being at the forefront of "click-bait," Taboola places links on websites to articles about celebrities and popular culture.

Taboola's policy prohibits working with sites that have "politically religious agendas" or use hate speech. "We strive to ensure the safety of our network but from time to time, unfortunately, mistakes can happen," said Taboola spokeswoman Dana Miller. "We will ask our Content Policy group to review this site again and take action if needed."

Sons of Liberty founder Bradlee Dean said that he forwarded our questions to his attorney. The lawyer did not respond.

Hate sites can initiate relationships with tech companies with little scrutiny.

Any website can fill out an online form asking to join, for instance, Amazon's network, and often can get approved instantly. Once a website has joined a tech network, it can quickly start earning money through advertising, donations, or content farms such as Taboola that share ad revenues with websites that distribute their articles.

Some companies, such as Newsmax, say that joining their ad network requires explicit prior approval.

But, according to a former Newsmax employee, the only criterion for this approval was whether traffic to the site reached a minimum threshold. There was no content review. Salespeople were told to be aggressive in signing up publishing partners.

"We'd put our news feed on anybody's page, anyone who was willing to listen," he said, "it's about email addresses, it's about marketing, they don't care about ultra conservative or left wing."

Dylann Roof frequented a website described by the SPLC as “white nationalist.” He said in a manifesto posted online that finding the website was a turning point in his life. He went on to murder nine African-American churchgoers in Charleston, South Carolina, in 2015. That year, USA Today found Newsmax ads on the site.

They no longer appear there.


 


How To Control The Ads Facebook Displays

If you use Facebook, then you know that the social networking site serves ads based upon your interests. And you'd probably be surprised at what Facebook thinks you are interested in versus what you are really interested in.

To see what Facebook thinks you are interested in, you will need to access your Ad Preferences page. Sign into your Facebook account using the browser interface, and click on the triangle drop-down menu icon in the upper right corner. Next select Settings, and then select Ads in the left column. Your Ad Preferences page looks like this:

Default view of the Facebook Ad Preferences page

Facebook has neatly organized what it thinks your interests are into several categories: Your Interests, Advertisers You've Interacted With, Your Information, and Ad Settings. Open the Your Interests module:

Your Interests module within Facebook Ad Preferences

This module includes several sub-categories: News & Entertainment, Business & Industry, Hobbies & Activities, Travel, Places & Events, People, Technology, and Lifestyle. Mouse over an item to reveal both an explanation of why that item appears in your list and the "X" delete button. Click on the "X" button to remove that item.

Facebook has collected impressively long lists about what it thinks your interests are. So, click on the "See More" links within each sub-category. Facebook adds interest items based upon links you've selected, groups you've joined, ads you have viewed, the photos/videos you have uploaded, items (e.g., groups, events, status messages) you have "Liked," and more. There's plenty to browse, so you'll probably want to set aside 15 minutes to review and delete items.

There is a sneaky aspect to Facebook's interface. An item may appear in several categories. So, if you delete it in one category don't assume it was deleted in other categories. You'll have to visit each sub-category and delete it there, too. And, there is no guarantee Facebook won't re-add that item later based upon your activities within the site and/or mobile app.

Caution: even if you delete everything, Facebook will still show advertisements. Why? That's what the social networking service is designed to do. That's its business model. Even if you stop clicking "Like" buttons, Facebook will use alternate criteria to display ads. You can control or limit the topics for ads, but you can't stop ads entirely.

The Your Information module includes toggle switches to either activate or deactivate groups of items within your profile which Facebook uses to display ads:

Your Information module within Facebook Ad Preferences

It's probably wise to revisit your Ad Preferences page once yearly to delete items. What do you think?


Berners-Lee: 3 Reasons Why The Internet Is In Serious Trouble

Most people love the Internet. It's a tool that has made life easier and more efficient in many ways. Even with all of those advances, the inventor of the World Wide Web listed three reasons why our favorite digital tool is in serious trouble:

  1. Consumers have lost control of their personal information
  2. It's too easy for anyone to publish misinformation online
  3. Political advertising online lacks transparency

Tim Berners-Lee explained the first reason:

"The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But, we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share..."

Given appointees in the U.S. Federal Communications Commission (FCC) by President Trump, it will likely get worse as the FCC seeks to revoke online privacy and net neutrality protections for consumers in the United States. Berners-Lee explained the second reason:

"Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And they choose what to show us based on algorithms that learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking, or designed to appeal to our biases, can spread like wildfire..."

Fake news has become so widespread that many public libraries, schools, and colleges teach students how to recognize fake news sites and content. The problem isn't limited to social networking sites like Facebook promoting certain news; it also includes search engines. Readers of this blog are familiar with the DuckDuckGo search engine, used both for online privacy and to escape the filter bubble. According to its public traffic page, DuckDuckGo gets about 14 million searches daily.

Most other search engines collect information about their users and use it to serve search results related to what they've searched for previously. That's called the "filter bubble." It's great for search engines' profitability because it encourages repeat usage, but it's terrible for consumers who want unbiased, unfiltered search results.
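The filter-bubble mechanism can be sketched in a few lines: results that match topics from a user's past activity get boosted above everything else. This is a hypothetical illustration of the general idea, not the ranking logic of any real search engine — the function names and data are invented for the example.

```python
def personalize(results, history):
    """Re-rank search results so pages matching topics the user has
    engaged with before float to the top -- the basic mechanism behind
    the "filter bubble." Simplified illustration, not any real engine."""
    def score(result):
        # Count how many of the user's past topics appear in the result.
        return sum(1 for topic in history if topic in result.lower())
    # Python's sort is stable, so equally scored results keep their order.
    return sorted(results, key=score, reverse=True)

results = ["Vaccine safety study", "Vaccine skeptic forum", "Local news"]
history = ["skeptic"]
print(personalize(results, history))
```

Two users issuing the identical query would see different orderings depending on their histories — which is exactly why the results feel relevant, and exactly how the bubble forms.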

Berners-Lee warned that online political advertising:

"... has rapidly become a sophisticated industry. The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data mean that political campaigns are now building individual adverts targeted directly at users. One source suggests that in the 2016 U.S. election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls. Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?"

What do you think of the assessment by Berners-Lee? Of his solutions? Any other issues?


Your Smart TV Is A Blabbermouth. How To Stop Its Spying On You

Internet-connected televisions, often referred to as "smart TVs," track the videos you watch from several sources: cable, broadband, set-top box, DVD player, over-the-air broadcasts, and streaming devices. They also collect a wide variety of information about consumers, such as sex, age, income, marital status, household size, education level, home ownership, and home value. The TV makers sell this information to third parties, such as advertisers and data brokers.

Some people might call this "surveillance capitalism."

Reliability and trust with smart devices are critical for consumers. Earlier this month, Vizio agreed to pay $2.2 million to settle privacy abuse charges by the U.S. Federal Trade Commission (FTC).

What's a consumer to do to protect their privacy? This C/Net article provides good step-by-step instructions to turn off or to minimize the tracking by your smart television. The instructions include several smart TV brands: Samsung, Vizio, LG, Sony, and others. Sample instructions for one brand:

"Samsung: On 2016 TVs, click the remote's Home button, go to Settings (gear icon), scroll down to Support, then down to Terms & Policy. Under "Interest Based Advertisement" click "Disable Interactive Services." Under "Viewing Information Services" unclick "I agree." And under "Voice Recognition Services" click "Disable advanced features of the Voice Recognition services." If you want you can also disagree with the other two, Nuance Voice Recognition and Online Remote Management.

On older Samsung TVs, hit the remote's Menu button (on 2015 models only, then select Menu from the top row of icons), scroll down to Smart Hub, then select Terms & Policy. Disable "SynchPlus and Marketing." You can also disagree with any of the other policies listed there, and if your TV has them, disable the voice recognition and disagree with the Nuance privacy notice described above."

Browse the step-by-step instructions for your brand of television. If you disabled the tracking features on your smart TV, how did it go? If you used a different resource to learn about your smart TV's tracking features, please share it below.


GOP Legislation In Congress To Revoke Consumer Privacy And Protections

The MediaPost Policy Blog reported:

"Republican Senator Jeff Flake, who opposes the Federal Communications Commission's broadband privacy rules, says he's readying a resolution to rescind them, Politico reports. Flake's confirmation to Politico comes days after Rep. Marsha Blackburn (R-Tennessee), the head of the House Communications Subcommittee, said she intends to work with the Senate to revoke the privacy regulations."

Blackburn's name is familiar. She was a key part of the GOP effort in 2014 to keep in place state laws that limit broadband competition by preventing citizens from forming local broadband providers. Many people want to form community broadband providers to get both higher speeds and lower prices than corporate internet service providers (ISPs) offer. They can't, because 20 states have laws preventing such competition. A worldwide study in 2014 found that consumers in the United States get poor broadband value: they pay more and get slower speeds. The only consumers getting good value were community broadband customers. In June 2014, the FCC announced plans to challenge these restrictive state laws that limit competition and keep Internet prices high. That FCC effort failed. To encourage competition and lower prices, several Democratic representatives introduced the Community Broadband Act in 2015. That legislation went nowhere in a GOP-controlled Congress.

Pause for a moment and let that sink in. Blackburn and other GOP representatives have pursued policies where we consumers all pay more for broadband due to the lack of competition. The GOP, a party that supposedly dislikes regulation and prefers free-market competition, is happy to do the opposite to help their corporate donors. The GOP, a party that historically has promoted states' rights, now uses state laws to restrict the freedoms of constituents at the city, town, and local levels. And, that includes rural constituents.

Too many GOP voters seem oblivious to this. Why Democrats failed to capitalize on this broadband issue, especially during the Presidential campaign last year, is puzzling. Everyone needs broadband: work, play, school, travel, entertainment.

Now, back to the effort to revoke the FCC's broadband privacy rules. Several cable, telecommunications, and advertising lobbies sent a letter in January asking Congress to remove the broadband privacy rules. That letter said in part:

"... in adopting new broadband privacy rules late last year, the Federal Communications Commission (“FCC”) took action that jeopardizes the vibrancy and success of the internet and the innovations the internet has and should continue to offer. While the FCC’s Order applies only to Internet Service Providers (“ISPs”), the onerous and unnecessary rules it adopted establish a very harmful precedent for the entire internet ecosystem. We therefore urge Congress to enact a resolution of disapproval pursuant to the Congressional Review Act (“CRA”) vitiating the Order."

The new FCC privacy rules require broadband providers (a/k/a ISPs) to obtain affirmative "opt-in" consent from consumers before using and sharing consumers' sensitive information; specify the types of information that are sensitive (e.g., geo-location, financial information, health information, children's information, Social Security numbers, web browsing history, app usage history, and the content of communications); require ISPs to stop using and sharing information about consumers who have opted out of information sharing; impose transparency requirements to clearly notify customers about the information collection and sharing, and about how to change their opt-in or opt-out preferences; prohibit "take-it-or-leave-it" offers, in which ISPs refuse to serve customers who don't consent to the information collection and sharing; and require "reasonable data security practices and guidelines" to protect the sensitive information collected and shared.

The new FCC privacy rules are common sense stuff, but clearly these companies view common-sense methods as a burden. They want to use consumers' information however they please without limits, and without consideration for consumers' desire to control their own personal information. And, GOP representatives in Congress are happy to oblige these companies in this abuse.

Alarmingly, there is more. Lots more.

The GOP-led Congress also seeks to roll back consumer protections in banking and financial services. According to Consumer Reports, the issue surfaced earlier this month in:

"... a memo by House Financial Services Committee Chairman Rep. Jeb Hensarling (R-Tex), which was leaked to the press yesterday... The fate of the database was first mentioned [February 9th] when Bloomberg reported on a memo by Hensarling, an outspoken critic of the CFPB. The memo outlined a new version of the Financial CHOICE Act (Creating Hope and Opportunity for Investors, Consumers and Entrepreneurs), a bill originally advanced by the House Financial Services Committee in September. The new bill would lead to the repeal of the Consumer Complaint Database. It would also eliminate the CFPB's authority to punish unfair, deceptive or abusive practices among banks and other lenders, and it would allow the President to handpick—and fire—the bureau's director at will."

Banks have paid billions in fines to resolve a variety of allegations and complaints about wrongdoing. Consumers have often been abused by banks. You may remember the massive $185 million fine for the phony accounts scandal at Wells Fargo. Or, you may remember consumers forced to use prison-release cards. Or, maybe you experienced debt collection scams. And, this blog has covered extensively much of the great work by the CFPB which has helped consumers.

Do these two pieces of legislation bother you? I sincerely hope they do. Contact your elected officials today and demand that they support the FCC privacy rules.


Facebook Doesn't Tell Users Everything it Really Knows About Them

[Editor's note: today's guest post is by reporters at ProPublica. I've posted it because, a) many consumers don't know how their personal information is bought, sold, and used by companies and social networking sites; b) the USA is a capitalist society, and the sensitive personal data that describes consumers is consumers' personal property; c) a better appreciation of "a" and "b" will hopefully encourage more consumers to be less willing to trade their personal property for convenience, and to demand better privacy protections from products, services, software, apps, and devices; and d) when lobbyists and politicians act to erode consumers' property and privacy rights, hopefully more consumers will respond and act. Facebook is not the only social networking site that trades consumers' information. This news story is reprinted with permission.]

by Julia Angwin, Terry Parris Jr. and Surya Mattu, ProPublica

Facebook has long let users see all sorts of things the site knows about them, like whether they enjoy soccer, have recently moved, or like Melania Trump.

But the tech giant gives users little indication that it buys far more sensitive data about them, including their income, the types of restaurants they frequent and even how many credit cards are in their wallets.

Since September, ProPublica has been encouraging Facebook users to share the categories of interest that the site has assigned to them. Users showed us everything from "Pretending to Text in Awkward Situations" to "Breastfeeding in Public." In total, we collected more than 52,000 unique attributes that Facebook has used to classify users.

Facebook's site says it gets information about its users "from a few different sources."

What the page doesn't say is that those sources include detailed dossiers obtained from commercial data brokers about users' offline lives. Nor does Facebook show users any of the often remarkably detailed information it gets from those brokers.

"They are not being honest," said Jeffrey Chester, executive director of the Center for Digital Democracy. "Facebook is bundling a dozen different data companies to target an individual customer, and an individual should have access to that bundle as well."

When asked this week about the lack of disclosure, Facebook responded that it doesn't tell users about the third-party data because it's widely available and was not collected by Facebook.

"Our approach to controls for third-party categories is somewhat different than our approach for Facebook-specific categories," said Steve Satterfield, a Facebook manager of privacy and public policy. "This is because the data providers we work with generally make their categories available across many different ad platforms, not just on Facebook."

Satterfield said users who don't want that information to be available to Facebook should contact the data brokers directly. He said users can visit a page in Facebook's help center, which provides links to the opt-outs for six data brokers that sell personal data to Facebook.

Limiting commercial data brokers' distribution of your personal information is no simple matter. For instance, opting out of Oracle's Datalogix, which provides about 350 types of data to Facebook according to our analysis, requires "sending a written request, along with a copy of government-issued identification" in postal mail to Oracle's chief privacy officer.

Users can ask data brokers to show them the information stored about them, but that can also be complicated. One Facebook data broker, Acxiom, requires people to send the last four digits of their Social Security number to obtain their data. And because Facebook changes its providers from time to time, members would have to visit the help center page regularly to protect their privacy.

One of us actually tried to do what Facebook suggests. While writing a book about privacy in 2013, reporter Julia Angwin tried to opt out from as many data brokers as she could. Of the 92 brokers she identified that accepted opt-outs, 65 of them required her to submit a form of identification such as a driver's license. In the end, she could not remove her data from the majority of providers.

ProPublica's experiment to gather Facebook's ad categories from readers was part of our Black Box series, which explores the power of algorithms in our lives. Facebook uses algorithms not only to determine the news and advertisements that it displays to users, but also to categorize its users in tens of thousands of micro-targetable groups.

Our crowd-sourced data showed us that Facebook's categories range from innocuous groupings of people who like southern food to sensitive categories such as "Ethnic Affinity," which categorizes people based on their affinity for African-Americans, Hispanics and other ethnic groups. Advertisers can target ads toward a group, or exclude ads from being shown to a particular group.

Last month, after ProPublica bought a Facebook ad in its housing categories that excluded African-Americans, Hispanics and Asian-Americans, the company said it would build an automated system to help it spot ads that illegally discriminate.

Facebook has been working with data brokers since 2012, when it signed a deal with Datalogix. This prompted Chester, the privacy advocate at the Center for Digital Democracy, to file a complaint with the Federal Trade Commission alleging that Facebook had violated a consent decree with the agency on privacy issues. The FTC has never publicly responded to that complaint, and Facebook subsequently signed deals with five other data brokers.

To find out exactly what type of data Facebook buys from brokers, we downloaded a list of 29,000 categories that the site provides to ad buyers. Nearly 600 of the categories were described as being provided by third-party data brokers. (Most categories were described as being generated by clicking pages or ads on Facebook.)

The categories from commercial data brokers were largely financial, such as "total liquid investible assets $1-$24,999," "People in households that have an estimated household income of between $100K and $125K," or even "Individuals that are frequent transactor at lower cost department or dollar stores."

We compared the data broker categories with the crowd-sourced list of what Facebook tells users about themselves. We found none of the data broker information among the tens of thousands of "interests" that Facebook showed users.
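The comparison described above can be sketched in a few lines. This is a hypothetical reconstruction, not ProPublica's actual code; the field names ("name", "description") and the "data provided by" description text are assumptions, not Facebook's real schema:

```python
# Hypothetical sketch of the comparison described above. The field names
# and the "data provided by" marker are assumptions, not Facebook's or
# ProPublica's actual data layout.

def broker_categories(ad_categories):
    """Names of categories whose description credits a third-party provider."""
    return {c["name"] for c in ad_categories
            if "data provided by" in c["description"].lower()}

def hidden_from_users(ad_categories, user_visible_interests):
    """Broker-sourced categories that never appear among the 'interests'
    shown to users."""
    return broker_categories(ad_categories) - set(user_visible_interests)

# Toy data illustrating the finding: the broker-sourced category is
# absent from what users see about themselves.
categories = [
    {"name": "Likes soccer",
     "description": "Generated from Facebook page likes"},
    {"name": "Household income $100K-$125K",
     "description": "Data provided by a commercial broker"},
]
user_interests = ["Likes soccer", "NPR"]
print(hidden_from_users(categories, user_interests))
# prints {'Household income $100K-$125K'}
```

The point of the set difference is that, if Facebook showed users everything, it would be empty; in ProPublica's analysis it was not.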

Our tool also allowed users to react to the categories they were placed in as being "wrong," "creepy" or "spot on." The category that received the most votes for "wrong" was "Farmville slots." The category that got the most votes for "creepy" was "Away from family." And the category that was rated most "spot on" was "NPR."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Cable, Telecom And Advertising Lobbies Ask Congress To Remove FCC Broadband Privacy Rules

The Association of National Advertisers (ANA) and 15 other cable, telecommunications, and advertising lobbies sent a letter on January 27, 2017 to key leaders in Congress urging them to repeal the broadband privacy rules the U.S. Federal Communications Commission (FCC) adopted in October 2016, which require Internet service providers (ISPs) to protect the privacy of their customers. The groups co-signing the letter with the ANA include the American Cable Association, the Competitive Carriers Association, CTIA-The Wireless Association (formerly the Cellular Telecommunications Industry Association), the Data & Marketing Association, the Internet Advertising Bureau, the U.S. Chamber of Commerce, the U.S. Telecom Association, and others.

The letter, available at the ANA site and here (Adobe PDF; 354.4k), explained the groups' reasoning:

"Unfortunately, in adopting new broadband privacy rules late last year, the Federal Communications Commission (“FCC”) took action that jeopardizes the vibrancy and success of the internet and the innovations the internet has and should continue to offer. While the FCC’s Order applies only to Internet Service Providers (“ISPs”), the onerous and unnecessary rules it adopted establish a very harmful precedent for the entire internet ecosystem. We therefore urge Congress to enact a resolution of disapproval pursuant to the Congressional Review Act (“CRA”) vitiating the Order.

"Adopted on a party-line 3-2 vote just ten days before the Presidential election, over strenuous objections by the minority and strong concerns expressed by entities throughout the internet ecosystem, the new rules impose overly prescriptive online privacy and data security requirements that will conflict with established law, policy, and practice and cause consumer confusion... the FCC Order would create confusion and interfere with the ability of consumers to receive customized services and capabilities they enjoy and be informed of new products and discount offers. Further, the Order would also result in consumers being bombarded with trivial data breach notifications."

Data breach notifications are trivial? After writing this blog for almost 10 years, I have learned they aren't. Consumers deserve to know when companies fail to protect their sensitive personal information. Most states have laws requiring breach notifications. It seems these advertising groups don't want to be held responsible or accountable.

The Hill explained the CRA and how it usually fails:

"The Congressional Review Act (CRA) has only worked precisely one time as a way for Congress to undo an executive branch regulation... The CRA was passed in 1996 as part of then-Speaker Newt Gingrich's (R-Ga.) "Contract with America." While executive branch agencies can only issue regulations pursuant to statutes passed by Congress, Congress wanted to find a way to make it easier to overturn those regulations. Previously there was a process by which, if one house of Congress voted to overturn the regulation, it was invalidated. This procedure was ruled unconstitutional by the Supreme Court in 1983.

Congress was still able to overturn an executive branch regulation by passing a law. Passing a law is, of course, subject to filibusters in the Senate. We've learned that the filibuster in recent years has made it quite difficult to pass laws. The CRA created a period of 60 "session days" (days in which Congress is in session) during which Congress could use expedited procedures to overturn a regulation."

Also on January 27, several consumer privacy advocates sent a letter (Adobe PDF) to the same Congressional representatives. The letter, signed by 20 privacy advocates including the American Civil Liberties Union, the Center for Democracy and Technology, the Center for Media Justice, Consumers Union, the National Hispanic Media Coalition, the Privacy Rights Clearinghouse, and others, urged the Congressional representatives:

"... to oppose the use of the Congressional Review Act (CRA) to adopt a Resolution of Disapproval overturning the FCC’s broadband privacy order. That order implements the mandates in Section 222 of the 1996 Telecommunications Act, which an overwhelming, bipartisan majority of Congress enacted to protect telecommunications users’ privacy. The cable, telecom, wireless, and advertising lobbies request for CRA intervention is just another industry attempt to overturn rules that empower users and give them a say in how their private information may be used.

Not satisfied with trying to appeal the rules of the agency, industry lobbyists have asked Congress to punish internet users by way of restraining the FCC, when all the agency did was implement Congress’ own directive in the 1996 Act. This irresponsible, scorched-earth tactic is as harmful as it is hypocritical. If Congress were to take the industry up on its request, a Resolution of Disapproval could exempt internet service providers (ISPs) from any and all privacy rules at the FCC... It could also preclude the FCC from addressing any of the other issues in the privacy order like requiring data breach notification and from revisiting these issues as technology continues to evolve in the future... Without these rules, ISPs could use and disclose customer information at will. The result could be extensive harm caused by breaches or misuse of data.

Broadband ISPs, by virtue of their position as gatekeepers to everything on the internet, have a largely unencumbered view into their customers’ online communications. That includes the websites they visit, the videos they watch, and the messages they send. Even when that traffic is encrypted, ISPs can gather vast troves of valuable information on their users’ habits; but researchers have shown that much of the most sensitive information remains unencrypted. The FCC’s order simply restores people’s control over their personal information and lets them choose the terms on which ISPs can use it, share it, or sell it..."

The new FCC broadband privacy rules kept consumers in control of their online privacy, featuring opt-in requirements that allow ISPs to collect consumers' sensitive personal information only after gaining customers' explicit consent.

So, advertisers have finally stated clearly how much they care about protecting consumers' privacy. They really don't. They don't want any constraints upon their ability to collect and archive consumers' (your) sensitive personal information. During the 2016 presidential campaign, candidate and now President Donald Trump promised:

"One of the keys to unlocking growth is scaling-back years of disastrous regulations unilaterally imposed by our out-of-control bureaucracy. In 2015 alone, federal agencies issued over 3,300 final rules and regulations, up from 2,400 the prior year. Every year, over-regulation costs our economy $2 trillion dollars a year and reduces household wealth by almost $15,000 dollars. Mr. Trump has proposed a moratorium on new federal regulations that are not compelled by Congress or public safety, and will ask agency and department heads to identify all needless job-killing regulations and they will be removed... A complete regulatory overhaul will level the playing field for American workers and add trillions in new wealth to our economy – keeping companies here, expanding hiring and investment, and bringing thousands of new companies to our shores."

Are FCC rules protecting your privacy "over-regulation," "onerous and unnecessary?" Are FCC privacy rules keeping consumers in control over their sensitive personal information "disastrous?" Will the Trump administration side with corporate lobbies or consumers' privacy protections? We shall quickly see.

There is a clue what the answer to that question will be. President Trump has named Ajit Pai, a Republican member of the Federal Communications Commission, as the new FCC chair replacing Tom Wheeler, the former chair and Democrat, who stepped down on Friday. This will also give the Republicans a majority on the FCC.

Pai is also an opponent of the net neutrality rules the FCC adopted, which basically say that consumers (and not ISPs) decide where they go on the Internet with their broadband connections. Republicans in Congress and lobby groups have long opposed net neutrality. In 2014, more than 100 tech firms urged the FCC to protect net neutrality. With a new President in the White House opposing regulations, some companies and lobby groups seem ready to undo these consumer protections.

What do you think?


Facebook Says it Will Stop Allowing Some Advertisers to Exclude Users by Race

[Editor's note: Today's guest post was originally published by ProPublica on November 11, 2016. It is reprinted with permission. This prior post explained the problems with Facebook's racial advertising filters.]

by Julia Angwin, ProPublica

Facing a wave of criticism for allowing advertisers to exclude anyone with an "affinity" for African-American, Asian-American or Hispanic people from seeing ads, Facebook said it would build an automated system that would let it better spot ads that discriminate illegally.

Federal law prohibits ads for housing, employment and credit that exclude people by race, gender and other factors.

Facebook said it would build an automated system to scan advertisements to determine whether they are for services in these categories. Facebook will prohibit the use of its "ethnic affinities" targeting for such ads.

Facebook said its new system should roll out within the next few months. "We are going to have to build a solution to do this. It is not going to happen overnight," said Steve Satterfield, privacy and public policy manager at Facebook.

He said that Facebook would also update its advertising policies with "stronger, more specific prohibitions" against discriminatory ads for housing, credit and employment.

In October, ProPublica purchased an ad that targeted Facebook members who were house hunting and excluded anyone with an "affinity" for African-American, Asian-American or Hispanic people. When we showed the ad to a civil rights lawyer, he said it seemed like a blatant violation of the federal Fair Housing Act.

After ProPublica published an article about its ad purchase, Facebook was deluged with criticism. Four members of Congress wrote Facebook demanding that the company stop giving advertisers the option of excluding by ethnic group.

The federal agency that enforces the nation's fair housing laws said it was "in discussions" with Facebook to address what it termed "serious concerns" about the social network's advertising practices.

And a group of Facebook users filed a class-action lawsuit against Facebook, alleging that the company's ad-targeting technology violates the Fair Housing Act and the Civil Rights Act of 1964.

Facebook's Satterfield said that today's changes are the result of "a lot of conversations with stakeholders."

Facebook said the new system would not only scan the content of ads, but could also inject pop-up notices alerting buyers when they are attempting to purchase ads that might violate the law or Facebook's ad policies.

"We're glad to see Facebook recognizing the important civil rights protections for housing, credit and employment," said Rachel Goodman, staff attorney with the racial justice program at the American Civil Liberties Union. "We hope other online advertising platforms will recognize that ads in these areas need to be treated differently."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook Lets Advertisers Exclude Users by Race

[Editor's note: Today's guest post was originally published by ProPublica on October 28, 2016. It is reprinted with permission.]

by Julia Angwin and Terry Parris Jr., ProPublica

Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers.

That's basically what Facebook is doing nowadays.

The ubiquitous social network not only allows advertisers to target users by their interests or background, it also gives advertisers the ability to exclude specific groups it calls "Ethnic Affinities." Ads that exclude people based on race, gender and other sensitive factors are prohibited by federal law in housing and employment.

Here is a screenshot of a housing ad that we purchased from Facebook's self-service advertising portal:

[Screenshot: the housing ad purchase screen in Facebook's self-service advertising portal]

The ad we purchased was targeted to Facebook members who were house hunting and excluded anyone with an "affinity" for African-American, Asian-American or Hispanic people. (Here's the ad itself.)

When we showed Facebook's racial exclusion options to John Relman, a prominent civil rights lawyer, he gasped and said, "This is horrifying. This is massively illegal. This is about as blatant a violation of the federal Fair Housing Act as one can find."

The Fair Housing Act of 1968 makes it illegal "to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin." Violators can face tens of thousands of dollars in fines.

The Civil Rights Act of 1964 also prohibits the "printing or publication of notices or advertisements indicating prohibited preference, limitation, specification or discrimination" in employment recruitment.

Facebook's business model is based on allowing advertisers to target specific groups, or, apparently, to exclude specific groups, using huge reams of personal data the company has collected about its users. Facebook's microtargeting is particularly helpful for advertisers looking to reach niche audiences, such as swing-state voters concerned about climate change. ProPublica recently offered a tool allowing users to see how Facebook is categorizing them. We found nearly 50,000 unique categories in which Facebook places its users.

Facebook says its policies prohibit advertisers from using the targeting options for discrimination, harassment, disparagement or predatory advertising practices.

"We take a strong stand against advertisers misusing our platform: Our policies prohibit using our targeting options to discriminate, and they require compliance with the law," said Steve Satterfield, privacy and public policy manager at Facebook. "We take prompt enforcement action when we determine that ads violate our policies."

Satterfield said it's important for advertisers to have the ability to both include and exclude groups as they test how their marketing performs. For instance, he said, an advertiser "might run one campaign in English that excludes the Hispanic affinity group to see how well the campaign performs against running that ad campaign in Spanish. This is a common practice in the industry."

He said Facebook began offering the "Ethnic Affinity" categories within the past two years as part of a "multicultural advertising" effort.

Satterfield added that "Ethnic Affinity" is not the same as race, which Facebook does not ask its members about. Facebook assigns members an "Ethnic Affinity" based on pages and posts they have liked or engaged with on Facebook.

When we asked why "Ethnic Affinity" was included in the "Demographics" category of its ad-targeting tool if it's not a representation of demographics, Facebook responded that it plans to move "Ethnic Affinity" to another section.

Facebook declined to answer questions about why our housing ad excluding minority groups was approved 15 minutes after we placed the order.

By comparison, consider the advertising controls that the New York Times has put in place to prevent discriminatory housing ads. After the newspaper was successfully sued under the Fair Housing Act in 1989, it agreed to review ads for potentially discriminatory content before accepting them for publication.

Steph Jespersen, the Times' director of advertising acceptability, said that the company's staff runs automated programs to make sure that ads that contain discriminatory phrases such as "whites only" and "no kids" are rejected.

The Times' automated program also highlights ads that contain potentially discriminatory code words such as "near churches" or "close to a country club." Humans then review those ads before they can be approved.
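A two-tier screen like the one described can be sketched as simple keyword matching. This is a hedged illustration, not the Times' actual system; the phrase lists reuse the examples quoted above, and the matching logic is an assumption:

```python
# Illustrative sketch of a two-tier ad screen using substring matching.
# Not the Times' actual system; phrase lists are the examples quoted
# in the article.

REJECT_PHRASES = ("whites only", "no kids")                    # auto-reject
REVIEW_PHRASES = ("near churches", "close to a country club")  # flag for humans

def screen_housing_ad(text):
    """Return 'reject', 'human_review', or 'accept' for an ad's text."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in REJECT_PHRASES):
        return "reject"
    if any(phrase in lowered for phrase in REVIEW_PHRASES):
        return "human_review"
    return "accept"

print(screen_housing_ad("Sunny 2BR, whites only"))       # prints reject
print(screen_housing_ad("Charming home near churches"))  # prints human_review
```

The design point is the split: unambiguous phrases are rejected automatically, while coded language is only flagged, so a human makes the final call.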

Jespersen said the Times also rejects housing ads that contain photographs of too many white people. The people in the ads must represent the diversity of the population of New York, and if they don't, he says he will call up the advertiser and ask them to submit an ad with a more diverse lineup of models.

But, Jespersen said, these days most advertisers know not to submit discriminatory ads: "I haven't seen an ad with 'whites only' for a long time."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


4 Website Operators Settle With New York State Attorney General For Illegal Tracking of Children

Earlier this month, the Attorney General for the State of New York (NYSAG) announced settlement agreements with the operators of several popular websites for the illegal online tracking of children, which violated the Children's Online Privacy Protection Act (COPPA). The website operators agreed to pay a total of $835,000 in fines, and to comply with and implement a comprehensive set of requirements and changes.

COPPA, passed by Congress in 1998 and updated in 2013, prohibits the unauthorized collection, use, and disclosure of children’s personal information (e.g., first name, last name, e-mail address, IP address, etc.) on websites directed to children under the age of 13, including the collection of information for tracking a child’s movements across the Internet. The 2013 update expanded the list of personal information items, and prohibits covered operators from using cookies, IP addresses, and other persistent identifiers to track users across websites for most advertising purposes, amassing profiles on individual users, and serving targeted behavioral advertisements.

The NYSAG operated a program titled "Operation Child Tracker," which analyzed the most popular children's websites for unauthorized tracking. The analysis found that four website operators included third-party tracking on their websites -- which is prohibited by COPPA -- and failed to properly evaluate third-party companies, such as advertisers, advertising networks, and marketers. The website operators and their properties included Viacom (websites associated with Nick Jr. and Nickelodeon), Mattel (Barbie, Hot Wheels, and American Girl), JumpStart (Neopets), and Hasbro (My Little Pony, Littlest Pet Shop, and Nerf).

Regular readers of this blog are familiar with the variety of technologies and mechanisms companies have used to track consumers online: web browser cookies, "zombie cookies," Flash cookies, "zombie e-tags," super cookies, "zombie databases" on mobile devices, canvas fingerprinting, and augmented reality (which tracks consumers both online and in the physical world). For example, a web browser cookie is a small text file that a website instructs the user's web browser to store on the user's computer. Every time the user visits the website, the site retrieves the cookies it previously stored on that user's computer. Some website operators share the information contained in web browser cookies with third-party companies, such as marketing affiliates, advertisers, and tracking companies. This allows web browser cookies to be used to track a user's browsing history across several websites.

All of this happens in the background, without any explicit notice in the web browser, unless the user configures the browser to warn before accepting cookies or to delete stored cookies. The other technologies listed above are simply more sophisticated and stealthier ways of accomplishing the same tracking.
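The cookie round trip described above can be sketched with Python's standard library. This is a minimal illustration, not any site's actual code; the cookie name `tracker_id` and the domain `.ads.example` are hypothetical:

```python
# Minimal sketch of the browser-cookie round trip, using only the
# Python standard library. All names and values are hypothetical.
from http.cookies import SimpleCookie

# 1. A third-party ad server's response carries a Set-Cookie header;
#    the browser stores the cookie for that domain.
response_header = 'tracker_id=abc123; Domain=.ads.example; Path=/'
jar = SimpleCookie()
jar.load(response_header)

# 2. On every later request to that domain -- from ANY site that embeds
#    the third party's content -- the browser sends the cookie back,
#    letting the tracker recognize the same user across websites.
request_header = '; '.join(f'{k}={v.value}' for k, v in jar.items())
print(request_header)  # tracker_id=abc123
```

Because the same third-party domain can be embedded on many unrelated websites, the identifier it set once is echoed back from all of them, which is the cross-site tracking mechanism at issue here.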

The announcement by the NYSAG described each website operator's activities:

"Viacom operates the Nick Jr. website, at www.nickjr.com, and the Nickelodeon website, at www.nick.com... The office of the Attorney General found a variety of improper third party tracking on the Nick Jr. and Nickelodeon websites. These included:

1. Many advertisers and agencies that placed advertisements on Nick Jr. and Nickelodeon websites introduced tracking technologies of third parties that routinely engage in the type of tracking, profiling, and targeted advertising prohibited by COPPA. Viacom considered several approaches to mitigate the risk of COPPA violations from these third parties, including removing adult advertising from a child-directed section of the Nick Jr. website and monitoring advertisements for unexpected tracking... However, Viacom did not timely take either approach and did not implement sufficient safeguards for its users.

2. Some visitors to the homepage of the Nick Jr. website were served behavioral advertising and tracked through a third party advertising platform Viacom used to serve advertisements. Although Viacom considered the homepage of the Nick Jr. website to be parent-directed, and thus not covered by COPPA, the homepage had content that appealed to children. Under COPPA, website operators must treat mixed audience pages as child-directed..."

The NYSAG also found:

"... 26 of Mattel’s websites feature content for young children, including online games, animated cartoons, and downloadable content such as posters, computer desktop wallpaper, and pages for young children to color... The office of the Attorney General found that a variety of improper third party tracking technologies were present on Mattel’s child-directed websites and sections of websites. These included:

1. Mattel deployed a tracking technology supplied by a third party data broker across its Barbie, Hot Wheels, Fisher-Price, Monster High, Ever After High, and Thomas & Friends websites. Mattel used the tracking technology for measuring website metrics, such as the number of visitors to each site, a practice permitted under COPPA. However, the tracking technology supplied by the data broker introduced many other third party tracking technologies in a process known as “piggy backing.” Many of these third parties engage in the type of tracking, profiling, and targeted advertising prohibited by COPPA.

2. A tracking technology that Mattel deployed on the e-commerce portion of the American Girl website, which is not directed to children or covered by COPPA, was inadvertently introduced onto certain child-directed webpages of the American Girl website.

3. Mattel uploaded videos to Google’s YouTube.com, a video hosting platform, and then embedded some of these videos onto the child-directed portion of several Mattel websites, including the Barbie website. When the embedded videos were played by children, it enabled Google tracking technologies, which were used to serve behavioral advertisements."

Regarding JumpStart, the NYSAG found:

"... several improper third party tracking technologies were present on the Neopets website, both for logged-in users under the age of 13 and users who were not logged-in. These included:

1. JumpStart failed to configure the advertising platform used to serve ads on the Neopets website in a manner that would comply with COPPA. As a result, users under the age of 13 were served behavioral advertising and tracked through the advertising platform.

2. JumpStart integrated a Facebook plug-in into the Neopets website... Facebook uses the tracking information for serving behavioral advertising, among other things, unless the website operator notifies Facebook with a COPPA flag that the website is subject to COPPA. JumpStart did not notify Facebook that the Neopets website was directed to children."

For Hasbro, the NYSAG found:

"... several improper third party tracking technologies were present on Hasbro’s child-directed websites and sections of websites. These included:

1. Hasbro engaged in an advertising campaign that tracked visitors to the Nerf section of Hasbro’s website in order to serve Hasbro advertisements to those same users as they visited other websites at a later time, a type of online behavioral advertising prohibited by COPPA known as “remarketing.”

2. Hasbro integrated a third-party plug-in into many of its websites, that allowed users to be tracked across websites and introduced other third parties that engaged in the type of tracking, profiling, and targeted advertising prohibited under COPPA.

It is important to note that Hasbro participated in a safe harbor program. A website operator that complies with the rules of an FTC-approved safe harbor program is deemed in compliance with COPPA. However, safe harbor programs rely on full disclosure of the operator’s practices and Hasbro failed to disclose the existence of the remarketing campaign through the Nerf website."

The terms of the settlement agreements require the website operators to:

  1. Conduct regular electronic scans for unexpected third party tracking technologies that may appear on their children’s websites. Three of the companies (Viacom, Mattel, and JumpStart) will provide regular reports to the NYSAG's office regarding the results of the scans.
  2. Adopt procedures to evaluate third-party companies before they are introduced onto their children’s websites. The evaluation should determine whether and how the third parties collect, use, and disclose -- and allow others to collect, use, and disclose -- personal information from users.
  3. Provide third parties that collect, use, or disclose users' personal information with notice sufficient to enable them to identify the websites or sections of websites that are child-directed pursuant to COPPA.
  4. Update website privacy policies with either: a) information sufficient to enable parents and others to identify the websites and portions of websites that are directed to children under COPPA, or b) a means of contacting the company so that parents and others may request such information.
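The "electronic scans" required by the first settlement term can be approximated in a few lines: extract the domains of embedded scripts, iframes, and tracking pixels from a page and flag any that are neither first-party nor on the operator's approved list. This is a rough sketch under assumptions, not the methodology the settlements specify; the allowlist, domains, and sample HTML below are all hypothetical:

```python
# Rough sketch of a scan for unexpected third-party tracking resources.
# All domains and the sample page are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyScanner(HTMLParser):
    """Collects hosts of embedded resources not owned by the first party
    and not on an explicit allowlist of approved vendors."""

    def __init__(self, first_party, allowlist):
        super().__init__()
        self.first_party = first_party
        self.allowlist = set(allowlist)
        self.unexpected = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ('script', 'img', 'iframe'):
            return
        host = urlparse(dict(attrs).get('src', '')).hostname
        if host is None:
            return  # inline script or relative URL: first-party
        is_first_party = (host == self.first_party
                          or host.endswith('.' + self.first_party))
        if not is_first_party and host not in self.allowlist:
            self.unexpected.add(host)

page = '''
<script src="https://www.example-kids.com/app.js"></script>
<script src="https://cdn.approved-metrics.net/tag.js"></script>
<img src="https://pixel.tracker-net.biz/collect?id=1">
'''
scanner = ThirdPartyScanner('example-kids.com', ['cdn.approved-metrics.net'])
scanner.feed(page)
print(sorted(scanner.unexpected))  # ['pixel.tracker-net.biz']
```

A production scan would also need to render JavaScript, since piggy-backed trackers (like those found on the Mattel sites) are typically loaded at runtime by an approved tag rather than appearing in the static HTML.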

Kudos to the NYSAG office and staff for a comprehensive analysis and enforcement effort to protect children's online privacy. This type of analysis and enforcement is critical as companies introduce more Internet-connected toys and products classified as part of the Internet of Things (IoT).