261 posts categorized "Social Networking"

Airlines Want To Extend 'Dynamic Pricing' Capabilities To Set Ticket Prices By Each Person

In the near future, what you post on social media sites (e.g., Facebook, Instagram, Pinterest) could affect the price you pay for airline tickets. How's that?

First, airlines already use what the travel industry calls "dynamic pricing" to vary prices by date, time of day, and season. We've all seen higher ticket prices during the holidays and peak travel times. The Telegraph UK reported that airlines want to extend dynamic pricing to set fares by person:

"... the advent of setting fares by the person, rather than the flight, are fast approaching. According to John McBride, director of product management for PROS, a software provider that works with airlines including Lufthansa, Emirates and Southwest, a number of operators have already introduced dynamic pricing on some ticket searches. "2018 will be a very phenomenal year in terms of traction," he told Travel Weekly..."

And, there was a preliminary industry study about how to do it:

" "The introduction of a Dynamic Pricing Engine will allow an airline to take a base published fare that has already been calculated based on journey characteristics and broad segmentation, and further adjust the fare after evaluating details about the travelers and current market conditions," explains a white paper on pricing written by the Airline Tariff Publishing Company (ATPCO), which counts British Airways, Delta and KLM among its 430 airline customers... An ATPCO working group met [in late February] to discuss dynamic pricing, but it is likely that any roll out to its customers would be incremental."

What does "incremental" mean? Experts say the first step would be to vary ticket prices in search results at the airline's site, or at an intermediary's site. There's virtually no way for a traveler to know whether the personal price they see is higher (or lower) than the prices presented to others.

With dynamic pricing per person, business travelers would likely pay more. And, an airline could automatically bundle several fees (e.g., priority boarding, luggage, meals) into the ticket prices of its loyalty program members, reducing transparency and undermining fairness. Of course, airlines would pitch this as convenience, but alert consumers know that convenience always has its price.
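To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of adjustment layer the ATPCO white paper describes: a base published fare modified by traveler details and market conditions. Every variable, factor, and weight below is invented for illustration; none of it comes from an actual airline system.

```python
# Hypothetical per-person pricing layer. All names, factors, and weights
# are invented for illustration; this is not any airline's actual logic.
from dataclasses import dataclass

@dataclass
class Traveler:
    is_business: bool         # e.g., inferred from booking patterns
    loyalty_member: bool      # frequent-flyer program membership
    price_sensitivity: float  # 0.0 (insensitive) to 1.0 (very sensitive)

def personalized_fare(base_fare: float, t: Traveler, demand_factor: float) -> float:
    """Adjust a base published fare for one traveler (illustrative only)."""
    fare = base_fare * demand_factor           # market conditions (e.g., 1.2 near holidays)
    if t.is_business:
        fare *= 1.15                           # business travelers assumed less price-sensitive
    fare *= 1.1 - 0.2 * t.price_sensitivity   # bargain hunters are shown lower offers
    if t.loyalty_member:
        fare += 45.0                           # bundled fees (bags, boarding) folded in
    return round(fare, 2)

print(personalized_fare(300.0, Traveler(True, True, 0.2), demand_factor=1.2))    # 483.84
print(personalized_fare(300.0, Traveler(False, False, 0.9), demand_factor=1.0))  # 276.0
```

The point of the sketch is the asymmetry it creates: two travelers searching the same flight see different prices, and neither has any way to know what the other was shown.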

Thankfully, some politicians in the United States are paying attention. The Shear Social Media Law & Technology blog summarized the situation very well:

"[Dynamic pricing by person] demonstrates why technology companies and the data collection industry needs greater regulation to protect the personal privacy and free speech rights of Americans. Until Silicon Valley and data brokers are properly regulated Americans will continue to be discriminated against based upon the information that technology companies are collecting about us."

Just because something can be done with technology, doesn't mean it should be done. What do you think?

I Approved This Facebook Message — But You Don’t Know That

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking sites. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Hundreds of federal political ads — including those from major players such as the Democratic National Committee and the Donald Trump 2020 campaign — are running on Facebook without adequate disclaimer language, likely violating Federal Election Commission (FEC) rules, a review by ProPublica has found.

An FEC opinion in December clarified that the requirement for political ads to say who paid for and approved them, which has long applied to print and broadcast outlets, extends to ads on Facebook. So we checked more than 300 ads that had run on the world’s largest social network since the opinion, and that election-law experts told us met the criteria for a disclaimer. Fewer than 40 had disclosures that appeared to satisfy FEC rules.

“I’m totally shocked,” said David Keating, president of the nonprofit Institute for Free Speech in Alexandria, Virginia, which usually opposes restrictions on political advertising. “There’s no excuse,” he said, looking through our database of ads.

The FEC can investigate possible violations of the law and fine people up to thousands of dollars for breaking it — fines double if the violation was “knowing and willful,” according to the regulations. Under the law, it’s up to advertisers, not Facebook, to ensure they have the right disclaimers. The FEC has not imposed penalties on any Facebook advertiser for failing to disclose.

An FEC spokeswoman declined to say whether the commission has any recent complaints about lack of disclosure on Facebook ads. Enforcement matters are confidential until they are resolved, she said.

None of the individuals or groups we contacted whose ads appeared to have inadequate disclaimers, including the Democratic National Committee and the Trump campaign, responded to requests for comment. Facebook declined to comment on ProPublica’s findings or the December opinion. In public documents, the company has urged the FEC to be “flexible” in what it allows online, and to develop a policy for all digital advertising rather than focusing on Facebook.

Insufficient disclaimers can be minor technicalities, not necessarily evidence of intent to deceive. But the pervasiveness of the lapses ProPublica found suggests a larger problem that may raise concerns about the upcoming midterm elections — that political advertising on the world’s largest social network isn’t playing by rules intended to protect the public.

Unease about political ads on Facebook and other social networking sites has intensified since internet companies acknowledged that organizations associated with the Russian government bought ads to influence U.S. voters during the 2016 election. Foreign contributions to campaigns for U.S. federal office are illegal. Online, advertisers can target ads to relatively small groups of people. Once the marketing campaign is over, the ads disappear. This makes it difficult for the public to scrutinize them.

The FEC opinion is part of a push toward more transparency in online political advertising that has come in response to these concerns. In addition to handing down the opinion in a specific case, the FEC is preparing new rules to address ads on social media more broadly. Three senators are sponsoring a bill called the Honest Ads Act, which would require internet companies to provide more information on who is buying political ads. And earlier this month, the election authority in Seattle said Facebook was violating a city law on election-ad disclosures, marking a milestone in municipal attempts to enforce such transparency.

Facebook itself has promised more transparency about political ads in the coming months, including “paid for by” disclosures. Since late October it has been conducting tests in Canada that publish ads on an advertiser’s Facebook page, where people can see them even without being part of the advertiser’s target audience. Those ads are only up while the ad campaign is running, but Facebook says it will create a searchable archive for federal election advertising in the U.S. starting this summer.

ProPublica found the ads using a tool called the Political Ad Collector, which allows Facebook users to automatically send us the political ads that were displayed on their news feeds. Because they reflect what users of the tool are seeing, the ads in our database aren’t a representative sample.

The disclaimers required by the FEC are familiar to anyone who has seen a print or television political ad — think of a candidate saying, “I’m ____, and I approved this message,” at the end of a TV commercial, or a “paid for by” box at the bottom of a newspaper advertisement. They’re intended to make sure the public knows who is paying to support a candidate, and to prevent people from falsely claiming to speak on a candidate’s behalf.

The system does have limitations, reflecting concerns that overuse of disclaimers could inhibit free speech. For starters, the rules apply only to certain types of political ads. Political committees and candidates have to include disclaimers, as do people seeking donations or conducting “express advocacy.” To count as express advocacy, an ad typically must mention a candidate and use certain words clearly campaigning for or against a candidate — such as “vote for,” “reject” or “re-elect.” And the regulations only apply to federal elections, not state and local ones.

The rules also don’t address so-called “issue” ads that advocate a policy stance. These ads may include a candidate’s name without a disclaimer, as long as they aren’t funded by a political committee or candidate and don’t use express-advocacy language. Many of the political ads purchased by Russian groups in 2016 attempted to influence public opinion without mentioning candidates at all — and would not require disclosure even today.

Enforcement of the law often relies on political opponents or a member of the public complaining to the FEC. If only supporters see an ad, as might be the case online, a complaint may never come.

The disclaimer law was last amended in 2002, but online advertising has changed so rapidly that several experts said the FEC has had trouble keeping up. In 2002, the commission found that paid text message ads were exempt from disclosure under the “small-items exception” originally intended for buttons, pins and the like. What counts as small depends on the situation and is up to the FEC.

In 2010, the FEC considered ads on Google that had no graphics or photos and were limited to 95 characters of text. Google proposed that disclaimers not be part of the ads themselves but be included on the web pages that users would go to after clicking on the ads; the FEC agreed.

In 2011, Facebook asked the FEC to allow political ads on the social network to run without disclosures. At the time, Facebook limited all ads on its platform to small, “thumbnail” photos and brief text of only 100 or 160 characters, depending on the type of ad. In that case, the six-person FEC couldn’t muster the four votes needed to issue an opinion, with three commissioners saying only limited disclosure was required and three saying the ads needed no disclosure at all, because it would be “impracticable” for political ads on Facebook to contain more text than other ads. The result was that political ads on Facebook ran without the disclaimers seen on other types of election advertising.

Since then, though, ads on Facebook have expanded. They can now include much more text, as well as graphics or photos that take up a large part of the news feed’s width. Video ads can run for many minutes, giving advertisers plenty of time to show the disclaimer as text or play it in a voiceover.

Last October, a group called Take Back Action Fund decided to test whether these Facebook ads should still be exempt from the rules.

“For years now, people have said, ‘Oh, don’t worry about the rules, because the FEC doesn’t enforce anything on Facebook,’” said John Pudner, president of Take Back Action Fund, which advocates for campaign finance reform. Many political consultants “didn’t think you ever needed a disclaimer on a Facebook ad,” said Pudner, a longtime campaign consultant to conservative candidates.

Take Back Action Fund came up with a plan: Ask the FEC whether it should include disclosures on ads that the group thought clearly needed them.

The group told the FEC it planned to buy “express advocacy” ads on Facebook that included large images or videos on the news feed. In its filing, Take Back Action Fund provided some sample text it said it was thinking of using: “While [Candidate Name] accuses the Russians of helping President Trump get elected, [s/he] refuses to call out [his/her] own Democrat Party for paying to create fake documents that slandered Trump during his presidential campaign. [Name] is unfit to serve.”

In a comment filed with the FEC in the matter, the Internet Association trade group, of which Facebook is a member, asked the commission to follow the precedent of the 2010 Google case and allow a “one-click” disclosure that didn’t need to be on the ad itself but could be on the web page the ad led to.

The FEC didn’t follow that recommendation. It said unanimously that the ads needed full disclaimers.

The opinion, handed down Dec. 15, was narrow, saying that if any of the “facts or assumptions” presented in another case were different in a “material” way, the opinion could not be relied upon. But several legal experts who spoke with ProPublica said the opinion means anyone who would have to include disclaimers in traditional advertising should now do so on large Facebook image ads or video ads — including candidates, political committees and anyone using express advocacy.

“The functionality and capabilities of today’s Facebook Video and Image ads can accommodate the information without the same constrictions imposed by the character-limited ads that Facebook presented to the Commission in 2011,” three commissioners wrote in a concurring statement. A fourth commissioner went further, saying the commission’s earlier decision in the text messaging case should now be completely superseded. The remaining two commissioners didn’t comment beyond the published opinion.

“We are overjoyed at the decision and hope it will have the effect of stopping anonymous attacks,” said Pudner, of Take Back Action Fund. “We think that this is a matter of the voter’s right to know.” He added that the group doesn’t intend to purchase the ads.

This year, the FEC plans to tackle concerns about digital political advertising more generally. Facebook favors such an industry-wide approach, partly for competitive reasons, according to a comment it submitted to the commission.

“Facebook strongly supports the Commission providing further guidance to committees and other advertisers regarding their disclaimer obligations when running election-related Internet communications on any digital platform,” Facebook General Counsel Colin Stretch wrote to the FEC.

Facebook was concerned that its own transparency efforts “will apply only to advertising on Facebook’s platform, which could have the unintended consequence of pushing purchasers who wish to avoid disclosure to use other, less transparent platforms,” Stretch wrote.

He urged the FEC to adopt a “flexible” approach, on the grounds that there are many different types of online ads. “For example, allowing ads to include an icon or other obvious indicator that more information about an ad is available via quick navigation (like a single click) would give clear guidance.”

To test whether political advertisers were following the FEC guidelines, we searched for large U.S. political ads that our tool gathered between Dec. 20 — five days after the opinion — and Feb. 1. We excluded the small ads that run on the right column of Facebook’s website. To find ads that were most likely to fall under the purview of the FEC regulations, we searched for terms like “committee,” “donate” and “chip in.” We also searched for ads that used express advocacy language such as, “for Congress,” “vote against,” “elect” or “defeat.” We left out ads with state and local terms such as “governor” or “mayor,” as well as ads from groups such as the White House Historical Association or National Audubon Society that were obviously not election-oriented. Then we examined the ads, including the text and photos or graphics.
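For readers curious how such a keyword screen works, below is a minimal sketch assuming the term lists quoted above. The matching logic and data structures are illustrative assumptions, not ProPublica's actual code.

```python
# Simplified version of the screening described in the article. The term
# lists are taken from the text; everything else is an assumption.
FEC_TERMS = ["committee", "donate", "chip in"]
EXPRESS_ADVOCACY = ["for congress", "vote against", "elect", "defeat"]
EXCLUDE_TERMS = ["governor", "mayor"]  # state and local races are out of scope

def likely_needs_disclaimer(ad_text: str) -> bool:
    """Return True if an ad's text matches the FEC-relevant criteria."""
    text = ad_text.lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return False
    return any(term in text for term in FEC_TERMS + EXPRESS_ADVOCACY)

ads = [
    "Chip in $5 to stop the lies.",
    "Re-elect Jane Doe for Governor!",
    "Vote against the incumbent. Donate today.",
]
print([ad for ad in ads if likely_needs_disclaimer(ad)])
# Flags the first and third ads; the governor ad is excluded as non-federal.
```

Note one subtlety a screen like this can't handle on its own: "elect" is a substring of "re-elect" and "elected," so naive matching over-flags, which is why, as described above, the collected ads were then examined individually.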

Of nearly 70 entities that ran ads with a large photo or graphic in addition to text, only two used all of the required disclaimer language. About 20 correctly indicated in some fashion the name of the committee associated with the ad but omitted other language, such as whether the ad was endorsed by a candidate. The rest had more significant shortcomings. Many of those that didn’t include disclosures were for relatively inexperienced candidates for Congress, but plenty of seasoned lawmakers and major groups failed to use the proper language as well.

For example, one ad said, “It’s time for Donald Trump, his family, his campaign, and all of his cronies to come clean about their collusion with Russia.” A photo of Donald Trump appeared over a black and red map of Russia, overlaid by the text, “Stop the Lies.” The ad urged people to “Demand Answers Today” and “Sign Up.”

At the top, the ad identified the Democratic Party as the sponsor, and linked to the party’s Facebook page. But, under FEC rules, it should have named the funder, the Democratic National Committee, and given the committee’s address or website. It should also have said whether the ad was endorsed by any candidate. It didn’t. The only nod to the national committee was a link to my.democrats.org, which is paid for by the DNC, at the bottom of the ad. As on all Facebook ads, the word “Sponsored” was included at the top.

Advertisers seemed more likely to put the proper disclaimers on video ads, especially when those ads appeared to have been created for television, where disclaimers have been mandatory for years. Videos that didn’t look made for TV were less likely to include a disclaimer.

One ad that said it was from Donald J. Trump consisted of 20 seconds of video with an American flag background and stirring music. The words “Donate Now! And Enter for a Chance To Win Dinner With Trump!” materialized on the screen with dramatic thuds and crashes. The ad linked to Trump’s Facebook page, and a “Donate” button at the bottom of the ad linked to a website that identified the president’s re-election committee, Donald J. Trump for President, Inc., as its funder. It wasn’t clear on the ad whether Trump himself or his committee paid for it, which should have been specified under FEC rules.

The large majority of advertisements we collected — both those that used disclosures and those that didn’t — were for liberal groups and politicians, possibly reflecting the allegiances of the ProPublica readers who installed our ad-collection tool. There were only four Republican advertisers among the ads we analyzed.

It’s not clear why advertisers aren’t following the FEC regulations. Keating, of the Institute for Free Speech, suggested that advertisers might think the word “Sponsored” and a link to their Facebook page are enough and that reasonable people would know they had paid for the ad.

Others said social media marketers may simply be slow in adjusting to the FEC opinion.

“It’s entirely possible that because disclaimers haven’t been included for years now, candidates and committees just aren’t used to putting them on there,” said Brendan Fischer, director of the Federal and FEC Reform Program at the Campaign Legal Center, the group that provided legal services to Take Back Action Fund. “But they should be on notice,” he added.

There were only two advertisers we saw that included the full, clear disclosures required by the FEC on their large image ads. One was Amy Klobuchar, a Democratic senator from Minnesota who is a co-sponsor of the Honest Ads Act. The other was John Moser, an IT security professional and Democratic primary candidate in Maryland’s 7th Congressional District who received $190 in contributions last year, according to his FEC filings.

Reached by Facebook Messenger, Moser said he is running because he has a plan for ending poverty in the U.S. by restructuring Social Security into a “universal dividend” that gives everyone over age 18 a portion of the country’s per capita income. He complained that Facebook doesn’t make it easy for political advertisers to include the required disclosures. “You have to wedge it in there somewhere,” said Moser, who faces an uphill battle against longtime U.S. Rep. Elijah Cummings. “They need to add specific support for that, honestly.”

Asked why he went to the trouble to put the words on his ad, Moser’s answer was simple: “I included a disclosure because you're supposed to.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

Unilever To Social Networking Sites: Drain The Online Swamp Or Lose Business

Unilever has placed tech companies and social networking sites on notice... chiefly Facebook and Google. Adweek reported:

"Unilever CMO Keith Weed put the advertising community on notice Monday during a keynote speech at the Interactive Advertising Bureau’s Annual Leadership Meeting in Palm Desert, Calif. Weed called for tech platforms—namely Facebook and YouTube—to step up their efforts in combating divisive content, hate speech and fake news. “I don’t think for a second where the internet right now is how the platforms dreamt it would be,” Weed told Adweek in an interview at the event."

After promising to improve the transparency of advertising on its platform, Facebook has not rolled out its program smoothly. Unilever spends about $9 billion annually on advertising across more than 140 brands globally, spanning several categories including food and drink (e.g., Ben & Jerry's, Breyers, Country Crock, Hellmann's, Mazola, Knorr, Lipton, Promise), home care, and personal care products (e.g., Axe, Caress, Degree, Dove, Sunsilk, TRESemme, Vaseline). Adweek also reported:

"Much like Procter & Gamble CMO Marc Pritchard—who spoke at the IAB’s 2017 event and outlined a multipronged, yearlong plan—Weed is looking to pressure tech companies to increase their resources on cleaning up the platforms..."

BBC News reported:

"Unilever has pledged to: a) Not invest in platforms that do not protect children or create division in society; b) Only invest in platforms that make a positive contribution to society; c) Tackle gender stereotypes in advertising; and d) Only partner with companies creating a responsible digital infrastructure... At the World Economic Forum in Davos last month Prime Minister Theresa May called on investors to put pressure on tech firms to tackle the problem much more quickly. In December, the European Commission warned the likes of Facebook, Google, YouTube, Twitter and other firms that it was considering legislation if self-regulation continued to fail."

That's great. It'll be interesting to see which other corporate marketers, if any, make pledges similar to Unilever's. Susan Wojcicki, the CEO of Google's YouTube, issued a brief response. MediaPost reported:

"We want to do the right set of things to build [Unilever’s] trust. They are building brands on YouTube, and we want to be sure that our brand is the right place to build their brand."She added that "based on the feedback we had from them," YouTube changed its rules for what channels could be monetized, and began to have humans review all videos uploaded to Google Preferred..."

In December 2017, YouTube pledged a staff of 10,000 to root out divisive video content in 2018. We'll see if tech companies meet their promises. Consumers don't want to wade through social sites filled with divisive content, hate speech, and fake news.

Facebook’s Experiment in Ad Transparency Is Like Playing Hide And Seek

[Editor's note: today's guest post, by the reporters at ProPublica, explores a new global program Facebook introduced in Canada. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Shortly before a Toronto City Council vote in December on whether to tighten regulation of short-term rental companies, an entity called Airbnb Citizen ran an ad on the Facebook news feeds of a selected audience, including Toronto residents over the age of 26 who listen to Canadian public radio. The ad featured a photo of a laughing couple from downtown Toronto, with the caption, “Airbnb hosts from the many wards of Toronto raise their voices in support of home sharing. Will you?”

Placed by an interested party to influence a political debate, this is exactly the sort of ad on Facebook that has attracted intense scrutiny. Facebook has acknowledged that a group with ties to the Russian government placed more than 3,000 such ads to influence voters during the 2016 U.S. presidential campaign.

Facebook has also said it plans to avoid a repeat of the Russia fiasco by improving transparency. An approach it’s rolling out in Canada now, and plans to expand to other countries this summer, enables Facebook users outside an advertiser’s targeted audience to see ads. The hope is that enhanced scrutiny will keep advertisers honest and make it easier to detect foreign interference in politics. So we used a remote connection, called a virtual private network, to log into Facebook from Canada and see how this experiment is working.

The answer: It’s an improvement, but nowhere near the openness sought by critics who say online political advertising is a Wild West compared with the tightly regulated worlds of print and broadcast.

The new strategy — which Facebook announced in October, just days before a U.S. Senate hearing on the Russian online manipulation efforts — requires every advertiser to have a Facebook page. Whenever the advertiser is running an ad, the post is automatically placed in a new “Ads” section of the Facebook page, where any users in Canada can view it even if they aren’t part of the intended audience.

Facebook has said that the Canada experiment, which has been running since late October, is the first step toward a more robust setup that will let users know which group or company placed an ad and what other ads it’s running. “Transparency helps everyone, especially political watchdog groups and reporters, keep advertisers accountable for who they say they are and what they say to different groups,” Rob Goldman, Facebook’s vice president of ads, wrote before the launch.

While the new approach makes ads more accessible, they’re only available temporarily, can be hard to find, and can still mislead users about the advertiser’s identity, according to ProPublica’s review. The Airbnb Citizen ad — which we discovered via a ProPublica tool called the Political Ad Collector — is a case in point. Airbnb Citizen professed on its Facebook page to be a “community of hosts, guests and other believers in the power of home sharing to help tackle economic, environmental and social challenges around the world.” Its Facebook page didn’t mention that it is actually a marketing and public policy arm of Airbnb, a for-profit company.

The ad was part of an effort by the company to drum up support as it fought rental restrictions in Toronto. “These ads were one of the many ways that we engaged in the process before the vote,” Airbnb said. However, anyone who looked on Airbnb’s own Facebook page wouldn’t have found it.

Airbnb told ProPublica that it is clear about its connection to Airbnb Citizen. Airbnb’s webpage links to Airbnb Citizen’s webpage, and Airbnb Citizen’s webpage is copyrighted by Airbnb and uses part of the Airbnb logo. Airbnb said Airbnb Citizen provides information on local home-sharing rules to people who rent out their homes through Airbnb. “Airbnb has always been transparent about our advertising and public engagement efforts,” the statement said.

Political parties in Canada are already benefiting from the test to investigate ads from rival groups, said Nader Mohamed, digital director of Canada’s New Democratic Party, which has the third largest representation in Canada’s Parliament. “You’re going to be more careful with what you put out now, because you could get called on it at any time,” he said. Mohamed said he still expects heavy spending on digital advertising in upcoming campaigns.

After launching the test, Facebook demonstrated its new process to Elections Canada, the independent agency responsible for conducting federal elections there. Elections Canada recommended adding an archive function, so that ads no longer running could still be viewed, said Melanie Wise, the agency’s assistant director for media relations and issues management. The initiative is “helpful” but should go further, Wise said.

Some experts were more critical. Facebook’s new test is “useless,” said Ben Scott, a senior advisor at the think tank New America and a fellow at the Brookfield Institute for Innovation + Entrepreneurship in Toronto who specializes in technology policy. “If an advertiser is inclined to do something unethical, this level of disclosure is not going to stop them. You would have to have an army of people checking pages constantly.”

More effective ways of policing ads, several experts said, might involve making more information about advertisers and their targeting strategies readily available to users from links on ads and in permanent archives. But such tactics could alienate advertisers reluctant to share information with competitors, cutting into Facebook’s revenue. Instead, in Canada, Facebook automatically puts ads up on the advertiser’s Facebook page, and doesn’t indicate the target audience there.

Facebook’s test represents the least the company can do and still avoid stricter regulation on political ads, particularly in the U.S., said Mark Surman, a Toronto resident and executive director of Mozilla, a nonprofit Internet advocacy group that makes the Firefox web browser. “There are lots of people in the company who are trying to do good work. But it’s obvious if you’re Facebook that you’re trying not to get into a long conversation with Congress,” Surman said.

Facebook said it’s listening to its critics. “We’re talking to advertisers, industry folks and watchdog groups and are taking this kind of feedback seriously,” Rob Leathern, Facebook director of product management for ads, said in an email. “We look forward to continue working with lawmakers on the right solution, but we also aren’t waiting for legislation to start getting solutions in place,” he added. The company declined to provide data on how many people in Canada were using the test tools.

Facebook is not the only internet company facing questions about transparency in advertising. Twitter also pledged in October before the Senate hearing that “in the coming weeks” it would build a platform that would “offer everyone visibility into who is advertising on Twitter, details behind those ads, and tools to share your feedback.” So far, nothing has been launched.

Facebook has more than 23 million monthly users in Canada, according to the company. That’s more than 60 percent of Canada’s population but only about 1 percent of Facebook’s user base. The company has said it is launching its new ad-transparency plan in Canada because it already has a program there called the Canadian Election Integrity Initiative. That initiative was in response to a Canadian federal government report, “Cyber Threats to Canada’s Democratic Process,” which warned that “multiple hacktivist groups will very likely deploy cyber capabilities in an attempt to influence the democratic process during the 2019 federal election.” The election integrity plan promotes news literacy and offers a guide for politicians and political parties to avoid getting hacked.

Compared to the U.S., Canada’s laws allow for much stricter government regulation of political advertising, said Michael Pal, a law professor at the University of Ottawa. He said Facebook’s transparency initiative was a good first step but that he saw the extension of strong campaign rules into internet advertising as inevitable in Canada. “This is the sort of question that, in Canada, is going to be handled by regulation,” Pal said.

Several Canadian technology policy experts who spoke with ProPublica said Facebook’s new system was too inconvenient for the average user. There’s no central place where people can search the millions of ads on Facebook to see what ads are running about a certain subject, so unless users are part of the target audience, they wouldn’t necessarily know that a group is even running an ad. If users somehow hear about an ad or simply want to check whether a company or group is running one, they must first navigate to the group’s Facebook page and then click a small tab on the side labeled “Ads” that runs alongside other tabs such as “Videos” and “Community.” Once the user clicks the “Ads” tab, a page opens showing every ad that the page owner is running at that time, one after another.

The group’s Facebook page isn’t always linked from the text of the ad. If it isn’t, users can still find the Facebook page by navigating to the “Why am I seeing this?” link in a drop-down menu at the top right of each ad in their news feed.

As soon as a marketing campaign is over, an ad can no longer be found on the “Ads” page at all. When ProPublica checked the Airbnb Citizen Facebook page a week after collecting the ad, it was no longer there.

Because the “Ads” page also doesn’t disclose the demographics of the advertiser’s target audience, people can only see that data on ads that were aimed at them and were on their own Facebook news feed. Without this information, people outside an ad’s selected audience can’t see to whom companies or politicians are tailoring their messages. ProPublica reported last year that dozens of major companies directed recruitment ads on Facebook only to younger people — information that would likely interest older workers, but would still be concealed from them under the new policy. One recent ad by Prime Minister Justin Trudeau was directed at “people who may be similar to” his supporters, according to the Political Ad Collector data. Under the new system, people who don’t support Trudeau could see the ad on his Facebook page, but wouldn’t know why it was excluded from their news feeds.

Facebook has promised new measures to make political ads more accessible. When it expands the initiative to the U.S., it will start building a searchable electronic archive of ads related to U.S. federal elections. This archive will include details on the amount of money spent and demographic information about the people the ads reached. Facebook will initially limit its definition of political ads to those that “refer to or discuss a political figure” in a federal election, the company said.

The company hasn’t said what, if any, archive will be created for ads for state and local contests, or for political ads in other countries. It has said it will eventually require political advertisers in other countries, and in state elections in the U.S., to provide more documentation, but it’s not clear when that will happen.

Ads that aren’t political will be available under the same system being tested in Canada now.

Even an archive of the sort Facebook envisions wouldn’t solve the problems of misleading advertising on Facebook, Surman said. “It would be interesting to journalists and researchers trying to track this issue. But it won’t help users make informed choices about what ads they see,” he said. That’s because users need more information alongside the ads they are seeing on their news feeds, not in a separate location, he said.

The Airbnb Citizen ad wasn’t the only tactic that Airbnb adopted in an apparent attempt to sway the Toronto City Council. It also packed the council galleries with supporters on the morning of the vote, according to The Globe and Mail. Still, its efforts appear to have been unsuccessful.

On Dec. 6, two days after a reader sent us the ad, the City Council voted to keep people from renting a space that wasn’t their primary residence and stop homeowners from listing units such as basement apartments.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

Health Experts To Facebook: Turn Off Messenger Kids

In December 2017, Facebook launched its Messenger Kids service for children ages six to 13. The service includes a free video calling and messaging app where children can connect only with parent-approved contacts. The ad-free service includes masks, frames, stickers, and GIFs so that, in Facebook's words, "kids can create fun videos and decorate photos to share moments with loved ones."

Pediatricians and health experts are very concerned. Earlier today, dozens of health professionals sent a letter to Facebook (Adobe PDF) urging the social networking giant to terminate Messenger Kids. The letter stated in part:

"Given Facebook’s enormous reach and marketing prowess, Messenger Kids will likely be the first social media platform widely used by elementary school children. But a growing body of research demonstrates that excessive use of digital devices and social media is harmful to children and teens, making it very likely this new app will undermine children’s healthy development.

Younger children are simply not ready to have social media accounts. They are not old enough to navigate the complexities of online relationships, which often lead to misunderstandings and conflicts even among more mature users. They also do not have a fully developed understanding of privacy, including what’s appropriate to share with others and who has access to their conversations, pictures, and videos.

At a time when there is mounting concern about how social media use affects adolescents’ well being, it is particularly irresponsible to encourage children as young as preschoolers to start using a Facebook product. Social media use by teens is linked to significantly higher rates of depression, and adolescents who spend an hour a day chatting on social networks report less satisfaction with nearly every aspect of their lives. Eighth graders who use social media for 6 - 9 hours per week are 47% more likely to report they are unhappy than their peers who use social media less often. A study of girls between the ages of 10 and 12 found the more they used social networking sites like Facebook, the more likely they were to idealize thinness, have concerns about their bodies, and to have dieted. Teen social media use is also linked to unhealthy sleep habits. Messenger Kids is likely to increase the amount of time pre-school and elementary age kids spend with digital devices. Already, adolescents report difficulty moderating their own social media use: 78% check their phones at least hourly, and 50% say they feel addicted to their phones. Almost half of parents say that regulating their child’s screen time is a constant battle. Messenger Kids will exacerbate this problem... Encouraging kids to move their friendships online will interfere with and displace the face-to-face interactions and play that are crucial for building healthy developmental skills, including the ability to read human emotion, delay gratification, and engage with the physical world..."

The letter contains footnotes to citations with supporting research about the above health concerns. Reportedly, Facebook consulted with the National PTA and several academics before introducing the app. Messenger Kids is a separate service, so children using it can't be found using Facebook's search mechanism.

The letter from health professionals to Facebook also addressed safety concerns:

"Facebook claims that Messenger Kids will provide a safe alternative for the children who have lied their way onto social media platforms designed for teens and adults. But the 11- and 12-year-olds who currently use Snapchat, Instagram, or Facebook are unlikely to switch to an app that is clearly designed for younger children. Messenger Kids is not responding to a need – it is creating one. It appeals primarily to children who otherwise would not have their own social media accounts. It is disingenuous to use Facebook’s failure to keep underage users off their platforms as a rationale for targeting younger children with a new product."

Earlier this month, Facebook's CEO acknowledged problems and promised to do better. We shall see if Facebook's management listens to the documented concerns of pediatricians and health professionals.

What are your opinions about children ages 6 to 13 using social media? About Messenger Kids? Should Facebook terminate Messenger Kids?


Google Photos: Still Blind After All These Years

Earlier today, Wired reported:

"In 2015, a black software developer embarrassed Google by tweeting that the company’s Photos service had labeled photos of him with a black friend as "gorillas." Google declared itself "appalled and genuinely sorry." An engineer who became the public face of the clean-up operation said the label gorilla would no longer be applied to groups of images, and that Google was "working on longer-term fixes."

More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology... WIRED tested Google Photos using a collection of 40,000 images well-stocked with animals. It performed impressively at finding many creatures, including pandas and poodles. But the service reported "no results" for the search terms "gorilla," "chimp," "chimpanzee," and "monkey."

This is the best image-recognition fix Google can manage, while it also wants consumers to trust the software in its driverless vehicles? Geez. #fubar

Facebook CEO Admits His Social Service Has Problems, And Promises To Do Better In 2018

Mark Zuckerberg, the CEO of Facebook, recently admitted that his social networking service has problems. And, he promised to do better in 2018. The article is important since it highlights the issues concerning Mr. Zuckerberg. The Independent UK reported:

"Each year, the Facebook boss takes on a challenge to complete over the year. For 2018, he has promised to try and fix his company... He said that he had made the decision to concentrate on his own company this year because the world was so divided and he thinks he will "learn more by focusing intensely on these issues..." "

Huh? What else was he focused on instead? You'd think that he'd be focused 24/7/365 on a service with 23,265 employees and 2 billion monthly users worldwide.

The report by the Independent UK also described Mr. Zuckerberg's concerns, which have implications for everyone:

"... Facebook has been blamed for helping spread hatred and division in the wake of the [2016 U.S.] election, as well as potentially helping with the spread of fake news that allowed it to tip in Donald Trump's favour. Even the site itself has admitted that it can be upsetting and disruptive for those who use it, in a press release that said using the site might be bad for you... He pointed to the fact that the rise of tech companies like Facebook and their increasing power over the internet meant that the internet was becoming centralized in a few powerful hands. He pointed to other technologies like crypto-currency as challenges to that, but said that overall people had "lost faith" in the power of the internet to decentralize things.

A number of complaints have pointed at Facebook's unprecedented power over the way the internet works as a danger. Facebook's ability to control much of the news people read has been blamed for the spread of fake reporting, for instance, and projects like Facebook's Free Basics tools have been blamed for undermining net neutrality. But many of those same projects have been attempts by Facebook to grow its user base... He said he would look at using new technologies – encryption as well as cryptocurrency – to help improve Facebook and the internet by allowing it to stop being controlled by just a few people..."

Regular readers of this blog are aware of the problems, many of which were discussed in prior posts.

Will Mr. Zuckerberg and his senior management team fix these problems? Can they? Some of the ad-targeting mechanisms (that create abuses) have been around for years. Given its history, the cynic in me thinks that Facebook can only get better. Will Facebook do better in 2018? Tell us what you think.

Dozens of Companies Are Using Facebook to Exclude Older Workers From Job Ads

[Editor's note: nearly everyone looks for a new job at some point in their life. Today's guest post, by the reporters at ProPublica, explores an advertising practice by recruiters using social networking sites. It is reprinted with permission.]

By Julia Angwin and Ariana Tobin of ProPublica, with Noam Scheiber, of The New York Times

A few weeks ago, Verizon placed an ad on Facebook to recruit applicants for a unit focused on financial planning and analysis. The ad showed a smiling, millennial-aged woman seated at a computer and promised that new hires could look forward to a rewarding career in which they would be "more than just a number."

Some relevant numbers were not immediately evident. The promotion was set to run on the Facebook feeds of users 25 to 36 years old who lived in the nation’s capital, or had recently visited there, and had demonstrated an interest in finance. For a vast majority of the hundreds of millions of people who check Facebook every day, the ad did not exist.

Verizon is among dozens of the nation's leading employers — including Amazon, Goldman Sachs, Target and Facebook itself — that placed recruitment ads limited to particular age groups, an investigation by ProPublica and The New York Times has found.

The ability of advertisers to deliver their message to the precise audience most likely to respond is the cornerstone of Facebook’s business model. But using the system to expose job opportunities only to certain age groups has raised concerns about fairness to older workers.

Several experts questioned whether the practice is in keeping with the federal Age Discrimination in Employment Act of 1967, which prohibits bias against people 40 or older in hiring or employment. Many jurisdictions make it a crime to “aid” or “abet” age discrimination, a provision that could apply to companies like Facebook that distribute job ads.

"It’s blatantly unlawful," said Debra Katz, a Washington employment lawyer who represents victims of discrimination.

Facebook defended the practice. "Used responsibly, age-based targeting for employment purposes is an accepted industry practice and for good reason: it helps employers recruit and people of all ages find work," said Rob Goldman, a Facebook vice president.

The revelations come at a time when the unregulated power of the tech companies is under increased scrutiny, and Congress is weighing whether to limit the immunity that it granted to tech companies in 1996 for third-party content on their platforms.

Facebook has argued in court filings that the law, the Communications Decency Act, makes it immune from liability for discriminatory ads.

Although Facebook is a relatively new entrant into the recruiting arena, it is rapidly gaining popularity with employers. Earlier this year, the social network launched a section of its site devoted to job ads. Facebook allows advertisers to select their audience, and then Facebook finds the chosen users with the extensive data it collects about its members.

The use of age targets emerged in a review of data originally compiled by ProPublica readers for a project about political ad placement on Facebook. Many of the ads include a disclosure by Facebook about why the user is seeing the ad, which can be anything from their age to their affinity for folk music.

The precision of Facebook’s ad delivery has helped it dominate an industry once in the hands of print and broadcast outlets. The system, called microtargeting, allows advertisers to reach essentially whomever they prefer, including the people their analysis suggests are the most plausible hires or consumers, lowering the costs and vastly increasing efficiency.

Targeted Facebook ads were an important tool in Russia’s efforts to influence the 2016 election. The social media giant has acknowledged that 126 million people saw Russia-linked content, some of which was aimed at particular demographic groups and regions. Facebook has also come under criticism for the disclosure that it accepted ads aimed at "Jew-haters" as well as housing ads that discriminated by race, gender, disability and other factors.

Other tech companies also offer employers opportunities to discriminate by age. ProPublica bought job ads on Google and LinkedIn that excluded audiences older than 40 — and the ads were instantly approved. Google said it does not prevent advertisers from displaying ads based on the user’s age. After being contacted by ProPublica, LinkedIn changed its system to prevent such targeting in employment ads.

The practice has begun to attract legal challenges. On Wednesday, a class-action complaint alleging age discrimination was filed in federal court in San Francisco on behalf of the Communications Workers of America and its members — as well as all Facebook users 40 or older who may have been denied the chance to learn about job openings. The plaintiffs’ lawyers said the complaint was based on ads for dozens of companies that they had discovered on Facebook.

The database of Facebook ads collected by ProPublica shows how often and precisely employers recruit by age. In a search for “part-time package handlers,” United Parcel Service ran an ad aimed at people 18 to 24. State Farm pitched its hiring promotion to those 19 to 35.

Some companies, including Target, State Farm and UPS, defended their targeting as a part of a broader recruitment strategy that reached candidates of all ages. The group of companies making this case included Facebook itself, which ran career ads on its own platform, many aimed at people 25 to 60. "We completely reject the allegation that these advertisements are discriminatory," said Goldman of Facebook.

After being contacted by ProPublica and the Times, other employers, including Amazon, Northwestern Mutual and the New York City Department of Education, said they had changed or were changing their recruiting strategies.

"We recently audited our recruiting ads on Facebook and discovered some had targeting that was inconsistent with our approach of searching for any candidate over the age of 18," said Nina Lindsey, a spokeswoman for Amazon, which targeted some ads for workers at its distribution centers between the ages of 18 and 50. "We have corrected those ads."

Verizon did not respond to requests for comment.

Several companies argued that targeted recruiting on Facebook was comparable to advertising opportunities in publications like the AARP magazine or Teen Vogue, which are aimed at particular age groups. But this obscures an important distinction. Anyone can buy Teen Vogue and see an ad. Online, however, people outside the targeted age groups can be excluded in ways they will never learn about.

"What happens with Facebook is you don’t know what you don’t know," said David Lopez, a former general counsel for the Equal Employment Opportunity Commission who is one of the lawyers at the firm Outten & Golden bringing the age-discrimination case on behalf of the communication workers union.

‘They Know I’m Dead’

Age discrimination on digital platforms is something that many workers suspect is happening to them, but that is often difficult to prove.

Mark Edelstein, a fitfully employed social-media marketing strategist who is 58 and legally blind, doesn’t pretend to know what he doesn’t know, but he has his suspicions.

Edelstein, who lives in St. Louis, says he never had serious trouble finding a job until he turned 50. “Once you reach your 50s, you may as well be dead,” he said. "I’ve gone into interviews, with my head of gray hair and my receding hairline, and they know I’m dead."

Edelstein spends most of his days scouring sites like LinkedIn and Indeed and pitching hiring managers with personalized appeals. When he scrolled through his Facebook ads on a Wednesday in December, he saw a variety of ads reflecting his interest in social media marketing: ads for the marketing software HubSpot ("15 free infographic templates!") and TripIt, which he used to book a trip to visit his mother in Florida.

What he didn’t see was a single ad for a job in his profession, including one identified by ProPublica that was being shown to younger users: a posting for a social media director job at HubSpot. The company asked that the ad be shown to people aged 27 to 40 who live or were recently living in the United States.

"Hypothetically, had I seen a job for a social media director at HubSpot, even if it involved relocation, I ABSOLUTELY would have applied for it," Edelstein said by email when told about the ad.

A HubSpot spokeswoman, Ellie Botelho, said that the job was posted on many sites, including LinkedIn, The Ladders and Built in Boston, and was open to anyone meeting the qualifications regardless of age or any other demographic characteristic.

She added that “the use of the targeted age-range selection on the Facebook ad was frankly a mistake on our part given our lack of experience using that platform for job postings and not a feature we will use again.”

For his part, Edelstein says he understands why marketers wouldn’t want to target ads at him: "It doesn’t surprise me a bit. Why would they want a 58-year-old white guy who’s disabled?"

Looking for ’Younger Blood’

Although LinkedIn is the leading online recruitment platform, according to an annual survey by SourceCon, an industry website, Facebook is rapidly gaining popularity with employers.

One reason is that Facebook’s sheer size — two billion monthly active users, versus LinkedIn’s 530 million total members — gives recruiters access to types of workers they can’t find elsewhere.

Consider nurses, whom hospitals are desperate to hire. “They’re less likely to use LinkedIn,” said Josh Rock, a recruiter at a large hospital system in Minnesota who has expertise in digital media. "Nurses are predominantly female, there’s a larger volume of Facebook users. That’s what they use."

There are also millions of hourly workers who have never visited LinkedIn, and may not even have a résumé, but who check Facebook obsessively.

Deb Andrychuk, chief executive of the Arland Group, which helps employers place recruitment ads, said clients sometimes asked her firm to target ads by age, saying they needed “to start bringing younger blood” into their organizations. “It’s not necessarily that we wouldn’t take someone older,” these clients say, according to Andrychuk, “but if you could bring in a younger set of applicants, it would definitely work out better.”

Andrychuk said that “we coach clients to be open and not discriminate” and that after being contacted by The Times, her team updated all their ads to ensure they didn’t exclude any age groups.

But some companies contend that there are permissible reasons to filter audiences by age, as with an ad for entry-level analyst positions at Goldman Sachs that was distributed to people 18 to 64. A Goldman Sachs spokesman, Andrew Williams, said showing it to people above that age range would have wasted money: roughly 25 percent of those who typically click on the firm’s untargeted ads are 65 or older, but people that age almost never apply for the analyst job.
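Goldman's rationale is simple arithmetic. A toy calculation, with invented dollar figures and the click-share numbers from his statement, shows why an advertiser might view those clicks as wasted budget:

```python
# Toy version of the cost argument. Dollar figures are invented; the 25%
# click share and near-zero application rate come from the statement above.
budget = 10_000.00              # hypothetical spend on an untargeted ad
share_65_plus = 0.25            # ~25% of clicks on untargeted ads come from 65+
application_rate_65_plus = 0.0  # "almost never apply"

wasted = budget * share_65_plus * (1 - application_rate_65_plus)
print(f"Spend on clicks yielding no applicants: ${wasted:,.2f}")  # $2,500.00
```

Whether that efficiency argument survives the Age Discrimination in Employment Act's "reasonable factors" test is exactly the legal question discussed below.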

"We welcome and actively recruit applicants of all ages," Williams said. "For some of our social-media ads, we look to get the content to the people most likely to be interested, but do not exclude anyone from our recruiting activity."

Pauline Kim, a professor of employment law at Washington University in St. Louis, said the Age Discrimination in Employment Act, unlike the federal anti-discrimination statute that covers race and gender, allows an employer to take into account “reasonable factors” that may be highly correlated with the protected characteristic, such as cost, as long as they don’t rely on the characteristic explicitly.

The Question of Liability

In various ways, Facebook and LinkedIn have acknowledged at least a modest obligation to police their ad platforms against abuse.

Earlier this year, Facebook said it would require advertisers to "self-certify" that their housing, employment and credit ads were compliant with anti-discrimination laws, but that it would not block marketers from purchasing age-restricted ads.

Still, Facebook didn’t promise to monitor those certifications for accuracy. And Facebook said the self-certification system, announced in February, was still being rolled out to all advertisers.

LinkedIn, in response to inquiries by ProPublica, added a self-certification step that prevents employers from using age ranges once they confirm that they are placing an employment ad.

With these efforts evolving, legal experts say it is unclear how much liability the tech platforms could have. Some civil rights laws, like the Fair Housing Act, explicitly require publishers to assume liability for discriminatory ads.

But the Age Discrimination in Employment Act assigns liability only to employers or employment agencies, like recruiters and advertising firms.

The lawsuit filed against Facebook on behalf of the communications workers argues that the company essentially plays the role of an employment agency — collecting and providing data that helps employers locate candidates, effectively coordinating with the employer to develop the advertising strategies, informing employers about the performance of the ads, and so forth.

Regardless of whether courts accept that argument, the tech companies could also face liability under certain state or local anti-discrimination statutes. For example, California’s Fair Employment and Housing Act makes it unlawful to "aid, abet, incite, compel or coerce the doing" of discriminatory acts proscribed by the statute.

"They may have an obligation there not to aid and abet an ad that enables discrimination," said Cliff Palefsky, an employment lawyer based in San Francisco.

The question may hinge on Section 230 of the federal Communications Decency Act, which protects internet companies from liability for third-party content.

Tech companies have successfully invoked this law to avoid liability for offensive or criminal content — including sex trafficking, revenge porn and calls for violence against Jews. Facebook is currently arguing in federal court that Section 230 immunizes it against liability for ad placement that blocks members of certain racial and ethnic groups from seeing the ads.

"Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any," Facebook argued in its motion to dismiss allegations that its ads violated a host of civil rights laws. The case does not allege age discrimination.

Eric Goldman, professor and co-director of the High Tech Law Institute at the Santa Clara University School of Law, who has written extensively about Section 230, says it is hard to predict how courts would treat Facebook’s age-targeting of employment ads.

Goldman said the law covered the content of ads, and that courts have made clear that Facebook would not be liable for an advertisement in which an employer wrote, say, “no one over 55 need apply.” But it is not clear how the courts would treat Facebook’s offering of age-targeted customization.

According to a federal appellate court decision in a fair-housing case, a platform can be considered to have helped “develop unlawful content” that users play a role in generating, which would negate the immunity.

"Depending on how the targeting is happening, you can make potentially different sorts of arguments about whether or not Google or Facebook or LinkedIn is contributing to the development" of the ad, said Deirdre K. Mulligan, a faculty director of the Berkeley Center for Law and Technology.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

Governors and Federal Agencies Are Blocking Nearly 1,300 Accounts on Facebook and Twitter

[Editor's note: today's guest blog post, by the reporters at ProPublica, highlights a little-known practice by some elected officials to block their constituents on social networking sites. Today's post is reprinted with permission.]

By Leora Smith and Derek Kravitz - ProPublica

Amanda Farber still doesn’t know why Maryland Gov. Larry Hogan blocked her from his Facebook group. A resident of Bethesda and full-time parent and volunteer, Farber identifies as a Democrat but voted for the Republican Hogan in 2014. Farber says she doesn’t post on her representatives’ pages often. But earlier this year, she said she wrote on the governor’s Facebook page, asking him to oppose the Trump administration’s travel ban and health care proposal.

She never received a response. When she later returned to the page, she noticed her comment had been deleted. She also noticed she had been blocked from commenting. (She is still allowed to share the governor’s posts and messages.)

Farber has repeatedly emailed and called Hogan’s office, asking them to remove her from their blacklist. She remains blocked. According to documents ProPublica obtained through an open-records request this summer, hers is one of 494 accounts that Hogan blocks. Blocked accounts include a schoolteacher who criticized the governor’s education policies and a pastor who opposed the governor’s stance against accepting Syrian refugees. They even have their own Facebook group: Marylanders Blocked by Larry Hogan on Facebook.

Hogan’s office says they “diligently adhere” to their social media policy when deleting comments and blocking users.

In August, ProPublica filed public-records requests with every governor and 22 federal agencies, asking for lists of everyone blocked on their official Facebook and Twitter accounts. The responses we’ve received so far show that governors and agencies across the country are blocking at least 1,298 accounts. More than half of those — 652 accounts — are blocked by Kentucky Governor Matt Bevin, a Republican.

Four other Republican governors and four Democrats, as well as five federal agencies, block hundreds of others, according to their responses to our requests. Five Republican governors and three Democrats responded that they are not blocking any accounts at all. Many agencies and more than half of governors’ offices have not yet responded to our requests. Most of the blocked accounts appear to belong to humans but some could be “bots,” or automated accounts.

When the administrator of a public Facebook page or Twitter handle blocks an account, the blocked user can no longer comment on posts. That can create an inaccurate public image of support for government policies. (Here’s how you can dig into whether your elected officials are blocking constituents.)

ProPublica made the records requests and asked readers for their own examples after we detailed multiple instances of officials blocking constituents.

We heard from dozens of people. The governors’ offices in Alaska, Maine, Mississippi, Nebraska and New Jersey did not respond to our requests for records, but residents in each of those states reported being blocked. People were blocked after commenting on everything from marijuana legislation to Medicaid to a local green jobs bill.

For some, being blocked means losing one of few means to communicate with their elected representatives. Ann-Meredith McNeill, who lives in western rural Kentucky, told ProPublica that Bevin rarely visits anywhere near her. McNeill said she feels like “the internet is all I have” for interacting with the governor.

McNeill said she was blocked after criticizing Bevin’s position on abortion rights. (Last January, Bevin’s administration won a lawsuit that resulted in closing one of Kentucky’s two abortion clinics, the event that McNeill says inspired her comment.)

In response to questions about its social media blocking policies, Bevin’s office said in a statement that “a small number of users misuse [social media] outlets by posting obscene and abusive language or images, or repeated off-topic comments and spam. Constituents of all ages should be able to engage in civil discourse with Governor Bevin via his social media platforms without being subjected to vulgarity or abusive trolls.” McNeill told ProPublica, “I’m sure I got sassy” but she made “no threats or anything.”

Almost every federal agency that responded is blocking accounts. The Department of Veterans Affairs blocked 18 accounts as of July, but said most were originally blocked before 2014. The blocked accounts included a Michigan law firm specializing in auto accident cases and a Virginia real estate consultant who told ProPublica she had “no idea why” she was blocked. The Department of Energy blocked eight accounts as of October. The Department of Labor blocked seven accounts. And the Small Business Administration blocked two accounts, both of which were unverified and claimed to be affiliated with government loan programs.

Many governors and agencies gave us only partial lists or rejected our requests altogether. Outgoing Kansas Gov. Sam Brownback’s office told us they would not share their block lists due to “privacy concerns for those people whose names might appear on it.” Alabama declined to provide public records because our request did not come from an Alabama citizen.

Missouri Gov. Eric Greitens’ office declined to share records from his Facebook or Twitter accounts, arguing they are not “considered to be the ‘official’ social media accounts of the Governor of Missouri” because he created them before he took office.

Increased attention on the issue of blocking seems to be having an impact. In September, the California-based First Amendment Coalition revealed that California Governor Jerry Brown, a Democrat, had blocked more than 1,500 accounts until June, shortly before the organization submitted a request for his social media records.

At some point before fulfilling the coalition’s request, Brown’s office unblocked every account.

Vermont Gov. Phil Scott, a Republican, blocked the activist group Indivisible Vermont on Twitter on Aug. 25. On Aug. 28, Vermont reporter Taylor Dobbs submitted a request for the governor’s full blocked list, shortly after ProPublica’s similar request. Later that day, Scott unblocked the group and released a statement saying the account was “misconstrued as spam.”

Wisconsin Gov. Scott Walker’s office unblocked at least two Facebook users after receiving ProPublica’s request. Here are screenshots they sent us showing that the users have been unblocked:

In the last year, a series of legal claims have called into question the legality of government officials blocking constituents on social media.

At least one federal district court held that government officials who block constituents are violating their First Amendment rights.

Constituents have pending lawsuits against the governors of Kentucky, Maine, and Maryland, as well as Representative Paul Gosar, R-Ariz., and President Trump.

We asked the White House, which is not subject to open-records laws, to disclose the list of people Trump is blocking. Officials there have not responded.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

Facebook to Temporarily Block Advertisers From Excluding Audiences by Race

[Editor's note: today's guest blog post, by the reporters at ProPublica, discusses advertising practices by both Facebook, a popular social networking site, and some advertisers using the site. Today's post is reprinted with permission.]

By Julia Angwin, ProPublica

Facebook said it would temporarily stop advertisers from being able to exclude viewers by race while it studies the use of its ad targeting system.

“Until we can better ensure that our tools will not be used inappropriately, we are disabling the option that permits advertisers to exclude multicultural affinity segments from the audience for their ads,” Facebook Chief Operating Officer Sheryl Sandberg wrote in a letter to the Congressional Black Caucus.

ProPublica disclosed last week that Facebook was still allowing advertisers to buy housing ads that excluded audiences by race, despite its promises earlier this year to reject such ads. ProPublica also found that Facebook was not asking housing advertisers that blocked other sensitive audience categories — by religion, gender, or disability — to “self-certify” that their ads were compliant with anti-discrimination laws.

Under the Fair Housing Act of 1968, it’s illegal “to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.” Violators face tens of thousands of dollars in fines.

In her letter, Sandberg said the company will examine how advertisers are using its exclusion tool — “focusing particularly on potentially sensitive segments” such as ads that exclude LGBTQ communities or people with disabilities. “During this review, no advertisers will be able to create ads that exclude multicultural affinity groups,” Facebook Vice President Rob Goldman said in an e-mailed statement.

Goldman said the results of the audit would be shared with “groups focused on discrimination in ads,” and that Facebook would work with them to identify further improvements and publish the steps it will take.

Sandberg’s letter to the Congressional Black Caucus is the outgrowth of a dialogue that has been ongoing since last year when ProPublica published its first article revealing Facebook was allowing advertisers to exclude people with an “ethnic affinity” for various minority groups, including African Americans, Asian Americans and Hispanics, from viewing their ads.

At that time, four members of the Congressional Black Caucus reached out to Facebook for an explanation. “This is in direct violation of the Fair Housing Act of 1968, and it is our strong desire to see Facebook address this issue immediately,” wrote the lawmakers.

The U.S. Department of Housing and Urban Development, which enforces the nation’s fair housing laws, opened an inquiry into Facebook’s practices.

But in February, Facebook said it had solved the problem — by building an algorithm that would allow it to spot and reject housing, employment and credit ads that discriminated using racial categories. For audiences not selected by race, Facebook said it would require advertisers to “self-certify” that their ads were compliant with the law.

HUD closed its inquiry. But last week, ProPublica successfully purchased dozens of racist, sexist and otherwise discriminatory ads for a fictional housing company advertising a rental. None of the ads were rejected and none required a self-certification. Facebook said it was a “technical failure” and vowed to fix the problem.

U.S. Rep. Robin Kelly, D-Ill., said that Facebook's move to disable the feature is “an appropriate action.” “When I first raised this issue with Facebook, I was disappointed. When it became necessary to raise the issue again, I was irritated,” she said. “I will continue watching this issue very closely to ensure these issues do not raise again.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Uber: Data Breach Affected 57 Million Users. Some Say A Post Breach Coverup, Too

Uber logo Uber is in the news again. And not in a good way. The popular ride-sharing service experienced a data breach affecting 57 million users. While many companies experience data breaches, regulators say Uber went further and tried to cover it up.

First, details about the data breach. Bloomberg reported:

"Hackers stole the personal data of 57 million customers and drivers... Compromised data from the October 2016 attack included names, email addresses and phone numbers of 50 million Uber riders around the world, the company told Bloomberg on Tuesday. The personal information of about 7 million drivers was accessed as well, including some 600,000 U.S. driver’s license numbers..."

Second, details about the coverup:

"... the ride-hailing firm ousted its chief security officer and one of his deputies for their roles in keeping the hack under wraps, which included a $100,000 payment to the attackers... At the time of the incident, Uber was negotiating with U.S. regulators investigating separate claims of privacy violations. Uber now says it had a legal obligation to report the hack to regulators and to drivers whose license numbers were taken. Instead, the company paid hackers to delete the data and keep the breach quiet."

Geez. Not tell regulators about a breach? Not tell affected users? 48 states have data breach notification laws requiring various levels of notification. Consumers need notice in order to take action to protect themselves and their sensitive personal and payment information.

Third, Uber executives learned about the breach soon thereafter:

"Kalanick, Uber’s co-founder and former CEO, learned of the hack in November 2016, a month after it took place, the company said. Uber had just settled a lawsuit with the New York attorney general over data security disclosures and was in the process of negotiating with the Federal Trade Commission over the handling of consumer data. Kalanick declined to comment on the hack."

Reportedly, breach victims with stolen driver's license information will be offered free credit monitoring and identity theft protection services. Uber said that no Social Security numbers or credit card information were stolen during the breach, but one wonders if Uber and its executives can be trusted.

The company has a long history of sketchy behavior, including the worldwide 'Greyball' program used by executives to thwart code enforcement inspections by governments; dozens of employees fired or investigated for sexual harassment; a lawsuit describing how the company's mobile app allegedly scammed both riders and drivers; and privacy abuses with the 'God View' tool. TechCrunch reported that Uber:

"... reached a settlement with [New York State Attorney General] Schneiderman’s office in January 2016 over its abuse of private data in a rider-tracking system known as “God View” and its failure to disclose a previous data breach that took place in September 2014 in a timely manner."

Several regulators are investigating Uber's latest breach and alleged coverup. CNet reported:

"The New York State Attorney General has opened an investigation into the incident, which Uber made public Tuesday. Officials for Connecticut, Illinois and Massachusetts also confirmed they're investigating the hack. The New Mexico Attorney General sent Uber a letter asking for details of the hack and how the company responded. What's more, Uber appears to have broken a promise made in a Federal Trade Commission settlement not to mislead users about data privacy and security, a legal expert says... In addition to its agreement with the FTC, Uber is required to follow laws in New York and 47 other states that mandate companies to tell people when their drivers' license numbers are breached. Uber acknowledged Tuesday it had a legal requirement to disclose the breach."

The Financial Times reported that the U.K. Information Commissioner's Office is investigating the incident, along with the National Crime Agency and the National Cyber Security Centre. New European Union data protection rules, which take effect in May 2018, will require companies to notify regulators within 72 hours of a cyber attack or incur fines of up to 20 million euros or 4 percent of annual global revenues.

Let's summarize the incident. It seems that a few months after settling a lawsuit about a data breach and its data security practices, the company had another data breach, paid the hackers to keep quiet about the breach and what they stole, and then allegedly chose not to tell affected users nor regulators about it, as required by prior settlement agreements, breach laws in most states, and breach laws in some international areas. Geez. What chutzpah!

What are your opinions of the incident? Can Uber and its executives be trusted?

Do Social Media Pose Threats To Democracies?

November 4th cover of The Economist magazine The November 4th issue of The Economist magazine discussed whether social networking sites threaten democracy in the United States and elsewhere. Social media were supposed to better connect us with accurate and reliable information. What we know so far (links added):

"... Facebook acknowledged that before and after last year’s American election, between January 2015 and August this year, 146m users may have seen Russian misinformation on its platform. Google’s YouTube admitted to 1,108 Russian-linked videos and Twitter to 36,746 accounts. Far from bringing enlightenment, social media have been spreading poison. Russia’s trouble-making is only the start. From South Africa to Spain, politics is getting uglier... by spreading untruth and outrage, corroding voters’ judgment and aggravating partisanship, social media erode the conditions..."

You can browse some of the ads Russia bought on Facebook during 2016. (Hopefully, you weren't tricked by any of them.) We also know from this United Press International (UPI) report about social media companies' testimony before Congress:

"Senator Patrick Leahy (D-Vt) said Facebook still has many pages that appear to have been created by the Internet Research Agency, a pro-Kremlin group that bought advertising during the campaign. Senator Al Franken (D-Minn.) said some Russian-backed advertisers even paid for the ads in Russian currency.

"How could you not connect those two dots?" he asked Facebook general council Colin Stretch. "It's a signal we should have been alert to and, in hindsight, one we missed," Stretch answered."

Google logo And during the Congressional testimony:

"Google attorney Richard Salgado said his company's platform is not a newspaper, which has legal responsibilities different from technology platforms. "We are not a newspaper. We are a platform that shares information," he said. "This is a platform from which news can be read from many sources."

Separate from the Congressional testimony, Kent Walker, a Senior Vice President and General Counsel at Google, released a statement which read in part:

"... like other internet platforms, we have found some evidence of efforts to misuse our platforms during the 2016 U.S. election by actors linked to the Internet Research Agency in Russia... We have been conducting a thorough investigation related to the U.S. election across our products drawing on the work of our information security team, research into misinformation campaigns from our teams, and leads provided by other companies. Today, we are sharing results from that investigation... We will be launching several new initiatives to provide more transparency and enhance security, which we also detail in these information sheets: what we found, steps against phishing and hacking, and our work going forward..."

This matters greatly. Why? The Economist explained that the disinformation distributed via social media and other websites:

"... aggravates the politics of contempt that took hold, in the United States at least, in the 1990s. Because different sides see different facts, they share no empirical basis for reaching a compromise. Because each side hears time and again that the other lot are good for nothing but lying, bad faith and slander, the system has even less room for empathy. Because people are sucked into a maelstrom of pettiness, scandal and outrage, they lose sight of what matters for the society they share. This tends to discredit the compromises and subtleties of liberal democracy, and to boost the politicians who feed off conspiracy and nativism..."

When citizens (via their elected representatives) can't agree or compromise, government gridlock results. Nothing gets done. Frustration builds among voters.

What solutions could fix these problems? The Economist article discussed several remedies: better critical-thinking skills by social media users, holding social-media companies accountable, more transparency around ads, better fact checking, anti-trust actions, and/or disallowing bots (automated accounts). It will take time for social media users to improve their critical-thinking skills. Considerations about fact checking:

"When Facebook farms out items to independent outfits for fact-checking, the evidence that it moderates behavior is mixed. Moreover, politics is not like other kinds of speech; it is dangerous to ask a handful of big firms to deem what is healthy for society.

Considerations about anti-trust actions:

"Breaking up social-media giants might make sense in antitrust terms, but it would not help with political speech—indeed, by multiplying the number of platforms, it could make the industry harder to manage."

All of the solutions have advantages and disadvantages. It seems the problems will be with us for a long while. Social media have been abused... and will continue to be abused. Comments? What solutions do you think would be best?

What We Do and Don’t Know About Facebook’s New Political Ad Transparency Initiative

[Editor's note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

The short answer: It leaves the company some wiggle room.

Facebook logo By Julia Angwin, ProPublica

On Thursday September 21, Facebook Chief Executive Mark Zuckerberg announced several steps to make political ads on the world’s largest social network more transparent. The changes follow Facebook’s acknowledgment in September that $100,000 worth of political ads were placed during the 2016 election cycle by “inauthentic accounts” linked to Russia.

The changes also follow ProPublica’s launch of a crowdsourcing effort during September to collect political advertising from Facebook. Our goal was to ensure that political ads on Facebook, which until now have largely avoided scrutiny, receive the same level of fact-checking by journalists, advocacy groups and political opponents as do print, broadcast and radio political ads. We hope to have some results to share soon.

In the meantime, here’s what we do and don’t know about how Facebook’s changes could play out.

How does Facebook plan to increase disclosure of funders of political ads?
In his statement, Zuckerberg said that Facebook will start requiring political advertisers to disclose “which page paid for an ad.”

This is a reversal for Facebook. In 2011, the company argued to the Federal Election Commission that it would be “inconvenient and impracticable” to include disclaimers in political ads because the ads are so small in size.

While the commission was too divided to make a decision on Facebook’s request for an advisory ruling, the deadlock effectively allowed the company to continue omitting disclosures. (The commission has just reopened discussion of whether to require disclosure for internet advertising).

Now Facebook appears to have dropped its objections to adding disclosures. However, the problem with Facebook’s plan of only revealing which page purchased the ad is that the source of the money behind the page is not always clear.

What is Facebook doing to make political ads more transparent to the public?
Zuckerberg also said that Facebook will start to require political advertisers to place on their pages all the ads they are “currently running to any audience on Facebook.”

This requirement could mean the end of the so-called “dark posts” on Facebook — political ads whose origins were not easily traced. Now, theoretically, each Facebook political ad would be associated with and published on a Facebook page — either for candidates, political action committees or interest groups.

However, the word “currently” suggests that such disclosure could be fleeting. After all, ads can run on Facebook for as little as a few minutes or a few hours. And since campaigns can run dozens, hundreds or even thousands of variations of a single ad — to test which one gets the best response — it will be interesting to see whether and how they manage to display all those ads on their pages simultaneously.

“It would require a lot of vigilance on the part of users and voters to be on those pages at the exact time” that campaigns posted all of their ads, said Brendan Fischer, a lawyer at the Campaign Legal Center, a campaign finance reform watchdog group.

How will Facebook decide which ads are political?
It’s not clear how Facebook will decide which ads are political and which aren’t. There are several existing definitions they could choose from.

The Federal Communications Commission defines political advertising as anything that “communicates a message relating to any political matter of national importance,” but those rules only apply to television and radio broadcasters. FCC rules require extensive disclosure, including the amount paid for the ads, the audiences targeted and how many times the ads run.

The Federal Election Commission has traditionally defined two major types of campaign ads. “Independent expenditures” are ads that expressly advocate the election or defeat of a “clearly identified candidate.” A slightly broader definition, “electioneering communications,” encompasses so-called “issue ads” that mention a candidate but may not directly advocate for his or her election or defeat.

The FEC only requires spending on electioneering ads to be disclosed in the 60 days leading up to a general election or the 30 days leading up to a primary election. And the electioneering communications rule does not apply to online advertising.

Of course, Facebook doesn’t have to choose any of the existing definitions of political advertising. It could do what it did with hate speech — and make up its own rules.

How will Facebook catch future political ads secretly placed by foreigners?
The law prohibits a foreign national from making any contribution or expenditure in any U.S. election. That means that Russians who bought the ads may have broken the law, but it also means that any American who “knowingly provided substantial assistance” may also have broken the law.

In mid-September, when Facebook disclosed the Russian ad purchase, the company said it was increasing its technical efforts to identify fake and inauthentic pages and to prevent them from running ads.

Zuckerberg said the company would “strengthen our ad review process for political ads” but didn’t specify exactly how. (Separately, Facebook Chief Operating Officer Sheryl Sandberg said in September that the company is adding more human review to its ad-buying categories, after ProPublica revealed that it allowed advertisers to target ads toward “Jew haters.”)

Zuckerberg also said Facebook will work with other tech companies and governments to share information about online risks during elections.

Will ProPublica continue crowd-sourcing Facebook political ads?
Yes, we plan to keep using our tool to monitor political advertising. In September, we worked with news outlets in Germany — Spiegel Online, Süddeutsche Zeitung and Tagesschau — to collect more than 600 political ads during the parliamentary elections.

We believe there is value to creating a permanent database of political ads that can be inspected by the public, and we intend to track whether Facebook lives up to its promises. If you want to help us, download our tool for Firefox or Chrome web browsers.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

Facebook Enabled Advertisers to Reach ‘Jew Haters’

[Editor's note: today's guest post, by the reporters at ProPublica, is part of its Machine Bias series. After being contacted by ProPublica, Facebook removed several anti-Semitic ad categories and it no longer allows advertisers to target groups based upon self-reported information. Today's post is reprinted with permission.]

By Julia Angwin, Madeleine Varner, and Ariana Tobin - ProPublica

Facebook logo Want to market Nazi memorabilia, or recruit marchers for a far-right rally? Facebook’s self-service ad-buying platform had the right audience for you.

Until last week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”

To test if these ad categories were real, we paid $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.

After we contacted Facebook, it removed the anti-Semitic categories — which were created by an algorithm rather than by people — and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.

“There are times where content is surfaced on our platform that violates our standards,” said Rob Leathern, product management director at Facebook. “In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Facebook’s advertising has become a focus of national attention since it disclosed last week that it had discovered $100,000 worth of ads placed during the 2016 presidential election season by “inauthentic” accounts that appeared to be affiliated with Russia.

Like many tech companies, Facebook has long taken a hands-off approach to its advertising business. Unlike traditional media companies that select the audiences they offer advertisers, Facebook generates its ad categories automatically, based both on what users explicitly share with Facebook and what they implicitly convey through their online activity.

Traditionally, tech companies have contended that it’s not their role to censor the Internet or to discourage legitimate political expression. In the wake of the violent protests in Charlottesville by right-wing groups that included self-described Nazis, Facebook and other tech companies vowed to strengthen their monitoring of hate speech.

Facebook CEO Mark Zuckerberg wrote at the time that “there is no place for hate in our community,” and pledged to keep a closer eye on hateful posts and threats of violence on Facebook. “It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious,” he wrote.

But Facebook apparently did not intensify its scrutiny of its ad buying platform. In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a “field of study.” Facebook’s algorithm automatically transforms people’s declared interests into advertising categories.
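To make that mechanism concrete, here is a minimal sketch in Python of how free-text profile fields might be turned into targeting categories with no human review. It is a guess at the general pipeline ProPublica describes, not Facebook's actual code; all names and field labels are hypothetical.

    # Hypothetical sketch: auto-generating ad-targeting categories from users'
    # free-text profile fields, with no human review of the resulting labels.
    from collections import Counter

    def build_ad_categories(profiles, min_audience=1):
        counts = Counter()
        for profile in profiles:
            for field in ("interests", "employer", "field_of_study"):
                for value in profile.get(field, []):
                    # Whatever a user typed becomes a purchasable audience segment.
                    counts[(field, value.strip().lower())] += 1
        return {key: size for key, size in counts.items() if size >= min_audience}

    profiles = [
        {"interests": ["Hungarian sausages"], "field_of_study": ["History"]},
        {"field_of_study": ["History"], "employer": ["Acme Corp"]},
    ]
    print(build_ad_categories(profiles))
    # {('interests', 'hungarian sausages'): 1, ('field_of_study', 'history'): 2,
    #  ('employer', 'acme corp'): 1}

Under a pipeline like this, an offensive self-description is indistinguishable from a hobby: both become targeting categories unless something downstream filters them.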

Here is a screenshot of our ad buying process on the company’s advertising portal:

Screenshot of Facebook ad buying process

This is not the first controversy over Facebook’s ad categories. Last year, ProPublica was able to block an ad that we bought in Facebook’s housing categories from being shown to African-Americans, Hispanics and Asian-Americans, raising the question of whether such ad targeting violated laws against discrimination in housing advertising. After ProPublica’s article appeared, Facebook built a system that it said would prevent such ads from being approved.

Last year, ProPublica also collected a list of the advertising categories Facebook was providing to advertisers. We downloaded more than 29,000 ad categories from Facebook’s ad system — and found categories ranging from an interest in “Hungarian sausages” to “People in households that have an estimated household income of between $100K and $125K.”

At that time, we did not find any anti-Semitic categories, but we do not know if we captured all of Facebook’s possible ad categories, or if these categories were added later. A Facebook spokesman didn’t respond to a question about when the categories were introduced.

Two weeks ago, acting on a tip, we logged into Facebook’s automated ad system to see if “Jew hater” was really an ad category. We found it, but discovered that the category — with only 2,274 people in it — was too small for Facebook to allow us to buy an ad pegged only to Jew haters.

Facebook’s automated system suggested “Second Amendment” as an additional category that would boost our audience size to 119,000 people, presumably because its system had correlated gun enthusiasts with anti-Semites.

Instead, we chose additional categories that popped up when we typed in “jew h”: “How to burn Jews,” and “History of ‘why jews ruin the world.’” Then we added a category that Facebook suggested when we typed in “Hitler”: a category called “Hitler did nothing wrong.” All were described as “fields of study.”

These ad categories were tiny. Only two people were listed as the audience size for “how to burn jews,” and just one for “History of ‘why jews ruin the world.’” Another 15 people comprised the viewership for “Hitler did nothing wrong.”

Facebook’s automated system told us that we still didn’t have a large enough audience to make a purchase. So we added “German Schutzstaffel,” commonly known as the Nazi SS, and the “Nazi Party,” which were both described to advertisers as groups of “employers.” Their audiences were larger: 3,194 for the SS and 2,449 for Nazi Party.

Still, Facebook said we needed more — so we added people with an interest in the National Democratic Party of Germany, a far-right, ultranationalist political party, with its much larger viewership of 194,600.

Once we had our audience, we submitted our ad — which promoted an unrelated ProPublica news article. Within 15 minutes, Facebook approved our ad, with one change. In its approval screen, Facebook described the ad targeting category “Jew hater” as “Antysemityzm,” the Polish word for anti-Semitism. Just to make sure it was referring to the same category, we bought two additional ads using the term “Jew hater” in combination with other terms. Both times, Facebook changed the ad targeting category “Jew hater” to “Antysemityzm” in its approval.

Here is one of our approved ads from Facebook:

Screenshot of approved Facebook ad for ProPublica

A few days later, Facebook sent us the results of our campaigns. Our three ads reached 5,897 people, generating 101 clicks and 13 “engagements” — which could be a “like,” a “share,” or a comment on a post.

Since we contacted Facebook, most of the anti-Semitic categories have disappeared.

Facebook spokesman Joe Osborne said that they didn’t appear to have been widely used. “We have looked at the use of these audiences and campaigns and it’s not common or widespread,” he said.

We looked for analogous advertising categories for other religions, such as “Muslim haters.” Facebook didn’t have them.

Update, Sept. 14, 2017: This story has been updated to include the Facebook spokesman's name.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

Bungled Software Update Renders Customers' Smart Door Locks Inoperable

Image of LockState RemoteLock 6i device. A bungled software update by LockState, maker of WiFi-enabled door locks, rendered many customers' locks inoperable -- or "bricked." LockState notified affected customers in this letter:

"Dear Lockstate Customer,
We notified you earlier today of a potential issue with your LS6i lock. We are sorry to inform you about some unfortunate news. Your lock is among a small subset of locks that had a fatal error rendering it inoperable. After a software update was sent to your lock, it failed to reconnect to our web service making a remote fix impossible...

Many Airbnb operators use smart locks by LockState to secure their properties. On its website, LockState promotes the LS6i lock as:

"... perfect for your rental property, home or office use. This robust WiFi enabled door lock allows users to lock or unlock doors remotely, know when people unlock your door, and even receive text alerts when codes are used. Issue new codes or delete codes from your computer or phone. Even give temporary codes to guests or office personnel."

Reportedly, about 200 Airbnb customers were affected. The company said 500 locks were affected. ArsTechnica explained how the bungled software update happened:

"The failure occurred last Monday when LockState mistakenly sent some 6i lock models a firmware update developed for 7i locks. The update left earlier 6i models unable to be locked and no longer able to receive over-the-air updates."

Some affected customers shared their frustrations on the company's Twitter page. LockState said the affected locks can still be operated with physical keys. While that is helpful, it isn't a solution since customers rely upon the remote features. Affected customers have two repair options: 1) return the back portion of the lock (repair time about 5 to 7 days), or 2) request a replacement lock (turnaround about 14 to 18 days).

The whole situation seems to be another reminder of the risks when companies design smart devices that depend upon over-the-air firmware updates. And, a better disclosure letter by LockState would have explained the corrections to its internal systems and managerial processes, so this doesn't happen again during a future software update.

What are your opinions?

Survey: Online Harassment In 2017

What is online life like for many United States residents? A recent survey by the Pew Research Center provides a good view. 41 percent of adults surveyed have personally experienced online harassment. Even more, 66 percent, witnessed online harassment directed at others.

Types of behaviors. Online Harassment 2017 survey. Pew Research. The types of online harassment behaviors vary from the less severe (e.g., offensive name calling, efforts to embarrass someone) to the more severe (e.g., physical threats, harassment over a sustained period, sexual harassment, stalking). 18 percent of survey participants -- nearly one out of every five persons -- reported that they had experienced severe behaviors.

Americans reported that social networking sites are the most common locations for online harassment experiences. Of the 41 percent of survey participants who personally experienced online harassment, most of those (82 percent) cited a single site and 58 percent cited "social media."

The reasons vary. 14 percent of survey respondents reported they had been harassed online specifically because of their politics; 9 percent reported that they were targeted due to their physical appearance; 8 percent said they were targeted due to their race or ethnicity; and 8 percent said they were targeted due to their gender. 5 percent said they were targeted due to their religion, and 3 percent said they were targeted due to their sexual orientation.

Some groups experience online harassment more than others. Pew found that younger adults, under age 30, are more likely to experience severe forms of online harassment. Similarly, younger adults are also more likely to witness online harassment targeting others. Pew also found:

"... one-in-four blacks say they have been targeted with harassment online because of their race or ethnicity, as have one-in-ten Hispanics. The share among whites is lower (3%). Similarly, women are about twice as likely as men to say they have been targeted as a result of their gender (11% vs. 5%). Men, however, are around twice as likely as women to say they have experienced harassment online as a result of their political views (19% vs. 10%). Similar shares of Democrats and Republicans say they have been harassed online..."

The impacts upon victims vary, too:

"... ranging from mental or emotional stress to reputational damage or even fear for one’s personal safety. At the same time, harassment does not have to be experienced directly to leave an impact. Around one-quarter of Americans (27%) say they have decided not to post something online after witnessing the harassment of others, while more than one-in-ten (13%) say they have stopped using an online service after witnessing other users engage in harassing behaviors..."

Different attitudes by gender. Online Harassment 2017 survey. Pew Research. And, attitudes vary by gender. See the table on the right. More women than men consider online harassment a "major problem," and men prioritize free speech over online safety while women prioritize safety first. And, 83 percent of young women (e.g., ages 18 - 29) viewed online harassment as a major problem. Perhaps most importantly, persons who have "faced severe forms of online harassment differ in experiences, reactions, and attitudes."

Pew Research also found that persons who experience severe forms of online harassment, "are more likely to be targeted for personal characteristics and to face offline consequences." So, what happens online doesn't necessarily stay online.

The perpetrators vary, too. Of the 41 percent of survey participants who personally experienced online harassment, 34 percent said the perpetrator was a stranger, and 31 percent said they didn't know the perpetrator's real identity. Also, 26 percent said the perpetrator was an acquaintance, followed by friend (18 percent), family member (11 percent), former romantic partner (7 percent), and coworker (5 percent).

Pew Research found that the number of Americans who experienced online harassment has increased slightly from 35 percent during a 2014 survey. Pew Research Center surveyed 4,248 U.S. adults during January 9 - 23, 2017. 

Next Steps
62 percent of survey participants view online harassment as a major problem. 5 percent do not consider it a problem at all. People who have experienced severe forms of online harassment said that they have already taken action. Those actions include a mix: a) set up or adjust privacy settings for their profiles in online services, b) reported offensive content to the online service, c) responded directly to the harasser, d) offered support to others targeted, e) changed information in their online profiles, and f) stopped using specific online services.

Views vary about which entities bear responsibility for solutions. 79 percent of survey respondents said that online services have a duty to intervene when harassment occurs on their service. 35 percent believe that better policies and tools from online services are the best way to address online harassment.

Meanwhile, 60 percent said that bystanders who witness online harassment "should play a major role in addressing this issue," and 15 percent view peer pressure as an effective solution. 49 percent said law enforcement should play a major role in addressing online harassment, while 31 percent said stronger laws are needed. Perhaps most troubling:

"... a sizable proportion of Americans (43%) say that law enforcement currently does not take online harassment incidents seriously enough."

Among persons who have experienced severe forms of online harassment, 55 percent said that law enforcement does not take the incidents seriously enough. Compare that statistic with this: nearly three-quarters (73 percent) of young men (ages 18 - 29) feel that offensive online content is taken too seriously.

And Americans are highly divided about how to balance safety concerns versus free speech:

"When asked how they would prioritize these competing interests, 45% of Americans say it is more important to let people speak their minds freely online; a slightly larger share (53%) feels that it is more important for people to feel welcome and safe online.

Americans are also relatively divided on just how seriously offensive content online should be treated. Some 43% of Americans say that offensive speech online is too often excused as not being a big deal, but a larger share (56%) feel that many people take offensive content online too seriously."

With such divergent views, one wonders if the problem of online harassment can be easily solved. What are your opinions about online harassment?

Facebook's Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children

[Editor's Note: today's guest post, by the reporters at ProPublica, explores how social networking practice censorship to combat violence and hate speech, plus related practices such as "geo-blocking." It is reprinted with permission.]

Facebook logo by Julia Angwin, ProPublica, and Hannes Grassegger, special to ProPublica

In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of "radicalized" Muslims. "Hunt them, identify them, and kill them," declared U.S. Rep. Clay Higgins, a Louisiana Republican. "Kill them all. For the sake of all that is good and righteous. Kill them all."

Higgins' plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

"All white people are racist. Start from this reference point, or you've already failed," Delgado wrote. The post was removed and her Facebook account was disabled for seven days.

A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook's censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins' incitement to violence passed muster because it targeted a specific sub-group of Muslims -- those that are "radicalized" -- while Delgado's post was deleted for attacking whites in general.

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn't be allowed, in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of "fake news" -- fabricated stories that circulated on Facebook like "Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement" -- and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 "Arab Spring" with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company's hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of "violence to resist occupation of an internationally recognized state." The company's workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.

One document trains content reviewers on how to apply the company's global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at "protected categories" -- based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about "subsets" of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. (The exact rules are in the slide show below.)

The Facebook Rules

Facebook has used these rules to train its "content reviewers" to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions. ProPublica recreated the slides.
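The subset rule can be restated compactly in code. The sketch below, in Python, is a reconstruction based only on ProPublica's description of the training slides; it is not Facebook's implementation, and the trait labels are illustrative.

    # An attack is deleted only if EVERY trait describing the targeted group
    # falls within a protected category; one unprotected trait makes a "subset."

    PROTECTED_CATEGORIES = {
        "race", "sex", "gender identity", "religious affiliation",
        "national origin", "ethnicity", "sexual orientation",
        "serious disability/disease",
    }

    def attack_is_deleted(traits):
        """traits maps each descriptor to its category, e.g. {"white": "race"}."""
        return all(cat in PROTECTED_CATEGORIES for cat in traits.values())

    print(attack_is_deleted({"white": "race", "men": "sex"}))             # True
    print(attack_is_deleted({"female": "sex", "drivers": "occupation"}))  # False
    print(attack_is_deleted({"black": "race", "children": "age"}))        # False
    print(attack_is_deleted({"muslims": "religious affiliation",
                             "radicalized": "ideology"}))                 # False

The quiz answer in the slide falls out directly: "white men" is the only group in the example whose every trait is protected.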

Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook's algorithm is designed to defend all races and genders equally.

"Sadly," the rules are "incorporating this color-blindness idea which is not in the spirit of why we have equal protection," said Danielle Citron, a law professor and expert on information privacy at the University of Maryland. This approach, she added, will "protect the people who least need it and take it away from those who really need it."

But Facebook says its goal is different -- to apply consistent standards worldwide. "The policies do not always lead to perfect outcomes," said Monika Bickert, head of global policy management at Facebook. "That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share."

Facebook's rules constitute a legal world of their own. They stand in sharp contrast to the United States' First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company's hate speech algorithm. But they also differ -- for example, in permitting postings that deny the Holocaust -- from more restrictive European standards.

The company has long had programs to remove obviously offensive material like child pornography from its stream of images and commentary. Recent articles in the Guardian and Süddeutsche Zeitung have detailed the difficult choices that Facebook faces regarding whether to delete posts containing graphic violence, child abuse, revenge porn and self-mutilation.

The challenge of policing political expression is even more complex. The documents reviewed by ProPublica indicate, for example, that Donald Trump's posts about his campaign proposal to ban Muslim immigration to the United States violated the company's written policies against "calls for exclusion" of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump's statements from its policies at the order of Mark Zuckerberg, the company's founder and chief executive.

The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.

Users whose posts are removed are not usually told what rule they have broken, and they cannot generally appeal Facebook's decision. Appeals are currently only available to people whose profile, group or page is removed.

The company has begun exploring adding an appeals process for people who have individual pieces of content deleted, according to Bickert. "I'll be the first to say that we're not perfect every time," she said.

Facebook is not required by U.S. law to censor content. A 1996 federal law gave most tech companies, including Facebook, legal immunity for the content users post on their services. The law, Section 230 of the Communications Decency Act, was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.

The law freed up online publishers to host online forums without having to legally vet each piece of content before posting it, the way that a news outlet would evaluate an article before publishing it. But early tech companies soon realized that they still needed to supervise their chat rooms to prevent bullying and abuse that could drive away users.

America Online convinced thousands of volunteers to police its chat rooms in exchange for free access to its service. But as more of the world connected to the internet, the job of policing became more difficult and companies started hiring workers to focus on it exclusively. Thus the job of content moderator -- now often called content reviewer -- was born.

In 2004, attorney Nicole Wong joined Google and persuaded the company to hire its first-ever team of reviewers, who responded to complaints and reported to the legal department. Google needed "a rational set of policies and people who were trained to handle requests," for its online forum called Groups, she said.

Google's purchase of YouTube in 2006 made deciding what content was appropriate even more urgent. "Because it was visual, it was universal," Wong said.

While Google wanted to be as permissive as possible, she said, it soon had to contend with controversies such as a video mocking the King of Thailand, which violated Thailand's laws against insulting the king. Wong visited Thailand and was impressed by the nation's reverence for its monarch, so she reluctantly agreed to block the video -- but only for computers located in Thailand.

Since then, selectively banning content by geography -- called "geo-blocking" -- has become a more common request from governments. "I don't love traveling this road of geo-blocking," Wong said, but "it's ended up being a decision that allows companies like Google to operate in a lot of different places."

For social networks like Facebook, however, geo-blocking is difficult because of the way posts are shared with friends across national boundaries. If Facebook geo-blocks a user's post, it would only appear in the news feeds of friends who live in countries where the geo-blocking prohibition doesn't apply. That can make international conversations frustrating, with bits of the exchange hidden from some participants.
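To make that fragmentation concrete, here is a minimal sketch assuming a simplified model of feed delivery -- the function and field names are invented for illustration and are not Facebook's actual implementation:

# A geo-blocked post simply drops out of some friends' news feeds.
# All names here are hypothetical; this is an illustrative model only.

def visible_to(post, viewer_country):
    # Feed rule: hide the post from viewers in blocked countries.
    return viewer_country not in post["blocked_countries"]

post = {"author": "user_a", "text": "...", "blocked_countries": {"TH"}}
friends = {"user_b": "TH", "user_c": "FR", "user_d": "US"}

recipients = [name for name, country in friends.items()
              if visible_to(post, country)]
print(recipients)  # ['user_c', 'user_d'] -- the friend in Thailand never sees it

Any replies from the friends who do see the post would then reference content the friend in Thailand cannot see, which is exactly the broken-conversation problem described above.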

As a result, Facebook has long tried to avoid using geography-specific rules when possible, according to people familiar with the company's thinking. However, it does geo-block in some instances, such as when it complied with a request from France to restrict access within its borders to a photo taken after the Nov. 13, 2015, terrorist attack at the Bataclan concert hall in Paris.

Bickert said Facebook takes into consideration the laws in countries where it operates, but doesn't always remove content at a government's request. "If there is something that violates a country's law but does not violate our standards," Bickert said, "we look at who is making that request: Is it the appropriate authority? Then we check to see if it actually violates the law. Sometimes we will make that content unavailable in that country only."

Facebook's goal is to create global rules. "We want to make sure that people are able to communicate in a borderless way," Bickert said.

Founded in 2004, Facebook began as a social network for college students. As it spread beyond campus, Facebook began to use content moderation as a way to compete with the other leading social network of that era, MySpace.

MySpace had positioned itself as the nightclub of the social networking world, offering profile pages that users could decorate with online glitter, colorful layouts and streaming music. It didn't require members to provide their real names and was home to plenty of nude and scantily clad photographs. And it was being investigated by law-enforcement agents across the country who worried it was being used by sexual predators to prey on children. (In a settlement with 49 state attorneys general, MySpace later agreed to strengthen protections for younger users.)

By comparison, Facebook was the buttoned-down Ivy League social network -- all cool grays and blues. Real names and university affiliations were required. Chris Kelly, who joined Facebook in 2005 and was its first general counsel, said he wanted to make sure Facebook didn't end up in law enforcement's crosshairs, like MySpace.

"We were really aggressive about saying we are a no-nudity platform," he said.

The company also began to tackle hate speech. "We drew some difficult lines while I was there -- Holocaust denial being the most prominent," Kelly said. After an internal debate, the company decided to allow Holocaust denials but reaffirmed its ban on group-based bias, which included anti-Semitism. Since Holocaust denial and anti-Semitism frequently went together, he said, the perpetrators were often suspended regardless.

"I've always been a pragmatist on this stuff," said Kelly, who left Facebook in 2010. "Even if you take the most extreme First Amendment positions, there are still limits on speech."

By 2008, the company had begun expanding internationally but its censorship rulebook was still just a single page with a list of material to be excised, such as images of nudity and Hitler. "At the bottom of the page it said, 'Take down anything else that makes you feel uncomfortable,'" said Dave Willner, who joined Facebook's content team that year.

Willner, who reviewed about 15,000 photos a day, soon found the rules were not rigorous enough. He and some colleagues worked to develop a coherent philosophy underpinning the rules, while refining the rules themselves. Soon he was promoted to head the content policy team.

By the time he left Facebook in 2013, Willner had shepherded a 15,000-word rulebook that remains the basis for many of Facebook's content standards today.

"There is no path that makes people happy," Willner said. "All the rules are mildly upsetting." Because of the volume of decisions -- many millions per day -- the approach is "more utilitarian than we are used to in our justice system," he said. "It's fundamentally not rights-oriented."

Willner's then-boss, Jud Hoffman, who has since left Facebook, said that the rules were based on Facebook's mission of "making the world more open and connected." Openness implies a bias toward allowing people to write or post what they want, he said.

But Hoffman said the team also relied on the principle of harm articulated by John Stuart Mill, a 19th-century English political philosopher. It states "that the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others." That led to the development of Facebook's "credible threat" standard, which bans posts that describe specific actions that could threaten others, but allows threats that are not likely to be carried out.

Eventually, however, Hoffman said "we found that limiting it to physical harm wasn't sufficient, so we started exploring how free expression societies deal with this."

The rules developed considerable nuance. There is a ban against pictures of Pepe the Frog, a cartoon character often used by "alt-right" white supremacists to propagate racist memes, but swastikas are allowed under a rule that permits the "display [of] hate symbols for political messaging." In the documents examined by ProPublica, which are used to train content reviewers, this rule is illustrated with a picture of Facebook founder Mark Zuckerberg that has been manipulated to apply a swastika to his sleeve.

The documents state that Facebook relies, in part, on the U.S. State Department's list of designated terrorist organizations, which includes groups such as al-Qaida, the Taliban and Boko Haram. But not all groups deemed terrorist by one country or another are included: A recent investigation by the Pakistan newspaper Dawn found that 41 of the 64 terrorist groups banned in Pakistan were operational on Facebook.

There is also a secret list, referred to but not included in the documents, of groups designated as hate organizations that are banned from Facebook. That list apparently doesn't include many Holocaust denial and white supremacist sites that are up on Facebook to this day, such as a group called "Alt-Reich Nation." A member of that group was recently charged with murdering a black college student in Maryland.

As the rules have multiplied, so have exceptions to them. Facebook's decision not to protect subsets of protected groups arose because some subgroups such as "female drivers" didn't seem especially sensitive. The default position was to allow free speech, according to a person familiar with the decision-making.

After the wave of Syrian immigrants began arriving in Europe, Facebook added a special "quasi-protected" category for migrants, according to the documents. They are only protected against calls for violence and dehumanizing generalizations, but not against calls for exclusion and degrading generalizations that are not dehumanizing. So, according to one document, migrants can be referred to as "filthy" but not called "filth." They cannot be likened to filth or disease "when the comparison is in the noun form," the document explains.
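The noun-versus-adjective distinction is easier to see in code. Below is a toy reconstruction under loudly stated assumptions -- the word lists and logic are invented for clarity and are not Facebook's actual classifier:

# Hypothetical sketch of the documented rule: migrants may be called
# "filthy" (a degrading adjective) but not likened to "filth" (a noun-form
# comparison, treated as dehumanizing). Word lists are invented examples.

DEHUMANIZING_NOUNS = {"filth", "disease"}    # noun-form comparisons: removed
DEGRADING_ADJECTIVES = {"filthy", "dirty"}   # degrading but not dehumanizing: allowed

def review_migrant_post(text):
    words = text.lower().split()
    if any(w in DEHUMANIZING_NOUNS for w in words):
        return "remove"
    # Degrading adjectives (and everything else) stay up under the
    # quasi-protected rule, which covers only dehumanizing generalizations.
    return "allow"

print(review_migrant_post("Migrants are filth"))   # remove
print(review_migrant_post("Migrants are filthy"))  # allow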

Facebook also added an exception to its ban against advocating for anyone to be sent to a concentration camp. "Nazis should be sent to a concentration camp," is allowed, the documents state, because Nazis themselves are a hate group.

The rule against posts that support violent resistance against a foreign occupier was developed because "we didn't want to be in a position of deciding who is a freedom fighter," Willner said. Facebook has since dropped the provision and revised its definition of terrorism to include nongovernmental organizations that carry out premeditated violence "to achieve a political, religious or ideological aim," according to a person familiar with the rules.

The Facebook policy appears to have had repercussions in many of the world's two dozen or more disputed territories. When Russia occupied Crimea in March 2014, many Ukrainians saw a surge in Facebook deleting posts and suspending profiles. Facebook's director of policy for the region, Thomas Myrup Kristensen, acknowledged at the time that it "found a small number of accounts where we had incorrectly removed content. In each case, this was due to language that appeared to be hate speech but was being used in an ironic way. In these cases, we have restored the content."

Katerina Zolotareva, 34, a Kiev-based Ukrainian working in communications, has been blocked so often that she runs four accounts under her name. Although she supported the "Euromaidan" protests in February 2014 that antagonized Russia, spurring its military intervention in Crimea, she doesn't believe that Facebook took sides in the conflict. "There is war in almost every field of Ukrainian life," she says, "and when war starts, it also starts on Facebook."

In Western Sahara, a disputed territory occupied by Morocco, a group of journalists called Equipe Media say Facebook disabled their account, which was their primary way to reach the outside world. They had to open a new account, which remains active.

"We feel we have never posted anything against any law," said Mohammed Mayarah, the group's general coordinator. "We are a group of media activists. We have the aim to break the Moroccan media blockade imposed since it invaded and occupied Western Sahara."

In Israel, which captured territory from its neighbors in a 1967 war and has occupied it since, Palestinian groups are blocked so often that they have their own hashtag, #FbCensorsPalestine, for it. Last year, for instance, Facebook blocked the accounts of several editors for two leading Palestinian media outlets from the West Bank -- Quds News Network and Shehab News Agency. After a couple of days, Facebook apologized and un-blocked the journalists' accounts. Earlier this year, Facebook blocked the account of Fatah, the Palestinian Authority's ruling party -- then un-blocked it and apologized.

Last year India cracked down on protesters in Kashmir, shooting pellet guns at them and shutting off cellphone service. Local insurgents are seeking autonomy for Kashmir, which is also caught in a territorial tussle between India and Pakistan. Posts of Kashmir activists were being deleted, and members of a group called the Kashmir Solidarity Network found that all of their Facebook accounts had been blocked on the same day.

Ather Zia, a member of the network and a professor of anthropology at the University of Northern Colorado, said that Facebook restored her account without explanation after two weeks. "We do not trust Facebook any more," she said. "I use Facebook, but it's almost this idea that we will be able to create awareness but then we might not be on it for long."

The rules are one thing. How they're applied is another. Bickert said Facebook conducts weekly audits of every single content reviewer's work to ensure that its rules are being followed consistently. But critics say that reviewers, who have to decide on each post within seconds, may vary in both interpretation and vigilance.

Facebook users who don't mince words in criticizing racism and police killings of racial minorities say that their posts are often taken down. Two years ago, Stacey Patton, a journalism professor at historically black Morgan State University in Baltimore, posed a provocative question on her Facebook page. She asked why "it's not a crime when White freelance vigilantes and agents of 'the state' are serial killers of unarmed Black people, but when Black people kill each other then we are 'animals' or 'criminals.'"

Although it doesn't appear to violate Facebook's policies against hate speech, her post was immediately removed, and her account was disabled for three days. Facebook didn't tell her why. "My posts get deleted about once a month," said Patton, who often writes about racial issues. She said she also is frequently put in Facebook "jail" -- locked out of her account for a period of time after a posting that breaks the rules.

"It's such emotional violence," Patton said. "Particularly as a black person, we're always have these discussions about mass incarceration, and then here's this fiber-optic space where you can express yourself. Then you say something that some anonymous person doesn't like and then you're in 'jail.'"

Didi Delgado, whose post stating that "white people are racist" was deleted, has been banned from Facebook so often that she has set up an account on another service called Patreon, where she posts the content that Facebook suppressed. In May, she deplored the increasingly common Facebook censorship of black activists in an article for Medium titled "Mark Zuckerberg Hates Black People."

Facebook also locked out Leslie Mac, a Michigan resident who runs a service called SafetyPinBox where subscribers contribute financially to "the fight for black liberation," according to her site. Her offense was writing a post stating "White folks. When racism happens in public -- YOUR SILENCE IS VIOLENCE."

The post does not appear to violate Facebook's policies. Facebook apologized and restored her account after TechCrunch wrote an article about Mac's punishment. Since then, Mac has written many other outspoken posts. But, "I have not had a single peep from Facebook," she said, while "not a single one of my black female friends who write about race or social justice have not been banned."

"My takeaway from the whole thing is: If you get publicity, they clean it right up," Mac said. Even so, like most of her friends, she maintains a separate Facebook account in case her main account gets blocked again.

Negative publicity has spurred other Facebook turnabouts as well. Consider the example of the iconic news photograph of a young naked girl running from a napalm bomb during the Vietnam War. Kate Klonick, a Ph.D. candidate at Yale Law School who has spent two years studying censorship operations at tech companies, said the photo had likely been deleted by Facebook thousands of times for violating its ban on nudity.

But last year, Facebook reversed itself after Norway's leading newspaper published a front-page open letter to Zuckerberg accusing him of "abusing his power" by deleting the photo from the newspaper's Facebook account.

Klonick said that while she admires Facebook's dedication to policing content on its website, she fears it is evolving into a place where celebrities, world leaders and other important people "are disproportionately the people who have the power to update the rules."

In December 2015, a month after terrorist attacks in Paris killed 130 people, the European Union began pressuring tech companies to work harder to prevent the spread of violent extremism online.

After a year of negotiations, Facebook, Microsoft, Twitter and YouTube agreed to the European Union's hate speech code of conduct, which commits them to review and remove the majority of valid complaints about illegal content within 24 hours and to be audited by European regulators. The first audit, in December, found that the companies were only reviewing 40 percent of hate speech within 24 hours, and only removing 28 percent of it. Since then, the tech companies have shortened their response times to reports of hate speech and increased the amount of content they are deleting, prompting criticism from free-speech advocates that too much is being censored.

Now the German government is considering legislation that would allow social networks such as Facebook to be fined up to 50 million euros if they don't remove hate speech and fake news quickly enough. Facebook recently posted an article assuring German lawmakers that it is deleting about 15,000 hate speech posts a month. Worldwide, over the last two months, Facebook deleted about 66,000 hate speech posts per week, vice president Richard Allan said in a statement Tuesday on the company's site.

Among posts that Facebook didn't delete were Donald Trump's comments on Muslims. Days after the Paris attacks, Trump, then running for president, posted on Facebook "calling for a total and complete shutdown of Muslims entering the United States until our country's representatives can figure out what is going on."

Candidate Trump's posting -- which has come back to haunt him in court decisions voiding his proposed travel ban -- appeared to violate Facebook's rules against "calls for exclusion" of a protected religious group. Zuckerberg decided to allow it because it was part of the political discourse, according to people familiar with the situation.

However, one person close to Facebook's decision-making said Trump may also have benefited from the exception for sub-groups. A Muslim ban could be interpreted as being directed against a sub-group, Muslim immigrants, and thus might not qualify as hate speech against a protected category.

Hannes Grassegger is a reporter for Das Magazin and Reportagen Magazine based in Zurich.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

Dozens Of Uber Employees Fired Or Investigated For Harassment; Uber And Lyft Drivers Unaware of Safety Recalls

Ride-sharing companies are in the news again and probably not for the reasons their management executives would prefer. First, TechCrunch reported on Thursday:

"... at a staff meeting in San Francisco, Uber executives revealed to the company’s 12,000 employees that 20 of their colleagues had been fired and that 57 are still being probed over harassment, discrimination and inappropriate behavior, following a string of accusations that Uber had created a toxic workplace and allowed complaints to go unaddressed for years. Those complaints had pushed Uber into crisis mode earlier this year. But the calamity may be just beginning... Uber fired senior executive Eric Alexander after it was leaked to Recode that Alexander had obtained the medical records of an Uber passenger in India who was raped in 2014 by her driver."

"Recode also reported that Alexander had shared the woman’s file with Kalanick and his senior vice president, Emil Michael, and that the three men suspected the woman of working with Uber’s regional competitor in India, Ola, to hamper its chances of success there. Uber eventually settled a lawsuit brought by the woman against the company..."

News broke in March 2017 about both the Recode article and the Greyball program Uber used to thwart local government code inspections. In February, a former Uber employee shared a disturbing story with allegations of sexual harassment.

Second, the investigative team at WBZ-TV, the local CBS affiliate in Boston, reported that many Uber and Lyft drivers are unaware of safety recalls affecting their vehicles. This could make rides in these cars unsafe for passengers:

"Using an app from Carfax, we quickly checked the license plates of 167 Uber and Lyft cars picking up passengers at Logan Airport over a two day period. Twenty-seven of those had open safety recalls or about 16%. Recalls are issued when a manufacturer identifies a mechanical problem that needs to be fixed for safety reasons. A recent example is the millions of cars that were recalled when it was determined the airbags made by Takata could release shrapnel when deployed in a crash."

Both ride-sharing companies treat drivers as independent contractors. WBZ-TV reported:

"Uber told the [WBZ-TV investigative] Team that drivers are contractors and not employees of the company. A spokesperson said they provide resources to drivers and encourage them to check for recalls and to perform routine maintenance. Drivers are also reminded quarterly to check with NHTSA for recall information."

According to Massachusetts Bar Association president Jeffrey Catalano, the responsibility to make sure the car is safe for passengers lies mainly with the driver. But because Uber and Lyft both advertise their commitment to safety on their websites, they too could be held responsible.

Trump Is Not the Only One Blocking Constituents on Twitter

[Editor's note: today's guest blog post, by the reporters at ProPublica, explores the emerging debate about the appropriate, perhaps ethical, use of social media by publicly elected officials and persons campaigning for office. Should they be able to block constituents posting views they dislike or disagree with? Is it really public speech when it takes place on a privately-run social networking site? Would you vote for a person who blocks constituents? Do companies operating social networking sites have a responsibility in this? Today's post is reprinted with permission.]

by Charles Ornstein, ProPublica

As President Donald Trump faces criticism for blocking users on his Twitter account, people across the country say they, too, have been cut off by elected officials at all levels of government after voicing dissent on social media.

In Arizona, a disabled Army veteran grew so angry when her congressman blocked her and others from posting dissenting views on his Facebook page that she began delivering actual blocks to his office.

A central Texas congressman has barred so many constituents on Twitter that a local activist group has begun selling T-shirts complaining about it.

And in Kentucky, the Democratic Party is using a hashtag, #BevinBlocked, to track those who've been blocked on social media by Republican Gov. Matt Bevin. (Most of the officials blocking constituents appear to be Republican.)

The growing combat over social media is igniting a new-age legal debate over whether losing this form of access to public officials violates constituents' First Amendment rights to free speech and to petition the government for a redress of grievances. Those who've been blocked say it's akin to being thrown out of a town hall meeting for holding up a protest sign.

On Tuesday, the Knight First Amendment Institute at Columbia University called upon Trump to unblock people who've disagreed with him or directed criticism at him or his family via the @realdonaldtrump account, which he used prior to becoming president and continues to use as his principal Twitter outlet.

Trump blocked me after this tweet. Let's all hope the courts continue to protect us. Never stop resisting. pic.twitter.com/TlR4zgHCoU

-- Nick Jack Pappas (@Pappiness) June 5, 2017

"Though the architects of the Constitution surely didn't contemplate presidential Twitter accounts, they understood that the president must not be allowed to banish views from public discourse simply because he finds them objectionable," Jameel Jaffer, the Knight Institute's executive director, said in a statement.

The White House did not respond to a request for comment, but press secretary Sean Spicer said earlier Tuesday that statements the president makes on Twitter should be regarded as official statements.

Similar flare-ups have been playing out in state after state.

Earlier this year, the American Civil Liberties Union of Maryland called on Governor Larry Hogan, a Republican, to stop deleting critical comments and barring people from commenting on his Facebook page. (The Washington Post reported that the governor had blocked 450 people as of February.)

Deborah Jeon, the ACLU's legal director, said Hogan and other elected officials are increasingly forgoing town hall meetings and instead relying on social media as their primary means of communication with constituents. "That's why it's so problematic," she said. "If people are silenced in that medium," they can't effectively interact with their elected representative.

The governor's office did not respond to a request for comment this week. After the letter, however, it reinstated six of the seven people specifically identified by the ACLU (it said it couldn't find the seventh). "While the ACLU should be focusing on much more important activities than monitoring the governor's Facebook page, we appreciated them identifying a handful of individuals -- out of the over 1 million weekly viewers of the page -- that may have been inadvertently denied access," a spokeswoman for the governor told the Post.

Practically speaking, being blocked cuts off constituents from many forms of interacting with public officials. On Facebook, it means no posts, no likes and no questions or comments during live events on the page of the blocker. Even older posts that may not be offensive are taken down. On Twitter, being blocked prevents a user from seeing the other person's tweets on his or her timeline.

Moreover, while Twitter and Facebook themselves usually suspend account holders only temporarily for breaking rules, many elected officials don't have established policies for constituents who want to be reinstated. Sometimes a call is enough to reverse it; other times it's not.

Eugene Volokh, a constitutional law professor at the UCLA School of Law, said that for municipalities and public agencies, such as police departments, social media accounts would generally be considered "limited public forums" and therefore, should be open to all.

"Once they open it up to public comments, they can't then impose viewpoint-based restrictions on it," he said, for instance allowing only supportive comments while deleting critical ones.

But legislators are different because they are individual people, not agencies. Elected officials can have personal accounts, campaign accounts and officeholder accounts that may appear quite similar. On their personal and campaign accounts, there's little disagreement that officials can engage with -- or block -- whomever they want. Last month, for instance, ProPublica reported how Rep. Peter King (Republican, New York) blocked users on his campaign account after they criticized his positions on health reform and other issues.

But what about their officeholder social media accounts?

The ACLU's Jeon says that they should be public if they use government resources, including staff time and office equipment to maintain the page. "Where that's the situation and taxpayer resources are going to it, then the full power of the First Amendment applies," she said. "It doesn't matter if they're members of Congress or the governor or a local councilperson."

Volokh of UCLA disagreed. He said that members of Congress are entitled to their own private speech, even on official pages. That's because each is one voice among many, as opposed to a governor or mayor. "It's clear that whatever my senator is, she's not the government. She is one person who is part of a legislative body," he said. "She was elected because she has her own views and it makes sense that if she has a Twitter feed or a Facebook page, that may well be seen as not government speech but the voice of somebody who may be a government official."

Volokh said he's inclined to see Trump's @realdonaldtrump account as a personal one, though other legal experts disagree.

"You could imagine actually some other president running this kind of account in a way that's very public minded -- 'I'm just going to express the views of the executive branch,'" he said. "The @realdonaldtrump account is very much, 'I'm Donald Trump. I'm going to be expressing my views, and if you don't like it, too bad for you.' That sounds like private speech, even done by a government official on government property."

It's possible the fight over the president's Twitter account will end up in court, as such disputes have across the country. Generally, in these situations, the people contesting the government's social media policies have reached settlements ending the questionable practices.

After being sued by the ACLU, three cities in Indiana agreed last year to change their policies by no longer blocking users or deleting comments.

In 2014, a federal judge ordered the City and County of Honolulu to pay $31,000 in attorney's fees to people who sued, contending that the Honolulu Police Department violated their constitutional rights by deleting their critical Facebook posts.

And San Diego County agreed to pay the attorney's fees of a gun parts dealer who sued after its Sheriff's Department deleted two Facebook posts that were critical of the sheriff and banned the dealer from commenting. The department took down its Facebook page after being sued and paid the dealer $20 as part of the settlement.

Angela Greben, a California paralegal, has spent the past two years gathering information about agencies and politicians that have blocked people on social media -- Democrats and Republican alike -- filing ethics complaints and even a lawsuit against the city of San Mateo, California, its mayor and police department. (They settled with her, giving her some of what she wanted.)

Greben has filed numerous public-records requests to agencies as varied as the Transportation Security Administration, the Seattle Police Department and the Connecticut Lottery seeking lists of people they block. She's posted the results online.

"It shouldn't be up to the elected official to decide who can tweet them and who can't," she said. "Everybody deserves to be treated equally and fairly under the law."

Even though she lives in California, Greben recently filed an ethics complaint against Atlanta Mayor Kasim Reed, a Democrat, who has been criticized for blocking not only constituents but also journalists who cover him. Reed has blocked Greben since 2015 when she tweeted about him... well, blocking people on Twitter. "He's notorious for blocking and muting people," she said, meaning he can't see their tweets but they can still see his.

@LizLemeryJoy @KasimReed Mr. Mayor you are violating the #civilrights of all you have #blocked! @Georgia_AG @FOX5Atlanta @11AliveNews

-- Angela Greben (@AngelaGreben) March 7, 2015

In a statement, a city spokeswoman defended the mayor, saying he's now among the top five most-followed mayors in the country. "Mayor Reed uses social media as a personal platform to engage directly with constituents and some journalists. ... Like all Twitter users, Mayor Reed has the right to stop engaging in conversations when he determines they are unproductive, intentionally inflammatory, dishonest and/or misleading."

Asked how many people he has blocked, she replied that the office doesn't keep such a list.

J'aime Morgaine, the Arizona veteran who delivered blocks to the office of Rep. Paul Gosar, a Republican, said being blocked on Facebook matters because her representative no longer hosts in-person town hall meetings and has started to answer questions on Facebook Live. Now she can't ask questions or leave comments.

"I have lost and other people who have been blocked have lost our right to participate in the democratic process," said Morgaine, leader of Indivisible Kingman, a group that opposes the president's agenda. "I am outraged that my congressman is blocking my voice and trampling upon my constitutional rights."

@RepGosar ..You weren't home when I delivered this message to your office, but no worries...there WILL be more! Stop BLOCKING Constituents! pic.twitter.com/JTWGQwhxKt

-- Indivisible Kingman (@IndivisibleCD4) May 13, 2017

Morgaine said the rules are not being applied equally. "They're not blocking everybody who's angry," she said. "They're blocking the voices of dissent, and there's no process for getting unblocked. There's no appeals process. There's no accountability."

A spokeswoman for Gosar defended his decision to block constituents but did not answer a question about how many have been blocked.

"Congressman Gosar's policy has been consistent since taking office in January 2010," spokeswoman Kelly Roberson said in an email. "In short: 2018Users whose comments or posts consist of profanity, hate speech, personal attacks, homophobia or Islamophobia may be banned.'"

On his Facebook page, Gosar posts the policy that guides his actions. It says in part, "Users are banned to promote healthy, civil dialogue on this page but are welcome to contact Congressman Gosar using other methods," including phone calls, emails and letters.

Sometimes, users are blocked repeatedly.

Community volunteer Gayle Lacy was named 2015 Wacoan of the Year for her effort to have the site of mammoth fossils in Waco, Texas, designated a national monument. Lacy's latest fight has been with her congressman, Bill Flores, who was with her in the Oval Office when Obama designated the site a national monument in 2015. She has been blocked three times by Flores' congressional Twitter account and once by his campaign account. One of those blocks happened after she tweeted at him: "My father died in service for this country, but you are not representative of that country and neither is your dear leader."

Lacy said she was able to get unblocked each time from Flores' congressional account by calling his office but remains blocked on the campaign one. "I don't know where to call," she said. "I asked in his D.C. office who I needed to call and I was told that they don't have that information."

Lacy and others said Flores blocks those who question him. Austin lawyer Matt Miller said he was blocked for asking when Flores would hold a town hall meeting. "It's totally inappropriate to block somebody, especially for asking a legitimate question of my elected representative," Miller said.

In a statement, Flores spokesman Andre Castro said Flores makes his policies clear on Twitter and on Facebook. "We reserve the right to block users whose comments include profanity, name-calling, threats, personal attacks, constant harping, inappropriate or false accusations, or other inappropriate comments or material. As the Congressman likes to say -- 'If you would not say it to your grandmother, we will not allow it here.'"

Ricardo Guerrero, an Austin marketer who is one of the leaders of a local group opposed to Trump's agenda, said he has gotten unblocked by Flores twice but then was blocked again and "just kind of gave up."

"He's creating an echo chamber of only the people that agree with him," Guerrero said of Flores. "He's purposefully removing any semblance of debate or alternative ideas or ideas that challenge his own -- and that seems completely undemocratic. That's the bigger issue in my mind."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

3 Strategies To Defend GOP Health Bill: Euphemisms, False Statements and Deleted Comments

[Editor's Note: today's guest post is by the reporters at ProPublica. Affordable health care and coverage are important to many, if not most, Americans. It is reprinted with permission.]

by Charles Ornstein, ProPublica

Earlier this month, a day after the House of Representatives passed a bill to repeal and replace major parts of the Affordable Care Act, Ashleigh Morley visited her congressman's Facebook page to voice her dismay.

"Your vote yesterday was unthinkably irresponsible and does not begin to account for the thousands of constituents in your district who rely upon many of the services and provisions provided for them by the ACA," Morley wrote on the page affiliated with the campaign of Representative Peter King (Republican, New York). "You never had my vote and this confirms why."

The next day, Morley said, her comment was deleted and she was blocked from commenting on or reacting to King's posts. The same thing has happened to others critical of King's positions on health care and other matters. King has deleted negative feedback and blocked critics from his Facebook page, several of his constituents say, sharing screenshots of comments that are no longer there.

"Having my voice and opinions shut down by the person who represents me -- especially when my voice and opinion wasn't vulgar and obscene -- is frustrating, it's disheartening, and I think it points to perhaps a larger problem with our representatives and maybe their priorities," Morley said in an interview.

King's office did not respond to requests for comment.

As Republican members of Congress seek to roll back the Affordable Care Act, commonly called Obamacare, and replace it with the American Health Care Act, they have adopted various strategies to influence and cope with public opinion, which polls show mostly opposes their plan. ProPublica, with our partners at Kaiser Health News, Stat and Vox, has been fact-checking members of Congress in this debate and we've found misstatements on both sides, though more by Republicans than Democrats. The Washington Post's Fact Checker has similarly found misstatements by both sides.

Today, we're back with more examples of how legislators are interacting with constituents about repealing Obamacare, whether online or in traditional correspondence. Their more controversial tactics seem to fall into three main categories: providing incorrect information, using euphemisms for the impact of their actions, and deleting comments critical of them. (Share your correspondence with members of Congress with us.)

Incorrect Information

Representative Vicky Hartzler (Republican, Missouri) sent a note to constituents this month explaining her vote in favor of the Republican bill. First, she outlined why she believes the ACA is not sustainable -- namely, higher premiums and fewer choices. Then she said it was important to have a smooth transition from one system to another.

"This is why I supported the AHCA to follow through on our promise to have an immediate replacement ready to go should the ACA be repealed," she wrote. "The AHCA keeps the ACA for the next three years then phases in a new approach to give people, states, and insurance markets plenty of time to make adjustments."

Except that's not true.

"There are quite a number of changes in the AHCA that take effect within the next three years," wrote ACA expert Timothy Jost, an emeritus professor at Washington and Lee University School of Law, in an email to ProPublica.

The current law's penalties on individuals who do not purchase insurance and on employers who do not offer it would be repealed retroactively to 2016, which could remove the incentive for some employers to offer coverage to their workers. Moreover, beginning in 2018, older people could be charged premiums up to five times as much as younger people -- up from three times under current law. The way in which premium tax credits would be calculated would change as well, benefiting younger people at the expense of older ones, Jost said.

"It is certainly not correct to say that everything stays the same for the next three years," he wrote.

In an email, Hartzler spokesman Casey Harper replied, "I can see how this sentence in the letter could be misconstrued. It's very important to the Congresswoman that we give clear, accurate information to her constituents. Thanks for pointing that out."

Other lawmakers have similarly shared incorrect information after voting to repeal the ACA. Representative Diane Black (Republican, Tennessee) wrote in a May 19 email to a constituent that "in 16 of our counties, there are no plans available at all. This system is crumbling before our eyes and we cannot wait another year to act."

Black was referring to the possibility that, in 16 Tennessee counties around Knoxville, there might not have been any insurance options in the ACA marketplace next year. However, 10 days earlier, before she sent her email, BlueCross BlueShield of Tennessee announced that it was willing to provide coverage in those counties and would work with the state Department of Commerce and Insurance "to set the right conditions that would allow our return."

"We stand by our statement of the facts, and Congressman Black is working hard to repeal and replace Obamacare with a system that actually works for Tennessee families and individuals," her deputy chief of staff Dean Thompson said in an email.

On the Democratic side, the Washington Post Fact Checker has called out representatives for saying the AHCA would consider rape or sexual assault as pre-existing conditions. The bill would not do that, although critics counter that any resulting mental health issues or sexually transmitted diseases could be considered existing illnesses.


Euphemisms

A number of lawmakers have posted information taken from talking points put out by the House Republican Conference that try to frame the changes in the Republican bill as kinder and gentler than most experts expect them to be.

An answer to one frequently asked question pushes back against criticism that the Republican bill would gut Medicaid, the federal-state health insurance program for the poor, and appears on the websites of Representative Garret Graves (Republican, Louisiana) and others.

"Our plan responsibly unwinds Obamacare's Medicaid expansion," the answer says. "We freeze enrollment and allow natural turnover in the Medicaid program as beneficiaries see their life circumstances change. This strategy is both fiscally responsible and fair, ensuring we don't pull the rug out on anyone while also ending the Obamacare expansion that unfairly prioritizes able-bodied working adults over the most vulnerable."

That is highly misleading, experts say.

The Affordable Care Act allowed states to expand Medicaid eligibility to anyone who earned less than 138 percent of the federal poverty level, with the federal government picking up almost the entire tab. Thirty-one states and the District of Columbia opted to do so. As a result, the program now covers more than 74 million beneficiaries, nearly 17 million more than it did at the end of 2013.

The GOP health care bill would pare that back. Beginning in 2020, it would reduce the share the federal government pays for new enrollees in the Medicaid expansion to the rate it pays for other enrollees in the state, which is considerably less. Also in 2020, the legislation would cap the spending growth rate per Medicaid beneficiary. As a result, a Congressional Budget Office review released Wednesday estimates that millions of Americans would become uninsured.

Sara Rosenbaum, a professor of health law and policy at the Milken Institute School of Public Health at George Washington University, said the GOP's characterization of its Medicaid plan is wrong on many levels. People naturally cycle on and off Medicaid, she said, often because of temporary events, not changing life circumstances -- seasonal workers, for instance, may see their wages rise in summer months before falling back.

"A terrible blow to millions of poor people is recast as an easing off of benefits that really aren't all that important, in a humane way," she said.

Moreover, the GOP bill actually would speed up the "natural turnover" in the Medicaid program, said Diane Rowland, executive vice president of the Kaiser Family Foundation, a health care think tank. Under the ACA, states were only permitted to recheck enrollees' eligibility for Medicaid once a year because cumbersome paperwork requirements have been shown to cause people to lose their coverage. The American Health Care Act would require these checks every six months -- and even give states more money to conduct them.

Rowland also took issue with the GOP talking point that the expansion "unfairly prioritizes able-bodied working adults over the most vulnerable." At a House Energy and Commerce Committee hearing earlier this year, GOP representatives maintained that the Medicaid expansion may be creating longer waits for home- and community-based programs for sick and disabled Medicaid patients needing long-term care, "putting care for some of the most vulnerable Americans at risk."

Research from the Kaiser Family Foundation, however, showed that there was no relationship between waiting lists and states that expanded Medicaid. Such waiting lists pre-dated the expansion and they were worse in states that did not expand Medicaid than in states that did.

"This is a complete misrepresentation of the facts," Rosenbaum said.

Graves' office said the information on his site came from the House Republican Conference. Emails to the conference's press office were not returned.

The GOP talking points also play up a new Patient and State Stability Fund included in the AHCA, which is intended to defray the costs of covering people with expensive health conditions. "All told, $130 billion dollars would be made available to states to finance innovative programs to address their unique patient populations," the information says. "This new stability fund ensures these programs have the necessary funding to protect patients while also giving states the ability to design insurance markets that will lower costs and increase choice."

The fund was modeled after a program in Maine, called an invisible high-risk pool, which advocates say has kept premiums in check in the state. But Senator Susan Collins (Republican, Maine) says the House bill's stability fund wasn't allocated enough money to keep premiums stable.

"In order to do the Maine model 2014 which I've heard many House people say that is what they're aiming for -- it would take $15 billion in the first year and that is not in the House bill," Collins told Politico. "There is actually $3 billion specifically designated for high-risk pools in the first year."

Deleting Comments

Morley, 28, a branded content editor who lives in Seaford, New York, said she moved into Representative King's Long Island district shortly before the 2016 election. She said she did not vote for him and, like many others across the country, said the election results galvanized her into becoming more politically active.

Earlier this year, Morley found an online conversation among King's constituents who said their critical comments were being deleted from his Facebook page. Because she doesn't agree with King's stances, she said she wanted to reserve her comment for an issue she felt strongly about.

A day after the House voted to repeal the ACA, Morley posted her thoughts. "I kind of felt that that was when I wanted to use my one comment, my one strike as it would be," she said.

By noon the next day, it had been deleted and she had been blocked.

"I even wrote in my comment that you can block me but I'm still going to call your office," Morley said in an interview.

Some negative comments about King remain on his Facebook page. But King's critics say his deletions fit a broader pattern. He has declined to hold an in-person town hall meeting this year, saying, "to me all they do is just turn into a screaming session," according to CNN. He held a telephonic town hall meeting but only answered a small fraction of the questions submitted. And he met with Liuba Grechen Shirley, the founder of a local Democratic group in his district, but only after her group held a protest in front of his office that drew around 400 people.

"He's not losing his health care," Grechen Shirley said. "It doesn't affect him. It's a death sentence for many and he doesn't even care enough to meet with his constituents."

King's deleted comments even caught the eye of Andy Slavitt, who until January was the acting administrator of the Centers for Medicare and Medicaid Services. Slavitt has been traveling the country pushing back against attempts to gut the ACA.

.@RepPeteKing, are you silencing your constituents who send you questions? Assume ppl in district will respond if this is happening.

-- Andy Slavitt (@ASlavitt) May 12, 2017

Since the election, other activists across the country who oppose the president's agenda have posted online that they have been blocked from following their elected officials on Twitter or commenting on their Facebook pages because of critical statements they've made about the AHCA and other issues.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.