
Bungled Software Update Renders Customers' Smart Door Locks Inoperable

Image: LockState RemoteLock 6i device. A bungled software update by LockState, maker of WiFi-enabled door locks, rendered many customers' locks inoperable -- or "bricked." LockState notified affected customers in this letter:

"Dear Lockstate Customer,
We notified you earlier today of a potential issue with your LS6i lock. We are sorry to inform you about some unfortunate news. Your lock is among a small subset of locks that had a fatal error rendering it inoperable. After a software update was sent to your lock, it failed to reconnect to our web service making a remote fix impossible...

Many Airbnb operators use smart locks by LockState to secure their properties. On its website, LockState promotes the LS6i lock as:

"... perfect for your rental property, home or office use. This robust WiFi enabled door lock allows users to lock or unlock doors remotely, know when people unlock your door, and even receive text alerts when codes are used. Issue new codes or delete codes from your computer or phone. Even give temporary codes to guests or office personnel."

Reportedly, about 200 Airbnb customers were affected, and LockState said about 500 locks were affected in total. ArsTechnica explained how the bungled software update happened:

"The failure occurred last Monday when LockState mistakenly sent some 6i lock models a firmware update developed for 7i locks. The update left earlier 6i models unable to be locked and no longer able to receive over-the-air updates."

Some affected customers shared their frustrations on the company's Twitter page. LockState said the affected locks can still be operated with physical keys. While that is helpful, it isn't a solution, since customers rely upon the remote features. Affected customers have two repair options: 1) return the back portion of the lock (repair time about 5 to 7 days), or 2) request a replacement lock (response time about 14 to 18 days).

The whole situation seems to be another reminder of the limitations of smart devices that depend on over-the-air firmware updates. And a better disclosure letter by LockState would have explained the corrections to internal systems and managerial processes to ensure this doesn't happen again during a future software update.

What are your opinions?


Survey: Online Harassment In 2017

What is online life like for many United States residents? A recent survey by the Pew Research Center provides a good view: 41 percent of adults surveyed have personally experienced online harassment, and even more (66 percent) have witnessed online harassment directed at others.

Image: Types of behaviors, from Pew Research's Online Harassment 2017 survey. The types of online harassment behaviors vary from the less severe (e.g., offensive name calling, efforts to embarrass someone) to the more severe (e.g., physical threats, harassment over a sustained period, sexual harassment, stalking). 18 percent of survey participants -- nearly one out of every five persons -- reported that they had experienced severe behaviors.

Americans reported that social networking sites are the most common locations for online harassment. Of the 41 percent of survey participants who personally experienced online harassment, most (82 percent) said the harassment occurred on a single site, and 58 percent named social media.

The reasons vary. 14 percent of survey respondents reported they had been harassed online specifically because of their politics; 9 percent reported that they were targeted due to their physical appearance; 8 percent said they were targeted due to their race or ethnicity; and 8 percent said they were targeted due to their gender. 5 percent said they were targeted due to their religion, and 3 percent said they were targeted due to their sexual orientation.

Some groups experience online harassment more than others. Pew found that younger adults, under age 30, are more likely to experience severe forms of online harassment. Similarly, younger adults are also more likely to witness online harassment targeting others. Pew also found:

"... one-in-four blacks say they have been targeted with harassment online because of their race or ethnicity, as have one-in-ten Hispanics. The share among whites is lower (3%). Similarly, women are about twice as likely as men to say they have been targeted as a result of their gender (11% vs. 5%). Men, however, are around twice as likely as women to say they have experienced harassment online as a result of their political views (19% vs. 10%). Similar shares of Democrats and Republicans say they have been harassed online..."

The impacts upon victims vary, too:

"... ranging from mental or emotional stress to reputational damage or even fear for one’s personal safety. At the same time, harassment does not have to be experienced directly to leave an impact. Around one-quarter of Americans (27%) say they have decided not to post something online after witnessing the harassment of others, while more than one-in-ten (13%) say they have stopped using an online service after witnessing other users engage in harassing behaviors..."

Image: Different attitudes by gender, from Pew Research's Online Harassment 2017 survey. Attitudes also vary by gender. More women than men consider online harassment a "major problem," and men prioritize free speech over online safety while women prioritize safety first. And, 83 percent of young women (ages 18 - 29) viewed online harassment as a major problem. Perhaps most importantly, persons who have "faced severe forms of online harassment differ in experiences, reactions, and attitudes."

Pew Research also found that persons who experience severe forms of online harassment, "are more likely to be targeted for personal characteristics and to face offline consequences." So, what happens online doesn't necessarily stay online.

The perpetrators vary, too. Of the 41 percent of survey participants who personally experienced online harassment, 34 percent said the perpetrator was a stranger, and 31 percent said they didn't know the perpetrator's real identity. Also, 26 percent said the perpetrator was an acquaintance, followed by friend (18 percent), family member (11 percent), former romantic partner (7 percent), and coworker (5 percent).

Pew Research found that the number of Americans who have experienced online harassment has increased slightly from 35 percent in a 2014 survey. Pew Research Center surveyed 4,248 U.S. adults during January 9 - 23, 2017.

Next Steps
62 percent of survey participants view online harassment as a major problem, while 5 percent do not consider it a problem at all. People who have experienced severe forms of online harassment said that they have already taken action. Those actions include a mix of: a) setting up or adjusting privacy settings for their profiles in online services, b) reporting offensive content to the online service, c) responding directly to the harasser, d) offering support to others targeted, e) changing information in their online profiles, and f) stopping use of specific online services.

Views vary about which entities bear responsibility for solutions. 79 percent of survey respondents said that online services have a duty to intervene when harassment occurs on their service. 35 percent believe that better policies and tools from online services are the best way to address online harassment.

Meanwhile, 60 percent said that bystanders who witness online harassment "should play a major role in addressing this issue," and 15 percent view peer pressure as an effective solution. 49 percent said law enforcement should play a major role in addressing online harassment, while 31 percent said stronger laws are needed. Perhaps most troubling:

"... a sizable proportion of Americans (43%) say that law enforcement currently does not take online harassment incidents seriously enough."

Among persons who have experienced severe forms of online harassment, 55 percent said that law enforcement does not take the incidents seriously enough. Compare that statistic with this: nearly three-quarters (73 percent) of young men (ages 18 - 29) feel that offensive online content is taken too seriously.

And Americans are highly divided about how to balance safety concerns versus free speech:

"When asked how they would prioritize these competing interests, 45% of Americans say it is more important to let people speak their minds freely online; a slightly larger share (53%) feels that it is more important for people to feel welcome and safe online.

Americans are also relatively divided on just how seriously offensive content online should be treated. Some 43% of Americans say that offensive speech online is too often excused as not being a big deal, but a larger share (56%) feel that many people take offensive content online too seriously."

With such divergent views, one wonders if the problem of online harassment can be easily solved. What are your opinions about online harassment?


Facebook's Secret Censorship Rules Protect White Men from Hate Speech But Not Black Children

[Editor's Note: today's guest post, by the reporters at ProPublica, explores how social networking sites practice censorship to combat violence and hate speech, plus related practices such as "geo-blocking." It is reprinted with permission.]

by Julia Angwin, ProPublica, and Hannes Grassegger, special to ProPublica

In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of "radicalized" Muslims. "Hunt them, identify them, and kill them," declared U.S. Rep. Clay Higgins, a Louisiana Republican. "Kill them all. For the sake of all that is good and righteous. Kill them all."

Higgins' plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.

But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

"All white people are racist. Start from this reference point, or you've already failed," Delgado wrote. The post was removed and her Facebook account was disabled for seven days.

A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook's censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins' incitement to violence passed muster because it targeted a specific sub-group of Muslims -- those that are "radicalized" -- while Delgado's post was deleted for attacking whites in general.

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn't be allowed, in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of "fake news" -- fabricated stories that circulated on Facebook like "Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement" -- and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 "Arab Spring" with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company's hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of "violence to resist occupation of an internationally recognized state." The company's workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.

One document trains content reviewers on how to apply the company's global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.

The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at "protected categories" -- based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about "subsets" of protected categories. White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected. (The exact rules are in the slide show below.)
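ProPublica's description of the training slide amounts to a simple boolean rule, which a short sketch makes concrete. This is only an illustration of the logic as the article describes it, not Facebook's actual code; the trait labels in the examples are assumptions:

```python
# Sketch of the "protected category vs. subset" rule as described
# by ProPublica. The category list comes from the article; the
# function and trait labels are hypothetical.

PROTECTED_TRAITS = {
    "race", "sex", "gender identity", "religious affiliation",
    "national origin", "ethnicity", "sexual orientation",
    "serious disability/disease",
}

def is_protected_group(traits: set) -> bool:
    """A group is shielded from attacks only if EVERY trait that
    defines it is a protected category; adding any unprotected
    qualifier creates an unprotected 'subset'."""
    return all(t in PROTECTED_TRAITS for t in traits)

# "white men": race + sex, both protected -> protected
print(is_protected_group({"race", "sex"}))                        # True
# "female drivers": sex + occupation -> subset, unprotected
print(is_protected_group({"sex", "occupation"}))                  # False
# "black children": race + age -> subset, unprotected
print(is_protected_group({"race", "age"}))                        # False
# "radicalized Muslims": religion + behavior -> subset, unprotected
print(is_protected_group({"religious affiliation", "behavior"}))  # False
```

Under this rule, any unprotected qualifier strips a group of protection, which is exactly why "white men" is shielded while "black children" and "radicalized Muslims" are not.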

The Facebook Rules

Facebook has used these rules to train its "content reviewers" to decide whether to delete or allow posts. Facebook says the exact wording of its rules may have changed slightly in more recent versions. ProPublica recreated the slides.

Behind this seemingly arcane distinction lies a broader philosophy. Unlike American law, which permits preferences such as affirmative action for racial minorities and women for the sake of diversity or redressing discrimination, Facebook's algorithm is designed to defend all races and genders equally.

"Sadly," the rules are "incorporating this color-blindness idea which is not in the spirit of why we have equal protection," said Danielle Citron, a law professor and expert on information privacy at the University of Maryland. This approach, she added, will "protect the people who least need it and take it away from those who really need it."

But Facebook says its goal is different -- to apply consistent standards worldwide. "The policies do not always lead to perfect outcomes," said Monika Bickert, head of global policy management at Facebook. "That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share."

Facebook's rules constitute a legal world of their own. They stand in sharp contrast to the United States' First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company's hate speech algorithm. But they also differ -- for example, in permitting postings that deny the Holocaust -- from more restrictive European standards.

The company has long had programs to remove obviously offensive material like child pornography from its stream of images and commentary. Recent articles in the Guardian and Süddeutsche Zeitung have detailed the difficult choices that Facebook faces regarding whether to delete posts containing graphic violence, child abuse, revenge porn and self-mutilation.

The challenge of policing political expression is even more complex. The documents reviewed by ProPublica indicate, for example, that Donald Trump's posts about his campaign proposal to ban Muslim immigration to the United States violated the company's written policies against "calls for exclusion" of a protected group. As The Wall Street Journal reported last year, Facebook exempted Trump's statements from its policies at the order of Mark Zuckerberg, the company's founder and chief executive.

The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.

Users whose posts are removed are not usually told what rule they have broken, and they cannot generally appeal Facebook's decision. Appeals are currently only available to people whose profile, group or page is removed.

The company has begun exploring adding an appeals process for people who have individual pieces of content deleted, according to Bickert. "I'll be the first to say that we're not perfect every time," she said.

Facebook is not required by U.S. law to censor content. A 1996 federal law gave most tech companies, including Facebook, legal immunity for the content users post on their services. The law, Section 230 of the Communications Decency Act (enacted as part of the 1996 Telecommunications Act), was passed after Prodigy was sued and held liable for defamation for a post written by a user on a computer message board.

The law freed up online publishers to host online forums without having to legally vet each piece of content before posting it, the way that a news outlet would evaluate an article before publishing it. But early tech companies soon realized that they still needed to supervise their chat rooms to prevent bullying and abuse that could drive away users.

America Online convinced thousands of volunteers to police its chat rooms in exchange for free access to its service. But as more of the world connected to the internet, the job of policing became more difficult and companies started hiring workers to focus on it exclusively. Thus the job of content moderator -- now often called content reviewer -- was born.

In 2004, attorney Nicole Wong joined Google and persuaded the company to hire its first-ever team of reviewers, who responded to complaints and reported to the legal department. Google needed "a rational set of policies and people who were trained to handle requests," for its online forum called Groups, she said.

Google's purchase of YouTube in 2006 made deciding what content was appropriate even more urgent. "Because it was visual, it was universal," Wong said.

While Google wanted to be as permissive as possible, she said, it soon had to contend with controversies such as a video mocking the King of Thailand, which violated Thailand's laws against insulting the king. Wong visited Thailand and was impressed by the nation's reverence for its monarch, so she reluctantly agreed to block the video -- but only for computers located in Thailand.

Since then, selectively banning content by geography -- called "geo-blocking" -- has become a more common request from governments. "I don't love traveling this road of geo-blocking," Wong said, but "it's ended up being a decision that allows companies like Google to operate in a lot of different places."

For social networks like Facebook, however, geo-blocking is difficult because of the way posts are shared with friends across national boundaries. If Facebook geo-blocks a user's post, it would only appear in the news feeds of friends who live in countries where the geo-blocking prohibition doesn't apply. That can make international conversations frustrating, with bits of the exchange hidden from some participants.
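In other words, geo-blocking on a social network is a per-viewer visibility filter, which a tiny sketch (hypothetical names and structures, not Facebook's code) makes plain:

```python
# Hypothetical sketch of per-country post visibility under geo-blocking.

def visible_to(post: dict, viewer_country: str) -> bool:
    """A geo-blocked post is hidden only from viewers in the
    countries where the block applies."""
    return viewer_country not in post["blocked_countries"]

# Example: a post blocked at one government's request (here "FR").
post = {"text": "example post", "blocked_countries": {"FR"}}

for country in ["FR", "DE", "US"]:
    print(country, visible_to(post, country))
# FR False -- friends there see the thread with this post missing
# DE True
# US True
```

Friends in the blocked country see a gap in the conversation, which is the fragmentation the article describes.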

As a result, Facebook has long tried to avoid using geography-specific rules when possible, according to people familiar with the company's thinking. However, it does geo-block in some instances, such as when it complied with a request from France to restrict access within its borders to a photo taken after the Nov. 13, 2015, terrorist attack at the Bataclan concert hall in Paris.

Bickert said Facebook takes into consideration the laws in countries where it operates, but doesn't always remove content at a government's request. "If there is something that violates a country's law but does not violate our standards," Bickert said, "we look at who is making that request: Is it the appropriate authority? Then we check to see if it actually violates the law. Sometimes we will make that content unavailable in that country only."

Facebook's goal is to create global rules. "We want to make sure that people are able to communicate in a borderless way," Bickert said.

Founded in 2004, Facebook began as a social network for college students. As it spread beyond campus, Facebook began to use content moderation as a way to compete with the other leading social network of that era, MySpace.

MySpace had positioned itself as the nightclub of the social networking world, offering profile pages that users could decorate with online glitter, colorful layouts and streaming music. It didn't require members to provide their real names and was home to plenty of nude and scantily clad photographs. And it was being investigated by law-enforcement agents across the country who worried it was being used by sexual predators to prey on children. (In a settlement with 49 state attorneys general, MySpace later agreed to strengthen protections for younger users.)

By comparison, Facebook was the buttoned-down Ivy League social network -- all cool grays and blues. Real names and university affiliations were required. Chris Kelly, who joined Facebook in 2005 and was its first general counsel, said he wanted to make sure Facebook didn't end up in law enforcement's crosshairs, like MySpace.

"We were really aggressive about saying we are a no-nudity platform," he said.

The company also began to tackle hate speech. "We drew some difficult lines while I was there -- Holocaust denial being the most prominent," Kelly said. After an internal debate, the company decided to allow Holocaust denials but reaffirmed its ban on group-based bias, which included anti-Semitism. Since Holocaust denial and anti-Semitism frequently went together, he said, the perpetrators were often suspended regardless.

"I've always been a pragmatist on this stuff," said Kelly, who left Facebook in 2010. "Even if you take the most extreme First Amendment positions, there are still limits on speech."

By 2008, the company had begun expanding internationally but its censorship rulebook was still just a single page with a list of material to be excised, such as images of nudity and Hitler. "At the bottom of the page it said, 'Take down anything else that makes you feel uncomfortable,'" said Dave Willner, who joined Facebook's content team that year.

Willner, who reviewed about 15,000 photos a day, soon found the rules were not rigorous enough. He and some colleagues worked to develop a coherent philosophy underpinning the rules, while refining the rules themselves. Soon he was promoted to head the content policy team.

By the time he left Facebook in 2013, Willner had shepherded a 15,000-word rulebook that remains the basis for many of Facebook's content standards today.

"There is no path that makes people happy," Willner said. "All the rules are mildly upsetting." Because of the volume of decisions -- many millions per day -- the approach is "more utilitarian than we are used to in our justice system," he said. "It's fundamentally not rights-oriented."

Willner's then-boss, Jud Hoffman, who has since left Facebook, said that the rules were based on Facebook's mission of "making the world more open and connected." Openness implies a bias toward allowing people to write or post what they want, he said.

But Hoffman said the team also relied on the principle of harm articulated by John Stuart Mill, a 19th-century English political philosopher. It states "that the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others." That led to the development of Facebook's "credible threat" standard, which bans posts that describe specific actions that could threaten others, but allows threats that are not likely to be carried out.

Eventually, however, Hoffman said "we found that limiting it to physical harm wasn't sufficient, so we started exploring how free expression societies deal with this."

The rules developed considerable nuance. There is a ban against pictures of Pepe the Frog, a cartoon character often used by "alt-right" white supremacists to perpetrate racist memes, but swastikas are allowed under a rule that permits the "display [of] hate symbols for political messaging." In the documents examined by ProPublica, which are used to train content reviewers, this rule is illustrated with a picture of Facebook founder Mark Zuckerberg that has been manipulated to apply a swastika to his sleeve.

The documents state that Facebook relies, in part, on the U.S. State Department's list of designated terrorist organizations, which includes groups such as al-Qaida, the Taliban and Boko Haram. But not all groups deemed terrorist by one country or another are included: A recent investigation by the Pakistan newspaper Dawn found that 41 of the 64 terrorist groups banned in Pakistan were operational on Facebook.

There is also a secret list, referred to but not included in the documents, of groups designated as hate organizations that are banned from Facebook. That list apparently doesn't include many Holocaust denial and white supremacist sites that are up on Facebook to this day, such as a group called "Alt-Reich Nation." A member of that group was recently charged with murdering a black college student in Maryland.

As the rules have multiplied, so have exceptions to them. Facebook's decision not to protect subsets of protected groups arose because some subgroups such as "female drivers" didn't seem especially sensitive. The default position was to allow free speech, according to a person familiar with the decision-making.

After the wave of Syrian immigrants began arriving in Europe, Facebook added a special "quasi-protected" category for migrants, according to the documents. They are only protected against calls for violence and dehumanizing generalizations, but not against calls for exclusion and degrading generalizations that are not dehumanizing. So, according to one document, migrants can be referred to as "filthy" but not called "filth." They cannot be likened to filth or disease "when the comparison is in the noun form," the document explains.

Facebook also added an exception to its ban against advocating for anyone to be sent to a concentration camp. "Nazis should be sent to a concentration camp," is allowed, the documents state, because Nazis themselves are a hate group.

The rule against posts that support violent resistance against a foreign occupier was developed because "we didn't want to be in a position of deciding who is a freedom fighter," Willner said. Facebook has since dropped the provision and revised its definition of terrorism to include nongovernmental organizations that carry out premeditated violence "to achieve a political, religious or ideological aim," according to a person familiar with the rules.

The Facebook policy appears to have had repercussions in many of the at least two dozen disputed territories around the world. When Russia occupied Crimea in March 2014, many Ukrainians saw a surge in Facebook removing posts and suspending profiles. Facebook's director of policy for the region, Thomas Myrup Kristensen, acknowledged at the time that it "found a small number of accounts where we had incorrectly removed content. In each case, this was due to language that appeared to be hate speech but was being used in an ironic way. In these cases, we have restored the content."

Katerina Zolotareva, 34, a Kiev-based Ukrainian working in communications, has been blocked so often that she runs four accounts under her name. Although she supported the "Euromaidan" protests in February 2014 that antagonized Russia, spurring its military intervention in Crimea, she doesn't believe that Facebook took sides in the conflict. "There is war in almost every field of Ukrainian life," she says, "and when war starts, it also starts on Facebook."

In Western Sahara, a disputed territory occupied by Morocco, a group of journalists called Equipe Media say Facebook disabled their account, which was their primary way to reach the outside world. They had to open a new account, which remains active.

"We feel we have never posted anything against any law," said Mohammed Mayarah, the group's general coordinator. "We are a group of media activists. We have the aim to break the Moroccan media blockade imposed since it invaded and occupied Western Sahara."

In Israel, which captured territory from its neighbors in a 1967 war and has occupied it since, Palestinian groups are blocked so often that they have their own hashtag, #FbCensorsPalestine, for it. Last year, for instance, Facebook blocked the accounts of several editors for two leading Palestinian media outlets from the West Bank -- Quds News Network and Sheebab News Agency. After a couple of days, Facebook apologized and un-blocked the journalists' accounts. Earlier this year, Facebook blocked the account of Fatah, the Palestinian Authority's ruling party -- then un-blocked it and apologized.

Last year India cracked down on protesters in Kashmir, shooting pellet guns at them and shutting off cellphone service. Local insurgents are seeking autonomy for Kashmir, which is also caught in a territorial tussle between India and Pakistan. Posts of Kashmir activists were being deleted, and members of a group called the Kashmir Solidarity Network found that all of their Facebook accounts had been blocked on the same day.

Ather Zia, a member of the network and a professor of anthropology at the University of Northern Colorado, said that Facebook restored her account without explanation after two weeks. "We do not trust Facebook any more," she said. "I use Facebook, but it's almost this idea that we will be able to create awareness but then we might not be on it for long."

The rules are one thing. How they're applied is another. Bickert said Facebook conducts weekly audits of every single content reviewer's work to ensure that its rules are being followed consistently. But critics say that reviewers, who have to decide on each post within seconds, may vary in both interpretation and vigilance.

Facebook users who don't mince words in criticizing racism and police killings of racial minorities say that their posts are often taken down. Two years ago, Stacey Patton, a journalism professor at historically black Morgan State University in Baltimore, posed a provocative question on her Facebook page. She asked why "it's not a crime when White freelance vigilantes and agents of 'the state' are serial killers of unarmed Black people, but when Black people kill each other then we are 'animals' or 'criminals.'"

Although it doesn't appear to violate Facebook's policies against hate speech, her post was immediately removed, and her account was disabled for three days. Facebook didn't tell her why. "My posts get deleted about once a month," said Patton, who often writes about racial issues. She said she also is frequently put in Facebook "jail" -- locked out of her account for a period of time after a posting that breaks the rules.

"It's such emotional violence," Patton said. "Particularly as a black person, we're always have these discussions about mass incarceration, and then here's this fiber-optic space where you can express yourself. Then you say something that some anonymous person doesn't like and then you're in 'jail.'"

Didi Delgado, whose post stating that "white people are racist" was deleted, has been banned from Facebook so often that she has set up an account on another service called Patreon, where she posts the content that Facebook suppressed. In May, she deplored the increasingly common Facebook censorship of black activists in an article for Medium titled "Mark Zuckerberg Hates Black People."

Facebook also locked out Leslie Mac, a Michigan resident who runs a service called SafetyPinBox where subscribers contribute financially to "the fight for black liberation," according to her site. Her offense was writing a post stating "White folks. When racism happens in public -- YOUR SILENCE IS VIOLENCE."

The post does not appear to violate Facebook's policies. Facebook apologized and restored her account after TechCrunch wrote an article about Mac's punishment. Since then, Mac has written many other outspoken posts. But, "I have not had a single peep from Facebook," she said, while "not a single one of my black female friends who write about race or social justice have not been banned."

"My takeaway from the whole thing is: If you get publicity, they clean it right up," Mac said. Even so, like most of her friends, she maintains a separate Facebook account in case her main account gets blocked again.

Negative publicity has spurred other Facebook turnabouts as well. Consider the example of the iconic news photograph of a young naked girl running from a napalm bomb during the Vietnam War. Kate Klonick, a Ph.D. candidate at Yale Law School who has spent two years studying censorship operations at tech companies, said the photo had likely been deleted by Facebook thousands of times for violating its ban on nudity.

But last year, Facebook reversed itself after Norway's leading newspaper published a front-page open letter to Zuckerberg accusing him of "abusing his power" by deleting the photo from the newspaper's Facebook account.

Klonick said that while she admires Facebook's dedication to policing content on its website, she fears it is evolving into a place where celebrities, world leaders and other important people "are disproportionately the people who have the power to update the rules."

In December 2015, a month after terrorist attacks in Paris killed 130 people, the European Union began pressuring tech companies to work harder to prevent the spread of violent extremism online.

After a year of negotiations, Facebook, Microsoft, Twitter and YouTube agreed to the European Union's hate speech code of conduct, which commits them to review and remove the majority of valid complaints about illegal content within 24 hours and to be audited by European regulators. The first audit, in December, found that the companies were only reviewing 40 percent of hate speech within 24 hours, and only removing 28 percent of it. Since then, the tech companies have shortened their response times to reports of hate speech and increased the amount of content they are deleting, prompting criticism from free-speech advocates that too much is being censored.

Now the German government is considering legislation that would allow social networks such as Facebook to be fined up to 50 million euros if they don't remove hate speech and fake news quickly enough. Facebook recently posted an article assuring German lawmakers that it is deleting about 15,000 hate speech posts a month. Worldwide, over the last two months, Facebook deleted about 66,000 hate speech posts per week, vice president Richard Allan said in a statement Tuesday on the company's site.

Among posts that Facebook didn't delete were Donald Trump's comments on Muslims. Days after the Paris attacks, Trump, then running for president, posted on Facebook "calling for a total and complete shutdown of Muslims entering the United States until our country's representatives can figure out what is going on."

Candidate Trump's posting -- which has come back to haunt him in court decisions voiding his proposed travel ban -- appeared to violate Facebook's rules against "calls for exclusion" of a protected religious group. Zuckerberg decided to allow it because it was part of the political discourse, according to people familiar with the situation.

However, one person close to Facebook's decision-making said Trump may also have benefited from the exception for sub-groups. A Muslim ban could be interpreted as being directed against a sub-group, Muslim immigrants, and thus might not qualify as hate speech against a protected category.

Hannes Grassegger is a reporter for Das Magazin and Reportagen Magazine based in Zurich.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Dozens Of Uber Employees Fired Or Investigated For Harassment. Uber And Lyft Drivers Unaware of Safety Recalls

Ride-sharing companies are in the news again, and probably not for the reasons their management executives would prefer. First, TechCrunch reported on Thursday:

"... at a staff meeting in San Francisco, Uber executives revealed to the company’s 12,000 employees that 20 of their colleagues had been fired and that 57 are still being probed over harassment, discrimination and inappropriate behavior, following a string of accusations that Uber had created a toxic workplace and allowed complaints to go unaddressed for years. Those complaints had pushed Uber into crisis mode earlier this year. But the calamity may be just beginning... Uber fired senior executive Eric Alexander after it was leaked to Recode that Alexander had obtained the medical records of an Uber passenger in India who was raped in 2014 by her driver."

"Recode also reported that Alexander had shared the woman’s file with Kalanick and his senior vice president, Emil Michael, and that the three men suspected the woman of working with Uber’s regional competitor in India, Ola, to hamper its chances of success there. Uber eventually settled a lawsuit brought by the woman against the company..."

News broke in March 2017 about both the Recode article and the Greyball activity at Uber to thwart local government code inspections. In February, a former Uber employee shared a disturbing story with allegations of sexual harassment.

Second, the investigative team at WBZ-TV, the local CBS affiliate in Boston, reported that many Uber and Lyft drivers are unaware of safety recalls affecting their vehicles. This could make rides in these cars unsafe for passengers:

"Using an app from Carfax, we quickly checked the license plates of 167 Uber and Lyft cars picking up passengers at Logan Airport over a two day period. Twenty-seven of those had open safety recalls or about 16%. Recalls are issued when a manufacturer identifies a mechanical problem that needs to be fixed for safety reasons. A recent example is the millions of cars that were recalled when it was determined the airbags made by Takata could release shrapnel when deployed in a crash."

Both ride-sharing companies treat drivers as independent contractors. WBZ-TV reported:

"Uber told the [WBZ-TV investigative] Team that drivers are contractors and not employees of the company. A spokesperson said they provide resources to drivers and encourage them to check for recalls and to perform routine maintenance. Drivers are also reminded quarterly to check with NHTSA for recall information."

According to Massachusetts Bar Association president Jeffrey Catalano, the responsibility to make sure a car is safe for passengers lies mainly with the driver. But because Uber and Lyft both advertise their commitment to safety on their websites, they too could be held responsible.


Trump Is Not the Only One Blocking Constituents on Twitter

[Editor's note: today's guest blog post, by the reporters at ProPublica, explores the emerging debate about the appropriate, perhaps ethical, use of social media by publicly elected officials and persons campaigning for office. Should they be able to block constituents posting views they dislike or disagree with? Is it really public speech on a privately-run social networking site? Would you vote for a person who blocks constituents? Do companies operating social networking sites have a responsibility in this? Today's post is reprinted with permission.]

by Charles Ornstein, ProPublica

As President Donald Trump faces criticism for blocking users on his Twitter account, people across the country say they, too, have been cut off by elected officials at all levels of government after voicing dissent on social media.

In Arizona, a disabled Army veteran grew so angry when her congressman blocked her and others from posting dissenting views on his Facebook page that she began delivering actual blocks to his office.

A central Texas congressman has barred so many constituents on Twitter that a local activist group has begun selling T-shirts complaining about it.

And in Kentucky, the Democratic Party is using a hashtag, #BevinBlocked, to track those who've been blocked on social media by Republican Gov. Matt Bevin. (Most of the officials blocking constituents appear to be Republican.)

The growing combat over social media is igniting a new-age legal debate over whether losing this form of access to public officials violates constituents' First Amendment rights to free speech and to petition the government for a redress of grievances. Those who've been blocked say it's akin to being thrown out of a town hall meeting for holding up a protest sign.

On Tuesday, the Knight First Amendment Institute at Columbia University called upon Trump to unblock people who've disagreed with him or directed criticism at him or his family via the @realdonaldtrump account, which he used prior to becoming president and continues to use as his principal Twitter outlet.

Trump blocked me after this tweet. Let's all hope the courts continue to protect us. Never stop resisting. pic.twitter.com/TlR4zgHCoU

-- Nick Jack Pappas (@Pappiness) June 5, 2017

"Though the architects of the Constitution surely didn't contemplate presidential Twitter accounts, they understood that the president must not be allowed to banish views from public discourse simply because he finds them objectionable," Jameel Jaffer, the Knight Institute's executive director, said in a statement.

The White House did not respond to a request for comment, but press secretary Sean Spicer said earlier Tuesday that statements the president makes on Twitter should be regarded as official statements.

Similar flare-ups have been playing out in state after state.

Earlier this year, the American Civil Liberties Union of Maryland called on Governor Larry Hogan, a Republican, to stop deleting critical comments and barring people from commenting on his Facebook page. (The Washington Post reported that the governor had blocked 450 people as of February.)

Deborah Jeon, the ACLU's legal director, said Hogan and other elected officials are increasingly forgoing town hall meetings and instead relying on social media as their primary means of communication with constituents. "That's why it's so problematic," she said. "If people are silenced in that medium," they can't effectively interact with their elected representative.

The governor's office did not respond to a request for comment this week. After the letter, however, it reinstated six of the seven people specifically identified by the ACLU (it said it couldn't find the seventh). "While the ACLU should be focusing on much more important activities than monitoring the governor's Facebook page, we appreciated them identifying a handful of individuals -- out of the over 1 million weekly viewers of the page -- that may have been inadvertently denied access," a spokeswoman for the governor told the Post.

Practically speaking, being blocked cuts off constituents from many forms of interacting with public officials. On Facebook, it means no posts, no likes and no questions or comments during live events on the page of the blocker. Even older posts that may not be offensive are taken down. On Twitter, being blocked prevents a user from seeing the other person's tweets on his or her timeline.

Moreover, while Twitter and Facebook themselves usually suspend account holders only temporarily for breaking rules, many elected officials don't have established policies for constituents who want to be reinstated. Sometimes a call is enough to reverse it; other times it's not.

Eugene Volokh, a constitutional law professor at the UCLA School of Law, said that for municipalities and public agencies, such as police departments, social media accounts would generally be considered "limited public forums" and therefore, should be open to all.

"Once they open it up to public comments, they can't then impose viewpoint-based restrictions on it," he said, for instance allowing only supportive comments while deleting critical ones.

But legislators are different because they are people. Elected officials can have personal accounts, campaign accounts and officeholder accounts that may appear quite similar. On their personal and campaign accounts, there's little disagreement that officials can engage with -- or block -- whoever they want. Last month, for instance, ProPublica reported how Rep. Peter King (Republican, New York) blocked users on his campaign account after they criticized his positions on health reform and other issues.

But what about their officeholder social media accounts?

The ACLU's Jeon says that they should be public if they use government resources, including staff time and office equipment to maintain the page. "Where that's the situation and taxpayer resources are going to it, then the full power of the First Amendment applies," she said. "It doesn't matter if they're members of Congress or the governor or a local councilperson."

Volokh of UCLA disagreed. He said that members of Congress are entitled to their own private speech, even on official pages. That's because each is one voice among many, as opposed to a governor or mayor. "It's clear that whatever my senator is, she's not the government. She is one person who is part of a legislative body," he said. "She was elected because she has her own views and it makes sense that if she has a Twitter feed or a Facebook page, that may well be seen as not government speech but the voice of somebody who may be a government official."

Volokh said he's inclined to see Trump's @realdonaldtrump account as a personal one, though other legal experts disagree.

"You could imagine actually some other president running this kind of account in a way that's very public minded -- 'I'm just going to express the views of the executive branch,'" he said. "The @realdonaldtrump account is very much, 'I'm Donald Trump. I'm going to be expressing my views, and if you don't like it, too bad for you.' That sounds like private speech, even done by a government official on government property."

It's possible the fight over the president's Twitter account will end up in court, as such disputes have across the country. Generally, in these situations, the people contesting the government's social media policies have reached settlements ending the questionable practices.

After being sued by the ACLU, three cities in Indiana agreed last year to change their policies by no longer blocking users or deleting comments.

In 2014, a federal judge ordered the City and County of Honolulu to pay $31,000 in attorney's fees to people who sued, contending that the Honolulu Police Department violated their constitutional rights by deleting their critical Facebook posts.

And San Diego County agreed to pay the attorney's fees of a gun parts dealer who sued after its Sheriff's Department deleted two Facebook posts that were critical of the sheriff and banned the dealer from commenting. The department took down its Facebook page after being sued and paid the dealer $20 as part of the settlement.

Angela Greben, a California paralegal, has spent the past two years gathering information about agencies and politicians that have blocked people on social media -- Democrats and Republican alike -- filing ethics complaints and even a lawsuit against the city of San Mateo, California, its mayor and police department. (They settled with her, giving her some of what she wanted.)

Greben has filed numerous public-records requests to agencies as varied as the Transportation Security Administration, the Seattle Police Department and the Connecticut Lottery seeking lists of people they block. She's posted the results online.

"It shouldn't be up to the elected official to decide who can tweet them and who can't," she said. "Everybody deserves to be treated equally and fairly under the law."

Even though she lives in California, Greben recently filed an ethics complaint against Atlanta Mayor Kasim Reed, a Democrat, who has been criticized for blocking not only constituents but also journalists who cover him. Reed has blocked Greben since 2015 when she tweeted about him... well, blocking people on Twitter. "He's notorious for blocking and muting people," she said, meaning he can't see their tweets but they can still see his.

@LizLemeryJoy @KasimReed Mr. Mayor you are violating the #civilrights of all you have #blocked! @Georgia_AG @FOX5Atlanta @11AliveNews

-- Angela Greben (@AngelaGreben) March 7, 2015

In a statement, a city spokeswoman defended the mayor, saying he's now among the top five most-followed mayors in the country. "Mayor Reed uses social media as a personal platform to engage directly with constituents and some journalists. ... Like all Twitter users, Mayor Reed has the right to stop engaging in conversations when he determines they are unproductive, intentionally inflammatory, dishonest and/or misleading."

Asked how many people he has blocked, she replied that the office doesn't keep such a list.

J'aime Morgaine, the Arizona veteran who delivered blocks to the office of Rep. Paul Gosar, a Republican, said being blocked on Facebook matters because her representative no longer hosts in-person town hall meetings and has started to answer questions on Facebook Live. Now she can't ask questions or leave comments.

"I have lost and other people who have been blocked have lost our right to participate in the democratic process," said Morgaine, leader of Indivisible Kingman, a group that opposes the president's agenda. "I am outraged that my congressman is blocking my voice and trampling upon my constitutional rights."

@RepGosar .. You weren't home when I delivered this message to your office, but no worries... there WILL be more! Stop BLOCKING Constituents! pic.twitter.com/JTWGQwhxKt

-- Indivisible Kingman (@IndivisibleCD4) May 13, 2017

Morgaine said the rules are not being applied equally. "They're not blocking everybody who's angry," she said. "They're blocking the voices of dissent, and there's no process for getting unblocked. There's no appeals process. There's no accountability."

A spokeswoman for Gosar defended his decision to block constituents but did not answer a question about how many have been blocked.

"Congressman Gosar's policy has been consistent since taking office in January 2010," spokeswoman Kelly Roberson said in an email. "In short: 2018Users whose comments or posts consist of profanity, hate speech, personal attacks, homophobia or Islamophobia may be banned.'"

On his Facebook page, Gosar posts the policy that guides his actions. It says in part, "Users are banned to promote healthy, civil dialogue on this page but are welcome to contact Congressman Gosar using other methods," including phone calls, emails and letters.

Sometimes, users are blocked repeatedly.

Community volunteer Gayle Lacy was named 2015 Wacoan of the Year for her effort to have the site of mammoth fossils in Waco, Texas, designated a national monument. Lacy's latest fight has been with her congressman, Bill Flores, who was with her in the Oval Office when Obama designated the site a national monument in 2015. She has been blocked three times by Flores' congressional Twitter account and once by his campaign account. One of those blocks happened after she tweeted at him: "My father died in service for this country, but you are not representative of that country and neither is your dear leader."

Lacy said she was able to get unblocked each time from Flores' congressional account by calling his office but remains blocked on the campaign one. "I don't know where to call," she said. "I asked in his D.C. office who I needed to call and I was told that they don't have that information."

Lacy and others said Flores blocks those who question him. Austin lawyer Matt Miller said he was blocked for asking when Flores would hold a town hall meeting. "It's totally inappropriate to block somebody, especially for asking a legitimate question of my elected representative," Miller said.

In a statement, Flores spokesman Andre Castro said Flores makes his policies clear on Twitter and on Facebook. "We reserve the right to block users whose comments include profanity, name-calling, threats, personal attacks, constant harping, inappropriate or false accusations, or other inappropriate comments or material. As the Congressman likes to say -- 'If you would not say it to your grandmother, we will not allow it here.'"

Ricardo Guerrero, an Austin marketer who is one of the leaders of a local group opposed to Trump's agenda, said he has gotten unblocked by Flores twice but then was blocked again and "just kind of gave up."

"He's creating an echo chamber of only the people that agree with him," Guerrero said of Flores. "He's purposefully removing any semblance of debate or alternative ideas or ideas that challenge his own -- and that seems completely undemocratic. That's the bigger issue in my mind."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


3 Strategies To Defend GOP Health Bill: Euphemisms, False Statements and Deleted Comments

[Editor's Note: today's guest post is by the reporters at ProPublica. Affordable health care and coverage are important to many, if not most, Americans. It is reprinted with permission.]

by Charles Ornstein, ProPublica

Earlier this month, a day after the House of Representatives passed a bill to repeal and replace major parts of the Affordable Care Act, Ashleigh Morley visited her congressman's Facebook page to voice her dismay.

"Your vote yesterday was unthinkably irresponsible and does not begin to account for the thousands of constituents in your district who rely upon many of the services and provisions provided for them by the ACA," Morley wrote on the page affiliated with the campaign of Representative Peter King (Republican, New York). "You never had my vote and this confirms why."

The next day, Morley said, her comment was deleted and she was blocked from commenting on or reacting to King's posts. The same thing has happened to others critical of King's positions on health care and other matters. King has deleted negative feedback and blocked critics from his Facebook page, several of his constituents say, sharing screenshots of comments that are no longer there.

"Having my voice and opinions shut down by the person who represents me -- especially when my voice and opinion wasn't vulgar and obscene -- is frustrating, it's disheartening, and I think it points to perhaps a larger problem with our representatives and maybe their priorities," Morley said in an interview.

King's office did not respond to requests for comment.

As Republican members of Congress seek to roll back the Affordable Care Act, commonly called Obamacare, and replace it with the American Health Care Act, they have adopted various strategies to influence and cope with public opinion, which polls show mostly opposes their plan. ProPublica, with our partners at Kaiser Health News, Stat and Vox, has been fact-checking members of Congress in this debate and we've found misstatements on both sides, though more by Republicans than Democrats. The Washington Post's Fact Checker has similarly found misstatements by both sides.

Today, we're back with more examples of how legislators are interacting with constituents about repealing Obamacare, whether online or in traditional correspondence. Their more controversial tactics seem to fall into three main categories: providing incorrect information, using euphemisms for the impact of their actions, and deleting comments critical of them. (Share your correspondence with members of Congress with us.)

Incorrect Information

Representative Vicky Hartzler (Republican, Missouri) sent a note to constituents this month explaining her vote in favor of the Republican bill. First, she outlined why she believes the ACA is not sustainable -- namely, higher premiums and few choices. Then she said it was important to have a smooth transition from one system to another.

"This is why I supported the AHCA to follow through on our promise to have an immediate replacement ready to go should the ACA be repealed," she wrote. "The AHCA keeps the ACA for the next three years then phases in a new approach to give people, states, and insurance markets plenty of time to make adjustments."

Except that's not true.

"There are quite a number of changes in the AHCA that take effect within the next three years," wrote ACA expert Timothy Jost, an emeritus professor at Washington and Lee University School of Law, in an email to ProPublica.

The current law's penalties on individuals who do not purchase insurance and on employers who do not offer it would be repealed retroactively to 2016, which could remove the incentive for some employers to offer coverage to their workers. Moreover, beginning in 2018, older people could be charged premiums up to five times more than younger people -- up from three times under current law. The way in which premium tax credits would be calculated would change as well, benefiting younger people at the expense of older ones, Jost said.

"It is certainly not correct to say that everything stays the same for the next three years," he wrote.

In an email, Hartzler spokesman Casey Harper replied, "I can see how this sentence in the letter could be misconstrued. It's very important to the Congresswoman that we give clear, accurate information to her constituents. Thanks for pointing that out."

Other lawmakers have similarly shared incorrect information after voting to repeal the ACA. Representative Diane Black (Republican, Tennessee) wrote in a May 19 email to a constituent that "in 16 of our counties, there are no plans available at all. This system is crumbling before our eyes and we cannot wait another year to act."

Black was referring to the possibility that, in 16 Tennessee counties around Knoxville, there might not have been any insurance options in the ACA marketplace next year. However, 10 days earlier, before she sent her email, BlueCross BlueShield of Tennessee announced that it was willing to provide coverage in those counties and would work with the state Department of Commerce and Insurance "to set the right conditions that would allow our return."

"We stand by our statement of the facts, and Congressman Black is working hard to repeal and replace Obamacare with a system that actually works for Tennessee families and individuals," her deputy chief of staff Dean Thompson said in an email.

On the Democratic side, the Washington Post Fact Checker has called out representatives for saying the AHCA would consider rape or sexual assault as pre-existing conditions. The bill would not do that, although critics counter that any resulting mental health issues or sexually transmitted diseases could be considered pre-existing conditions.

Euphemisms

A number of lawmakers have posted information taken from talking points put out by the House Republican Conference that try to frame the changes in the Republican bill as kinder and gentler than most experts expect them to be.

An answer to one frequently asked question pushes back against criticism that the Republican bill would gut Medicaid, the federal-state health insurance program for the poor, and appears on the websites of Representative Garret Graves (Republican, Louisiana) and others.

"Our plan responsibly unwinds Obamacare's Medicaid expansion," the answer says. "We freeze enrollment and allow natural turnover in the Medicaid program as beneficiaries see their life circumstances change. This strategy is both fiscally responsible and fair, ensuring we don't pull the rug out on anyone while also ending the Obamacare expansion that unfairly prioritizes able-bodied working adults over the most vulnerable."

That is highly misleading, experts say.

The Affordable Care Act allowed states to expand Medicaid eligibility to anyone who earned less than 138 percent of the federal poverty level, with the federal government picking up almost the entire tab. Thirty-one states and the District of Columbia opted to do so. As a result, the program now covers more than 74 million beneficiaries, nearly 17 million more than it did at the end of 2013.

The GOP health care bill would pare that back. Beginning in 2020, it would reduce the share the federal government pays for new enrollees in the Medicaid expansion to the rate it pays for other enrollees in the state, which is considerably less. Also in 2020, the legislation would cap the spending growth rate per Medicaid beneficiary. As a result, a Congressional Budget Office review released Wednesday estimates that millions of Americans would become uninsured.

Sara Rosenbaum, a professor of health law and policy at the Milken Institute School of Public Health at George Washington University, said the GOP's characterization of its Medicaid plan is wrong on many levels. People naturally cycle on and off Medicaid, she said, often because of temporary events, not changing life circumstances -- seasonal workers, for instance, may see their wages rise in summer months before falling back.

"A terrible blow to millions of poor people is recast as an easing off of benefits that really aren't all that important, in a humane way," she said.

Moreover, the GOP bill actually would speed up the "natural turnover" in the Medicaid program, said Diane Rowland, executive vice president of the Kaiser Family Foundation, a health care think tank. Under the ACA, states were only permitted to recheck enrollees' eligibility for Medicaid once a year because cumbersome paperwork requirements have been shown to cause people to lose their coverage. The American Health Care Act would require these checks every six months -- and even give states more money to conduct them.

Rowland also took issue with the GOP talking point that the expansion "unfairly prioritizes able-bodied working adults over the most vulnerable." At a House Energy and Commerce Committee hearing earlier this year, GOP representatives maintained that the Medicaid expansion may be creating longer waits for home- and community-based programs for sick and disabled Medicaid patients needing long-term care, "putting care for some of the most vulnerable Americans at risk."

Research from the Kaiser Family Foundation, however, showed that there was no relationship between waiting lists and states that expanded Medicaid. Such waiting lists pre-dated the expansion and they were worse in states that did not expand Medicaid than in states that did.

"This is a complete misrepresentation of the facts," Rosenbaum said.

Graves' office said the information on his site came from the House Republican Conference. Emails to the conference's press office were not returned.

The GOP talking points also play up a new Patient and State Stability Fund included in the AHCA, which is intended to defray the costs of covering people with expensive health conditions. "All told, $130 billion dollars would be made available to states to finance innovative programs to address their unique patient populations," the information says. "This new stability fund ensures these programs have the necessary funding to protect patients while also giving states the ability to design insurance markets that will lower costs and increase choice."

The fund was modeled after a program in Maine, called an invisible high-risk pool, which advocates say has kept premiums in check in the state. But Senator Susan Collins (Republican, Maine) says the House bill's stability fund wasn't allocated enough money to keep premiums stable.

"In order to do the Maine model 2014 which I've heard many House people say that is what they're aiming for -- it would take $15 billion in the first year and that is not in the House bill," Collins told Politico. "There is actually $3 billion specifically designated for high-risk pools in the first year."

Deleting Comments

Morley, 28, a branded content editor who lives in Seaford, New York, said she moved into Representative King's Long Island district shortly before the 2016 election. She said she did not vote for him and, like many others across the country, said the election results galvanized her into becoming more politically active.

Earlier this year, Morley found an online conversation among King's constituents who said their critical comments were being deleted from his Facebook page. Because she disagrees with King's stances and expected any critical comment might be her last on the page, she said she wanted to reserve her comment for an issue she felt strongly about.

A day after the House voted to repeal the ACA, Morley posted her thoughts. "I kind of felt that that was when I wanted to use my one comment, my one strike as it would be," she said.

By noon the next day, it had been deleted and she had been blocked.

"I even wrote in my comment that you can block me but I'm still going to call your office," Morley said in an interview.

Some negative comments about King remain on his Facebook page. But King's critics say his deletions fit a broader pattern. He has declined to hold an in-person town hall meeting this year, saying, "to me all they do is just turn into a screaming session," according to CNN. He held a telephonic town hall meeting but only answered a small fraction of the questions submitted. And he met with Liuba Grechen Shirley, the founder of a local Democratic group in his district, but only after her group held a protest in front of his office that drew around 400 people.

"He's not losing his health care," Grechen Shirley said. "It doesn't affect him. It's a death sentence for many and he doesn't even care enough to meet with his constituents."

King's deleted comments even caught the eye of Andy Slavitt, who until January was the acting administrator of the Centers for Medicare and Medicaid Services. Slavitt has been traveling the country pushing back against attempts to gut the ACA.

.@RepPeteKing, are you silencing your constituents who send you questions? Assume ppl in district will respond if this is happening.

-- Andy Slavitt (@ASlavitt) May 12, 2017

Since the election, other activists across the country who oppose the president's agenda have posted online that they have been blocked from following their elected officials on Twitter or commenting on their Facebook pages because of critical statements they've made about the AHCA and other issues.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


The Guardian Site Reviews Documents Used By Facebook Executives To Moderate Content

Facebook logo The Guardian news site in the United Kingdom (UK) published the findings of its review of "The Facebook Files" -- a collection of documents which comprise the rules used by executives at the social site to moderate (e.g., review, approve, and delete) content posted by the site's members. Reporters at The Guardian reviewed:

"... more than 100 internal training manuals, spreadsheets and flowcharts that give unprecedented insight into the blueprints Facebook has used to moderate issues such as violence, hate speech, terrorism, pornography, racism and self-harm. There are even guidelines on match-fixing and cannibalism.

The Facebook Files give the first view of the codes and rules formulated by the site, which is under huge political pressure in Europe and the US. They illustrate difficulties faced by executives scrabbling to react to new challenges such as “revenge porn” – and the challenges for moderators, who say they are overwhelmed by the volume of work, which means they often have “just 10 seconds” to make a decision..."

The Guardian summarized what it learned about Facebook's revenge porn rules for moderators:

Revenge porn content rules found by The Guardian's review of Facebook documents

Reportedly, Facebook moderators reviewed as many as 54,000 cases in a single month related to revenge porn and "sextortion." In January of 2017, the site disabled 14,000 accounts due to this form of sexual violence. Previously, these rules were not available publicly. Findings about other rules are available at The Guardian site.

Other key findings found by The Guardian during its document review:

"One document says Facebook reviews more than 6.5m reports a week relating to potentially fake accounts – known as FNRP (fake, not real person)... Many moderators are said to have concerns about the inconsistency and peculiar nature of some of the policies. Those on sexual content, for example, are said to be the most complex and confusing... Anyone with more than 100,000 followers on a social media platform is designated as a public figure – which denies them the full protections given to private individuals..."

The social site struggles with how to handle violent language:

"Facebook’s leaked policies on subjects including violent death, images of non-sexual physical child abuse and animal cruelty show how the site tries to navigate a minefield... In one of the leaked documents, Facebook acknowledges “people use violent language to express frustration online” and feel “safe to do so” on the site. It says: “They feel that the issue won’t come back to them and they feel indifferent towards the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face..."

Some industry watchers in Europe doubt that Facebook can accomplish what it has set out to do, believe the company lacks sufficient staff to effectively moderate content posted by almost 2 billion users, and argue that Facebook management should be more transparent about its content moderation rules. Others believe that Facebook and other social sites should be heavily fined "for failing to remove extremist and hate-crime material."

To learn more, The Guardian site includes at least nine articles about its review of The Facebook Files:

Collection of articles by The Guardian which review Facebook's content policies. Click to view larger version


How To Control The Ads Facebook Displays

If you use Facebook, then you know that the social networking site serves ads based upon your interests. And, you'd probably be surprised at the difference between what Facebook thinks you are interested in and what you really are interested in.

To see what Facebook thinks you are interested in, you will need to access your Ad Preferences page. Sign into your Facebook account using the browser interface, and click on the triangle drop-down menu icon in the upper right corner. Next select Settings, and then select Ads in the left column. Your Ad Preferences page looks like this:

Default view of the Facebook Ad Preferences page. Click to view larger version

Facebook has neatly organized what it thinks your interests are into several categories: Your Interests, Advertisers You've Interacted With, Your Information, and Ad Settings. Open the Your Interests module:

Your Interests module within Facebook Ad Preferences. Click to view larger version

This module includes several sub-categories: News & Entertainment, Business & Industry, Hobbies & Activities, Travel, Places & Events, People, Technology, and Lifestyle. Mouse over an item to reveal both an explanation of why that item appears in your list and the "X" delete button. Click on the "X" button to remove that item.

Facebook has collected impressively long lists about what it thinks your interests are. So, click on the "See More" links within each sub-category. Facebook adds interest items based upon links you've selected, groups you've joined, ads you have viewed, the photos/videos you have uploaded, items (e.g., groups, events, status messages) you have "Liked," and more. There's plenty to browse, so you'll probably want to set aside 15 minutes to review and delete items.

There is a sneaky aspect to Facebook's interface. An item may appear in several categories. So, if you delete it in one category don't assume it was deleted in other categories. You'll have to visit each sub-category and delete it there, too. And, there is no guarantee Facebook won't re-add that item later based upon your activities within the site and/or mobile app.

Caution: even if you delete everything, Facebook will still show advertisements. Why? That's what the social networking service is designed to do. That's its business model. Even if you stop clicking "Like" buttons, Facebook will use alternate criteria to display ads. You can control or limit the topics for ads, but you can't stop ads entirely.

The Your Information module includes toggle switches to either activate or deactivate groups of items within your profile which Facebook uses to display ads:

Your Information module within Facebook Ad Preferences. Click to view larger version

It's probably wise to re-visit your Ad Preferences page once yearly to delete items. What do you think?


Lawsuit Claims The Uber Mobile App Scams Both Riders And Drivers

Uber logo A class-action lawsuit against Uber claims that the ride-sharing company manipulated its mobile app to simultaneously short-change drivers and over-charge riders. Ars Technica reported:

"When a rider uses Uber's app to hail a ride, the fare the app immediately shows to the passenger is based on a slower and longer route compared to the one displayed to the driver. The software displays a quicker, shorter route for the driver. But the rider pays the higher fee, and the driver's commission is paid from the cheaper, faster route, according to the lawsuit.

"Specifically, the Uber Defendants deliberately manipulated the navigation data used in determining the fare amount paid by its users and the amount reported and paid to its drivers," according to the suit filed in federal court in Los Angeles."

Controversy surrounds Uber after several high-level executive changes, an investigative news report alleging a worldwide program to thwart oversight by local governments, and a key lawsuit challenging the company's technology.


Uber: President Resigns, Greyball, A Major Lawsuit, Corporate Culture, And Lingering Questions

Uber logo Several executive changes are underway at Uber. The President of Uber's Ridesharing unit, Jeff Jones, resigned after only six months at the company. The Recode site posted a statement by Jones:

"Jones also confirmed the departure with a blistering assessment of the company. "It is now clear, however, that the beliefs and approach to leadership that have guided my career are inconsistent with what I saw and experienced at Uber, and I can no longer continue as president of the ride-sharing business," he said in a statement to Recode."

Prior to joining Uber, Jones had been the Chief Marketing Officer (CMO) at Target stores. Travis Kalanick, the Chief Executive Officer at Uber, disclosed that he met Jones at a TED conference in Vancouver, British Columbia, Canada.

There have been more executive changes at Uber. The company announced on March 7 its search for a Chief Operating Officer (COO). It announced on March 14 the appointment of Zoubin Ghahramani as its new Chief Scientist, based in San Francisco. Ghahramani will lead Uber's AI Labs, the company's recently created machine learning and artificial intelligence research unit, and the associated business strategy. Ghahramani, a Professor of Information Engineering at the University of Cambridge, joined Uber when it acquired Geometric Intelligence.

In February 2017, CEO Travis Kalanick asked Amit Singhal to resign. Singhal, the company's senior vice president of engineering, had joined Uber a month earlier, after 15 years at Google. Reportedly, Singhal was let go for failing to disclose the reasons for his departure from Google, including sexual harassment allegations.

Given these movements by executives, one might wonder what is happening at Uber. A brief review of the company's history found controversy accompanying its business practices. Earlier this month, an investigative report by The New York Times described a worldwide program by Uber executives to thwart code enforcement inspections by governments:

"The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea.

Greyball was part of a program called VTOS, short for “violation of terms of service,” which Uber created to root out people it thought were using or targeting its service improperly. The program, including Greyball, began as early as 2014 and remains in use, predominantly outside the United States. Greyball was approved by Uber’s legal team."

An example of how the program and Greyball work:

"Uber’s use of Greyball was recorded on video in late 2014, when Erich England, a code enforcement inspector in Portland, Ore., tried to hail an Uber car downtown in a sting operation against the company... officers like Mr. England posed as riders, opening the Uber app to hail a car and watching as miniature vehicles on the screen made their way toward the potential fares. But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled."

The City of Portland sued Uber in December 2014 and issued a Cease And Desist Order. Uber continued operations in the city, and a pilot program in Portland began in April 2015. Later in 2015, the City of Portland authorized Uber's operations. In March 2017, Oregon Live reported a pending investigation:

"An Uber spokesman said Friday that the company has not used the Greyball program in Portland since then. Portland Commissioner Dan Saltzman said Monday that the investigation will focus on whether Uber has used Greyball, or any form of it, to obstruct the city's enforcement of its regulations. The review would examine information the companies have already provided the city, and potentially seeking additional data from them... The investigation also will affect Uber's biggest competitor, Lyft, Saltzman said, though Lyft did not operate in Portland until after its business model was legalized, and there's no indication that it similarly screened regulators... Commissioner Nick Fish earlier called for a broader investigation and said the City Council should seek subpoena powers to determine the extent of Uber's "Greyball" usage..."

This raises questions about other locations where Uber may have used its Greyball program. The San Francisco District Attorney's office is investigating, as are government officials in Sydney, Australia. Also this month, the Upstate Transportation Association (UTA), a trade group of taxi companies in New York State, asked government officials to investigate. The Albany Times Union reported:

"In a Tuesday letter to Governor Andrew Cuomo, Assembly Speaker Carl Heastie and Senate Majority Leader John Flanagan, UTA President John Tomassi wrote accused the company of possibly having used the Greyball technology in New York to evade authorities in areas where ride-hailing is not allowed. Uber and companies like it are authorized to operate only in New York City, where they are considered black cars. But UTA’s concerns about Greyball are spurred in part by reported pick-ups in some suburban areas."

A look at Uber's operations in Chicago sheds some light on how the company operates. NBC Channel 5 reported in 2014:

"... news that President Barack Obama's former adviser and campaign strategist David Plouffe has joined the company as senior VP of policy and strategy delivers a strong message to its enemies: Uber means business. How dare you disrupt our disruption? You're going down.

Here in the Land of Lincoln, Plouffe's hiring adds another layer of awkward personal politics to the Great Uber Debate. It's an increasingly tangled web: Plouffe worked in the White House alongside Rahm Emanuel when the Chicago mayor was Chief of Staff. Emanuel, trying to strike a balance between Uber-friendly and cabbie-considerate, recently passed a bill that restricts Uber drivers from picking up passengers at O'Hare, Midway and McCormick Place... Further complicating matters, Emanuel's brother, Hollywood super-agent Ari Emanuel, has invested in Uber..."

That debate also included the Illinois Governor, as politicians try to balance the competing needs of traditional taxi companies, ride-sharing companies, and consumers. The entire situation raises questions about why there aren't Greyball investigations by more cities. Is it due to local political interference?

That isn't all. In 2014, Uber's "God View" tool raised concerns about privacy, the company's tracking of its customers, and a questionable corporate culture. At that time, an Uber executive reportedly suggested that the company hire opposition researchers to dig up dirt about its critics in the news media.

Uber's claims in January 2015 of reduced drunk-driving accidents due to its service seemed dubious after scrutiny. ProPublica explained:

"Uber reported that cities using its ridesharing service have seen a reduction in drunk driving accidents, particularly among young people. But when ProPublica data reporter Ryann Grochowski Jones took a hard look at the numbers, she found the company's claim that it had "likely prevented" 1,800 crashes over the past 2.5 years to be lacking... the first red flag was that Uber didn't include a methodology with its report. A methodology is crucial to show how the statistician did the analysis... Uber eventually sent her a copy of the methodology separately, which showed that drunk-driving accidents involving drivers under 30 dropped in California after Uber's launch. The math itself is fine, Grochowski Jones says, but Uber offers no proof that those under 30 and Uber users are actually the same population.

“This seems like one of those famous moments in intro statistics courses where we talk about correlation and causality,” ProPublica Editor-in-Chief Steve Engelberg says. Grochowski Jones agrees, showcasing how drowning rates are higher in the summer, as are ice cream sales, but clearly one doesn't cause the other."

Similar claims by Uber about the benefits of "surge pricing" seemed to wilt under scrutiny. ProPublica reported in October 2015:

"The company has always said the higher prices actually help passengers by encouraging more drivers to get on the road. But computer scientists from Northeastern University have found that higher prices don’t necessarily result in more drivers. Researchers Le Chen, Alan Mislove and Christo Wilson created 43 new Uber accounts and virtually hailed cars over four weeks from fixed points throughout San Francisco and Manhattan. They found that many drivers actually leave surge areas in anticipation of fewer people ordering rides. "What happens during a surge is, it just kills demand," Wilson told ProPublica."

Another surge-pricing study in 2016 concluded with a positive spin:

"... that consumers can benefit from surge pricing. They find this is the case when a market isn’t fully served by traditional taxis when demand is high. In short, if you can’t find a cab on New Year’s Eve, Daniels’ research says you’re better off with surge pricing... surge pricing allows service to expand during peak demand without creating idleness for drivers during normal demand. This means that more peak demand customers get rides, albeit at a higher price. This also means that the price during normal demand settings drops, allowing more customers service at these normal demand times."

In other words, "can benefit" doesn't ensure that riders will benefit. And "allows service to expand" doesn't ensure that service will expand during peak demand periods. "Surge pricing" does ensure higher prices. A better solution might be surge payments to drivers during peak hours to expand services. Uber will still make more money with more rides during peak periods.

The surge-pricing concept is a reminder of basic economics: when suppliers raise prices, demand decreases. A lower price should follow, but the surge price prevents that. As the prior study highlighted, drivers have learned from this: additional drivers don't enter the market to force down the higher surge price.
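A toy calculation makes the point. The sketch below assumes a simple constant-elasticity demand curve; it is not Uber's actual pricing model, and every number in it is invented:

```python
# Toy model: how a surge multiplier can suppress demand.
# Assumes constant price elasticity of demand; all numbers are invented.

BASE_PRICE = 10.0    # dollars per ride at normal demand
BASE_DEMAND = 1000   # ride requests per hour at the base price
ELASTICITY = -1.3    # hypothetical price elasticity, for illustration only

def demand(price: float) -> float:
    """Constant-elasticity demand: Q = Q0 * (P / P0) ** elasticity."""
    return BASE_DEMAND * (price / BASE_PRICE) ** ELASTICITY

for multiplier in (1.0, 1.5, 2.0, 3.0):
    price = BASE_PRICE * multiplier
    q = demand(price)
    print(f"{multiplier:.1f}x surge: ${price:5.2f}/ride, "
          f"{q:6.0f} requests/hr, revenue ${price * q:8.0f}/hr")
```

With an assumed elasticity below -1, demand falls faster than price rises, so each surge level yields fewer requests and less total revenue per hour -- consistent with the researchers' observation that a surge "just kills demand."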

And, there is more. In 2015, the California Labor Commission ruled that Uber drivers are employees, not independent contractors as the company claimed. Concerns about safety and criminal background checks have been raised. Last year, BuzzFeed News analyzed ride data from Uber:

"... the company received five claims of rape and “fewer than” 170 claims of sexual assault directly related to an Uber ride as inbound tickets to its customer service database between December 2012 and August 2015. Uber provided these numbers as a rebuttal to screenshots obtained by BuzzFeed News. The images that were provided by a former Uber customer service representative (CSR) to BuzzFeed News, and subsequently confirmed by multiple other parties, show search queries conducted on Uber’s Zendesk customer support platform from December 2012 through August 2015... In one screenshot, a search query for “sexual assault” returns 6,160 Uber customer support tickets. A search for “rape” returns 5,827 individual tickets."

That news item is notable because it includes several screenshots from the company's customer support tool. Uber's response:

"The ride-hail giant repeatedly asserted that the high number of queries from the screenshots is overstated, however Uber declined BuzzFeed News’ request to grant direct access to the data, or view its data analysis procedures. When asked for any additional anonymous data on the five rape complaint tickets it claims to have received between December 2012 and August 2015, Uber declined to provide any information."

Context matters about ride safety and corporate culture. A former Uber employee shared a disturbing story with allegations of sexual harassment:

"I joined Uber as a site reliability engineer (SRE) back in November 2015, and it was a great time to join as an engineer... After the first couple of weeks of training, I chose to join the team that worked on my area of expertise, and this is where things started getting weird. On my first official day rotating on the team, my new manager sent me a string of messages over company chat. He was in an open relationship, he said, and his girlfriend was having an easy time finding new partners but he wasn't. He was trying to stay out of trouble at work, he said, but he couldn't help getting in trouble, because he was looking for women to have sex with... Uber was a pretty good-sized company at that time, and I had pretty standard expectations of how they would handle situations like this. I expected that I would report him to HR, they would handle the situation appropriately, and then life would go on - unfortunately, things played out quite a bit differently. When I reported the situation, I was told by both HR and upper management that even though this was clearly sexual harassment and he was propositioning me, it was this man's first offense, and that they wouldn't feel comfortable giving him anything other than a warning and a stern talking-to... I was then told that I had to make a choice: (i) I could either go and find another team and then never have to interact with this man again, or (ii) I could stay on the team, but I would have to understand that he would most likely give me a poor performance review when review time came around, and there was nothing they could do about that. I remarked that this didn't seem like much of a choice..."

Her story seems very credible. Based upon this and other events, some industry watchers question Uber's value should it seek more investors via an initial public offering (IPO):

"Uber has hired two outside law firms to conduct investigations related to the former employee's claims. One will investigate her claims specifically, the other is conducting a broader investigation into Uber's workplace practices...Taken together, the recent reports paint a picture of a company where sexual harassment is tolerated, laws are seen as inconveniences to be circumvented, and a showcase technology effort might be based on stolen secrets. That's all bad for obvious reasons... What will Uber's valuation look like the next time it has to raise money -- or when it attempts to go public?"

To understand the "might be based on stolen secrets" reference, the San Francisco Examiner newspaper explained on March 20:

"In the past few weeks, Uber’s touted self-driving technology has come under both legal and public scrutiny after Alphabet — Google’s parent company — sued Uber over how it obtained its technology. Alphabet alleges that the technology for Otto, a self-driving truck company acquired by Uber last year, was stolen from Alphabet’s own Waymo self-driving technology... Alphabet alleges Otto founder Anthony Levandowski downloaded proprietary data from Alphabet’s self-driving files. In December 2015, Levandowski download 14,000 design files onto a memory card reader and then wiped all the data from the laptop, according to the lawsuit.

The lawsuit also lays out a timeline where Levandowski and Uber were in cahoots with one another before the download operation. Alphabet alleges the two parties were in communications with each other since the summer of 2015, when Levandowski still worked for Waymo. Levandowski left Waymo in January 2016, started Otto the next month and joined Uber in August as vice president of Uber’s self-driving technology after Otto was purchased by Uber for $700 million... This may become the biggest copyright infringement case brought forth in Silicon Valley since Apple v. Microsoft in 1994, when Apple sued Microsoft over the alleged likeness in the latter’s graphic user interface."

And, just this past Saturday Uber suspended its driverless car program in Arizona after a crash. Reportedly, Uber's driverless car programs in Arizona, Pittsburgh and San Francisco are suspended pending the results of the crash investigation.

No doubt, there will be more news about the lawsuit, safety issues, sexual harassment, Greyball, and investigations by local cities. What are your opinions?


Boston Public Library Offers Workshop About How To Spot Fake News

Fake news image The Boston Public Library (BPL) offers a wide variety of programs, events and workshops for the public. The Grove Hall branch is offering several sessions of the free workshop titled "Recognizing Fake News." The workshop description:

"Join us for a workshop to learn how to critically watch the news on television and online in order to detect "fake news." Using the News Literacy Project's interactive CheckologyTM curriculum, leading journalists and other experts guide participants through real-life examples from the news industry."

What is fake news? The Public Libraries Association (PLA) offered this definition:

"Fake news is just as it sounds: news that is misleading and not based on fact or, simply put, fake. Unfortunately, the literal defi­nition of fake news is the least complicated aspect of this com­plex topic. Unlike satire news... fake news has the intention of disseminat­ing false information, not for comedy, but for consumption. And without the knowledge of appropriately identifying fake news, these websites can do an effective job of tricking the untrained eye into believing it’s a credible source. Indeed, its intention is deception.

To be sure, fake news is nothing new... The Internet, particularly social media, has completely manipulated the landscape of how information is born, consumed, and shared. No longer is content creation reserved for official publishing houses or media outlets. For better or for worse, anybody can form a platform on the Internet and gain a following. In truth, we all have the ability to create viral news—real or fake—with a simple tweet or Facebook post."

The News Literacy Project is a nonpartisan national nonprofit organization that works with educators and journalists to teach middle school and high school students how to distinguish fact from fiction.

The upcoming workshop sessions at the BPL Grove Hall branch are tomorrow, March 11 at 3:00 pm, and Wednesday, March 29 at 1:00 pm. Participants will learn about the four main types of content (news, opinion, entertainment, and advertising) and the processes journalists use to decide which news to publish. The workshop presents real examples enabling participants to test their skills at recognizing the four types of content and "fake news."

While much of the workshop content is targeted at students, adults can also benefit. Nobody wants to be duped by fake or misleading news. Nobody wants to mistake advertising or opinion for news. The sessions include opportunities for participants to ask questions. The workshop lasts about an hour and registration is not required.

Many public libraries across the nation offer various workshops about how to spot "fake news," including Athens (Georgia), Austin (Texas), Bellingham (Washington), Chicago (Illinois), Clifton Park (New York), Davenport (Iowa), Elgin (Illinois), Oakland (California), San Jose (California), and Topeka (Kansas). Some colleges and universities offer similar workshops, including American University and Cornell University. Some workshops included panelists or speakers from local news organizations.

The BPL Grove Hall branch is located at 41 Geneva Avenue in the Roxbury section of Boston. The branch's phone is (617) 427-3337.

Have you attended a "fake news" workshop at a local public library in your town or city? If so, share your experience below.


Advocacy Groups And Legal Experts Denounce DHS Proposal Requiring Travelers To Disclose Social Media Credentials

U.S. Department of Homeland Security logo Several dozen human rights organizations, civil liberties advocates, and legal experts published an open letter on February 21, 2017 condemning a proposal by the U.S. Department of Homeland Security to require the social media credentials (e.g., usernames and passwords) of all travelers from majority-Muslim countries. This letter was sent after testimony before Congress by Homeland Security Secretary John Kelly. NBC News reported on February 8:

"Homeland Security Secretary John Kelly told Congress on Tuesday the measure was one of several being considered to vet refugees and visa applicants from seven Muslim-majority countries. "We want to get on their social media, with passwords: What do you do, what do you say?" he told the House Homeland Security Committee. "If they don't want to cooperate then you don't come in."

His comments came the same day judges heard arguments over President Donald Trump's executive order temporarily barring entry to most refugees and travelers from Syria, Iraq, Iran, Somalia, Sudan, Libya and Yemen. Kelly, a Trump appointee, stressed that asking for people's passwords was just one of "the things that we're thinking about" and that none of the suggestions were concrete."

The letter, available at the Center For Democracy & Technology (CDT) website, stated in part (bold emphasis added):

"The undersigned coalition of human rights and civil liberties organizations, trade associations, and experts in security, technology, and the law expresses deep concern about the comments made by Secretary John Kelly at the House Homeland Security Committee hearing on February 7th, 2017, suggesting the Department of Homeland Security could require non-citizens to provide the passwords to their social media accounts as a condition of entering the country.

We recognize the important role that DHS plays in protecting the United States’ borders and the challenges it faces in keeping the U.S. safe, but demanding passwords or other account credentials without cause will fail to increase the security of U.S. citizens and is a direct assault on fundamental rights.

This proposal would enable border officials to invade people’s privacy by examining years of private emails, texts, and messages. It would expose travelers and everyone in their social networks, including potentially millions of U.S. citizens, to excessive, unjustified scrutiny. And it would discourage people from using online services or taking their devices with them while traveling, and would discourage travel for business, tourism, and journalism."

The letter was signed by about 75 organizations and individuals, including the American Civil Liberties Union, the American Library Association, the American Society of Journalists & Authors, the American Society of News Editors, Americans for Immigrant Justice, the Brennan Center for Justice at NYU School of Law, Electronic Frontier Foundation, Human Rights Watch, Immigrant Legal Resource Center, National Hispanic Media Coalition, Public Citizen, Reporters Without Borders, the World Privacy Forum, and many more.

The letter is also available here (Adobe PDF).


Travelers Face Privacy Issues When Crossing Borders

If you travel for business, pleasure, or both, then today's blog post will probably interest you. Wired Magazine reported:

"In the weeks since President Trump’s executive order ratcheted up the vetting of travelers from majority Muslim countries, or even people with Muslim-sounding names, passengers have experienced what appears from limited data to be a “spike” in cases of their devices being seized by customs officials. American Civil Liberties Union attorney Nathan Wessler says the group has heard scattered reports of customs agents demanding passwords to those devices, and even social media accounts."

Devices include smartphones, laptops, and tablets. Many consumers realize that relinquishing passwords to social networking sites (e.g., Facebook, Instagram, etc.) discloses sensitive information not just about themselves, but also about everyone they are connected with online: friends, family, classmates, neighbors, and coworkers. The "Bring Your Own Device" policies at many companies mean that employees (and contractors) can use their personal devices in the workplace and/or connect remotely to company networks. Those connected devices can easily divulge company trade secrets and other sensitive information when seized by Customs and Border Protection (CBP) agents for analysis and data collection.

Plus, professionals such as attorneys and consultants are required to protect their clients' sensitive information. These professionals, who also must travel, require data security and privacy for business.

Wired also reported:

"In fact, US Customs and Border Protection has long considered US borders and airports a kind of loophole in the Constitution’s Fourth Amendment protections, one that allows them wide latitude to detain travelers and search their devices. For years, they’ve used that opportunity to hold border-crossers on the slightest suspicion, and demand access to their computers and phones with little formal cause or oversight.

Even citizens are far from immune. CBP detainees from journalists to filmmakers to security researchers have all had their devices taken out of their hands by agents."

For travelers wanting privacy, what are the options? Remain at home? This may not be an option for workers who must travel for business. Leave your devices at home? Again, impractical for many. The Wired article provided several suggestions, including:

"If customs officials do take your devices, don’t make their intrusion easy. Encrypt your hard drive with tools like BitLocker, TrueCrypt, or Apple’s Filevault, and choose a strong passphrase. On your phone—preferably an iPhone, given Apple’s track record of foiling federal cracking—set a strong PIN and disable Siri from the lockscreen by switching off “Access When Locked” under the Siri menu in Settings.

Remember also to turn your devices off before entering customs: Hard drive encryption tools only offer full protection when a computer is fully powered down. If you use TouchID, your iPhone is safest when it’s turned off, too..."
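The same principle can be applied in software to individual files a traveler especially wants to protect. Below is a minimal sketch using the third-party Python cryptography package (installed via pip); the file names are hypothetical, and this is an illustration of passphrase-based encryption, not a substitute for the full-disk encryption tools named above:

```python
# Minimal file-level encryption sketch using the "cryptography" package
# (pip install cryptography). Illustrative only; full-disk encryption
# such as BitLocker or FileVault protects far more than a single file.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a 32-byte Fernet key from a passphrase using PBKDF2."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=600_000,  # a high iteration count slows brute-force attempts
    )
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)  # stored alongside the ciphertext; the salt is not secret
key = key_from_passphrase(b"a long, strong passphrase", salt)
f = Fernet(key)

# "notes.txt" is a hypothetical file name for this example.
with open("notes.txt", "rb") as infile:
    ciphertext = f.encrypt(infile.read())
with open("notes.txt.enc", "wb") as outfile:
    outfile.write(salt + ciphertext)
```

As with full-disk encryption, the protection is only as strong as the passphrase, and the plaintext original must be securely deleted afterward.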

What are the consequences when travelers encrypt their devices and refuse to disclose passwords? Ars Technica also explored the issues:

"... Ars spoke with several legal experts, and contacted CBP itself (which did not provide anything beyond previously-published policies). The short answer is: your device probably will be seized (or "detained" in CBP parlance), and you might be kept in physical detention—although no one seems to be sure exactly for how long.

An unnamed CBP spokesman told The New York Times on Tuesday that such electronic searches are extremely rare: he said that 4,444 cellphones and 320 other electronic devices were inspected in 2015, or 0.0012 percent of the 383 million arrivals (presuming that all those people had one device)... The most recent public document to date on this topic appears to be an August 2009 Department of Homeland Security paper entitled "Privacy Impact Assessment for the Border Searches of Electronic Devices." That document states that "For CBP, the detention of devices ordinarily should not exceed five (5) days, unless extenuating circumstances exist." The policy also states that CBP or Immigration and Customs Enforcement "may demand technical assistance, including translation or decryption," citing a federal law, 19 US Code Section 507."

The Electronic Frontier Foundation (EFF) collects stories from travelers who've been detained and had their devices seized. Clearly, we will hear a lot more in the future about these privacy issues. What are your opinions of this?


Facebook Doesn't Tell Users Everything it Really Knows About Them

[Editor's note: today's guest post is by reporters at ProPublica. I've posted it because, a) many consumers don't know how their personal information is bought, sold, and used by companies and social networking sites; b) the USA is a capitalist society and the sensitive personal data that describes consumers is consumers' personal property; c) a better appreciation of "a" and "b" will hopefully encourage more consumers to be less willing to trade their personal property for convenience, and demand better privacy protections from products, services, software, apps, and devices; and d) when lobbyists and politicians act to erode consumers' property and privacy rights, hopefully more consumers will respond and act. Facebook is not the only social networking site that trades consumers' information. This news story is reprinted with permission.]

by Julia Angwin, Terry Parris Jr. and Surya Mattu, ProPublica

Facebook has long let users see all sorts of things the site knows about them, like whether they enjoy soccer, have recently moved, or like Melania Trump.

But the tech giant gives users little indication that it buys far more sensitive data about them, including their income, the types of restaurants they frequent and even how many credit cards are in their wallets.

Since September, ProPublica has been encouraging Facebook users to share the categories of interest that the site has assigned to them. Users showed us everything from "Pretending to Text in Awkward Situations" to "Breastfeeding in Public." In total, we collected more than 52,000 unique attributes that Facebook has used to classify users.

Facebook's site says it gets information about its users "from a few different sources."

What the page doesn't say is that those sources include detailed dossiers obtained from commercial data brokers about users' offline lives. Nor does Facebook show users any of the often remarkably detailed information it gets from those brokers.

"They are not being honest," said Jeffrey Chester, executive director of the Center for Digital Democracy. "Facebook is bundling a dozen different data companies to target an individual customer, and an individual should have access to that bundle as well."

When asked this week about the lack of disclosure, Facebook responded that it doesn't tell users about the third-party data because it's widely available and was not collected by Facebook.

"Our approach to controls for third-party categories is somewhat different than our approach for Facebook-specific categories," said Steve Satterfield, a Facebook manager of privacy and public policy. "This is because the data providers we work with generally make their categories available across many different ad platforms, not just on Facebook."

Satterfield said users who don't want that information to be available to Facebook should contact the data brokers directly. He said users can visit a page in Facebook's help center, which provides links to the opt-outs for six data brokers that sell personal data to Facebook.

Limiting commercial data brokers' distribution of your personal information is no simple matter. For instance, opting out of Oracle's Datalogix, which provides about 350 types of data to Facebook according to our analysis, requires "sending a written request, along with a copy of government-issued identification" in postal mail to Oracle's chief privacy officer.

Users can ask data brokers to show them the information stored about them. But that can also be complicated. One Facebook broker, Acxiom, requires people to send the last four digits of their social security number to obtain their data. Facebook changes its providers from time to time so members would have to regularly visit the help center page to protect their privacy.

One of us actually tried to do what Facebook suggests. While writing a book about privacy in 2013, reporter Julia Angwin tried to opt out from as many data brokers as she could. Of the 92 brokers she identified that accepted opt-outs, 65 of them required her to submit a form of identification such as a driver's license. In the end, she could not remove her data from the majority of providers.

ProPublica's experiment to gather Facebook's ad categories from readers was part of our Black Box series, which explores the power of algorithms in our lives. Facebook uses algorithms not only to determine the news and advertisements that it displays to users, but also to categorize its users in tens of thousands of micro-targetable groups.

Our crowd-sourced data showed us that Facebook's categories range from innocuous groupings of people who like southern food to sensitive categories such as "Ethnic Affinity," which categorizes people based on their affinity for African-Americans, Hispanics and other ethnic groups. Advertisers can target ads toward a group -- or exclude ads from being shown to a particular group.

Last month, after ProPublica bought a Facebook ad in its housing categories that excluded African-Americans, Hispanics and Asian-Americans, the company said it would build an automated system to help it spot ads that illegally discriminate.

Facebook has been working with data brokers since 2012 when it signed a deal with Datalogix. This prompted Chester, the privacy advocate at the Center for Digital Democracy, to file a complaint with the Federal Trade Commission alleging that Facebook had violated a consent decree with the agency on privacy issues. The FTC has never publicly responded to that complaint, and Facebook subsequently signed deals with five other data brokers.

To find out exactly what type of data Facebook buys from brokers, we downloaded a list of 29,000 categories that the site provides to ad buyers. Nearly 600 of the categories were described as being provided by third-party data brokers. (Most categories were described as being generated by clicking pages or ads on Facebook.)

The categories from commercial data brokers were largely financial, such as "total liquid investible assets $1-$24,999," "People in households that have an estimated household income of between $100K and $125K," or even "Individuals that are frequent transactor at lower cost department or dollar stores."

We compared the data broker categories with the crowd-sourced list of what Facebook tells users about themselves. We found none of the data broker information among the tens of thousands of "interests" that Facebook showed users.
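Mechanically, this kind of comparison boils down to set intersection. Here is a hedged Python sketch of such an analysis; the file names, column layout, and "Provided by" wording are assumptions made for illustration, not ProPublica's actual code or data formats:

```python
# Sketch of comparing advertiser-facing categories with the interests
# shown to users. File names and CSV layout are hypothetical.
import csv

# Categories offered to ad buyers: one row per category, with a
# "source" column noting whether a data broker provided it.
with open("ad_buyer_categories.csv", newline="") as f:
    rows = list(csv.DictReader(f))

broker_categories = {r["category"] for r in rows
                     if r["source"].startswith("Provided by")}

# Interests Facebook displays to users, crowd-sourced one per line.
with open("user_visible_interests.txt") as f:
    user_visible = {line.strip() for line in f if line.strip()}

overlap = broker_categories & user_visible
print(f"{len(broker_categories)} broker-supplied categories")
print(f"{len(overlap)} of them appear among user-visible interests")
```

An empty intersection, as with the crowd-sourced data described above, would mean none of the broker-supplied categories are ever surfaced to users as "interests."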

Our tool also allowed users to react to the categories they were placed in as being "wrong," "creepy" or "spot on." The category that received the most votes for "wrong" was "Farmville slots." The category that got the most votes for "creepy" was "Away from family." And the category that was rated most "spot on" was "NPR."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Boston Women's March And Local Law Enforcement

On Saturday, January 21, 2017, the Boston Police Department (BPD) posted the following statement about the Women's March on its Facebook page at 5:45 pm:

"To the tens of thousands who participated in today’s Women’s March on Boston Common earlier today, Saturday, January 21, 2017, the men and women of the Boston Police Department would like to thank you for the high levels of respectful and responsible behavior on display throughout the day. Said Commissioner Evans: "Really impressed with the amount of respect and courtesy shown to my officers by everybody attending today's Women’s March and I’d just like to personally thank everybody who demonstrated in a peaceful, polite and respectful manner."

The Boston Globe newspaper reported about the event:

"... the enormous crowd began streaming from Boston Common onto Charles Street, heading to Clarendon Street, where they turned around. So many people marched that it took more than an hour and a half to file out of the Common. City officials estimated that 175,000 attended the demonstration... The Boston event was one of more than 600 marches being held nationwide and globally, on the day after Trump took office... Speakers at the Boston kickoff included Warren, Mayor Martin J. Walsh of Boston, US Senator Edward J. Markey, and Attorney General Maura Healey... By about 1 p.m., marchers began to hit the streets, though the crowd was so big that many had to wait before they could get out of the Common. The gathering was almost evenly split between men and women, and a diverse range of agendas was represented: climate change, antiracism, and Trump’s ties to Russia. On Twitter, Boston police thanked protesters for remaining peaceful."

There were more demonstrations in Massachusetts: in Falmouth, Greenfield, Nantucket, Provincetown, Northampton, and Pittsfield. Social networking posts about the Boston event by the BPD on Twitter:

Tweet about Womens March by Boston Police Department. Click to view larger version

Tweets about Womens March by Boston Police Department. Click to view larger version

Respectful behavior all around: marchers and law enforcement. Congratulations and thanks to everyone involved, plus very respectful messages on social networking sites by the BPD. Hopefully, in the future more citizens and police departments around the country will follow Boston's lead. That is truly #BostonStrong.

Yes, I live and work in Boston. What happened in your city? How did your city's law enforcement respond? Share below.


Ashley Madison Operators Agree to Settlement With FTC And States

Ashley Madison home page image

The operators of the AshleyMadison.com dating site have agreed to a settlement with the U.S. Federal Trade Commission (FTC) for security lapses in a massive 2015 data breach. 37 million subscribers were affected, and the site's poor handling of its password-reset mechanism made accounts discoverable even though the site had promised otherwise. The site was known for helping married persons find extra-marital affairs.

The FTC complaint against Avid Life Media Inc. sought relief and refunds for subscribers. The complaint alleged that the dating site:

"... Defendants collect, maintain, and transmit a host of personal information including: full name; username; gender; address, including zip codes; relationship status; date of birth; ethnicity; height; weight; email address; sexual preferences and desired encounters; desired activities; photographs; payment card numbers; hashed passwords; answers to security questions; and travel locations and dates. Defendants also collect and maintain consumers’ communications with each other, such as messages and chats... Until August 2014, Defendants engaged in a practice of using “engager profiles” — that is, fake profiles created by Defendants’ staff who communicate with consumers in the same way that consumers would communicate with each other—as a way to engage or attract additional consumers to AshleyMadison.com. In 2014, there were 28,417 engager profiles on the website. All but 3 of the engager profiles were female. Defendants created these profiles using profile information, including photographs, from existing members who had not had any account activity within the preceding one or more years... Because these engager profiles contained the same type of information as someone who was actually using the website, there was no way for a consumer to determine whether an engager profile was fake or real. To consumers using AshleyMadison.com, the communications generated by engager profiles were indistinguishable from communications generated by actual members... When consumers signed up for AshleyMadison.com, Defendants explained that their system is “100% secure” because consumers can delete their “digital trail”.

More importantly, the complaint alleged that the operators of the site failed to protect subscribers' information in several key ways:

"a. failed to have a written organizational information security policy;
b. failed to implement reasonable access controls. For example, they: i) failed to regularly monitor unsuccessful login attempts; ii) failed to secure remote access; iii) failed to revoke passwords for ex-employees of their service providers; iv) failed to restrict access to systems based on employees’ job functions; v) failed to deploy reasonable controls to identify, detect, and prevent the retention of passwords and encryption keys in clear text files on Defendants’ network; and vi) allowed their employees to reuse passwords to access multiple servers and services;
c. failed to adequately train Defendants’ personnel to perform their data security- related duties and responsibilities;
d. failed to ascertain that third-party service providers implemented reasonable security measures to protect personal information. For example, Defendants failed to contractually require service providers to implement reasonable security; and
e. failed to use readily available security measures to monitor their system and assets at discrete intervals to identify data security events and verify the effectiveness of protective measures."

The above items read like a laundry list of everything not to do regarding information security. Several states also sued the site's operators. Toronto, Ontario-based Ruby Corporation (formerly called Avid Life Media), ADL Media Inc. (based in Delaware), and Ruby Life Inc. (d/b/a Ashley Madison) were named as defendants in the lawsuit. According to its website, Ruby Life operates several adult dating sites: Ashley Madison, Cougar Life, and Established Men.
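
The clear-text password and key storage cited in item (b)(v) deserves special mention, because the standard fix is cheap and widely available. Below is a minimal sketch in Python, using the open-source bcrypt library, of the salted-hash approach; it illustrates the general technique only and is not Ashley Madison's actual code (the function names are mine):

import bcrypt

def store_password(plaintext: str) -> bytes:
    # Hash with a per-user random salt. Never write the plaintext
    # to disk, logs, or configuration files.
    salt = bcrypt.gensalt()
    return bcrypt.hashpw(plaintext.encode("utf-8"), salt)

def verify_password(plaintext: str, stored_hash: bytes) -> bool:
    # Check a login attempt against the stored hash.
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

# Only the hash is persisted; the clear text is discarded immediately.
stored = store_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("wrong guess", stored)

A sketch this simple addresses only one item on the complaint's list, but it shows how little effort some of these basic protections require.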

The Ashley Madison site generated about $47 million in revenues in the United States during 2015. The site has members in 46 countries, and almost 19 million subscribers in the United States have created profiles since 2002. About 16 million of those profiles were male.

Terms of the settlement agreement require the operators to pay $1.6 million to settle FTC and state actions, and to implement a comprehensive data-security program with third-party assessments. About $828,500 is payable directly to the FTC within seven days, with an equal amount divided among participating states. If the defendants fail to make that payment to the FTC, then the full judgment of $8.75 million becomes due.

The defendants must submit a compliance report to the FTC one year after the settlement agreement. The third-party assessment program starts within 180 days of the settlement agreement and continues for 20 years, with reports every two years. The terms prohibit the site's operators and defendants from misrepresenting to persons in the United States how their online site and mobile app operate. Clearly, the use of fake profiles is prohibited.

The JD Supra site discussed the fake profiles:

"AshleyMadison/Ruby’s use of chat-bot-based fake or “engager profiles” that lured users into upgrading/paying for full memberships was also addressed in the complaint. According to a report in Fortune Magazine, men who signed up for a free AshleyMadison account would be immediately contacted by a bot posing as an interested woman, but would have to buy credits from AshleyMadison to reply.

Gizmodo, among many other sites, has examined the allegations of fake female bots or “engager profiles” used to entice male users who were using Ashley Madison’s free services to convert to paid services: “Ashley Madison created more than 70,000 female bots to send male users millions of fake messages, hoping to create the illusion of a vast playland of available women.” "

Thirteen states and the District of Columbia worked on this case with the FTC: Alaska, Arkansas, Hawaii, Louisiana, Maryland, Mississippi, Nebraska, New York, North Dakota, Oregon, Rhode Island, Tennessee, and Vermont. The State of Tennessee's share was about $57,000. Vermont Attorney General William H. Sorrell said:

“Creating fake profiles and selling services that are not delivered is unacceptable behavior for any dating website... I was pleased to see the FTC and the state attorneys general working together in such a productive and cooperative manner. Vermont has a long history of such cooperation, and it’s great to see that continuing.”

The Office of the Privacy Commissioner of Canada and the Office of the Australian Information Commissioner reached their own separate settlements with the company. Commissioner Daniel Therrien of the Office of the Privacy Commissioner of Canada said:

“In the digital age, privacy issues can impact millions of people around the world. It’s imperative that regulators work together across borders to ensure that the privacy rights of individuals are respected no matter where they live.”

Australian Privacy Commissioner Timothy Pilgrim stated:

"My office was pleased to work with the FTC and the Office of the Canadian Privacy Commissioner on this investigation through the APEC cross-border enforcement framework... Cross-border cooperation and enforcement is the future for privacy regulation in the global consumer age, and this cooperative approach provides an excellent model for enforcement of consumer privacy rights.”

Kudos to the FTC for holding a company's feet (and its officers' and executives' feet) to the fire to protect consumers' information.


High Tech Companies And A Muslim Registry

Since the Snowden disclosures in 2013, there have been plenty of news reports about how technology companies have assisted the U.S. government with surveillance programs. Those reports covered U.S. National Security Agency (NSA) surveillance programs that swept up innocent citizens, bulk collection of phone-call metadata, warrantless NSA searches of citizens' phone calls and emails, facial image collection, the identification of the technology company that collaborated most with NSA spying, fake cell phone towers (a/k/a 'stingrays') used by both federal government agencies and local police departments, and automated license plate readers used to track drivers.

You may also remember that, after Apple Computer's refusal to build a backdoor into its smartphones, the U.S. Federal Bureau of Investigation bought a hacking tool from a third party. Several tech companies built the Reform Government Surveillance site, while others actively pursue "Surveillance Capitalism" business goals.

During the 2016 political campaign, candidate (and now President-elect) Donald Trump said he would require all Muslims in the United States to register. Mr. Trump's words matter greatly given his lack of government experience: his words are all voters had to rely upon.

So, The Intercept asked several technology companies a key question about the next logical step: whether or not they are willing to help build and implement a Muslim registry:

"Every American corporation, from the largest conglomerate to the smallest firm, should ask itself right now: Will we do business with the Trump administration to further its most extreme, draconian goals? Or will we resist? This question is perhaps most important for the country’s tech companies, which are particularly valuable partners for a budding authoritarian."

The companies queried included IBM, Microsoft, Google, Facebook, Twitter, and others. What's been the response? Well, IBM focused on other areas of collaboration:

"Shortly after the election, IBM CEO Ginni Rometty wrote a personal letter to President-elect Trump in which she offered her congratulations, and more importantly, the services of her company. The six different areas she identified as potential business opportunities between a Trump White House and IBM were all inoffensive and more or less mundane, but showed a disturbing willingness to sell technology to a man with open interest in the ways in which technology can be abused: Mosque surveillance, a “virtual wall” with Mexico, shutting down portions of the internet on command, and so forth."

The response from many other companies has mostly been crickets. So far, only Twitter has flatly refused, and it included with its reply a link to its blog post about developer policies:

"Recent reports about Twitter data being used for surveillance, however, have caused us great concern. As a company, our commitment to social justice is core to our mission and well established. And our policies in this area are long-standing. Using Twitter’s Public APIs or data products to track or profile protesters and activists is absolutely unacceptable and prohibited.

To be clear: We prohibit developers using the Public APIs and Gnip data products from allowing law enforcement — or any other entity — to use Twitter data for surveillance purposes. Period. The fact that our Public APIs and Gnip data products provide information that people choose to share publicly does not change our policies in this area. And if developers violate our policies, we will take appropriate action, which can include suspension and termination of access to Twitter’s Public APIs and data products.

We have an internal process to review use cases for Gnip data products when new developers are onboarded and, where appropriate, we may reject all or part of a requested use case..."

Recently, a Trump-Pence supporter floated this trial balloon to justify such a registry:

"A prominent supporter of Donald J. Trump drew concern and condemnation from advocates for Muslims’ rights on Wednesday after he cited World War II-era Japanese-American internment camps as a “precedent” for an immigrant registry suggested by a member of the president-elect’s transition team. The supporter, Carl Higbie, a former spokesman for Great America PAC, an independent fund-raising committee, made the comments in an appearance on “The Kelly File” on Fox News...

“We’ve done it based on race, we’ve done it based on religion, we’ve done it based on region,” Mr. Higbie said. “We’ve done it with Iran back — back a while ago. We did it during World War II with Japanese.”

You can read the replies from nine technology companies at the Intercept site. Will other companies besides Twitter show that they have a spine? Whether or not such a registry ultimately violates the U.S. Constitution, we will definitely hear a lot more about this subject in the near future.


The List of Fake News Sites

New York Magazine reported:

"As Facebook and now Google face scrutiny for promoting fake news stories, Melissa Zimdars, a communication and media professor from Merrimack College in Massachusetts, has compiled a handy list of websites you should think twice about trusting. “Below is a list of fake, false, regularly misleading, and otherwise questionable ‘news’ organizations that are commonly shared on Facebook and other social media sites,” Zimdars explains. “Many of these websites rely on ‘outrage’ by using distorted headlines and decontextualized or dubious information in order to generate likes, shares, and profits.” (Click here to see the list.)

Be warned: Zimdars’s list is expansive in scope, and stretches beyond the bootleg sites (many of them headquartered in Macedonia) that write fake news for the sole reason of selling advertisements. Right-wing sources and conspiracy theorists like Breitbart and Infowars appear alongside pure (but often misinterpreted) satire like the Onion and The New Yorker’s Borowitz Report."

For consumers seeking "hard" news (e.g., the raw who, what, when, and where of events), some sources include the Associated Press (AP), Reuters, and United Press International (UPI). What sources do you use for "hard" news?


Facebook Says it Will Stop Allowing Some Advertisers to Exclude Users by Race

Facebook logo [Editor's note: Today's guest post was originally published by ProPublica on November 11, 2016. It is reprinted with permission. This prior post explained the problems with Facebook's racial advertising filters.]

by Julia Angwin, ProPublica

Facing a wave of criticism for allowing advertisers to exclude anyone with an "affinity" for African-American, Asian-American or Hispanic people from seeing ads, Facebook said it would build an automated system that would let it better spot ads that discriminate illegally.

Federal law prohibits ads for housing, employment and credit that exclude people by race, gender and other factors.

Facebook said it would build an automated system to scan advertisements and determine whether they offer services in these categories. Facebook will prohibit the use of its "ethnic affinities" targeting for such ads.

Facebook said its new system should roll out within the next few months. "We are going to have to build a solution to do this. It is not going to happen overnight," said Steve Satterfield, privacy and public policy manager at Facebook.

He said that Facebook would also update its advertising policies with "stronger, more specific prohibitions" against discriminatory ads for housing, credit and employment.

In October, ProPublica purchased an ad that targeted Facebook members who were house hunting and excluded anyone with an "affinity" for African-American, Asian-American or Hispanic people. When we showed the ad to a civil rights lawyer, he said it seemed like a blatant violation of the federal Fair Housing Act.

After ProPublica published an article about its ad purchase, Facebook was deluged with criticism. Four members of Congress wrote Facebook demanding that the company stop giving advertisers the option of excluding by ethnic group.

The federal agency that enforces the nation's fair housing laws said it was "in discussions" with Facebook to address what it termed "serious concerns" about the social network's advertising practices.

And a group of Facebook users filed a class-action lawsuit against Facebook, alleging that the company's ad-targeting technology violates the Fair Housing Act and the Civil Rights Act of 1964.

Facebook's Satterfield said that today's changes are the result of "a lot of conversations with stakeholders."

Facebook said the new system would not only scan the content of ads, but could also inject pop-up notices alerting buyers when they are attempting to purchase ads that might violate the law or Facebook's ad policies.
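
[Editor's note: Facebook has not published how its scanning system will work. Purely as an illustration of the kind of pre-purchase check described above, here is a hypothetical sketch in Python; the category labels, function name, and rules are invented for this example and are not Facebook's actual logic.]

# Hypothetical pre-purchase check: flag ads in legally protected
# categories, block any attempt to pair them with ethnic-affinity
# exclusions, and give other protected-category ads a policy notice.
PROTECTED_CATEGORIES = {"housing", "employment", "credit"}

def review_ad(category, excluded_affinities):
    # Returns 'block', 'warn', or 'allow' for a proposed ad purchase.
    if category in PROTECTED_CATEGORIES:
        if excluded_affinities:
            return "block"  # exclusion by affinity is prohibited here
        return "warn"       # alert the buyer with a pop-up policy notice
    return "allow"

print(review_ad("housing", ["African-American affinity"]))  # block
print(review_ad("retail", ["Hispanic affinity"]))           # allow

[Whatever form the production system takes, the two behaviors Facebook describes -- detecting protected ad categories and interrupting the purchase flow -- map onto checks of roughly this shape.]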

"We're glad to see Facebook recognizing the important civil rights protections for housing, credit and employment," said Rachel Goodman, staff attorney with the racial justice program at the American Civil Liberties Union. "We hope other online advertising platforms will recognize that ads in these areas need to be treated differently."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook Provides Members With Elections Ballot Previews

On October 28, 2016, the Facebook social networking site introduced a new feature that provides its voting-age users with previews of ballot candidates and questions. The site presented users with the following ad:

Facebook Elections Ballot ad. Click to view larger version

As with other ads on the site, users can disable this one. Users who select the "Preview Your Ballot" link next see three pop-up pages which explain the new feature:

Facebook Elections Ballot popup window. Click to view larger version

Then, users can preview their ballot based upon where they live, which includes national candidates running for office and ballot questions. To view local candidates running for office and local ballot questions, users must provide Facebook with their complete street address:

Facebook Elections Ballot landing page. Click to view larger version

Within the new feature, users can preview information about each candidate: Issue Positions, Endorsements, Recent Posts, and Website. "Issue Positions" links to content within the candidate's Facebook page; the "Endorsements" and "Recent Posts" selections link to similar content there. "Website" links to the candidate's external website. Issue Positions includes the topics you might expect: budget, civil rights, economy, education, energy, environment, foreign policy, guns, health, immigration, infrastructure, military, Social Security, taxes, terrorism, and more.

Why did Facebook introduce this new feature? According to a popup within the feature:

"You're seeing this because you may be in a state that has a voter registration deadline or election coming up. We want to help people have their voice heard in the elections this year, so we're showing this message to people who are old enough to vote - no matter who they support.

We send reminders about voting every now and then. If you'd rather not see these in the future, click or tap the in the top right corner of the reminder and select Hide Reminder, then Hide all voting reminders."

The official Facebook announcement on October 28 said:

"Voting is important... we’re encouraging civic participation. We want to make it easier for people who want to participate to do so, and to have a voice in the political process... Today, we’re introducing a new feature that shows you what’s on the ballot — from candidates to ballot initiatives. We also show you where the candidates stand on the issues...Not all states in America mail out sample ballots ahead of an election. This can make it challenging to find comprehensive information about the questions you’ll be expected to consider when you walk into the voting booth. Thanks to data gathered from election officials by the nonpartisan Center for Technology and Civic Life (CTCL), we can present you with a preview of the ballot you’ll receive on November 8. If you notice an issue with the CTCL data, we’ve built in a way for you to provide feedback and help correct the dataset.

Challenging to find information? What a load of bull. The Internet makes it easy to visit websites for candidates and ballot questions, and information is available from every state government. Example: ballot information in Massachusetts is available at websites run by the Secretary of the Commonwealth and the City of Boston. Sample ballots were available during the primaries, too. Every state in the Union has a Secretary of State whose website you should visit anyway for elections and other information. Find your state in this list.

I first saw Facebook's new Elections Ballot feature on November 2, 2016 -- five days after the announcement, and less than a week before Election Day on November 8. You'd think that Facebook would have introduced this feature sooner; ideally, as soon as the main parties had nominated their candidates. Facebook didn't. Not good. And the feature's availability may be too late for early voters.

What else is happening with this new feature? Several items are worth mentioning. First, executives at Facebook are probably well aware that two-thirds of the site's users get their news at the site. This new feature is clearly an attempt to keep users within the Facebook bubble: to increase the amount of time spent on the site and the number of pages viewed within it.

Second, the accuracy of the new feature is suspect. I have never shared my residential address with Facebook, so the elections feature displayed 4 ballot questions when there are actually 5 where I live. The fifth question is a local ballot initiative. Users like me, who haven't provided street address information, may get a wrong impression of what's on their ballot -- if they fail to read the fine print. And we know that too many consumers never read the fine print.

Third, the local candidates and ballot questions are a slick way for Facebook to force users to share their residential street address information. Fourth, the new feature is an opportunity to capture users' voting information -- not the official ballots, of course, but the next closest thing. Users can select which candidates are their Favorites and share those choices with their Friends: people, coworkers, classmates, family, neighbors, and others they are connected to at the site. Favoriting a candidate within this new feature seems like a pretty explicit and accurate proxy for an official ballot:

Facebook Elections Ballot. Links to learn about or favorite a candidate. Click to view larger version

Fifth, armed with this ballot information about its users, Facebook can probably charge more to advertisers (e.g., political campaigns, political action committees, pollsters, data brokers) interested in purchasing information about voting populations and/or buying targeted ads at the site. Consider this report by BuzzFeed from November 2014:

"At some point in the next two years, the pollsters and ad makers who steer American presidential campaigns will be stumped: The nightly tracking polls are showing a dramatic swing in the opinions of the electorate, but neither of two typical factors — huge news or a major advertising buy — can explain it. They will, eventually, realize that the viral, mass conversation about politics on Facebook and other platforms has finally emerged as a third force in the core business of politics, mass persuasion.

Facebook is on the cusp — and I suspect 2016 will be the year this becomes clear — of replacing television advertising as the place where American elections are fought and won. The vast new network of some 185 million Americans opens the possibility, for instance, of a congressional candidate gaining traction without the expense of television, and of an inexpensive new viral populism. The way people share will shape the outcome of the presidential election."

It seems that day has arrived. Shape the conversation and outcome, indeed. It's all driven by data -- big data -- data mining.

Sixth, the new feature raises questions and issues for users. Should Facebook know your voting decisions? Does Facebook have a right to know your voting decisions? Has Facebook earned the right to know your voting decisions? Facebook is a money-making enterprise, so it will sell your information to as many other companies as possible. According to the October 28 announcement:

"How you vote is a personal matter, and we’ve taken steps to make sure that you have utmost control over your plan. After you make a selection, you have to choose who you want to be able to see it (“Only me” or “Friends”). For example, you may want to be private about your choice for president, but share with friends your pick for a congressional race or a ballot initiative."

The language in the announcement confusingly refers to the Facebook feature as voting, when it isn't. Do all of your friends need to know your voting preferences? What about friends whose Facebook profiles are open to the general public? In that case, anybody wandering by can view your voting information. Is that what you really want?

Not me. What happens in the voting booth stays in the voting booth. I may express concerns on Facebook, but my final vote is private. No doubt, some consumers will share their voting preferences without considering the implications.

I visited the CTCL website and found it underwhelming, lacking the key information needed to understand what this organization really is and does. Not good.

What are your opinions of Facebook's new elections and ballot feature?