290 posts categorized "Social Networking"

Some Surprising Facts About Facebook And Its Users

The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (as of the March/April 2018 survey) from 71% during 2014-2015. More teens use other social media services: YouTube (85%), Instagram (72%), and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps, the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays the items it predicts users will engage with, meaning click the "Like" or other reaction buttons. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.
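Facebook's actual ranking system is proprietary, but the engagement-driven filtering described above can be illustrated with a toy sketch. Everything here -- the post data, the scores, the cutoff -- is hypothetical:

```python
# Hypothetical illustration of engagement-based feed ranking.
# This is NOT Facebook's actual algorithm, which is proprietary.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # model's estimate that you will react to it

def rank_feed(posts, limit=3):
    """Show only the posts the model predicts you will engage with most;
    everything else -- including news you might disagree with -- is dropped."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:limit]

posts = [
    Post("friend_a", "News story you agree with", 0.92),
    Post("group_b", "News story you disagree with", 0.15),
    Post("friend_c", "Vacation photos", 0.80),
    Post("page_d", "Local news you rarely click", 0.22),
]

for post in rank_feed(posts):
    print(post.author, post.predicted_engagement)
```

Note how the low-engagement news story never reaches the feed at all -- the "online bubble" effect discussed above.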

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it yet, PBS broadcast a two-part documentary titled "The Facebook Dilemma" (see trailer below), which arguably could have been titled "The Dark Side of Sharing." The Frontline documentary rightly discusses Facebook's approaches to news and privacy, its focus on growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained that during an April:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for greater transparency and control claims by Mr. Zuckerberg. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means, if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. This means Facebook acts more like a massive advertising network than simply a social media service.

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


Facebook Lowers Its Number of Breach Victims And Explains How Hackers Broke In And Stole Data

In an October 12th Security Update, Facebook lowered the number of users affected during its latest data breach, and explained how hackers broke into its systems and stole users' information during the data breach it first announced on September 28th. During the data breach:

"... the attackers already controlled a set of accounts, which were connected to Facebook friends. They used an automated technique to move from account to account so they could steal the access tokens of those friends, and for friends of those friends, and so on, totaling about 400,000 people. In the process, however, this technique automatically loaded those accounts’ Facebook profiles, mirroring what these 400,000 people would have seen when looking at their own profiles. That includes posts on their timelines, their lists of friends, Groups they are members of, and the names of recent Messenger conversations. Message content was not available to the attackers, with one exception. If a person in this group was a Page admin whose Page had received a message from someone on Facebook, the content of that message was available to the attackers.

The attackers used a portion of these 400,000 people’s lists of friends to steal access tokens for about 30 million people. For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches. For 1 million people, the attackers did not access any information."
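The account-to-account movement Facebook describes is essentially a breadth-first traversal of the social graph, which fans out very quickly. A toy sketch, using an entirely made-up graph (this is an illustration of the traversal pattern, not the attackers' actual tooling):

```python
# Toy illustration of friend-to-friend fan-out on a hypothetical social graph.
from collections import deque

def reachable_accounts(graph, start, max_hops):
    """Breadth-first search: every account reachable within max_hops friend links."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        account, hops = queue.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the hop limit
        for friend in graph.get(account, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return seen

# A small synthetic social graph (adjacency lists).
graph = {
    "attacker_controlled": ["alice", "bob"],
    "alice": ["carol", "dave"],
    "bob": ["dave", "erin"],
    "carol": ["frank"],
}

print(len(reachable_accounts(graph, "attacker_controlled", max_hops=2)))  # prints 6
```

With real accounts averaging hundreds of friends each, a couple of hops from a small set of controlled accounts easily reaches the hundreds of thousands of profiles Facebook reported.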

Facebook promises to notify the 30 million breach victims. While it lowered the number of breach victims from 50 million to 30 million, this still isn't good. 30 million is still a lot of users. And, hackers stole the juiciest data elements -- contact and profile information -- about breach victims, enabling them to conduct more fraud against victims, their families, friends, and coworkers. Plus, note the phrase: "the attackers already controlled a set of accounts." This suggests the hackers created bogus Facebook accounts, had the sign-in credentials (e.g., username, password) of valid accounts, or both. Not good.

Moreover, there is probably more bad news coming as other affected companies assess the (collateral) damage. Experts said that Facebook's latest breach may be worse than reported, since many companies participate in the Facebook Connect program. Not good.

The timeline of the data breach and the intrusion detection are troubling. Facebook admitted that the vulnerability hackers exploited existed from July 2017 to September 2018, when it noticed "an unusual spike of activity that began on September 14, 2018." While it is good that Facebook's tech team noticed the intrusion, the bad news is that the long window during which the vulnerability existed gave hackers plenty of time to plot and do damage. That the hackers used automated tools suggests they knew about the vulnerabilities for a long time... long enough to decide what to do, and then build automated tools to steal users' information. Where was Facebook's quality assurance (QA) testing department during all of this? Not good.

This latest data breach included a tiny bit of good news:

"This attack did not include Messenger, Messenger Kids, Instagram, WhatsApp, Oculus, Workplace, Pages, payments, third-party apps, or advertising or developer accounts."

Meanwhile, Facebook runs TV advertisements for its new Portal, a voice-activated device with a video screen, always-listening microphone, and camera for video chats within homes.  BuzzFeed reported:

"Portal’s debut comes at a time when Facebook is struggling to reassure the public that it’s capable of protecting users’ privacy... In promoting Portal, Facebook is emphasizing the devices’ security... The company asserts that it doesn't listen or view the content of Portal calls, and the Smart Camera’s artificial intelligence–powered tracking doesn’t run on Facebook servers or use facial recognition. Audio snippets of voice commands can also be deleted from your Facebook Activity Log... because Portal relies on Facebook’s Messenger service, those calls are still under the purview of Facebook’s data privacy policy. The company collects information about “the people, Pages, accounts, hashtags and groups you are connected to and how you interact with them across our Products, such as people you communicate with the most or groups you are part of.” This means that Facebook will know who you’re talking to on Portal and for how long."

BuzzFeed also listed several comments by users. Some are skeptical of privacy promises:

Tweet #1 about Facebook Portal

Here's another comment:

Who is going to buy Portal while investigation results from this latest data breach, and from the Cambridge Analytica scandal, are still murky? What other system and software vulnerabilities exist? Would you buy Portal?


NPR Podcast: 'The Weaponization Of Social Media'

Any technology can be used for good or for bad. Social media is no exception. A recent data breach study in Australia listed the vulnerabilities of social media. And a 2016 study found social media "attractive to vulnerable narcissists."

How have social media sites and mobile apps been used as weapons? The podcast below features an interview with P.W. Singer and Emerson Brooking, authors of a new book, "LikeWar: The Weaponization of Social Media." The authors cite real-world examples of how social media sites and mobile apps have been used during conflicts and demonstrations around the globe -- and continue to be used.

A Kirkus book review stated:

"... Singer and Brooking sagely note the intensity of interpersonal squabbling online as a moral equivalent of actual combat, and they also discuss how "humans as a species are uniquely ill-equipped to handle both the instantaneity and the immensity of information that defines the social media age." The United States seems especially ill-suited, since in the Wild West of the internet, our libertarian tendencies have led us to resist what other nations have put in place, including public notices when external disinformation campaigns are uncovered and “legal action to limit the effect of poisonous super-spreaders.” Information literacy, by this account, becomes a “national security imperative,” one in which the U.S. is badly lagging..."

The new book "LikeWar" is available at several online bookstores, including Barnes and Noble, Powell's, and Amazon. Now, listen to the podcast:


'Got Another Friend Request From You' Warnings Circulate On Facebook. What's The Deal?

Several people have posted on their Facebook News Feeds messages with warnings, such as:

"Please do not accept any new Friend requests from me"

And:

"Hi … I actually got another friend request from you yesterday … which I ignored so you may want to check your account. Hold your finger on the message until the forward button appears … then hit forward and all the people you want to forward too … I had to do the people individually. Good Luck!"

Maybe, you've seen one of these warnings. Some of my Facebook friends posted these warnings in their News Feed or in private messages via Messenger. What's happening? The fact-checking site Snopes explained:

"This message played on warnings about the phenomenon of Facebook “pirates” engaging in the “cloning” of Facebook accounts, a real (but much over-hyped) process by which scammers target existing Facebook users accounts by setting up new accounts with identical profile pictures and names, then sending out friend requests which appear to originate from those “cloned” users. Once those friend requests are accepted, the scammers can then spread messages which appear to originate from the targeted account, luring that person’s friends into propagating malware, falling for phishing schemes, or disclosing personal information that can be used for identity theft."

Hacked Versus Cloned Accounts

While everyone wants to warn their friends, it is important to do your homework first. Many Facebook users have confused "hacked" versus "cloned" accounts. A hack is when another person has stolen your password and used it to sign into your account to post fraudulent messages -- pretending to be you.

Snopes described above what a "cloned" account is... basically a second, unauthorized account. Sadly, there are plenty of online sources from which scammers can obtain stolen photos and information to create cloned accounts. One source is the multitude of massive corporate data breaches: Equifax, Nationwide, Facebook, the RNC, Uber, and others. Another source is Facebook friends with sloppy security settings on their accounts: the "Public" setting is no security at all, and it allows scammers to harvest your information via your friends' wide-open accounts.

It is important to know the differences between "hacked" and "cloned" accounts. Snopes advised:

"... there would be no utility to forwarding [the above] warning to any of your Facebook friends unless you had actually received a second friend request from one of them. Moreover, even if this warning were possibly real, the optimal approach would not be for the recipient to forward it willy-nilly to every single contact on their friends list... If you have reason to believe your Facebook account might have been “cloned,” you should try sending separate private messages to a few of your Facebook friends to check whether any of them had indeed recently received a duplicate friend request from you, as well as searching Facebook for accounts with names and profile pictures identical to yours. Should either method turn up a hit, use Facebook’s "report this profile" link to have the unauthorized account deactivated."

Cloned Accounts

If you received a (second) Friend Request from a person who you are already friends with on Facebook, then that suggests a cloned account. (Cloned accounts are not new. It's one of the disadvantages of social media.) Call your friend on the phone or speak with him/her in-person to: a) tell him/her you received a second Friend Request, and b) determine whether or not he/she really sent that second Friend Request. (Yes, online privacy takes some effort.) If he/she didn't send a second Friend Request, then you know what to do: report the unauthorized profile to Facebook, and then delete the second Friend Request. Don't accept it.

If he/she did send a second Friend Request, ask why. (Let's ignore the practice by some teens to set up multiple accounts; one for parents and a second for peers.) I've had friends -- adults -- forget their online passwords, and set up a second Facebook account -- a clumsy, confusing solution. Not everyone has good online skills. Your friend will tell you which account he/she uses and which account he/she wants you to connect to. Then, un-Friend the other account.

Hacked Accounts

All Facebook users should know how to determine whether their Facebook account has been hacked. Online privacy takes effort. How to check:

  1. Sign into Facebook.
  2. Select "Settings."
  3. Select "Security and Login."
  4. You will see a list of the locations where your account has been accessed. If one or more of the locations weren't you, then it's likely another person has stolen and used your password. Proceed to step #5.
  5. For each location that wasn't you, select "Not You" and then "Secure Account." Follow the online instructions displayed and change your password immediately.

I've performed this check after friends have (erroneously) informed me that my account was hacked. It wasn't.

Facebook Search and Privacy Settings

Those wanting to be proactive can search the Facebook site for other persons using the same name. Simply enter your name in the search box. The results page lists other accounts with the same name. If you see another account using your identical profile photo (and/or other identical personal information and photos), then use Facebook's "report this profile" link to report the unauthorized account.

You can go one step further and warn your Facebook friends who have the "Public" security setting on their accounts. They may be unaware of the privacy risks, and once informed may change their security setting to "Friends Only." Hopefully, they will listen.

If they don't listen, you can suggest that he/she at a minimum change other privacy settings. Users control who can see their photos and list of friends on Facebook. To change the privacy setting, navigate to your Friends List page and select the edit icon. Then, select the "Edit Privacy" link. Next, change both privacy settings for "Who can see your friends?" and "Who can see the people, Pages, and lists you follow?" to "Only Me." As a last resort, you can un-Friend the security neophyte if he/she refuses to make any changes to their security settings.


Why The Recent Facebook Data Breach Is Probably Much Worse Than You First Thought

The recent data breach at Facebook may be much worse than first thought. It's not only the fact that a known 50 million users were affected, and 40 million more may also be affected. There's more. The New York Times reported on Tuesday:

"... the impact could be significantly bigger since those stolen credentials could have been used to gain access to so many other sites. Companies that allow customers to log in with Facebook Connect are scrambling to figure out whether their own user accounts have been compromised."

Facebook Connect, an online tool launched in 2008, allows users to sign into other apps and websites using their Facebook credentials (e.g., username, password). Many small, medium, and large businesses joined the Facebook Connect program, which offered:

"... a simple proposition: Connect to our platform, and we’ll make it faster and easier for people to use your apps... The tool was adopted by thousands of other firms, from mom-and-pop publishing companies to high-profile tech outfits like Airbnb and Uber."

Initially, Facebook Connect made online life easier and more convenient. Users could sign up for new apps and sites without having to create and remember new sign-in credentials:

"But in July 2017, that measure of security fell short. By exploiting three software bugs, attackers forged “access tokens,” digital keys used to gain entry to a user’s account. From there, the hackers were able to do anything users could do on their own Facebook accounts, including logging in to third-party apps."

On Tuesday, Facebook released a "Login Update," which said in part:

"We have now analyzed our logs for all third-party apps installed or logged in during the attack we discovered last week. That investigation has so far found no evidence that the attackers accessed any apps using Facebook Login.

Any developer using our official Facebook SDKs — and all those that have regularly checked the validity of their users’ access tokens – were automatically protected when we reset people’s access tokens. However, out of an abundance of caution, as some developers may not use our SDKs — or regularly check whether Facebook access tokens are valid — we’re building a tool to enable developers to manually identify the users of their apps who may have been affected, so that they can log them out."
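The "regularly check the validity of access tokens" advice can be sketched in a few lines. The `/debug_token` endpoint and its `is_valid` field come from Facebook's Graph API documentation, but treat the exact request and response shape as an assumption to verify against the current docs:

```python
# Sketch of a server-side token-validity check of the kind Facebook describes.
# Assumes the Graph API /debug_token endpoint; verify details in current docs.
import json
from urllib.request import urlopen
from urllib.parse import urlencode

def token_is_valid(debug_response: dict) -> bool:
    """Interpret a /debug_token response: the token is usable only if the
    API marks it valid and reports no error."""
    data = debug_response.get("data", {})
    return bool(data.get("is_valid")) and "error" not in data

def check_token(user_token: str, app_token: str) -> bool:
    """Ask Facebook whether a stored user token is still valid."""
    query = urlencode({"input_token": user_token, "access_token": app_token})
    with urlopen(f"https://graph.facebook.com/debug_token?{query}") as resp:
        return token_is_valid(json.load(resp))

# After Facebook resets tokens, a stored token comes back marked invalid,
# so the app should log that user out:
print(token_is_valid({"data": {"is_valid": False, "error": {"code": 190}}}))  # prints False
```

An app that ran a check like this on each request would have logged affected users out as soon as Facebook reset their tokens, which is exactly the automatic protection the update describes.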

So, there is more news to come about this. According to The New York Times, some companies' experiences so far:

"Tinder, the dating app, has found no evidence that accounts have been breached, based on the "limited information Facebook has provided," Justine Sacco, a spokeswoman for Tinder and its parent company, the Match Group, said in a statement... The security team at Uber, the ride-hailing giant, is logging some users out of their accounts to be cautious, said Melanie Ensign, a spokeswoman for Uber. It is asking them to log back in — a preventive measure that would invalidate older, stolen access tokens."


Facebook Data Breach Affected 90 Million Users. Users Claim Facebook Blocked Posts About the Breach

On Friday, Facebook announced a data breach which affected about 50 million users of the social networking service. Facebook engineers discovered the hack on September 25th. The Facebook announcement explained:

"... that attackers exploited a vulnerability in Facebook’s code that impacted “View As” a feature that lets people see what their own profile looks like to someone else. This allowed them to steal Facebook access tokens which they could then use to take over people’s accounts. Access tokens are the equivalent of digital keys that keep people logged in to Facebook so they don’t need to re-enter their password every time they use the app... This attack exploited the complex interaction of multiple issues in our code. It stemmed from a change we made to our video uploading feature in July 2017, which impacted “View As.” The attackers not only needed to find this vulnerability and use it to get an access token, they then had to pivot from that account to others to steal more tokens."

Many mobile users will see the security update message in the image displayed on the right. Facebook said it has fixed the vulnerability, notified law enforcement, turned off the "View As" feature until the breach investigation is finished, and has already reset the access tokens of about 90 million users.

Why the higher number of 90 million and not 50 million? According to the announcement:

"... we have reset the access tokens of the almost 50 million accounts we know were affected to protect their security. We’re also taking the precautionary step of resetting access tokens for another 40 million accounts that have been subject to a “View As” look-up in the last year. As a result, around 90 million people will now have to log back in to Facebook, or any of their apps that use Facebook Login. After they have logged back in, people will get a notification at the top of their News Feed explaining what happened."

So, 90 million users affected and 50 million known for sure. What to make of this? Wait for findings in the completed breach investigation. Until then, we won't know exactly how attackers broke in, what they stole, and the true number of affected users.

What else to make of this? Facebook's announcement skillfully avoided any direct mentions of exactly when the attack started. The announcement stated that the vulnerability was related to a July 2017 change to the video uploading feature. So, the attack could have started soon after that. Facebook didn't say, and it may not know. Hopefully, the final breach investigation report will clarify things.

And, there is more disturbing news.

Some users have claimed that Facebook blocked them from posting messages about the data breach. TechCrunch reported:

"Some users are reporting that they are unable to post [the] story about a security breach affecting 50 million Facebook users. The issue appears to only affect particular stories from certain outlets, at this time one story from The Guardian and one from the Associated Press, both reputable press outlets... some users, including members of the staff here at TechCrunch who were able to replicate the bug, were met with the following error message which prevented them from sharing the story."

Error message displayed to some users trying to post about the Facebook data breach

Well, we now know that -- for better or for worse -- Facebook has an automated tool to identify spam content in real time. And this tool can easily misidentify legitimate content as spam. Not good.

Reportedly, this error message problem has been fixed. Regardless, it should never have happened. The data breach is big news. Clearly, many people want to read and post about it. Popularity does not indicate spam. And Facebook owes users an explanation about its automated tool.

Did Facebook notify you directly of its data breach? Did you get this spam error message? How concerned are you? Please share your experience and opinions below.


Tips For Parents To Teach Their Children Online Safety

Today's children often use mobile devices at very young ages... four, five, or six years of age. And they don't know anything about online dangers: computer viruses, stalking, cyber-bullying, identity theft, phishing scams, ransomware, and more. Nor do they know how to read terms-of-use and privacy policies. It is parents' responsibility to teach them.

NordVPN, a maker of privacy software, offers several tips to help parents teach their children about online safety:

"1. Set an example: If you want your kid to be careful and responsible online, you should start with yourself."

Children watch their parents. If you practice good online safety habits, they will learn from watching you. And:

"2. Start talking to your kid early and do it often: If your child already knows how to play a video on Youtube or is able to download a gaming app without your help, they also should learn how to do it safely. Therefore, it’s important to start explaining the basics of privacy and cybersecurity at an early age."

So, long before having the "sex talk" with your children, parents should have the online safety talk. Developing good online safety habits at a young age will help children throughout their lives; especially as adults:

"3. Explain why safe behavior matters: Give relatable examples of what personal information is – your address, social security number, phone number, account credentials, and stress why you can never share this information with strangers."

You wouldn't give this information to a stranger on a city street. The same applies online. That also means discussing social media:

"4. Social media and messaging: a) don’t accept friend requests from people you don’t know; b) never send your pictures to strangers; c) make sure only your friends can see what you post on Facebook; d) turn on timeline review to check posts you are tagged in before they appear on your Facebook timeline; e) if someone asks you for some personal information, always tell your parents; f) don’t share too much on your profile (e.g., home address, phone number, current location); and g) don’t use your social media logins to authorize apps."

These are the basics. Read the entire list of online safety tips for parents by NordVPN.


Besieged Facebook Says New Ad Limits Aren’t Response to Lawsuits

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series monitoring Facebook's attempts to clean up its advertising systems and tools. It is reprinted with permission.]

By Ariana Tobin and Jeremy B. Merrill, ProPublica

Facebook’s move to eliminate 5,000 options that enable advertisers on its platform to limit their audiences is unrelated to lawsuits accusing it of fostering housing and employment discrimination, the company said Wednesday.

“We’ve been building these tools for a long time and collecting input from different outside groups,” Facebook spokesman Joe Osborne told ProPublica.

Tuesday’s blog post announcing the elimination of categories that the company has described as “sensitive personal attributes” came four days after the Department of Justice joined a lawsuit brought by fair housing groups against Facebook in federal court in New York City. The suit contends that advertisers could use Facebook’s options to prevent racial and religious minorities and other protected groups from seeing housing ads.

Raising the prospect of tighter regulation, the Justice Department said that the Communications Decency Act of 1996, which gives immunity to internet companies from liability for content on their platforms, did not apply to Facebook’s advertising portal. Facebook has repeatedly cited the act in legal proceedings in claiming immunity from anti-discrimination law. Congress restricted the law’s scope in March by making internet companies more liable for ads and posts related to child sex-trafficking.

Around the same time the Justice Department intervened in the lawsuit, the Department of Housing and Urban Development (HUD) filed a formal complaint against Facebook, signaling that it had found enough evidence during an initial investigation to raise the possibility of legal action against the social media giant for housing discrimination. Facebook has said that its policies strictly prohibit discrimination, that over the past year it has strengthened its systems to protect against misuse, and that it will work with HUD to address the concerns.

“The Fair Housing Act prohibits housing discrimination including those who might limit or deny housing options with a click of a mouse,” Anna María Farías, HUD’s assistant secretary for fair housing and equal opportunity, said in a statement accompanying the complaint. “When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it’s the same as slamming the door in someone’s face.”

Regulators in at least one state are also scrutinizing Facebook. Last month, the state of Washington imposed legally binding compliance requirements on the company, barring it from offering advertisers the option of excluding protected groups from seeing ads about housing, credit, employment, insurance or “public accommodations of any kind.”

Advertising is the primary source of revenue for the social media giant, which is under siege on several fronts. A recent study and media coverage have highlighted how hate speech and false rumors on Facebook have spurred anti-refugee discrimination in Germany and violence against minority ethnic groups such as the Rohingya in Myanmar. This week, Facebook said it had found evidence of Russian and Iranian efforts to influence elections in the U.S. and around the world through fake accounts and targeted advertising. It also said it had suspended more than 400 apps “due to concerns around the developers who built them or how the information people chose to share with the app may have been used.”

Facebook declined to identify most of the 5,000 options being removed, saying that the information might help bad actors game the system. It did say that the categories could enable advertisers to exclude racial and religious minorities, and it provided four examples that it deleted: “Native American culture,” “Passover,” “Evangelicalism” and “Buddhism.” It said the changes will be completed next month.

According to Facebook, these categories have not been widely used by advertisers to discriminate, and their removal is intended to be proactive. In some cases, advertisers legitimately use these categories to reach key audiences. According to targeting data from ads submitted to ProPublica’s Political Ad Collector project, Jewish groups used the “Passover” category to promote Jewish cultural events, and the Michael J. Fox Foundation used it to find people of Ashkenazi Jewish ancestry for medical research on Parkinson’s disease.

Facebook is not limiting advertisers’ options for narrowing audiences by age or sex. The company has defended age-based targeting in employment ads as beneficial for employers and job seekers. Advertisers may also still target or exclude by ZIP code — which critics have described as “digital red-lining” but which Facebook says is standard industry practice.

A pending suit in federal court in San Francisco alleges that, by allowing employers to target audiences by age, Facebook is enabling employment discrimination against older job applicants. Peter Romer-Friedman, a lawyer representing the plaintiffs in that case, said that Facebook’s removal of the 5,000 options “is a modest step in the right direction.” But allowing employers to sift job seekers by age, he added, “shows what Facebook cares about: its bottom line. There is real money in age-restricted discrimination.”

Senators Bob Casey of Pennsylvania and Susan Collins of Maine have asked Facebook for more information on what steps it is taking to prevent age discrimination on the site.

The issue of discriminatory advertising on Facebook arose in October 2016 when ProPublica revealed that advertisers on the platform could narrow their audiences by excluding so-called “ethnic affinity” categories such as African-Americans and Spanish-speaking Hispanics. At the time, Facebook promised to build a system to flag and reject such ads. However, a year later, we bought dozens of rental housing ads that excluded protected categories. They were approved within seconds. So were ads that excluded older job seekers, as well as ads aimed at anti-Semitic categories such as “Jew hater.”

The removal of the 5,000 options isn’t Facebook’s first change to its advertising portal in response to such criticism. Last November, it added a self-certification option, which asks housing advertisers to check a box agreeing that their advertisement is not discriminatory. The company also plans to require advertisers to read educational material on the site about ethical practices.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook To Remove Onavo VPN App From Apple App Store

Not all Virtual Private Network (VPN) software is created equal. Some do a better job at protecting your privacy than others. Mashable reported that Facebook:

"... plans to remove its Onavo VPN app from the App Store after Apple warned the company that the app was in violation of its policies governing data gathering... For those blissfully unaware, Onavo sold itself as a virtual private network that people could run "to take the worry out of using smartphones and tablets." In reality, Facebook used data about users' internet activity collected by the app to inform acquisitions and product decisions. Essentially, Onavo allowed Facebook to run market research on you and your phone, 24/7. It was spyware, dressed up and neatly packaged with a Facebook-blue bow. Data gleaned from the app, notes the Wall Street Journal, reportedly played into the social media giant's decision to start building a rival to the Houseparty app. Oh, and its decision to buy WhatsApp."

Thanks Apple! We've all heard of the #FakeNews hashtag on social media. Yes, there is a #FakeVPN hashtag, too. So, buyer beware... online user beware.


Keep An Eye On Facebook's Moves To Expand Its Collection Of Financial Data About Its Users

On Monday, the Wall Street Journal reported that the social media giant had approached several major banks about sharing detailed financial information about their customers in order "to boost user engagement." Reportedly, Facebook approached JPMorgan Chase, Wells Fargo, Citigroup, and U.S. Bancorp. The detailed financial information sought included debit, credit, and prepaid card transactions and checking account balances.

The Reuters news service also reported on the talks. The Reuters story mentioned the above banks, plus PayPal and American Express. Then, in a reply, Facebook said that the Wall Street Journal news report was wrong. TechCrunch reported:

"Facebook spokesperson Elisabeth Diana tells TechCrunch it’s not asking for credit card transaction data from banks and it’s not interested in building a dedicated banking feature where you could interact with your accounts. It also says its work with banks isn’t to gather data to power ad targeting, or even personalize content... Facebook already lets Citibank customers in Singapore connect their accounts so they can ping their bank’s Messenger chatbot to check their balance, report fraud or get customer service’s help if they’re locked out of their account... That chatbot integration, which has no humans on the other end to limit privacy risks, was announced last year and launched this March. Facebook works with PayPal in more than 40 countries to let users get receipts via Messenger for their purchases. Expansions of these partnerships to more financial services providers could boost usage of Messenger by increasing its convenience — and make it more of a centralized utility akin to China’s WeChat."

There's plenty in the TechCrunch story. Reportedly, Diana's statement said that banks approached Facebook, and that it already partners:

"... with banks and credit card companies to offer services like customer chat or account management. Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates... The idea is that messaging with a bank can be better than waiting on hold over the phone – and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences – not for advertising or anything else. A critical part of these partnerships is keeping people’s information safe and secure."

What to make of this? First, it really doesn't matter who approached whom. There's plenty of history. Way back in 2012, a German credit reporting agency approached Facebook. So, the financial sector is fully aware of the valuable data collected by Facebook.

Second, users doing business on the platform have already given Facebook permission to collect transaction data. Third, while Facebook's reply was about its users generally, its statement said "no" but sounded more like a "yes." Why? Basically, "account linking," or the convenience of purchase notifications, is the hook -- the way into collecting users' financial transaction data. Existing practices, such as fitness apps and music sharing, show how "account linking" is already used for data collection. Whatever users share on the platform, Facebook can collect.

Fourth, the push to collect more banking data appears at best poorly timed, and at worst -- arrogant. Facebook is still trying to recover and regain users' trust after 87 million persons were affected by the massive data breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement against tech companies, like Google and Facebook. Facebook has stumbled as its screening to identify political ads has incorrectly flagged news sites instead of politicians. Facebook CEO Mark Zuckerberg didn't help matters with his bumbling comments while failing to explain his company's stumbles in identifying and preventing fake news.

Gary Cohn, President Donald Trump's former chief economic adviser, sharply criticized social media companies, including Facebook, for allowing fake news:

"In 2008 Facebook was one of those companies that was a big platform to criticize banks, they were very out front of criticizing banks for not being responsible citizens. I think banks were more responsible citizens in 2008 than some of the social media companies are today."

So, it seems wise to keep an eye on Facebook as it attempts to expand its collection of consumers' financial information. Fifth, banks and banking executives bear some responsibility, too. A guest post on Forbes explained:

"Whether this [banking] partnership pans out or not, the Facebook plans are a reminder that banks sit on mountains of wealth much more valuable than money. Because of the speed at which tech giants move, banks must now make sure their clients agree on who owns their data, consent to the use of them, and understand with who they are shared. For that, it is now or never... In the financial industry, trust between a client and his provider is of primary importance. You can’t sell a customer’s banking data in the same way you sell his or her internet surfing behavior. Finance executives understand this: they even see the appropriate use of customer data as critical to financial stability. It is now or never to define these principles on the use of customer data... It’s why we believe new binding guidelines such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act are welcome, even if they have room for improvement... A report by the US Treasury published earlier this week called on Congress to enact a federal data security and breach notification law to protect consumer financial data. The principles outlined above can serve as guidance to lawmakers drafting legislation, and bank executives considering how to respond to advances by Facebook and other big techs..."

Consumers should control their data -- especially financial data. If those rules are not put in place, then consumers have truly lost control of the sensitive personal and financial information that describes them. What are your opinions?


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."
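The "trust level" described in the first patent application can be imagined as a simple score computed from profile signals, compared against a threshold before granting perks like access to locked display cases. The sketch below is purely illustrative; the patent does not disclose a formula, and every field name, weight, and threshold here is invented:

```python
# Toy illustration of a "trust level" score for a shopper matched to a
# social profile. All fields, weights, and the threshold are invented
# for this sketch; the actual patent discloses no formula.
def trust_level(profile):
    score = 0.0
    score += 0.4 if profile.get("account_age_years", 0) >= 2 else 0.0
    score += 0.1 * min(profile.get("trusted_friends", 0), 5)  # cap the friend signal
    score += 0.1 if profile.get("verified_email") else 0.0
    return score

def eligible_for_unlocked_case(profile, threshold=0.7):
    """A shopper deemed "trustworthy" gets special in-store treatment."""
    return trust_level(profile) >= threshold

shopper = {"account_age_years": 5, "trusted_friends": 4, "verified_email": True}
print(eligible_for_unlocked_case(shopper))  # True
```

The unsettling part is not the arithmetic, which is trivial, but the inputs: characteristics of a shopper's friends feeding a score the shopper never sees.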

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows identified users to accept or decline a name label for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so that it can perform this matching while suppressing name labels for users who declined them or disabled the setting. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
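The matching process the spokeswoman describes amounts to a nearest-neighbor lookup: reduce a face to a numeric "code," compare it against the codes of opted-in named profiles, and discard non-matches. The following is a minimal sketch of that idea, not Facebook's actual system; the names, the tiny 4-dimensional codes (real systems use embeddings of 128 or more dimensions from a neural network), and the distance threshold are all invented:

```python
import math

# Hypothetical database of "face codes" for named, opted-in people.
NAMED_DATABASE = {
    "alice": [0.10, 0.80, 0.30, 0.55],
    "bob":   [0.90, 0.20, 0.70, 0.10],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_face(unknown_code, database, threshold=0.25, opted_in=frozenset()):
    """Return the name of the closest opted-in profile, or None.

    Mirrors the described behavior: profiles whose facial recognition
    setting is off are never matched, and an unknown face (no profile
    within the distance threshold) yields None, at which point the
    system would delete the facial data.
    """
    best_name, best_dist = None, threshold
    for name, code in database.items():
        if name not in opted_in:
            continue  # skip users with the setting turned off
        dist = euclidean(unknown_code, code)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

probe = [0.12, 0.78, 0.31, 0.54]  # a face very close to "alice"
print(match_face(probe, NAMED_DATABASE, opted_in={"alice", "bob"}))  # alice
print(match_face(probe, NAMED_DATABASE, opted_in={"bob"}))           # None
```

Note what this sketch makes concrete about the critics' point: the opt-out check happens only at lookup time, so every face in every photo must still be scanned and encoded for the check to work at all.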

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in 2017 Apple launched its iPhone X with the Face ID feature for users to unlock their phones. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps, you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent for shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


Federal Investigation Into Facebook Widens. Company Stock Price Drops

The Boston Globe reported on Tuesday (links added):

"A federal investigation into Facebook’s sharing of data with political consultancy Cambridge Analytica has broadened to focus on the actions and statements of the tech giant and now involves three agencies, including the Securities and Exchange Commission, according to people familiar with the official inquiries.

Representatives for the FBI, the SEC, and the Federal Trade Commission have joined the Justice Department in its inquiries about the two companies and the sharing of personal information of 71 million Americans... The Justice Department and the other federal agencies declined to comment. The FTC in March disclosed that it was investigating Facebook over possible privacy violations..."

About 87 million persons were affected by the Facebook breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement on tech companies, like Google and Facebook.

After news broke about the wider probe, shares of Facebook stock fell as much as 18 percent before recovering somewhat, for a net drop of about 2 percent. That 2 percent drop represents roughly $12 billion in valuation, implying a market capitalization near $600 billion. Clearly, there will be more news (and stock price fluctuations) to come.

During the last few months, there has been plenty of news about Facebook.


Facebook’s Screening for Political Ads Nabs News Sites Instead of Politicians

[Editor's note: today's post, by reporters at ProPublica, discusses new advertising rules at the Facebook.com social networking service. It is reprinted with permission.]

By Jeremy B. Merrill and Ariana Tobin, ProPublica

One ad couldn’t have been more obviously political. Targeted to people aged 18 and older, it urged them to “vote YES” on June 5 on a ballot proposition to issue bonds for schools in a district near San Francisco. Yet it showed up in users’ news feeds without the “paid for by” disclaimer required for political ads under Facebook’s new policy designed to prevent a repeat of Russian meddling in the 2016 presidential election. Nor does it appear, as it should, in Facebook’s new archive of political ads.

The other ad was from The Hechinger Report, a nonprofit news outlet, promoting one of its articles about financial aid for college students. Yet Facebook’s screening system flagged it as political. For the ad to run, The Hechinger Report would have to undergo the multi-step authorization and authentication process of submitting Social Security numbers and identification that Facebook now requires for anyone running “electoral ads” or “issue ads.”

When The Hechinger Report appealed, Facebook acknowledged that its system should have allowed the ad to run. But Facebook then blocked another ad from The Hechinger Report, about an article headlined, “DACA students persevere, enrolling at, remaining in, and graduating from college.” This time, Facebook rejected The Hechinger Report’s appeal, maintaining that the text or imagery was political.

As these examples suggest, Facebook’s new screening policies to deter manipulation of political ads are creating their own problems. The company’s human reviewers and software algorithms are catching paid posts from legitimate news organizations that mention issues or candidates, while overlooking straightforwardly political posts from candidates and advocacy groups. Participants in ProPublica’s Facebook Political Ad Collector project have submitted 40 ads that should have carried disclaimers under the social network’s policy, but didn’t. Facebook may have underestimated the difficulty of distinguishing between political messages and political news coverage — and the consternation that failing to do so would stir among news organizations.

The rules require anyone running ads that mention candidates for public office, are about elections, or that discuss any of 20 “national issues of public importance” to verify their personal Facebook accounts and add a "paid for by" disclosure to their ads, which are to be preserved in a public archive for seven years. Advertisers who don’t comply will have their ads taken down until they undergo an "authorization" process, submitting a Social Security number, driver’s license photo, and home address, to which Facebook sends a letter with a code to confirm that anyone running ads about American political issues has an American home address. The complication is that the 20 hot-button issues — environment, guns, immigration, values, foreign policy, civil rights and the like — are likely to pop up in posts from news organizations as well.

"This could be really confusing to consumers because it’s labeling news content as political ad content," said Stefanie Murray, director of the Center for Cooperative Media at Montclair State University.

The Hechinger Report joined trade organizations representing thousands of publishers earlier this month in protesting this policy, arguing that the filter lumps their stories in with the very organizations and issues they are covering, thus confusing readers already wary of "fake news." Some publishers — including larger outlets like New York Media, which owns New York Magazine — have stopped buying ads on political content they expect would be subject to Facebook’s ad archive disclosure requirement.

"When it comes to news, Facebook still doesn’t get it. In its efforts to clear up one bad mess, it seems set on joining those who want to blur the line between reality-based journalism and propaganda," Mark Thompson, chief executive officer of The New York Times, said in prepared remarks at the Open Markets Institute on Tuesday, June 12th.

In a statement Wednesday June 13th, Campbell Brown, Facebook’s head of global news partnerships, said the company recognized "that news content was different from political and issue advertising," and promised to create a "differentiated space within our archive to separate news content from political and issue ads." But Brown rejected the publishers’ request for a "whitelist" of legitimate news organizations whose ads would not be considered political.

"Removing an entire group of advertisers, in this case publishers, would go against our transparency efforts and the work we’re doing to shore up election integrity on Facebook," she wrote. "We don’t want to be in a position where a bad actor obfuscates its identity by claiming to be a news publisher." Many of the foreign agents that bought ads to sway the 2016 presidential election, the company has said, posed as journalistic outlets.

Her response didn’t satisfy news organizations. Facebook "continues to characterize professional news and opinion as ‘advertising’ — which is both misguided and dangerous," said David Chavern, chief executive of the News Media Alliance — a trade association representing 2,000 news organizations in the U.S. and Canada — and co-author of an open letter to Facebook on June 11.

ProPublica asked Facebook to explain its decision to block 14 advertisements shared with us by news outlets. Of those, 12 were ultimately rejected as political content, one was overturned on appeal, and one Facebook could not locate in its records. Most of these publications, including The Hechinger Report, are affiliated with the Institute for Nonprofit News, a consortium of mostly small nonprofit newsrooms that produce primarily investigative journalism (ProPublica is a member).

Here are a few examples of news organization ads that were rejected as political:

  • Voice of Monterey Bay tried to boost an interview with labor leader Dolores Huerta headlined "She Still Can." After the ad ran for about a day, Facebook sent an alert that the ad had been turned off. The outlet is refusing to seek approval for political ads, “since we are a news organization,” said Julie Martinez, co-founder of the nonprofit news site.
  • Ensia tried to advertise an article headlined: "Opinion: We need to talk about how logging in the Southern U.S. is harming local residents." It was rejected as political. Ensia will not appeal or buy new ads until Facebook addresses the issue, said senior editor David Doody.
  • inewsource tried to promote a post about a local candidate, headlined: "Scott Peters’ Plea to Get San Diego Unified Homeless Funding Rejected." The ad was rejected as political. inewsource appealed successfully, but then Facebook changed its mind and rejected it again, a spokeswoman for the social network said.
  • BirminghamWatch tried to boost a post about a story headlined, "‘That is Crazy:’ 17 Steps to Cutting Checks for Birmingham Neighborhood Projects." The ad was rejected as political and rejected again on appeal. A little while later, BirminghamWatch’s advertiser on the account received a message from Facebook: "Finish boosting your post for $15, up to 15,000 people will see it in NewsFeed and it can get more likes, comments, and shares." The nonprofit news site appealed again, and the ad was rejected again.

For most of its history, Facebook treated political ads like any other ads. Last October, a month after disclosing that "inauthentic accounts… operated out of Russia" had spent $100,000 on 3,000 ads that "appeared to focus on amplifying divisive social and political messages," the company announced it would implement new rules for election ads. Then in April, it said the rules would also apply to issue-related ads.

The policy took effect last month, at a time when Facebook’s relationship with the news industry was already rocky. A recent algorithm change reduced the number of posts from news organizations that users see in their news feed, thus decreasing the amount of traffic many media outlets can bring in without paying for wider exposure, and frustrating publishers who had come to rely on Facebook as a way to reach a broader audience.

Facebook has pledged to assign 3,000-4,000 "content moderators" to monitor political ads, but hasn’t reached that staffing level yet. The company told ProPublica that it is committed to meeting the goal by the U.S. midterm elections this fall.

To ward off "bad actors who try to game our enforcement system," Facebook has kept secret its specific parameters and keywords for determining if an ad is political. It has published only the list of 20 national issues, which it says is based in part on a data-coding system developed by a network of political scientists called the Comparative Agendas Project. A director on that project, Frank Baumgartner, said the lack of transparency is problematic.

"I think [filtering for political speech] is a puzzle that can be solved by algorithms and big data, but it has to be done right and the code needs to be transparent and publicly available. You can’t have proprietary algorithms determining what we see," Baumgartner said.

However Facebook’s algorithms work, they are missing overtly political ads. Incumbent members of Congress, national advocacy groups and advocates of local ballot initiatives have all run ads on Facebook without the social network’s promised transparency measures, after they were supposed to be implemented.

Ads from Senator Jeff Merkley, Democrat-Oregon, Representative Don Norcross, Democrat-New Jersey, and Representative Pramila Jayapal, Democrat-Washington, all ran without disclaimers as recently as this past Monday. So did an ad from Alliance Defending Freedom, a right-wing group that represented a Christian baker whose refusal for religious reasons to make a wedding cake for a gay couple was upheld by the Supreme Court this month. And ads from NORML, the marijuana legalization advocacy group and MoveOn, the liberal organization, ran for weeks before being taken down.

ProPublica asked Facebook why these ads weren’t considered political. The company said it is reviewing them. "Enforcement is never perfect at launch," it said.

Clarification, June 15, 2018: This article has been updated to include more specific information about the kinds of advertising New York Media has stopped buying on Facebook’s platform.




What Facebook’s New Political Ad System Misses

[Editor's Note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

By Jeremy B. Merrill, Ariana Tobin, and Madeleine Varner, ProPublica

Facebook’s long-awaited change in how it handles political advertisements is only a first step toward addressing a problem intrinsic to a social network built on the viral sharing of user posts.

The company’s approach, a searchable database of political ads and their sponsors, depends on the company’s ability to sort through huge quantities of ads and identify which ones are political. Facebook is betting that a combination of voluntary disclosure and review by both people and automated systems will close a vulnerability that was famously exploited by Russian meddlers in the 2016 election.

The company is doubling down on tactics that so far have not prevented the proliferation of hate-filled posts, or of ads that use Facebook’s capability to target particular groups.

If the policy works as Facebook hopes, users will learn who has paid for the ads they see. But the company is not revealing details about a significant aspect of how political advertisers use its platform — the specific attributes the ad buyers used to target a particular person for an ad.

Facebook’s new system is the company’s most ambitious response thus far to the now-documented efforts by Russian agents to circulate items that would boost Donald Trump’s chances or suppress Democratic turnout. The new policies announced Thursday will make it harder for somebody trying to exploit the precise vulnerabilities in Facebook’s system exploited by the Russians in 2016 in several ways:

First, political ads that you see on Facebook will now include the name of the organization or person who paid for it, reminiscent of disclaimers required on political mailers and TV ads. (The ads Facebook identified as placed by Russians carried no such tags.)

The Federal Election Commission requires political ads to carry such clear disclosures but as we have reported, many candidates and groups on Facebook haven’t been following that rule.

Second, all political ads will be published in a searchable database.

Finally, the company will now require that anyone buying a political ad in their system confirm that they’re a U.S. resident. Facebook will even mail advertisers a postcard to make certain they’re in the U.S. Facebook says ads by advertisers whose identities aren’t verified under this process will be taken down starting in about a week, and they will be blocked from buying new ads until they have verified themselves.

While the new system can still be gamed, the specific tactics used by the Russian Internet Research Agency, such as an overseas purchase of ads promoting a Black Lives Matter rally under the name “Blacktivist,” will become harder — or at least harder to do without getting caught.

The company has also pledged to devote more employees to the issue, including 3,000-4,000 more content moderators. But Facebook says these will not be additional hires — they will be included in the 20,000 already promised to tackle various moderation issues in the coming months.

What Is Facebook Missing?

The most obvious flaw in Facebook’s new system is that it misses ads it should catch. Right now, it’s easy to find political ads that are missing from their archive. Take this one, from the Washington State Democratic Party. Just minutes after Facebook finished announcing its launch of the tool, a participant in ProPublica’s Facebook Political Ad Collector project saw this ad, criticizing Republican congresswoman Cathy McMorris Rodgers… but it wasn’t in the database.

And there are others.

The company acknowledged that the process is still a work in progress, reiterating its request that users pitch in by reporting the political ads that lack disclosures.

Even as Facebook’s system gets better at identifying political ads, the company is withholding a critical piece of information in the ads it’s publishing. While we’ll see some demographic information about who saw a given ad, Facebook is not indicating which audiences the advertiser intended to target — categories that often include racial or political characteristics and which have been controversial in the past.

This information is critical to researchers and journalists trying to make sense of political advertising on Facebook. Take, for instance, this ad promoting the environmental benefits of nuclear power, from a group called Nuclear Matters: the group chose specifically to show it to people interested in veganism — a fact we wouldn’t know from looking at the demographics of the users who saw the ad.

Facebook said it considers the information about who saw an ad — age, gender and location — sufficient. Rob Leathern, Facebook’s Director of Product Management, said that the limited demographics-only breakdown “offers more transparency than the intent, in terms of showing the targeting.”

The company is also promising to launch an API, a technical tool which will allow outsiders to write software that would look for patterns in the new ad database. The company says it will launch an API “later this summer” but hasn’t said what data it will contain or who will have access to it.

ProPublica’s own Facebook Ad Collector tool, which also collects political ads spotted on Facebook, has an API that can be accessed by anyone. It also includes the targeting information — which users can also see on each ad that they view.

Facebook said it would not release data about ads flagged by users as political and then rejected by the system. We’re curious about those, and we know firsthand that their software can be imperfect. We’ve attempted to buy ads specifically about our journalism that were flagged as problematic — because the ads “contained profanity,” or were misclassified as discriminatory ads for “employment, credit or housing opportunities” by mistake.

Facebook’s track record on initiatives aimed at improving the transparency of its massively profitable advertising system is spotty. The company has said it’s going to rely in part on artificial intelligence to review ads — the same sort of technology that the company said in the past it would use to block discriminatory ads for housing, employment and credit opportunities.

When we tested the system almost a year after a ProPublica story showed Facebook was allowing advertisers to target housing ads in a way that violated Fair Housing Act protections, we found that the company was still approving housing ads that excluded African-Americans and other “multicultural affinities” from seeing them. The company was pressured to implement several changes to its ad portal and a Fair Housing group filed a lawsuit against the company.

Facebook also plans to rely in part on users to find and report political ads that get through the system without the required disclosures.

But its track record of moderating user-flagged content — when it comes to both hate speech and advertising — has been uneven. Last December, ProPublica brought 49 cases of user-flagged offensive speech to Facebook, and the company acknowledged that its moderators had made the wrong call in 22 of them.

The company admits it's playing a “cat and mouse game” with people trying to slip political ads through its system unnoticed. Just last month, Ohio Democratic gubernatorial candidate Richard Cordray’s campaign ran Facebook ads criticizing his opponent — but from a page called “Ohio Primary Info.”

The need for ad transparency goes way beyond Russian bad actors. Our tool has already caught scams and malware disguised as politics, which users raised as a problem years before Facebook made any meaningful change.

If you flag an ad to Facebook, please report it to us as well by sending an email to political.ads@propublica.org. We will be watching to see how well Facebook responds when users flag an ad.

How Will They Enforce the New Rules?

It’s one thing to create a set of rules, and another to enforce them consistently and on a large scale.

Facebook, which kept its content moderation and hate speech policies secret until they were revealed by ProPublica, won’t share the specific rules governing political ad content or details about the instructions moderators receive.

Leathern said the company is keeping the rules secret to frustrate the efforts of “bad actors who try to game our enforcement systems.”

Facebook has said it’s looking to flag both electoral ads and those that take a position on its list of twenty “national legislative issues of public importance.” These range from the concrete, like “abortion” and “taxes,” to broad topics like “health” and “values.”

Facebook acknowledges its system will make mistakes and says it will improve over time. Ads for specific candidates are relatively easy to detect. “We’ll likely miss ads when they aim to persuade,” said Katie Harbath, Facebook’s Global Politics and Government Outreach Director.

We plan to keep an eye out for ads that don’t make it into the archive. We’ll be looking for ads that our Political Ad Collector tool finds that aren’t in Facebook’s database.

Want to Help?

We need your help building out our independent database of political ads! If you’re still reading this article, we’re giving you permission to stop and install the Political Ad Collector extension. Here’s what you need to know about how it works.

You can also help us find other people who can install the tool. We are especially in need of people who aren’t ProPublica readers already. We need people from a diverse set of backgrounds, and with different perspectives and political beliefs. Please encourage your friends and relatives — especially the ones you avoid talking politics with — to install it.

Do You Work at a News Outlet and Want to Partner With Us on This?

Awesome. We’re already working with quite a few newsrooms all over the world, including the CBC in Canada, Bridge Magazine in Michigan, The Guardian in Australia and more.

In the U.S., we’re trying to get eyes and ears on the ground in as many local elections as possible. If your readers would be interested in joining our transparency effort, please reach out. We’re happy to send more information about this and our larger Electionland project.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "link tracking," which Facebook and many social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."

Lovely. And Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."
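To see how thin the disguise is, the shim URL quoted in the EFF's example above can be unpicked with a few lines of Python. The real destination is simply percent-encoded in the `u` query parameter, and the standard library decodes it; the shortened `h` click-tracking token below is illustrative, not the full value:

```python
from urllib.parse import urlparse, parse_qs

# A link shim wraps the real destination in a facebook.com URL.
# The "u" parameter holds the percent-encoded destination; the "h"
# parameter (shortened here) is an opaque click-tracking token.
shim = "https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8"

# parse_qs splits the query string and percent-decodes each value.
query = parse_qs(urlparse(shim).query)
real_destination = query["u"][0]
print(real_destination)  # https://eff.org/pb
```

The point of the exercise: everything needed to reach the destination is in the URL itself, so the only thing the extra round trip through l.facebook.com adds is a record of who clicked.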

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.


Twitter Advised Its Users To Change Their Passwords After Security Blunder

Yesterday, Twitter.com advised all of its users to change their passwords after a huge security blunder exposed users' passwords online in an unprotected format. The social networking service released a statement on May 3rd:

"We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password."

Security experts advise consumers not to use the same password at several sites or services. Repeated use of the same password makes it easy for criminals to hack into multiple sites or services.

The statement by Twitter.com also explained that it masks users' passwords:

"... through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
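The scheme Twitter describes can be sketched in Python. This is an illustrative stand-in, not Twitter's code: Twitter says it uses bcrypt, which in Python is a third-party package, so the sketch uses the standard library's PBKDF2 to show the same salted, deliberately slow hashing idea. Only the salt and digest are ever meant to be stored — which is exactly why writing passwords to a log *before* this step was a blunder:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for a password using a slow, salted hash.
    PBKDF2 stands in here for bcrypt, which Twitter actually uses."""
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # Re-derive the digest from the supplied password and compare in
    # constant time; the plaintext password is never stored anywhere.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because the stored digest cannot be reversed into the password, a leak of the hash database is far less damaging than a leak of a plaintext log.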

The good news: Twitter found the bug itself. The not-so-good news: the statement was short on details. It did not explain what fixes will prevent this blunder from happening again, nor say how many users were affected. Twitter has about 330 million users, and since the advisory went to everyone, it seems that all users were affected.


How to Wrestle Your Data From Data Brokers, Silicon Valley — and Cambridge Analytica

[Editor's note: today's guest post, by reporters at ProPublica, discusses data brokers you may not know, the data collected and archived about consumers, and options for consumers to (re)gain as much privacy as possible. It is reprinted with permission.]

By Jeremy B. Merrill, ProPublica

Cambridge Analytica thinks that I’m a "Very Unlikely Republican." Another political data firm, ALC Digital, has concluded I’m a "Socially Conservative," Republican, "Boomer Voter." In fact, I’m a 27-year-old millennial with no set party allegiance.

For all the fanfare, the burgeoning field of mining our personal data remains an inexact art.

One thing is certain: My personal data, and likely yours, is in more hands than ever. Tech firms, data brokers and political consultants build profiles of what they know — or think they can reasonably guess — about your purchasing habits, personality, hobbies and even what political issues you care about.

You can find out what those companies know about you, but be prepared to be stubborn. Very stubborn. To demonstrate how this works, we’ve chosen a couple of representative companies from three major categories: data brokers, big tech firms and political data consultants.

Few of them make it easy. Some will show you your data on their websites; others make you request your digital profile via U.S. mail. And then there’s Cambridge Analytica, the controversial Trump campaign vendor that has come under intense fire in light of a report in the British newspaper The Observer and in The New York Times that the company used improperly obtained data from Facebook to help build voter profiles.

To find out what the chaps at the British data firm have on you, you’re going to need both stamps and a "cheque."

Once you see your data, you’ll have a much better understanding of how this shadowy corner of the new economy works. You’ll see what seemingly personal information they know about you … and you’ll probably have some hypotheses about where this data is coming from. You’ll also probably see some predictions about who you are that are hilariously wrong.

And if you do obtain your data from any of these companies, please let us know your thoughts at politicaldata@propublica.org. We won’t share or publish what you say (unless you tell us that’s it’s OK).

Cambridge Analytica and Other Political Consultants

Making statistically informed guesses about Americans’ political beliefs and pet issues is a common business these days, with dozens of firms selling data to candidates and issue groups about the purported leanings of individual American voters.

Few of these firms have to give you your data. But Cambridge Analytica is required to do so by an obscure European rule.

Cambridge Analytica:

Around the time of the 2016 election, Paul-Olivier Dehaye, a Belgian mathematician and founder of a website that helps people exercise their data protection rights called PersonalData.IO, approached me with an idea for a story. He flagged some of Cambridge Analytica’s claims about the power of its "psychographic" targeting capabilities and suggested that I demand my data from them.

So I sent off a request, following Dehaye’s coaching, and citing the UK Data Protection Act 1998, the British implementation of a little-known European Union data-protection law that grants individuals (even Americans) the right to see the data European companies compile about them.

It worked. I got back a spreadsheet of data about me. But it took months, cost ten pounds — and I had to give them a photo ID and two utility bills. Presumably they didn’t want my personal data falling into the wrong hands.

How You Can Request Your Data From Cambridge Analytica:

  1. Visit Cambridge Analytica’s website here and fill out this web form.
  2. After you submit the form, the page will immediately request that you email to data.compliance@cambridgeanalytica.org a photo ID and two copies of your utility bills or bank statements, to prove your identity. This page will also include the company’s bank account details.
  3. Find a way to send them 10 GBP. You can try wiring this from your bank, though it may cost you an additional $25 or so — or ask a friend in the UK to go to their bank and get a cashier’s check. Your American bank probably won’t let you write a GBP-denominated check. Two services I tried, Xoom and TransferWise, weren’t able to do it.
  4. Eventually, Cambridge Analytica will email you a small Excel spreadsheet of information and a letter. You might have to wait a few weeks. Celeste LeCompte, ProPublica’s vice president of business development, requested her data on March 27 and still hasn’t received it.

Because the company is based in the United Kingdom, it had no choice but to fulfill my request. In recent weeks, the firm has come under intense fire after The New York Times and the British paper The Observer disclosed that it had used improperly obtained data from Facebook to build profiles of American voters. Facebook told me that data about me was likely transmitted to Cambridge Analytica because a person with whom I am "friends" on the social network had taken the now-infamous "This Is Your Digital Life" quiz. For what it’s worth, my data shows no sign of anything derived from Facebook.

What You Might Get Back From Cambridge Analytica:

Cambridge Analytica had generated 13 data points about my views: 10 political issues, ranked by importance; two guesses at my partisan leanings (one blank); and a guess at whether I would turn out in the 2016 general election.

They told me that the lower the rank, the higher the predicted importance of the issue to me.

Alongside that data labeled "models" were two other types of data that are run-of-the-mill and widely used by political consultants. One sheet of "core data" — that is, personal info, sliced and diced a few different ways, perhaps to be used more easily as parameters for a statistical model. It included my address, my electoral district, the census tract I live in and my date of birth.

The spreadsheet included a few rows of "election returns" — previous elections in New York State in which I had voted. (Intriguingly, Cambridge Analytica missed that I had voted in 2015’s snoozefest of a vote-for-five-of-these-five judicial election. It also didn’t know about elections in which I had voted in North Carolina, where I lived before I lived in New York.)

ALC Digital

ALC Digital is another data broker, which says its "audiences are built from multi-sourced, verified information about an individual." Its data is distributed via Oracle Data Cloud, a service that lets advertisers target specific audiences of people — like, perhaps, people who are Boomer Voters and also Republicans.

The firm brags in an Oracle document posted online about how hard it is to avoid their data collection efforts, saying, "It has no cookies to erase and can’t be ‘cleared.’ ALC Real World Data is rooted in reality, and doesn’t rely on inferences or faulty models."

How You Can Request Your Data From ALC Digital:

Here’s how to find the predictions about your political beliefs data in Oracle Data Cloud:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Click on the Partner Segments tab.
  3. Scroll on through until you find ALC Digital.

You may have to scroll for a while before you find it.

And not everyone appears to have data from ALC Digital, so don’t be shocked if you can’t find it. If you don’t, there may be other fascinating companies with data about who you are in your Oracle file.

What You Might Get Back From ALC Digital:

When I downloaded the data last year, it said I was "Socially Conservative," "Boomer Voter" — as well as a female voter and a tax reform supporter.

Recently, when I checked my data, those categories had disappeared entirely from my data. I had nothing from ALC Digital.

ALC Digital is not required to release this data. It is disclosed via the Oracle Data Cloud. Fran Green, the company’s president, said that Aristotle, a longtime political data company, “provides us with consumer data that populates these audiences.” She also said that “we do not claim to know people’s ‘beliefs.’”

Big Tech

Big tech firms like Google and Facebook tend to make their money by selling ads, so they build extensive profiles of their users’ interests and activities. They also depend on their users’ goodwill to keep us voluntarily giving them our locations, our browsing histories and plain ol’ lists of our friends and interests. (So far, these popular companies have not faced much regulation.) Both make it easy to download the data that they keep on you.

Firms like Google and Facebook don’t sell your data — because it’s their competitive advantage. Google’s privacy page screams in 72-point type: "We do not sell your personal information to anyone." Instead, as sites we visit frequently, they sell access to our attention: companies that want to reach you in particular can do so through these companies’ sites or other sites that feature their ads.

Facebook

How You Can Request Your Data From Facebook:

You of course have to have a Facebook account and be logged in:

  1. Visit https://www.facebook.com/settings on your computer.
  2. Click the “Download a copy of your Facebook data” link.
  3. On the next page, click “Start My Archive.”
  4. Enter your password, then click “Start My Archive” again.
  5. You’ll get an email immediately, and another one saying “Your Facebook download is ready” when your data is ready to be downloaded. You’ll get a notification on Facebook, too. Mine took just a few minutes.
  6. Once you get that email, click the link, then click Download Archive. Then reenter your password, which will start a zip file downloading.
  7. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.” You’ll get a folder called something like “facebook-jeremybmerrill,” but, of course, with your username instead of mine.
  8. Open the folder and double-click “index.htm” to open it in your web browser.

What You Might Get Back From Facebook

Facebook designed its archive to first show you your profile information. That’s all information you typed into Facebook and that you probably intended to be shared with your friends. It’s no surprise that Facebook knows what city I live in or what my AIM screen name was — I told Facebook those things so that my friends would know.

But it’s a bit of a surprise that they decided to feature a list of my ex-girlfriends — what they blandly termed "Previous Relationships" — so prominently.

As you dig deeper in your archive, you’ll find more information that you gave Facebook, but that you might not have expected the social network to keep hold of for years: if you’re me, that’s the Nickelback concert I apparently RSVPed to, posts about switching high schools and instant messages from my freshman year in college.

But finally, you’ll find the creepier information: what Facebook knows about you that you didn’t tell it, on the "Ads" page. You’ll find "Ads Topics" that Facebook decided you were interested in, like Housing, ESPN or the town of Ellijay, Georgia. And, you’ll find a list of advertisers who have obtained your contact information and uploaded it to Facebook, as part of a so-called Custom Audience of specific people to whom they want to show their ads.

You’ll find more of that creepy information on your Ads Preferences page. Despite Mark Zuckerberg telling Rep. Jerry McNerney, D-Calif., in a hearing earlier this month that “all of your information is included in your ‘download your information,’” my archive didn’t include that list of ad categories that can be used to target ads to me. (Some other types of information aren’t included in the download, like other people’s posts you’ve liked. Those are listed here, along with where to find them — which, for most, is in your Activity Log.)

This area may include Facebook’s guesses about who you are, boiled down from some of your activities. Most Americans will have a guess about their politics — Facebook says I’m a "moderate" about U.S. Politics — and some will have a guess about so-called "multicultural affinity," which Facebook insists is not a guess about your ethnicity, but rather what sorts of content "you are interested in or will respond well to." For instance, Facebook recently added that I have a "Multicultural Affinity: African American." (I’m white — though, because Facebook’s definition of "multicultural affinity" is so strange, it’s hard to tell if this is an error on Facebook’s part.)

Facebook also doesn’t include your browsing history — the subject of back-and-forths between Mark Zuckerberg and several members of Congress — it says it keeps that just long enough to boil it down into those “Ad Topics.”

For people without Facebook accounts, Facebook says to email datarequests@support.facebook.com or fill out an online form to download what Facebook knows about you. One puzzle here is how Facebook gathers data on people whose identities it may not know. It may know that a person using a phone from Atlanta, Georgia, has accessed a Facebook site and that the same person was last week in Austin, Texas, and before that Cincinnati, but it may not know that that person is me. It’s in principle difficult for the company to give the data it collects about logged-out users if it doesn’t know exactly who they are.

Google

Like Facebook, Google will give you a zip archive of your data. Google’s can be much bigger, because you might have stored gigabytes of files in Google Drive or years of emails in Gmail.

But like Facebook, Google does not provide its guesses about your interests, which it uses to target ads. Those guesses are available elsewhere.

How You Can Request Your Data From Google:

  1. Visit https://takeout.google.com/settings/takeout/ to use Google’s cutely named Takeout service.
  2. You’ll have to pick which data you want to download and examine. You should definitely select My Activity, Location History and Searches. You may not want to download gigabytes of emails, if you use Gmail, since that uses a lot of space and may take a while. (That’s also information you shouldn’t be surprised that Google keeps — you left it with Gmail so that you could use Google’s search expertise to hold on to your emails.)
  3. Google will present you with a few options for how to get your archive. The defaults are fine.
  4. Within a few hours, you should get an email with the subject "Your Google data archive is ready." Click Download Archive and log in again. That should start the download of a file named something like "takeout-20180412T193535.zip."
  5. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.”
  6. You’ll get a folder called Takeout. Open the file inside it called "index.html" in your web browser to explore your archive.

What You Might Get Back From Google:

Once you open the index.html file, you’ll see icons for the data you chose in step 2. Try exploring "Ads" under "My Activity" — you’ll see a list of times you saw Google Ads, including on apps on your phone.

Google also includes your search history, under "Searches" — in my case, going back to 2013. Google knows what I had forgotten: I Googled a bunch of dinosaurs around Valentine’s Day that year… And it’s not just web searches: the Sound Search history reminded me that at some point, I used that service to identify Natalie Imbruglia’s song "Torn."

Android phone users might want to check the "Android" folder: Google keeps a list of each app you’ve used on your phone.

Most of the data contained here are records of ways you’ve directly interacted with Google — and the company really does use those records to improve how its services work for me. I’m glad to see my searches auto-completed, for instance.

But the company also creates data about you: Visit the company’s Ads Settings page to see some of the “topics” Google guesses you’re interested in, and which it uses to personalize the ads you see. Those topics are fairly general — it knows I’m interested in “Politics” — but the company says it has more granular classifications that it doesn’t include on the list. Those more granular, hidden classifications are on various topics, from sports to vacations to politics, where Google does generate a guess whether some people are politically “left-leaning” or “right-leaning.”

Data Brokers

Here’s who really does sell your data: data brokers, like the credit reporting agency Experian and a firm named Epsilon.

These sometimes-shady firms are middlemen who buy your data from tracking firms, survey marketers and retailers, slice and dice the data into “segments,” then sell those on to advertisers.

Experian

Experian is best known as a credit reporting firm, but your credit cards aren’t all they keep track of. They told me that they “firmly believe people should be made aware of how their data is being used” — so if you print and mail them a form, they’ll tell you what data they have on you.

“Educated consumers,” they said, “are better equipped to be effective, successful participants in a world that increasingly relies on the exchange of information to efficiently deliver the products and services consumers demand.”

How You Can Request Your Data From Experian:

  1. Visit Experian’s Marketing Data Request site and print the Marketing Data Report Request form.
  2. Print a copy of your ID and proof of address.
  3. Mail it all to: Experian Marketing Services, P.O. Box 40, Allen, TX 75013.
  4. Wait for them to mail you something back.

What You Might Get Back From Experian:

Expect to wait a while. I’ve been waiting almost a month.

Experian also generates a guess about your political views that’s integrated with Facebook — our Facebook Political Ad Collector project has found that many political candidates use Experian’s data to target their Facebook ads to likely supporters.

When your report arrives, expect to find a guess about your political views that would be useful to those candidates — as well as categories derived from your purchasing data.

Experian told me they generate the data they have about you from a long list of sources, including public records and “historical catalog purchase information” — as well as calculating it from predictive models.

Epsilon

How You Can Request Your Data From Epsilon:

  1. Visit Epsilon’s Marketing Data Summary Request form.
  2. After you enter your name and address, Epsilon will ask some of those identity-verification questions that quiz you about your old addresses and cars. If it can’t verify your identity with those, Epsilon will ask you to mail in a form.
  3. Wait for Epsilon to mail you your data; it took about a week for me.

What You Might Get Back From Epsilon:

Epsilon has information on “demographics” and “lifestyle interests” — at the household level. It also includes a list of “household purchases.”

It also holds data that political candidates use to target their Facebook ads; those candidates include Randy Bryce, a Wisconsin Democrat who’s seeking his party’s nomination to run for retiring Speaker Paul Ryan’s seat, and Rep. Tulsi Gabbard, D-Hawaii.

In my case, Epsilon knows I buy clothes, books and home office supplies, among other things — but isn’t any more specific. They didn’t tell me what political beliefs they believe I hold. The company didn’t respond to a request for comment.

Oracle

Oracle’s Data Cloud aggregates data about you from Oracle’s own services, as well as so-called third-party data from other companies.

How You Can Request Your Data From Oracle:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Explore each tab, from “Basic Info” to “Hobbies & Interests” and “Partner Segments.”

Scrolling through all those pages is no fun: my profile runs 84 pages, with four pieces of data per page.

You can’t search, and all the text is actually images of text. Oracle declined to say why it chose to make its site so hard to use.

What You Might Get Back From Oracle:

My Oracle profile includes nearly 1500 data points, covering all aspects of my life, from my age to my car to how old my children are to whether I buy eggs. These profiles can even say if you’re likely to dress your pet in a costume for Halloween. But many of them are off-base or contradictory.

Many companies in Oracle’s data, besides ALC Digital, offer guesses about my political views: Data from one company uploaded by AcquireWeb says that my political affiliations are as a Democrat and an Independent … but also that I’m a “Mild Republican.” Another company, an Oracle subsidiary called AddThis, says that I’m a “Liberal.” Cuebiq, which calls itself a “location intelligence” company, says I’m in a subset of “Democrats” called “Liberal Professions.”

If an advertiser wants to show an ad to Spring Break Enthusiasts, Oracle can enable that. I’m apparently a Spring Break Enthusiast. Do I buy eggs? I sure do. Data on Oracle’s site associated with AcquireWeb says I’m a cat owner …

But it also “knows” I’m a dog owner, which I’m not.

Al Gadbut, the CEO of AcquireWeb, explained that the guesses associated with his company weren’t based on my personal data, but rather the tendencies of people in my geographical area — hence the seemingly contradictory political guesses. He said his firm doesn’t generate the data, but rather uploaded it on behalf of other companies. Cuebiq’s guess was a “probabilistic inference” they drew from location data submitted to them by some app on my phone. Valentina Marastoni-Bieser, Cuebiq’s senior vice president of marketing, wouldn’t tell me which app it was, though.
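The area-level inference Gadbut describes can be sketched in a few lines of Python. This is a hypothetical illustration (the ZIP code, labels, and percentages are invented): labels get attached to a person based on the tendencies of their geographic area rather than their individual behavior, which is exactly how one person can end up tagged with contradictory political affiliations.

```python
# Hypothetical sketch of area-level inference: every label that is common
# enough in your geographic area gets attached to *you*, so contradictory
# labels (e.g. "Democrat" and "Mild Republican") can coexist in one profile.

AREA_TENDENCIES = {  # invented shares for one invented ZIP code
    "11201": {"Democrat": 0.55, "Independent": 0.25, "Mild Republican": 0.20},
}

def area_labels(zip_code, threshold=0.15):
    """Return every label whose share of the area clears the threshold."""
    tendencies = AREA_TENDENCIES.get(zip_code, {})
    return sorted(label for label, share in tendencies.items()
                  if share >= threshold)

print(area_labels("11201"))  # all three labels clear the 15% bar
```

Under this scheme no individual data is needed at all — and no combination of labels is ever checked for internal consistency, matching the contradictions found in the Oracle profile above.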

Data for sale here includes a long list of the TV shows I — supposedly — watch.

But it’s not all wrong. AddThis can tell that I’m “Young & Hip.”

Takeaways:

The above list is just a sampling of the firms that collect your data and try to draw conclusions about who you are — it’s not just sites you visit, like Facebook, or controversial firms like Cambridge Analytica.

You can make some guesses as to where this data comes from — especially the more granular consumer data from Oracle. For each data point, it’s worth considering: Who’d be in a position to sell a list of what TV shows I watch, or, at least, a list of what TV shows people demographically like me watch? Who’d be in a position to sell a list of what groceries I, or people similar to me in my area, buy? Some of those companies — companies who you’re likely paying, and for whom the internet adage that “if you’re not paying, you’re the product” doesn’t hold — are likely selling data about you without your knowledge. Other data points, like the location data used by Cuebiq, can come from any number of apps or websites, so it may be difficult to figure out exactly which one has passed it on.

Companies like Google and Facebook often say that they’ll let you “correct” the data that they hold on you — tacitly acknowledging that they sometimes get it wrong. And if receiving relevant ads is not important to you, they’ll let you opt out entirely — or, presumably, “correct” your data to something false.

An upcoming European Union rule called the General Data Protection Regulation portends a dramatic change to how data is collected and used on the web — if only for Europeans. No such law seems likely to be passed in the U.S. in the near future.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


News Media Alliance Challenges Tech Companies To 'Accept Accountability' And Responsibility For Filtering News In Their Platforms

Last week, David Chavern, the President and CEO of the News Media Alliance (NMA), testified before the House Judiciary Committee. The NMA is a nonprofit trade association representing over 2,000 news organizations across the United States. Mr. Chavern's testimony focused on the problem of fake news, which is often aided by social networking platforms.

His comments first described current conditions:

"... Quality journalism is essential to a healthy and functioning democracy -- and my members are united in their desire to fight for its future.

Too often in today’s information-driven environment, news is included in the broad term "digital content." It’s actually much more important than that. While some low-quality entertainment or posts by friends can be disappointing, inaccurate information about world events can be immediately destructive. Civil society depends upon the availability of real, accurate news.

The internet represents an extraordinary opportunity for broader understanding and education. We have never been more interconnected or had easier and quicker means of communication. However, as currently structured, the digital ecosystem gives tremendous viewpoint control and economic power to a very small number of companies – the tech platforms that distribute online content. That control and power must come with new responsibilities... Historically, newspapers controlled the distribution of their product; the news. They invested in the journalism required to deliver it, and then printed it in a form that could be handed directly to readers. No other party decided who got access to the information, or on what terms. The distribution of online news is now dominated by the major technology platforms. They decide what news is delivered and to whom – and they control the economics of digital news..."

Last month, a survey found that roughly two-thirds of U.S. adults (68%) use Facebook.com, and about three-quarters of those use the social networking site daily. In 2016, a survey found that 62 percent of adults in the United States get their news from social networking sites. The corresponding statistic in 2012 was 49 percent. That 2016 survey also found that fewer social media users get their news from other platforms: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites/apps (28 percent), radio (25 percent), and print newspapers (20 percent).

Mr. Chavern then described the problems with two specific tech companies:

"The First Amendment prohibits the government from regulating the press. But it doesn’t prevent Facebook and Google from acting as de facto regulators of the news business.

Neither Google nor Facebook are – or have ever been – "neutral pipes." To the contrary, their businesses depend upon their ability to make nuanced decisions through sophisticated algorithms about how and when content is delivered to users. The term “algorithm” makes these decisions seem scientific and neutral. The fact is that, while their decision processes may be highly-automated, both companies make extensive editorial judgments about accuracy, relevance, newsworthiness and many other criteria.

The business models of Facebook and Google are complex and varied. However, we do know that they are both immense advertising platforms that sell people’s time and attention. Their "secret algorithms" are used to cultivate that time and attention. We have seen many examples of the types of content favored by these systems – namely, click-bait and anything that can generate outrage, disgust and passion. Their systems also favor giving users information like that which they previously consumed, thereby generating intense filter bubbles and undermining common understandings of issues and challenges.

All of these things are antithetical to a healthy news business – and a healthy democracy..."

Earlier this month, Apple and Facebook executives exchanged criticisms of each other's business models and privacy practices. Mr. Chavern's testimony before Congress also described more problems and threats:

"Good journalism is factual, verified and takes into account multiple points of view. It can take a lot of time and investment. Most particularly, it requires someone to take responsibility for what is published. Whether or not one agrees with a particular piece of journalism, my members put their names on their product and stand behind it. Readers know where to send complaints. The same cannot be said of the sea of bad information that is delivered by the platforms in paid priority over my members’ quality information. The major platforms’ control over distribution also threatens the quality of news for another reason: it results in the “commoditization” of news. Many news publishers have spent decades – often more than a century – establishing their brands. Readers know the brands that they can trust — publishers whose reporting demonstrates the principles of verification, accuracy and fidelity to facts. The major platforms, however, work hard to erase these distinctions. Publishers are forced to squeeze their content into uniform, homogeneous formats. The result is that every digital publication starts to look the same. This is reinforced by things like the Google News Carousel, which encourages users to flick back and forth through articles on the same topic without ever noticing the publisher. This erosion of news publishers’ brands has played no small part in the rise of "fake news." When hard news sources and tabloids all look the same, how is a customer supposed to tell the difference? The bottom line is that while Facebook and Google claim that they do not want to be "arbiters of truth," they are continually making huge decisions on how and to whom news content is delivered. These decisions too often favor free and commoditized junk over quality journalism. The platforms created by both companies could be wonderful means for distributing important and high-quality information about the world. 
But, for that to happen, they must accept accountability for the power they have and the ultimate impacts their decisions have on our economic, social and political systems..."

Download Mr. Chavern's complete testimony. Industry watchers argue that recent changes by Facebook have hurt local news organizations. MediaPost reported:

"When Facebook changed its algorithm earlier this year to focus on “meaningful” interactions, publishers across the board were hit hard. However, local news seemed particularly vulnerable to the alterations. To assuage this issue, the company announced that it would prioritize news related to local towns and metro areas where a user resided... To determine how positively that tweak affected local news outlets, the Tow Center measured interactions for posts from publications coming from 13 metro areas... The survey found that 11 out of those 13 have consistently seen a drop in traffic between January 1 and April 1 of 2018, allowing the results to show how outlets are faring nine weeks after the algorithm change. According to the Tow Center study, three outlets saw interactions on their pages decrease by a dramatic 50%. These include The Dallas Morning News, The Denver Post, and The San Francisco Chronicle. The Atlanta Journal-Constitution saw interactions drop by 46%."

So, huge problems persist.

Early in my business career, I had the opportunity to develop and market an online service using content from Dow Jones News/Retrieval. That experience taught me that news - hard news - covers who, where, when, and what happened. Everything else is either opinion, commentary, analysis, an advertisement, or fiction. It is critical to know the differences and to learn to spot each type. Otherwise, you are likely to be misled, misinformed, or fooled.


Many People Are Concerned About Facebook. Do Any Other Tech Companies Pose Privacy Threats?

The massive data breach involving Facebook and Cambridge Analytica focused attention and privacy concerns on the social networking giant. Reports about extensive tracking of users and non-users, testimony by its CEO before the U.S. Congress, and online tools allegedly allowing advertisers to violate federal housing laws have also focused attention on Facebook.

Are there any other tech or advertising companies consumers should have privacy concerns about? What other companies collect massive amounts of information about consumers? It seems wise to look beyond Facebook in order to avoid missing significant threats.

Google logo To answer these questions, the Wall Street Journal compared Facebook and Google:

"... Alphabet Inc.’s Google is a far bigger threat by many measures: the volume of information it gathers, the reach of its tracking and the time people spend on its sites and apps... It’s likely that Google has shadow profiles on at least as many people as Facebook does, says Chandler Givens, chief executive of TrackOff, which develops software to fight identity theft. Google allows everyone, whether they have a Google account or not, to opt out of its ad targeting. Yet, like Facebook, it continues to gather your data... Google Analytics is far and away the web’s most dominant analytics platform. Used on the sites of about half of the biggest companies in the U.S., it has a total reach of 30 million to 50 million sites. Google Analytics tracks you whether or not you are logged in... Google uses, among other things, our browsing and search history, apps we’ve installed, demographics such as age and gender and, from its own analytics and other sources, where we’ve shopped in the real world. Google says it doesn’t use information from “sensitive categories” such as race, religion, sexual orientation or health..."

There's plenty more, so read the entire WSJ article. A good review worthy of further discussion.

However, more companies pose privacy threats. Equifax, one of the three major credit reporting agencies, easily makes my list. Its massive data breach affected about half the population of the United States, plus persons worldwide. An investigation discovered several data security failures at Equifax.

Also on my list is the U.S. Federal Communications Commission (FCC). Using some "light touch" legal ju-jitsu and vague promises of enabling infrastructure investments, the Republican-majority Commissioners, led by Trump appointee Ajit Pai, revoked broadband privacy protections for consumers last year... and punted broadband oversight responsibility to the U.S. Federal Trade Commission (FTC). This allows corporate internet service providers (ISPs) to freely track and collect sensitive data about internet users without providing notice or opt-out mechanisms.

Uber logo Uber also makes my list, given its massive data breach affecting 57 million persons. Earlier this month, the FTC announced a revised settlement agreement where Uber:

"... failed to disclose a significant breach of consumer data that occurred in 2016 -- in the midst of the FTC’s investigation that led to the August 2017 settlement announcement... the revised settlement could subject Uber to civil penalties if it fails to notify the FTC of certain future incidents involving unauthorized access of consumer information... In announcing the original proposed settlement with Uber in August 2017, the FTC charged that the company had failed to live up to its claims that it closely monitored employee access to rider and driver data and that it deployed reasonable measures to secure personal information stored on a third-party cloud provider’s servers.

In the revised complaint, the FTC alleges that Uber learned in November 2016 that intruders had again accessed consumer data the company stored on its third-party cloud provider’s servers by using an access key an Uber engineer had posted on a code-sharing website... the intruders used the access key to download from Uber’s cloud storage unencrypted files that contained more than 25 million names and email addresses, 22 million names and mobile phone numbers, and 600,000 names and driver’s license numbers of U.S. Uber drivers and riders... Uber paid the intruders $100,000 through its third-party “bug bounty” program and failed to disclose the breach to consumers or the Commission until November 2017... the new provisions in the revised proposed order include requirements for Uber to submit to the Commission all the reports from the required third-party audits of Uber’s privacy program rather than only the initial such report..."
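The breach described above began with a credential committed to a code-sharing website. As a hedged illustration of that class of mistake — the key string, variable names, and environment-variable name here are all hypothetical — the common defense is to keep credentials out of source code entirely and read them from the environment at runtime:

```python
# Sketch of the mistake behind the breach described above, and a common fix.
import os

# Risky: a key hardcoded in source travels with the source. Anyone who can
# read the file -- or the repository it gets pushed to -- now holds the key.
ACCESS_KEY = "AKIA-EXAMPLE-DO-NOT-COMMIT"  # hypothetical placeholder

# Safer: read the credential from the environment at runtime, so it never
# appears in the code that gets shared or published.
def get_access_key():
    key = os.environ.get("CLOUD_ACCESS_KEY")
    if key is None:
        raise RuntimeError("CLOUD_ACCESS_KEY is not set")
    return key
```

Real deployments typically go further (secret managers, key rotation, repository scanning), but even this minimal pattern would have kept the key off the code-sharing site.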

Yes, Wells Fargo bank makes my list, too. This blog post explains why. Who is on your list of the biggest privacy threats to consumers?