308 posts categorized "Social Networking"

FBI Seeks To Monitor Twitter, Facebook, Instagram, And Other Social Media Accounts For Violent Threats

The U.S. Federal Bureau of Investigation (FBI) issued a Request For Proposals (RFP) on July 8th seeking quotes from technology companies to build a "Social Media Alerting" tool, which would enable the FBI to monitor accounts on several social media services in real time for violent threats. The RFP, which was amended on August 7th, stated:

"The purpose of this procurement is to acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources. A subscription to this service shall grant the Federal Bureau of Investigation (FBI) access to tools that will allow for the exploitation of lawfully collected/acquired data from social media platforms that will be stored, vetted and formatted by a vendor... This synopsis and solicitation is being issued as Request for Proposal (RFP) number DJF194750PR0000369 and... This announcement is supplemented by a detailed RFP Notice, an SF-33 document, an accompanying Statement of Objectives (SOO) and associated FBI documents..."

"Proactively identify" suggests the usage of software algorithms or artificial intelligence (AI). And, the vendor selected will archive the collected data for an undisclosed period of time. The RFP also stated:

"Background: The use of social media platforms, by terrorist groups, domestic threats, foreign intelligence services, and criminal organizations to further their illegal activity creates a demonstrated need for tools to properly identify the activity and react appropriately. With increased use of social media platforms by subjects of current FBI investigations and individuals that pose a threat to the United States, it is critical to obtain a service which will allow the FBI to identify relevant information from Twitter, Facebook, Instagram, and other Social media platforms in a timely fashion. Consequently, the FBI needs near real time access to a full range of social media exchanges..."

For context, in 2016 the FBI attempted to force Apple to build "backdoor software" to unlock an alleged terrorist's iPhone in California. The FBI later found an offshore technology company to build its backdoor.

The documents indicate that the FBI wants its staff to use the tool at both headquarters and field-office locations globally, and with mobile devices. The SOO document stated:

"FBI personnel are deployed internationally and sometimes in areas of press censorship. A social media exploitation tool with international reach and paired with a strong language translation capability, can become crucial to their operations and more importantly their safety. The functions of most value to these individuals is early notification, broad international reach, instant translation, and the mobility of the needed capability."

The SOO also explained the data elements to be collected:

"3.3.2.2.1 Obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups through the corroboration of multiple social media sources... Items of interest in this context are social networks, user IDs, emails, IP addresses and telephone numbers, along with likely additional account with similar IDs or aliases... Any connectivity between aliases and their relationship must be identifiable through active link analysis mapping..."
"3.3.3.2.1 Online media is monitored based on location, determined by the users’ delineation or the import of overlays from existing maps (neighborhood, city, county, state or country). These must allow for customization as AOR sometimes cross state or county lines..."

While the document mentioned "user IDs" and didn't mention passwords, the implication seems clear that the FBI wants both in order to access and monitor social media accounts in real time. And, the "other Social Media platforms" statement raises questions. Which specific services does that phrase cover? Why list only the three largest platforms by name?

As this FBI project proceeds, let's hope that the full list of social sites includes 8Chan, Reddit, Stormfront, and similar sites. Why? In a study released in November 2018, the Center for Strategic and International Studies (CSIS) found:

"Right-wing extremism in the United States appears to be growing. The number of terrorist attacks by far-right perpetrators rose over the past decade, more than quadrupling between 2016 and 2017. The recent pipe bombs and the October 27, 2018, synagogue attack in Pittsburgh are symptomatic of this trend. U.S. federal and local agencies need to quickly double down to counter this threat. There has also been a rise in far-right attacks in Europe, jumping 43 percent between 2016 and 2017... Of particular concern are white supremacists and anti-government extremists, such as militia groups and so-called sovereign citizens interested in plotting attacks against government, racial, religious, and political targets in the United States... There also is a continuing threat from extremists inspired by the Islamic State and al-Qaeda. But the number of attacks from right-wing extremists since 2014 has been greater than attacks from Islamic extremists. With the rising trend in right-wing extremism, U.S. federal and local agencies need to shift some of their focus and intelligence resources to penetrating far-right networks and preventing future attacks. To be clear, the terms “right-wing extremists” and “left-wing extremists” do not correspond to political parties in the United States..."

The CSIS study also noted:

"... right-wing terrorism commonly refers to the use or threat of violence by sub-national or non-state entities whose goals may include racial, ethnic, or religious supremacy; opposition to government authority; and the end of practices like abortion... Left-wing terrorism, on the other hand, refers to the use or threat of violence by sub-national or non-state entities that oppose capitalism, imperialism, and colonialism; focus on environmental or animal rights issues; espouse pro-communist or pro-socialist beliefs; or support a decentralized sociopolitical system like anarchism."

Terrorism is terrorism. All of it needs to be prosecuted: left-wing, right-wing, domestic, and foreign. (This prosecutor is doing the right thing.) It seems wise to monitor the platforms where suspects congregate.

This project also raises questions about the effectiveness of monitoring social media. Will it really work? Digital Trends reported:

"Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and crucially, what you like to buy. Sometimes, an algorithm can get your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters... But preventing mass shootings before they happen raises thorny legal questions: how do you determine if someone is just angry online rather than someone who could actually carry out a shooting? Can you arrest someone if a computer thinks they’ll eventually become a shooter?"

Some social media users have already experienced inaccuracies (failures?) when sites present irrelevant advertisements and/or political party messaging based upon supposedly accurate software algorithms. The Digital Trends article also dug deeper:

"A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018... Twitter also frequently works to help facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if they did, that would mean flagging 166,513 people to the FBI – far more people than the agency could ever investigate."

Then there is the problem of the content in users' social media posts:

"Even if someone does post to social media immediately before they decide to unleash violence, it’s often not something that would trip up either Twitter or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – once calling the food served there “overprices” and a second that told people to read a 19th-century pro-fascist book that’s popular with white nationalists."

Also, Amazon got caught up in the hosting mess with 8Chan. So, there is more news to come.

Last, this blog post explored the problems with emotion recognition by facial-recognition software. Let's hope this FBI project is not a waste of taxpayers' hard-earned money.


Emotion Recognition: Facial Recognition Software Based Upon Valid Science or Malarkey?

The American Civil Liberties Union (ACLU) reported:

"Emotion recognition is a hot new area, with numerous companies peddling products that claim to be able to read people’s internal emotional states, and artificial intelligence (A.I.) researchers looking to improve computers’ ability to do so. This is done through voice analysis, body language analysis, gait analysis, eye tracking, and remote measurement of physiological signs like pulse and breathing rates. Most of all, though, it’s done through analysis of facial expressions.

A new study, however, strongly suggests that these products are built on a bed of intellectual quicksand... after reviewing over 1,000 scientific papers in the psychological literature, these experts came to a unanimous conclusion: there is no scientific support for the common assumption “that a person’s emotional state can be readily inferred from his or her facial movements.” The scientists conclude that there are three specific misunderstandings “about how emotions are expressed and perceived in facial movements.” The link between facial expressions and emotions is not reliable (i.e., the same emotions are not always expressed in the same way), specific (the same facial expressions do not reliably indicate the same emotions), or generalizable (the effects of different cultures and contexts has not been sufficiently documented)."

Another reason why this is important:

"... an entire industry of automated purported emotion-reading technologies is quickly emerging. As we wrote in our recent paper on “Robot Surveillance,” the market for emotion recognition software is forecast to reach at least $3.8 billion by 2025. Emotion recognition (aka “affect recognition” or “affective computing”) is already being incorporated into products for purposes such as marketing, robotics, driver safety, and audio “aggression detectors.”

Regular readers of this blog are familiar with aggression detectors and the variety of industries where the technology is already deployed. And, one police body-cam maker says it won't deploy facial recognition in its products due to problems with the technology.

Yes, reliability matters -- especially when used for surveillance purposes. Nobody wants law enforcement making decisions about persons based upon software built using unreliable or fake science masquerading as reliable, valid science. Nobody wants education and school officials making decisions about students using unreliable software. Nobody wants hospital administrators and physicians making decisions about patients based upon unreliable software.

What are your opinions?


White Hat Hacker: Social Media Is a 'Goldmine For Details' For Cyberattacks Targeting Companies

Many employees are their own worst enemy when they start a new job. In this Fast Company article, a white hat hacker explains the security failures by employees that compromise their employers' data security.

Stephanie “Snow” Carruthers, the chief people hacker at IBM, explained that hackers scour:

"... social media for photos, videos, and other clues that can help them better target your company in an attack. I know this because I’m one of them... I’m part of an elite team of hackers within IBM known as X-Force Red. Companies hire us to find gaps in their security – before the real bad guys do... Social media posts are a goldmine for details that aid in our “attacks.” What you find in the background of photos is particularly revealing... The first thing you may be surprised to know is that 75% of the time, the information I’m finding is coming from interns or new hires. Younger generations entering the workforce today have grown up on social media, and internships or new jobs are exciting updates to share. Add in the fact that companies often delay security training for new hires until weeks or months after they’ve started, and you’ve got a recipe for disaster..."

The obvious security fails include selfie photos by interns or new hires wearing their security badges, selfies showing log-in credentials on computer screens, and selfies showing passwords written on post-it notes attached to computer monitors. Less obvious security fails include group photos by interns or new hires with their work team. Group photos can help hackers identify team members to craft personalized and more effective phishing e-mails and text messages using co-workers' names, to trick recipients into opening attachments containing malware.

This highlights one business practice interns and new hires should understand. Your immediate boss or supervisor won't scour your social media accounts looking for security fails. Your employer will outsource the job to another company, which will.

If you just started a new job, don't be that clueless employee posting security fails to your social media accounts. Read and understand your employer's social media policy. If you are a manager, schedule security training for your interns and new hires ASAP.


FTC Levies $5 Billion Fine, 'New Restrictions, And Modified Corporate Structure' To Hold Facebook Accountable. Will These Actions Prevent Future Privacy Abuses?

The U.S. Federal Trade Commission (FTC) announced on July 24th a record-breaking fine against Facebook, Inc., plus new limitations on the social networking service. The FTC announcement stated:

"Facebook, Inc. will pay a record-breaking $5 billion penalty, and submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy, to settle Federal Trade Commission charges that the company violated a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information... The settlement order announced [on July 24th] also imposes unprecedented new restrictions on Facebook’s business operations and creates multiple channels of compliance..."

During 2018, Facebook generated after-tax profits of $22.1 billion on sales of $55.84 billion. While a $5 billion fine is a lot of money, the company can easily afford the record-breaking fine. The fine equals about one month's revenues, or a little over 4 percent of its $117 billion in assets.

[Figure: the FTC's diagram of the new compliance system for Facebook.]

The FTC announcement explained several "unprecedented" restrictions in the settlement order. First, the restrictions are designed to:

"... prevent Facebook from deceiving its users about privacy in the future, the FTC’s new 20-year settlement order overhauls the way the company makes privacy decisions by boosting the transparency of decision making... It establishes an independent privacy committee of Facebook’s board of directors, removing unfettered control by Facebook’s CEO Mark Zuckerberg over decisions affecting user privacy. Members of the privacy committee must be independent and will be appointed by an independent nominating committee. Members can only be fired by a supermajority of the Facebook board of directors."

Second, the restrictions mandated compliance officers:

"Facebook will be required to designate compliance officers who will be responsible for Facebook’s privacy program. These compliance officers will be subject to the approval of the new board privacy committee and can be removed only by that committee—not by Facebook’s CEO or Facebook employees. Facebook CEO Mark Zuckerberg and designated compliance officers must independently submit to the FTC quarterly certifications that the company is in compliance with the privacy program mandated by the order, as well as an annual certification that the company is in overall compliance with the order. Any false certification will subject them to individual civil and criminal penalties."

Third, the new order strengthens oversight:

"... The order enhances the independent third-party assessor’s ability to evaluate the effectiveness of Facebook’s privacy program and identify any gaps. The assessor’s biennial assessments of Facebook’s privacy program must be based on the assessor’s independent fact-gathering, sampling, and testing, and must not rely primarily on assertions or attestations by Facebook management. The order prohibits the company from making any misrepresentations to the assessor, who can be approved or removed by the FTC. Importantly, the independent assessor will be required to report directly to the new privacy board committee on a quarterly basis. The order also authorizes the FTC to use the discovery tools provided by the Federal Rules of Civil Procedure to monitor Facebook’s compliance with the order."

Fourth, the order included six new privacy requirements:

"i) Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data; ii) Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising; iii) Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users; iv) Facebook must establish, implement, and maintain a comprehensive data security program; v) Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and vi) Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services."

Wow! Lots of consequences when a manager builds a corporation with a "move fast and break things" culture, values, and ethics. Assistant Attorney General Jody Hunt for the Department of Justice’s Civil Division said:

"The Department of Justice is committed to protecting consumer data privacy and ensuring that social media companies like Facebook do not mislead individuals about the use of their personal information... This settlement’s historic penalty and compliance terms will benefit American consumers, and the Department expects Facebook to treat its privacy obligations with the utmost seriousness."

There is disagreement among the five FTC commissioners about the settlement, as the vote for the order was 3 to 2. FTC Commissioner Rebecca Kelly Slaughter stated in her dissent:

"My principal objections are: (1) The negotiated civil penalty is insufficient under the applicable statutory factors we are charged with weighing for order violators: injury to the public, ability to pay, eliminating the benefits derived from the violation, and vindicating the authority of the FTC; (2) While the order includes some encouraging injunctive relief, I am skeptical that its terms will have a meaningful disciplining effect on how Facebook treats data and privacy. Specifically, I cannot view the order as adequately deterrent without both meaningful limitations on how Facebook collects, uses, and shares data and public transparency regarding Facebook’s data use and order compliance; (3) Finally, my deepest concern with this order is that its release of Facebook and its officers from legal liability is far too broad..."

FTC Chairman Joseph J. Simons and Commissioners Noah Joshua Phillips and Christine S. Wilson stated on July 24th in an 8-page joint statement (Adobe PDF):

"In 2012, Facebook entered into a consent order with the FTC, resolving allegations that the company misrepresented to consumers the extent of data sharing with third-party applications and the control consumers had over that sharing. The 2012 order barred such misrepresentations... Our complaint announced today alleges that Facebook failed to live up to its commitments under that order. Facebook subsequently made similar misrepresentations about sharing consumer data with third-party apps and giving users control over that sharing, and misrepresented steps certain consumers needed to take to control [over] facial recognition technology. Facebook also allowed financial considerations to affect decisions about how it would enforce its platform policies against third-party users of data, in violation of its obligation under the 2012 order... The $5 billion penalty serves as an important deterrent to future order violations... For purposes of comparison, the EU’s General Data Protection Regulation (GDPR) is touted as the high-water mark for comprehensive privacy legislation, and the penalty the FTC has negotiated is over 20 times greater than the largest GDPR fine to date... IV. The Settlement Far Exceeds What Could be Achieved in Litigation and Gives Consumers Meaningful Protections Now... Even assuming the FTC would prevail in litigation, a court would not give the Commission carte blanche to reorganize Facebook’s governance structures and business operations as we deem fit. Instead, the court would impose the relief. Such relief would be limited to injunctive relief to remedy the specific proven violations... V. Mark Zuckerberg is Being Held Accountable and the Order Cabins His Authority Our dissenting colleagues argue that the Commission should not have settled because the Commission’s investigation provides an inadequate basis for the decision not to name Mark Zuckerberg personally as a defendant... The provisions of this Order extinguish the ability of Mr. Zuckerberg to make privacy decisions unilaterally by also vesting responsibility and accountability for those decisions within business units, DCOs, and the privacy committee... the Order significantly diminishes Mr. Zuckerberg’s power — something no government agency, anywhere in the world, has thus far accomplished. The Order requires multiple information flows and imposes a robust system of checks and balances..."

Time will tell how effective the order's restrictions and the $5 billion penalty are. That Facebook can easily afford the penalty suggests the amount is a weak deterrent. If all or part of the penalty is tax-deductible (yes, tax-deductible fines have happened before and directly reduce a company's taxes), then that would further weaken its deterrent effect. And, if all or part of the fine is tax-deductible, then we taxpayers just paid for part of Facebook's alleged wrongdoing. I'll bet most taxpayers wouldn't want that.

Facebook stated in a July 24th news release that its second-quarter 2019 earnings included:

"... an additional $2.0 billion legal expense related to the U.S. Federal Trade Commission (FTC) settlement and a $1.1 billion income tax expense due to the developments in Altera Corp. v. Commissioner, as discussed below. As the FTC expense is not expected to be tax-deductible, it had no effect on our provision for income taxes... In July 2019, we entered into a settlement and modified consent order to resolve the inquiry of the FTC into our platform and user data practices. Among other matters, our settlement with the FTC requires us to pay a penalty of $5.0 billion and to significantly enhance our practices and processes for privacy compliance and oversight. In particular, we have agreed to implement a comprehensive expansion of our privacy program, including substantial management and board of directors oversight, stringent operational requirements and reporting obligations, and a process to regularly certify our compliance with the privacy program to the FTC. In the second quarter of 2019, we recorded an additional $2.0 billion accrual in connection with our settlement with the FTC, which is included in accrued expenses and other current liabilities on our condensed consolidated balance sheet."

"Not expected to be" is not the same as definitely not. And, business expenses reduce a company's taxable net income.

A copy of the FTC settlement order with Facebook is also available here (Adobe PDF format; 920K bytes). Plus, there is more:

"... the FTC also announced today separate law enforcement actions against data analytics company Cambridge Analytica, its former Chief Executive Officer Alexander Nix, and Aleksandr Kogan, an app developer who worked with the company, alleging they used false and deceptive tactics to harvest personal information from millions of Facebook users. Kogan and Nix have agreed to a settlement with the FTC that will restrict how they conduct any business in the future."

Cambridge Analytica was involved in the massive Facebook data scandal, disclosed in 2018, in which persons allegedly posed as academic researchers in order to download Facebook users' profile information they weren't authorized to access.

What are your opinions? Hopefully, some tax experts will weigh in about the fine.


Evite Admitted Data Breach. Didn't Disclose The Number Of Users Affected

Evite, the online social-planning and invitations site, disclosed last month a data breach affecting some of its users:

"We became aware of a data security incident involving potential unauthorized access to our systems in April 2019. We engaged one of the leading data security firms and launched a thorough investigation. The investigation potentially traced the incident to malicious activity starting on February 22, 2019. On May 14, 2019, we concluded that an unauthorized party had acquired an inactive data storage file associated with our user accounts... Upon discovering the incident, we took steps to understand the nature and scope of the issue, and brought in external forensic consultants that specialize in cyber-attacks. We coordinated with law enforcement regarding the incident, and are working with leading security experts to address any vulnerabilities..."

Evite was founded in 1998, so the pool of potentially affected users could be large. The breach announcement did not disclose a number.

The Evite breach announcement also said, "No user information more recent than 2013 was contained in the file" which was accessed/stolen by unauthorized persons. Evite said it has notified affected users, and has reset the passwords of affected users. The Evite system will prompt affected users to create new passwords when signing into the service.

The announcement listed the data elements accessed/stolen: names, usernames, email addresses, and passwords. If users also entered their birth dates, phone numbers, and mailing addresses, then those data elements were also accessed/stolen. Social Security numbers were not affected since Evite doesn't collect this data. Evite said payment information (e.g., credit cards, debit cards, bank accounts, etc.) was not affected because:

"We do not store financial or payment information. If you opted to store your payment card in your account, your payment information is maintained by and stored on the internal systems of our third-party vendor."

Thank goodness for small favors. The Evite disclosure did not explain why passwords were not encrypted, nor whether that or other data elements would be encrypted in the future. As with any data breach, context matters. ZDNet reported:

"... a hacker named Gnosticplayers put up for sale the customer data of six companies, including Evite. The hacker claimed to be selling ten million Evite user records that included full names, email addresses, IP addresses, and cleartext passwords. ZDNet reached out to notify Evite of the hack and that its data was being sold on the dark web on April 15; however, the company never returned our request for comment... Back in April, the data of 10 million Evite users was put up for sale on a dark web marketplace for ฿0.2419 (~$1,900). The same hacker has breached, stolen, and put up for sale the details of over one billion users from many other companies, including other major online services, such as Canva, 500px, UnderArmor, ShareThis, GfyCat, Ge.tt, and others."

The incident is another reminder of the high value of consumers' personal data, and that hackers take action quickly to use or sell stolen data.


Facebook Announced New Financial Services Offering Available in 2020

On Tuesday, Facebook announced its first financial services offering, which will be available in 2020:

"... we’re sharing plans for Calibra, a newly formed Facebook subsidiary whose goal is to provide financial services that will let people access and participate in the Libra network. The first product Calibra will introduce is a digital wallet for Libra, a new global currency powered by blockchain technology. The wallet will be available in Messenger, WhatsApp and as a standalone app — and we expect to launch in 2020... Calibra will let you send Libra to almost anyone with a smartphone, as easily and instantly as you might send a text message and at low to no cost. And, in time, we hope to offer additional services for people and businesses, like paying bills with the push of a button, buying a cup of coffee with the scan of a code or riding your local public transit..."

Long before the announcement, consumers crafted interesting nicknames for the financial service, such as #FaceCoin and #Zuckbucks. Good to see people with a sense of humor.

On a more serious topic, after multiple data breaches and privacy snafus at Facebook (plus repeated promises by CEO Zuckerberg that his company will do better), many people are understandably concerned about data security and privacy. Facebook's announcement also addressed security and privacy:

"... Calibra will have strong protections... We’ll be using all the same verification and anti-fraud processes that banks and credit cards use, and we’ll have automated systems that will proactively monitor activity to detect and prevent fraudulent behavior... We’ll also take steps to protect your privacy. Aside from limited cases, Calibra will not share account information or financial data with Facebook or any third party without customer consent. This means Calibra customers’ account information and financial data will not be used to improve ad targeting on the Facebook family of products. The limited cases where this data may be shared reflect our need to keep people safe, comply with the law and provide basic functionality to the people who use Calibra. Calibra will use Facebook data to comply with the law, secure customers’ accounts, mitigate risk and prevent criminal activity."

So, the new Calibra subsidiary promised that it won't share users' account information with Facebook's core social networking service, except when it will -- to "comply with the law." That leaves Calibra customers trusting Facebook's internal wall separating its business units. And "provide basic functionality to the people who use Calibra" sounds like a loophole broad enough to justify almost any data sharing. The announcement encourages interested persons to sign up for email updates.

Tech and financial experts quickly weighed in on the announcement and its promises. TechCrunch explained why Facebook created a separate subsidiary. After Tuesday's announcement:

"... critics started harping about the dangers of centralizing control of tomorrow’s money in the hands of a company with a poor track record of privacy and security. Facebook anticipated this, though, and created a subsidiary called Calibra to run its crypto dealings and keep all transaction data separate from your social data. Facebook shares control of Libra with 27 other Libra Association founding members, and as many as 100 total when the token launches in the first half of 2020. Each member gets just one vote on the Libra council, so Facebook can’t hijack the token’s governance even though it invented it."

TechCrunch also explained the risks to Calibra customers:

"... that leaves one giant vector for abuse of Libra: the developer platform... Apparently Facebook has already forgotten how allowing anyone to build on the Facebook app platform and its low barriers to “innovation” are exactly what opened the door for Cambridge Analytica to hijack 87 million people’s personal data and use it for political ad targeting. But in this case, it won’t be users’ interests and birthdays that get grabbed. It could be hundreds or thousands of dollars’ worth of Libra currency that’s stolen. A shady developer could build a wallet that just cleans out a user’s account or funnels their coins to the wrong recipient, mines their purchase history for marketing data or uses them to launder money..."

During the coming months, hopefully Calibra will disclose the controls it will implement on the developer platform to prevent abuses, theft, and fraud.

Readers wanting to learn more should read the Libra White Paper, which provides more details about the companies involved:

"The Libra Association is an independent, not-for-profit membership organization headquartered in Geneva, Switzerland. The association’s purpose is to coordinate and provide a framework for governance for the network... Members of the Libra Association will consist of geographically distributed and diverse businesses, nonprofit and multilateral organizations, and academic institutions. The initial group of organizations that will work together on finalizing the association’s charter and become “Founding Members” upon its completion are, by industry:

1. Payments: Mastercard, PayPal, PayU (Naspers’ fintech arm), Stripe, Visa
2. Technology and marketplaces: Booking Holdings, eBay, Facebook/Calibra, Farfetch, Lyft, Mercado Pago, Spotify AB, Uber Technologies, Inc.
3. Telecommunications: Iliad, Vodafone Group
4. Blockchain: Anchorage, Bison Trails, Coinbase, Inc., Xapo Holdings Limited
5. Venture Capital: Andreessen Horowitz, Breakthrough Initiatives, Ribbit Capital, Thrive Capital, Union Square Ventures
6. Nonprofit and multilateral organizations, and academic institutions: Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking"

Yes, the ride-hailing company, Uber, is involved. Yes, the same ride-hailing service which paid $148 million to settle lawsuits over its coverup of a 2016 data breach. Yes, the same ride-hailing service with a history of data security, compliance, cultural, and privacy snafus. This suggests -- for better or worse -- that in the future consumers will be able to pay for Uber rides using the Libra Network.

Calibra hopes to have about 100 members in the Libra Association by the service launch in 2020. Clearly, there will be plenty more news to come. Below are draft screen images of the new app.

[Image: early screen images of the Calibra mobile app.]


Study: While Consumers Want Sites Like Facebook And Google To Collect Less Data, Few Want To Pay For Privacy

A recent study by the Center For Data Innovation explored consumers' attitudes about online privacy. One of the primary findings:

"... when potential tradeoffs were not part of the question approximately 80 percent of Americans agreed that they would like online services such as Facebook and Google to collect less of their data..."

So, most survey participants want more online privacy, defined as less data collected about them. That is good news, right? Maybe. The researchers dug deeper to understand survey participants' views about "tradeoffs" -- various ways of paying for online privacy. They found that support for more privacy (e.g., less data collected):

"... eroded when respondents considered these tradeoffs... [support] dropped by 6 percentage points when respondents were asked whether they would like online services to collect less data even if it means seeing ads that are less useful. Support dropped by 27 percentage points when respondents considered whether they would like less data collection even if it means seeing more ads than before. And it dropped by 26 percentage points when respondents were asked whether they would like less data collection even if it means losing access to some features they use now."

So, support for more privacy fell if irrelevant ads, more ads, and/or fewer features were the consequences. There is more:

"The largest drop in support (53 percentage points) came when respondents were asked whether they would like online services to collect less of their data even if it means paying a monthly subscription fee."

This led to a second major finding -- which follows arithmetically, since the roughly 80 percent baseline minus a 53-point drop leaves about 27 percent, or one in four:

"Only one in four Americans want online services such as Facebook and Google to collect less of their data if it means they would have to start paying a monthly subscription fee..."

So, most want privacy but few are willing to pay for it. This is probably reassuring news for executives in a variety of industries (e.g., social media, tech companies, device manufacturers, etc.), who can keep doing what they are doing: massive collection of consumers' data via sites, mobile apps, partnerships, and however else they can get it.

Next, the survey asked participants if they would accept more data collection if that provided more benefits:

"... approximately 74 percent of Americans opposed having online services such as Google and Facebook collect more of their data. But that opposition decreased by 11 percentage points... if it means seeing ads that are more useful. It dropped by 17 percentage points... if it means seeing fewer ads than before and... if it means getting access to new features they would use. The largest decrease in opposition (18 percentage points) came... if it means getting more free apps and services..."

So, while most consumers want online privacy, they can be easily persuaded to abandon that position with promises of more benefits. The survey was a national online poll of 3,240 U.S. adult Internet users, conducted December 13-16, 2018.

What to make of these survey results? Americans are fickle and lazy. We say we want online privacy, but few are willing to pay for it. While nothing in life is free, few consumers seem to realize that this truism applies to online privacy, too. Plus, consumers seem to highly value convenience regardless of the consequences.

What do you think?


UK Parliamentary Committee Issued Its Final Report on Disinformation And Fake News. Facebook And Six4Three Discussed

On February 18th, a United Kingdom (UK) parliamentary committee published its final report on disinformation and "fake news." The 109-page report by the Digital, Culture, Media, And Sport Committee (DCMS) updates its interim report from July, 2018.

The report covers many issues: political advertising (including untraceable "dark adverts"), Brexit and UK elections, data breaches, privacy, and recommendations for UK regulators and government officials. It seems wise to understand the report's findings regarding the business practices of the U.S.-based companies mentioned, since those practices affect consumers globally, including consumers in the United States.

Issues Identified

First, the DCMS' final report built upon issues identified in its:

"... Interim Report: the definition, role and legal liabilities of social media platforms; data misuse and targeting, based around the Facebook, Cambridge Analytica and Aggregate IQ (AIQ) allegations, including evidence from the documents we obtained from Six4Three about Facebook’s knowledge of and participation in data-sharing; political campaigning; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy..."

The final report includes input from 23 "oral evidence sessions," more than 170 written submissions, interviews of at least 73 witnesses, and more than 4,350 questions asked at hearings. The DCMS Committee sought input from individuals, organizations, industry experts, and other governments. Some of the information sources:

"The Canadian Standing Committee on Access to Information, Privacy and Ethics published its report, “Democracy under threat: risks and solutions in the era of disinformation and data monopoly” in December 2018. The report highlights the Canadian Committee’s study of the breach of personal data involving Cambridge Analytica and Facebook, and broader issues concerning the use of personal data by social media companies and the way in which such companies are responsible for the spreading of misinformation and disinformation... The U.S. Senate Select Committee on Intelligence has an ongoing investigation into the extent of Russian interference in the 2016 U.S. elections. As a result of data sets provided by Facebook, Twitter and Google to the Intelligence Committee -- under its Technical Advisory Group -- two third-party reports were published in December 2018. New Knowledge, an information integrity company, published “The Tactics and Tropes of the Internet Research Agency,” which highlights the Internet Research Agency’s tactics and messages in manipulating and influencing Americans... The Computational Propaganda Research Project and Graphika published the second report, which looks at activities of known Internet Research Agency accounts, using Facebook, Instagram, Twitter and YouTube between 2013 and 2018, to impact US users"

Why Disinformation

Second, definitions matter. According to the DCMS Committee:

"We have even changed the title of our inquiry from “fake news” to “disinformation and ‘fake news’”, as the term ‘fake news’ has developed its own, loaded meaning. As we said in our Interim Report, ‘fake news’ has been used to describe content that a reader might dislike or disagree with... We were pleased that the UK Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation'..."

Overall Recommendations

Summary recommendations from the report:

  1. "Compulsory Code of Ethics for tech companies overseen by independent regulator,
  2. Regulator given powers to launch legal action against companies breaching code,
  3. Government to reform current electoral communications laws and rules on overseas involvement in UK elections, and
  4. Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation"

Role And Liability Of Tech Companies

Regarding detailed observations and findings about the role and liability of tech companies, the report stated:

"Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company..."

The UK Government and its regulators may adopt some, all, or none of the report's recommendations. More observations and findings in the report:

"... both social media companies and search engines use algorithms, or sequences of instructions, to personalize news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritized. Therefore, negative stories will always be prioritized by algorithms, as they are shared more frequently than positive stories... Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers... Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms.” Facebook should be taking a more active and urgent role in tackling such inherent biases..."

Based upon these findings about algorithms and bias, the report recommended that the UK's new Centre for Data Ethics and Innovation (CDEI) should play a key role as an advisor to the UK Government by continually analyzing and anticipating gaps in governance and regulation, suggesting best practices and corporate codes of conduct, and setting standards for artificial intelligence (AI) and related technologies.

Inferred Data

The report also discussed a critical issue related to algorithms (emphasis added):

"... When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will. However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users... In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through the data profile matching and the ‘lookalike audience’ advertising targeting tool... Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power of where that data goes and what it is used for..."

The distinction between uploaded and inferred data cannot be overemphasized. It is critical when evaluating tech companies' statements, policies (e.g., privacy, terms of use), and promises about what "data" users have control over. Wise consumers must insist upon clear definitions to avoid being misled or duped.

What might be an example of inferred data? What comes to mind is Facebook's Ad Preferences feature, which allows users to review and delete the "Interests" -- advertising categories -- Facebook assigns to each user's profile. (The service's algorithms assign Interests based upon groups/pages/events/advertisements users "Liked" or clicked on, posts submitted, posts commented upon, and more.) These "Interests" are inferred data, since Facebook assigned them, and users didn't.

In fact, Facebook doesn't notify its users when it assigns new Interests. It just does it. And, Facebook can assign Interests whether you interacted with an item once or many times. How relevant is an Interest assigned after a single interaction, "Like," or click? Most people would say: not relevant. So, does the Interests list assigned to users' profiles accurately describe users? Do Facebook users own the Interests list assigned to their profiles? Any control Facebook users have seems minimal. Why? Facebook users can delete Interests assigned to their profiles, but users cannot stop Facebook from applying new Interests. Users cannot prevent Facebook from re-applying Interests previously deleted. Deleting Interests doesn't reduce the number of ads users see on Facebook.

The only way to know what Interests have been assigned is for Facebook users to visit the Ad Preferences section of their profiles and browse the list. Depending upon how frequently a person uses Facebook, it may be necessary to prune the Interests list at least once monthly -- a cumbersome and time-consuming task, probably designed that way to discourage reviews and pruning. And that's one example of inferred data. There are probably plenty more, and as the report emphasizes, users don't have access to all of the inferred data within their profiles.
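To make the inference mechanism concrete, here is a toy sketch in Python of how interaction history might be turned into assigned "Interests." The page-to-category mapping and the one-interaction threshold are entirely hypothetical -- Facebook's actual pipeline is proprietary and far more complex -- but the sketch shows how a single click can tag a profile, which is exactly the relevance problem described above.

    # A toy "interest inference" sketch. The mapping and threshold are
    # hypothetical; they only illustrate the inferred-data concept.
    from collections import Counter

    PAGE_CATEGORIES = {
        "Acme Hiking Club": "Outdoor Recreation",
        "Gourmet Coffee Daily": "Coffee",
        "City Cycling News": "Cycling",
    }

    def infer_interests(liked_pages: list, min_interactions: int = 1) -> set:
        counts = Counter(PAGE_CATEGORIES[p] for p in liked_pages
                         if p in PAGE_CATEGORIES)
        # With min_interactions=1, one "Like" is enough to tag a profile.
        return {cat for cat, n in counts.items() if n >= min_interactions}

    print(infer_interests(["Gourmet Coffee Daily"]))  # {'Coffee'}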

Now, back to the report. To fix problems with inferred data, the DCMS recommended:

"We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information."

Business Practices At Facebook

Next, the DCMS Committee's report said plenty about Facebook, its management style, and executives (emphasis added):

"Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinized... Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated... He discussed the California Consumer Privacy Act, which Facebook supported in public, but lobbied against, behind the scenes... By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world. The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which -- unsurprisingly -- failed to address all of our questions. We are left in no doubt that this strategy was deliberate."

So, based upon Facebook's actions (or lack thereof), the DCMS concluded that Facebook executives intentionally ducked and dodged issues and questions.

While discussing data use and targeting, the report said more about data breaches and Facebook:

"The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests..."

So, internal management failed. That's not all. After a detailed review of the GSR/Cambridge Analytica breach and Facebook's 2011 Consent Decree with the U.S. Federal Trade Commission (FTC), the DCMS Committee concluded (emphasis and text link added):

"The Cambridge Analytica scandal was facilitated by Facebook’s policies. If it had fully complied with the FTC settlement, it would not have happened. The FTC Complaint of 2011 ruled against Facebook -- for not protecting users’ data and for letting app developers gain as much access to user data as they liked, without restraint -- and stated that Facebook built their company in a way that made data abuses easy. When asked about Facebook’s failure to act on the FTC’s complaint, Elizabeth Denham, the Information Commissioner, told us: “I am very disappointed that Facebook, being such an innovative company, could not have put more focus, attention and resources into protecting people’s data”. We are equally disappointed."

Wow! Not good. There's more:

"... a current court case at the San Mateo Superior Court in California also concerns Facebook’s data practices. It is alleged that Facebook violated the privacy of US citizens by actively exploiting its privacy policy... The published ‘corrected memorandum of points and authorities to defendants’ special motions to strike’, by the complainant in the case, the U.S.-based app developer Six4Three, describes the allegations against Facebook; that Facebook used its users’ data to persuade app developers to create platforms on its system, by promising access to users’ data, including access to data of users’ friends. The case also alleges that those developers that became successful were targeted and ordered to pay money to Facebook... Six4Three lodged its original case in 2015, after Facebook removed developers’ access to friends’ data, including its own. The DCMS Committee took the unusual, but lawful, step of obtaining these documents, which spanned between 2012 and 2014... Since we published these sealed documents, on 14 January 2019 another court agreed to unseal 135 pages of internal Facebook memos, strategies and employee emails from between 2012 and 2014, connected with Facebook’s inappropriate profiting from business transactions with children. A New York Times investigation published in December 2018 based on internal Facebook documents also revealed that the company had offered preferential access to users data to other major technology companies, including Microsoft, Amazon and Spotify."

"We believed that our publishing the documents was in the public interest and would also be of interest to regulatory bodies... The documents highlight Facebook’s aggressive action against certain apps, including denying them access to data that they were originally promised. They highlight the link between friends’ data and the financial value of the developers’ relationship with Facebook. The main issues concern: ‘white lists’; the value of friends’ data; reciprocity; the sharing of data of users owning Android phones..."

You can read the report's detailed descriptions of those issues. A summary: a) Facebook allegedly used promises of access to users' data to lure developers (often by overriding Facebook users' privacy settings); b) some developers got priority treatment based upon unclear criteria; c) developers who didn't spend enough money with Facebook were denied access to data previously promised; d) Facebook's reciprocity clause demanded that developers also share their users' data with Facebook; e) Facebook's mobile app for Android OS phone users collected far more data about users, allegedly without consent, than users were told; and f) Facebook allegedly targeted certain app developers (emphasis added):

"We received evidence that showed that Facebook not only targeted developers to increase revenue, but also sought to switch off apps where it considered them to be in competition or operating in a lucrative areas of its platform and vulnerable to takeover. Since 1970, the US has possessed high-profile federal legislation, the Racketeer Influenced and Corrupt Organizations Act (RICO); and many individual states have since adopted similar laws. Originally aimed at tackling organized crime syndicates, it has also been used in business cases and has provisions for civil action for damages in RICO-covered offenses... Despite specific requests, Facebook has not provided us with one example of a business excluded from its platform because of serious data breaches. We believe that is because it only ever takes action when breaches become public. We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that “we’ve never sold anyone’s data” is simply untrue.” The evidence that we obtained from the Six4Three court documents indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers—such as Six4Three—of that data, thereby causing them to lose their business. It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission settlement."

"The Information Commissioner told the Committee that Facebook needs to significantly change its business model and its practices to maintain trust. From the documents we received from Six4Three, it is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws. The ICO should carry out a detailed investigation into the practices of the Facebook Platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ of the sharing of data."

The Information Commissioner's Office (ICO) is one of the regulatory agencies within the UK. So, the Committee concluded that Facebook's real business model is "data transfer for value" -- in other words: have money, get access to data (regardless of Facebook users' privacy settings).

One quickly gets the impression that Facebook acted like a monopoly in its treatment of both users and developers... or worse, like organized crime. The report concluded (emphasis added):

"The Competitions and Market Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media. The Committee made this recommendation its interim report, and we are pleased that it has also been supported in the independent Cairncross Report commissioned by the government and published in February 2019. Given the contents of the Six4Three documents that we have published, it should also investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail... Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law."

The DCMS Committee's report also discussed findings from the Cairncross Report. In summary, Damian Collins MP, Chair of the DCMS Committee, said:

“... we cannot delay any longer. Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalized ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia... Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers... We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct..."

So, the report is extensive and detailed. Read the DCMS Committee's announcement, and/or download the full DCMS Committee report (Adobe PDF format, 3,507 kilobytes).

One can assume that governments' intelligence and spy agencies will continue to do what they've always done: collect data about targets and adversaries, and use disinformation and other tools to meddle in other governments' activities. It is clear that social media makes these tasks far easier than before. The DCMS Committee's report provided recommendations about what the UK Government's response should be. Other countries' governments face similar decisions about their responses, if any, to the threats.

Given the data in the DCMS report, it will be interesting to see how the FTC and lawmakers in the United States respond. If increased regulation of social media results, tech companies arguably have only themselves to blame. What do you think?


Senators Demand Answers From Facebook And Google About Project Atlas And Screenwise Meter Programs

After news reports surfaced about Facebook's Project Atlas, a secret program in which Facebook paid teenagers (and other users) to install a research app on their phones that tracked and collected information about their mobile usage, several United States Senators demanded explanations. Three Senators sent a joint letter on February 7, 2019 to Mark Zuckerberg, Facebook's chief executive officer.

The joint letter to Facebook (Adobe PDF format) stated, in part:

"We write concerned about reports that Facebook is collecting highly-sensitive data on teenagers, including their web browsing, phone use, communications, and locations -- all to profile their behavior without adequate disclosure, consent, or oversight. These reports fit with Longstanding concerns that Facebook has used its products to deeply intrude into personal privacy... According to a journalist who attempted to register as a teen, the linked registration page failed to impose meaningful checks on parental consent. Facebook has more rigorous mechanism to obtain and verify parental consent, such as when it is required to sign up for Messenger Kids... Facebook's monitoring under Project Atlas is particularly concerning because the data data collection performed by the research app was deeply invasive. Facebook's registration process encouraged participants to "set it and forget it," warning that if a participant disconnected from the monitoring for more than ten minutes for a few days, that they could be disqualified. Behind the scenes, the app watched everything on the phone."

The letter included another example highlighting the alleged lack of meaningful disclosures:

"... the app added a VPN connection that would automatically route all of a participant's traffic through Facebook's servers. The app installed a SSL root certificate on the participant's phone, which would allow Facebook to intercept or modify data sent to encrypted websites. As a result, Facebook would have limitless access to monitor normally secure web traffic, even allowing Facebook to watch an individual log into their bank account or exchange pictures with their family. None of the disclosures provided at registration offer a meaningful explanation about how the sensitive data is used, how long it is kept, or who within Facebook has access to it..."

The letter was signed by Senators Richard Blumenthal (Democrat, Connecticut), Edward J. Markey (Democrat, Massachusetts), and Josh Hawley (Republican, Missouri). Based upon news reports that Facebook's Research app operated with functionality similar to the Onavo VPN app, which Apple banned last year, the Senators concluded:

"Faced with that ban, Facebook appears to have circumvented Apple's attempts to protect consumers."

The joint letter also listed twelve questions the Senators want detailed answers about. Below are selected questions from that list:

"1. When did Project Atlas begin and how many individuals participated? How many participants were under age 18?"

"3. Why did Facebook use a less strict mechanism for verifying parental consent than is Required for Messenger Kids or Global Data Protection Requlation (GDPR) compliance?"

"4.What specific types of data was collected (e.g., device identifieers, usage of specific applications, content of messages, friends lists, locations, et al.)?"

"5. Did Facebook use the root certificate installed on a participant's device by the Project Atlas app to decrypt and inspect encrypted web traffic? Did this monitoring include analysis or retention of application-layer content?"

"7. Were app usage data or communications content collected by Project Atlas ever reviewed by or available to Facebook personnel or employees of Facebook partners?"

8." Given that Project Atlas acknowledged the collection of "data about [users'] activities and content within those apps," did Facebook ever collect or retain the private messages, photos, or other communications sent or received over non-Facebook products?"

"11. Why did Facebook bypass Apple's app review? Has Facebook bypassed the App Store aproval processing using enterprise certificates for any other app that was used for non-internal purposes? If so, please list and describe those apps."

Read the entire letter to Facebook (Adobe PDF format). Also on February 7th, the Senators sent a similar letter to Google (Adobe PDF format), addressed to Hiroshi Lockheimer, the Senior Vice President of Platforms & Ecosystems. It stated in part:

"TechCrunch has subsequently reported that Google maintained its own measurement program called "Screenwise Meter," which raises similar concerns as Project Atlas. The Screenwise Meter app also bypassed the App Store using an enterprise certificate and installed a VPN service in order to monitor phones... While Google has since removed the app, questions remain about why it had gone outside Apple's review process to run the monitoring program. Platforms must maintain and consistently enforce clear policies on the monitoring of teens and what constitutes meaningful parental consent..."

The letter to Google includes a similar list of eight questions the Senators seek detailed answers about. Some notable questions:

"5. Why did Google bypass App Store approval for Screenwise Meter app using enterprise certificates? Has Google bypassed the App Store approval processing using enterprise certificates for any other non-internal app? If so, please list and describe those apps."

"6. What measures did Google have in place to ensure that teenage participants in Screenwise Meter had authentic parental consent?"

"7. Given that Apple removed Onavoo protect from the App Store for violating its terms of service regarding privacy, why has Google continued to allow the Onavo Protect app to be available on the Play Store?"

The lawmakers have asked for responses by March 1st. Thanks to all three Senators for protecting consumers' -- and children's -- privacy... and for demanding transparency and accountability.


Survey: Users Don't Understand Facebook's Advertising System. Some Disagree With Its Classifications

Most people know that many companies collect data about their online activities. Based upon the data collected, companies classify users for a variety of reasons and purposes. Do users agree with these classifications? Do the classifications accurately describe users' habits, interests, and activities?

Facebook logo To answer these questions, the Pew Research Center surveyed users of Facebook. Why Facebook? Besides being the most popular social media platform in the United States, it collects:

"... a wide variety of data about their users’ behaviors. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to allow advertisers to target ads... But how well do Americans understand these algorithm-driven classification systems, and how much do they think their lives line up with what gets reported about them?"

The findings are significant. First:

"Facebook makes it relatively easy for users to find out how the site’s algorithm has categorized their interests via a “Your ad preferences” page. Overall, however, 74% of Facebook users say they did not know that this list of their traits and interests existed until they were directed to their page as part of this study."

So, almost three quarters of Facebook users surveyed don't know what data Facebook has collected about them, nor how to view it, edit it, or opt out of the ad-targeting classifications. According to Wired magazine, Facebook's "Your Ad Preferences" page:

"... can be hard to understand if you haven’t looked at the page before. At the top, Facebook displays “Your interests.” These groupings are assigned based on your behavior on the platform and can be used by marketers to target you with ads. They can include fairly straightforward subjects, like “Netflix,” “Graduate school,” and “Entrepreneurship,” but also more bizarre ones, like “Everything” and “Authority.” Facebook has generated an enormous number of these categories for its users. ProPublica alone has collected over 50,000, including those only marketers can see..."

Now, back to the Pew survey. After survey participants viewed their Ad Preferences page:

"A majority of users (59%) say these categories reflect their real-life interests, while 27% say they are not very or not at all accurate in describing them. And once shown how the platform classifies their interests, roughly half of Facebook users (51%) say they are not comfortable that the company created such a list."

So, about half of persons surveyed use a site whose data collection they are uncomfortable with. Not good. Second, substantial groups said the classifications by Facebook were not accurate:

"... about half of Facebook users (51%) are assigned a political “affinity” by the site. Among those who are assigned a political category by the site, 73% say the platform’s categorization of their politics is very or somewhat accurate, while 27% say it describes them not very or not at all accurately. Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately..."

So, significant numbers of users disagree with the political classifications Facebook assigned to their profiles. Third, it's not only politics:

"... Facebook also lists a category called “multicultural affinity”... this listing is meant to designate a user’s “affinity” with various racial and ethnic groups, rather than assign them to groups reflecting their actual race or ethnic background. Only about a fifth of Facebook users (21%) say they are listed as having a “multicultural affinity.” Overall, 60% of users who are assigned a multicultural affinity category say they do in fact have a very or somewhat strong affinity for the group to which they are assigned, while 37% say their affinity for that group is not particularly strong. Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them."

The survey included a nationally representative sample of 963 Facebook users ages 18 and older from the United States. The survey was conducted September 4 to October 1, 2018. Read the entire survey at the Pew Research Center site.

What can consumers conclude from this survey? Social media users should understand that all social sites, and especially mobile apps, collect data about you, and then make judgments -- classifications -- about you. (Remember, some Samsung phone owners were unable to delete Facebook and other mobile apps. And, everyone wants your geolocation data.) Use any tools the sites provide to edit or adjust your ad preferences to match your interests. Adjust the privacy settings on your profile to limit the data sharing as much as possible.

Last, an important reminder. While Facebook users can edit their ad preferences and can opt out of the ad-targeting classifications, they cannot completely avoid ads. Facebook will still display less-targeted ads. That is simply Facebook being Facebook: making money. That probably applies to other social sites, too.

What are your opinions of the survey's findings?


Facebook Paid Teens To Install Unauthorized Spyware On Their Phones. Plenty Of Questions Remain

Facebook logo While today is the 15th anniversary of Facebook, more important news dominates. Last week featured plenty of news about Facebook. TechCrunch reported on Tuesday:

"Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe... Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits."

So, teenagers installed surveillance software on their phones and tablets, to spy for Facebook on themselves, Facebook's competitors, and others. This is huge news for several reasons. First, the "Facebook Research" app is VPN (Virtual Private Network) software which:

"... lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy..."

Reportedly, the Research app collected massive amounts of information: private messages in social media apps, chats from instant messaging apps, photos/videos sent to others, emails, web searches, web browsing activity, and geo-location data. So, a very intrusive app. And, after being forced to remove one intrusive app from Apple's store, Facebook continued anyway -- with another app that performed the same function. Not good.

Second, there is the moral issue of using the youngest users as spies... persons who arguably have the least experience and skills at reading complex documents: corporate terms-of-use and privacy policies. I wonder how many teenagers notified their friends of the spying and data collection. How many teenagers fully understood what they were doing? How many parents were aware of the activity and payments? How many parents notified the parents of their children's friends? How many teens installed the spyware on both their iPhones and iPads? Lots of unanswered questions.

Third, Apple responded quickly. TechCrunch reported Wednesday morning:

"... Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down... Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store."

Facebook's usage of the Enterprise Certificate is significant. TechCrunch also published a statement by Apple:

"We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization... Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked..."

So, the Research app violated Apple's policy. Not good. The app also performed functions similar to the banned Onavo VPN app. Worse. This sounds like an end-run to me. As punishment for its end-run actions, Apple temporarily disabled the certificates Facebook used for its internal corporate apps.

Axios described very well Facebook's behavior:

"Facebook took a program designed to let businesses internally test their own app and used it to monitor most, if not everything, a user did on their phone — a degree of surveillance barred in the official App Store."

And the animated Facebook image in the Axios article sure looks like a liar-liar-logo-on-fire image. LOL! Pure gold! Seriously, Facebook's behavior indicates questionable ethics, and/or an expectation of not getting caught. Reportedly, the internal Facebook apps shut down by Apple's certificate revocation included shuttle schedules, campus maps, and company calendars. After that, some Facebook employees discussed quitting.

And, it raises more questions. Which Facebook executives approved Project Atlas? What advice did Facebook's legal staff provide prior to approval? Was that advice followed or ignored?

Google logo Fourth, TechCrunch also reported:

"Facebook’s Research program will continue to run on Android."

What? So, Google devices were involved, too. Is this spy program okay with Google executives? A follow-up report on Wednesday by TechCrunch:

"Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple... Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app..."

Oy! So, Google operates like Facebook. Also reported by TechCrunch:

"The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices..."

So, Google will terminate its spy program on Apple devices but, like Facebook, continue it on Android devices. Hmmmmm. Well, that answers some questions. I guess Google executives are okay with this spy program. More questions remain.

Fifth, Facebook tried to defend the Research app and its actions in an internal memo to employees from vice president Pedro Canahuati. On Thursday, TechCrunch tore apart the memo's claims. Chiefly:

"Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause and CentreCode to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly... Facebook claims it wasn’t “spying,” yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not specify data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity.” The parental consent form from Facebook and Applause lists none of the specific types of data collected...

So, Research app participants (e.g., teenagers, parents) couldn't discuss the program, nor warn their friends (and their friends' parents) about the data collection. I strongly encourage everyone to read the entire TechCrunch analysis. It is eye-opening.

Sixth, a reader shared concerns about whether Facebook's actions violated federal laws. Did Project Atlas violate the Digital Millennium Copyright Act (DMCA); specifically the "anti-circumvention" provision, which prohibits avoiding the security protections in software? Did it violate the Computer Fraud and Abuse Act? What about breach-of-contract and fraud laws? What about states' laws? So, one could ask similar questions about Google's actions, too.

I am not an attorney. Hopefully, some attorneys will weigh in on these questions. Probably, some skilled attorneys will investigate various legal options.

All of this is very disturbing. Is this what consumers can expect of Silicon Valley firms? Is this the best tech firms can do? Is this the low level the United States has sunk to? Kudos to the TechCrunch staff for some excellent reporting.

What are your opinions of Project Atlas? Of Facebook's behavior? Of Google's?


Samsung Phone Owners Unable To Delete Facebook And Other Apps. Anger And Privacy Concerns Result

Some consumers have learned that they can't delete Facebook and other mobile apps from their Samsung smartphones. Bloomberg described one consumer's experiences:

"Winke bought his Samsung Galaxy S8, an Android-based device that comes with Facebook’s social network already installed, when it was introduced in 2017. He has used the Facebook app to connect with old friends and to share pictures of natural landscapes and his Siamese cat -- but he didn’t want to be stuck with it. He tried to remove the program from his phone, but the chatter proved true -- it was undeletable. He found only an option to "disable," and he wasn’t sure what that meant."

Samsung phones operate using Google's Android operating system (OS). The "chatter" refers to online complaints by Samsung phone owners. There were plenty of complaints, ranging from snarky to informative, and some persons shared their (understandable) anger. One person also reminded consumers of bigger privacy issues with Android OS phones.

And, that privacy concern still exists. Sophos Labs reported:

"Advocacy group Privacy International announced the findings in a presentation at the 35th Chaos Computer Congress late last month. The organization tested 34 apps and documented the results, as part of a downloadable report... 61% of the apps tested automatically tell Facebook that a user has opened them. This accompanies other basic event data such as an app being closed, along with information about their device and suspected location based on language and time settings. Apps have been doing this even when users don’t have a Facebook account, the report said. Some apps went far beyond basic event information, sending highly detailed data. For example, the travel app Kayak routinely sends search information including departure and arrival dates and cities, and numbers of tickets (including tickets for children)."

After multiple data breaches and privacy snafus, some Facebook users have decided to either quit the Facebook mobile app or quit the service entirely. Now, some Samsung phone users have learned that quitting can be more difficult, and they don't have as much control over their devices as they thought.

How did this happen? Bloomberg explained:

"Samsung, the world’s largest smartphone maker, said it provides a pre-installed Facebook app on selected models with options to disable it, and once it’s disabled, the app is no longer running. Facebook declined to provide a list of the partners with which it has deals for permanent apps, saying that those agreements vary by region and type... consumers may not know if Facebook is pre-loaded unless they specifically ask a customer service representative when they purchase a phone."

Not good. So, now we know that there are two classes of mobile apps: 1) pre-installed and 2) permanent. Pre-installed apps come on new devices. Some pre-installed apps can be deleted by users. Permanent mobile apps are pre-installed apps which cannot be removed/deleted by users. Users can only disable permanent apps.

Sadly, there's more and it's not only Facebook. Bloomberg cited other agreements:

"A T-Mobile US Inc. list of apps built into its version of the Samsung Galaxy S9, for example, includes the social network as well as Amazon.com Inc. The phone also comes loaded with many Google apps such as YouTube, Google Play Music and Gmail... Other phone makers and service providers, including LG Electronics Inc., Sony Corp., Verizon Communications Inc. and AT&T Inc., have made similar deals with app makers..."

This is disturbing. There seem to be several issues:

  1. Notice: consumers should be informed before purchase of any and all phone apps which can't be removed. The presence of permanent mobile apps suggests either a lack of notice, notice buried within legal language of phone manufacturers' user agreements, or both.
  2. Privacy: just because a mobile app appears disabled doesn't mean it has stopped operating. Stealth apps can still collect GPS location and device information while running in the background, and then transmit it to manufacturers. Hopefully, some enterprising technicians or testing labs will independently verify whether "disabled" permanent mobile apps have truly stopped working.
  3. Transparency: phone manufacturers should explain and publish their lists of partners with both pre-installed and permanent app agreements -- for each device model. Otherwise, consumers cannot make informed purchase decisions about phones.
  4. Scope: the Samsung-Facebook pre-installed app deal raises questions about other devices with permanent apps: phones, tablets, laptops, smart televisions, and automotive vehicles. Perhaps some independent testing by Consumer Reports can determine a full list of devices with permanent apps.
  5. Nothing is free. Pre-installed app agreements indicate another method which device manufacturers use to make money, by collecting and sharing consumers' data with other tech companies.

The bottom line is trust. Consumers have more valid reasons to distrust some device manufacturers and OS developers. What issues do you see? What are your thoughts about permanent mobile apps?


To Estimate The Value Of Facebook, A Study Asked How Much Money Users Would Demand As Payment To Quit The Service

Facebook logo What is the value of Facebook to its users? In a recent study, researchers explored answers to that question:

"Because [Facebook] users do not pay for the service, its benefits are hard to measure. We report the results of a series of three non-hypothetical auction experiments where winners are paid to deactivate their Facebook accounts for up to one year..."

The study was published in PLOS One, a peer-reviewed journal by the Public Library of Science. The study is important and of interest to economists because:

"... If Facebook were a country, it would be the world’s largest in terms of population with over 2.20 billion monthly active users, 1.45 billion of whom are active on a daily basis, spending an average of 50 minutes each day on Facebook-owned platforms (e.g., Facebook, Messenger, Instagram)... Despite concerns about loss of relevance due to declining personal posts by users, diminished interest in adoption and use by teens and young adults, claims about potential manipulation of its content for political purposes, and leaks that question the company’s handling of private user data, Facebook remains the top social networking site in the world and the third most visited site on the Internet after Google and YouTube...  Since its launch in 2004, Facebook has redefined how we communicate... Facebook had 23,165 employees as of September 30, 2017. This is less than 1% the number employed by Walmart, the world’s largest private employer... Because Facebook’s users pay nothing for the service, Facebook does not contribute directly to gross domestic product (GDP), economists’ standard metric of a nation’s output. In this context, it may seem surprising then that Facebook is the world’s fifth most valuable company with a market capitalization of $541.56 billion in May 2018... In 2017, the company had $40.65 billion in revenues, primarily from advertising, and $20.20 billion in net income..."

The detailed methodology of the study included:

"... a Vickrey second-price approach. In a typical experimental auction, participants bid to purchase a good or service. The highest bidder wins the auction and pays a price equal to the second-highest bid. This approach is designed such that participants’ best strategy is to bid their true willingness-to-pay... Because our study participants already had free access to Facebook, we could not ask people how much they would be willing to pay for access to the service. Instead, people bid for how much they would need in compensation to give up using Facebook. Economists have used these “willingness-to-accept” (WTA) auctions to assess the value of mundane items such as pens and chocolate bars, but also more abstract or novel items such as food safety, goods free of genetically modified ingredients, the stigma associated with HIV, battery life in smartphones, and the payment people require to endure an unpleasant experience... In this study, each bid can be interpreted as the minimum dollar amount a person would be willing to accept in exchange for not using Facebook for a given time period. The three auctions differ in the amount of time winners would have to go without using Facebook..."

The authors also discussed "consumer surplus," an economics term defined as:

"... a measure of value equal to the difference between the most a consumer would be willing to pay for a service and the price she actually pays to use it. When considering all consumers, Figure 1 below shows consumer surplus is the area under the demand curve, which shows consumers’ willingness to pay, and above the price; it is generally interpreted as consumers’ net benefit from being able to access a good or service in the marketplace. GDP, by contrast, is the market value of all final goods and services produced domestically in a given year..."

[Figure 1 from the study (journal.pone.0207101.g001): consumer surplus shown as the area under the demand curve and above the price]
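A worked example may help. Assume a hypothetical linear demand curve P = 100 - Q and a market price of 40; the consumer surplus is then the triangle under the demand curve and above the price:

    # Hypothetical linear demand: P = 100 - Q. At a price of 40, the
    # quantity demanded is 60, and consumer surplus is the triangle with
    # base 60 (quantity) and height 60 (choke price 100 minus price 40).
    price = 40.0
    quantity = 100.0 - price
    surplus = 0.5 * quantity * (100.0 - price)
    print(surplus)  # 1800.0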

For comparison, the researchers cited related studies:

"... Bapna, Jank, and Shmueli [24] found that eBay users received a median of $4 in consumer surplus per transaction in 2003, or $7 billion in total. Ghose, Smith, and Telang [25] found that Amazon’s used-book market generated $67 million in annual consumer surplus. Brynjolfsson, Hu, and Smith [26] found that the increased variety of books available on Amazon created $1 billion in consumer surplus in 2000. Widening the lens to focus on the entire Internet, Greenstein and McDevitt [27] found that high-speed Internet access (as opposed to dial-up) generated $4.8 billion to $6.7 billion of consumer surplus in total between 1999 and 2006. Dutz, Orszag, and Willig [28] estimated that high-speed internet access generated $32 billion in consumer surplus in 2008 alone..."

Across the three auctions, study participants submitted bids ranging from $1,130 to $2,076 on average. The researchers found:

"... across all three samples, the mean bid to deactivate Facebook for a year exceeded $1,000. Even the most conservative of these mean WTA estimates, if applied to Facebook’s 214 million U.S. users, suggests an annual value of over $240 billion to users... Facebook reached a market capitalization of $542 billion in May 2018. At 2.20 billion active users in March 2018, this suggests a value to investors of almost $250 per user, which is less than one fourth of the annual value of [payments demanded by study participants to quit the service]. This reinforces the idea that the vast majority of benefits of new inventions go not to the inventors but to users."

To summarize, users in the study demanded at least $1,000 yearly each to quit the service. That's a measure of the value of Facebook to users. And, that value far exceeds the $250 value of each user to investors. The authors concluded:

"Concerns about data privacy, such as Cambridge Analytica’s alleged problematic handling of users’ private information, which are thought to have been used to influence the 2016 United States presidential election, only underscore the value Facebook’s users must derive from the service. Despite the parade of negative publicity surrounding the Cambridge Analytica revelations in mid-March 2018, Facebook added 70 million users between the end of 2017 and March 31, 2018. This implies the value users derive from the social network more than offsets the privacy concerns."

The conclusion suggests that a mass exodus of users is unlikely. I guess Facebook executives will find some comfort in that. However, more research is needed. Different sub-groups of users might demand different values. For example, a sub-group of users who have had their accounts hacked or cloned might demand a different -- perhaps lower -- annual payment amount to quit Facebook.

Another sub-group of users who have been identity theft and fraud victims might demand a higher annual payment to cover the costs of credit monitoring services and/or fraud resolution fees. A third sub-group -- parents and grandparents -- might demand a different payment amount due to the loss of access to family, children and grandchildren.

A one-size-fits-all approach to a WTA value doesn't seem very relevant. Follow-up studies could explore values by these sub-groups and by users with different types of behaviors (e.g., dissatisfaction levels):

  1. Quit the service's mobile apps and use only its browser interface,
  2. Reduced their time on the site (e.g., fewer posts, not using quizzes, not posting photos, not using Facebook Messenger, etc.),
  3. Daily usage ranges (e.g., less than 30 minutes, 31 to 59 minutes, 60 to 79 minutes, 80 to 99 minutes, 100 minutes or more, etc.),
  4. Disabled the API interface with their accounts (e.g., don't use Facebook credentials to sign into other sites), and
  5. Tightened their privacy settings to display less (e.g., don't display Friends list, suppress personal newsfeed, don't display personal data, don't allow friends to post to their personal newsfeed page, etc.).

Clearly, more research is needed. Would you quit Facebook? If so, how much money would you demand as payment? What follow-up studies are you interested in?


A Series Of Recent Events And Privacy Snafus At Facebook Cause Multiple Concerns. Does Facebook Deserve Users' Data?

Facebook logo So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix accessed Facebook users' private messages or asked for that access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management with its partners and developers? That's an understatement. During March and April of 2018 we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly resell later) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened so it doesn't happen again? This post-breach response seems sloppy. And, the "bug" description seems disingenuous. Anytime persons -- in this case developers -- have access to data they shouldn't have, it is a data breach.

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage nor control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. Reports surfaced of many users experiencing repeated bogus Friend Requests, due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook had approached several major banks, asking them to share their customers' detailed financial information in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. 9to5Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content, nor stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users (emphasis added):

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to operate several types of attacks (e.g., spam, phishing, etc.) against Facebook users. The stolen data allows foreign spy agencies to collect useful information to target persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggest not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question (emphasis added):

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and WhatsApp accounts, since Facebook operates those services, too. If you want to quit the service but suffer from FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And, your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, nor use its voting and ballot previews content. Long ago I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post on Facebook links to posts in this blog, since it is helpful information for many Facebook users. In what ways have you reduced your usage of Facebook?


Massive Data Breach At Quora Affects 100 Million Users

Quora logo Quora, the knowledge-sharing social networking site, announced on Monday a data breach affecting about 100 million of its users. The company discovered the breach on Friday, and a breach investigation is ongoing.

The company’s Chief Executive Officer, Adam D’Angelo, wrote in a blog post that the following data elements were compromised or stolen:

"a) Account information, e.g. name, email address, encrypted password (hashed using bcrypt with a salt that varies for each user), data imported from linked networks when authorized by users; b) Public content and actions, e.g. questions, answers, comments, upvotes; and c) Non-public content and actions, e.g. answer requests, downvotes, direct messages (note that a low percentage of Quora users have sent or received such messages)"

Quora has invalidated affected users' passwords. Quora does not yet know exactly how unauthorized persons accessed its system. The breach announcement did not state when the intrusion began. D'Angelo added:

"We're still investigating the precise causes and in addition to the work being conducted by our internal security teams, we have retained a leading digital forensics and security firm to assist us. We have also notified law enforcement officials."

Affected users are being notified via email. Affected users returning to the site must reset their accounts with new passwords. Quora encourages users with questions to visit its breach help site. Users are warned to change their online passwords.

The New York Times reported:

"... the incident was unlikely to result in identity theft, as the site does not collect sensitive information such as credit card or Social Security numbers... 300 million people around the world use its site at least once a month to ask and answer questions about politics, faith, calculus, unrequited love, the meaning of life and more. By comparison, Twitter claims 326 million monthly active users. But since it blasted onto the social media landscape in 2010, igniting a blaze of interest among tech company employees, Quora has not become the mainstream cultural force that Twitter has..."

This breach is another reminder to all consumers to never use the same password at multiple sites. Cybercriminals are persistent, and will reuse stolen passwords to see which other sites they can break into to steal sensitive personal and payment information.

If you received an email breach notice from Quora, please share it below (after deleting any sensitive personal data).


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

LinkedIn logo On Friday November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of the DPC's investigation of the LinkedIn.com social networking site, prompted by a 2017 complaint from a person who didn't use the service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could then use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without instruction from the data controller (LinkedIn Ireland), as the law requires. Not good.
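The report says only that the addresses were "hashed." Ad-matching systems commonly normalize an email address and hash it with SHA-256 before upload, so the platform matches digests rather than raw addresses; the sketch below assumes that common scheme (LinkedIn's actual method is not public):

    import hashlib

    def hash_email(email):
        """Normalize an address, then return its SHA-256 hex digest."""
        normalized = email.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    # A hypothetical address; matching happens on the digest, not the text.
    print(hash_email("  Jane.Doe@Example.com "))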

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests that the social networking service's activity probably violated data protection laws, as the European Union (EU) implemented its stronger privacy law, the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The new GDPR applies to both companies operating within the EU, and to companies located outside of the EU which offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistently with any promises they made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC didn't fine LinkedIn, probably because GDPR didn't take effect until May 25, 2018. This suggests that fines will be applied to violations occurring on or after that date, and
  • People in different areas of the world view privacy and data protection differently - as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize regional differences, find ways to sell products and services across areas without degraded customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

Facebook logo November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators prioritize terrorism threats -- with automatic removal reserved for the highest-confidence cases -- and the news item also mentioned its appeal process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leans heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts by a variety of bad actors. Hmmmm... sounds good, but... it makes one wonder about the digital arms race happening. If the bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, then not good.

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey after:

"... a $10,000 to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus, the virtual-reality headset maker, in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries, which value citizens' privacy, will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published a detailed "Delay, Deny, and Deflect" investigative report based upon interviews with at least 50 people:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth, and its lack of attention to identifying and responding to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and politicians in both major parties. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described Facebook's history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that it has invested more = doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported on 13 historical national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company wanting its social networking service to be a reliable, unbiased news source. This news is not good for a company wanting its users to accept that it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.


Some Surprising Facts About Facebook And Its Users

Facebook logo The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (at March/April 2018) from 71% during 2014 to 2015. More teens use other social media services: YouTube (85%), Instagram (72%) and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps, the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays the items it predicts users will engage with: click the "Like" or other emotion buttons. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.
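Facebook's actual ranking model is proprietary, so the minimal sketch below only illustrates the general engagement-ranking idea. The signals, weights, and names are all invented for the example -- this is not Facebook's code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    prior_likes_from_user: int  # times you've liked this author before
    comment_count: int          # overall activity on the post

def predicted_engagement(post):
    """Hypothetical scoring: authors you already 'Like' and busy posts
    float to the top. The real signals and weights are secret."""
    return 2.0 * post.prior_likes_from_user + 0.5 * post.comment_count

def build_news_feed(candidate_posts, limit=10):
    """Neither chronological nor complete: just the posts the model
    predicts you will engage with."""
    ranked = sorted(candidate_posts, key=predicted_engagement, reverse=True)
    return ranked[:limit]
```

Note what such a ranking omits: chronology and completeness. A post the model predicts you won't engage with simply never appears.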

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it, PBS has broadcast a two-part documentary titled "The Facebook Dilemma" (see trailer below), which arguably could have been titled "the dark side of sharing." The Frontline documentary rightly discusses Facebook's approaches to news, privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained that during an April:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for greater transparency and control claims by Mr. Zuckerberg. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. This means Facebook acts more like a massive advertising network than simply a social media service.
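The contact-matching described above is straightforward to sketch. The example below is a hypothetical illustration, not Facebook's code: the data structures, field names, and addresses are invented.

```python
# Invented data structures for illustration -- not Facebook's schema.
known_profiles = {   # email -> profile of an existing member
    "mom@example.com": {"name": "Mom", "member": True,
                        "emails": {"mom@example.com"}},
}
shadow_profiles = {}  # email -> data accumulated about a nonuser

def ingest_address_book(uploader, contacts):
    """Match uploaded contacts against existing profiles; anything that
    doesn't match becomes, or enriches, a shadow profile."""
    for contact in contacts:
        email = contact["email"]
        if email in known_profiles:
            # Enrich a member's profile with data the member never provided,
            # e.g. a second email address pulled from someone else's phone.
            known_profiles[email]["emails"].update(contact.get("other_emails", []))
        else:
            record = shadow_profiles.setdefault(
                email, {"member": False, "seen_in_address_books_of": []})
            record["seen_in_address_books_of"].append(uploader)

# One upload is enough to enrich Mom's profile and create a shadow profile:
ingest_address_book("you", [
    {"email": "mom@example.com", "other_emails": ["mom.second@example.com"]},
    {"email": "neighbor@example.com"},
])
```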

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


Facebook Lowers Its Number of Breach Victims And Explains How Hackers Broke In And Stole Data

Facebook logo In an October 12th Security Update, Facebook lowered the number of users affected by its latest data breach, which it first announced on September 28th, and explained how hackers broke into its systems and stole users' information. During the data breach:

"... the attackers already controlled a set of accounts, which were connected to Facebook friends. They used an automated technique to move from account to account so they could steal the access tokens of those friends, and for friends of those friends, and so on, totaling about 400,000 people. In the process, however, this technique automatically loaded those accounts’ Facebook profiles, mirroring what these 400,000 people would have seen when looking at their own profiles. That includes posts on their timelines, their lists of friends, Groups they are members of, and the names of recent Messenger conversations. Message content was not available to the attackers, with one exception. If a person in this group was a Page admin whose Page had received a message from someone on Facebook, the content of that message was available to the attackers.

The attackers used a portion of these 400,000 people’s lists of friends to steal access tokens for about 30 million people. For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches. For 1 million people, the attackers did not access any information."

Facebook promises to notify the 30 million breach victims. While it lowered the number of breach victims from 50 million to 30 million, this still isn't good. 30 million is still a lot of users. And, hackers stole the juiciest data elements -- contact and profile information -- about breach victims, enabling them to conduct more fraud against victims, their family, friends, and coworkers. Plus, note the phrase: "the attackers already controlled a set of accounts." This suggests the hackers created bogus Facebook accounts, had the sign-in credentials (e.g., username, password) of valid accounts, or both. Not good.
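The "friends, and friends of those friends" technique Facebook describes is, in essence, a breadth-first walk over the friend graph. The conceptual sketch below shows only that traversal pattern -- it is not exploit code, the "tokens" are simulated stand-ins, and the underlying vulnerability has been patched. All names are hypothetical.

```python
from collections import deque

def traverse_friend_graph(seed_accounts, friends_of, max_accounts=400_000):
    """Conceptual breadth-first walk: start from accounts the attackers
    already controlled, then visit friends, friends of friends, and so on,
    recording a simulated 'access token' at each stop."""
    visited = set(seed_accounts)
    tokens = {}
    queue = deque(seed_accounts)
    while queue and len(tokens) < max_accounts:
        account = queue.popleft()
        tokens[account] = f"simulated-token-for-{account}"
        for friend in friends_of(account):
            if friend not in visited:
                visited.add(friend)
                queue.append(friend)
    return tokens

# Toy usage: a three-person chain, starting from one compromised account.
friends = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
stolen = traverse_friend_graph(["a"], lambda acct: friends.get(acct, []))
print(sorted(stolen))  # ['a', 'b', 'c']
```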

Moreover, there is probably more bad news coming, as other affected companies assess the (collateral) damage. Experts said that Facebook's latest breach may be worse than it first appeared, since stolen access tokens might also have worked at the many third-party sites participating in the Facebook Connect single sign-on program. Not good.

The timeline of the data breach and intrusion detection are troubling. Facebook admitted that the vulnerability hackers exploited existed from July 2017 to September 2018, when it noticed "an unusual spike of activity that began on September 14, 2018." While it is good that Facebook's tech team noticed the intrusion, the bad news is that the 14-month window during which the vulnerability existed gave hackers plenty of time to plot and do damage. That the hackers used automated tools suggests they knew about the vulnerabilities for a long time -- long enough to decide what to do, and then build automated tools to steal users' information. Where was Facebook's quality assurance (QA) testing department during all of this? Not good.

This latest data breach included a tiny bit of good news:

"This attack did not include Messenger, Messenger Kids, Instagram, WhatsApp, Oculus, Workplace, Pages, payments, third-party apps, or advertising or developer accounts."

Meanwhile, Facebook runs TV advertisements for its new Portal, a voice-activated device with a video screen, always-listening microphone, and camera for video chats within homes. BuzzFeed reported:

"Portal’s debut comes at a time when Facebook is struggling to reassure the public that it’s capable of protecting users’ privacy... In promoting Portal, Facebook is emphasizing the devices’ security... The company asserts that it doesn't listen or view the content of Portal calls, and the Smart Camera’s artificial intelligence–powered tracking doesn’t run on Facebook servers or use facial recognition. Audio snippets of voice commands can also be deleted from your Facebook Activity Log... because Portal relies on Facebook’s Messenger service, those calls are still under the purview of Facebook’s data privacy policy. The company collects information about “the people, Pages, accounts, hashtags and groups you are connected to and how you interact with them across our Products, such as people you communicate with the most or groups you are part of.” This means that Facebook will know who you’re talking to on Portal and for how long."

BuzzFeed also listed several comments by users. Some are skeptical of privacy promises:

[Tweet #1 about Facebook Portal]

Here's another comment:

[Tweet #2 about Facebook Portal]

Who is going to buy Portal while investigation results from this latest data breach, and from the Cambridge Analytica scandal, are still murky? What other systems and software vulnerabilities exist? Would you buy Portal?