131 posts categorized "Behavioral Advertising"

Facebook To Pay $40 Million To Advertisers To Resolve Allegations of Inflated Advertising Metrics

According to court papers filed last week, Facebook has entered a proposed settlement agreement under which it will pay $40 million to advertisers to resolve allegations in a class-action lawsuit that the social networking platform inflated video advertising engagement metrics. Forbes explained:

"The metrics in question are critical for advertisers on video-based content platforms such as YouTube and Facebook because they show the average amount of time users spend watching their content before clicking away. During the 18 months between February of 2015 and September of 2016, Facebook was incorrectly calculating — and consequently, inflating — two key metrics of this type. Members of the class action are alleging that the faulty metrics led them to spend more money on Facebook ads than they otherwise would have..."

Metrics help advertisers determine if the ads they paid for are delivering results. Reportedly, the lawsuit took three years and Facebook denied any wrongdoing. The proposed settlement must be approved by a court. About $12 million of the $40 million total will be used to pay plaintiffs' attorney fees.

A brief supporting the proposed settlement provided more details:

" One metric—“Average Duration of Video Viewed”—depicted the average number of seconds users watched the video; another—–“Average Percentage of Video Viewed”—depicted the average percentage of the video ad that users watched... Starting in February 2015, Facebook incorrectly calculated Average Duration of Video Viewed... The Average View Duration error, in turn, led to the Average Percentage Viewed metric also being inflated... Because of the error, the average watch times of video ads were exaggerated for about 18 months... Facebook acknowledges there was an error. But Facebook has argued strenuously that the error was an innocent mistake that Facebook corrected shortly after discovering it. Facebook has also pointed out that some advertisers likely never viewed the erroneous metrics and that because Facebook does not set prices based on the impacted metrics, the error did not lead to overcharges... The settlement provides a $40 million cash fund from Facebook, which constitutes as much as 40% of what Plaintiffs estimate they may realistically have been able to recover had the case made it to trial and had Plaintiffs prevailed. Facebook’s $40 million payment will... also cover the costs of settlement administration, class notice, service awards, and Plaintiffs’ litigation costs24 and attorneys’ fees."

It seems that besides a multitude of data breaches and privacy snafus, Facebook can't quite operate its core advertising business reliably. What do you think?


Transcripts Of Internal Facebook Meetings Reveal True Views Of The Company And Its CEO

It's always good for consumers -- and customers -- to know a company's positions on key issues. Thanks to The Verge, we now know more about Facebook's views. Portions of the leaked transcripts included statements by Mr. Zuckerberg, Facebook's CEO, during internal business meetings. The Verge explained the transcripts:

"In two July meetings, Zuckerberg rallied his employees against critics, competitors, and Senator Elizabeth Warren, among others..."

Portions of statements by Mr. Zuckerberg included:

"I’m certainly more worried that someone is going to try to break up our company... So there might be a political movement where people are angry at the tech companies or are worried about concentration or worried about different issues and worried that they’re not being handled well. That doesn’t mean that, even if there’s anger and that you have someone like Elizabeth Warren who thinks that the right answer is to break up the companies... I mean, if she gets elected president, then I would bet that we will have a legal challenge, and I would bet that we will win the legal challenge... breaking up these companies, whether it’s Facebook or Google or Amazon, is not actually going to solve the issues. And, you know, it doesn’t make election interference less likely. It makes it more likely because now the companies can’t coordinate and work together. It doesn’t make any of the hate speech or issues like that less likely. It makes it more likely..."

An October 1st post by Mr. Zuckerberg confirmed the transcripts. Earlier this year, Mr. Zuckerberg called for more government regulation. Given his latest comments, we now know his true views.

Also, CNET reported:

"In an interview with the Today show that aired Wednesday, Instagram CEO Adam Mosseri said he generally agrees with the comments Zuckerberg made during the meetings, adding that the company's large size can help it tackle issues like hate speech and election interference on social media."

The claim by Mosseri, Zuckerberg and others that their company needs to be even bigger to tackle issues is, frankly, laughable. Consumers are concerned about several different issues: privacy, hacked and/or cloned social media accounts, costs, consumer choice, surveillance, data collection we can't opt out of, the inability to delete Facebook and other mobile apps, and election interference. A recent study found that consumers want social sites to collect less data.

Industry consolidation and monopolies/oligopolies usually result in reduced consumer choices and higher prices. Prior studies have documented this. The lack of ISP competition in key markets means consumers in the United States pay more for broadband and get slower speeds compared to other countries. At the U.S. Federal Trade Commission's "Privacy, Big Data, And Competition" hearing last year, the developers of the Brave web browser submitted this feedback:

""First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate..."

Facebook is already huge, and its massive size still hasn't stopped multiple data breaches and privacy snafus. Rather, the snafus have demonstrated an inability (unwillingness?) by the company and its executives to effectively tackle and implement solutions to adequately and truly protect users' sensitive information. Mr. Zuckerberg has repeatedly apologized, but nothing ever seems to change. Given the statements in the transcripts, his apologies seem even less believable and less credible than before.

Alarmingly, Facebook has instead sought more ways to acquire and share users' sensitive data. In August of 2018, reports surfaced that Facebook had approached several major banks, asking them to share detailed financial information about their customers in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Also last year, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. Not good.

Plus, the larger problem is this: Facebook isn't just a social network. It is also an advertiser, publishing platform, dating service, and wannabe payments service. There are several anti-trust investigations underway involving Facebook. Remember, Facebook tracks both users and non-users around the internet. So, claims about it needing to be bigger to solve problems are malarkey.

And, Mr. Zuckerberg's statements seem to mischaracterize Senator Warren's positions by conflating and ignoring (or minimizing) several issues. Here is what Senator Warren actually stated in March 2019:

"America’s big tech companies provide valuable products but also wield enormous power over our digital lives. Nearly half of all e-commerce goes through Amazon. More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook. As these companies have grown larger and more powerful, they have used their resources and control over the way we use the Internet to squash small businesses and innovation, and substitute their own financial interests for the broader interests of the American people... Weak antitrust enforcement has led to a dramatic reduction in competition and innovation in the tech sector. Venture capitalists are now hesitant to fund new startups to compete with these big tech companies because it’s so easy for the big companies to either snap up growing competitors or drive them out of business. The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012... To restore the balance of power in our democracy, to promote competition, and to ensure that the next generation of technology innovation is as vibrant as the last, it’s time to break up our biggest tech companies..."

Senator Warren listed several examples:

"Using Mergers to Limit Competition: Facebook has purchased potential competitors Instagram and WhatsApp. Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate. Google has snapped up the mapping company Waze and the ad company DoubleClick... Using Proprietary Marketplaces to Limit Competition: Many big tech companies own a marketplace — where buyers and sellers transact — while also participating on the marketplace. This can create a conflict of interest that undermines competition. Amazon crushes small companies by copying the goods they sell on the Amazon Marketplace and then selling its own branded version. Google allegedly snuffed out a competing small search engine by demoting its content on its search algorithm, and it has favored its own restaurant ratings over those of Yelp."

Mr. Zuckerberg would be credible if he addressed each of these examples. In the transcripts published by The Verge, he didn't.

And, there is plenty of blame to spread around, both on tech company executives and on government antitrust regulators. Readers wanting to learn more can read about hijacked product pages and other chaos among sellers on the Amazon platform. There's plenty to fault tech companies for, and it isn't a political attack.

There have been plenty of operational failures, data security failures, and willful sharing of the consumer data collected. What are your opinions of the transcripts?


Privacy Study Finds Consumers Less Likely To Share Several Key Data Elements

Last month, the Advertising Research Foundation (ARF) announced the results of its 2019 Privacy Study, which was conducted in March. The survey included 1,100 consumers in the United States, weighted by age, gender, and region. Key findings, including device and internet usage:

"The key differences between 2018 and 2019 are: i) People are spending more time on their mobile devices and less time on their PCs; ii) People are spending more time checking email, banking, listening to music, buying things, playing games, and visiting social media via mobile apps; iii) In general, people are only slightly less likely to share their data than last year. iv) They are least likely to share their social security number; financial and medical information; and their home address and phone numbers; v) People seem to understand the benefits of personalized advertising, but do not value personalization highly and do not understand the technical approaches through which it is accomplished..."

Advertisers use these findings to adjust their advertising, offers, and pitches to maximize responses by consumers. More detail about the above privacy and data sharing findings:

"In general, people were slightly less likely to share their data in 2019 than they were in 2018. They were least likely to share their social security number; financial and medical information; their work address; and their home address and phone numbers in both years. They were most likely to share their gender, race, marital status, employment status, sexual orientation, religion, political affiliation, and citizenship... The biggest changes in respondents’ willingness to share their data from 2018 to 2019 were seen in their home address (-10 percentage points), spouse’s first and last name (-8 percentage points), personal email address (-7 percentage points), and first and last names (-6 percentage points)."

The researchers asked the data sharing question in two ways:

  1. "Which of the following types of information would you be willing to share with a website?"
  2. "Which of the following types of information would you be willing to share for a personalized experience?"

The survey included 20 information types for both questions. For the first question, survey respondents' willingness to share decreased for 15 of 20 information types, remained constant for two information types, and increased slightly for the remainder:

Which of the following types of information would you be willing to share with a website?
(Percent of respondents; "Change" shows 2019 higher/(lower) than 2018.)

  Information Type                 2018    2019    Change
  Birth Date                        71      68      (3)
  Citizenship Status                82      79      (3)
  Employment Status                 84      82      (2)
  Financial Information             23      20      (3)
  First & Last Name                 69      63      (6)
  Gender                            93      93      --
  Home Address                      41      31      (10)
  Home Landline Phone Number        33      30      (3)
  Marital Status                    89      85      (4)
  Medical Information               29      26      (3)
  Personal Email Address            61      54      (7)
  Personal Mobile Phone Number      34      32      (2)
  Place Of Birth                    62      58      (4)
  Political Affiliation             76      77      1
  Race or Ethnicity                 90      91      1
  Religious Preference              78      79      1
  Sexual Orientation                80      79      (1)
  Social Security Number            10      10      --
  Spouse's First & Last Name        41      33      (8)
  Work Address                      33      31      (2)

The researchers asked about citizenship status due to controversy related to the upcoming 2020 Census. The researchers concluded:

"The survey finding most relevant to these proposals is that the public does not see the value of sharing data to improve personalization of advertising messages..."

Overall, it appears that consumers are getting wiser about their privacy. Consumers' willingness to share decreased for more items than it increased for. View the detailed ARF 2019 Privacy Survey (Adobe PDF).


Google And YouTube To Pay $170 Million In Proposed Settlement To Resolve Charges Of Children's Privacy Violations

Today's blog post contains information all current and future parents should know. On Tuesday, the U.S. Federal Trade Commission (FTC) announced a proposed settlement agreement under which YouTube LLC, and its parent company, Google LLC, will pay a monetary fine of $170 million to resolve charges that the video-sharing service illegally collected the personal information of children without their parents' consent.

The proposed settlement agreement requires YouTube and Google to pay $136 million to the FTC and $34 million to New York State to resolve charges that the video-sharing service violated the Children’s Online Privacy Protection Act (COPPA) Rule. The announcement explained the allegations:

"... that YouTube violated the COPPA Rule by collecting personal information—in the form of persistent identifiers that are used to track users across the Internet—from viewers of child-directed channels, without first notifying parents and getting their consent. YouTube earned millions of dollars by using the identifiers, commonly known as cookies, to deliver targeted ads to viewers of these channels, according to the complaint."

"The COPPA Rule requires that child-directed websites and online services provide notice of their information practices and obtain parental consent prior to collecting personal information from children under 13, including the use of persistent identifiers to track a user’s Internet browsing habits for targeted advertising. In addition, third parties, such as advertising networks, are also subject to COPPA where they have actual knowledge they are collecting personal information directly from users of child-directed websites and online services... the FTC and New York Attorney General allege that while YouTube claimed to be a general-audience site, some of YouTube’s individual channels—such as those operated by toy companies—are child-directed and therefore must comply with COPPA."

While $170 million is a lot of money, it is tiny compared to the $5 billion fine the FTC assessed against Facebook. The fine is also tiny compared to Google's earnings. Alphabet Inc., the holding company which owns Google, generated pretax income of $34.91 billion during 2018 on revenues of $136.96 billion.

In February, the FTC concluded a settlement with Musical.ly, a video social networking app now operating as TikTok, where Musical.ly paid $5.7 million to resolve allegations of COPPA violations. Regarding the proposed settlement with YouTube, Education Week reported:

"YouTube has said its service is intended for ages 13 and older, although younger kids commonly watch videos on the site and many popular YouTube channels feature cartoons or sing-a-longs made for children. YouTube has its own app for children, called YouTube Kids; the company also launched a website version of the service in August. The site says it requires parental consent and uses simple math problems to ensure that kids aren't signing in on their own. YouTube Kids does not target ads based on viewer interests the way YouTube proper does. The children's version does track information about what kids are watching in order to recommend videos. It also collects personally identifying device information."

The proposed settlement also requires YouTube and Google:

"... to develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA. In addition, the companies must notify channel owners that their child-directed content may be subject to the COPPA Rule’s obligations and provide annual training about complying with COPPA for employees who deal with YouTube channel owners. The settlement also prohibits Google and YouTube from violating the COPPA Rule, and requires them to provide notice about their data collection practices and obtain verifiable parental consent before collecting personal information from children."

The complaint and proposed consent decree were filed in the U.S. District Court for the District of Columbia. After approval by a judge, the proposed settlement becomes final. Hopefully, the fine and additional requirements will be enough to deter future abuses.


Google Claims Blocking Cookies Is Bad For Privacy. Researchers: Nope. That Is 'Privacy Gaslighting'

The announcement by Google last week included some dubious claims, which received a fair amount of attention among privacy experts. First, Google's Senior Product Manager of User Privacy and Trust wrote in a post:

"Ads play a major role in sustaining the free and open web. They underwrite the great content and services that people enjoy... But the ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. The mission is clear: we need to ensure that people all around the world can continue to access ad supported content on the web while also feeling confident that their privacy is protected. As we shared in May, we believe the path to making this happen is also clear: increase transparency into how digital advertising works, offer users additional controls, and ensure that people’s choices about the use of their data are respected."

Okay, that is a fair assessment of today's internet. And, more transparency is good. Google executives are entitled to their opinions. The post also stated:

"The web ecosystem is complex... We’ve seen that approaches that don’t account for the whole ecosystem—or that aren’t supported by the whole ecosystem—will not succeed. For example, efforts by individual browsers to block cookies used for ads personalization without suitable, broadly accepted alternatives have fallen down on two accounts. First, blocking cookies materially reduces publisher revenue... Second, broad cookie restrictions have led some industry participants to use workarounds like fingerprinting, an opaque tracking technique that bypasses user choice and doesn’t allow reasonable transparency or control. Adoption of such workarounds represents a step back for user privacy, not a step forward."

So, Google claims that blocking cookies is bad for privacy. With a statement like that, the "User Privacy and Trust" title seems like an oxymoron. Maybe that's the best one can expect from a company that gets 87 percent of its revenues from advertising.

Also on August 22nd, the Director of Chrome Engineering repeated this claim and proposed new internet privacy standards:

"... we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox. Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent... some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences. First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected... Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web..."
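The fingerprinting technique described above is straightforward to sketch. Below is a minimal, illustrative example (TypeScript, running in a browser) of how a tracker might combine a few browser attributes into a persistent identifier; this is not any specific tracker's code, and real fingerprinting scripts combine many more signals, such as canvas rendering and installed fonts:

    // Illustrative fingerprinting sketch -- NOT any specific tracker's code.
    // Combines a few attributes that vary between users and hashes them
    // into one identifier that survives cookie clearing.
    async function computeFingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,                                  // browser + OS
        navigator.language,                                   // locale
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        String(new Date().getTimezoneOffset()),               // time zone
        String(navigator.hardwareConcurrency ?? "n/a"),       // CPU cores
      ].join("|");

      // Hash the concatenated signals into a compact, stable identifier.
      const bytes = new TextEncoder().encode(signals);
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // Two visits from the same browser produce the same ID -- no cookie needed.
    computeFingerprint().then((id) => console.log("fingerprint:", id));

Note the asymmetry the Chrome team alludes to: a user can clear cookies, but cannot change these attributes without changing devices or browsers.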

Yes, fingerprinting is a nasty, privacy-busting technology. No argument with that. But, blocking cookies is bad for privacy? Really? Come on, let's be honest.

This dubious claim ignores corporate responsibility... that some advertisers and website operators made choices -- conscious decisions to use more invasive technologies like fingerprinting to do an end-run around users' needs, desires, and actions to regain online privacy. Sites and advertisers made those invasive-tech choices when other options were available, such as using subscription services to pay for their content.

Plus, Google's claim also ignores the push by corporate internet service providers (ISPs), which resulted in the repeal of online privacy protections for consumers thanks to a compliant, GOP-led Federal Communications Commission (FCC) that seems happy to tilt the playing field further towards corporations and against consumers. So, users are simply trying to regain online privacy.

During the past few years, both privacy-friendly web browsers (e.g., Brave, Firefox) and search engines (e.g., DuckDuckGo) have emerged to meet consumers' online privacy needs. (Well, it's not only consumers that need online privacy. Attorneys and businesses need it, too, to protect their intellectual property and proprietary business methods.) Online users demanded choice, something advertisers need to remember and value.

Privacy experts weighed in about Google's blocking-cookies-is-bad-for-privacy claim. Jonathan Mayer and Arvind Narayanan explained:

"That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight. Our high-level points are: 1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting; 2) There is little trustworthy evidence on the comparative value of tracking-based advertising; 3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical; and 4) Google is attempting a punt to the web standardization process, which will at best result in years of delay."

The researchers debunked Google's claim with more details:

"Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true. If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today."

Moreover:

"... there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections. Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general."

Gaslighting, indeed. Online privacy is important. So, too, are consumers' choices and desires. Thanks to Mr. Mayer and Mr. Narayanan for the comprehensive response.

What are your opinions of cookie blocking? Of Google's claims?


FTC Levies $5 Billion Fine, 'New Restrictions, And Modified Corporate Structure' To Hold Facebook Accountable. Will These Actions Prevent Future Privacy Abuses?

The U.S. Federal Trade Commission (FTC) announced on July 24th a record-breaking fine against Facebook, Inc., plus new limitations on the social networking service. The FTC announcement stated:

"Facebook, Inc. will pay a record-breaking $5 billion penalty, and submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy, to settle Federal Trade Commission charges that the company violated a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information... The settlement order announced [on July 24th] also imposes unprecedented new restrictions on Facebook’s business operations and creates multiple channels of compliance..."

During 2018, Facebook generated after-tax profits of $22.1 billion on sales of $55.84 billion. While a $5 billion fine is a lot of money, the company can easily afford the record-breaking fine. The fine equals about one month's revenues, or a little over 4 percent of its $117 billion in assets.

The FTC announcement explained several "unprecedented" restrictions in the settlement order. First, the restrictions are designed to:

"... prevent Facebook from deceiving its users about privacy in the future, the FTC’s new 20-year settlement order overhauls the way the company makes privacy decisions by boosting the transparency of decision making... It establishes an independent privacy committee of Facebook’s board of directors, removing unfettered control by Facebook’s CEO Mark Zuckerberg over decisions affecting user privacy. Members of the privacy committee must be independent and will be appointed by an independent nominating committee. Members can only be fired by a supermajority of the Facebook board of directors."

Second, the restrictions mandate compliance officers:

"Facebook will be required to designate compliance officers who will be responsible for Facebook’s privacy program. These compliance officers will be subject to the approval of the new board privacy committee and can be removed only by that committee—not by Facebook’s CEO or Facebook employees. Facebook CEO Mark Zuckerberg and designated compliance officers must independently submit to the FTC quarterly certifications that the company is in compliance with the privacy program mandated by the order, as well as an annual certification that the company is in overall compliance with the order. Any false certification will subject them to individual civil and criminal penalties."

Third, the new order strengthens oversight:

"... The order enhances the independent third-party assessor’s ability to evaluate the effectiveness of Facebook’s privacy program and identify any gaps. The assessor’s biennial assessments of Facebook’s privacy program must be based on the assessor’s independent fact-gathering, sampling, and testing, and must not rely primarily on assertions or attestations by Facebook management. The order prohibits the company from making any misrepresentations to the assessor, who can be approved or removed by the FTC. Importantly, the independent assessor will be required to report directly to the new privacy board committee on a quarterly basis. The order also authorizes the FTC to use the discovery tools provided by the Federal Rules of Civil Procedure to monitor Facebook’s compliance with the order."

Fourth, the order included six new privacy requirements (a sketch illustrating requirement (v) follows the list):

"i) Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data; ii) Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising; iii) Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users; iv) Facebook must establish, implement, and maintain a comprehensive data security program; v) Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and vi) Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services."

Wow! Lots of consequences when a manager builds a corporation with a "move fast and break things" culture, values, and ethics. Assistant Attorney General Jody Hunt for the Department of Justice’s Civil Division said:

"The Department of Justice is committed to protecting consumer data privacy and ensuring that social media companies like Facebook do not mislead individuals about the use of their personal information... This settlement’s historic penalty and compliance terms will benefit American consumers, and the Department expects Facebook to treat its privacy obligations with the utmost seriousness."

There is disagreement among the five FTC commissioners about the settlement, as the vote for the order was 3 to 2. FTC Commissioner Rebecca Kelly Slaughter stated in her dissent:

"My principal objections are: (1) The negotiated civil penalty is insufficient under the applicable statutory factors we are charged with weighing for order violators: injury to the public, ability to pay, eliminating the benefits derived from the violation, and vindicating the authority of the FTC; (2) While the order includes some encouraging injunctive relief, I am skeptical that its terms will have a meaningful disciplining effect on how Facebook treats data and privacy. Specifically, I cannot view the order as adequately deterrent without both meaningful limitations on how Facebook collects, uses, and shares data and public transparency regarding Facebook’s data use and order compliance; (3) Finally, my deepest concern with this order is that its release of Facebook and its officers from legal liability is far too broad..."

FTC Chairman Joseph J. Simons and Commissioners Noah Joshua Phillips and Christine S. Wilson stated on July 24th in an 8-page joint statement (Adobe PDF):

"In 2012, Facebook entered into a consent order with the FTC, resolving allegations that the company misrepresented to consumers the extent of data sharing with third-party applications and the control consumers had over that sharing. The 2012 order barred such misrepresentations... Our complaint announced today alleges that Facebook failed to live up to its commitments under that order. Facebook subsequently made similar misrepresentations about sharing consumer data with third-party apps and giving users control over that sharing, and misrepresented steps certain consumers needed to take to control [over] facial recognition technology. Facebook also allowed financial considerations to affect decisions about how it would enforce its platform policies against third-party users of data, in violation of its obligation under the 2012 order... The $5 billion penalty serves as an important deterrent to future order violations... For purposes of comparison, the EU’s General Data Protection Regulation (GDPR) is touted as the high-water mark for comprehensive privacy legislation, and the penalty the FTC has negotiated is over 20 times greater than the largest GDPR fine to date... IV. The Settlement Far Exceeds What Could be Achieved in Litigation and Gives Consumers Meaningful Protections Now... Even assuming the FTC would prevail in litigation, a court would not give the Commission carte blanche to reorganize Facebook’s governance structures and business operations as we deem fit. Instead, the court would impose the relief. Such relief would be limited to injunctive relief to remedy the specific proven violations... V. Mark Zuckerberg is Being Held Accountable and the Order Cabins His Authority Our dissenting colleagues argue that the Commission should not have settled because the Commission’s investigation provides an inadequate basis for the decision not to name Mark Zuckerberg personally as a defendant... The provisions of this Order extinguish the ability of Mr. Zuckerberg to make privacy decisions unilaterally by also vesting responsibility and accountability for those decisions within business units, DCOs, and the privacy committee... the Order significantly diminishes Mr. Zuckerberg’s power — something no government agency, anywhere in the world, has thus far accomplished. The Order requires multiple information flows and imposes a robust system of checks and balances..."

Time will tell how effective the order's restrictions and the $5 billion penalty are. That Facebook can easily afford the penalty suggests the amount is a weak deterrent. If all or part of the penalty is tax-deductible (yes, tax-deductible fines have happened before, directly reducing a company's taxes), then that would weaken the deterrent effect. And, if all or part of the fine is tax-deductible, then we taxpayers just paid for part of Facebook's alleged wrongdoing. I'll bet most taxpayers wouldn't want that.

Facebook stated in a July 24th news release that its second-quarter 2019 earnings included:

"... an additional $2.0 billion legal expense related to the U.S. Federal Trade Commission (FTC) settlement and a $1.1 billion income tax expense due to the developments in Altera Corp. v. Commissioner, as discussed below. As the FTC expense is not expected to be tax-deductible, it had no effect on our provision for income taxes... In July 2019, we entered into a settlement and modified consent order to resolve the inquiry of the FTC into our platform and user data practices. Among other matters, our settlement with the FTC requires us to pay a penalty of $5.0 billion and to significantly enhance our practices and processes for privacy compliance and oversight. In particular, we have agreed to implement a comprehensive expansion of our privacy program, including substantial management and board of directors oversight, stringent operational requirements and reporting obligations, and a process to regularly certify our compliance with the privacy program to the FTC. In the second quarter of 2019, we recorded an additional $2.0 billion accrual in connection with our settlement with the FTC, which is included in accrued expenses and other current liabilities on our condensed consolidated balance sheet."

"Not expected to be" is not the same as definitely not. And, business expenses reduce a company's taxable net income.

A copy of the FTC settlement order with Facebook is also available here (Adobe PDF format; 920K bytes). Plus, there is more:

"... the FTC also announced today separate law enforcement actions against data analytics company Cambridge Analytica, its former Chief Executive Officer Alexander Nix, and Aleksandr Kogan, an app developer who worked with the company, alleging they used false and deceptive tactics to harvest personal information from millions of Facebook users. Kogan and Nix have agreed to a settlement with the FTC that will restrict how they conduct any business in the future."

Cambridge Analytica was involved in the massive Facebook data scandal disclosed in 2018, in which persons allegedly posed as academic researchers in order to download Facebook users' profile information they weren't authorized to access.

What are your opinions? Hopefully, some tax experts will weigh in about the fine.


Tech Expert Concluded Google Chrome Browser Operates A Lot Like Spy Software

Many consumers still do much of their online activity in web browsers. Which browsers are better for your online privacy? You may be interested in this analysis by a tech expert:

"... I've been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads... My tests of Chrome vs. Firefox [browsers] unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker "cookies" that Chrome would have ushered right onto my computer but were automatically blocked by Firefox... Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service's log-in pages."

"And that's not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you're logged in to the browser, and Google might be tapping into your Web activity to target ads. Don't recall signing in? I didn't, either. Chrome recently started doing that automatically when you use Gmail... I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn't cause anybody's browsing history to be "synced" unless they specifically opted in — but I found mine was being sent to Google and don't recall ever asking for extra surveillance..."

Also:

"Google's product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they're working on new ones for cookies. But they also said they have to get the right balance with a "healthy Web ecosystem" (read: ad business). Firefox's product managers told me they don't see privacy as an "option" relegated to controls. They've launched a war on surveillance, starting last month with "enhanced tracking protection" that blocks nosy cookies by default on new Firefox installations..."

This tech expert concluded:

"It turns out, having the world's biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop. It made me decide to ditch Chrome for a new version of nonprofit Mozilla's Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine."

Regular readers of this blog are aware of how Google tracks consumers' online purchases, the worst mobile apps for privacy, and privacy alternatives such as the Brave web browser, the DuckDuckGo search engine, virtual private network (VPN) software, and more. Yes, you can use the Firefox browser on your Apple iPhone. I do.

Me? I've used the Firefox browser since about 2010 on my (Windows) laptop, and the DuckDuckGo search engine since 2013. I stopped using Bing, Yahoo, and Google search engines in 2013. While Firefox installs with Google as the default search engine, you can easily switch it to DuckDuckGo. I did. I am very happy with the results.

Which web browser and search engine do you use? What do you do to protect your online privacy?


FTC Urged To Rule On Legality Of 'Secret Surveillance Scores' Used To Vary Prices By Each Online Shopper

Nobody wants to pay too much for a product. If you like online shopping, you may have been charged higher prices than your neighbors. Gizmodo reported:

"... researchers have documented and studied the use of so-called "surveillance scoring," the shadowy, but widely adopted practice of using computer algorithms that, in commerce, result in customers automatically paying different prices for the same product. The term also encompasses tactics used by employers and landlords to deny applicants jobs and housing, respectively, based on suggestions an algorithm spits out. Now experts allege that much of this surveillance scoring behavior is illegal, and they’re are asking the Federal Trade Commission (FTC) to investigate."

"In a 38-page petition filed last week, the Consumer Education Foundation (CEF), a California nonprofit with close ties to the group Consumer Watchdog, asked the FTC to explore whether the use of surveillance scores constitute “unfair or deceptive practices” under the Federal Trade Commission Act..."

The petition is part of a "Represent Consumers" (RC) program.

Many travelers have experienced dynamic pricing, where airlines vary fares based upon market conditions: when demand increases, prices go up; when demand decreases, prices go down. Similarly, when there are many unsold seats (e.g., plenty of excess supply), prices go down. But that dynamic pricing does not vary for each traveler.

Pricing by each person raises concerns of price discrimination. The legal definition of price discrimination in the United States:

"A seller charging competing buyers different prices for the same "commodity" or discriminating in the provision of "allowances" — compensation for advertising and other services — may be violating the Robinson-Patman Act... Price discriminations are generally lawful, particularly if they reflect the different costs of dealing with different buyers or are the result of a seller's attempts to meet a competitor's offering... There are two legal defenses to these types of alleged Robinson-Patman violations: (1) the price difference is justified by different costs in manufacture, sale, or delivery (e.g., volume discounts), or (2) the price concession was given in good faith to meet a competitor's price."

Airlines have wanted to extend dynamic pricing to each person, and "surveillance scores" seem perfectly suited for the task. The RC petition is packed with information which is instructive for consumers to learn about the extent of the business practices. First, the petition described the industry involved:

"Surveillance scoring starts with "analytics companies," the true number of which is unknown... these firms amass thousands or even tens of thousands of demographic and lifestyle data points about consumers, with the help of an estimated 121 data brokers and aggregators... The analytics firms use algorithms to categorize, grade, or assign a numerical value to a consumer based on the consumer’s estimated predicted behavior. That score then dictates how a company will treat a consumer. Consumers deemed to be less valuable are treated poorly, while consumers with better “grades” get preferential treatment..."

Second, the RC petition cited a study which identified 44 different types of proprietary surveillance scores used by industry participants to predict consumer behavior. Some of the score types (a toy scoring sketch follows this list):

"The Medication Adherence Score, which predicts whether a consumer is likely to follow a medication regimen; The Health Risk Score, which predicts how much a specific patient will cost an insurance company; The Consumer Profitability Score, which predicts which households may be profitable for a company and hence desirable customers; The Job Security Score, which predicts a person’s future income and ability to pay for things; The Churn Score, which predicts whether a consumer is likely to move her business to another company; The Discretionary Spending Index, which scores how much extra cash a particular consumer might be able to spend on non-necessities; The Invitation to Apply Score, which predicts how likely a consumer is to respond to a sales offer; The Charitable Donor Score, which predicts how likely a household is to make significant charitable donations; and The Pregnancy Predictor Score, which predicts the likelihood of someone getting pregnant."

It is important to note that the RC petition does not call for a halt in the collection of personal data about consumers. Rather, it asks the FTC, "to investigate and prohibit the targeting of consumers’ private data against them after it has been collected." Clarity is needed about what is, and is not, legal when consumers' personal data is used against them.

Third, the RC petition also cited published studies about pricing discrimination:

"An early seminal study of price discrimination published by researchers at Northeastern University in 2014 (Northeastern Price Discrimination Study) examined the pricing practices of e-commerce websites. The researchers developed a software-based methodology for measuring price discrimination and tested it with 300 real-world users who shopped on 16 popular e-commerce websites.37 Of ten different general retailers tested in 2014, only one –- Home Depot –- was confirmed to be engaging in price discrimination. Home Depot quoted prices to mobile-device users that were approximately $100 more than those quoted to desktop users.39 The researchers were unable to ascertain why... The Northeastern Price Discrimination Study also found that “human shoppers got worse bargains on a number of websites,”compared to an automated shopping browser that did not have any personal data trail associated with it,42 validating that Home Depot was considering shoppers’ personal data when setting prices online."

So, concerns about price discrimination aren't simply theory. Related to that, the RC petition cited its own research:

"... researchers at Northeastern University developed an online tool to “expose how websites personalize prices.” The Price Discrimination Tool (PDT) is a plug-in extension used on the Google Chrome browser that allows any Internet user to perform searches on five websites to see if the user is being charged a different price based on whatever information the companies have about that particular user. The PDT uses a remote computer server that is anonymous –- it has no personal data profile... The PDT then displays the price results from the human shopper’s search and those obtained by the remote anonymous computer server. Our own testing using the PDT revealed that Home Depot continues to offer different prices to human shoppers. For example, a search on Home Depot’s website for “white paint” reveals price discrimination. Of the 24 search results on the first page, Home Depot quoted us higher prices for six tubs of white paint than it quoted the anonymous computer... Our testing also revealed similar price discrimination on Home Depot’s website for light bulbs, toilet paper, toilet paper holders, caulk guns, halogen floor lamps and screw drivers... We also detected price discrimination on Walmart’s website using the PDT. Our testing revealed price discrimination on Walmart’s website for items such as paper towels, highlighters, pens, paint and toilet paper roll holders."

The RC petition listed examples: the Home Depot site quoted $59.87 for a five-gallon bucket of paint to the anonymous user, and $62.96 for the same product to a researcher. Another example: the site quoted $10.26 for a toilet-paper holder to the anonymous user, and $20.89 for the same product to a researcher -- double the price. Price differences per person ranged from small to huge.
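The PDT's basic method is simple to sketch: request the same product page twice, once with a shopper's cookies and once with no personal data trail, then compare the quoted prices. Below is an illustrative TypeScript version; the URL, cookie value, and price pattern are all hypothetical:

    // Illustrative price-discrimination check, modeled loosely on the PDT.
    const PRODUCT_URL = "https://www.example-retailer.com/p/white-paint-5gal";
    const PRICE_PATTERN = /"price"\s*:\s*"?(\d+\.\d{2})"?/; // site-specific

    async function quotedPrice(cookieHeader?: string): Promise<number | null> {
      const res = await fetch(PRODUCT_URL, {
        headers: cookieHeader ? { Cookie: cookieHeader } : {},
      });
      const match = PRICE_PATTERN.exec(await res.text());
      return match ? Number(match[1]) : null;
    }

    async function main(): Promise<void> {
      const personalized = await quotedPrice("tid=abc123"); // with data trail
      const anonymous = await quotedPrice();                // no cookies
      if (personalized !== null && anonymous !== null &&
          personalized !== anonymous) {
        console.log(
          `Possible price discrimination: $${personalized} (you) ` +
          `vs. $${anonymous} (anonymous)`
        );
      }
    }
    main();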

Besides concerns about price discrimination, the RC petition discussed "discriminatory customer service," and the data analytics firms allegedly involved:

"Zeta Global sells customer value scores that will determine, among other things, the quality of customer service a consumer receives from one of Zeta’s corporate clients. Zeta Global “has a database of more than 700 million people, with an average of over 2,500 pieces of data per person,” from which it creates the scores. The scores are based on data “such as the number of times a customer has dialed a call center and whether that person has browsed a competitor’s website or searched certain keywords in the past few days.” Based on that score, Zeta will recommend to its clients, which include wireless carriers, whether to respond to one customer more quickly than to others.

"Kustomer Inc.: Customer-service platform Kustomer Inc. uses customer value scores to enable retailers and other businesses to treat customer service inquiries differently..."

"Opera Solutions: describes itself as a “a global provider of advanced analytics software solutions that address the persistent problem of scaling Big Data analytics.” Opera Solutions generates customer value scores for its clients (including airlines, retailers and banks)..."

The petition cited examples of "discriminatory customer service," which included denied product returns and customers shunted to less helpful customer service options. Plus, there are accuracy concerns:

"Considering that credit scores – the existence of which has been public since 1970 – are routinely based on credit reports found to contain errors that harm consumers’ financial standing,31 it is highly likely that Secret Surveillance Scores are based on inaccurate or outdated information. Since the score and the erroneous data upon which it relies are secret, there is no way to correct an error,32 assuming the consumer was aware of it."

Regular readers of this blog are already aware of errors in reports from credit reporting agencies. A copy of the RC petition is also available here (Adobe PDF, 3.2 Mbytes).

What immediately becomes clear while reading the petition is the massive amount of personal data collected about consumers to create these proprietary scores. Consumers have no way of knowing, nor of challenging, the accuracy of the scores when they are used against them. So, not only has an industry arisen which profits by acquiring and then selling, trading, analyzing, and/or using consumers' data; there is also little to no accountability.

In other words, the playing field is heavily tilted for corporations and against consumers.

This is also a reminder of why telecommunications companies fought hard for the repeals of broadband privacy protections and of net neutrality, both of which happened in 2017 with the support of FCC Chairman Ajit Pai, a Trump appointee. Repeal of the former consumer protection allows unrestricted collection of consumers' data, plus new revenue streams from selling the data collected to analytics firms, data brokers, and business partners.

Repeal of the second consumer protection allows internet and cable providers to price content using whatever criteria they choose. You see a rudimentary version of this pricing in a business practice called "zero rating." An example: streaming a movie via a provider's internet service counts against a data cap while the same movie viewed through the same provider's cable subscription does not. Yet, the exact same movie is delivered through the exact same cable (or fiber) internet connection.

Smart readers will immediately realize that a possible next step is per-person zero rating. Streaming a movie might count against your data cap but not against your neighbor's. Who would know? Oversight and consumer protections are needed.

What are your opinions of secret surveillance scores?


Study: While Consumers Want Sites Like Facebook And Google To Collect Less Data, Few Want To Pay For Privacy

A recent study by the Center For Data Innovation explored consumers' attitudes about online privacy. One of the primary findings:

"... when potential tradeoffs were not part of the question approximately 80 percent of Americans agreed that they would like online services such as Facebook and Google to collect less of their data..."

So, most survey participants want more online privacy as defined by less data collected about them. That is good news, right? Maybe. The researchers dug deeper to understand survey participants' views about "tradeoffs" - various ways of paying for online privacy. It found that support for more privacy (e.g., less data collected):

"... eroded when respondents considered these tradeoffs... [support] dropped by 6 percentage points when respondents were asked whether they would like online services to collect less data even if it means seeing ads that are less useful. Support dropped by 27 percentage points when respondents considered whether they would like less data collection even if it means seeing more ads than before. And it dropped by 26 percentage points when respondents were asked whether they would like less data collection even if it means losing access to some features they use now."

So, support for more privacy fell if irrelevant ads, more ads, and/or fewer features were the consequences. There is more:

"The largest drop in support (53 percentage points) came when respondents were asked whether they would like online services to collect less of their data even if it means paying a monthly subscription fee."

This led to a second major finding:

"Only one in four Americans want online services such as Facebook and Google to collect less of their data if it means they would have to start paying a monthly subscription fee..."

So, most want privacy but few are willing to pay for it. This is probably reassuring news for executives in a variety of industries (e.g., social media, tech companies, device manufacturers, etc.), encouraging them to keep doing what they are doing: massive collection of consumers' data via sites, mobile apps, partnerships, and however else they can get it.

Next, the survey asked participants if they would accept more data collection if that provided more benefits:

"... approximately 74 percent of Americans opposed having online services such as Google and Facebook collect more of their data. But that opposition decreased by 11 percentage points... if it means seeing ads that are more useful. It dropped by 17 percentage points... if it means seeing fewer ads than before and... if it means getting access to new features they would use. The largest decrease in opposition (18 percentage points) came... if it means getting more free apps and services..."

So, while most consumers want online privacy, they can be easily persuaded to abandon their positions with promises of more benefits. The survey included a national online poll of 3,240 U.S. adult Internet users, conducted December 13-16, 2018.

What to make of these survey results? Americans are fickle and lazy. We say we want online privacy, but few are willing to pay for it. While nothing in life is free, few consumers seem to realize that this advice applies to online privacy, too. Plus, consumers seem to highly value convenience regardless of the consequences.

What do you think?


UK Parliamentary Committee Issued Its Final Report on Disinformation And Fake News. Facebook And Six4Three Discussed

On February 18th, a United Kingdom (UK) parliamentary committee published its final report on disinformation and "fake news." The 109-page report by the Digital, Culture, Media, And Sport Committee (DCMS) updates its interim report from July 2018.

The report covers many issues: political advertising (unattributed "dark adverts" from unidentifiable entities), Brexit and UK elections, data breaches, privacy, and recommendations for UK regulators and government officials. It seems wise to understand the report's findings regarding the business practices of the U.S.-based companies mentioned, since those practices affect consumers globally, including consumers in the United States.

Issues Identified

First, the DCMS' final report built upon issues identified in its:

"... Interim Report: the definition, role and legal liabilities of social media platforms; data misuse and targeting, based around the Facebook, Cambridge Analytica and Aggregate IQ (AIQ) allegations, including evidence from the documents we obtained from Six4Three about Facebook’s knowledge of and participation in data-sharing; political campaigning; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy..."

The final report includes input from 23 "oral evidence sessions," more than 170 written submissions, interviews of at least 73 witnesses, and more than 4,350 questions asked at hearings. The DCMS Committee sought input from individuals, organizations, industry experts, and other governments. Some of the information sources:

"The Canadian Standing Committee on Access to Information, Privacy and Ethics published its report, “Democracy under threat: risks and solutions in the era of disinformation and data monopoly” in December 2018. The report highlights the Canadian Committee’s study of the breach of personal data involving Cambridge Analytica and Facebook, and broader issues concerning the use of personal data by social media companies and the way in which such companies are responsible for the spreading of misinformation and disinformation... The U.S. Senate Select Committee on Intelligence has an ongoing investigation into the extent of Russian interference in the 2016 U.S. elections. As a result of data sets provided by Facebook, Twitter and Google to the Intelligence Committee -- under its Technical Advisory Group -- two third-party reports were published in December 2018. New Knowledge, an information integrity company, published “The Tactics and Tropes of the Internet Research Agency,” which highlights the Internet Research Agency’s tactics and messages in manipulating and influencing Americans... The Computational Propaganda Research Project and Graphika published the second report, which looks at activities of known Internet Research Agency accounts, using Facebook, Instagram, Twitter and YouTube between 2013 and 2018, to impact US users"

Why Disinformation

Second, definitions matter. According to the DCMS Committee:

"We have even changed the title of our inquiry from “fake news” to “disinformation and ‘fake news’”, as the term ‘fake news’ has developed its own, loaded meaning. As we said in our Interim Report, ‘fake news’ has been used to describe content that a reader might dislike or disagree with... We were pleased that the UK Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation'..."

Overall Recommendations

Summary recommendations from the report:

  1. "Compulsory Code of Ethics for tech companies overseen by independent regulator,
  2. Regulator given powers to launch legal action against companies breaching code,
  3. Government to reform current electoral communications laws and rules on overseas involvement in UK elections, and
  4. Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation"

Role And Liability Of Tech Companies

Regarding detailed observations and findings about the role and liability of tech companies, the report stated:

"Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company..."

The UK Government and its regulators may adopt some, all, or none of the report's recommendations. More observations and findings in the report:

"... both social media companies and search engines use algorithms, or sequences of instructions, to personalize news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritized. Therefore, negative stories will always be prioritized by algorithms, as they are shared more frequently than positive stories... Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers... Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms.” Facebook should be taking a more active and urgent role in tackling such inherent biases..."

Based upon this, the report recommended that the UK's new Centre for Data Ethics and Innovation (CDEI) play a key role as an advisor to the UK Government by continually analyzing and anticipating gaps in governance and regulation, suggesting best practices and corporate codes of conduct, and setting standards for artificial intelligence (AI) and related technologies.
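
To illustrate the dynamic the Committee describes -- engagement-driven ranking that rewards negative, highly shared content -- here is a minimal sketch. It is not any platform's actual algorithm; the signals and weights are illustrative assumptions:

    # A minimal sketch of engagement-driven ranking. Content that drives
    # more engagement (and therefore ad revenue) rises, regardless of
    # accuracy or tone.
    posts = [
        {"title": "Calm policy explainer", "shares": 120, "comments": 40, "past_topic_clicks": 3},
        {"title": "Outrage-bait rumor", "shares": 900, "comments": 410, "past_topic_clicks": 5},
    ]

    def score(post):
        # Engagement dominates; personalization (the user's past clicks
        # on similar topics) nudges the score further.
        return 2.0 * post["shares"] + 1.5 * post["comments"] + 10.0 * post["past_topic_clicks"]

    feed = sorted(posts, key=score, reverse=True)
    print([p["title"] for p in feed])  # the rumor outranks the explainer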

Inferred Data

The report also discussed a critical issue related to algorithms (emphasis added):

"... When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will. However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users... In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through the data profile matching and the ‘lookalike audience’ advertising targeting tool... Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power of where that data goes and what it is used for..."

The distinction between uploaded and inferred data cannot be overemphasized. It is critical when evaluating tech companies' statements, policies (e.g., privacy, terms of use), and promises about what "data" users have control over. Wise consumers must insist upon clear definitions to avoid being misled or duped.

What might be an example of inferred data? What comes to mind is Facebook's Ad Preferences feature, which allows users to review and delete the "Interests" -- advertising categories -- Facebook assigns to each user's profile. (The service's algorithms assign Interests based upon the groups/pages/events/advertisements users "Liked" or clicked on, posts submitted, posts commented upon, and more.) These "Interests" are inferred data, since Facebook assigned them and users didn't.

In fact, Facebook doesn't notify its users when it assigns new Interests. It just does it. And Facebook can assign an Interest whether a user interacted with an item once or many times. How relevant is an Interest assigned after a single interaction, "Like," or click? Most people would say: not relevant. So, does the Interests list assigned to users' profiles accurately describe them? Do Facebook users own the Interests lists assigned to their profiles? Any control Facebook users have seems minimal. Why? Users can delete Interests assigned to their profiles, but they cannot stop Facebook from applying new Interests, nor prevent Facebook from re-applying Interests previously deleted. And deleting Interests doesn't reduce the number of ads users see on Facebook.

The only way to know what Interests have been assigned is for Facebook users to visit the Ad Preferences section of their profiles and browse the list. Depending upon how frequently a person uses Facebook, it may be necessary to prune the Interests list at least once monthly -- a cumbersome and time-consuming task, probably designed that way to discourage reviews and pruning. And that's just one example of inferred data. There are probably plenty more, and as the report emphasizes, users don't have access to all of the inferred data associated with their profiles.
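
To make the Interests example concrete, here is a hypothetical sketch of how a single click could become a persistent inferred Interest. Facebook's actual assignment logic is not public; the threshold and function names below are invented:

    # Hypothetical sketch: how one click could become a persistent
    # inferred Interest. Facebook's real logic is not public.
    from collections import defaultdict

    interaction_counts = defaultdict(int)

    def record_interaction(profile_interests, ad_category):
        interaction_counts[ad_category] += 1
        # Even a single "Like" or click may be enough to assign it.
        if interaction_counts[ad_category] >= 1:
            profile_interests.add(ad_category)

    interests = set()
    record_interaction(interests, "Cruises")  # clicked one cruise ad, once
    print(interests)  # {'Cruises'} -- inferred, never confirmed by the user

    interests.discard("Cruises")              # the user deletes the Interest...
    record_interaction(interests, "Cruises")  # ...and a later click re-applies it
    print(interests)  # {'Cruises'} again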

Now, back to the report. To fix problems with inferred data, the DCMS recommended:

"We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information."

Business Practices At Facebook

Next, the DCMS Committee's report said plenty about Facebook, its management style, and executives (emphasis added):

"Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinized... Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated... He discussed the California Consumer Privacy Act, which Facebook supported in public, but lobbied against, behind the scenes... By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world. The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which -- unsurprisingly -- failed to address all of our questions. We are left in no doubt that this strategy was deliberate."

So, based upon Facebook's actions (or lack thereof), the DCMS concluded that Facebook executives intentionally ducked and dodged issues and questions.

While discussing data use and targeting, the report said more about data breaches and Facebook:

"The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests..."

So, internal management failed. That's not all. After a detailed review of the GSR/Cambridge Analytica breach and Facebook's 2011 Consent Decree with the U.S. Federal Trade Commission (FTC), the DCMS Committee concluded (emphasis and text link added):

"The Cambridge Analytica scandal was facilitated by Facebook’s policies. If it had fully complied with the FTC settlement, it would not have happened. The FTC Complaint of 2011 ruled against Facebook -- for not protecting users’ data and for letting app developers gain as much access to user data as they liked, without restraint -- and stated that Facebook built their company in a way that made data abuses easy. When asked about Facebook’s failure to act on the FTC’s complaint, Elizabeth Denham, the Information Commissioner, told us: “I am very disappointed that Facebook, being such an innovative company, could not have put more focus, attention and resources into protecting people’s data”. We are equally disappointed."

Wow! Not good. There's more:

"... a current court case at the San Mateo Superior Court in California also concerns Facebook’s data practices. It is alleged that Facebook violated the privacy of US citizens by actively exploiting its privacy policy... The published ‘corrected memorandum of points and authorities to defendants’ special motions to strike’, by the complainant in the case, the U.S.-based app developer Six4Three, describes the allegations against Facebook; that Facebook used its users’ data to persuade app developers to create platforms on its system, by promising access to users’ data, including access to data of users’ friends. The case also alleges that those developers that became successful were targeted and ordered to pay money to Facebook... Six4Three lodged its original case in 2015, after Facebook removed developers’ access to friends’ data, including its own. The DCMS Committee took the unusual, but lawful, step of obtaining these documents, which spanned between 2012 and 2014... Since we published these sealed documents, on 14 January 2019 another court agreed to unseal 135 pages of internal Facebook memos, strategies and employee emails from between 2012 and 2014, connected with Facebook’s inappropriate profiting from business transactions with children. A New York Times investigation published in December 2018 based on internal Facebook documents also revealed that the company had offered preferential access to users data to other major technology companies, including Microsoft, Amazon and Spotify."

"We believed that our publishing the documents was in the public interest and would also be of interest to regulatory bodies... The documents highlight Facebook’s aggressive action against certain apps, including denying them access to data that they were originally promised. They highlight the link between friends’ data and the financial value of the developers’ relationship with Facebook. The main issues concern: ‘white lists’; the value of friends’ data; reciprocity; the sharing of data of users owning Android phones..."

You can read the report's detailed descriptions of those issues. A summary: a) Facebook allegedly used promises of access to users' data to lure developers (often by overriding Facebook users' privacy settings); b) some developers got priority treatment based upon unclear criteria; c) developers who didn't spend enough money with Facebook were denied access to data previously promised; d) Facebook's reciprocity clause demanded that developers also share their users' data with Facebook; e) Facebook's mobile app for Android OS phone users collected far more data about users, allegedly without consent, than users were told; and f) Facebook allegedly targeted certain app developers (emphasis added):

"We received evidence that showed that Facebook not only targeted developers to increase revenue, but also sought to switch off apps where it considered them to be in competition or operating in a lucrative areas of its platform and vulnerable to takeover. Since 1970, the US has possessed high-profile federal legislation, the Racketeer Influenced and Corrupt Organizations Act (RICO); and many individual states have since adopted similar laws. Originally aimed at tackling organized crime syndicates, it has also been used in business cases and has provisions for civil action for damages in RICO-covered offenses... Despite specific requests, Facebook has not provided us with one example of a business excluded from its platform because of serious data breaches. We believe that is because it only ever takes action when breaches become public. We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that “we’ve never sold anyone’s data” is simply untrue.” The evidence that we obtained from the Six4Three court documents indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers—such as Six4Three—of that data, thereby causing them to lose their business. It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission settlement."

"The Information Commissioner told the Committee that Facebook needs to significantly change its business model and its practices to maintain trust. From the documents we received from Six4Three, it is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws. The ICO should carry out a detailed investigation into the practices of the Facebook Platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ of the sharing of data."

The Information Commissioner's Office (ICO) is one of the regulatory agencies within the UK. So, the Committee concluded that Facebook's real business model is "data transfer for value" -- in other words: have money, get access to data (regardless of Facebook users' privacy settings).

One quickly gets the impression that Facebook acted like a monopoly in its treatment of both users and developers... or worse, like organized crime. The report concluded (emphasis added):

"The Competitions and Market Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media. The Committee made this recommendation its interim report, and we are pleased that it has also been supported in the independent Cairncross Report commissioned by the government and published in February 2019. Given the contents of the Six4Three documents that we have published, it should also investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail... Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law."

The DCMS Committee's report also discussed findings from the Cairncross Report. In summary, Damian Collins MP, Chair of the DCMS Committee, said:

“... we cannot delay any longer. Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalized ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use everyday. Much of this is directed from agencies working in foreign countries, including Russia... Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers... We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct..."

So, the report seems extensive, comprehensive, and detailed. Read the DCMS Committee's announcement, and/or download the full DCMS Committee report (Adobe PDF format, 3,507 kilobytes).

One can assume that governments' intelligence and spy agencies will continue to do what they've always done: collect data about targets and adversaries, and use disinformation and other tools to attempt to meddle in other governments' activities. It is clear that social media makes these tasks far easier than before. The DCMS Committee's report provided recommendations about what the UK Government's response should be. Other countries' governments face similar decisions about their responses, if any, to the threats.

Given the data in the DCMS report, it will be interesting to see how the FTC and lawmakers in the United States respond. If increased regulation of social media results, tech companies arguably have only themselves to blame. What do you think?


Survey: Users Don't Understand Facebook's Advertising System. Some Disagree With Its Classifications

Most people know that many companies collect data about their online activities. Based upon the data collected, companies classify users for a variety of reasons and purposes. Do users agree with these classifications? Do the classifications accurately describe users' habits, interests, and activities?

Facebook logo To answer these questions, the Pew Research Center surveyed users of Facebook. Why Facebook? Besides being the most popular social media platform in the United States, it collects:

"... a wide variety of data about their users’ behaviors. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to allow advertisers to target ads... But how well do Americans understand these algorithm-driven classification systems, and how much do they think their lives line up with what gets reported about them?"

The findings are significant. First:

"Facebook makes it relatively easy for users to find out how the site’s algorithm has categorized their interests via a “Your ad preferences” page. Overall, however, 74% of Facebook users say they did not know that this list of their traits and interests existed until they were directed to their page as part of this study."

So, almost three-quarters of Facebook users surveyed don't know what data Facebook has collected about them, nor how to view it (nor how to edit it, or how to opt out of the ad-targeting classifications). According to Wired magazine, Facebook's "Your Ad Preferences" page:

"... can be hard to understand if you haven’t looked at the page before. At the top, Facebook displays “Your interests.” These groupings are assigned based on your behavior on the platform and can be used by marketers to target you with ads. They can include fairly straightforward subjects, like “Netflix,” “Graduate school,” and “Entrepreneurship,” but also more bizarre ones, like “Everything” and “Authority.” Facebook has generated an enormous number of these categories for its users. ProPublica alone has collected over 50,000, including those only marketers can see..."

Now, back to the Pew survey. After survey participants viewed their Ad Preferences page:

"A majority of users (59%) say these categories reflect their real-life interests, while 27% say they are not very or not at all accurate in describing them. And once shown how the platform classifies their interests, roughly half of Facebook users (51%) say they are not comfortable that the company created such a list."

So, about half of persons surveyed use a site whose data collection they are uncomfortable with. Not good. Second, substantial groups said the classifications by Facebook were not accurate:

"... about half of Facebook users (51%) are assigned a political “affinity” by the site. Among those who are assigned a political category by the site, 73% say the platform’s categorization of their politics is very or somewhat accurate, while 27% say it describes them not very or not at all accurately. Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately..."

So, significant numbers of users disagree with the political classifications Facebook assigned to their profiles. Third, it's not only politics:

"... Facebook also lists a category called “multicultural affinity”... this listing is meant to designate a user’s “affinity” with various racial and ethnic groups, rather than assign them to groups reflecting their actual race or ethnic background. Only about a fifth of Facebook users (21%) say they are listed as having a “multicultural affinity.” Overall, 60% of users who are assigned a multicultural affinity category say they do in fact have a very or somewhat strong affinity for the group to which they are assigned, while 37% say their affinity for that group is not particularly strong. Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them."

The survey included a nationally representative sample of 963 Facebook users ages 18 and older from the United States. The survey was conducted September 4 to October 1, 2018. Read the entire survey at the Pew Research Center site.

What can consumers conclude from this survey? Social media users should understand that all social sites, and especially mobile apps, collect data about you, and then make judgements... classifications... about you. (Remember, some Samsung phone owners were unable to delete the pre-installed Facebook mobile app. And, everyone wants your geolocation data.) Use any tools the sites provide to edit or adjust your ad preferences to match your interests. Adjust the privacy settings on your profile to limit data sharing as much as possible.

Last, an important reminder: while Facebook users can edit their ad preferences and can opt out of the ad-targeting classifications, they cannot completely avoid ads. Facebook will still display less-targeted ads. That is simply Facebook being Facebook: making money. The same probably applies to other social sites, too.

What are your opinions of the survey's findings?


Google Fined 50 Million Euros For Violations Of New European Privacy Law

Google logo Google has been fined 50 million Euros (about U.S. $57 million) under the new European privacy law for failing to properly disclose to users how their data is collected and used for targeted advertising. The European Union's General Data Protection Regulation (GDPR), which went into effect in May 2018, gives EU residents more control over their information and how companies use it.

After receiving two complaints last year from privacy-rights groups, France's National Data Protection Commission (CNIL) announced earlier this month:

"... CNIL carried out online inspections in September 2018. The aim was to verify the compliance of the processing operations implemented by GOOGLE with the French Data Protection Act and the GDPR by analysing the browsing pattern of a user and the documents he or she can have access, when creating a GOOGLE account during the configuration of a mobile equipment using Android. On the basis of the inspections carried out, the CNIL’s restricted committee responsible for examining breaches of the Data Protection Act observed two types of breaches of the GDPR."

The first violation involved transparency failures:

"... information provided by GOOGLE is not easily accessible for users. Indeed, the general structure of the information chosen by the company does not enable to comply with the Regulation. Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information. The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions... some information is not always clear nor comprehensive. Users are not able to fully understand the extent of the processing operations carried out by GOOGLE. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner..."

So, important information is buried and scattered across several documents, making it difficult for users to access and to understand. The second violation involved the legal basis for personalized ads processing:

"... GOOGLE states that it obtains the user’s consent to process data for ads personalization purposes. However, the restricted committee considers that the consent is not validly obtained for two reasons. First, the restricted committee observes that the users’ consent is not sufficiently informed. The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent. For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Playstore, Google pictures, etc.) and therefore of the amount of data processed and combined."

"[Second], the restricted committee observes that the collected consent is neither “specific” nor “unambiguous.” When an account is created, the user can admittedly modify some options associated to the account by clicking on the button « More options », accessible above the button « Create Account ». It is notably possible to configure the display of personalized ads. That does not mean that the GDPR is respected. Indeed, the user not only has to click on the button “More options” to access the configuration, but the display of the ads personalization is moreover pre-ticked. However, as provided by the GDPR, consent is “unambiguous” only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance). Finally, before creating an account, the user is asked to tick the boxes « I agree to Google’s Terms of Service» and « I agree to the processing of my information as described above and further explained in the Privacy Policy» in order to create the account. Therefore, the user gives his or her consent in full, for all the processing operations purposes carried out by GOOGLE based on this consent (ads personalization, speech recognition, etc.). However, the GDPR provides that the consent is “specific” only if it is given distinctly for each purpose."

So, not only is important information buried and scattered across multiple documents (again), but critical consent boxes are also pre-checked when they shouldn't be.
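
For anyone building a signup flow, CNIL's two objections translate into concrete requirements: no pre-ticked boxes, and one consent decision per purpose. A minimal sketch, with illustrative field names rather than any real consent-management API:

    # A minimal sketch of GDPR-style consent capture reflecting CNIL's
    # objections: opt-in only (no pre-ticked boxes) and per-purpose consent.
    PURPOSES = ["ads_personalization", "speech_recognition", "analytics"]

    def new_consent_form():
        # "Unambiguous": every box starts unticked; silence is not consent.
        return {purpose: False for purpose in PURPOSES}

    def record_consent(form, purpose, ticked):
        # "Specific": each purpose is accepted or refused on its own,
        # never bundled into one "I agree to everything" checkbox.
        form[purpose] = bool(ticked)

    form = new_consent_form()
    record_consent(form, "ads_personalization", True)
    print(form)  # only the purpose the user affirmatively ticked is True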

CNIL explained its reasons for the massive fine:

"The amount decided, and the publicity of the fine, are justified by the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent. Despite the measures implemented by GOOGLE (documentation and configuration tools), the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations... Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement..."

This is the largest fine, so far, under the GDPR. Reportedly, Google will appeal the fine:

"We've worked hard to create a GDPR consent process for personalised ads that is as transparent and straightforward as possible, based on regulatory guidance and user experience testing... We're also concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond... For all these reasons, we've now decided to appeal."

This is not the first EU fine for Google. CNet reported:

"Google is no stranger to fines under EU laws. It's currently awaiting the outcome of yet another antitrust investigation -- after already being slapped with a $5 billion fine last year for anticompetitive Android practices and a $2.7 billion fine in 2017 over Google Shopping."


Companies Want Your Location Data. Recent Examples: The Weather Channel And Burger King

Weather Channel logo It is easy to find examples of companies using mobile apps to collect consumers' real-time GPS location data, so they can archive and resell that information later for additional profit. First, ExpressVPN reported:

"The city of Los Angeles is suing the Weather Company, a subsidiary of IBM, for secretly mining and selling user location data with the extremely popular Weather Channel App. Stating that the app unfairly manipulates users into enabling their location settings for more accurate weather reports, the lawsuit affirms that the app collects and then sells this data to third-party companies... Citing a recent investigation by The New York Times that revealed more than 75 companies silently collecting location data (if you haven’t seen it yet, it’s worth a read), the lawsuit is basing its case on California’s Unfair Competition Law... the California Consumer Privacy Act, which is set to go into effect in 2020, would make it harder for companies to blindly profit off customer data... This lawsuit hopes to fine the Weather Company up to $2,500 for each violation of the Unfair Competition Law. With more than 200 million downloads and a reported 45+ million users..."

Long-term readers remember that a data breach in 2007 at IBM prompted this blog. And it's not only internet service providers which collect consumers' location data. Advertisers, retailers, and data brokers want it, too.

Burger King logo Second, last month Burger King ran a national "Whopper Detour" promotion which offered customers a one-cent Whopper burger if they went near a competitor's store. News 5, the ABC News affiliate in Cleveland, reported:

"If you download the Burger King mobile app and drive to a McDonald’s store, you can get the penny burger until December 12, 2018, according to the fast-food chain. You must be within 600 feet of a McDonald's to claim your discount, and no, McDonald's will not serve you a Whopper — you'll have to order the sandwich in the Burger King app, then head to the nearest participating Burger King location to pick it up. More information about the deal can be found on the app on Apple and Android devices."

Next, the relevant portions from Burger King's privacy policy for its mobile apps (emphasis added):

"We collect information you give us when you use the Services. For example, when you visit one of our restaurants, visit one of our websites or use one of our Services, create an account with us, buy a stored-value card in-restaurant or online, participate in a survey or promotion, or take advantage of our in-restaurant Wi-Fi service, we may ask for information such as your name, e-mail address, year of birth, gender, street address, or mobile phone number so that we can provide Services to you. We may collect payment information, such as your credit card number, security code and expiration date... We also may collect information about the products you buy, including where and how frequently you buy them... we may collect information about your use of the Services. For example, we may collect: 1) Device information - such as your hardware model, IP address, other unique device identifiers, operating system version, and settings of the device you use to access the Services; 2) Usage information - such as information about the Services you use, the time and duration of your use of the Services and other information about your interaction with content offered through a Service, and any information stored in cookies and similar technologies that we have set on your device; and 3) Location information - such as your computer’s IP address, your mobile device’s GPS signal or information about nearby WiFi access points and cell towers that may be transmitted to us..."

So, for the low, low price of one hamburger, participants in this promotion gave RBI, the parent company which owns Burger King, perpetual access to their real-time location data. And, since RBI knows when, where, and for how long its customers visit competitors' fast-food stores, it also knows similar details about everywhere else they go -- including school, work, doctors, hospitals, and more. A sweet deal for RBI. A poor deal for consumers.

Expect to see more corporate promotions like this, which privacy advocates call "surveillance capitalism."

Consumers' real-time location data is very valuable. Don't give it away for free. If you decide to share it, demand a fair, ongoing payment in exchange. Read privacy and terms-of-use policies before downloading mobile apps, so you don't get abused or taken. Opinions? Thoughts?


Welcome To The New, Terrifying World Of Fake Porn. Plenty Of Consequences And Implications

First, I'd like to thank all of my readers -- existing and new. Some have shared insightful comments on blog posts. Second, this last post of 2018 features a topic we will probably hear plenty about during 2019: artificial intelligence (AI) technologies.

To learn more about AI and related issues, watch or read the AI episodes within the CXO Talk site. And, MediaPost discussed the deployment of AI by retail stores:

"... retailers seem much more bullish on artificial intelligence, with 7% already using some form of AI in digital assistants or chatbots, and most (64%) planning to have implemented AI within the next three years, 21% of those within the next 12 months. The top reason for using AI in retail is personalization (42%), followed by pricing and promotions (31%), landing page optimization (15%) and fraud detection (21%)."

Like any other online (or offline) technology, AI can be used for good and for bad. The good guys and bad actors both have access to AI technologies. MotherBoard reported:

"There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation... The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together."

You may remember Gadot from the 2017 film, "Wonder Woman." Other actors have been victims, too. Where do bad actors get the tools to make AI-assisted fake porn? The fake porn featuring Gadot was:

"... allegedly the work of one person—a Redditor who goes by the name 'deepfakes'—not a big special effects studio... deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning. Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we're on the verge of living in a world where it's trivially easy to fabricate believable videos of people doing and saying things they never did... the software is based on multiple open-source libraries, like Keras with TensorFlow backend. To compile the celebrities’ faces, deepfakes said he used Google image search, stock photos, and YouTube videos..."

There is also an AI app for fake porn. Yikes! As bad as this seems, it is worse. According to The Washington Post:

"... an anonymous online community of creators has in recent months removed many of the hurdles for interested beginners, crafting how-to guides, offering tips and troubleshooting advice — and fulfilling fake-porn requests on their own. To simplify the task, deepfake creators often compile vast bundles of facial images, called “facesets,” and sex-scene videos of women they call “donor bodies.” Some creators use software to automatically extract a woman’s face from her videos and social-media posts. Others have experimented with voice-cloning software to generate potentially convincing audio..."

This is beyond bad. It is terrifying.

The implications: many. Videos, including speeches, can easily be faked. Fake porn can be used as a weapon to harass women and/or to discredit accusers of sexual abuse or battery. Today's fake porn could be tomorrow's fake videos and fake news to discredit others: politicians, business executives, government officials (e.g., judges, military officers, etc.), members of minority groups, or activists. This places a premium upon mainstream news outlets to provide reliable, trustworthy news. It also places a premium upon fact-checking sites.

The consequences: several. Social media users must first understand that they have made themselves vulnerable to these threats. Parents have made both themselves and their children vulnerable, too. How? The photographs and videos you've already uploaded to Facebook, Instagram, dating apps, and other social sites are source content for bad actors. So, parents must teach teenagers not only how to read terms-of-use and privacy policies, but also how to fact-check content to avoid being tricked by fake videos.

This means all online users must become skilled consumers of information and news: read several news sources, verify, and fact-check items. Otherwise, you are likely to be fooled... duped into joining or contributing to a bogus cause... tricked into voting for someone you otherwise wouldn't. This also means social media users must carefully consider their photographs before posting them online, and whether the social app or service truly provides effective privacy.

It also means that all social media users should NOT retweet or re-post every sensational item in their feeds and inboxes without fact-checking it first. Otherwise, you are part of the problem. Be part of the solution.

Video advertisements can easily be faked, too. So, it is in the interest of consumers, companies, and government agencies both to find solutions and to upgrade online privacy and digital laws -- which seem to constantly lag behind new technologies. There probably need to be stronger consequences for offenders.

The Brookings Institution advised:

"In order to maximize positive outcomes [from AI], organizations should hire ethicists who work with corporate decision-makers and software developers, have a code of AI ethics that lays out how various issues will be handled, organize an AI review board that regularly addresses corporate ethical questions, have AI audit trails that show how various coding decisions have been made, implement AI training programs so staff operationalizes ethical considerations in their daily work, and provide a means for remediation when AI solutions inflict harm or damages on people or organizations."

These recommendations seem to apply to social media sites, which are high-value targets for bad actors wanting to post fake porn or other fake videos. This raises the question: which social sites have AI ethics policies and/or have hired ethicists and related staff to enforce such policies?

To do nothing seems unwise. Sticking our collective heads in the sand regarding new threats seems unwise, too. What issues concern you about AI-assisted fake porn or fake videos? What solutions do you want?


A Series Of Recent Events And Privacy Snafus At Facebook Cause Multiple Concerns. Does Facebook Deserve Users' Data?

Facebook logo So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix accessed Facebook users' private messages or asked for such access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management of its partners and developers? That's an understatement. During March and April of 2018, we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly later resell) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened, so it doesn't happen again? This post-breach response seems sloppy. And the "bug" description seems disingenuous: anytime persons -- in this case, developers -- have access to data they shouldn't have, it is a data breach.
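
To make the bug class concrete, here is a hypothetical sketch; the function and field names are invented, not Facebook's actual photo API. A scope filter that goes missing exposes exactly the kind of Marketplace, Stories, and unposted photos described above:

    # Hypothetical sketch of the bug class; names are invented.
    GRANTED_SCOPES = {"timeline_photos"}  # what the user consented to

    def photos_for_app(user_photos, granted_scopes):
        # Correct behavior: return only photos within granted scopes.
        return [p for p in user_photos if p["surface"] in granted_scopes]

    def buggy_photos_for_app(user_photos, granted_scopes):
        # Buggy behavior: the scope filter is missing, so Marketplace,
        # Stories, and unposted (draft) photos leak to the app.
        return list(user_photos)

    photos = [
        {"id": 1, "surface": "timeline_photos"},
        {"id": 2, "surface": "stories"},
        {"id": 3, "surface": "unposted_draft"},
    ]
    print(photos_for_app(photos, GRANTED_SCOPES))        # photo 1 only
    print(buggy_photos_for_app(photos, GRANTED_SCOPES))  # all three: a breach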

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage or control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. Reports surfaced after many users experienced repeated bogus Friend Requests due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook had approached several major banks, asking them to share detailed financial information about their customers in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. 9to5Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content, nor stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users (emphasis added):

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to launch several types of attacks (e.g., spam, phishing) against Facebook users, and it allows foreign spy agencies to collect useful information to target persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its own platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggests not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question (emphasis added):

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and WhatsApp accounts, since Facebook operates those services, too. If you want to quit the service but suffer from FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, or use its voting and ballot-preview content. Long ago, I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post on Facebook links to posts in this blog, since it is helpful information for many Facebook users. In what ways have you reduced your usage of Facebook?


Oath To Pay Almost $5 Million To Settle Charges By New York AG Regarding Children's Privacy Violations

Oath Inc. logo Barbara D. Underwood, the Attorney General (AG) for New York State, announced last week a settlement with Oath, Inc. for violating the Children’s Online Privacy Protection Act (COPPA). Oath Inc. is a wholly-owned subsidiary of Verizon Communications. Until June 2017, Oath was known as AOL Inc. ("AOL"). The announcement stated:

"The Attorney General’s Office found that AOL conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13. Through these auctions, AOL collected, used, and disclosed personal information from the websites’ users in violation of COPPA, enabling advertisers to track and serve targeted ads to young children. The company has agreed to adopt comprehensive reforms to protect children from improper tracking and pay a record $4.95 million in penalties..."

The United States Congress enacted COPPA in 1998 to protect the safety and privacy of young children online. As many parents know, young children don't understand complicated legal documents such as terms-of-use and privacy policies. COPPA prohibits operators of certain websites from collecting, using, or disclosing personal information (e.g., first and last name, e-mail address) of children under the age of 13 without first obtaining parental consent.

The definition of "personal information" was revised in 2013 to include persistent identifiers that can be used to recognize a user over time and across websites, such as the ID found in a web browser cookie or an Internet Protocol (“IP”) address. The revision effectively prohibits covered operators from using cookies, IP addresses, and other persistent identifiers to track users across websites for most advertising purposes on COPPA-covered websites.
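To see why persistent identifiers matter, here is a minimal sketch -- with invented site names and a simplified cookie jar -- of how an ad network's cookie lets it recognize the same browser across different websites:

    import uuid

    class AdNetwork:
        """A hypothetical ad network that tracks browsers via a cookie ID."""
        def __init__(self):
            self.profiles = {}  # persistent ID -> sites where it was seen

        def serve_ad(self, browser_cookies, site):
            # Reuse the ID if this browser already has one; otherwise mint
            # a new one. Reuse is what makes the identifier "persistent."
            uid = browser_cookies.setdefault("ad_uid", str(uuid.uuid4()))
            self.profiles.setdefault(uid, []).append(site)

    network = AdNetwork()
    browser = {}  # simulates one browser's cookie jar

    network.serve_ad(browser, "kids-games.example")  # a child-directed site
    network.serve_ad(browser, "cartoons.example")    # a different site, same browser

    print(network.profiles)  # one ID now links both visits

The same mechanism works with an IP address or a device ID in place of the cookie, which is why the revised rule names persistent identifiers generally rather than cookies specifically.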

The announcement by AG Underwood explained the alleged violations in detail. Despite policies to the contrary:

"... AOL nevertheless used its display ad exchange to conduct billions of auctions for ad space on websites that it knew to be directed to children under the age of 13 and subject to COPPA. AOL obtained this knowledge in two ways. First, several AOL clients provided notice to AOL that their websites were subject to COPPA. These clients identified more than a dozen COPPA-covered websites to AOL. AOL conducted at least 1.3 billion auctions of display ad space from these websites. Second, AOL itself determined that certain websites were directed to children under the age of 13 when it conducted a review of the content and privacy policies of client websites. Through these reviews, AOL identified hundreds of additional websites that were subject to COPPA. AOL conducted at least 750 million auctions of display ad space from these websites."

AG Underwood said in a statement:

"COPPA is meant to protect young children from being tracked and targeted by advertisers online. AOL flagrantly violated the law – and children’s privacy – and will now pay the largest-ever penalty under COPPA. My office remains committed to protecting children online and will continue to hold accountable those who violate the law."

At press time, a check of both the press and "company values" sections of Oath's site found no mention of the settlement. TechCrunch reported on December 4th:

"We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which it writes: "We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online." The spokesman also did not confirm nor dispute the contents of the New York Times report."

Hmmm. Almost a week has passed since AG Underwood's December 4th announcement. You'd think that Oath management would have released a statement by now. Maybe Oath isn't as committed to children's online privacy as it claims. Something for parents to note.

The National Law Review provided some context:

"...in 2016, the New York AG concluded a two-year investigation into the tracking practices of four online publishers for alleged COPPA violations... As recently as September of this year, the New Mexico AG filed a lawsuit for alleged COPPA violations against a children's game app company, Tiny Lab Productions, and the online ad companies that work within Tiny Lab's, including those run by Google and Twitter... The Federal Trade Commission (FTC) continues to vigorously enforce COPPA, closing out investigations of alleged COPPA violations against smart toy manufacturer VTech and online talent search company Explore Talent... there have been a total of 28 enforcement proceedings since the COPPA rule was issued in 2000."

You can read about many of these actions in this blog, and how COPPA was strengthened in 2013.

So, the COPPA law works well and it is being vigorously enforced. Kudos to AG Underwood, her staff, and other states' AGs for taking these actions. What are your opinions about the AOL/Oath settlement?


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

LinkedIn logo On Friday November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of a DPC investigation of the LinkedIn.com social networking site, prompted by a 2017 complaint from a person who didn't use the service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without instruction from its data controller, LinkedIn Ireland, as the law requires. Not good.
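A note on "hashed" email addresses: ad platforms typically match audiences using a one-way hash of a normalized address rather than the raw address. The sketch below -- my illustration; exact normalization rules vary by platform -- shows why hashing is weak protection when both sides hold the same underlying addresses:

    import hashlib

    def hash_email(email):
        # Typical normalization before hashing: trim whitespace, lowercase.
        normalized = email.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    # Uploader's list (hypothetical address)...
    uploaded = {hash_email("jane.doe@example.com")}

    # ...matches the platform's own records wherever the same address
    # appears, because identical inputs produce identical hashes.
    platform_users = {hash_email(" Jane.Doe@example.com"): "user_42"}

    matches = [uid for h, uid in platform_users.items() if h in uploaded]
    print(matches)  # ['user_42'] -- the "anonymized" hash still identifies her

So "hashed" does not mean anonymous: the platform can resolve every hash that corresponds to an address it already knows.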

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. It also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests that the social networking service's activity violated data protection laws, which the European Union (EU) has since strengthened with the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The new GDPR applies both to companies operating within the EU and to companies located outside the EU which offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistently with any promises they make,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC probably didn't fine LinkedIn because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied to violations occurring on or after that date, and
  • People in different areas of the world view privacy and data protection differently -- as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- recognize regional differences, find ways to sell products and services across regions without degrading the customer experience, and don't try to force their country's approach on regions which don't want it.

What takeaways do you see?


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

Facebook logo November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators find and prioritize terrorism threats. Per the announcement, posts are removed automatically only when the tool signals very high confidence, and an appeal process exists.
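Based on Facebook's own description, the workflow appears to be: score every post, queue the highest scores for human review first, and auto-remove only above a very high confidence threshold. A minimal sketch of that triage logic, with invented scores and threshold values:

    AUTO_REMOVE_THRESHOLD = 0.99  # "very high confidence" -- assumed value
    REVIEW_THRESHOLD = 0.50

    def triage(scored_posts):
        removed, review_queue = [], []
        for post_id, score in scored_posts:
            if score >= AUTO_REMOVE_THRESHOLD:
                removed.append(post_id)  # automatic removal (appealable)
            elif score >= REVIEW_THRESHOLD:
                review_queue.append((score, post_id))
        # Human reviewers see the riskiest posts first.
        review_queue.sort(reverse=True)
        return removed, [post_id for _, post_id in review_queue]

    # Hypothetical classifier output: (post_id, violation probability).
    scored = [("p1", 0.995), ("p2", 0.70), ("p3", 0.10), ("p4", 0.85)]
    removed, review_order = triage(scored)
    print(removed)       # ['p1']
    print(review_order)  # ['p4', 'p2']

Then, Facebook announced in a November 13th update: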

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leans heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts removed, created by a variety of bad actors. Hmmm... sounds good, but it makes one wonder about the digital arms race underway. If the bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, then not good.
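One caution about mixing those numbers: removals count a whole quarter, while the 3% to 4% prevalence figure is a share of monthly active users at a point in time. A quick back-of-the-envelope check, assuming roughly 2.2 billion monthly active users in late 2018:

    removed = 800_000_000 + 754_000_000
    print(removed)  # 1,554,000,000 -- the ~1.5 billion figure

    mau = 2_200_000_000  # assumed monthly active users, late 2018
    print(0.03 * mau, 0.04 * mau)  # 66-88 million fakes active at any moment

In other words, even after removing 1.5 billion accounts, tens of millions of fakes remain live at any given time.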

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey due to:

"... a $10,000 to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus VR in 2014. Not a good way to treat an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries, which value citizens' privacy, will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published "Delay, Deny and Deflect," a detailed, comprehensive investigative report based upon interviews with at least 50 people:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth and lack of focus to identify and respond to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and Democratic politicians. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described the history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that it has invested more = doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported about 13 historical national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company wanting its social networking service to be a reliable, unbiased news source. This news is not good for a company wanting its users to accept that it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.


Some Surprising Facts About Facebook And Its Users

Facebook logo The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (as of March/April 2018) from 71% during 2014 to 2015. More teens use other social media services: YouTube (85%), Instagram (72%) and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps, the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays the items it predicts users will engage with: click the "Like" or another emotion button. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.
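To make that concrete, here is a toy sketch of engagement-ranked feed selection; the posts, probabilities, and feed size are invented, and Facebook's actual model is proprietary:

    # Hypothetical candidate posts from a user's friends and groups.
    posts = [
        {"id": "news_you_disagree_with", "predicted_like_prob": 0.05},
        {"id": "friend_vacation_photos", "predicted_like_prob": 0.60},
        {"id": "news_you_agree_with",    "predicted_like_prob": 0.55},
    ]

    FEED_SIZE = 2  # the feed shows a subset, never everything

    # Rank by predicted engagement and truncate: low-engagement items --
    # often the views you disagree with -- simply never appear.
    feed = sorted(posts, key=lambda p: p["predicted_like_prob"],
                  reverse=True)[:FEED_SIZE]
    print([p["id"] for p in feed])  # the disagreeable item is filtered out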

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it, PBS has broadcast a two-part Frontline documentary titled "The Facebook Dilemma" (see trailer below), which arguably could have been titled "the dark side of sharing." The documentary rightly discusses Facebook's approaches to news and privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for greater transparency and control claims by Mr. Zuckerberg. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. In short, Facebook acts more like a massive advertising network than simply a social media service.
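Mechanically, a shadow profile can be assembled by merging every uploaded address book that mentions the same person. A hypothetical sketch -- names and fields invented -- of how one contacts upload enriches a profile the subject never provided:

    from collections import defaultdict

    # Profiles keyed by email; values are merged attributes.
    profiles = defaultdict(dict)
    profiles["mom@example.com"]["name"] = "Mom"  # data she gave the site herself

    def ingest_address_book(contacts):
        # Merge one user's uploaded phone contacts into existing profiles.
        for contact in contacts:
            profile = profiles[contact["email"]]
            for field, value in contact.items():
                profile.setdefault(field, value)  # add fields she never provided

    # A user installs the mobile app and uploads their contacts...
    ingest_address_book([
        {"email": "mom@example.com",
         "second_email": "mom.backup@example.net", "phone": "+1-555-0100"},
        {"email": "total.nonuser@example.org", "name": "Never Joined"},
    ])

    print(profiles["mom@example.com"])            # now has a second email and phone
    print(profiles["total.nonuser@example.org"])  # a profile for a nonuser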

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


FTC: How You Should Handle Robocalls. 4 Companies Settle Regarding Privacy Shield Claims

First, it seems that the number of robocalls has increased during the past two years. Some automated calls are in English. Some are in other languages. All try to trick consumers into sending money or disclosing sensitive financial and payment information. The advice from the U.S. Federal Trade Commission (FTC), in brief: hang up immediately without pressing any buttons, and report the call to the FTC at donotcall.gov.

Second, the FTC announced a settlement agreement with four companies:

"In separate complaints, the FTC alleges that IDmission, LLC, mResource LLC (doing business as Loop Works, LLC), SmartStart Employment Screening, Inc., and VenPath, Inc. falsely claimed to be certified under the EU-U.S. Privacy Shield, which establishes a process to allow companies to transfer consumer data from European Union countries to the United States in compliance with EU law... The Department of Commerce administers the Privacy Shield framework, while the FTC enforces the promises companies make when joining the framework."

According to the lawsuits, IDmission, a cloud-based services firm, applied in 2017 for Privacy Shield certification with the U.S. Department of Commerce but never completed the necessary steps to be certified under the program. The other three companies each obtained Privacy Shield certification in 2016 but allowed their certifications to lapse. VenPath is a data analytics firm. SmartStart offers employment and background screening services. mResource provides talent management and recruitment services.

Terms of the settlement agreements prohibit all four companies from misrepresenting their participation in any privacy or data security program sponsored by the government. Also:

"... VenPath and SmartStart must also continue to apply the Privacy Shield protections to personal information they collected while participating in the program, protect it by another means authorized by the Privacy Shield framework, or return or delete the information within 10 days of the order."