
New Vermont Law Regulating Data Brokers Drives 120 Businesses From The Shadows

In May 2018, Vermont became the first (and so far only) state in the nation to enact a law regulating data brokers. According to the Vermont Secretary of State, a data broker is defined as:

"... a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship."

The Vermont Secretary of State's website contains links to the new law and more. This new law is important for several reasons. First, many businesses operate as data brokers. Second, consumers historically haven't known who has information about them, nor how to review their profiles for accuracy. Third, consumers haven't been able to opt out of the data collection. Fourth, if you don't know who the data brokers are, then you can't hold them accountable when they fail at data security. According to Vermont law:

"2447. Data broker duty to protect information; standards; technical requirements (a) Duty to protect personally identifiable information. (1) A data broker shall develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate... identification and assessment of reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of any electronic, paper, or other records containing personally identifiable information, and a process for evaluating and improving, where necessary, the effectiveness of the current safeguards for limiting such risks... taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect personally identifiable information consistent with applicable law; and (B) requiring third-party service providers by contract to implement and maintain appropriate security measures for personally identifiable information..."

Before this law, there was little to no oversight, no regulation, and no responsibility for data brokers to adequately protect sensitive data about consumers. A federal bill proposed in 2014 went nowhere in the U.S. Senate. You can assume that many data brokers operate in your state, too, since there's plenty of money to be made in the industry.

Portions of the new Vermont law went into effect in May, and the remainder went into effect on January 1, 2019. What has happened since then? Fast Company reported:

"So far, 121 companies have registered, according to data from the Vermont secretary of state’s office... The list of active companies includes divisions of the consumer data giant Experian, online people search engines like Spokeo and Spy Dialer, and a variety of lesser-known organizations that do everything from help landlords research potential tenants to deliver marketing leads to the insurance industry..."

The Fast Company site lists the 120 (so far) registered data brokers in Vermont. Regular readers of this blog will recognize some of the data brokers by name, since prior posts covered Acxiom, Equifax, Experian, LexisNexis, the NCTUE, Oracle, Spokeo, TransUnion, and others. (Yes, both credit reporting agencies and social media firms also operate as data brokers. Some state governments do, too.) Reportedly, many privacy advocates support the new law:

"There’s companies that I’ve never heard of before," says Zachary Tomanelli, communications and technology director at the Vermont Public Interest Research Group, which supported the law. "It’s often very cumbersome [for consumers] to know where the places are that you have to go, and how you opt out."

Predictably, the industry has opposed (and continues to oppose) the legislation:

"A coalition of industry groups like the Internet Association, the Association of National Advertisers, and the National Association of Professional Background Screeners, as well as now registered data brokers such as Experian, Acxiom, and IHS Markit, said the law was unnecessary... Requiring companies to disclose breaches of largely public data could be burdensome for businesses and needlessly alarming for consumers, they argue... Other companies, like Axciom, have complained that the law establishes inconsistent boundaries around personal data used by third parties, and the first-party data used by companies like Facebook and Google."

So, no company wants consumers to own and control the data -- property -- that describes them. Real property laws matter. To learn more, read about data brokers at the Privacy Rights Clearinghouse site, and browse related posts in the Data Brokers section of this blog.

Kudos to Vermont lawmakers for ensuring more disclosures and transparency from the industry. Readers may ask their elected officials why their state has not taken similar action. What are your opinions of the new Vermont law?


Brave Alerts FTC On Threats From Business Practices With Big Data

The U.S. Federal Trade Commission (FTC) held a "Privacy, Big Data, And Competition" hearing on November 6-8, 2018 as part of its "Competition And Consumer Protection in the 21st Century" series of discussions. During that session, the FTC asked for input on several topics:

  1. "What is “big data”? Is there an important technical or policy distinction to be drawn between data and big data?
  2. How have developments involving data – data resources, analytic tools, technology, and business models – changed the understanding and use of personal or commercial information or sensitive data?
  3. Does the importance of data – or large, complex data sets comprising personal or commercial information – in a firm’s ordinary course operations change how the FTC should analyze mergers or firm conduct? If so, how? Does data differ in importance from other assets in assessing firm or industry conduct?
  4. What structural, behavioral or conduct remedies should the FTC consider when remedying antitrust harm in a market or industry where data or personal or commercial information are a significant product or a key competitive input?
  5. Are there policy recommendations that would facilitate competition in markets involving data or personal or commercial information that the FTC should consider?
  6. Does the presence of personal information or privacy concerns inform or change competition analysis?
  7. How do state, federal, and international privacy laws and regulations, adopted to protect data and consumers, affect competition, innovation, and product offerings in the United States and abroad?"

Brave, the developer of a web browser, submitted comments to the FTC which highlighted two concerns:

"First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate. Second, the GDPR is emerging as a de facto international standard. Whether this helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws."

A letter by Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, described in detail the company's concerns:

"The cross-use and offensive leveraging of personal information from one line of business to another is likely to have anti-competitive effects. Indeed anti-competitive practices may be inevitable when companies with Google’s degree of market dominance update their privacy policies to include the cross-use of personal information. The result is that a company can leverage all the personal information accumulated from its users in one line of business to dominate other lines of business too. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects... The result is that nascent and potential competitors will be stifled, and consumer choice will be limited... The cross-use of data between different lines of business is analogous to the tying of two products. Indeed, tying and cross-use of data can occur at the same time, as Google Chrome’s latest “auto sign in to everything” controversy illustrates..."

Historically, Google let Chrome web browser users decide whether or not to sign in for cross-device usage. The Chrome 69 update forced auto sign-in, but a Chrome 70 update restored users' choice after numerous complaints and criticism.

Regarding topic #7 by the FTC, Brave's response said:

"A de facto international standard appears to be emerging, based on the European Union’s General Data Protection Regulation (GDPR)... the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China bring more than half of global GDP under a common standard. Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies... there is an opportunity in this trend. The United States can assume the global lead by adopting the emerging GDPR standard, and by investing in world-leading regulation that pursues test cases, and defines practical standards..."

Currently, companies collect, archive, share, and sell consumers' personal information at will -- often without notice or consent. While all 50 states and several territories have breach notification laws, most states have not upgraded those laws to cover biometric and passport data. And while the Health Insurance Portability and Accountability Act (HIPAA) is the federal law governing healthcare data and related breaches, many consumers share health data with social media sites -- forfeiting their HIPAA protections.

Moreover, data collection, archiving, and sharing by telecommunications companies has been an unregulated free-for-all since broadband privacy protections for consumers in the USA were revoked in 2017. Plus, laws have historically focused upon "declared data" (e.g., the data users upload or submit to websites or apps) while ignoring "inferred data" -- which is arguably just as sensitive and revealing.

Regarding future federal privacy legislation, Brave added:

"... The GDPR is compatible with a United States view of consumer protection and privacy principles. Indeed, the FTC has proposed important privacy protections to legislators in 2009, and again in 2012 and 2014, which ended up being incorporated in the GDPR. The high-level principles of the GDPR are closely aligned, and often identical to, the United States’ privacy principles... The GDPR also incorporates principles endorsed by the U.S. in the 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data; and the principles endorsed by the United States this year, in Article 19.8 (3) of the new United States-Mexico-Canada Agreement."

"The GDPR differs from established United States privacy principles in its explicit reference to “proportionality” as a precondition of data use, and in its more robust approach to data minimization and to purpose specification. In our view, a federal law should incorporate these elements too. We also recommend that federal law should adopt the GDPR definitions of concepts such as “personal data”, “legal basis” including opt-in “consent”, “processing”, “special category personal data”, ”profiling”, “data controller”, “automated decision making”, “purpose limitation”, and so forth, and tools such as data protection impact assessments, breach notification, and records of processing activities."

"In keeping with the fair information practice principles (FIPPs) of the 1974 US Privacy Act, Brave recommends that a federal law should require that the collection of personal information is subject to purpose specification. This means that personal information shall only be collected for specific and explicit purposes. Personal information should not used beyond those purposes without consent, unless a further purpose is poses no risk of harm and is compatible with the initial purpose, in which case the data subject should have the opportunity to opt-out."

Submissions by Brave and others are available to the public at the FTC website in the "Public Comments" section.


Study: Privacy Concerns Have Caused Consumers To Change How They Use The Internet

Facebook commissioned a study by the Economist Intelligence Unit (EIU) to understand "internet inclusion" globally, or how people use the Internet, the benefits received, and the obstacles experienced. The latest survey included 5,069 respondents from 100 countries in Asia-Pacific, the Americas, Europe, the Middle East, North Africa and Sub-Saharan Africa.

Overall findings in the report cited:

"... cause for both optimism and concern. We are seeing steady progress in the number and percentage of households connected to the Internet, narrowing the gender gap and improving accessibility for people with disabilities. The Internet also has become a crucial tool for employment and obtaining job-related skills. On the other hand, growth in Internet connections is slowing, especially among the lowest income countries, and efforts to close the digital divide are stalling..."

The EIU describes itself as "the world leader in global business intelligence," helping "companies, governments and banks understand how the world is changing, seize opportunities created by those changes, and manage associated risks." So, any provider of social media services globally would greatly value the EIU's services.

The chart below highlights some of the benefits mentioned by survey respondents:

[Chart: Internet benefits cited by survey respondents (EIU, 2019)]

Respondents cited other benefits, too: almost three-quarters (74.4%) said the Internet is more effective than other methods for finding jobs, and 70.5% said their job prospects have improved because of the Internet. So, job seekers and employers both benefit.

Key findings regarding online privacy (emphasis added):

"... More than half (52.2%) of [survey] respondents say they are not confident about their online privacy, hardly changed from 51.5% in the 2018 survey... Most respondents are changing the way they use the Internet because they believe some information may not remain private. For example, 55.8% of respondents say they limit how much financial information they share online because of privacy concerns. This is relatively consistent across different age groups and household income levels... 42.6% say they limit how much personal health and medical information they share. Only 7.5% of respondents say privacy concerns have not changed the way they use the Internet."

So, the lack of online privacy affects how people use the internet -- for business and pleasure. The chart below highlights the types of online changes:

[Chart: How privacy concerns have changed internet usage (EIU, 2019)]

Findings regarding privacy and online shopping:

"Despite lingering privacy concerns, people are increasingly shopping online. Whether this continues in the future may hinge on attitudes toward online safety and security... A majority of respondents say that making online purchases is safe and secure, but, at 58.8% it was slightly lower than the 62.1% recorded in the 2018 survey."

So, the percentage of respondents who said online purchases are safe and secure went in the wrong direction -- down. Not good. There were regional differences, too, about online privacy:

"In Europe, the share of respondents confident about their online privacy increased by 8 percentage points from the 2018 survey, probably because of the General Data Protection Regulation (GDPR), the EU’s comprehensive data privacy rules that came into force in May 2018. However, the Middle East and North Africa region saw a decline of 9 percentage points compared with the 2018 survey."

So, sensible legislation to protect consumers' online privacy can have positive impacts. There were other regional differences:

"Trust in online sources of information remained relatively stable, except in the West. Political turbulence in the US and UK may have played a role in causing the share of respondents in North America and Europe who say they trust information on government websites and apps to retreat by 10 percentage points and 6 percentage points, respectively, compared with the 2018 survey."

So, stability is important. The report's authors concluded:

"The survey also reflects anxiety about online privacy and a decline in trust in some sources of information. Indeed, trust in government information has fallen since last year in Europe and North America. The growth and importance of the digital economy will mean that alleviating these anxieties should be a priority of companies, governments, regulators and developers."

Addressing those anxieties is critical, if governments in the West are serious about facilitating business growth via consumer confidence and internet usage. Download the Inclusive Internet Index 2019 Executive Summary (Adobe PDF) report.


New Bill In California To Strengthen Its Consumer Privacy Law

Lawmakers in California have proposed legislation to strengthen the state's existing privacy law. California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson jointly announced Senate Bill 561, to improve the California Consumer Privacy Act (CCPA). According to the announcement:

"SB 561 helps improve the workability of the [CCPA] by clarifying the Attorney General’s advisory role in providing general guidance on the law, ensuring a level playing field for businesses that play by the rules, and giving consumers the ability to enforce their new rights under the CCPA in court... SB 561 removes requirements that the Office of the Attorney General provide, at taxpayers’ expense, businesses and private parties with individual legal counsel on CCPA compliance; removes language that allows companies a free pass to cure CCPA violations before enforcement can occur; and adds a private right of action, allowing consumers the opportunity to seek legal remedies for themselves under the act..."

Senator Jackson introduced the proposed legislation into the state Senate. Enacted in 2018, the CCPA will go into effect on January 1, 2020. The law prohibits businesses from discriminating against consumers for exercising their rights under the CCPA. The law also includes several key requirements businesses must comply with:

  • "Businesses must disclose data collection and sharing practices to consumers;
  • Consumers have a right to request their data be deleted;
  • Consumers have a right to opt out of the sale or sharing of their personal information; and
  • Businesses are prohibited from selling personal information of consumers under the age of 16 without explicit consent."

State Senator Jackson said in a statement:

"Our constitutional right to privacy continues to face unprecedented assault. Our locations, relationships, and interests are being tracked without our knowledge, bought and sold by corporate interests for their own economic gain and conducted in order to manipulate us... With the passage of the California Consumer Privacy Act last year, California took an important first step in protecting our fundamental right to privacy. SB 561 will ensure that the most significant privacy protections in the nation are effectively and robustly enforced."

Predictably, the pro-business lobby opposes the legislation. The Sacramento Bee reported:

"Punishment may be an incentive to increase compliance, but — especially where a law is new and vague — eliminating a right to cure does not promote compliance," the California Chamber of Commerce released in a statement on February 25. "SB 561 will not only hurt and possibly bankrupt small businesses in the state, it will kill jobs and innovation."

Sounds to me like fearmongering by the Chamber. Senator Jackson has it right. From the same Sacramento Bee article:

"If you don’t violate the law, you won’t get sued... To have very little recourse when these violations occur means that these large companies can continue with their inappropriate, improper behavior without any kind of recourse and sanction. In order to make sure they comply with the law, we need to make sure that people are able to exercise their rights."

Precisely. Two concepts seem to apply:

  • If you can't protect it, don't collect it (e.g., consumers' personal information), and
  • If the data collected is so valuable, compensate consumers for it

Regarding the second item, the National Law Review reported:

"Much has been made of California Governor Gavin Newsom’s recent endorsement of “data dividends”: payments to consumers for the use of their personal data. Common Sense Media, which helped pass the CCPA last year, plans to propose legislation in California to create such a dividend. The proposal has already proven popular with the public..."

Laws like the CCPA seem to be the way forward. Kudos to California for moving to better protect consumers. This proposed update puts teeth into existing law. Hopefully, other states will follow soon.


UK Parliamentary Committee Issued Its Final Report on Disinformation And Fake News. Facebook And Six4Three Discussed

On February 18th, a United Kingdom (UK) parliamentary committee published its final report on disinformation and "fake news." The 109-page report by the Digital, Culture, Media, And Sport Committee (DCMS) updates its interim report from July 2018.

The report covers many issues: political advertising (including "dark adverts" from unidentifiable sources), Brexit and UK elections, data breaches, privacy, and recommendations for UK regulators and government officials. It seems wise to understand the report's findings regarding the business practices of the U.S.-based companies mentioned, since those practices affect consumers globally, including consumers in the United States.

Issues Identified

First, the DCMS' final report built upon issues identified in its:

"... Interim Report: the definition, role and legal liabilities of social media platforms; data misuse and targeting, based around the Facebook, Cambridge Analytica and Aggregate IQ (AIQ) allegations, including evidence from the documents we obtained from Six4Three about Facebook’s knowledge of and participation in data-sharing; political campaigning; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy..."

The final report includes input from 23 "oral evidence sessions," more than 170 written submissions, interviews of at least 73 witnesses, and more than 4,350 questions asked at hearings. The DCMS Committee sought input from individuals, organizations, industry experts, and other governments. Some of the information sources:

"The Canadian Standing Committee on Access to Information, Privacy and Ethics published its report, “Democracy under threat: risks and solutions in the era of disinformation and data monopoly” in December 2018. The report highlights the Canadian Committee’s study of the breach of personal data involving Cambridge Analytica and Facebook, and broader issues concerning the use of personal data by social media companies and the way in which such companies are responsible for the spreading of misinformation and disinformation... The U.S. Senate Select Committee on Intelligence has an ongoing investigation into the extent of Russian interference in the 2016 U.S. elections. As a result of data sets provided by Facebook, Twitter and Google to the Intelligence Committee -- under its Technical Advisory Group -- two third-party reports were published in December 2018. New Knowledge, an information integrity company, published “The Tactics and Tropes of the Internet Research Agency,” which highlights the Internet Research Agency’s tactics and messages in manipulating and influencing Americans... The Computational Propaganda Research Project and Graphika published the second report, which looks at activities of known Internet Research Agency accounts, using Facebook, Instagram, Twitter and YouTube between 2013 and 2018, to impact US users"

Why Disinformation

Second, definitions matter. According to the DCMS Committee:

"We have even changed the title of our inquiry from “fake news” to “disinformation and ‘fake news’”, as the term ‘fake news’ has developed its own, loaded meaning. As we said in our Interim Report, ‘fake news’ has been used to describe content that a reader might dislike or disagree with... We were pleased that the UK Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation'..."

Overall Recommendations

Summary recommendations from the report:

  1. "Compulsory Code of Ethics for tech companies overseen by independent regulator,
  2. Regulator given powers to launch legal action against companies breaching code,
  3. Government to reform current electoral communications laws and rules on overseas involvement in UK elections, and
  4. Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation"

Role And Liability Of Tech Companies

Regarding detailed observations and findings about the role and liability of tech companies, the report stated:

"Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company..."

The UK Government and its regulators may adopt some, all, or none of the report's recommendations. More observations and findings in the report:

"... both social media companies and search engines use algorithms, or sequences of instructions, to personalize news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritized. Therefore, negative stories will always be prioritized by algorithms, as they are shared more frequently than positive stories... Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers... Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms.” Facebook should be taking a more active and urgent role in tackling such inherent biases..."

Based upon this, the report recommended that the UK's new Centre for Data Ethics and Innovation (CDEI) play a key role as an advisor to the UK Government: continually analyzing and anticipating gaps in governance and regulation, suggesting best practices and corporate codes of conduct, and setting standards for artificial intelligence (AI) and related technologies.

Inferred Data

The report also discussed a critical issue related to algorithms (emphasis added):

"... When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will. However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users... In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through the data profile matching and the ‘lookalike audience’ advertising targeting tool... Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power of where that data goes and what it is used for..."

The distinction between uploaded and inferred data cannot be overemphasized. It is critical when evaluating tech companies' statements, policies (e.g., privacy, terms of use), and promises about what "data" users have control over. Wise consumers must insist upon clear definitions to avoid being misled or duped.

What might be an example of inferred data? What comes to mind is Facebook's Ad Preferences feature, which allows users to review and delete the "Interests" -- advertising categories -- Facebook assigns to each user's profile. (The service's algorithms assign Interests based upon the groups/pages/events/advertisements users "Liked" or clicked on, posts submitted, posts commented upon, and more.) These "Interests" are inferred data, since Facebook assigned them and users didn't.

In fact, Facebook doesn't notify its users when it assigns new Interests. It just does it. And, Facebook can assign Interests whether you interacted with an item once or many times. How relevant is an Interest assigned after a single interaction, "Like," or click? Most people would say: not relevant. So, does the Interests list assigned to users' profiles accurately describe users? Do Facebook users own the Interests list assigned to their profiles? Any control Facebook users have seems minimal. Why? Facebook users can delete Interests assigned to their profiles, but users cannot stop Facebook from applying new Interests. Users cannot prevent Facebook from re-applying Interests previously deleted. Deleting Interests doesn't reduce the number of ads users see on Facebook.

The only way to know what Interests have been assigned is for Facebook users to visit the Ad Preferences section of their profiles and browse the list. Depending upon how frequently a person uses Facebook, it may be necessary to prune an Interests list at least once monthly -- a cumbersome and time-consuming task, probably designed that way to discourage reviews and pruning. And, that's just one example of inferred data. There are probably plenty more, and as the report emphasizes, users don't have access to all of the inferred data associated with their profiles.
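To make the mechanics concrete, below is a deliberately simplified sketch of interaction-based interest inference. It is hypothetical -- Facebook's actual algorithm is not public, and the one-interaction threshold is invented to illustrate the single-click problem described above.

```python
from collections import Counter

# Hypothetical illustration only: the threshold and logic are invented,
# not taken from any published Facebook algorithm.
INTEREST_THRESHOLD = 1  # even a single interaction assigns an Interest

def infer_interests(interaction_topics: list[str]) -> set[str]:
    """Map the topics of a user's clicks and "Likes" to a set of inferred Interests."""
    counts = Counter(interaction_topics)
    return {topic for topic, n in counts.items() if n >= INTEREST_THRESHOLD}

# One stray click on a gardening ad tags the profile with "gardening,"
# however unrepresentative that single interaction may be.
print(infer_interests(["politics", "politics", "gardening"]))
# Output: {'politics', 'gardening'}
```

Note that nothing in the sketch records whether the user wanted the label; the inference happens silently, which is precisely the transparency problem the report describes.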

Now, back to the report. To fix problems with inferred data, the DCMS recommended:

"We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information."

Business Practices At Facebook

Next, the DCMS Committee's report said plenty about Facebook, its management style, and executives (emphasis added):

"Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinized... Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated... He discussed the California Consumer Privacy Act, which Facebook supported in public, but lobbied against, behind the scenes... By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world. The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which -- unsurprisingly -- failed to address all of our questions. We are left in no doubt that this strategy was deliberate."

So, based upon Facebook's actions (or lack thereof), the DCMS concluded that Facebook executives intentionally ducked and dodged issues and questions.

While discussing data use and targeting, the report said more about data breaches and Facebook:

"The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests..."

So, internal management failed. That's not all. After a detailed review of the GSR/Cambridge Analytica breach and Facebook's 2011 Consent Decree with the U.S. Federal Trade Commission (FTC), the DCMS Committee concluded (emphasis and text link added):

"The Cambridge Analytica scandal was facilitated by Facebook’s policies. If it had fully complied with the FTC settlement, it would not have happened. The FTC Complaint of 2011 ruled against Facebook -- for not protecting users’ data and for letting app developers gain as much access to user data as they liked, without restraint -- and stated that Facebook built their company in a way that made data abuses easy. When asked about Facebook’s failure to act on the FTC’s complaint, Elizabeth Denham, the Information Commissioner, told us: “I am very disappointed that Facebook, being such an innovative company, could not have put more focus, attention and resources into protecting people’s data”. We are equally disappointed."

Wow! Not good. There's more:

"... a current court case at the San Mateo Superior Court in California also concerns Facebook’s data practices. It is alleged that Facebook violated the privacy of US citizens by actively exploiting its privacy policy... The published ‘corrected memorandum of points and authorities to defendants’ special motions to strike’, by the complainant in the case, the U.S.-based app developer Six4Three, describes the allegations against Facebook; that Facebook used its users’ data to persuade app developers to create platforms on its system, by promising access to users’ data, including access to data of users’ friends. The case also alleges that those developers that became successful were targeted and ordered to pay money to Facebook... Six4Three lodged its original case in 2015, after Facebook removed developers’ access to friends’ data, including its own. The DCMS Committee took the unusual, but lawful, step of obtaining these documents, which spanned between 2012 and 2014... Since we published these sealed documents, on 14 January 2019 another court agreed to unseal 135 pages of internal Facebook memos, strategies and employee emails from between 2012 and 2014, connected with Facebook’s inappropriate profiting from business transactions with children. A New York Times investigation published in December 2018 based on internal Facebook documents also revealed that the company had offered preferential access to users data to other major technology companies, including Microsoft, Amazon and Spotify."

"We believed that our publishing the documents was in the public interest and would also be of interest to regulatory bodies... The documents highlight Facebook’s aggressive action against certain apps, including denying them access to data that they were originally promised. They highlight the link between friends’ data and the financial value of the developers’ relationship with Facebook. The main issues concern: ‘white lists’; the value of friends’ data; reciprocity; the sharing of data of users owning Android phones..."

You can read the report's detailed descriptions of those issues. A summary: a) Facebook allegedly used promises of access to users' data to lure developers (often by overriding Facebook users' privacy settings); b) some developers got priority treatment based upon unclear criteria; c) developers who didn't spend enough money with Facebook were denied access to data previously promised; d) Facebook's reciprocity clause demanded that developers also share their users' data with Facebook; e) Facebook's mobile app for Android OS phone users collected far more data about users, allegedly without consent, than users were told; and f) Facebook allegedly targeted certain app developers (emphasis added):

"We received evidence that showed that Facebook not only targeted developers to increase revenue, but also sought to switch off apps where it considered them to be in competition or operating in a lucrative areas of its platform and vulnerable to takeover. Since 1970, the US has possessed high-profile federal legislation, the Racketeer Influenced and Corrupt Organizations Act (RICO); and many individual states have since adopted similar laws. Originally aimed at tackling organized crime syndicates, it has also been used in business cases and has provisions for civil action for damages in RICO-covered offenses... Despite specific requests, Facebook has not provided us with one example of a business excluded from its platform because of serious data breaches. We believe that is because it only ever takes action when breaches become public. We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that “we’ve never sold anyone’s data” is simply untrue.” The evidence that we obtained from the Six4Three court documents indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers—such as Six4Three—of that data, thereby causing them to lose their business. It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission settlement."

"The Information Commissioner told the Committee that Facebook needs to significantly change its business model and its practices to maintain trust. From the documents we received from Six4Three, it is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws. The ICO should carry out a detailed investigation into the practices of the Facebook Platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ of the sharing of data."

The Information Commissioner's Office (ICO) is one of the regulatory agencies within the UK. So, the Committee concluded that Facebook's real business model is, "data transfer for value" -- in other words: have money, get access to data (regardless of Facebook users' privacy settings).

One quickly gets the impression that Facebook acted like a monopoly in its treatment of both users and developers... or worse, like organized crime. The report concluded (emphasis added):

"The Competitions and Market Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media. The Committee made this recommendation its interim report, and we are pleased that it has also been supported in the independent Cairncross Report commissioned by the government and published in February 2019. Given the contents of the Six4Three documents that we have published, it should also investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail... Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law."

The DCMS Committee's report also discussed findings from the Cairncross Report. In summary, Damian Collins MP, Chair of the DCMS Committee, said:

“... we cannot delay any longer. Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalized ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use everyday. Much of this is directed from agencies working in foreign countries, including Russia... Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers... We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct..."

So, the report seems extensive, comprehensive, and detailed. Read the DCMS Committee's announcement, and/or download the full DCMS Committee report (Adobe PDF format, 3,507 kilobytes).

One can assume that governments' intelligence and spy agencies will continue to do what they've always done: collect data about targets and adversaries, and use disinformation and other tools to meddle in other governments' activities. It is clear that social media makes these tasks far easier than before. The DCMS Committee's report provided recommendations about what the UK Government's response should be. Other countries' governments face similar decisions about their responses, if any, to the threats.

Given the data in the DCMS report, it will be interesting to see how the FTC and lawmakers in the United States respond. If increased regulation of social media results, tech companies arguably have only themselves to blame. What do you think?


Popular iOS Apps Record All In-App Activity Causing Privacy, Data Security, And Other Issues

As the internet has evolved, user testing and market research practices have also evolved. This may surprise consumers. TechCrunch reported that many popular Apple mobile apps record everything customers do within the apps:

"Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play them back to see how its users interacted with the app to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers."

So, customers' entire app sessions and activities have been recorded. Of course, marketers need to understand their customers' needs, and how users interact with their mobile apps, to build better products, services, and apps. However, in doing so some apps have security vulnerabilities:

"The App Analyst... recently found Air Canada’s iPhone app wasn’t properly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles."

Not good, for a couple of reasons. First, sensitive data (e.g., credit/debit card numbers, passport numbers, bank account numbers) should be masked. Second, when sensitive information isn't masked, more data security problems arise. How long is this app usage data archived? What employees, contractors, and business partners have access to the archive? What security methods are used to protect the archive from abuse?

In short, unauthorized persons may have access to the archives and the sensitive information they contain. For example, market researchers probably have little or no need for specific customers' payment information. Sensitive information in these archives should be encrypted, to provide the best protection from abuse and from data breaches.
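How might masking work? Below is a minimal, hypothetical Python sketch of masking sensitive fields before a session-replay payload leaves the device. The field names and pattern are illustrative assumptions, not the API of Glassbox or any other analytics vendor.

```python
import re

# Hypothetical field names -- not from any specific analytics SDK.
SENSITIVE_FIELDS = {"credit_card", "passport_number", "bank_account"}
CARD_PATTERN = re.compile(r"\b\d{13,19}\b")  # crude match for card-like digit runs

def mask_payload(payload: dict) -> dict:
    """Return a copy of a session-replay payload with sensitive values masked."""
    masked = {}
    for key, value in payload.items():
        if key in SENSITIVE_FIELDS:
            masked[key] = "***MASKED***"
        elif isinstance(value, str):
            # Also redact anything that merely looks like a payment card number.
            masked[key] = CARD_PATTERN.sub("***MASKED***", value)
        else:
            masked[key] = value
    return masked

print(mask_payload({"screen": "checkout", "credit_card": "4111111111111111"}))
# Output: {'screen': 'checkout', 'credit_card': '***MASKED***'}
```

The design point: masking must happen on the device, before transmission. Masking on the server would still expose the sensitive values in transit and in server logs.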

Sadly, there is more bad news:

"Apps that are submitted to Apple’s App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user’s screen... Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either."

So, the app session recordings were done covertly... without explicit language to provide meaningful and clear notice to consumers. I encourage everyone to read the entire TechCrunch article, which also includes responses by some of the companies mentioned. In my opinion, most of the responses fell far short with lame, boilerplate statements.

All of this is very troubling. And, there is more.

The TechCrunch article didn't discuss it, but historically companies hired testing firms to recruit user test participants -- usually current and prospective customers. Test participants were paid for their time. (I know because as a former user experience professional I conducted such in-person test sessions where clients paid test participants.) Things have changed. Not only has user testing and research migrated online, but companies use automated tools to perform perpetual, unannounced user testing -- all without compensating test participants.

While change is inevitable, not all change is good. Plus, things can be done in better ways. If the test information is that valuable, then pay test participants. Otherwise, this seems like another example of corporate greed at consumers' expense. And, it's especially egregious if data transmissions of the recorded app sessions to developers' servers use up cellular data plan capacity consumers paid for. Some consumers (e.g., elders, children, the poor) cannot afford the costs of unlimited cellular data plans.

After this TechCrunch report, Apple notified developers to either stop or disclose screen recording:

"Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity... We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary..."

Good. That's a start. Still, user testing and market research is not a free pass for developers to ignore or skip data security best practices. Given these covert recorded app sessions, mobile apps must be continually tested. Otherwise, some ethically-challenged companies may re-introduce covert screen recording features. What are your opinions?


Survey: People In Relationships Spy On Cheating Partners. FTC: Singles Looking For Love Are The Biggest Target Of Scammers

Happy Valentine's Day! First, BestVPN announced the results of a survey of 1,000 adults globally about relationships and trust in today's digital age where social media usage is very popular. Key findings:

"... nearly 30% of respondents admitted to using tracking apps to catch a partner [suspected of or cheating]. After all, over a quarter of those caught cheating were busted by modern technology... 85% of those caught out in the past now take additional steps to protect their privacy, including deleting their browsing data or using a private browsing mode."

Below is an infographic with more findings from the survey.

[Infographic: BestVPN survey findings on relationships, trust, and tracking apps (February 2019)]

Second, the U.S. Federal Trade Commission (FTC) issued a warning earlier this week about fraud affecting single persons:

"... romance scams generated more reported losses than any other consumer fraud type reported to the agency... The number of romance scams reported to the FTC has grown from 8,500 in 2015 to more than 21,000 in 2018, while reported losses to these scams more than quadrupled in recent years—from $33 million in 2015 to $143 million last year. For those who said they lost money to a romance scam, the median reported loss was $2,600, with those 70 and over reporting the biggest median losses at $10,000."

"Romance scammers often find their victims online through a dating site or app or via social media. These scammers create phony profiles that often involve the use of a stranger’s photo they have found online. The goals of these scams are often the same: to gain the victim’s trust and love in order to get them to send money through a wire transfer, gift card, or other means."

So, be careful out there. Don't cheat, and beware of scammers and dating imposters. You have been warned.


Senators Demand Answers From Facebook And Google About Project Atlas And Screenwise Meter Programs

After news reports surfaced about Facebook's Project Atlas, a secret program in which Facebook paid teenagers (and other users) to install a research app on their phones to track and collect information about their mobile usage, several United States Senators demanded explanations. Three Senators sent a joint letter on February 7, 2019 to Mark Zuckerberg, Facebook's chief executive officer.

The joint letter to Facebook (Adobe PDF format) stated, in part:

"We write concerned about reports that Facebook is collecting highly-sensitive data on teenagers, including their web browsing, phone use, communications, and locations -- all to profile their behavior without adequate disclosure, consent, or oversight. These reports fit with Longstanding concerns that Facebook has used its products to deeply intrude into personal privacy... According to a journalist who attempted to register as a teen, the linked registration page failed to impose meaningful checks on parental consent. Facebook has more rigorous mechanism to obtain and verify parental consent, such as when it is required to sign up for Messenger Kids... Facebook's monitoring under Project Atlas is particularly concerning because the data data collection performed by the research app was deeply invasive. Facebook's registration process encouraged participants to "set it and forget it," warning that if a participant disconnected from the monitoring for more than ten minutes for a few days, that they could be disqualified. Behind the scenes, the app watched everything on the phone."

The letter included another example highlighting the alleged lack of meaningful disclosures:

"... the app added a VPN connection that would automatically route all of a participant's traffic through Facebook's servers. The app installed a SSL root certificate on the participant's phone, which would allow Facebook to intercept or modify data sent to encrypted websites. As a result, Facebook would have limitless access to monitor normally secure web traffic, even allowing Facebook to watch an individual log into their bank account or exchange pictures with their family. None of the disclosures provided at registration offer a meaningful explanation about how the sensitive data is used, how long it is kept, or who within Facebook has access to it..."

The letter was signed by Senators Richard Blumenthal (Democrat, Connecticut), Edward J. Markey (Democrat, Massachusetts), and Josh Hawley (Republican, Missouri). Based upon news reports that Facebook's Research app operated with functionality similar to the Onavo VPN app, which Apple banned last year, the Senators concluded:

"Faced with that ban, Facebook appears to have circumvented Apple's attempts to protect consumers."

The joint letter also listed twelve questions the Senators want detailed answers about. Below are selected questions from that list:

"1. When did Project Atlas begin and how many individuals participated? How many participants were under age 18?"

"3. Why did Facebook use a less strict mechanism for verifying parental consent than is Required for Messenger Kids or Global Data Protection Requlation (GDPR) compliance?"

"4.What specific types of data was collected (e.g., device identifieers, usage of specific applications, content of messages, friends lists, locations, et al.)?"

"5. Did Facebook use the root certificate installed on a participant's device by the Project Atlas app to decrypt and inspect encrypted web traffic? Did this monitoring include analysis or retention of application-layer content?"

"7. Were app usage data or communications content collected by Project Atlas ever reviewed by or available to Facebook personnel or employees of Facebook partners?"

8." Given that Project Atlas acknowledged the collection of "data about [users'] activities and content within those apps," did Facebook ever collect or retain the private messages, photos, or other communications sent or received over non-Facebook products?"

"11. Why did Facebook bypass Apple's app review? Has Facebook bypassed the App Store aproval processing using enterprise certificates for any other app that was used for non-internal purposes? If so, please list and describe those apps."

Read the entire letter to Facebook (Adobe PDF format). Also on February 7th, the Senators sent a similar letter to Google (Adobe PDF format), addressed to Hiroshi Lockheimer, the Senior Vice President of Platforms & Ecosystems. It stated in part:

"TechCrunch has subsequently reported that Google maintained its own measurement program called "Screenwise Meter," which raises similar concerns as Project Atlas. The Screenwise Meter app also bypassed the App Store using an enterprise certificate and installed a VPN service in order to monitor phones... While Google has since removed the app, questions remain about why it had gone outside Apple's review process to run the monitoring program. Platforms must maintain and consistently enforce clear policies on the monitoring of teens and what constitutes meaningful parental consent..."

The letter to Google includes a similar list of eight questions the Senators seek detailed answers about. Some notable questions:

"5. Why did Google bypass App Store approval for Screenwise Meter app using enterprise certificates? Has Google bypassed the App Store approval processing using enterprise certificates for any other non-internal app? If so, please list and describe those apps."

"6. What measures did Google have in place to ensure that teenage participants in Screenwise Meter had authentic parental consent?"

"7. Given that Apple removed Onavoo protect from the App Store for violating its terms of service regarding privacy, why has Google continued to allow the Onavo Protect app to be available on the Play Store?"

The lawmakers have asked for responses by March 1st. Thanks to all three Senators for protecting consumers' -- and children's -- privacy... and for enforcing transparency and accountability.


Technology And Human Rights Organizations Sent Joint Letter Urging House Representatives Not To Fund 'Invasive Surveillance' Tech Instead of A Border Wall

More than two dozen technology and human rights organizations sent a joint letter Tuesday to members of the House of Representatives, urging them not to fund "invasive surveillance technologies" in place of a physical wall or barrier along the southern border of the United States. The joint letter cited five concerns:

"1. Risk-based targeting: The proposal calls for “an expansion of risk-based targeting of passengers and cargo entering the United States.” We are concerned that this includes the expansion of programs — proven to be ineffective and to exacerbate racial profiling — that use mathematical analytics to make targeting determinations. All too often, these systems replicate the biases of their programmers, burden vulnerable communities, lack democratic transparency, and encourage the collection and analysis of ever-increasing amounts of data... 3. Biometrics: The proposal calls for “new cutting edge technology” at the border. If that includes new face surveillance like that deployed at international airline departures, it should not. Senator Jeff Merkley and the Congressional Black Caucus have expressed serious concern that facial recognition technology would place “disproportionate burdens on communities of color and could stifle Americans’ willingness to exercise their first amendment rights in public.” In addition, use of other biometrics, including iris scans and voice recognition, also raise significant privacy concerns... 5. Biometric and DNA data: We oppose biometric screening at the border and the collection of immigrants’ DNA, and fear this may be another form of “new cutting edge technology” under consideration. We are concerned about the threat that any collected biometric data will be stolen or misused, as well as the potential for such programs to be expanded far beyond their original scope..."

The letter was sent to Speaker Nancy Pelosi, Minority Leader Kevin McCarthy, Majority Leader Steny Hoyer, Minority Whip Steve Scalise, House Appropriations Committee Chair Nita Lowey, and House Appropriations Committee Ranking Member Kay Granger.

Twenty-seven organizations signed the joint letter, including Fight for the Future, the Electronic Frontier Foundation, the American Civil Liberties Union (ACLU), the American-Arab Anti-Discrimination Committee, the Center for Media Justice, the Project On Government Oversight, and others. Read the entire letter.

Earlier this month, a structural and civil engineer cited several reasons why a physical wall won't work and would be vastly more expensive than the $5.7 billion requested.

Clearly, there are distinct advantages and disadvantages to each of the border-protection solutions the House and President are considering. It is a complex problem. The advantages and disadvantages of all proposals need to be made clear and transparent, and understood by taxpayers, prior to any final decisions.


Facebook Paid Teens To Install Unauthorized Spyware On Their Phones. Plenty Of Questions Remain

While today is the 15th anniversary of Facebook, more important news dominates. Last week featured plenty of news about Facebook. TechCrunch reported on Tuesday:

"Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe... Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits."

So, teenagers installed surveillance software on their phones and tablets, to spy for Facebook on themselves, Facebook's competitors, and others. This is huge news for several reasons. First, the "Facebook Research" app is VPN (Virtual Private Network) software which:

"... lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy..."

Reportedly, the Research app collected massive amounts of information: private messages in social media apps, chats from instant messaging apps, photos/videos sent to others, emails, web searches, web browsing activity, and geo-location data. So, a very intrusive app. And, after being forced to remove one intrusive app from Apple's store, Facebook continued anyway -- with another app that performed the same function. Not good.

Second, there is the moral issue of using the youngest users as spies... persons who arguably have the least experience and skill at reading complex documents: corporate terms-of-use and privacy policies. I wonder how many teenagers notified their friends of the spying and data collection. How many teenagers fully understood what they were doing? How many parents were aware of the activity and payments? How many parents notified the parents of their children's friends? How many teens installed the spyware on both their iPhones and iPads? Lots of unanswered questions.

Third, Apple responded quickly. TechCrunch reported Wednesday morning:

"... Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down... Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store."

Facebook's usage of the Enterprise Certificate is significant. TechCrunch also published a statement by Apple:

"We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization... Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked..."

So, the Research app violated Apple's policy. Not good. The app also performed similar functions as the banned Onavo VPN app. Worse. This sounds like an end-run to me. So, as punishment for its end-run actions, Apple temporarily disabled the certificates for Facebook's internal corporate apps.

Axios described very well Facebook's behavior:

"Facebook took a program designed to let businesses internally test their own app and used it to monitor most, if not everything, a user did on their phone — a degree of surveillance barred in the official App Store."

And the animated Facebook image in the Axios article sure looks like a liar-liar-logo-on-fire image. LOL! Pure gold! Seriously, Facebook's behavior indicates questionable ethics, and/or an expectation of not getting caught. Reportedly, the internal apps which were shut down included shuttle schedules, campus maps, and company calendars. After that, some Facebook employees discussed quitting.

And, it raises more questions. Which Facebook executives approved Project Atlas? What advice did Facebook's legal staff provide prior to approval? Was that advice followed or ignored?

Fourth, TechCrunch also reported:

"Facebook’s Research program will continue to run on Android."

What? So, Google devices were involved, too. Is this spy program okay with Google executives? A follow-up report on Wednesday by TechCrunch:

"Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple... Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app..."

Oy! So, Google operates like Facebook. Also reported by TechCrunch:

"The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices..."

So, Google will terminate its spy program on Apple devices, but continue its own program on Android devices -- just like Facebook. Hmmmmm. Well, that answers some questions. I guess Google executives are okay with this spy program. More questions remain.

Fifth, Facebook tried to defend the Research app and its actions in an internal memo to employees. On Thursday, TechCrunch tore apart the claims in an internal Facebook memo from vice president Pedro Canahuati. Chiefly:

"Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause and CentreCode to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly... Facebook claims it wasn’t “spying,” yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not specify data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity.” The parental consent form from Facebook and Applause lists none of the specific types of data collected...

So, Research app participants (e.g., teenagers, parents) couldn't discuss nor warn their friends (and their friends' parents) about the data collection. I strongly encourage everyone to read the entire TechCrunch analysis. It is eye-opening.

Sixth, a reader shared concerns about whether Facebook's actions violated federal laws. Did Project Atlas violate the Digital Millennium Copyright Act (DMCA); specifically the "anti-circumvention" provision, which prohibits avoiding the security protections in software? Did it violate the Computer Fraud and Abuse Act? What about breach-of-contract and fraud laws? What about states' laws? So, one could ask similar questions about Google's actions, too.

I am not an attorney. Hopefully, some attorneys will weigh in on these questions. Probably, some skilled attorneys will investigate various legal options.

All of this is very disturbing. Is this what consumers can expect of Silicon Valley firms? Is this the best tech firms can do? Is this the low level the United States has sunk to? Kudos to the TechCrunch staff for some excellent reporting.

What are your opinions of Project Atlas? Of Facebook's behavior? Of Google's?


Google Fined 50 Million Euros For Violations Of New European Privacy Law

Google has been fined 50 million Euros (about U.S. $57 million) under the new European privacy law for failing to properly disclose to users how their data is collected and used for targeted advertising. The European Union's General Data Protection Regulation (GDPR), which went into effect in May 2018, gives EU residents more control over their information and how companies use it.

After receiving two complaints last year from privacy-rights groups, France's National Data Protection Commission (CNIL) announced earlier this month:

"... CNIL carried out online inspections in September 2018. The aim was to verify the compliance of the processing operations implemented by GOOGLE with the French Data Protection Act and the GDPR by analysing the browsing pattern of a user and the documents he or she can have access, when creating a GOOGLE account during the configuration of a mobile equipment using Android. On the basis of the inspections carried out, the CNIL’s restricted committee responsible for examining breaches of the Data Protection Act observed two types of breaches of the GDPR."

The first violation involved transparency failures:

"... information provided by GOOGLE is not easily accessible for users. Indeed, the general structure of the information chosen by the company does not enable to comply with the Regulation. Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information. The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions... some information is not always clear nor comprehensive. Users are not able to fully understand the extent of the processing operations carried out by GOOGLE. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner..."

So, important information is buried and scattered across several documents making it difficult for users to access and to understand. The second violation involved the legal basis for personalized ads processing:

"... GOOGLE states that it obtains the user’s consent to process data for ads personalization purposes. However, the restricted committee considers that the consent is not validly obtained for two reasons. First, the restricted committee observes that the users’ consent is not sufficiently informed. The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent. For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Playstore, Google pictures, etc.) and therefore of the amount of data processed and combined."

"[Second], the restricted committee observes that the collected consent is neither “specific” nor “unambiguous.” When an account is created, the user can admittedly modify some options associated to the account by clicking on the button « More options », accessible above the button « Create Account ». It is notably possible to configure the display of personalized ads. That does not mean that the GDPR is respected. Indeed, the user not only has to click on the button “More options” to access the configuration, but the display of the ads personalization is moreover pre-ticked. However, as provided by the GDPR, consent is “unambiguous” only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance). Finally, before creating an account, the user is asked to tick the boxes « I agree to Google’s Terms of Service» and « I agree to the processing of my information as described above and further explained in the Privacy Policy» in order to create the account. Therefore, the user gives his or her consent in full, for all the processing operations purposes carried out by GOOGLE based on this consent (ads personalization, speech recognition, etc.). However, the GDPR provides that the consent is “specific” only if it is given distinctly for each purpose."

So, not only is important information buried and scattered across multiple documents (again), but also critical boxes for users to give consent are pre-checked when they shouldn't be.
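
For developers, the remedy CNIL describes maps to a simple rule: one explicit, default-off consent flag per processing purpose. Below is a minimal sketch in Python; the purpose names are hypothetical and for illustration only.

    # Hypothetical consent model: one explicit, default-off flag per purpose,
    # matching the GDPR's "specific" and "unambiguous" consent requirements.
    consent = {
        "ads_personalization": False,   # must never be pre-ticked
        "speech_recognition": False,
        "web_and_app_history": False,
    }

    def user_consented(purpose: str) -> bool:
        # The absence of an affirmative tick is treated as no consent.
        return consent.get(purpose, False)

    print(user_consented("ads_personalization"))  # False until the user opts in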

CNIL explained its reasons for the massive fine:

"The amount decided, and the publicity of the fine, are justified by the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent. Despite the measures implemented by GOOGLE (documentation and configuration tools), the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations... Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement..."

This is the largest fine, so far, under the GDPR. Reportedly, Google will appeal the fine:

"We've worked hard to create a GDPR consent process for personalised ads that is as transparent and straightforward as possible, based on regulatory guidance and user experience testing... We're also concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond... For all these reasons, we've now decided to appeal."

This is not the first EU fine for Google. CNet reported:

"Google is no stranger to fines under EU laws. It's currently awaiting the outcome of yet another antitrust investigation -- after already being slapped with a $5 billion fine last year for anticompetitive Android practices and a $2.7 billion fine in 2017 over Google Shopping."


Companies Want Your Location Data. Recent Examples: The Weather Channel And Burger King

It is easy to find examples where companies use mobile apps to collect consumers' real-time GPS location data, so they can archive and resell that information later for additional profits. First, ExpressVPN reported:

"The city of Los Angeles is suing the Weather Company, a subsidiary of IBM, for secretly mining and selling user location data with the extremely popular Weather Channel App. Stating that the app unfairly manipulates users into enabling their location settings for more accurate weather reports, the lawsuit affirms that the app collects and then sells this data to third-party companies... Citing a recent investigation by The New York Times that revealed more than 75 companies silently collecting location data (if you haven’t seen it yet, it’s worth a read), the lawsuit is basing its case on California’s Unfair Competition Law... the California Consumer Privacy Act, which is set to go into effect in 2020, would make it harder for companies to blindly profit off customer data... This lawsuit hopes to fine the Weather Company up to $2,500 for each violation of the Unfair Competition Law. With more than 200 million downloads and a reported 45+ million users..."

Long-term readers remember that a data breach in 2007 at IBM Inc. prompted this blog. It's not only internet service providers which collect consumers' location data. Advertisers, retailers, and data brokers want it, too.

Second, Burger King ran a national "Whopper Detour" promotion last month, which offered customers a one-cent Whopper burger if they went near a competitor's store. News 5, the ABC News affiliate in Cleveland, reported:

"If you download the Burger King mobile app and drive to a McDonald’s store, you can get the penny burger until December 12, 2018, according to the fast-food chain. You must be within 600 feet of a McDonald's to claim your discount, and no, McDonald's will not serve you a Whopper — you'll have to order the sandwich in the Burger King app, then head to the nearest participating Burger King location to pick it up. More information about the deal can be found on the app on Apple and Android devices."

Next, the relevant portions from Burger King's privacy policy for its mobile apps (emphasis added):

"We collect information you give us when you use the Services. For example, when you visit one of our restaurants, visit one of our websites or use one of our Services, create an account with us, buy a stored-value card in-restaurant or online, participate in a survey or promotion, or take advantage of our in-restaurant Wi-Fi service, we may ask for information such as your name, e-mail address, year of birth, gender, street address, or mobile phone number so that we can provide Services to you. We may collect payment information, such as your credit card number, security code and expiration date... We also may collect information about the products you buy, including where and how frequently you buy them... we may collect information about your use of the Services. For example, we may collect: 1) Device information - such as your hardware model, IP address, other unique device identifiers, operating system version, and settings of the device you use to access the Services; 2) Usage information - such as information about the Services you use, the time and duration of your use of the Services and other information about your interaction with content offered through a Service, and any information stored in cookies and similar technologies that we have set on your device; and 3) Location information - such as your computer’s IP address, your mobile device’s GPS signal or information about nearby WiFi access points and cell towers that may be transmitted to us..."

So, for the low, low price of one hamburger, participants in this promotion gave RBI, the parent company which owns Burger King, perpetual access to their real-time location data. And, since RBI knows when, where, and how long its customers visit competitors' fast-food stores, it also knows similar details about everywhere else they go -- including school, work, doctors, hospitals, and more. Sweet deal for RBI. A poor deal for consumers.
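
For the technically curious, here is a rough sketch of how a geofence check like the Whopper Detour's might work: compare the phone's latest GPS fix against a list of store coordinates using a great-circle distance. This is a guess at the mechanics, not Burger King's actual code; the coordinates below are hypothetical.

    import math

    def distance_feet(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance between two lat/lon points, in feet.
        earth_radius_feet = 20_902_231
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * earth_radius_feet * math.asin(math.sqrt(a))

    # Hypothetical competitor-store coordinates, for illustration only.
    competitor_stores = [(41.4993, -81.6944), (41.4822, -81.6697)]

    def detour_offer_unlocked(user_lat, user_lon):
        # The promotion unlocked when a device came within 600 feet of a store.
        return any(distance_feet(user_lat, user_lon, lat, lon) <= 600.0
                   for lat, lon in competitor_stores)

    print(detour_offer_unlocked(41.4994, -81.6945))  # True: a few feet away

Note that a check this simple requires a continuous stream of GPS fixes. Run at scale, the promotion doubles as a location-harvesting funnel.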

Expect to see more corporate promotions like this, which privacy advocates call "surveillance capitalism."

Consumers' real-time location data is very valuable. Don't give it away for free. If you decide to share it, demand a fair, ongoing payment in exchange. Read privacy and terms-of-use policies before downloading mobile apps, so you don't get abused or taken. Opinions? Thoughts?


The Privacy And Data Security Issues With Medical Marijuana

In the United States, some states have enacted legislation making medical marijuana legal -- despite it being illegal at a federal level. This situation presents privacy issues for both retailers and patients.

In her "Data Security And Privacy" podcast series, privacy consultant Rebecca Harold (@PrivacyProf) interviewed a patient cannabis advocate about privacy and data security issues:

"Most people assume that their data is safe in cannabis stores & medical cannabis dispensaries. Or they believe if they pay in cash there will be no record of their cannabis purchase. Those are incorrect beliefs. How do dispensaries secure & share data? Who WANTS that data? What security is needed? Some in government, law enforcement & employers want data about state legal marijuana and medical cannabis purchases. Michelle Dumay, Cannabis Patient Advocate, helps cannabis dispensaries & stores to secure their customers’ & patients’ data & privacy. Michelle learned through experience getting treatment for her daughter that most medical cannabis dispensaries are not compliant with laws governing the security and privacy of patient data... In this episode, we discuss information security & privacy practices of cannabis shops, risks & what needs to be done when it comes to securing data and understanding privacy laws."

Many consumers know that the Health Insurance Portability and Accountability Act (HIPAA) governs how patients' information must be protected, and which businesses must comply with that law.

Poor data security (e.g., data breaches, unauthorized recording of patients inside or outside of dispensaries) can result in the misuse of patients' personal and medical information by bad actors and others. Downstream consequences can be negative, such as employers using the data to decline job applications.

After listening to the episode, it seems reasonable for consumers to assume that traditional information industry players (e.g., credit reporting agencies, advertisers, data brokers, law enforcement, government intelligence agencies, etc.) all want marijuana purchase data. Note the use of "consumers," and not only "patients," since about 10 states have legalized recreational marijuana.

Listen to an encore presentation of the "Medical Cannabis Patient Privacy And Data Security" episode.


Google To EU Regulators: No One Country Should Censor The Web Globally. Poll Finds Canadians Support 'Right To Be Forgotten'

For those watching privacy legislation in Europe, MediaPost reported:

"... Maciej Szpunar, an advisor to the highest court in the EU, sided with Google in the fight, arguing that the right to be forgotten should only be enforceable in Europe -- not the entire world. The opinion is non-binding, but seen as likely to be followed."

For those unfamiliar, in the European Union (EU) the right to be forgotten:

"... was created in 2014, when EU judges ruled that Google (and other search engines) must remove links to embarrassing information about Europeans at their request... The right to be forgotten doesn't exist in the United States... Google interpreted the EU's ruling as requiring removal of links to material in search engines designed for European countries but not from its worldwide search results... In 2015, French regulators rejected Google's position and ordered the company to remove material from all of its results pages. Google then asked Europe's highest court to reject that view. The company argues that no one country should be able to censor the web internationally."

No one corporation should be able to censor the web globally, either. Meanwhile, Radio Canada International reported:

"A new poll shows a slim majority of Canadians agree with the concept known as the “right to be forgotten online.” This means the right to have outdated, inaccurate, or no longer relevant information about yourself removed from search engine results. The poll by the Angus Reid Institute found 51 percent of Canadians agree that people should have the right to be forgotten..."

Consumers should have control over their information. If that control is limited to only the country of their residence, then the global nature of the internet means that control is very limited -- and probably irrelevant. What are your opinions?


After Promises To Stop, Mobile Providers Continued Sales Of Location Data About Consumers. What You Can Do To Protect Your Privacy

Sadly, history repeats itself. First, the history: after getting caught selling consumers' real-time GPS location data without notice and without consent, mobile providers promised in 2018 to stop the practice. The Ars Technica blog reported in June 2018:

"Verizon and AT&T have promised to stop selling their mobile customers' location information to third-party data brokers following a security problem that leaked the real-time location of US cell phone users. Senator Ron Wyden (D-Ore.) recently urged all four major carriers to stop the practice, and today he published responses he received from Verizon, AT&T, T-Mobile USA, and SprintWyden's statement praised Verizon for "taking quick action to protect its customers' privacy and security," but he criticized the other carriers for not making the same promise... AT&T changed its stance shortly after Wyden's statement... Senator Wyden recognized AT&T's change on Twitter and called on T-Mobile and Sprint to follow suit."

Kudos to Senator Wyden. The other mobile providers soon complied... sort of.

Second, some background: real-time location data is very valuable stuff. It indicates where you are as you (with your phone or other mobile devices) move about the physical world in your daily routine. No delays. No lag. Yes, there are appropriate uses for real-time GPS location data -- such as by law enforcement to quickly find a kidnapped person or child before further harm happens. But, do any and all advertisers need real-time location data about consumers? Data brokers? Others?

I think not. Domestic violence and stalking victims probably would not want their, nor their children's, real-time location data resold publicly. Most parents would not want their children's location data resold publicly. Most patients probably would not want their location data broadcast every time they visit their physician, specialist, rehab, or a hospital. Corporate executives, government officials, and attorneys conducting sensitive negotiations probably wouldn't want their location data collected and resold, either.

So, most consumers probably don't want their real-time location data resold publicly. Well, some of you make location-specific announcements via posts on social media. That's your choice, but I conclude that most people don't. Consumers want control over their location information so they can decide if, when, and with whom to share it. The mass collection and sales of consumers' real-time location data by mobile providers prevents choice -- and it violates persons' privacy.

Third, fast forward seven months from 2018. TechCrunch reported on January 9th:

"... new reporting by Motherboard shows that while [reseller] LocationSmart faced the brunt of the criticism [in 2018], few focused on the other big player in the location-tracking business, Zumigo. A payment of $300 and a phone number was enough for a bounty hunter to track down the participating reporter by obtaining his location using Zumigo’s location data, which was continuing to pay for access from most of the carriers. Worse, Zumigo sold that data on — like LocationSmart did with Securus — to other companies, like Microbilt, a Georgia-based credit reporting company, which in turn sells that data on to other firms that want that data. In this case, it was a bail bond company, whose bounty hunter was paid by Motherboard to track down the reporter — with his permission."

"Everyone seemed to drop the ball. Microbilt said the bounty hunter shouldn’t have used the location data to track the Motherboard reporter. Zumigo said it didn’t mind location data ending up in the hands of the bounty hunter, but still cut Microbilt’s access. But nobody quite dropped the ball like the carriers, which said they would not to share location data again."

The TechCrunch article rightly held offending mobile providers accountable. For example, T-Mobile chief executive John Legere tweeted a promise last year to end the practice. Then, last week, Legere tweeted that T-Mobile would wind down its location-data deals "the right way."

The right way? In my view, real-time location never should have been collected and resold. Almost a year after reports first surfaced, T-Mobile is finally getting around to stopping the practice and terminating its relationships with location data resellers -- two months from now. Why not announce this slow wind-down last year when the issue first surfaced? "Emergency assistance" is the reason we are supposed to believe. Yeah, right.

The TechCrunch article rightly took AT&T and Verizon to task, too. Good. I strongly encourage everyone to read the entire TechCrunch article.

What can consumers make of this? There seem to be several takeaways:

  1. Transparency is needed, since corporate privacy policies don't list all (or often any) business partners. This lack of transparency provides an easy way for mobile providers to resume location data sales without notice to anyone and without consumers' consent;
  2. Corporate executives will say anything in tweets and social media. A healthy dose of skepticism by consumers and regulators is wise;
  3. Consumers can't trust mobile providers. They are happy to make money selling consumers' real-time location data, regardless of consumers' desire that their data not be collected and sold;
  4. Data brokers and credit reporting agencies want consumers' location data;
  5. To ensure privacy, consumers also must take action: adjust the privacy settings on your phones to limit or deny mobile apps access to your location data. I did. It's not hard. Do it today; and
  6. Oversight is needed, since a) mobile providers have, at best, sloppy to minimal internal processes to prevent location data sales; and b) data brokers and others are readily available to enable and facilitate location data transactions.

I cannot over-emphasize #5 above. What issues or takeaways do you see? What are your opinions about real-time location data?


Samsung Phone Owners Unable To Delete Facebook And Other Apps. Anger And Privacy Concerns Result

Some consumers have learned that they can't delete Facebook and other mobile apps from their Samsung smartphones. Bloomberg described one consumer's experiences:

"Winke bought his Samsung Galaxy S8, an Android-based device that comes with Facebook’s social network already installed, when it was introduced in 2017. He has used the Facebook app to connect with old friends and to share pictures of natural landscapes and his Siamese cat -- but he didn’t want to be stuck with it. He tried to remove the program from his phone, but the chatter proved true -- it was undeletable. He found only an option to "disable," and he wasn’t sure what that meant."

Samsung phones operate using Google's Android operating system (OS). The "chatter" refers to online complaints by Samsung phone owners, and there were plenty: some snarky, some informative, some outright angry. One person reminded consumers of a bigger privacy issue with Android OS phones.

And, that privacy concern still exists. Sophos Labs reported:

"Advocacy group Privacy International announced the findings in a presentation at the 35th Chaos Computer Congress late last month. The organization tested 34 apps and documented the results, as part of a downloadable report... 61% of the apps tested automatically tell Facebook that a user has opened them. This accompanies other basic event data such as an app being closed, along with information about their device and suspected location based on language and time settings. Apps have been doing this even when users don’t have a Facebook account, the report said. Some apps went far beyond basic event information, sending highly detailed data. For example, the travel app Kayak routinely sends search information including departure and arrival dates and cities, and numbers of tickets (including tickets for children)."

After multiple data breaches and privacy snafus, some Facebook users have decided to either quit the Facebook mobile app or quit the service entirely. Now, some Samsung phone users have learned that quitting can be more difficult, and they don't have as much control over their devices as they thought.

How did this happen? Bloomberg explained:

"Samsung, the world’s largest smartphone maker, said it provides a pre-installed Facebook app on selected models with options to disable it, and once it’s disabled, the app is no longer running. Facebook declined to provide a list of the partners with which it has deals for permanent apps, saying that those agreements vary by region and type... consumers may not know if Facebook is pre-loaded unless they specifically ask a customer service representative when they purchase a phone."

Not good. So, now we know that there are two classes of mobile apps: 1) pre-installed and 2) permanent. Pre-installed apps come on new devices. Some pre-installed apps can be deleted by users. Permanent mobile apps are pre-installed apps which cannot be removed/deleted by users. Users can only disable permanent apps.

Sadly, there's more, and it's not only Facebook. Bloomberg cited other agreements:

"A T-Mobile US Inc. list of apps built into its version of the Samsung Galaxy S9, for example, includes the social network as well as Amazon.com Inc. The phone also comes loaded with many Google apps such as YouTube, Google Play Music and Gmail... Other phone makers and service providers, including LG Electronics Inc., Sony Corp., Verizon Communications Inc. and AT&T Inc., have made similar deals with app makers..."

This is disturbing. There seem to be several issues:

  1. Notice: consumers should be informed before purchase of any and all phone apps which can't be removed. The presence of permanent mobile apps suggests either a lack of notice, notice buried within legal language of phone manufacturers' user agreements, or both.
  2. Privacy: just because a mobile app is disabled doesn't mean it has stopped operating. Stealth apps can still collect GPS location and device information while running in the background, and then transmit it to their makers. Hopefully, some enterprising technicians or testing labs will verify independently whether "disabled" permanent mobile apps have truly stopped working.
  3. Transparency: phone manufacturers should explain and publish their lists of partners with both pre-installed and permanent app agreements -- for each device model. Otherwise, consumers cannot make informed purchase decisions about phones.
  4. Scope: the Samsung-Facebook pre-installed app deal raises questions about other devices with permanent apps: phones, tablets, laptops, smart televisions, and automotive vehicles. Perhaps some independent testing by Consumer Reports can determine a full list of devices with permanent apps.
  5. Nothing is free. Pre-installed app agreements indicate another method which device manufacturers use to make money, by collecting and sharing consumers' data with other tech companies.

The bottom line is trust. Consumers have more valid reasons to distrust some device manufacturers and OS developers. What issues do you see? What are your thoughts about permanent mobile apps?


More Than One Billion Accounts Affected By Data Breaches During 2018

Now that 2019 is here, we can assess 2018. It was a terrible year for identity theft, privacy, and data breaches. Several corporations failed miserably to protect the data they archive about consumers. This included failures within websites and mobile apps. There were so many massive data breaches that it isn't a question of whether or not you were affected.

You were. NordVPN reviewed the failures during 2018:

"If your data wasn’t leaked in 2018, you’re lucky. The information of over a billion people was compromised in 2018 as many of the companies we trust failed to protect our data."

That's billion with a "b." NordVPN provides virtual private network (VPN) services. If you want to use the internet with privacy, a VPN is the way to go. That is especially important for residents of the United States, since the U.S. Federal Communications Commission (FCC) repealed in 2017 both broadband privacy and net neutrality protections for consumers. A December 2017 study of 1,077 voters found that most want net neutrality protections. President Trump signed the privacy-rollback legislation in April 2017. A prior blog post listed many historical abuses of consumers by some ISPs.

PC Magazine reviewed NordVPN earlier this month and concluded:

"... NordVPN has proved itself to be our top service for securing your online activities. The company now has more than 5,100 servers across the globe, making it the largest service we've yet tested. It also takes a strong stance on privacy for its customers and includes tools rarely seen in the competition."

The NordVPN article listed the major corporate data security failures during 2018. Frequent readers of this blog are familiar with the breaches. Chances are you use one or more of the services. Below is a partial list:

  • Marriott: 500 million guests
  • Twitter: 330 million accounts
  • My Fitness Pal: 150 million accounts
  • Facebook: 147 million accounts
  • Quora: 100 million accounts
  • Firebase: 100 million accounts
  • Google+: 500,000 accounts
  • British Airways: 380,000 accounts

While Google is closing its Google+ service, that is little help for breach victims whose personal data is out in the wild. The massive Equifax breach affecting 145.5 million persons isn't on the list because it happened in 2017. It's important to remember Equifax because persons cannot opt out of Equifax, or any of the other credit reporting agencies. Ain't corporate welfare nice?

What can consumers do to protect themselves and their sensitive personal and payment information? NordVPN advised:

  1. "Use strong and unique passwords.
  2. Think twice before posting anything on social media. This information can be used against you.
  3. If you shop online, use a credit card. You will have less liability for fraudulent charges if your financial information leaks.
  4. Provide companies only with necessary information. The less information they have, the less they can leak.
  5. Look out for fraud. If notified that your data was leaked, change your passwords and take the steps advised by the company that compromised your data."

Well, there you go. That's a good starter list for consumers to protect themselves. Do it because your personal data is out in the wild. The only question is which bad actor is abusing it.
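
As a quick illustration of item #1 above, Python's standard secrets module can generate strong, unique passwords. The helper below is a sketch, not a substitute for a good password manager.

    import secrets
    import string

    def make_password(length: int = 20) -> str:
        # Draw each character from a cryptographically secure RNG.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(make_password())  # a different, hard-to-guess password every run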


A Series Of Recent Events And Privacy Snafus At Facebook Cause Multiple Concerns. Does Facebook Deserve Users' Data?

So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th, that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix accessed Facebook users' private messages, nor asked for that access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management with its partners and developers? That's an understatement. During March and April of 2018 we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly resell later) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened so it doesn't happen again? This post-breach response seems sloppy. And, the "bug" description seems disingenuous. Anytime persons -- in this case developers -- have access to data they shouldn't have, it is a data breach.
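
For readers wondering what a photo API "bug" of this kind might look like, here is a hedged sketch: an endpoint that fails to filter photos by the permission scope each app was actually granted. All names are hypothetical; Facebook has not published the faulty code.

    from dataclasses import dataclass

    @dataclass
    class Photo:
        category: str  # e.g., "timeline", "marketplace", "stories", "unposted"
        url: str

    def photos_for_app(user_photos, granted_scopes):
        # Correct behavior: return photos only when the photo scope was
        # granted, and only photos posted to the user's timeline. The buggy
        # endpoint reportedly also returned Marketplace, Stories, and even
        # unposted uploads, regardless of what the user agreed to share.
        if "user_photos" not in granted_scopes:
            return []
        return [p for p in user_photos if p.category == "timeline"]

    # An app granted only basic profile access should receive no photos.
    photos = [Photo("timeline", "a.jpg"), Photo("unposted", "b.jpg")]
    print(photos_for_app(photos, {"public_profile"}))  # -> []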

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage nor control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. Reports surfaced of many users experiencing repeated bogus Friend Requests, due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook had approached several major banks, asking them to share detailed financial information about their customers in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. 9to5Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content, nor stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users (emphasis added):

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to operate several types of attacks (e.g., spam, phishing, etc.) against Facebook users. The stolen data allows foreign spy agencies to collect useful information to target persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggests not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question (emphasis added):

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and What's App accounts, since Facebook operates those services, too. If you want to quit the service but suffer with FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And, your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, nor use its voting and ballot previews content. Long ago I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post on Facebook links to posts in this blog, since it is helpful information for many Facebook users. In what ways have you reduced your usage of Facebook?


China Blamed For Cyberattack In The Gigantic Marriott-Starwood Hotels Data Breach

An update on the gigantic Marriott-Starwood data breach, in which details about 500 million guests were stolen. The New York Times reported that the cyberattack:

"... was part of a Chinese intelligence-gathering effort that also hacked health insurers and the security clearance files of millions more Americans, according to two people briefed on the investigation. The hackers, they said, are suspected of working on behalf of the Ministry of State Security, the country’s Communist-controlled civilian spy agency... While American intelligence agencies have not reached a final assessment of who performed the hacking, a range of firms brought in to assess the damage quickly saw computer code and patterns familiar to operations by Chinese actors... China has reverted over the past 18 months to the kind of intrusions into American companies and government agencies that President Barack Obama thought he had ended in 2015 in an agreement with Mr. Xi. Geng Shuang, a spokesman for China’s Ministry of Foreign Affairs, denied any knowledge of the Marriott hacking..."

Why would any country's intelligence agency want to hack a hotel chain's database?

"The Marriott database contains not only credit card information but passport data. Lisa Monaco, a former homeland security adviser under Mr. Obama, noted last week at a conference that passport information would be particularly valuable in tracking who is crossing borders and what they look like, among other key data."

Also, context matters. First, this corporate acquisition was (thankfully) blocked:

"The effort to amass Americans’ personal information so alarmed government officials that in 2016, the Obama administration threatened to block a $14 billion bid by China’s Anbang Insurance Group Co. to acquire Starwood Hotel & Resorts Worldwide, according to one former official familiar with the work of the Committee on Foreign Investments in the United States, a secretive government body that reviews foreign acquisitions..."

Later that year, Marriott Hotels acquired Starwood for $13.6 billion. Second, remember the massive government data breach in 2014 at the Office of Personnel Management (OPM). The New York Times added that the Marriott breach:

"... was only part of an aggressive operation whose centerpiece was the 2014 hacking into the Office of Personnel Management. At the time, the government bureau loosely guarded the detailed forms that Americans fill out to get security clearances — forms that contain financial data; information about spouses, children and past romantic relationships; and any meetings with foreigners. Such information is exactly what the Chinese use to root out spies, recruit intelligence agents and build a rich repository of Americans’ personal data for future targeting..."

Not good. And, this is not the first time concerns about China have been raised. Reports surfaced in 2016 about malware installed in the firmware of smartphones running the Android operating system (OS) software. In 2015, China enacted a new "secure and controllable" security law, which many security experts viewed then as a method to ensure that back doors were built into computing products and devices during the manufacturing and assembly process.

And, even if China's MSS didn't do this massive cyberattack, it could have been another country's intelligence agency. Not good either.

Regardless of who the attackers were, this incident is a huge reminder to executives in government and in the private sector to secure their computer systems. Hopefully, executives at major hotel chains -- especially those frequented by government officials and military members -- now realize that their systems are high-value targets.