New Vermont Law Regulating Data Brokers Drives 121 Businesses From The Shadows

In May 2018, Vermont became the first (and so far only) state in the nation to enact a law regulating data brokers. According to the Vermont Secretary of State, a data broker is defined as:

"... a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship."

The Vermont Secretary of State's website contains links to the new law and more. This new law is important for several reasons. First, many businesses operate as data brokers. Second, consumers historically haven't known who has information about them, nor how to review their profiles for accuracy. Third, consumers haven't been able to opt out of the data collection. Fourth, if you don't know who the data brokers are, you can't hold them accountable when they fail at data security. According to Vermont law:

"2447. Data broker duty to protect information; standards; technical requirements (a) Duty to protect personally identifiable information. (1) A data broker shall develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate... identification and assessment of reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of any electronic, paper, or other records containing personally identifiable information, and a process for evaluating and improving, where necessary, the effectiveness of the current safeguards for limiting such risks... taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect personally identifiable information consistent with applicable law; and (B) requiring third-party service providers by contract to implement and maintain appropriate security measures for personally identifiable information..."

Before this law, there was little to no oversight, no regulation, and no responsibility for data brokers to adequately protect sensitive data about consumers. A federal bill proposed in 2014 went nowhere in the U.S. Senate. You can assume that many data brokers operate in your state, too, since there's plenty of money to be made in the industry.

Portions of the new Vermont law went into effect in May, and the remainder went into effect on January 1, 2019. What has happened since then? Fast Company reported:

"So far, 121 companies have registered, according to data from the Vermont secretary of state’s office... The list of active companies includes divisions of the consumer data giant Experian, online people search engines like Spokeo and Spy Dialer, and a variety of lesser-known organizations that do everything from help landlords research potential tenants to deliver marketing leads to the insurance industry..."

The Fast Company site lists the 121 (so far) registered data brokers in Vermont. Regular readers of this blog will recognize some of the data brokers by name, since prior posts covered Acxiom, Equifax, Experian, LexisNexis, the NCTUE, Oracle, Spokeo, TransUnion, and others. (Yes, both credit reporting agencies and social media firms also operate as data brokers. Some state governments do, too.) Reportedly, many privacy advocates support the new law:

"There’s companies that I’ve never heard of before," says Zachary Tomanelli, communications and technology director at the Vermont Public Interest Research Group, which supported the law. "It’s often very cumbersome [for consumers] to know where the places are that you have to go, and how you opt out."

Predictably, the industry has opposed (and continues to oppose) the legislation:

"A coalition of industry groups like the Internet Association, the Association of National Advertisers, and the National Association of Professional Background Screeners, as well as now registered data brokers such as Experian, Acxiom, and IHS Markit, said the law was unnecessary... Requiring companies to disclose breaches of largely public data could be burdensome for businesses and needlessly alarming for consumers, they argue... Other companies, like Axciom, have complained that the law establishes inconsistent boundaries around personal data used by third parties, and the first-party data used by companies like Facebook and Google."

So, none of these companies want consumers to own and control the data -- property -- that describes them. Real property laws matter. To learn more, read about data brokers at the Privacy Rights Clearinghouse site. Related posts are available in the Data Brokers section of this blog.

Kudos to Vermont lawmakers for ensuring more disclosures and transparency from the industry. Readers may ask their elected officials why their state has not taken similar action. What are your opinions of the new Vermont law?


Sackler Embraced Plan to Conceal OxyContin’s Strength From Doctors, Sealed Testimony Shows

[Editor's note: today's guest post explores issues within the pharmaceuticals and drug industry. It is reprinted with permission.]

By David Armstrong, ProPublica

In May 1997, the year after Purdue Pharma launched OxyContin, its head of sales and marketing sought input on a key decision from Dr. Richard Sackler, a member of the billionaire family that founded and controls the company. Michael Friedman told Sackler that he didn’t want to correct the false impression among doctors that OxyContin was weaker than morphine, because the myth was boosting prescriptions — and sales.

“It would be extremely dangerous at this early stage in the life of the product,” Friedman wrote to Sackler, “to make physicians think the drug is stronger or equal to morphine….We are well aware of the view held by many physicians that oxycodone [the active ingredient in OxyContin] is weaker than morphine. I do not plan to do anything about that.”

“I agree with you,” Sackler responded. “Is there a general agreement, or are there some holdouts?”

Ten years later, Purdue pleaded guilty in federal court to understating the risk of addiction to OxyContin, including failing to alert doctors that it was a stronger painkiller than morphine, and agreed to pay $600 million in fines and penalties. But Sackler’s support of the decision to conceal OxyContin’s strength from doctors — in email exchanges both with Friedman and another company executive — was not made public.

The email threads were divulged in a sealed court document that ProPublica has obtained: an Aug. 28, 2015, deposition of Richard Sackler. Taken as part of a lawsuit by the state of Kentucky against Purdue, the deposition is believed to be the only time a member of the Sackler family has been questioned under oath about the illegal marketing of OxyContin and what family members knew about it. Purdue has fought a three-year legal battle to keep the deposition and hundreds of other documents secret, in a case brought by STAT, a Boston-based health and medicine news organization; the matter is currently before the Kentucky Supreme Court.

Meanwhile, interest in the deposition’s contents has intensified, as hundreds of cities, counties, states and tribes have sued Purdue and other opioid manufacturers and distributors. A House committee requested the document from Purdue last summer as part of an investigation of drug company marketing practices.

In a statement, Purdue stood behind Sackler’s testimony in the deposition. Sackler, it said, “supports that the company accurately disclosed the potency of OxyContin to healthcare providers.” He “takes great care to explain” that the drug’s label “made clear that OxyContin is twice as potent as morphine,” Purdue said.

Still, Purdue acknowledged, it had made a “determination to avoid emphasizing OxyContin as a powerful cancer pain drug,” out of “a concern that non-cancer patients would be reluctant to take a cancer drug.”

The company, which said it was also speaking on behalf of Sackler, deplored what it called the “intentional leak of the deposition” to ProPublica, calling it “a clear violation of the court’s order” and “regrettable.”

Much of the questioning of Sackler in the 337-page deposition focused on Purdue’s marketing of OxyContin, especially in the first five years after the drug’s 1996 launch. Aggressive marketing of OxyContin is blamed by some analysts for fostering a national crisis that has resulted in 200,000 overdose deaths related to prescription opioids since 1999.

Taken together with a Massachusetts complaint made public last month against Purdue and eight Sacklers, including Richard, the deposition underscores the family’s pivotal role in developing the business strategy for OxyContin and directing the hiring of an expanded sales force to implement a plan to sell the drug at ever-higher doses. Documents show that Richard Sackler was especially involved in the company’s efforts to market the drug, and that he pushed staff to pursue OxyContin’s deregulation in Germany. The son of a Purdue co-founder, he began working at Purdue in 1971 and has been at various times the company’s president and co-chairman of its board.

In a 1996 email introduced during the deposition, Sackler expressed delight at the early success of OxyContin. “Clearly this strategy has outperformed our expectations, market research and fondest dreams,” he wrote. Three years later, he wrote to a Purdue executive, “You won’t believe how committed I am to make OxyContin a huge success. It is almost that I dedicated my life to it. After the initial launch phase, I will have to catch up with my private life again.”

During his deposition, Sackler defended the company’s marketing strategies — including some Purdue had previously acknowledged were improper — and offered benign interpretations of emails that appeared to show Purdue executives or sales representatives minimizing the risks of OxyContin and its euphoric effects. He denied that there was any effort to deceive doctors about the potency of OxyContin and argued that lawyers for Kentucky were misconstruing words such as “stronger” and “weaker” used in email threads.

The term “stronger” in Friedman’s email, Sackler said, “meant more threatening, more frightening. There is no way that this intended or had the effect of causing physicians to overlook the fact that it was twice as potent.”

Emails introduced in the deposition show Sackler’s hidden role in key aspects of the 2007 federal case in which Purdue pleaded guilty. A 19-page statement of facts that Purdue admitted to as part of the plea deal, and which prosecutors said contained the “main violations of law revealed by the government’s criminal investigation,” referred to Friedman’s May 1997 email to Sackler about letting the doctors’ misimpression stand. It did not identify either man by name, attributing the statements to “certain Purdue supervisors and employees.”

Friedman, who by then had risen to chief executive officer, was one of three Purdue executives who pleaded guilty to a misdemeanor of “misbranding” OxyContin. No members of the Sackler family were charged or named as part of the plea agreement. The Massachusetts lawsuit alleges that the Sackler-controlled Purdue board voted that the three executives, but no family members, should plead guilty as individuals. After the case concluded, the Sacklers were concerned about maintaining the allegiance of Friedman and another of the executives, according to the Massachusetts lawsuit. To protect the family, Purdue paid the two executives at least $8 million, that lawsuit alleges.

“The Sacklers spent millions to keep the loyalty of people who knew the truth,” the complaint filed by the Massachusetts attorney general alleges.

The Kentucky deposition’s contents will likely fuel the growing protests against the Sacklers, including pressure to strip the family’s name from cultural and educational institutions to which it has donated. The family has been active in philanthropy for decades, giving away hundreds of millions of dollars. But the source of its wealth received little attention until recent years, in part due to a lack of public information about what the family knew about Purdue’s improper marketing of OxyContin and false claims about the drug’s addictive nature.

Although Purdue has been sued hundreds of times over OxyContin’s marketing, the company has settled many of these cases, and almost never gone to trial. As a condition of settlement, Purdue has often required a confidentiality agreement, shielding millions of records from public view.

That is what happened in Kentucky. In December 2015, the state settled its lawsuit against Purdue, alleging that the company created a “public nuisance” by improperly marketing OxyContin, for $24 million. The settlement required the state attorney general to “completely destroy” documents in its possession from Purdue. But that condition did not apply to records sealed in the circuit court where the case was filed. In March 2016, STAT filed a motion to make those documents public, including Sackler’s deposition. The Kentucky Court of Appeals last year upheld a lower court ruling ordering the deposition and other sealed documents be made public. Purdue asked the state Supreme Court to review the decision, and both sides recently filed briefs. Protesters outside Kentucky’s Capitol last week waved placards urging the court to release the deposition.

Sackler family members have long constituted the majority of Purdue’s board, and company profits flow to trusts that benefit the extended family. During his deposition, which took place over 11 hours in a law office in Louisville, Kentucky, Richard Sackler said “I don’t know” more than 100 times, including when he was asked how much his family had made from OxyContin sales. He acknowledged it was more than $1 billion, but when asked if they had made more than $5 billion, he said, “I don’t know.” Asked if it was more than $10 billion, he replied, “I don’t think so.”

By 2006, OxyContin’s “profit contribution” to Purdue was $4.7 billion, according to a document read at the deposition. From 2007 to 2018, the Sackler family received more than $4 billion in payouts from Purdue, according to the Massachusetts lawsuit.

During the deposition, Sackler was confronted with his email exchanges with company executives about Purdue’s decision not to correct the misperception among many doctors that OxyContin was weaker than morphine. The company viewed this as good news because the softer image of the drug was helping drive sales in the lucrative market for treating conditions like back pain and arthritis, records produced at the deposition show.

Designed to gradually release medicine into the bloodstream, OxyContin allows patients to take fewer pills than they would with other, quicker-acting pain medicines, and its effect lasts longer. But to accomplish these goals, more narcotic is packed into an OxyContin pill than competing products. Abusers quickly figured out how to crush the pills and extract the large amount of narcotic. They would typically snort it or dissolve it into liquid form to inject.

The pending Massachusetts lawsuit against Purdue accuses Sackler and other company executives of determining that “doctors had the crucial misconception that OxyContin was weaker than morphine, which led them to prescribe OxyContin much more often.” It also says that Sackler “directed Purdue staff not to tell doctors the truth,” for fear of reducing sales. But it doesn’t reveal the contents of the email exchange with Friedman, the link between that conversation and the 2007 plea agreement, and the back-and-forth in the deposition.

A few days after the email exchange with Friedman in 1997, Sackler had an email conversation with another company official, Michael Cullen, according to the deposition. “Since oxycodone is perceived as being a weaker opioid than morphine, it has resulted in OxyContin being used much earlier for non-cancer pain,” Cullen wrote to Sackler. “Physicians are positioning this product where Percocet, hydrocodone and Tylenol with codeine have been traditionally used.” Cullen then added, “It is important that we be careful not to change the perception of physicians toward oxycodone when developing promotional pieces, symposia, review articles, studies, et cetera.”

“I think that you have this issue well in hand,” Sackler responded.

Friedman and Cullen could not be reached for comment.

Asked at his deposition about the exchanges with Friedman and Cullen, Sackler didn’t dispute the authenticity of the emails. He said the company was concerned that OxyContin would be stigmatized like morphine, which he said was viewed only as an “end of life” drug that was frightening to people.

“Within this time it appears that people had fallen into a habit of signifying less frightening, less threatening, more patient acceptable as under the rubric of weaker or more frightening, more — less acceptable and less desirable under the rubric or word ‘stronger,’” Sackler said at his deposition. “But we knew that the word ‘weaker’ did not mean less potent. We knew that the word ‘stronger’ did not mean more potent.” He called the use of those words “very unfortunate.”

He said Purdue didn’t want OxyContin “to be polluted by all of the bad associations that patients and healthcare givers had with morphine.”

In his deposition, Sackler also defended sales representatives who, according to the statement of facts in the 2007 plea agreement, falsely told doctors during the 1996-2001 period that OxyContin did not cause euphoria or that it was less likely to do so than other opioids. This euphoric effect experienced by some patients is part of what can make OxyContin addictive. Yet, asked about a 1998 note written by a Purdue salesman, who indicated that he “talked of less euphoria” when promoting OxyContin to a doctor, Sackler argued it wasn’t necessarily improper.

“This was 1998, long before there was an Agreed Statement of Facts,” he said.

The lawyer for the state asked Sackler: “What difference does that make? If it’s improper in 2007, wouldn’t it be improper in 1998?”

“Not necessarily,” Sackler replied.

Shown another sales memo, in which a Purdue representative reported telling a doctor that “there may be less euphoria” with OxyContin, Sackler responded, “We really don’t know what was said.” After further questioning, Sackler said the claim that there may be less euphoria “could be true, and I don’t see the harm.”

The same issue came up regarding a note written by a Purdue sales representative about one doctor: “Got to convince him to counsel patients that they won’t get buzzed as they will with short-acting” opioid painkillers. Sackler defended these comments as well. “Well, what it says here is that they won’t get a buzz. And I don’t think that telling a patient ‘I don’t think you’ll get a buzz’ is harmful,” he said.

Sackler added that the comments from the representative to the doctor “actually could be helpful, because many patients won’t get a buzz, and if he would like to know if they do, he might have had a good medical reason for wanting to know that.”

Sackler said he didn’t believe any of the company sales people working in Kentucky engaged in the improper conduct described in the federal plea deal. “I don’t have any facts to inform me otherwise,” he said.

Purdue said that Sackler’s statements in his deposition “fully acknowledge the wrongful actions taken by some of Purdue’s employees prior to 2002,” as laid out in the 2007 plea agreement. Both the company and Sackler “fully agree” with the facts laid out in that case, Purdue said.

The deposition also reveals that Sackler pushed company officials to find out if German officials could be persuaded to loosen restrictions on the selling of OxyContin. In most countries, narcotic pain relievers are regulated as “controlled” substances because of the potential for abuse. Sackler and other Purdue executives discussed the possibility of persuading German officials to classify OxyContin as an uncontrolled drug, which would likely allow doctors to prescribe the drug more readily — for instance, without seeing a patient. Fewer rules were expected to translate into more sales, according to company documents disclosed at the deposition.

One Purdue official warned Sackler and others that it was a bad idea. Robert Kaiko, who developed OxyContin for Purdue, wrote to Sackler, “If OxyContin is uncontrolled in Germany, it is highly likely that it will eventually be abused there and then controlled.”

Nevertheless, Sackler asked a Purdue executive in Germany for projections of sales with and without controls. He also wondered whether, if one country in the European Union relaxed controls on the drug, others might do the same. When finally informed that German officials had decided the drug would be controlled like other narcotics, Sackler asked in an email if the company could appeal. Told that wasn’t possible, he wrote back to an executive in Germany, “When we are next together we should talk about how this idea was raised and why it failed to be realized. I thought that it was a good idea if it could be done.”

Asked at the deposition about that comment, Sackler responded, “That’s what I said, but I didn’t mean it. I just wanted to be encouraging.” He said he really “was not in favor of” loosening OxyContin regulation and was simply being “polite” and “solicitous” of his own employee.

Near the end of the deposition — after showing Sackler dozens of emails, memos and other records regarding the marketing of OxyContin — a lawyer for Kentucky posed a fundamental question.

“Sitting here today, after all you’ve come to learn as a witness, do you believe Purdue’s conduct in marketing and promoting OxyContin in Kentucky caused any of the prescription drug addiction problems now plaguing the Commonwealth?” he asked.

Sackler replied, “I don’t believe so.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Federal Reserve Enforcement Action Against Banking Executives

Last month, the Federal Reserve Board (FRB) announced several notable enforcement actions. A February 5th press release discussed a:

"Consent Notice of Suspension and Prohibition against Fred Daibes, former Chairman of Mariner's Bancorp, Edgewater, New Jersey, for perpetuating a fraudulent loan scheme, according to a federal indictment."

The order against Daibes described the violations:

"... on October 30, 2018, a federal grand jury in the United States District Court for the District of New Jersey charged [Diabes] and an accomplice by indictment with one count conspiracy to misapply bank funds and to make false entries to deceive a financial institution and the FDIC, five counts of misapplying bank funds, six counts of making false entries to decide a financial institution and the FDIC, and one count of causing reliance on a false document to influence the FDIC... During the relevant time period, Mariner’s was subject to federal banking regulations that placed limits on the amount of money that the Bank could lend to a single borrower... the Indictment charges that in about January 2008 to December 2013, Daibes and others orchestrated a nominee loan scheme designed to circumvent the Lending Limits by ensuring that millions of dollars in loans made by the Bank (the “Nominee Loans”) flowed from the nominees to Daibes, while concealing Daibes’ beneficial interests in those loans from both the Bank and the FDIC. Daibes recruited nominees to make materially false and misleading statements and material omissions..."

The FRB and the U.S. Federal Deposit Insurance Corporation (FDIC) are two of several federal agencies that oversee and regulate the banking industry in the United States. The order bars Daibes from working in the banking industry.

Then, a February 7th FRB press release discussed a:

"Consent Prohibition against Alison Keefe, former employee of SunTrust Bank, Atlanta, Georgia, for violating bank overdraft policies for her own benefit."

The order against Keefe described the violations:

"... between September 2017 and May 2018, while employed as the manager of the Bank’s Hilltop Branch in Virginia Beach, Virginia, Keefe repeatedly overdrew her personal checking account at the Bank and instructed Bank staff, without authorization and contrary to Bank policies, to honor the overdrafts... Keefe’s misconduct described above constituted unsafe or unsound banking practices and demonstrated a reckless disregard for the safety and soundness of the Bank..."

Keefe was fired by the bank on July 12, 2018, and has repaid the bank. The order bars Keefe from working within the banking industry.

A February 21st press release discussed the agency's enforcement action against a former manager at J.P. Morgan Chase bank. The FRB:

"... permanently barred from the banking industry Timothy Fletcher, a former managing director at a non-bank subsidiary of J.P. Morgan Chase & Co. Fletcher consented to the prohibition, which includes allegations that he improperly administered a referral hiring program at the firm by offering internships and other employment opportunities to individuals referred by foreign officials, clients, and prospective clients in order to obtain improper business advantages for the firm. The FRB is also requiring Fletcher to cooperate in any pending or prospective enforcement action against other individuals who are or were affiliated with the firm. The firm was previously fined $61.9 million by the Board relating to this program. In addition, the Department of Justice and the Securities and Exchange Commission have also fined the firm."

The $61.9 million fine was levied against J.P. Morgan Chase in November 2016. Back then, the FRB found that the bank:

"... did not have adequate enterprise-wide controls to ensure that referred candidates were appropriately vetted and hired in accordance with applicable anti-bribery laws and firm policies. The Federal Reserve's order requires J.P. Morgan Chase to enhance the effectiveness of senior management oversight and controls relating to the firm's referral hiring practices and anti-bribery policies. The Federal Reserve is also requiring the firm to cooperate in its investigation of the individuals..."

Last month's order against Fletcher described the violations:

"... from at least 2008 until 2013 [Fletcher] engaged in unsafe and unsound practices, breaches of fiduciary duty, and violations of law related to his involvement in the Firm’s referral hiring program for the Asia-Pacific region investment bank, whereby candidates who were referred, directly or indirectly, by foreign government officials and existing or prospective commercial clients were offered internships, training, and other employment opportunities in order to obtain improper business advantages for the Firm... the Firm’s internal policies prohibited Firm employees from giving anything of value, including the offer of internships or training, to certain individuals, including relatives of public officials and relatives and associates of non-government corporate representatives, in order to obtain improper business advantages for the Firm..."

Kudos to the FRB for its enforcement actions. Executives must suffer direct consequences for wrongdoing. After reading this, one wonders why direct consequences are not applied against executives within the social media industry. The behaviors there do just as much damage, and cross borders, too. What are your opinions?


Brave Alerts FTC On Threats From Business Practices With Big Data

The U.S. Federal Trade Commission (FTC) held a "Privacy, Big Data, And Competition" hearing on November 6-8, 2018 as part of its "Competition And Consumer Protection in the 21st Century" series of discussions. During that session, the FTC asked for input on several topics:

  1. "What is “big data”? Is there an important technical or policy distinction to be drawn between data and big data?
  2. How have developments involving data – data resources, analytic tools, technology, and business models – changed the understanding and use of personal or commercial information or sensitive data?
  3. Does the importance of data – or large, complex data sets comprising personal or commercial information – in a firm’s ordinary course operations change how the FTC should analyze mergers or firm conduct? If so, how? Does data differ in importance from other assets in assessing firm or industry conduct?
  4. What structural, behavioral or conduct remedies should the FTC consider when remedying antitrust harm in a market or industry where data or personal or commercial information are a significant product or a key competitive input?
  5. Are there policy recommendations that would facilitate competition in markets involving data or personal or commercial information that the FTC should consider?
  6. Do the presence of personal information or privacy concerns inform or change competition analysis?
  7. How do state, federal, and international privacy laws and regulations, adopted to protect data and consumers, affect competition, innovation, and product offerings in the United States and abroad?"

Brave, the developer of a web browser, submitted comments to the FTC which highlighted two concerns:

"First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate. Second, the GDPR is emerging as a de facto international standard. Whether this helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws."

A letter by Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, described in detail the company's concerns:

"The cross-use and offensive leveraging of personal information from one line of business to another is likely to have anti-competitive effects. Indeed anti-competitive practices may be inevitable when companies with Google’s degree of market dominance update their privacy policies to include the cross-use of personal information. The result is that a company can leverage all the personal information accumulated from its users in one line of business to dominate other lines of business too. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects... The result is that nascent and potential competitors will be stifled, and consumer choice will be limited... The cross-use of data between different lines of business is analogous to the tying of two products. Indeed, tying and cross-use of data can occur at the same time, as Google Chrome’s latest “auto sign in to everything” controversy illustrates..."

Historically, Google let Chrome web browser users decide whether or not to sign in for cross-device usage. The Chrome 69 update forced auto sign-in, but a Chrome 70 update restored users' choice after numerous complaints and criticism.

Regarding topic #7 by the FTC, Brave's response said:

"A de facto international standard appears to be emerging, based on the European Union’s General Data Protection Regulation (GDPR)... the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China bring more than half of global GDP under a common standard. Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies... there is an opportunity in this trend. The United States can assume the global lead by adopting the emerging GDPR standard, and by investing in world-leading regulation that pursues test cases, and defines practical standards..."

Currently, companies collect, archive, share, and sell consumers' personal information at will -- often without notice or consent. While all 50 states (and several U.S. territories) have breach notification laws, most states have not updated those laws to cover biometric and passport data. And while the Health Insurance Portability and Accountability Act (HIPAA) is the federal law that governs healthcare data and related breaches, many consumers share health data with social media sites -- forfeiting HIPAA protections for that data.

Moreover, data collection, archiving, and sharing by telecommunications companies has been an unregulated free-for-all since broadband privacy protections for consumers in the USA were revoked in 2017. Plus, laws have historically focused upon "declared data" (e.g., the data users upload or submit to websites or apps) while ignoring "inferred data" -- attributes derived from behavior, which are arguably just as sensitive and revealing.
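
What does "inferred data" look like? Below is a minimal, purely illustrative Python sketch (the profile fields and inference rules are invented for demonstration) showing how a company can derive sensitive attributes a user never declared, simply from observed behavior:

```python
# Illustrative sketch only: hypothetical rules showing how sensitive
# "inferred data" can be derived from ordinary declared data plus
# observed browsing behavior. All names and rules are made up.
from dataclasses import dataclass, field

@dataclass
class Profile:
    declared: dict    # data the user knowingly submitted
    page_views: list  # observed behavior
    inferred: dict = field(default_factory=dict)

def infer_attributes(profile):
    """Populate attributes the user never disclosed."""
    views = " ".join(profile.page_views).lower()
    if "diabetes" in views:
        profile.inferred["health_interest"] = "diabetes"    # sensitive
    if "maternity" in views:
        profile.inferred["life_event"] = "expecting_child"  # sensitive

profile = Profile(
    declared={"email": "user@example.com"},  # all the user actually gave
    page_views=["/shop/glucose-monitor", "/articles/managing-diabetes"],
)
infer_attributes(profile)
print(profile.inferred)  # {'health_interest': 'diabetes'} -- never declared
```

A law that regulates only the declared email address would leave the inferred health attribute untouched.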

Regarding future federal privacy legislation, Brave added:

"... The GDPR is compatible with a United States view of consumer protection and privacy principles. Indeed, the FTC has proposed important privacy protections to legislators in 2009, and again in 2012 and 2014, which ended up being incorporated in the GDPR. The high-level principles of the GDPR are closely aligned, and often identical to, the United States’ privacy principles... The GDPR also incorporates principles endorsed by the U.S. in the 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data; and the principles endorsed by the United States this year, in Article 19.8 (3) of the new United States-Mexico-Canada Agreement."

"The GDPR differs from established United States privacy principles in its explicit reference to “proportionality” as a precondition of data use, and in its more robust approach to data minimization and to purpose specification. In our view, a federal law should incorporate these elements too. We also recommend that federal law should adopt the GDPR definitions of concepts such as “personal data”, “legal basis” including opt-in “consent”, “processing”, “special category personal data”, ”profiling”, “data controller”, “automated decision making”, “purpose limitation”, and so forth, and tools such as data protection impact assessments, breach notification, and records of processing activities."

"In keeping with the fair information practice principles (FIPPs) of the 1974 US Privacy Act, Brave recommends that a federal law should require that the collection of personal information is subject to purpose specification. This means that personal information shall only be collected for specific and explicit purposes. Personal information should not used beyond those purposes without consent, unless a further purpose is poses no risk of harm and is compatible with the initial purpose, in which case the data subject should have the opportunity to opt-out."

Submissions by Brave and others are available to the public at the FTC website in the "Public Comments" section.


Google To End Forced Arbitration For Employees

This news item caught my attention. Axios reported:

"Google will no longer require current and future employees to take disputes with the company to arbitration, it said on February 21st... After protests last year, the search giant ended mandatory arbitration for individual cases of sexual harassment or assault for employees. Employees have called for the practice to end in other cases of harassment and discrimination. Google appears to be meeting that demand for employees — but the change will not apply in the same blanket way to the many contractors, vendors and temporary employees it uses."

Reportedly, the change will take effect on March 21, 2019.


Study: Privacy Concerns Have Caused Consumers To Change How They Use The Internet

Facebook commissioned a study by the Economist Intelligence Unit (EIU) to understand "internet inclusion" globally, or how people use the Internet, the benefits received, and the obstacles experienced. The latest survey included 5,069 respondents from 100 countries in Asia-Pacific, the Americas, Europe, the Middle East, North Africa and Sub-Saharan Africa.

Overall findings in the report cited:

"... cause for both optimism and concern. We are seeing steady progress in the number and percentage of households connected to the Internet, narrowing the gender gap and improving accessibility for people with disabilities. The Internet also has become a crucial tool for employment and obtaining job-related skills. On the other hand, growth in Internet connections is slowing, especially among the lowest income countries, and efforts to close the digital divide are stalling..."

The EIU describes itself as "the world leader in global business intelligence," helping companies, governments, and banks understand how the world is changing, seize opportunities created by those changes, and manage associated risks. So, any provider of social media services globally would greatly value the EIU's services.

The chart below highlights some of the benefits mentioned by survey respondents:

[Chart: benefits of Internet use cited by survey respondents (EIU, 2019)]

Respondents cited other benefits, too: almost three-quarters (74.4%) said the Internet is more effective than other methods for finding jobs, and 70.5% said their job prospects have improved due to the Internet. So, job seekers and employers both benefit.

Key findings regarding online privacy (emphasis added):

"... More than half (52.2%) of [survey] respondents say they are not confident about their online privacy, hardly changed from 51.5% in the 2018 survey... Most respondents are changing the way they use the Internet because they believe some information may not remain private. For example, 55.8% of respondents say they limit how much financial information they share online because of privacy concerns. This is relatively consistent across different age groups and household income levels... 42.6% say they limit how much personal health and medical information they share. Only 7.5% of respondents say privacy concerns have not changed the way they use the Internet."

So, the lack of online privacy affects how people use the internet -- for business and pleasure. The chart below highlights the types of online changes:

[Chart: how privacy concerns have changed respondents' Internet usage (EIU, 2019)]

Findings regarding privacy and online shopping:

"Despite lingering privacy concerns, people are increasingly shopping online. Whether this continues in the future may hinge on attitudes toward online safety and security... A majority of respondents say that making online purchases is safe and secure, but, at 58.8% it was slightly lower than the 62.1% recorded in the 2018 survey."

So, the percentage of respondents who consider online purchases safe and secure went in the wrong direction -- down. Not good. There were regional differences about online privacy, too:

"In Europe, the share of respondents confident about their online privacy increased by 8 percentage points from the 2018 survey, probably because of the General Data Protection Regulation (GDPR), the EU’s comprehensive data privacy rules that came into force in May 2018. However, the Middle East and North Africa region saw a decline of 9 percentage points compared with the 2018 survey."

So, sensible legislation to protect consumers' online privacy can have positive impacts. There were other regional differences:

"Trust in online sources of information remained relatively stable, except in the West. Political turbulence in the US and UK may have played a role in causing the share of respondents in North America and Europe who say they trust information on government websites and apps to retreat by 10 percentage points and 6 percentage points, respectively, compared with the 2018 survey."

So, political stability appears to influence trust in online information. The report's authors concluded:

"The survey also reflects anxiety about online privacy and a decline in trust in some sources of information. Indeed, trust in government information has fallen since last year in Europe and North America. The growth and importance of the digital economy will mean that alleviating these anxieties should be a priority of companies, governments, regulators and developers."

Addressing those anxieties is critical, if governments in the West are serious about facilitating business growth via consumer confidence and internet usage. Download the Inclusive Internet Index 2019 Executive Summary (Adobe PDF) report.


New Bill In California To Strengthen Its Consumer Privacy Law

Lawmakers in California have proposed legislation to strengthen the state's existing privacy law. California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson jointly announced Senate Bill 561, which would improve the California Consumer Privacy Act (CCPA). According to the announcement:

"SB 561 helps improve the workability of the [CCPA] by clarifying the Attorney General’s advisory role in providing general guidance on the law, ensuring a level playing field for businesses that play by the rules, and giving consumers the ability to enforce their new rights under the CCPA in court... SB 561 removes requirements that the Office of the Attorney General provide, at taxpayers’ expense, businesses and private parties with individual legal counsel on CCPA compliance; removes language that allows companies a free pass to cure CCPA violations before enforcement can occur; and adds a private right of action, allowing consumers the opportunity to seek legal remedies for themselves under the act..."

Senator Jackson introduced the proposed legislation in the state Senate. Enacted in 2018, the CCPA goes into effect on January 1, 2020. The law prohibits businesses from discriminating against consumers for exercising their rights under the CCPA. The law also includes several key requirements businesses must comply with:

  • "Businesses must disclose data collection and sharing practices to consumers;
  • Consumers have a right to request their data be deleted;
  • Consumers have a right to opt out of the sale or sharing of their personal information; and
  • Businesses are prohibited from selling personal information of consumers under the age of 16 without explicit consent."

State Senator Jackson said in a statement:

"Our constitutional right to privacy continues to face unprecedented assault. Our locations, relationships, and interests are being tracked without our knowledge, bought and sold by corporate interests for their own economic gain and conducted in order to manipulate us... With the passage of the California Consumer Privacy Act last year, California took an important first step in protecting our fundamental right to privacy. SB 561 will ensure that the most significant privacy protections in the nation are effectively and robustly enforced."

Predictably, the pro-business lobby opposes the legislation. The Sacramento Bee reported:

"Punishment may be an incentive to increase compliance, but — especially where a law is new and vague — eliminating a right to cure does not promote compliance," the California Chamber of Commerce released in a statement on February 25. "SB 561 will not only hurt and possibly bankrupt small businesses in the state, it will kill jobs and innovation."

Sounds to me like fearmongering by the Chamber. Senator Jackson has it right. From the same Sacramento Bee article:

"If you don’t violate the law, you won’t get sued... To have very little recourse when these violations occur means that these large companies can continue with their inappropriate, improper behavior without any kind of recourse and sanction. In order to make sure they comply with the law, we need to make sure that people are able to exercise their rights."

Precisely. Two concepts seem to apply:

  • If you can't protect it, don't collect it (e.g., consumers' personal information), and
  • If the data collected is so valuable, compensate consumers for it.

Regarding the second item, the National Law Review reported:

"Much has been made of California Governor Gavin Newsom’s recent endorsement of “data dividends”: payments to consumers for the use of their personal data. Common Sense Media, which helped pass the CCPA last year, plans to propose legislation in California to create such a dividend. The proposal has already proven popular with the public..."

Laws like the CCPA seem to be the way forward. Kudos to California for moving to better protect consumers. This proposed update puts teeth into existing law. Hopefully, other states will follow soon.


Facebook Admits More Teens Used Spyware App Than Previously Disclosed

Facebook has changed its story about how many teenagers used its Research app. When news first broke, Facebook said that less than 5 percent of the mobile app's users were teenagers. On Thursday, TechCrunch reported that it:

"... has obtained Facebook’s unpublished February 21st response to questions about the Research program in a letter from Senator Mark Warner, who wrote to CEO Mark Zuckerberg that “Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

In the response from Facebook’s VP of US public policy Kevin Martin, the company admits that (emphasis ours) “At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5 percent of the people sharing data with us through this program were teens. Analysis shows that number is about 18 percent when you look at the complete lifetime of the program, and also add people who had become inactive and uninstalled the app.”

Three U.S. Senators sent a letter to Facebook on February 7th demanding answers. The TechCrunch article outlined other items in Facebook's changing story: i) it originally claimed its Research App didn't violate Apple's policies and we later learned it did; and ii) it claimed to have removed the app, but Apple later forced that removal.

What to make of Facebook's changing story? Again from TechCrunch:

"The contradictions between Facebook’s initial response to reporters and what it told Warner, who has the power to pursue regulation of the the tech giant, shows Facebook willingness to move fast and play loose with the truth... Facebook’s attempt to minimize the issue in the wake of backlash exemplifies the trend of of the social network’s “reactionary” PR strategy that employees described to BuzzFeed’s Ryan Mac. The company often views its scandals as communications errors rather than actual product screwups or as signals of deep-seeded problems with Facebook’s respect for privacy..."

Kudos to TechCrunch on more excellent reporting. And, there's more regarding children. Fortune reported:

"A coalition of 17 privacy and children’s organizations has asked the Federal Trade Commission to investigate Facebook for allowing children to make unauthorized in-app purchases... The coalition filed a complaint with the FTC on Feb. 21 over Facebook doing little to stop children from buying virtual items through games on its service without parental permission and, in some cases, without realizing those items cost money... Internal memos show that between 2010 and 2014, Facebook encouraged children, some as young as five-years old, to make purchases using their parents’ credit card information, the complaint said. The company then refused to refund parents..."

Not good. Facebook's changing story makes it difficult, or impossible, to trust anything its executives say. Perhaps the entertainer Lady Gaga said it best:

"Social media is the toilet of the internet."

Facebook's data breaches, constant apologizing, and shifting stories seem to confirm that. Now, it is time for government regulators to act -- and not with wimpy fines.


California Seeks To Close Loopholes In Its Data Breach Notification Law

California is pursuing legislation to close loopholes in its existing data breach notification law. Current state law does not require businesses to notify consumers when their passport or biometric data is exposed or stolen during a data breach. The proposed law, Assembly Bill 1130 (AB 1130), would close those loopholes.

The legislation was prompted by the gigantic data breach at Marriott's Starwood Hotels unit. The sensitive information of more than 327 million guests was accessed by unauthorized persons. The data accessed -- and probably stolen -- included guests' names, addresses, at least 25 million passport numbers, and more. California Attorney General Xavier Becerra announced the proposed legislation:

"Though [Marriott] did notify consumers of the breach, current law does not require companies to report breaches if only consumers’ passport numbers have been improperly accessed... In 2003, California became the first state to pass a data breach notification law requiring companies to disclose breaches of personal information to California consumers whose personal information was, or was reasonably believed to have been, acquired by an unauthorized person... This bill would update that law to include passport numbers as personal information protected under the statute. Passport numbers are unique, government-issued, static identifiers of a person, which makes them valuable to criminals seeking to create or build fake profiles and commit sophisticated identity theft and fraud. AB 1130 would also update the statute to include protection for a person’s unique biometric information, such as a fingerprint, or image of a retina or iris."

Assembly member Marc Levine (D-San Rafael) introduced the proposed legislation in the California State Assembly, and said in a statement:

“There is a real danger when our personal information is not protected by those we trust... Businesses must do more to protect personal data, and I am proud to stand with Attorney General Becerra in demanding greater disclosure by a company when a data breach has occurred. AB 1130 will increase our efforts to protect consumers from fraud and affirms our commitment to demand the strongest consumer protections in the nation."

Good. There are too many examples of companies failing to announce data breaches affecting consumers. TechCrunch reported that AB 1130:

"... comes less than a year after state lawmakers passed the California Privacy Act into law, greatly expanding privacy rights for consumers — similar to provisions provided to Europeans under the newly instituted General Data Protection Regulation. The state privacy law, passed in June and set to go into effect in 2020, was met with hostility by tech companies headquartered in the state... Several other states, like Alabama, Florida and Oregon, already require data breach notifications in the event of passport number breaches, and also biometric data in the case of Iowa and Nebraska, among others..."

Kudos to California for moving to better protect consumers. Hopefully, other states will also update their breach notification laws.


Large Natural Gas Producer to Pay West Virginia Plaintiffs $53.5 Million to Settle Royalty Dispute

[Editor's note: today's guest post by ProPublica discusses business practices within the energy industry. It is reprinted with permission.]

By Kate Mishkin and Ken Ward Jr., The Charleston Gazette-Mail

The second-largest natural gas producer in West Virginia will pay $53.5 million to settle a lawsuit that alleged the company was cheating thousands of state residents and businesses by shorting them on gas royalty payments, according to terms of the deal unsealed in court this week.

Pittsburgh-based EQT Corp. agreed to pay the money to end a federal class-action lawsuit, brought on behalf of about 9,000 people, which alleged that EQT wrongly deducted a variety of unacceptable charges from people's royalty checks.

The deal is the latest in a series of settlements in cases that accused natural gas companies of engaging in such maneuvers to pocket a larger share of the profits from the boom in natural gas production in West Virginia.

This lawsuit was among the royalty cases highlighted last year in a joint examination by the Charleston Gazette-Mail and ProPublica that showed how West Virginia’s natural gas producers avoid paying royalties promised to thousands of residents and businesses. The plaintiffs said EQT was improperly deducting transporting and processing costs from their royalty payments. EQT said its royalty payment calculations were correct and fair.

A trial was scheduled to begin in November but was canceled after the parties reached the tentative settlement. Details of the settlement were unsealed earlier this month.

Under the settlement agreement, EQT Production Co. will pay the $53.5 million into a settlement fund. The company will also stop deducting those post-production costs from royalty payments.

“This was an opportunity to turn over a new leaf in our relationship with our West Virginia leaseholders and this mutually beneficial agreement demonstrates our renewed commitment to the state of West Virginia,” EQT’s CEO, Robert McNally, said in a prepared statement.

EQT is working to earn the trust of West Virginians and community leaders, he said.

Marvin Masters, the lead lawyer for the plaintiffs, called the settlement “encouraging” after six years of litigation. (Masters is among a group of investors who bought the Charleston Gazette-Mail last year.)

Funds will be distributed to people who leased the rights to natural gas beneath their land in West Virginia to EQT between Dec. 8, 2009, and Dec. 31, 2017. EQT will also pay up to $2 million in administrative fees to distribute the settlement.

Settlement payments will be calculated based on such factors as the amount of gas produced and sold from each well, as well as how much was deducted from royalty payments. The number of people who submit claims could also affect settlement payments. Each member of the class who submits a claim will receive a minimum payment of $200. The settlement allows the lawyers to collect up to one-third of the settlement, or roughly $18 million, subject to approval from the court.

The settlement is pending before U.S. District Judge John Preston Bailey in the Northern District of West Virginia. The judge gave it preliminary approval on February 11th, which begins a process for public notice of the terms and a fairness hearing July 11 in Wheeling, West Virginia. Payments would not be made until that process is complete.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


UK Parliamentary Committee Issued Its Final Report on Disinformation And Fake News. Facebook And Six4Three Discussed

On February 18th, a United Kingdom (UK) parliamentary committee published its final report on disinformation and "fake news." The 109-page report by the Digital, Culture, Media and Sport (DCMS) Committee updates its interim report from July 2018.

The report covers many issues: political advertising (unattributed ads known as "dark adverts"), Brexit and UK elections, data breaches, privacy, and recommendations for UK regulators and government officials. It seems wise to understand the report's findings regarding the business practices of the U.S.-based companies mentioned, since those practices affect consumers globally, including consumers in the United States.

Issues Identified

First, the DCMS' final report built upon issues identified in its:

"... Interim Report: the definition, role and legal liabilities of social media platforms; data misuse and targeting, based around the Facebook, Cambridge Analytica and Aggregate IQ (AIQ) allegations, including evidence from the documents we obtained from Six4Three about Facebook’s knowledge of and participation in data-sharing; political campaigning; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy..."

The final report includes input from 23 "oral evidence sessions," more than 170 written submissions, interviews of at least 73 witnesses, and more than 4,350 questions asked at hearings. The DCMS Committee sought input from individuals, organizations, industry experts, and other governments. Some of the information sources:

"The Canadian Standing Committee on Access to Information, Privacy and Ethics published its report, “Democracy under threat: risks and solutions in the era of disinformation and data monopoly” in December 2018. The report highlights the Canadian Committee’s study of the breach of personal data involving Cambridge Analytica and Facebook, and broader issues concerning the use of personal data by social media companies and the way in which such companies are responsible for the spreading of misinformation and disinformation... The U.S. Senate Select Committee on Intelligence has an ongoing investigation into the extent of Russian interference in the 2016 U.S. elections. As a result of data sets provided by Facebook, Twitter and Google to the Intelligence Committee -- under its Technical Advisory Group -- two third-party reports were published in December 2018. New Knowledge, an information integrity company, published “The Tactics and Tropes of the Internet Research Agency,” which highlights the Internet Research Agency’s tactics and messages in manipulating and influencing Americans... The Computational Propaganda Research Project and Graphika published the second report, which looks at activities of known Internet Research Agency accounts, using Facebook, Instagram, Twitter and YouTube between 2013 and 2018, to impact US users"

Why Disinformation

Second, definitions matter. According to the DCMS Committee:

"We have even changed the title of our inquiry from “fake news” to “disinformation and ‘fake news’”, as the term ‘fake news’ has developed its own, loaded meaning. As we said in our Interim Report, ‘fake news’ has been used to describe content that a reader might dislike or disagree with... We were pleased that the UK Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation'..."

Overall Recommendations

Summary recommendations from the report:

  1. "Compulsory Code of Ethics for tech companies overseen by independent regulator,
  2. Regulator given powers to launch legal action against companies breaching code,
  3. Government to reform current electoral communications laws and rules on overseas involvement in UK elections, and
  4. Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation"

Role And Liability Of Tech Companies

Regarding detailed observations and findings about the role and liability of tech companies, the report stated:

"Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company..."

The UK Government and its regulators may adopt some, all, or none of the report's recommendations. More observations and findings in the report:

"... both social media companies and search engines use algorithms, or sequences of instructions, to personalize news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritized. Therefore, negative stories will always be prioritized by algorithms, as they are shared more frequently than positive stories... Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers... Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms.” Facebook should be taking a more active and urgent role in tackling such inherent biases..."

Based upon this, the report recommended that the UK's new Centre for Data Ethics and Innovation (CDEI) should play a key role as an advisor to the UK Government by continually analyzing and anticipating gaps in governance and regulation, suggesting best practices and corporate codes of conduct, and setting standards for artificial intelligence (AI) and related technologies.

Inferred Data

The report also discussed a critical issue related to algorithms (emphasis added):

"... When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will. However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users... In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through the data profile matching and the ‘lookalike audience’ advertising targeting tool... Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power of where that data goes and what it is used for..."

The distinction between uploaded and inferred data cannot be overemphasized. It is critical when evaluating tech companies' statements, policies (e.g., privacy, terms of use), and promises about what "data" users have control over. Wise consumers must insist upon clear definitions to avoid being misled or duped.

What might be an example of inferred data? What comes to mind is Facebook's Ad Preferences feature, which allows users to review and delete the "Interests" -- advertising categories -- Facebook assigns to each user's profile. (The service's algorithms assign Interests based upon the groups/pages/events/advertisements users "Liked" or clicked on, posts submitted, posts commented upon, and more.) These "Interests" are inferred data, since Facebook assigned them and users didn't.

In fact, Facebook doesn't notify its users when it assigns new Interests. It just does it. And, Facebook can assign Interests whether you interacted with an item once or many times. How relevant is an Interest assigned after a single interaction, "Like," or click? Most people would say: not relevant. So, does the Interests list assigned to users' profiles accurately describe users? Do Facebook users own the Interests list assigned to their profiles? Any control Facebook users have seems minimal. Why? Facebook users can delete Interests assigned to their profiles, but users cannot stop Facebook from applying new Interests. Users cannot prevent Facebook from re-applying Interests previously deleted. Deleting Interests doesn't reduce the number of ads users see on Facebook.

The only way to know what Interests have been assigned is for Facebook users to visit the Ad Preferences section of their profiles and browse the list. Depending upon how frequently a person uses Facebook, it may be necessary to prune the Interests list at least once monthly -- a cumbersome and time-consuming task, probably designed that way to discourage reviews and pruning. And that's just one example of inferred data. There are probably plenty more, and as the report emphasizes, users don't have access to all of the inferred data associated with their profiles.
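For readers who want the mechanics, here is a small, hypothetical Python sketch of how "Likes" and clicks could be turned into assigned Interests. The event log, category names, and one-interaction threshold are invented for illustration; Facebook has not published its actual rules:

    from collections import Counter

    # Hypothetical event log: (user, category) pairs from Likes and clicks.
    events = [
        ("alice", "Travel"), ("alice", "Travel"), ("alice", "Cooking"),
        ("alice", "Authority"),   # a single stray click...
    ]

    def infer_interests(events, user, min_interactions=1):
        counts = Counter(cat for uid, cat in events if uid == user)
        # With a threshold of 1, one click is enough to assign an Interest --
        # which is why assigned Interests can feel irrelevant to users.
        return [cat for cat, n in counts.items() if n >= min_interactions]

    print(infer_interests(events, "alice"))                      # one click qualifies
    print(infer_interests(events, "alice", min_interactions=2))  # stricter threshold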

Now, back to the report. To fix problems with inferred data, the DCMS recommended:

"We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information."

Business Practices At Facebook

Next, the DCMS Committee's report said plenty about Facebook, its management style, and executives (emphasis added):

"Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinized... Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated... He discussed the California Consumer Privacy Act, which Facebook supported in public, but lobbied against, behind the scenes... By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world. The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which -- unsurprisingly -- failed to address all of our questions. We are left in no doubt that this strategy was deliberate."

So, based upon Facebook's actions (or lack thereof), the DCMS concluded that Facebook executives intentionally ducked and dodged issues and questions.

While discussing data use and targeting, the report said more about data breaches and Facebook:

"The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests..."

So, internal management failed. That's not all. After a detailed review of the GSR/Cambridge Analytica breach and Facebook's 2011 Consent Decree with the U.S. Federal Trade Commission (FTC), the DCMS Committee concluded (emphasis and text link added):

"The Cambridge Analytica scandal was facilitated by Facebook’s policies. If it had fully complied with the FTC settlement, it would not have happened. The FTC Complaint of 2011 ruled against Facebook -- for not protecting users’ data and for letting app developers gain as much access to user data as they liked, without restraint -- and stated that Facebook built their company in a way that made data abuses easy. When asked about Facebook’s failure to act on the FTC’s complaint, Elizabeth Denham, the Information Commissioner, told us: “I am very disappointed that Facebook, being such an innovative company, could not have put more focus, attention and resources into protecting people’s data”. We are equally disappointed."

Wow! Not good. There's more:

"... a current court case at the San Mateo Superior Court in California also concerns Facebook’s data practices. It is alleged that Facebook violated the privacy of US citizens by actively exploiting its privacy policy... The published ‘corrected memorandum of points and authorities to defendants’ special motions to strike’, by the complainant in the case, the U.S.-based app developer Six4Three, describes the allegations against Facebook; that Facebook used its users’ data to persuade app developers to create platforms on its system, by promising access to users’ data, including access to data of users’ friends. The case also alleges that those developers that became successful were targeted and ordered to pay money to Facebook... Six4Three lodged its original case in 2015, after Facebook removed developers’ access to friends’ data, including its own. The DCMS Committee took the unusual, but lawful, step of obtaining these documents, which spanned between 2012 and 2014... Since we published these sealed documents, on 14 January 2019 another court agreed to unseal 135 pages of internal Facebook memos, strategies and employee emails from between 2012 and 2014, connected with Facebook’s inappropriate profiting from business transactions with children. A New York Times investigation published in December 2018 based on internal Facebook documents also revealed that the company had offered preferential access to users data to other major technology companies, including Microsoft, Amazon and Spotify."

"We believed that our publishing the documents was in the public interest and would also be of interest to regulatory bodies... The documents highlight Facebook’s aggressive action against certain apps, including denying them access to data that they were originally promised. They highlight the link between friends’ data and the financial value of the developers’ relationship with Facebook. The main issues concern: ‘white lists’; the value of friends’ data; reciprocity; the sharing of data of users owning Android phones..."

You can read the report's detailed descriptions of those issues. A summary: a) Facebook allegedly used promises of access to users' data to lure developers (often by overriding Facebook users' privacy settings); b) some developers got priority treatment based upon unclear criteria; c) developers who didn't spend enough money with Facebook were denied access to data previously promised; d) Facebook's reciprocity clause demanded that developers also share their users' data with Facebook; e) Facebook's mobile app for Android OS phone users collected far more data about users, allegedly without consent, than users were told; and f) Facebook allegedly targeted certain app developers (emphasis added):

"We received evidence that showed that Facebook not only targeted developers to increase revenue, but also sought to switch off apps where it considered them to be in competition or operating in a lucrative areas of its platform and vulnerable to takeover. Since 1970, the US has possessed high-profile federal legislation, the Racketeer Influenced and Corrupt Organizations Act (RICO); and many individual states have since adopted similar laws. Originally aimed at tackling organized crime syndicates, it has also been used in business cases and has provisions for civil action for damages in RICO-covered offenses... Despite specific requests, Facebook has not provided us with one example of a business excluded from its platform because of serious data breaches. We believe that is because it only ever takes action when breaches become public. We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that “we’ve never sold anyone’s data” is simply untrue.” The evidence that we obtained from the Six4Three court documents indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers—such as Six4Three—of that data, thereby causing them to lose their business. It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission settlement."

"The Information Commissioner told the Committee that Facebook needs to significantly change its business model and its practices to maintain trust. From the documents we received from Six4Three, it is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws. The ICO should carry out a detailed investigation into the practices of the Facebook Platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ of the sharing of data."

The Information Commissioner's Office (ICO) is one of the regulatory agencies within the UK. So, the Committee concluded that Facebook's real business model is "data transfer for value" -- in other words: have money, get access to data (regardless of Facebook users' privacy settings).

One quickly gets the impression that Facebook acted like a monopoly in its treatment of both users and developers... or worse, like organized crime. The report concluded (emphasis added):

"The Competitions and Market Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media. The Committee made this recommendation its interim report, and we are pleased that it has also been supported in the independent Cairncross Report commissioned by the government and published in February 2019. Given the contents of the Six4Three documents that we have published, it should also investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail... Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law."

The DCMS Committee's report also discussed findings from the Cairncross Report. In summary, Damian Collins MP, Chair of the DCMS Committee, said:

“... we cannot delay any longer. Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalized ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use everyday. Much of this is directed from agencies working in foreign countries, including Russia... Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers... We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct..."

So, the report seems extensive, comprehensive, and detailed. Read the DCMS Committee's announcement, and/or download the full DCMS Committee report (Adobe PDF format, 3,507 kilobytes).

One can assume that governments' intelligence and spy agencies will continue to do what they've always done: collect data about targets and adversaries, and use disinformation and other tools to meddle in other governments' activities. It is clear that social media makes these tasks far easier than before. The DCMS Committee's report provided recommendations about what the UK Government's response should be. Other countries' governments face similar decisions about their responses, if any, to these threats.

Given the data in the DCMS report, it will be interesting to see how the FTC and lawmakers in the United States respond. If increased regulation of social media results, tech companies arguably have only themselves to blame. What do you think?


'Software Pirates' Stole Apple Tech To Distribute Hacked Mobile Apps To Consumers

Prior news reports highlighted the abuse of Apple's corporate digital certificates. Now, we learn that this abuse is more widespread than first thought. CNet reported:

"Pirates used Apple's enterprise developer certificates to put out hacked versions of some major apps... The altered versions of Spotify, Angry Birds, Pokemon Go and Minecraft make paid features available for free and remove in-app ads... The pirates appear to have figured out how to use digital certs to get around Apple's carefully policed App Store by saying the apps will be used only by their employees, when they're actually being distributed to everyone."

So, bad actors abuse technology intended for a company's employees to distribute apps directly to consumers. Software pirates, indeed.

To avoid hacked apps, consumers should shop wisely and download apps only from trusted sources, such as the official App Store. A fix is underway. According to CNet:

"Apple will reportedly take steps to fight back by requiring all app makers to use its two-factor authentication protocol from the end of February, so logging into an Apple ID will require a password and code sent to a trusted Apple device."

Let's hope that fix is sufficient.


Ex-IBM Executive Says She Was Told Not to Disclose Names of Employees Over Age 50 Who’d Been Laid Off

[Editor's note: today's guest blog post, by reporters at ProPublica, explores employment and hiring practices within the workplace. Part of a series, it is reprinted with permission.]

By Peter Gosselin, ProPublica

In sworn testimony filed recently as part of a class-action lawsuit against IBM, a former executive says she was ordered not to comply with a federal agency’s request that the company disclose the names of employees over 50 who’d been laid off from her business unit.

Catherine A. Rodgers, a vice president who was then IBM’s senior executive in Nevada, cited the order among several practices she said prompted her to warn IBM superiors the company was leaving itself open to allegations of age discrimination. She claims she was fired in 2017 because of her warnings.

Company spokesman Edward Barbini labeled Rodgers’ claims related to potential age discrimination “false,” adding that the reasons for her firing were “wholly unrelated to her allegations.”

Rodgers’ affidavit was filed Jan. 17 as part of a lawsuit in federal district court in New York. The suit cites a March 2018 ProPublica story that IBM engaged in a strategy designed to, in the words of one internal company document, “correct seniority mix” by flouting or outflanking U.S. anti-age discrimination laws to force out tens of thousands of older workers in the five years through 2017 alone.

Rodgers said in an interview Sunday that IBM “appears to be engaged in a concerted and disproportionate targeting of older workers.” She said that if the company releases the ages of those laid off, something required by federal law and that IBM did until 2014, “the facts will speak for themselves.”

“IBM is a data company. Release the data,” she said.

Rodgers is not a plaintiff in the New York case but intends to become one, said Shannon Liss-Riordan, the attorney for the employees.

IBM has not yet responded to Rodgers’ affidavit in the class-action suit. But in a filing in a separate age-bias lawsuit in federal district court in Austin, Texas, where a laid-off IBM sales executive introduced the document to bolster his case, lawyers for the company termed the order for Rodgers not to disclose the layoffs of older workers from her business unit “unremarkable.”

They said that the U.S. Department of Labor sought the names of the workers so it could determine whether they qualified for federal Trade Adjustment Assistance, or TAA, which provides jobless benefits and re-training to those who lose their jobs because of foreign competition. They said that company executives concluded that only one of about 10 workers whose names Rodgers had sought to provide qualified.

In its reporting, ProPublica found that IBM has gone to considerable lengths to avoid reporting its layoff numbers by, among other things, limiting its involvement in government programs that might require disclosure. Although the company has laid off tens of thousands of U.S. workers in recent years and shipped many jobs overseas, it sought and won TAA aid for just three during the past decade, government records show.

Company lawyers in the Texas case said that Rodgers, 62 at the time of her firing and a 39-year veteran of IBM, was let go in July 2017 because of "gross misconduct."

Rodgers said that she received “excellent” job performance reviews for decades before questioning IBM’s practices toward older workers. She rejected the misconduct charge as unfounded.

Legal action against IBM over its treatment of older workers appears to be growing. In addition to the suits in New York and Texas, cases are also underway in California, New Jersey and North Carolina.

Liss-Riordan, who has represented workers against a series of tech giants including Amazon, Google and Uber, has added 41 plaintiffs to the original three in the New York case and is asking the judge to require that IBM notify all U.S. workers whom it has laid off since July 2017 of the suit and of their option to challenge the company.

One complicating factor is that IBM requires departing employees who want to receive severance pay to sign a document waiving their right to take the company to court and limiting them to private, individual arbitration. Studies show this process rarely results in decisions that favor workers. To date, neither plaintiffs’ lawyers nor the government has challenged the legality of IBM’s waiver document.

Many ex-employees also don’t act within the 300-day federal statute of limitations for bringing a case. Of about 500 ex-employees who Liss-Riordan said contacted her since she filed the New York case last September, only 100 had timely claims and, of these, only about 40 had not signed the waivers and so were eligible to join the lawsuit. She said she’s filed arbitration cases for the other 60.

At key points, Rodgers’ account of IBM’s practices is similar to those reported by ProPublica. Among the parallels:

  • Rodgers said that all layoffs in her business unit were of older workers and that younger workers were unaffected. (ProPublica estimated that about 60 percent of the company’s U.S. layoffs from 2014 through 2017 were workers age 40 and above.)
  • She said that she and other managers were told to encourage workers flagged for layoff to use IBM’s internal hiring system to find other jobs in the company even as upper management erected insurmountable barriers to their being hired for these jobs.
  • Rodgers said the company reversed a decades-long practice of encouraging employees to work from home and ordered many to begin reporting to a few “hub” offices around the country, a change she said appeared designed to prompt people to quit. She said that in one case an employee agreed to relocate to Connecticut, only to be told to relocate again to North Carolina.

Barbini, the IBM spokesman, didn’t comment on individual elements of Rodgers’ allegations. Last year, he did not address a 10-page summary of ProPublica’s findings, but issued a statement that read in part, “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Popular iOS Apps Record All In-App Activity Causing Privacy, Data Security, And Other Issues

As the internet has evolved, user testing and market research practices have evolved with it. This may surprise consumers. TechCrunch reported that many popular Apple mobile apps record everything customers do within the apps:

"Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play them back to see how its users interacted with the app to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers."

So, customers' entire app sessions and activities have been recorded. Of course, marketers need to understand their customers' needs, and how users interact with their mobile apps, to build better products, services, and apps. However, in doing so, some apps introduced security vulnerabilities:

"The App Analyst... recently found Air Canada’s iPhone app wasn’t properly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles."

Not good, for a couple of reasons. First, sensitive data like payment information (e.g., credit/debit card numbers, passport numbers, bank account numbers) should be masked. Second, when sensitive information isn't masked, more data security problems arise. How long is this app usage data archived? Which employees, contractors, and business partners have access to the archive? What security methods are used to protect the archive from abuse?

In short, unauthorized persons may have access to the archives and the sensitive information they contain. For example, market researchers probably have little or no need for specific customers' payment information. Sensitive information in these archives should be masked before transmission and encrypted in storage, to provide the best protection from abuse and from data breaches.
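As an illustration, here is a minimal, hypothetical Python sketch that redacts likely card and passport numbers from a captured session event before it leaves the device. The regular expressions are deliberately crude assumptions; production analytics SDKs typically offer field-level masking configuration instead:

    import re

    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{13,19}\b"),           # candidate payment card numbers
        re.compile(r"\b[A-Z]{1,2}\d{6,9}\b"),   # candidate passport numbers
    ]

    def mask_before_upload(captured_text: str) -> str:
        # Redact likely sensitive values from a captured session event
        # before it is transmitted to the analytics servers.
        for pattern in SENSITIVE_PATTERNS:
            captured_text = pattern.sub("[REDACTED]", captured_text)
        return captured_text

    event = "User typed 4111111111111111 into field 'card_number'"
    print(mask_before_upload(event))  # the card number becomes [REDACTED]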

Sadly, there is more bad news:

"Apps that are submitted to Apple’s App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user’s screen... Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either."

So, the app session recordings were done covertly... without explicit language to provide meaningful and clear notice to consumers. I encourage everyone to read the entire TechCrunch article, which also includes responses by some of the companies mentioned. In my opinion, most of the responses fell far short with lame, boilerplate statements.

All of this is very troubling. And, there is more.

The TechCrunch article didn't discuss it, but historically companies hired testing firms to recruit user test participants -- usually current and prospective customers. Test participants were paid for their time. (I know because as a former user experience professional I conducted such in-person test sessions where clients paid test participants.) Things have changed. Not only has user testing and research migrated online, but companies use automated tools to perform perpetual, unannounced user testing -- all without compensating test participants.

While change is inevitable, not all change is good. Plus, things can be done in better ways. If the test information is that valuable, then pay test participants. Otherwise, this seems like another example of corporate greed at consumers' expense. And, it's especially egregious if data transmissions of the recorded app sessions to developers' servers use up cellular data plan capacity consumers paid for. Some consumers (e.g., elders, children, the poor) cannot afford the costs of unlimited cellular data plans.

After this TechCrunch report, Apple notified developers to either stop or disclose screen recording:

"Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity... We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary..."

Good. That's a start. Still, user testing and market research are not a free pass for developers to ignore or skip data security best practices. Given these covertly recorded app sessions, mobile apps must be tested continually. Otherwise, some ethically-challenged companies may re-introduce covert screen-recording features. What are your opinions?


Survey: People In Relationships Spy On Cheating Partners. FTC: Singles Looking For Love Are The Biggest Target Of Scammers

Happy Valentine's Day! First, BestVPN announced the results of a survey of 1,000 adults globally about relationships and trust in today's digital age where social media usage is very popular. Key findings:

"... nearly 30% of respondents admitted to using tracking apps to catch a partner [suspected of or cheating]. After all, over a quarter of those caught cheating were busted by modern technology... 85% of those caught out in the past now take additional steps to protect their privacy, including deleting their browsing data or using a private browsing mode."

Below is an infographic with more findings from the survey.

[Infographic: BestVPN survey findings on relationships, trust, and tracking apps, February 2019]

Second, the U.S. Federal Trade Commission (FTC) issued a warning earlier this week about fraud affecting single persons:

"... romance scams generated more reported losses than any other consumer fraud type reported to the agency... The number of romance scams reported to the FTC has grown from 8,500 in 2015 to more than 21,000 in 2018, while reported losses to these scams more than quadrupled in recent years—from $33 million in 2015 to $143 million last year. For those who said they lost money to a romance scam, the median reported loss was $2,600, with those 70 and over reporting the biggest median losses at $10,000."

"Romance scammers often find their victims online through a dating site or app or via social media. These scammers create phony profiles that often involve the use of a stranger’s photo they have found online. The goals of these scams are often the same: to gain the victim’s trust and love in order to get them to send money through a wire transfer, gift card, or other means."

So, be careful out there. Don't cheat, and beware of scammers and dating imposters. You have been warned.


Walgreens To Pay About $2 Million To Massachusetts To Settle Multiple Price Abuse Allegations. Other Settlement Payments Exceed $200 Million

The Office of the Attorney General of the Commonwealth of Massachusetts announced two settlement agreements with Walgreens, a national pharmacy chain. Walgreens has agreed to pay about $2 million to settle multiple allegations of pricing abuses. According to the announcement:

"Under the first settlement, Walgreens will pay $774,486 to resolve allegations that it submitted claims to MassHealth in which it reported prices for certain prescription drugs at levels that were higher than what Walgreens actually charged, resulting in fraudulent overpayments."

"Under the second settlement, Walgreens will pay $1,437,366 to resolve allegations that from January 2006 through December 2017, rather than dispensing the quantity of insulin called for by a patient’s prescription, Walgreens exceeded the prescription amount and falsified information on claims submitted for reimbursement to MassHealth, including the quantity of insulin and/or days’ supply dispensed."

Both settlements arose from whistle-blower activity. MassHealth is the Commonwealth's Medicaid program; a state law passed in 2006 expanded it as part of an effort to provide health insurance to all Commonwealth residents, and the law was amended in 2008 and 2010 to make it consistent with the federal Affordable Care Act.

Massachusetts Attorney General (AG) Maura Healey said:

"Walgreens repeatedly failed to provide MassHealth with accurate information regarding its dispensing and billing practices, resulting in overpayment to the company at taxpayers’ expense... We will continue to investigate cases of fraud and take action to protect the integrity of MassHealth."

In a separate case, Walgreens will pay $1 million to the State of Arkansas to settle allegations of Medicaid fraud. Last month, the New York State Attorney General announced that New York State, other states, and the federal government had reached:

"... an agreement in principle with Walgreens to settle allegations that Walgreens violated the False Claims Act by billing Medicaid at rates higher than its usual and customary (U&C) rates for certain prescription drugs... Walgreens will pay the states and federal government $60 million, all of which is attributable to the states’ Medicaid programs... The national federal and state civil settlement will resolve allegations relating to Walgreens’ discount drug program, known as the Prescription Savings Club (PSC). The investigation revealed that Walgreens submitted claims to the states’ Medicaid programs in which it identified U&C prices for certain prescription drugs sold through the PSC program that were higher than what Walgreens actually charged for those drugs... This is the second false claims act settlement reached with Walgreens today. On January 22, 2019, AG James announced that Walgreens is to pay New York over $6.5 million as part of a $209.2 million settlement with the federal government and other states, resolving allegations that Walgreens knowingly engaged in fraudulent conduct when it dispensed insulin pens..."

States involved in the settlement include New York, California, Illinois, Indiana, Michigan and Ohio. Kudos to all Attorneys General and their staffs for protecting patients against corporate greed.


Senators Demand Answers From Facebook And Google About Project Atlas And Screenwise Meter Programs

After news reports surfaced about Facebook's Project Atlas, a secret program in which Facebook paid teenagers (and other users) to install a research app on their phones that tracked and collected information about their mobile usage, several United States Senators demanded explanations. Three Senators sent a joint letter on February 7, 2019 to Mark Zuckerberg, Facebook's chief executive officer.

The joint letter to Facebook (Adobe PDF format) stated, in part:

"We write concerned about reports that Facebook is collecting highly-sensitive data on teenagers, including their web browsing, phone use, communications, and locations -- all to profile their behavior without adequate disclosure, consent, or oversight. These reports fit with Longstanding concerns that Facebook has used its products to deeply intrude into personal privacy... According to a journalist who attempted to register as a teen, the linked registration page failed to impose meaningful checks on parental consent. Facebook has more rigorous mechanism to obtain and verify parental consent, such as when it is required to sign up for Messenger Kids... Facebook's monitoring under Project Atlas is particularly concerning because the data data collection performed by the research app was deeply invasive. Facebook's registration process encouraged participants to "set it and forget it," warning that if a participant disconnected from the monitoring for more than ten minutes for a few days, that they could be disqualified. Behind the scenes, the app watched everything on the phone."

The letter included another example highlighting the alleged lack of meaningful disclosures:

"... the app added a VPN connection that would automatically route all of a participant's traffic through Facebook's servers. The app installed a SSL root certificate on the participant's phone, which would allow Facebook to intercept or modify data sent to encrypted websites. As a result, Facebook would have limitless access to monitor normally secure web traffic, even allowing Facebook to watch an individual log into their bank account or exchange pictures with their family. None of the disclosures provided at registration offer a meaningful explanation about how the sensitive data is used, how long it is kept, or who within Facebook has access to it..."

The letter was signed by Senators Richard Blumenthal (Democrat, Connecticut), Edward J. Markey (Democrat, Massachusetts), and Josh Hawley (Republican, Missouri). Based upon news reports that Facebook's Research app operated with functionality similar to the Onavo VPN app, which Apple banned last year, the Senators concluded:

"Faced with that ban, Facebook appears to have circumvented Apple's attempts to protect consumers."

The joint letter also listed twelve questions the Senators want detailed answers about. Below are selected questions from that list:

"1. When did Project Atlas begin and how many individuals participated? How many participants were under age 18?"

"3. Why did Facebook use a less strict mechanism for verifying parental consent than is Required for Messenger Kids or Global Data Protection Requlation (GDPR) compliance?"

"4.What specific types of data was collected (e.g., device identifieers, usage of specific applications, content of messages, friends lists, locations, et al.)?"

"5. Did Facebook use the root certificate installed on a participant's device by the Project Atlas app to decrypt and inspect encrypted web traffic? Did this monitoring include analysis or retention of application-layer content?"

"7. Were app usage data or communications content collected by Project Atlas ever reviewed by or available to Facebook personnel or employees of Facebook partners?"

8." Given that Project Atlas acknowledged the collection of "data about [users'] activities and content within those apps," did Facebook ever collect or retain the private messages, photos, or other communications sent or received over non-Facebook products?"

"11. Why did Facebook bypass Apple's app review? Has Facebook bypassed the App Store aproval processing using enterprise certificates for any other app that was used for non-internal purposes? If so, please list and describe those apps."

Read the entire letter to Facebook (Adobe PDF format). Also on February 7th, the Senators sent a similar letter to Google (Adobe PDF format), addressed to Hiroshi Lockheimer, the Senior Vice President of Platforms & Ecosystems. It stated in part:

"TechCrunch has subsequently reported that Google maintained its own measurement program called "Screenwise Meter," which raises similar concerns as Project Atlas. The Screenwise Meter app also bypassed the App Store using an enterprise certificate and installed a VPN service in order to monitor phones... While Google has since removed the app, questions remain about why it had gone outside Apple's review process to run the monitoring program. Platforms must maintain and consistently enforce clear policies on the monitoring of teens and what constitutes meaningful parental consent..."

The letter to Google includes a similar list of eight questions the Senators seek detailed answers about. Some notable questions:

"5. Why did Google bypass App Store approval for Screenwise Meter app using enterprise certificates? Has Google bypassed the App Store approval processing using enterprise certificates for any other non-internal app? If so, please list and describe those apps."

"6. What measures did Google have in place to ensure that teenage participants in Screenwise Meter had authentic parental consent?"

"7. Given that Apple removed Onavoo protect from the App Store for violating its terms of service regarding privacy, why has Google continued to allow the Onavo Protect app to be available on the Play Store?"

The lawmakers have asked for responses by March 1st. Thanks to all three Senators for protecting consumers' -- and children's -- privacy... and for demanding transparency and accountability.


Technology And Human Rights Organizations Sent Joint Letter Urging House Representatives Not To Fund 'Invasive Surveillance' Tech Instead of A Border Wall

More than two dozen technology and human rights organizations sent a joint letter Tuesday to members of the House of Representatives, urging them not to fund "invasive surveillance technologies" as a replacement for a physical wall or barrier along the southern border of the United States. The joint letter cited five concerns, three of which are excerpted below:

"1. Risk-based targeting: The proposal calls for “an expansion of risk-based targeting of passengers and cargo entering the United States.” We are concerned that this includes the expansion of programs — proven to be ineffective and to exacerbate racial profiling — that use mathematical analytics to make targeting determinations. All too often, these systems replicate the biases of their programmers, burden vulnerable communities, lack democratic transparency, and encourage the collection and analysis of ever-increasing amounts of data... 3. Biometrics: The proposal calls for “new cutting edge technology” at the border. If that includes new face surveillance like that deployed at international airline departures, it should not. Senator Jeff Merkley and the Congressional Black Caucus have expressed serious concern that facial recognition technology would place “disproportionate burdens on communities of color and could stifle Americans’ willingness to exercise their first amendment rights in public.” In addition, use of other biometrics, including iris scans and voice recognition, also raise significant privacy concerns... 5. Biometric and DNA data: We oppose biometric screening at the border and the collection of immigrants’ DNA, and fear this may be another form of “new cutting edge technology” under consideration. We are concerned about the threat that any collected biometric data will be stolen or misused, as well as the potential for such programs to be expanded far beyond their original scope..."

The letter was sent to Speaker Nancy Pelosi, Minority Leader Kevin McCarthy, Majority Leader Steny Hoyer, Minority Whip Steve Scalise, House Appropriations Committee Chair Nita Lowey, and House Appropriations Committee Ranking Member Kay Granger.

Twenty-seven organizations signed the joint letter, including Fight for the Future, the Electronic Frontier Foundation, the American Civil Liberties Union (ACLU), the American-Arab Anti-Discrimination Committee, the Center for Media Justice, the Project On Government Oversight, and others. Read the entire letter.

Earlier this month, a structural and civil engineer cited several reasons why a physical wall won't work and would be vastly more expensive than the $5.7 billion requested.

Clearly, there are distinct advantages and disadvantages to each of the border-protection solutions the House and President are considering. It is a complex problem. The advantages and disadvantages of all proposals need to be clear, transparent, and understood by taxpayers prior to any final decisions.


Survey: Users Don't Understand Facebook's Advertising System. Some Disagree With Its Classifications

Most people know that many companies collect data about their online activities. Based upon the data collected, companies classify users for a variety of reasons and purposes. Do users agree with these classifications? Do the classifications accurately describe users' habits, interests, and activities?

To answer these questions, the Pew Research Center surveyed users of Facebook. Why Facebook? Besides being the most popular social media platform in the United States, it collects:

"... a wide variety of data about their users’ behaviors. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to allow advertisers to target ads... But how well do Americans understand these algorithm-driven classification systems, and how much do they think their lives line up with what gets reported about them?"

The findings are significant. First:

"Facebook makes it relatively easy for users to find out how the site’s algorithm has categorized their interests via a “Your ad preferences” page. Overall, however, 74% of Facebook users say they did not know that this list of their traits and interests existed until they were directed to their page as part of this study."

So, almost three-quarters of Facebook users surveyed don't know what data Facebook has collected about them, nor how to view it (let alone how to edit it or opt out of the ad-targeting classifications). According to Wired magazine, Facebook's "Your Ad Preferences" page:

"... can be hard to understand if you haven’t looked at the page before. At the top, Facebook displays “Your interests.” These groupings are assigned based on your behavior on the platform and can be used by marketers to target you with ads. They can include fairly straightforward subjects, like “Netflix,” “Graduate school,” and “Entrepreneurship,” but also more bizarre ones, like “Everything” and “Authority.” Facebook has generated an enormous number of these categories for its users. ProPublica alone has collected over 50,000, including those only marketers can see..."

Now, back to the Pew survey. After survey participants viewed their Ad Preferences page:

"A majority of users (59%) say these categories reflect their real-life interests, while 27% say they are not very or not at all accurate in describing them. And once shown how the platform classifies their interests, roughly half of Facebook users (51%) say they are not comfortable that the company created such a list."

So, about half of the persons surveyed use a site whose data collection practices make them uncomfortable. Not good. Second, substantial groups said Facebook's classifications were not accurate:

"... about half of Facebook users (51%) are assigned a political “affinity” by the site. Among those who are assigned a political category by the site, 73% say the platform’s categorization of their politics is very or somewhat accurate, while 27% say it describes them not very or not at all accurately. Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately..."

So, significant numbers of users disagree with the political classifications Facebook assigned to their profiles. Third, it's not only politics:

"... Facebook also lists a category called “multicultural affinity”... this listing is meant to designate a user’s “affinity” with various racial and ethnic groups, rather than assign them to groups reflecting their actual race or ethnic background. Only about a fifth of Facebook users (21%) say they are listed as having a “multicultural affinity.” Overall, 60% of users who are assigned a multicultural affinity category say they do in fact have a very or somewhat strong affinity for the group to which they are assigned, while 37% say their affinity for that group is not particularly strong. Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them."

The survey included a nationally representative sample of 963 Facebook users ages 18 and older from the United States. The survey was conducted September 4 to October 1, 2018. Read the entire survey at the Pew Research Center site.

What can consumers conclude from this survey? Social media users should understand that all social sites, and especially mobile apps, collect data about you, and then make judgments... classifications... about you. (Remember, some Samsung phone owners were unable to delete the preinstalled Facebook mobile app. And, everyone wants your geolocation data.) Use any tools the sites provide to edit or adjust your ad preferences to match your interests. Adjust the privacy settings on your profile to limit data sharing as much as possible.

Last, an important reminder: while Facebook users can edit their ad preferences and can opt out of the ad-targeting classifications, they cannot completely avoid ads. Facebook will still display less-targeted ads. That is simply Facebook being Facebook, making money. The same probably applies to other social sites, too.

What are your opinions of the survey's findings?