
Behind the Scenes, Health Insurers Use Cash and Gifts to Sway Which Benefits Employers Choose

[Editor's note: today's guest post, by reporters at ProPublica, explores business practices within the health insurance industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

The pitches to the health insurance brokers are tantalizing.

“Set sail for Bermuda,” says insurance giant Cigna, offering top-selling brokers five days at one of the island’s luxury resorts.

Health Net of California’s pitch is not subtle: A smiling woman in a business suit rides a giant $100 bill like it’s a surfboard. “Sell more, enroll more, get paid more!” In some cases, its ad says, a broker can “power up” the bonus to $150,000 per employer group.

Not to be outdone, New York’s EmblemHealth promises top-selling brokers “the chance of a lifetime”: going to bat against the retired legendary New York Yankees pitcher Mariano Rivera. In another offer, the company, which bills itself as the state’s largest nonprofit plan, focuses on cash: “The more subscribers you enroll … the bigger the payout.” Bonuses, it says, top out at $100,000 per group, and “there’s no limit to the number of bonuses you can earn.”

Such incentives sound like typical business tactics, until you understand who ends up paying for them: the employers who sign up with the insurers — and, of course, their employees.

Human resource directors often rely on independent health insurance brokers to guide them through the thicket of costly and confusing benefit options offered by insurance companies. But what many don’t fully realize is how the health insurance industry steers the process through lucrative financial incentives and commissions. Those enticements, critics say, don’t reward brokers for finding their clients the most cost-effective options.

Here’s how it typically works: Insurers pay brokers a commission for the employers they sign up. That fee is usually a healthy 3 to 6 percent of the total premium. That could be about $50,000 a year on the premiums of a company with 100 people, payable for as long as the plan is in place. That’s $50,000 a year for a single client. And as the client pays more in premiums, the broker’s commission increases.

Commissions can be even higher, up to 40 or 50 percent of the premium, on supplemental plans that employers can buy to cover employees’ dental costs, cancer care or long-term hospitalization.

Those commissions come from the insurers. But the cost is built into the premiums the employer and employees pay for the benefit plan.
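
To put those percentages in concrete dollars, here is a minimal back-of-the-envelope sketch in Python. The $10,000-per-employee premium and the 5 percent rate are illustrative assumptions drawn from the ranges quoted above, not figures from any actual plan.

    def yearly_commission(premium_per_employee, employees, rate):
        # Commission paid out of the group's total annual premium.
        return premium_per_employee * employees * rate

    # A hypothetical 100-person employer paying about $10,000 per employee per year,
    # with a broker commission of 5 percent (the middle of the 3-to-6 percent range):
    print(yearly_commission(10_000, 100, 0.05))   # roughly $50,000 a year

    # If premiums rise 10 percent, the commission rises with them:
    print(yearly_commission(11_000, 100, 0.05))   # roughly $55,000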

Now, layer on top of that the additional bonuses that brokers can earn from some insurers. The offers, some marked “confidential,” are easy to find on the websites of insurance companies and broker agencies. But many brokers say the bonuses are not disclosed to employers unless they ask. These bonuses, too, are indirectly included in the overall cost of health plans.

These industry payments can’t help but influence which plans brokers highlight for employers, said Eric Campbell, director of research at the University of Colorado Center for Bioethics and Humanities.

“It’s a classic conflict of interest,” Campbell said.

There’s “a large body of virtually irrefutable evidence,” Campbell said, that shows drug company payments to doctors influence the way they prescribe. “Denying this effect is like denying that gravity exists.” And there’s no reason, he said, to think brokers are any different.

Critics say the setup is akin to a single real estate agent representing both the buyer and seller in a home sale. A buyer would not expect the seller’s agent to negotiate the lowest price or highlight all the clauses and fine print that add unnecessary costs.

“If you want to draw a straight conclusion: It has been in the best interest of a broker, from a financial point of view, to keep that premium moving up,” said Jeffrey Hogan, a regional manager in Connecticut for a national insurance brokerage and one of a band of outliers in the industry pushing for changes in the way brokers are paid.

As the average cost of employer-sponsored health insurance premiums has tripled in the past two decades, to almost $20,000 for a family of four, a small but growing number of brokers are questioning their role in the rise in costs. They’ve started negotiating flat fees paid directly by the employers. The fee may be a similar amount to the commission they could have earned, but since it doesn’t come from the insurer, Hogan said, it “eliminates the conflict of interest” and frees brokers to consider unorthodox plans tailored to individual employers’ needs. Any bonuses could also be paid directly by the employer.

Brokers provide a variety of services to employers. They present them with benefits options, enroll them in plans and help them with claims and payment issues. Insurance industry payments to brokers are not illegal and have been accepted as a cost of doing business for generations. When brokers are paid directly by employers, the results can be mutually beneficial.

In 2017, David Contorno, the broker for Palmer Johnson Power Systems, a heavy-equipment distribution company in Madison, Wisconsin, saved the firm so much money while also improving coverage that Palmer Johnson took all 120 employees on an all-expenses paid trip to Vail, Colorado, where they rode four-wheelers and went whitewater rafting. In 2018, the company saved money again and rewarded each employee with a health care “dividend” of about $700.

Contorno is not being altruistic. He earned a flat fee, plus a bonus based on how much the plan saved, with the total equal to roughly what he would have made otherwise.

Craig Parsons, who owns Palmer Johnson, said the new payment arrangement puts pressure on the broker to prevent overspending. His previous broker, he said, didn’t have any real incentive to help him reduce costs. “We didn’t have an advocate,” he said. “We didn’t have someone truly watching out for our best interests.” (The former broker acknowledged there were some issues, but said it had provided a valuable service.)

Working for Employers, Not Insurers

Contorno is part of a group called the Health Rosetta, which certifies brokers who agree to follow certain best practices related to health benefits, including eliminating any hidden agreements that raise the cost of employee benefits. To be certified, brokers (who refer to themselves as “benefits advisers”) must disclose all their direct and indirect sources of income — bonuses, commissions, consulting fees, for example — and who pays them to the employers they advise.

Dave Chase, a Washington businessman, created Rosetta in 2016 after working with tech health startups and launching Microsoft’s services to the health industry. He said he saw an opportunity to transform the health care industry by changing the way employers buy benefits. He said brokers have the most underestimated role in the health care system. “The good ones are worth their weight in gold,” Chase said. “But most of the benefit brokers are pitching themselves as buyer’s agents, but they are paid like a seller’s agent.”

There are only 110 Rosetta certified brokers in an industry of more than 100,000, although others who follow a similar philosophy consider themselves part of the movement.

From the employer’s point of view, one big advantage of working with brokers like those certified by Rosetta is transparency. Currently, there’s no industry standard for how brokers must disclose their payments from insurance companies, so many employers may have no idea how much brokers are making from their business, said Marcy Buckner, vice president of government affairs for the National Association of Health Underwriters, the trade group for health benefits brokers. And thus, she said, employers have no clear sense of the conflicts of interest that may color their broker’s advice to them.

Buckner’s group encourages brokers to bill employers for their commissions directly to eliminate any conflict of interest, but, she said, it’s challenging to shift the culture. Nevertheless, Buckner said she doesn’t think payments from insurers undermine the work done by brokers, who must act in their clients’ best interests or risk losing them. “They want to have these clients for a really long term,” Buckner said.

Industrywide, transparency is not the standard. ProPublica sent a list of questions to 10 of the largest broker agencies, some worth $1 billion or more, including Marsh & McLennan, Aon and Willis Towers Watson, asking if they took bonuses and commissions from insurance companies, and whether they disclosed them to their clients. Four firms declined to answer; the others never responded despite repeated requests.

Insurers also don’t seem to have a problem with the payments. In 2017, Health Care Service Corporation, which oversees Blue Cross Blue Shield plans serving 15 million members in five states, disclosed in its corporate filings that it spent $816 million on broker bonuses and commissions, about 3 percent of its revenue that year. A company spokeswoman acknowledged in an email that employers are actually the ones who pay those fees; the money is just passed through the insurer. “We do not believe there is a conflict of interest,” she said.

In one email to a broker reviewed by ProPublica, Blue Cross Blue Shield of North Carolina called the bonuses it offered — up to $110,000 for bringing in a group of more than 1,000 — the “cherry on top.” The company told ProPublica that such bonuses are standard and that it always encourages brokers to “match their clients with the best product for them.”

Cathryn Donaldson, spokeswoman for the trade group America’s Health Insurance Plans, said in an email that brokers are incentivized “above all else” to serve their clients. “Guiding employees to a plan that offers quality, affordable care will help establish their business and reputation in the industry,” she said.

Some insurers’ pitches, however, clearly reward brokers’ devotion to them, not necessarily their clients. “To thank you for your loyalty to Humana, we want to extend our thanks with a bonus,” says one brochure pitched to brokers online. Horizon Blue Cross Blue Shield of New Jersey offered brokers a bonus as “a way to express our appreciation for your support.” Empire Blue Cross told brokers it would deliver new bonuses “for bringing in large group business ... and for keeping it with us.”

Delta Dental of California’s pitches appear to go one step further, rewarding brokers as “key members of our Small Business Program team.”

ProPublica reached out to all the insurers named in this story, and many didn’t respond. Cigna said in a statement that it offers affordable, high-quality benefit plans and doesn’t see a problem with providing incentives to brokers. Delta Dental emphasized in an email it follows applicable laws and regulations. And Horizon Blue Cross said it gives employers the option of how to pay brokers and discloses all compensation.

The effect of such financial incentives is troubling, said Michael Thompson, president of the National Alliance of Healthcare Purchaser Coalitions, which represents groups of employers who provide benefits. He said brokers don’t typically undermine their clients in a blatant way, but their own financial interests can create a “cozy relationship” that may make them wary of “stirring the pot.”

Employers should know how their brokers are paid, but health care is complex, so they are often not even aware of what they should ask, Thompson said. Employers rely on brokers to be a “trusted adviser,” he added. “Sometimes that trust is warranted and sometimes it’s not.”

Bad Faith Tactics

When officials in Morris County, New Jersey, sought a new broker to manage the county’s benefits, they specified that applicants could not take insurance company payouts related to their business. Instead, the county would pay the broker directly to ensure an unbiased search for the best benefits. The county hired Frenkel Benefits, a New York City broker, in February 2015.

Now, the county is suing the firm in Superior Court of New Jersey, accusing it of double-dipping. In addition to the fees from the county, the broker is accused of collecting a $235,000 commission in 2016 from the insurance giant Cigna. The broker got an additional $19,206 the next year, the lawsuit claims. To get the commission, one of the agency’s brokers allegedly certified, falsely, that the county would be told about the payment, the suit said. The county claims it was never notified and never approved the commission.

The suit also alleges the broker “purposefully concealed” the costs of switching the county’s health coverage to Cigna, which included administrative fees of $800,000.

In an interview, John Bowens, the county’s attorney, said the county had tried to guard against the broker being swayed by a large commission from an insurer. The brokers at Frenkel did not respond to requests for comment. The firm has not filed a response to the claims in the lawsuit. Steven Weisman, one of the attorneys representing Frenkel, declined to comment.

Sometimes employers don’t find out their broker didn’t get them the best deal until they switch to another broker.

Josh Butler, a broker in Amarillo, Texas, who is also certified by Rosetta, recently took on a company of about 200 employees that had been signed up for a plan with high out-of-pocket costs. The previous broker had enrolled the company in a supplemental plan that paid workers $1,000 if they were admitted to the hospital, to help pay for uncovered costs. But Butler said the premiums for this coverage cost about $100,000 a year, and only nine employees had used it. It would have been far cheaper to pay for the benefit directly, without insurance.

Butler suspects the previous broker encouraged the hospital benefits because they came with a sizable commission. He sells the same type of policies for the same insurer, so he knows the plan came with a 40 percent commission in the first year. That means about $40,000 of the employer’s premium went into the broker’s pocket.
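
The gap between what such a plan costs and what it pays out is plain arithmetic. Here is a rough sketch using the figures Butler describes; the premium, commission rate, claim count and benefit amount come from the article, and the rest is simple multiplication.

    # Hypothetical hospital-indemnity plan, using the figures Butler cites.
    annual_premiums = 100_000           # what the employer's plan cost per year
    first_year_commission_rate = 0.40   # 40 percent commission in year one
    benefits_paid = 9 * 1_000           # nine employees each received a $1,000 benefit

    commission = annual_premiums * first_year_commission_rate
    print(f"Broker commission, year one: ${commission:,.0f}")    # $40,000
    print(f"Benefits actually paid out:  ${benefits_paid:,.0f}")  # $9,000
    # Paying the same $1,000 benefit directly, without insurance, would have cost
    # the employer a small fraction of the premium.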

Butler and other brokers said the insurance companies offer huge commissions to promote lucrative supplemental plans like dental, vision and disability. The total commissions on a supplemental cancer plan one insurer offered come to 57 percent, Butler said.

These massive year-one commissions lead some unscrupulous brokers to “churn” their supplemental benefits, Butler said, convincing employers to jump between insurers every year for the same type of benefits. The insurers don’t mind, Butler said, because the employers end up paying the tab. Brokers may also “product dump,” Butler said, which means pushing employers to sign employees up for multiple types of voluntary supplemental coverage, which brings them a hefty commission on each product.

Carl Schuessler, a broker in Atlanta who is certified by the Rosetta group, said he likes to help employers find out how much profit insurers are making on their premiums. Some states require insurers to provide the information, so when he took over the account for The Gasparilla Inn, an island resort on the Gulf Coast of Florida, he obtained the report for the company’s three most recent years of coverage with UnitedHealthcare. He learned that the insurer had paid out in claims only about 65 percent of what the Inn had paid in premiums.

But in those same years the insurer had increased the Inn’s premiums, said Glenn Price, its chief financial officer. “It’s tough to swallow” increases to our premium when the insurer is making healthy profits, Price said. UnitedHealthcare declined to comment.

Schuessler, who is paid by the Inn, helped it transition to a self-funded plan, meaning the company bears the cost of the health care bills. Price said the Inn went from spending about $1 million a year to about $700,000, with lower costs and better benefits for employees, and no increases in three years.
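
The loss-ratio figure Schuessler obtained comes down to one division, and the Inn’s later savings to one subtraction. Here is a minimal sketch, with the dollar amounts rounded to the figures quoted above:

    # Loss ratio: the share of premium dollars the insurer returned as paid claims.
    premiums_paid = 1_000_000    # roughly what the Inn spent per year on coverage
    claims_paid = 650_000        # about 65 percent of premiums paid out in claims

    print(f"Loss ratio: {claims_paid / premiums_paid:.0%}")   # 65%

    # After switching to a self-funded plan:
    self_funded_cost = 700_000
    savings = premiums_paid - self_funded_cost
    print(f"Yearly savings: ${savings:,.0f} ({savings / premiums_paid:.0%})")   # $300,000 (30%)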

A Need for Regulation

Despite the important function of brokers as middlemen, there’s been scant examination of their role in the marketplace.

Don Reiman, head of a Boise, Idaho, broker agency and a financial planner, said the federal government should require health benefit brokers to adhere to the same regulation he sees in the finance arena. The Employee Retirement Income Security Act, better known as ERISA, requires retirement plan advisers to disclose to employers all compensation that’s related to their plans, exposing potential conflicts.

The Department of Labor requires certain employers that provide health benefits to file documents every year about their plans, including payments to brokers. The department posts the information on its website.

But the data is notoriously messy. After a 2012 report found 23 percent of the forms contained errors, there was a proposal to revamp the data collection in 2016. It is unclear if that work was done, but ProPublica tried to analyze the data and found it incomplete or inaccurate. The data shortcomings mean employers have no real ability to compare payments to brokers.

About five years ago, Contorno, one of the leaders in the Rosetta movement, was blithely happy with the status quo: He had his favored insurers and could usually find traditional plans that appeared to fit his clients’ needs.

Today, he regrets his role in driving up employers’ health costs. One of his LinkedIn posts compares the industry’s acceptance of control by insurance companies to Stockholm Syndrome, the feelings of trust a hostage would have toward a captor.

Contorno began advising Palmer Johnson in 2016. When he took over, the company had a self-funded plan and its claims were reviewed by an administrator owned by its broker, Iowa-based Cottingham & Butler. Contorno brought in an independent claims administrator who closely scrutinized the claims and provided detailed cost information. The switch led to significant savings, said Parsons, the company owner. “It opened our eyes to what a good claims review process can mean to us,” he said.

Brad Plummer, senior vice president for employee benefits for Cottingham & Butler, acknowledged “things didn’t go swimmingly” with the claims company. But overall his company provided valuable service to Palmer Johnson, he said.

Contorno also provided resources to help Palmer Johnson employees find high-quality, low-cost providers, and the company waived any out-of-pocket expense as an incentive to get employees to see those medical providers. If a patient needed an out-of-network procedure, the price was negotiated up front to avoid massive surprise bills to the plan or the patient. The company also contracted with a vendor for drug coverage that does not use the secret rebates and hidden pricing schemes that are common in the industry. Palmer Johnson’s yearly health care costs per employee dropped by more than 25 percent, from about $11,252 in 2015 to $8,288 in 2018. That’s lower than they’d been in 2011, Contorno said.

“Now that my compensation is fully tied to meeting the clients’ goals, that is my sole objective,” he said. “Your broker works for whoever is cutting them the check.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


New Vermont Law Regulating Data Brokers Drives 120 Businesses From The Shadows

In May 2018, Vermont became the first (and so far only) state in the nation to enact a law regulating data brokers. According to the Vermont Secretary of State, a data broker is defined as:

"... a business, or unit or units of a business, separately or together, that knowingly collects and sells or licenses to third parties the brokered personal information of a consumer with whom the business does not have a direct relationship."

The Vermont Secretary of State's website contains links to the new law and more. This new law is important for several reasons. First, many businesses operate as data brokers. Second, consumers historically haven't known who has information about them, nor how to review their profiles for accuracy. Third, consumers haven't been able to opt out of the data collection. Fourth, if you don't know who the data brokers are, you can't hold them accountable when they fail to protect your data. According to Vermont law:

"2447. Data broker duty to protect information; standards; technical requirements (a) Duty to protect personally identifiable information. (1) A data broker shall develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate... identification and assessment of reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of any electronic, paper, or other records containing personally identifiable information, and a process for evaluating and improving, where necessary, the effectiveness of the current safeguards for limiting such risks... taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect personally identifiable information consistent with applicable law; and (B) requiring third-party service providers by contract to implement and maintain appropriate security measures for personally identifiable information..."

Before this law, there was little to no oversight, no regulation, and no responsibility for data brokers to adequately protect sensitive data about consumers. A federal bill proposed in 2014 went nowhere in the U.S. Senate. You can assume that many data brokers operate in your state, too, since there's plenty of money to be made in the industry.

Portions of the new Vermont law went into effect in May, and the remainder went into effect on January 1, 2019. What has happened since then? Fast Company reported:

"So far, 121 companies have registered, according to data from the Vermont secretary of state’s office... The list of active companies includes divisions of the consumer data giant Experian, online people search engines like Spokeo and Spy Dialer, and a variety of lesser-known organizations that do everything from help landlords research potential tenants to deliver marketing leads to the insurance industry..."

The Fast Company site lists the 120 (so far) registered data brokers in Vermont. Regular readers of this blog will recognize some of the data brokers by name, since prior posts covered Acxiom, Equifax, Experian, LexisNexis, the NCTUE, Oracle, Spokeo, TransUnion, and others. (Yes, both credit reporting agencies and social media firms also operate as data brokers. Some states do it, too.) Reportedly, many privacy advocates support the new law:

"There’s companies that I’ve never heard of before," says Zachary Tomanelli, communications and technology director at the Vermont Public Interest Research Group, which supported the law. "It’s often very cumbersome [for consumers] to know where the places are that you have to go, and how you opt out."

Predictably, the industry has opposed (and continues to oppose) the legislation:

"A coalition of industry groups like the Internet Association, the Association of National Advertisers, and the National Association of Professional Background Screeners, as well as now registered data brokers such as Experian, Acxiom, and IHS Markit, said the law was unnecessary... Requiring companies to disclose breaches of largely public data could be burdensome for businesses and needlessly alarming for consumers, they argue... Other companies, like Axciom, have complained that the law establishes inconsistent boundaries around personal data used by third parties, and the first-party data used by companies like Facebook and Google."

So, none of these companies wants consumers to own and control the data -- the property -- that describes them. Property rights matter. To learn more, read about data brokers at the Privacy Rights Clearinghouse site.

Kudos to Vermont lawmakers for ensuring more disclosures and transparency from the industry. Readers may ask their elected officials why their state has not taken similar action. What are your opinions of the new Vermont law?


Sackler Embraced Plan to Conceal OxyContin’s Strength From Doctors, Sealed Testimony Shows

[Editor's note: today's guest post explores issues within the pharmaceuticals and drug industry. It is reprinted with permission.]

By David Armstrong, ProPublica

In May 1997, the year after Purdue Pharma launched OxyContin, its head of sales and marketing sought input on a key decision from Dr. Richard Sackler, a member of the billionaire family that founded and controls the company. Michael Friedman told Sackler that he didn’t want to correct the false impression among doctors that OxyContin was weaker than morphine, because the myth was boosting prescriptions — and sales.

“It would be extremely dangerous at this early stage in the life of the product,” Friedman wrote to Sackler, “to make physicians think the drug is stronger or equal to morphine….We are well aware of the view held by many physicians that oxycodone [the active ingredient in OxyContin] is weaker than morphine. I do not plan to do anything about that.”

“I agree with you,” Sackler responded. “Is there a general agreement, or are there some holdouts?”

Ten years later, Purdue pleaded guilty in federal court to understating the risk of addiction to OxyContin, including failing to alert doctors that it was a stronger painkiller than morphine, and agreed to pay $600 million in fines and penalties. But Sackler’s support of the decision to conceal OxyContin’s strength from doctors — in email exchanges both with Friedman and another company executive — was not made public.

The email threads were divulged in a sealed court document that ProPublica has obtained: an Aug. 28, 2015, deposition of Richard Sackler. Taken as part of a lawsuit by the state of Kentucky against Purdue, the deposition is believed to be the only time a member of the Sackler family has been questioned under oath about the illegal marketing of OxyContin and what family members knew about it. Purdue has fought a three-year legal battle to keep the deposition and hundreds of other documents secret, in a case brought by STAT, a Boston-based health and medicine news organization; the matter is currently before the Kentucky Supreme Court.

Meanwhile, interest in the deposition’s contents has intensified, as hundreds of cities, counties, states and tribes have sued Purdue and other opioid manufacturers and distributors. A House committee requested the document from Purdue last summer as part of an investigation of drug company marketing practices.

In a statement, Purdue stood behind Sackler’s testimony in the deposition. Sackler, it said, “supports that the company accurately disclosed the potency of OxyContin to healthcare providers.” He “takes great care to explain” that the drug’s label “made clear that OxyContin is twice as potent as morphine,” Purdue said.

Still, Purdue acknowledged, it had made a “determination to avoid emphasizing OxyContin as a powerful cancer pain drug,” out of “a concern that non-cancer patients would be reluctant to take a cancer drug.”

The company, which said it was also speaking on behalf of Sackler, deplored what it called the “intentional leak of the deposition” to ProPublica, calling it “a clear violation of the court’s order” and “regrettable.”

Much of the questioning of Sackler in the 337-page deposition focused on Purdue’s marketing of OxyContin, especially in the first five years after the drug’s 1996 launch. Aggressive marketing of OxyContin is blamed by some analysts for fostering a national crisis that has resulted in 200,000 overdose deaths related to prescription opioids since 1999.

Taken together with a Massachusetts complaint made public last month against Purdue and eight Sacklers, including Richard, the deposition underscores the family’s pivotal role in developing the business strategy for OxyContin and directing the hiring of an expanded sales force to implement a plan to sell the drug at ever-higher doses. Documents show that Richard Sackler was especially involved in the company’s efforts to market the drug, and that he pushed staff to pursue OxyContin’s deregulation in Germany. The son of a Purdue co-founder, he began working at Purdue in 1971 and has been at various times the company’s president and co-chairman of its board.

In a 1996 email introduced during the deposition, Sackler expressed delight at the early success of OxyContin. “Clearly this strategy has outperformed our expectations, market research and fondest dreams,” he wrote. Three years later, he wrote to a Purdue executive, “You won’t believe how committed I am to make OxyContin a huge success. It is almost that I dedicated my life to it. After the initial launch phase, I will have to catch up with my private life again.”

During his deposition, Sackler defended the company’s marketing strategies — including some Purdue had previously acknowledged were improper — and offered benign interpretations of emails that appeared to show Purdue executives or sales representatives minimizing the risks of OxyContin and its euphoric effects. He denied that there was any effort to deceive doctors about the potency of OxyContin and argued that lawyers for Kentucky were misconstruing words such as “stronger” and “weaker” used in email threads.

The term “stronger” in Friedman’s email, Sackler said, “meant more threatening, more frightening. There is no way that this intended or had the effect of causing physicians to overlook the fact that it was twice as potent.”

Emails introduced in the deposition show Sackler’s hidden role in key aspects of the 2007 federal case in which Purdue pleaded guilty. A 19-page statement of facts that Purdue admitted to as part of the plea deal, and which prosecutors said contained the “main violations of law revealed by the government’s criminal investigation,” referred to Friedman’s May 1997 email to Sackler about letting the doctors’ misimpression stand. It did not identify either man by name, attributing the statements to “certain Purdue supervisors and employees.”

Friedman, who by then had risen to chief executive officer, was one of three Purdue executives who pleaded guilty to a misdemeanor of “misbranding” OxyContin. No members of the Sackler family were charged or named as part of the plea agreement. The Massachusetts lawsuit alleges that the Sackler-controlled Purdue board voted that the three executives, but no family members, should plead guilty as individuals. After the case concluded, the Sacklers were concerned about maintaining the allegiance of Friedman and another of the executives, according to the Massachusetts lawsuit. To protect the family, Purdue paid the two executives at least $8 million, that lawsuit alleges.

“The Sacklers spent millions to keep the loyalty of people who knew the truth,” the complaint filed by the Massachusetts attorney general alleges.

The Kentucky deposition’s contents will likely fuel the growing protests against the Sacklers, including pressure to strip the family’s name from cultural and educational institutions to which it has donated. The family has been active in philanthropy for decades, giving away hundreds of millions of dollars. But the source of its wealth received little attention until recent years, in part due to a lack of public information about what the family knew about Purdue’s improper marketing of OxyContin and false claims about the drug’s addictive nature.

Although Purdue has been sued hundreds of times over OxyContin’s marketing, the company has settled many of these cases, and almost never gone to trial. As a condition of settlement, Purdue has often required a confidentiality agreement, shielding millions of records from public view.

That is what happened in Kentucky. In December 2015, the state settled its lawsuit against Purdue, alleging that the company created a “public nuisance” by improperly marketing OxyContin, for $24 million. The settlement required the state attorney general to “completely destroy” documents in its possession from Purdue. But that condition did not apply to records sealed in the circuit court where the case was filed. In March 2016, STAT filed a motion to make those documents public, including Sackler’s deposition. The Kentucky Court of Appeals last year upheld a lower court ruling ordering the deposition and other sealed documents be made public. Purdue asked the state Supreme Court to review the decision, and both sides recently filed briefs. Protesters outside Kentucky’s Capitol last week waved placards urging the court to release the deposition.

Sackler family members have long constituted the majority of Purdue’s board, and company profits flow to trusts that benefit the extended family. During his deposition, which took place over 11 hours in a law office in Louisville, Kentucky, Richard Sackler said “I don’t know” more than 100 times, including when he was asked how much his family had made from OxyContin sales. He acknowledged it was more than $1 billion, but when asked if they had made more than $5 billion, he said, “I don’t know.” Asked if it was more than $10 billion, he replied, “I don’t think so.”

By 2006, OxyContin’s “profit contribution” to Purdue was $4.7 billion, according to a document read at the deposition. From 2007 to 2018, the Sackler family received more than $4 billion in payouts from Purdue, according to the Massachusetts lawsuit.

During the deposition, Sackler was confronted with his email exchanges with company executives about Purdue’s decision not to correct the misperception among many doctors that OxyContin was weaker than morphine. The company viewed this as good news because the softer image of the drug was helping drive sales in the lucrative market for treating conditions like back pain and arthritis, records produced at the deposition show.

Designed to gradually release medicine into the bloodstream, OxyContin allows patients to take fewer pills than they would with other, quicker-acting pain medicines, and its effect lasts longer. But to accomplish these goals, more narcotic is packed into an OxyContin pill than competing products. Abusers quickly figured out how to crush the pills and extract the large amount of narcotic. They would typically snort it or dissolve it into liquid form to inject.

The pending Massachusetts lawsuit against Purdue accuses Sackler and other company executives of determining that “doctors had the crucial misconception that OxyContin was weaker than morphine, which led them to prescribe OxyContin much more often.” It also says that Sackler “directed Purdue staff not to tell doctors the truth,” for fear of reducing sales. But it doesn’t reveal the contents of the email exchange with Friedman, the link between that conversation and the 2007 plea agreement, or the back-and-forth in the deposition.

A few days after the email exchange with Friedman in 1997, Sackler had an email conversation with another company official, Michael Cullen, according to the deposition. “Since oxycodone is perceived as being a weaker opioid than morphine, it has resulted in OxyContin being used much earlier for non-cancer pain,” Cullen wrote to Sackler. “Physicians are positioning this product where Percocet, hydrocodone and Tylenol with codeine have been traditionally used.” Cullen then added, “It is important that we be careful not to change the perception of physicians toward oxycodone when developing promotional pieces, symposia, review articles, studies, et cetera.”

“I think that you have this issue well in hand,” Sackler responded.

Friedman and Cullen could not be reached for comment.

Asked at his deposition about the exchanges with Friedman and Cullen, Sackler didn’t dispute the authenticity of the emails. He said the company was concerned that OxyContin would be stigmatized like morphine, which he said was viewed only as an “end of life” drug that was frightening to people.

“Within this time it appears that people had fallen into a habit of signifying less frightening, less threatening, more patient acceptable as under the rubric of weaker or more frightening, more — less acceptable and less desirable under the rubric or word ‘stronger,’” Sackler said at his deposition. “But we knew that the word ‘weaker’ did not mean less potent. We knew that the word ‘stronger’ did not mean more potent.” He called the use of those words “very unfortunate.”

He said Purdue didn’t want OxyContin “to be polluted by all of the bad associations that patients and healthcare givers had with morphine.”

In his deposition, Sackler also defended sales representatives who, according to the statement of facts in the 2007 plea agreement, falsely told doctors during the 1996-2001 period that OxyContin did not cause euphoria or that it was less likely to do so than other opioids. This euphoric effect experienced by some patients is part of what can make OxyContin addictive. Yet, asked about a 1998 note written by a Purdue salesman, who indicated that he “talked of less euphoria” when promoting OxyContin to a doctor, Sackler argued it wasn’t necessarily improper.

“This was 1998, long before there was an Agreed Statement of Facts,” he said.

The lawyer for the state asked Sackler: “What difference does that make? If it’s improper in 2007, wouldn’t it be improper in 1998?”

“Not necessarily,” Sackler replied.

Shown another sales memo, in which a Purdue representative reported telling a doctor that “there may be less euphoria” with OxyContin, Sackler responded, “We really don’t know what was said.” After further questioning, Sackler said the claim that there may be less euphoria “could be true, and I don’t see the harm.”

The same issue came up regarding a note written by a Purdue sales representative about one doctor: “Got to convince him to counsel patients that they won’t get buzzed as they will with short-acting” opioid painkillers. Sackler defended these comments as well. “Well, what it says here is that they won’t get a buzz. And I don’t think that telling a patient ‘I don’t think you’ll get a buzz’ is harmful,” he said.

Sackler added that the comments from the representative to the doctor “actually could be helpful, because many patients won’t get a buzz, and if he would like to know if they do, he might have had a good medical reason for wanting to know that.”

Sackler said he didn’t believe any of the company sales people working in Kentucky engaged in the improper conduct described in the federal plea deal. “I don’t have any facts to inform me otherwise,” he said.

Purdue said that Sackler’s statements in his deposition “fully acknowledge the wrongful actions taken by some of Purdue’s employees prior to 2002,” as laid out in the 2007 plea agreement. Both the company and Sackler “fully agree” with the facts laid out in that case, Purdue said.

The deposition also reveals that Sackler pushed company officials to find out if German officials could be persuaded to loosen restrictions on the selling of OxyContin. In most countries, narcotic pain relievers are regulated as “controlled” substances because of the potential for abuse. Sackler and other Purdue executives discussed the possibility of persuading German officials to classify OxyContin as an uncontrolled drug, which would likely allow doctors to prescribe the drug more readily — for instance, without seeing a patient. Fewer rules were expected to translate into more sales, according to company documents disclosed at the deposition.

One Purdue official warned Sackler and others that it was a bad idea. Robert Kaiko, who developed OxyContin for Purdue, wrote to Sackler, “If OxyContin is uncontrolled in Germany, it is highly likely that it will eventually be abused there and then controlled.”

Nevertheless, Sackler asked a Purdue executive in Germany for projections of sales with and without controls. He also wondered whether, if one country in the European Union relaxed controls on the drug, others might do the same. When finally informed that German officials had decided the drug would be controlled like other narcotics, Sackler asked in an email if the company could appeal. Told that wasn’t possible, he wrote back to an executive in Germany, “When we are next together we should talk about how this idea was raised and why it failed to be realized. I thought that it was a good idea if it could be done.”

Asked at the deposition about that comment, Sackler responded, “That’s what I said, but I didn’t mean it. I just wanted to be encouraging.” He said he really “was not in favor of” loosening OxyContin regulation and was simply being “polite” and “solicitous” of his own employee.

Near the end of the deposition — after showing Sackler dozens of emails, memos and other records regarding the marketing of OxyContin — a lawyer for Kentucky posed a fundamental question.

“Sitting here today, after all you’ve come to learn as a witness, do you believe Purdue’s conduct in marketing and promoting OxyContin in Kentucky caused any of the prescription drug addiction problems now plaguing the Commonwealth?” he asked.

Sackler replied, “I don’t believe so.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Federal Reserve Enforcement Action Against Banking Executives

Last month, the Federal Reserve Board (FRB) announced several notable enforcement actions. A February 5th press release discussed a:

"Consent Notice of Suspension and Prohibition against Fred Daibes, former Chairman of Mariner's Bancorp, Edgewater, New Jersey, for perpetuating a fraudulent loan scheme, according to a federal indictment."

The order against Daibes described the violations:

"... on October 30, 2018, a federal grand jury in the United States District Court for the District of New Jersey charged [Diabes] and an accomplice by indictment with one count conspiracy to misapply bank funds and to make false entries to deceive a financial institution and the FDIC, five counts of misapplying bank funds, six counts of making false entries to decide a financial institution and the FDIC, and one count of causing reliance on a false document to influence the FDIC... During the relevant time period, Mariner’s was subject to federal banking regulations that placed limits on the amount of money that the Bank could lend to a single borrower... the Indictment charges that in about January 2008 to December 2013, Daibes and others orchestrated a nominee loan scheme designed to circumvent the Lending Limits by ensuring that millions of dollars in loans made by the Bank (the “Nominee Loans”) flowed from the nominees to Daibes, while concealing Daibes’ beneficial interests in those loans from both the Bank and the FDIC. Daibes recruited nominees to make materially false and misleading statements and material omissions..."

The FRB and the U.S. Federal Deposit Insurance Corporation (FDIC) are two of several federal agencies which oversee and regulate the banking industry within the United States. The order bars Daibes from working within the banking industry.

Then, a February 7th FRB press release discussed a:

"Consent Prohibition against Alison Keefe, former employee of SunTrust Bank, Atlanta, Georgia, for violating bank overdraft policies for her own benefit."

The order against Keefe described the violations:

"... between September 2017 and May 2018, while employed as the manager of the Bank’s Hilltop Branch in Virginia Beach, Virginia, Keefe repeatedly overdrew her personal checking account at the Bank and instructed Bank staff, without authorization and contrary to Bank policies, to honor the overdrafts... Keefe’s misconduct described above constituted unsafe or unsound banking practices and demonstrated a reckless disregard for the safety and soundness of the Bank..."

Keefe was fired by the bank on July 12, 2018, and has repaid the bank. The order bars Keefe from working within the banking industry.

A February 21st press release discussed the agency's enforcement action against a former manager at J.P. Morgan Chase bank. The FRB:

"... permanently barred from the banking industry Timothy Fletcher, a former managing director at a non-bank subsidiary of J.P. Morgan Chase & Co. Fletcher consented to the prohibition, which includes allegations that he improperly administered a referral hiring program at the firm by offering internships and other employment opportunities to individuals referred by foreign officials, clients, and prospective clients in order to obtain improper business advantages for the firm. The FRB is also requiring Fletcher to cooperate in any pending or prospective enforcement action against other individuals who are or were affiliated with the firm. The firm was previously fined $61.9 million by the Board relating to this program. In addition, the Department of Justice and the Securities and Exchange Commission have also fined the firm."

The $61.9 million fine was levied against J.P. Morgan Chase in November 2016. Back then, the FRB found that the bank:

"... did not have adequate enterprise-wide controls to ensure that referred candidates were appropriately vetted and hired in accordance with applicable anti-bribery laws and firm policies. The Federal Reserve's order requires J.P. Morgan Chase to enhance the effectiveness of senior management oversight and controls relating to the firm's referral hiring practices and anti-bribery policies. The Federal Reserve is also requiring the firm to cooperate in its investigation of the individuals..."

Last month's order against Fletcher described the violations:

"... from at least 2008 until 2013 [Fletcher] engaged in unsafe and unsound practices, breaches of fiduciary duty, and violations of law related to his involvement in the Firm’s referral hiring program for the Asia-Pacific region investment bank, whereby candidates who were referred, directly or indirectly, by foreign government officials and existing or prospective commercial clients were offered internships, training, and other employment opportunities in order to obtain improper business advantages for the Firm... the Firm’s internal policies prohibited Firm employees from giving anything of value, including the offer of internships or training, to certain individuals, including relatives of public officials and relatives and associates of non-government corporate representatives, in order to obtain improper business advantages for the Firm..."

Kudos to the FRB for its enforcement action. Executives must suffer direct consequences for wrongdoing. After reading this, one wonders why direct consequences are not applied to executives within the social media industry. The behaviors there do just as much damage, and cross borders, too. What are your opinions?


Google To End Forced Arbitration For Employees

This news item caught my attention. Axios reported:

"Google will no longer require current and future employees to take disputes with the company to arbitration, it said on February 21st... After protests last year, the search giant ended mandatory arbitration for individual cases of sexual harassment or assault for employees. Employees have called for the practice to end in other cases of harassment and discrimination. Google appears to be meeting that demand for employees — but the change will not apply in the same blanket way to the many contractors, vendors and temporary employees it uses."

Reportedly, the change will take effect on March 21, 2019.


New Bill In California To Strengthen Its Consumer Privacy Law

Lawmakers in California have proposed legislation to strengthen the state's existing privacy law. California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson jointly announced Senate Bill 561, to improve the California Consumer Privacy Act (CCPA). According to the announcement:

"SB 561 helps improve the workability of the [CCPA] by clarifying the Attorney General’s advisory role in providing general guidance on the law, ensuring a level playing field for businesses that play by the rules, and giving consumers the ability to enforce their new rights under the CCPA in court... SB 561 removes requirements that the Office of the Attorney General provide, at taxpayers’ expense, businesses and private parties with individual legal counsel on CCPA compliance; removes language that allows companies a free pass to cure CCPA violations before enforcement can occur; and adds a private right of action, allowing consumers the opportunity to seek legal remedies for themselves under the act..."

Senator Jackson introduced the proposed legislation in the state Senate. Enacted in 2018, the CCPA will go into effect on January 1, 2020. The law prohibits businesses from discriminating against consumers for exercising their rights under the CCPA. The law also includes several key requirements businesses must comply with:

  • "Businesses must disclose data collection and sharing practices to consumers;
  • Consumers have a right to request their data be deleted;
  • Consumers have a right to opt out of the sale or sharing of their personal information; and
  • Businesses are prohibited from selling personal information of consumers under the age of 16 without explicit consent."

State Senator Jackson said in a statement:

"Our constitutional right to privacy continues to face unprecedented assault. Our locations, relationships, and interests are being tracked without our knowledge, bought and sold by corporate interests for their own economic gain and conducted in order to manipulate us... With the passage of the California Consumer Privacy Act last year, California took an important first step in protecting our fundamental right to privacy. SB 561 will ensure that the most significant privacy protections in the nation are effectively and robustly enforced."

Predictably, the pro-business lobby opposes the legislation. The Sacramento Bee reported:

"Punishment may be an incentive to increase compliance, but — especially where a law is new and vague — eliminating a right to cure does not promote compliance," the California Chamber of Commerce released in a statement on February 25. "SB 561 will not only hurt and possibly bankrupt small businesses in the state, it will kill jobs and innovation."

Sounds to me like fearmongering by the Chamber. Senator Jackson has it right. From the same Sacramento Bee article:

"If you don’t violate the law, you won’t get sued... To have very little recourse when these violations occur means that these large companies can continue with their inappropriate, improper behavior without any kind of recourse and sanction. In order to make sure they comply with the law, we need to make sure that people are able to exercise their rights."

Precisely. Two concepts seem to apply:

  • If you can't protect it, don't collect it (e.g.,  consumers' personal information), and
  • If the data collected is so value, compensate consumers for it

Regarding the second item, the National Law Review reported:

"Much has been made of California Governor Gavin Newsom’s recent endorsement of “data dividends”: payments to consumers for the use of their personal data. Common Sense Media, which helped pass the CCPA last year, plans to propose legislation in California to create such a dividend. The proposal has already proven popular with the public..."

Laws like the CCPA seem to be the way forward. Kudos to California for moving to better protect consumers. This proposed update puts teeth into existing law. Hopefully, other states will follow soon.


Facebook Admits More Teens Used Spyware App Than Previously Disclosed

Facebook has changed its story about how many teenagers used its Research app. When news first broke, Facebook said that less than 5 percent of the mobile app users were teenagers. On Thursday, TechCrunch reported that it:

"... has obtained Facebook’s unpublished February 21st response to questions about the Research program in a letter from Senator Mark Warner, who wrote to CEO Mark Zuckerberg that “Facebook’s apparent lack of full transparency with users – particularly in the context of ‘research’ efforts – has been a source of frustration for me.”

In the response from Facebook’s VP of US public policy Kevin Martin, the company admits that (emphasis ours) “At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5 percent of the people sharing data with us through this program were teens. Analysis shows that number is about 18 percent when you look at the complete lifetime of the program, and also add people who had become inactive and uninstalled the app.”

Three U.S. Senators sent a letter to Facebook on February 7th demanding answers. The TechCrunch article outlined other items in Facebook's changing story: i) it originally claimed its Research App didn't violate Apple's policies and we later learned it did; and ii) it claimed to have removed the app, but Apple later forced that removal.

What to make of Facebook's changing story? Again from TechCrunch:

"The contradictions between Facebook’s initial response to reporters and what it told Warner, who has the power to pursue regulation of the the tech giant, shows Facebook willingness to move fast and play loose with the truth... Facebook’s attempt to minimize the issue in the wake of backlash exemplifies the trend of of the social network’s “reactionary” PR strategy that employees described to BuzzFeed’s Ryan Mac. The company often views its scandals as communications errors rather than actual product screwups or as signals of deep-seeded problems with Facebook’s respect for privacy..."

Kudos to TechCrunch on more excellent reporting. And, there's more regarding children. Fortune reported:

"A coalition of 17 privacy and children’s organizations has asked the Federal Trade Commission to investigate Facebook for allowing children to make unauthorized in-app purchases... The coalition filed a complaint with the FTC on Feb. 21 over Facebook doing little to stop children from buying virtual items through games on its service without parental permission and, in some cases, without realizing those items cost money... Internal memos show that between 2010 and 2014, Facebook encouraged children, some as young as five-years old, to make purchases using their parents’ credit card information, the complaint said. The company then refused to refund parents..."

Not good. Facebook's changing story makes it difficult, or impossible, to trust anything its executives say. Perhaps the entertainer Lady Gaga said it best:

"Social media is the toilet of the internet."

Facebook's data breaches, constant apologizing, and shifting stories seem to confirm that. Now, it is time for government regulators to act -- and not with wimpy fines.


Large Natural Gas Producer to Pay West Virginia Plaintiffs $53.5 Million to Settle Royalty Dispute

[Editor's note: today's guest post by ProPublica discusses business practices within the energy industry. It is reprinted with permission.]

By Kate Mishkin and Ken Ward Jr., The Charleston Gazette-Mail

The second-largest natural gas producer in West Virginia will pay $53.5 million to settle a lawsuit that alleged the company was cheating thousands of state residents and businesses by shorting them on gas royalty payments, according to terms of the deal unsealed in court this week.

Pittsburgh-based EQT Corp. agreed to pay the money to end a federal class-action lawsuit, brought on behalf of about 9,000 people, which alleged that EQT wrongly deducted a variety of unacceptable charges from people’s royalty checks.

The deal is the latest in a series of settlements in cases that accused natural gas companies of engaging in such maneuvers to pocket a larger share of the profits from the boom in natural gas production in West Virginia.

This lawsuit was among the royalty cases highlighted last year in a joint examination by the Charleston Gazette-Mail and ProPublica that showed how West Virginia’s natural gas producers avoid paying royalties promised to thousands of residents and businesses. The plaintiffs said EQT was improperly deducting transporting and processing costs from their royalty payments. EQT said its royalty payment calculations were correct and fair.

A trial was scheduled to begin in November but was canceled after the parties reached the tentative settlement. Details of the settlement were unsealed earlier this month.

Under the settlement agreement, EQT Production Co. will pay the $53.5 million into a settlement fund. The company will also stop deducting those post-production costs from royalty payments.

“This was an opportunity to turn over a new leaf in our relationship with our West Virginia leaseholders and this mutually beneficial agreement demonstrates our renewed commitment to the state of West Virginia,” EQT’s CEO, Robert McNally, said in a prepared statement.

EQT is working to earn the trust of West Virginians and community leaders, he said.

Marvin Masters, the lead lawyer for the plaintiffs, called the settlement “encouraging” after six years of litigation. (Masters is among a group of investors who bought the Charleston Gazette-Mail last year.)

Funds will be distributed to people who leased the rights to natural gas beneath their land in West Virginia to EQT between Dec. 8, 2009, and Dec. 31, 2017. EQT will also pay up to $2 million in administrative fees to distribute the settlement.

Settlement payments will be calculated based on such factors as the amount of gas produced and sold from each well, as well as how much was deducted from royalty payments. The number of people who submit claims could also affect settlement payments. Each member of the class who submits a claim will receive a minimum payment of $200. The settlement allows lawyers to collect up to one-third of the settlement, or roughly $18 million, subject to approval from the court.

The settlement is pending before U.S. District Judge John Preston Bailey in the Northern District of West Virginia. The judge gave it preliminary approval on February 11th, which begins a process for public notice of the terms and a fairness hearing July 11 in Wheeling, West Virginia. Payments would not be made until that process is complete.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


'Software Pirates' Stole Apple Tech To Distribute Hacked Mobile Apps To Consumers

Prior news reports highlighted the abuse of Apple's corporate digital certificates. Now, we learn that this abuse is more widespread than first thought. CNet reported:

"Pirates used Apple's enterprise developer certificates to put out hacked versions of some major apps... The altered versions of Spotify, Angry Birds, Pokemon Go and Minecraft make paid features available for free and remove in-app ads... The pirates appear to have figured out how to use digital certs to get around Apple's carefully policed App Store by saying the apps will be used only by their employees, when they're actually being distributed to everyone."

So, bad actors abuse technology intended for a company's employees to distribute apps directly to consumers. Software pirates, indeed.

To avoid hacked apps, consumers need to shop wisely and download apps only from trusted sources. A fix is underway. According to CNet:

"Apple will reportedly take steps to fight back by requiring all app makers to use its two-factor authentication protocol from the end of February, so logging into an Apple ID will require a password and code sent to a trusted Apple device."

Let's hope that fix is sufficient.


Ex-IBM Executive Says She Was Told Not to Disclose Names of Employees Over Age 50 Who’d Been Laid Off

[Editor's note: today's guest blog post, by reporters at ProPublica, explores employment and hiring practices within the workplace. Part of a series, it is reprinted with permission.]

By Peter Gosselin, ProPublica

In sworn testimony filed recently as part of a class-action lawsuit against IBM, a former executive says she was ordered not to comply with a federal agency’s request that the company disclose the names of employees over 50 who’d been laid off from her business unit.

Catherine A. Rodgers, a vice president who was then IBM’s senior executive in Nevada, cited the order among several practices she said prompted her to warn IBM superiors the company was leaving itself open to allegations of age discrimination. She claims she was fired in 2017 because of her warnings.

Company spokesman Edward Barbini labeled Rodgers’ claims related to potential age discrimination “false,” adding that the reasons for her firing were “wholly unrelated to her allegations.”

Rodgers’ affidavit was filed Jan. 17 as part of a lawsuit in federal district court in New York. The suit cites a March 2018 ProPublica story reporting that IBM engaged in a strategy designed to, in the words of one internal company document, “correct seniority mix” by flouting or outflanking U.S. anti-age discrimination laws to force out tens of thousands of older workers in the five years through 2017 alone.

Rodgers said in an interview Sunday that IBM “appears to be engaged in a concerted and disproportionate targeting of older workers.” She said that if the company releases the ages of those laid off, something required by federal law and that IBM did until 2014, “the facts will speak for themselves.”

“IBM is a data company. Release the data,” she said.

Rodgers is not a plaintiff in the New York case but intends to become one, said Shannon Liss-Riordan, the attorney for the employees.

IBM has not yet responded to Rodgers’ affidavit in the class-action suit. But in a filing in a separate age-bias lawsuit in federal district court in Austin, Texas, where a laid-off IBM sales executive introduced the document to bolster his case, lawyers for the company termed the order for Rodgers not to disclose the layoffs of older workers from her business unit “unremarkable.”

They said that the U.S. Department of Labor sought the names of the workers so it could determine whether they qualified for federal Trade Adjustment Assistance, or TAA, which provides jobless benefits and re-training to those who lose their jobs because of foreign competition. They said that company executives concluded that only one of about 10 workers whose names Rodgers had sought to provide qualified.

In its reporting, ProPublica found that IBM has gone to considerable lengths to avoid reporting its layoff numbers by, among other things, limiting its involvement in government programs that might require disclosure. Although the company has laid off tens of thousands of U.S. workers in recent years and shipped many jobs overseas, it sought and won TAA aid for just three during the past decade, government records show.

Company lawyers in the Texas case said that Rodgers, 62 at the time of her firing and a 39-year veteran of IBM, was let go in July 2017 because of "gross misconduct."

Rodgers said that she received “excellent” job performance reviews for decades before questioning IBM’s practices toward older workers. She rejected the misconduct charge as unfounded.

Legal action against IBM over its treatment of older workers appears to be growing. In addition to the suits in New York and Texas, cases are also underway in California, New Jersey and North Carolina.

Liss-Riordan, who has represented workers against a series of tech giants including Amazon, Google and Uber, has added 41 plaintiffs to the original three in the New York case and is asking the judge to require that IBM notify all U.S. workers whom it has laid off since July 2017 of the suit and of their option to challenge the company.

One complicating factor is that IBM requires departing employees who want to receive severance pay to sign a document waiving their right to take the company to court and limiting them to private, individual arbitration. Studies show this process rarely results in decisions that favor workers. To date, neither plaintiffs’ lawyers nor the government has challenged the legality of IBM’s waiver document.

Many ex-employees also don’t act within the 300-day federal statute of limitations for bringing a case. Of about 500 ex-employees who Liss-Riordan said contacted her since she filed the New York case last September, only 100 had timely claims and, of these, only about 40 had not signed the waivers and so were eligible to join the lawsuit. She said she’s filed arbitration cases for the other 60.

At key points, Rodgers’ account of IBM’s practices is similar to those reported by ProPublica. Among the parallels:

  • Rodgers said that all layoffs in her business unit were of older workers and that younger workers were unaffected. (ProPublica estimated that about 60 percent of the company’s U.S. layoffs from 2014 through 2017 were workers age 40 and above.)
  • She said that she and other managers were told to encourage workers flagged for layoff to use IBM’s internal hiring system to find other jobs in the company even as upper management erected insurmountable barriers to their being hired for these jobs.
  • Rodgers said the company reversed a decades-long practice of encouraging employees to work from home and ordered many to begin reporting to a few “hub” offices around the country, a change she said appeared designed to prompt people to quit. She said that in one case an employee agreed to relocate to Connecticut only to be told to relocate again to North Carolina.

Barbini, the IBM spokesman, didn’t comment on individual elements of Rodgers’ allegations. Last year, he did not address a 10-page summary of ProPublica’s findings, but issued a statement that read in part, “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Popular iOS Apps Record All In-App Activity Causing Privacy, Data Security, And Other Issues

As the internet has evolved, user testing and market research practices have evolved with it. This may surprise consumers. TechCrunch reported that many popular Apple mobile apps record everything customers do with the apps:

"Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play them back to see how its users interacted with the app to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers."

So, customers' entire app sessions and activities have been recorded. Of course, marketers need to understand their customers' needs, and how users interact with their mobile apps, to build better products, services, and apps. However, in doing so, some apps have introduced security vulnerabilities:

"The App Analyst... recently found Air Canada’s iPhone app wasn’t properly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles."

Not good, for a couple of reasons. First, sensitive data like payment information (e.g., credit/debit card numbers, passport numbers, bank account numbers) should be masked. Second, when sensitive information isn't masked, more data security problems arise. How long is this app usage data archived? Which employees, contractors, and business partners have access to the archive? What security methods are used to protect the archive from abuse?

In short, unauthorized persons may have access to the archives and the sensitive information they contain. For example, market researchers probably have little or no need for specific customers' payment information. Sensitive information in these archives should be encrypted, to provide the best protection from abuse and from data breaches.
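For readers who build or audit mobile apps, here is a minimal sketch, in Python, of the kind of client-side masking that should happen before any captured session data leaves the device. It is purely illustrative: the field names and patterns are hypothetical, and it does not represent Glassbox's or any other vendor's actual API.

```python
import re

# Hypothetical field names; a real app would map these to its own form fields.
SENSITIVE_KEYS = {"credit_card", "card_cvv", "passport_number", "bank_account"}

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude credit/debit card match


def mask_event(event: dict) -> dict:
    """Return a copy of a captured UI event with sensitive values redacted.

    Masking happens on the device, before the event is queued for upload,
    so raw payment or passport data never reaches the analytics archive.
    """
    masked = {}
    for key, value in event.items():
        if key in SENSITIVE_KEYS:
            masked[key] = "***REDACTED***"
        elif isinstance(value, str) and CARD_PATTERN.search(value):
            masked[key] = CARD_PATTERN.sub("***REDACTED***", value)
        else:
            masked[key] = value
    return masked


if __name__ == "__main__":
    tap_event = {
        "screen": "checkout",
        "field": "credit_card",
        "credit_card": "4111 1111 1111 1111",
        "note": "card 4111-1111-1111-1111 declined",
    }
    print(mask_event(tap_event))
    # The card number is redacted in both the named field and the free-text note.
```

The design point is simple: redact on the device, before transmission and archiving, so raw payment or passport data never reaches the analytics servers in the first place.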

Sadly, there is more bad news:

"Apps that are submitted to Apple’s App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user’s screen... Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either."

So, the app session recordings were done covertly... without explicit language to provide meaningful and clear notice to consumers. I encourage everyone to read the entire TechCrunch article, which also includes responses by some of the companies mentioned. In my opinion, most of the responses fell far short with lame, boilerplate statements.

All of this is very troubling. And, there is more.

The TechCrunch article didn't discuss it, but historically companies hired testing firms to recruit user test participants -- usually current and prospective customers. Test participants were paid for their time. (I know because as a former user experience professional I conducted such in-person test sessions where clients paid test participants.) Things have changed. Not only has user testing and research migrated online, but companies use automated tools to perform perpetual, unannounced user testing -- all without compensating test participants.

While change is inevitable, not all change is good. Plus, things can be done in better ways. If the test information is that valuable, then pay test participants. Otherwise, this seems like another example of corporate greed at consumers' expense. And, it's especially egregious if data transmissions of the recorded app sessions to developers' servers use up cellular data plan capacity consumers paid for. Some consumers (e.g., elders, children, the poor) cannot afford the costs of unlimited cellular data plans.

After this TechCrunch report, Apple notified developers to either stop or disclose screen recording:

"Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity... We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary..."

Good. That's a start. Still, user testing and market research are not a free pass for developers to ignore or skip data security best practices. Given these covert app session recordings, mobile apps must be tested continually. Otherwise, some ethically-challenged companies may re-introduce covert screen recording features. What are your opinions?


Walgreens To Pay About $2 Million To Massachusetts To Settle Multiple Price Abuse Allegations. Other Settlement Payments Exceed $200 Million

The Office of the Attorney General of the Commonwealth of Massachusetts announced two settlement agreements with Walgreens, a national pharmacy chain. Walgreens has agreed to pay about $2 million to settle multiple allegations of pricing abuses. According to the announcement:

"Under the first settlement, Walgreens will pay $774,486 to resolve allegations that it submitted claims to MassHealth in which it reported prices for certain prescription drugs at levels that were higher than what Walgreens actually charged, resulting in fraudulent overpayments."

"Under the second settlement, Walgreens will pay $1,437,366 to resolve allegations that from January 2006 through December 2017, rather than dispensing the quantity of insulin called for by a patient’s prescription, Walgreens exceeded the prescription amount and falsified information on claims submitted for reimbursement to MassHealth, including the quantity of insulin and/or days’ supply dispensed."

Both settlements arose from whistleblower activity. MassHealth is the Commonwealth's Medicaid program, which provides health coverage to eligible low- and moderate-income Massachusetts residents.

Massachusetts Attorney General (AG) Maura Healey said:

"Walgreens repeatedly failed to provide MassHealth with accurate information regarding its dispensing and billing practices, resulting in overpayment to the company at taxpayers’ expense... We will continue to investigate cases of fraud and take action to protect the integrity of MassHealth."

In a separate case, Walgreens will pay $1 million to the state of Arkansas to settle allegations of Medicaid fraud. Last month, the New York State Attorney General announced that New York State, other states, and the federal government reached:

"... an agreement in principle with Walgreens to settle allegations that Walgreens violated the False Claims Act by billing Medicaid at rates higher than its usual and customary (U&C) rates for certain prescription drugs... Walgreens will pay the states and federal government $60 million, all of which is attributable to the states’ Medicaid programs... The national federal and state civil settlement will resolve allegations relating to Walgreens’ discount drug program, known as the Prescription Savings Club (PSC). The investigation revealed that Walgreens submitted claims to the states’ Medicaid programs in which it identified U&C prices for certain prescription drugs sold through the PSC program that were higher than what Walgreens actually charged for those drugs... This is the second false claims act settlement reached with Walgreens today. On January 22, 2019, AG James announced that Walgreens is to pay New York over $6.5 million as part of a $209.2 million settlement with the federal government and other states, resolving allegations that Walgreens knowingly engaged in fraudulent conduct when it dispensed insulin pens..."

States involved in the settlement include New York, California, Illinois, Indiana, Michigan and Ohio. Kudos to all Attorneys General and their staffs for protecting patients against corporate greed.


Senators Demand Answers From Facebook And Google About Project Atlas And Screenwise Meter Programs

After news reports surfaced about Facebook's Project Atlas, a secret program where Facebook paid teenagers (and other users) to install a research app on their phones that tracked and collected information about their mobile usage, several United States Senators demanded explanations. Three Senators sent a joint letter on February 7, 2019, to Mark Zuckerberg, Facebook's chief executive officer.

The joint letter to Facebook (Adobe PDF format) stated, in part:

"We write concerned about reports that Facebook is collecting highly-sensitive data on teenagers, including their web browsing, phone use, communications, and locations -- all to profile their behavior without adequate disclosure, consent, or oversight. These reports fit with Longstanding concerns that Facebook has used its products to deeply intrude into personal privacy... According to a journalist who attempted to register as a teen, the linked registration page failed to impose meaningful checks on parental consent. Facebook has more rigorous mechanism to obtain and verify parental consent, such as when it is required to sign up for Messenger Kids... Facebook's monitoring under Project Atlas is particularly concerning because the data data collection performed by the research app was deeply invasive. Facebook's registration process encouraged participants to "set it and forget it," warning that if a participant disconnected from the monitoring for more than ten minutes for a few days, that they could be disqualified. Behind the scenes, the app watched everything on the phone."

The letter included another example highlighting the alleged lack of meaningful disclosures:

"... the app added a VPN connection that would automatically route all of a participant's traffic through Facebook's servers. The app installed a SSL root certificate on the participant's phone, which would allow Facebook to intercept or modify data sent to encrypted websites. As a result, Facebook would have limitless access to monitor normally secure web traffic, even allowing Facebook to watch an individual log into their bank account or exchange pictures with their family. None of the disclosures provided at registration offer a meaningful explanation about how the sensitive data is used, how long it is kept, or who within Facebook has access to it..."

The letter was signed by Senators Richard Blumenthal (Democrat, Connecticut), Edward J. Markey (Democrat, Massachusetts), and Josh Hawley (Republican, Missouri). Based upon news reports that Facebook's Research app operated with functionality similar to the Onavo VPN app, which Apple banned last year, the Senators concluded:

"Faced with that ban, Facebook appears to have circumvented Apple's attempts to protect consumers."

The joint letter also listed twelve questions the Senators want detailed answers about. Below are selected questions from that list:

"1. When did Project Atlas begin and how many individuals participated? How many participants were under age 18?"

"3. Why did Facebook use a less strict mechanism for verifying parental consent than is Required for Messenger Kids or Global Data Protection Requlation (GDPR) compliance?"

"4.What specific types of data was collected (e.g., device identifieers, usage of specific applications, content of messages, friends lists, locations, et al.)?"

"5. Did Facebook use the root certificate installed on a participant's device by the Project Atlas app to decrypt and inspect encrypted web traffic? Did this monitoring include analysis or retention of application-layer content?"

"7. Were app usage data or communications content collected by Project Atlas ever reviewed by or available to Facebook personnel or employees of Facebook partners?"

8." Given that Project Atlas acknowledged the collection of "data about [users'] activities and content within those apps," did Facebook ever collect or retain the private messages, photos, or other communications sent or received over non-Facebook products?"

"11. Why did Facebook bypass Apple's app review? Has Facebook bypassed the App Store aproval processing using enterprise certificates for any other app that was used for non-internal purposes? If so, please list and describe those apps."

Read the entire letter to Facebook (Adobe PDF format). Also on February 7th, the Senators sent a similar letter to Google (Adobe PDF format), addressed to Hiroshi Lockheimer, the Senior Vice President of Platforms & Ecosystems. It stated in part:

"TechCrunch has subsequently reported that Google maintained its own measurement program called "Screenwise Meter," which raises similar concerns as Project Atlas. The Screenwise Meter app also bypassed the App Store using an enterprise certificate and installed a VPN service in order to monitor phones... While Google has since removed the app, questions remain about why it had gone outside Apple's review process to run the monitoring program. Platforms must maintain and consistently enforce clear policies on the monitoring of teens and what constitutes meaningful parental consent..."

The letter to Google includes a similar list of eight questions the Senators seek detailed answers about. Some notable questions:

"5. Why did Google bypass App Store approval for Screenwise Meter app using enterprise certificates? Has Google bypassed the App Store approval processing using enterprise certificates for any other non-internal app? If so, please list and describe those apps."

"6. What measures did Google have in place to ensure that teenage participants in Screenwise Meter had authentic parental consent?"

"7. Given that Apple removed Onavoo protect from the App Store for violating its terms of service regarding privacy, why has Google continued to allow the Onavo Protect app to be available on the Play Store?"

The lawmakers have asked for responses by March 1st. Thanks to all three Senators for protecting consumers' -- and children's -- privacy... and for enforcing transparency and accountability.


Facebook Paid Teens To Install Unauthorized Spyware On Their Phones. Plenty Of Questions Remain

While today is the 15th anniversary of Facebook, more important news dominates. Last week featured plenty of news about Facebook. TechCrunch reported on Tuesday:

"Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe... Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits."

So, teenagers installed surveillance software on their phones and tablets, to spy for Facebook on themselves, Facebook's competitors, and others. This is huge news for several reasons. First, the "Facebook Research" app is VPN (Virtual Private Network) software which:

"... lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy..."

Reportedly, the Research app collected massive amounts of information: private messages in social media apps, chats from instant messaging apps, photos/videos sent to others, emails, web searches, web browsing activity, and geo-location data. So, a very intrusive app. And, after being forced to remove one intrusive app from Apple's store, Facebook continued anyway -- with another app that performed the same function. Not good.

Second, there is the moral issue of using the youngest users as spies... persons who arguably have the least experience and skill at reading complex documents: corporate terms-of-use and privacy policies. I wonder how many teenagers notified their friends of the spying and data collection. How many teenagers fully understood what they were doing? How many parents were aware of the activity and payments? How many parents notified the parents of their children's friends? How many teens installed the spyware on both their iPhones and iPads? Lots of unanswered questions.

Third, Apple responded quickly. TechCrunch reported Wednesday morning:

"... Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down... Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store."

Facebook's usage of the Enterprise Certificate is significant. TechCrunch also published a statement by Apple:

"We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization... Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked..."

So, the Research app violated Apple's policy. Not good. The app also performed functions similar to the banned Onavo VPN app. Worse. This sounds like an end-run to me. So, as punishment for its end-run actions, Apple temporarily disabled the certificates Facebook used for its internal corporate apps.

Axios described very well Facebook's behavior:

"Facebook took a program designed to let businesses internally test their own app and used it to monitor most, if not everything, a user did on their phone — a degree of surveillance barred in the official App Store."

And the animated Facebook image in the Axios article sure looks like a liar-liar-logo-on-fire image. LOL! Pure gold! Seriously, Facebook's behavior indicates questionable ethics, and/or an expectation of not getting caught. Reportedly, the internal apps which were shut down included shuttle schedules, campus maps, and company calendars. After that, some Facebook employees discussed quitting.

And, it raises more questions. Which Facebook executives approved Project Atlas? What advice did Facebook's legal staff provide prior to approval? Was that advice followed or ignored?

Fourth, TechCrunch also reported:

"Facebook’s Research program will continue to run on Android."

What? So, Android devices were involved, too. Is this spy program okay with Google executives? A follow-up report on Wednesday by TechCrunch:

"Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple... Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app..."

Oy! So, Google operates like Facebook. Also reported by TechCrunch:

"The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices..."

So, Google will terminate its spy program on Apple devices, but continue its own program on Android, just as Facebook will. Hmmmmm. Well, that answers some questions. I guess Google executives are okay with this spy program. More questions remain.

Fifth, Facebook tried to defend the Research app and its actions in an internal memo to employees. On Thursday, TechCrunch tore apart the claims in an internal Facebook memo from vice president Pedro Canahuati. Chiefly:

"Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause and CentreCode to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly... Facebook claims it wasn’t “spying,” yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not specify data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity.” The parental consent form from Facebook and Applause lists none of the specific types of data collected...

So, Research app participants (e.g., teenagers, parents) couldn't discuss nor warn their friends (and their friends' parents) about the data collection. I strongly encourage everyone to read the entire TechCrunch analysis. It is eye-opening.

Sixth, a reader shared concerns about whether Facebook's actions violated federal laws. Did Project Atlas violate the Digital Millennium Copyright Act (DMCA); specifically the "anti-circumvention" provision, which prohibits avoiding the security protections in software? Did it violate the Computer Fraud and Abuse Act? What about breach-of-contract and fraud laws? What about states' laws? So, one could ask similar questions about Google's actions, too.

I am not an attorney. Hopefully, some attorneys will weigh in on these questions. Probably, some skilled attorneys will investigate various legal options.

All of this is very disturbing. Is this what consumers can expect of Silicon Valley firms? Is this the best tech firms can do? Is this the low level the United States has sunk to? Kudos to the TechCrunch staff for some excellent reporting.

What are your opinions of Project Atlas? Of Facebook's behavior? Of Google's?


Google Fined 50 Million Euros For Violations Of New European Privacy Law

Google has been fined 50 million Euros (about U.S. $57 million) under the new European privacy law for failing to properly disclose to users how their data is collected and used for targeted advertising. The European Union's General Data Protection Regulation (GDPR), which went into effect in May 2018, gives EU residents more control over their information and how companies use it.

After receiving two complaints last year from privacy-rights groups, France's National Data Protection Commission (CNIL) announced earlier this month:

"... CNIL carried out online inspections in September 2018. The aim was to verify the compliance of the processing operations implemented by GOOGLE with the French Data Protection Act and the GDPR by analysing the browsing pattern of a user and the documents he or she can have access, when creating a GOOGLE account during the configuration of a mobile equipment using Android. On the basis of the inspections carried out, the CNIL’s restricted committee responsible for examining breaches of the Data Protection Act observed two types of breaches of the GDPR."

The first violation involved transparency failures:

"... information provided by GOOGLE is not easily accessible for users. Indeed, the general structure of the information chosen by the company does not enable to comply with the Regulation. Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information. The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions... some information is not always clear nor comprehensive. Users are not able to fully understand the extent of the processing operations carried out by GOOGLE. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner..."

So, important information is buried and scattered across several documents making it difficult for users to access and to understand. The second violation involved the legal basis for personalized ads processing:

"... GOOGLE states that it obtains the user’s consent to process data for ads personalization purposes. However, the restricted committee considers that the consent is not validly obtained for two reasons. First, the restricted committee observes that the users’ consent is not sufficiently informed. The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent. For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Playstore, Google pictures, etc.) and therefore of the amount of data processed and combined."

"[Second], the restricted committee observes that the collected consent is neither “specific” nor “unambiguous.” When an account is created, the user can admittedly modify some options associated to the account by clicking on the button « More options », accessible above the button « Create Account ». It is notably possible to configure the display of personalized ads. That does not mean that the GDPR is respected. Indeed, the user not only has to click on the button “More options” to access the configuration, but the display of the ads personalization is moreover pre-ticked. However, as provided by the GDPR, consent is “unambiguous” only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance). Finally, before creating an account, the user is asked to tick the boxes « I agree to Google’s Terms of Service» and « I agree to the processing of my information as described above and further explained in the Privacy Policy» in order to create the account. Therefore, the user gives his or her consent in full, for all the processing operations purposes carried out by GOOGLE based on this consent (ads personalization, speech recognition, etc.). However, the GDPR provides that the consent is “specific” only if it is given distinctly for each purpose."

So, not only is important information buried and scattered across multiple documents (again), but also critical boxes for users to give consent are pre-checked when they shouldn't be.
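To make the CNIL's point concrete, here is a minimal sketch, in Python, of one way to model "specific" and "unambiguous" consent: every processing purpose defaults to off, and consent is recorded only through an explicit, per-purpose action. The purpose names and the class itself are hypothetical illustrations, not Google's implementation or a regulator-endorsed design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical processing purposes; a real product would enumerate its own.
PURPOSES = ("ads_personalization", "speech_recognition", "location_history")


@dataclass
class ConsentRecord:
    """Per-user consent, one flag per processing purpose.

    Every purpose starts as False (no pre-ticked boxes), and consent can only
    be granted one purpose at a time, by an affirmative action.
    """
    user_id: str
    choices: dict = field(default_factory=lambda: {p: False for p in PURPOSES})
    history: list = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        if purpose not in self.choices:
            raise ValueError(f"unknown purpose: {purpose}")
        self.choices[purpose] = True
        # Keep an auditable trail of when each specific consent was given.
        self.history.append((purpose, "granted", datetime.now(timezone.utc)))

    def withdraw(self, purpose: str) -> None:
        if purpose not in self.choices:
            raise ValueError(f"unknown purpose: {purpose}")
        self.choices[purpose] = False
        self.history.append((purpose, "withdrawn", datetime.now(timezone.utc)))

    def allows(self, purpose: str) -> bool:
        return self.choices.get(purpose, False)


if __name__ == "__main__":
    consent = ConsentRecord(user_id="example-user")
    print(consent.allows("ads_personalization"))  # False: nothing is on by default
    consent.grant("ads_personalization")          # explicit, purpose-specific action
    print(consent.allows("ads_personalization"))  # True
    print(consent.allows("speech_recognition"))   # False: other purposes unaffected
```

Notice there is no "accept everything" checkbox and no pre-ticked value anywhere; that is essentially the behavior the CNIL found missing.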

CNIL explained its reasons for the massive fine:

"The amount decided, and the publicity of the fine, are justified by the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent. Despite the measures implemented by GOOGLE (documentation and configuration tools), the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations... Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement..."

This is the largest fine, so far, under GDPR laws. Reportedly, Google will appeal the fine:

"We've worked hard to create a GDPR consent process for personalised ads that is as transparent and straightforward as possible, based on regulatory guidance and user experience testing... We're also concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond... For all these reasons, we've now decided to appeal."

This is not the first EU fine for Google. CNet reported:

"Google is no stranger to fines under EU laws. It's currently awaiting the outcome of yet another antitrust investigation -- after already being slapped with a $5 billion fine last year for anticompetitive Android practices and a $2.7 billion fine in 2017 over Google Shopping."


Companies Want Your Location Data. Recent Examples: The Weather Channel And Burger King

It is easy to find examples where companies use mobile apps to collect consumers' real-time GPS location data, so they can archive and resell that information later for additional profits. First, ExpressVPN reported:

"The city of Los Angeles is suing the Weather Company, a subsidiary of IBM, for secretly mining and selling user location data with the extremely popular Weather Channel App. Stating that the app unfairly manipulates users into enabling their location settings for more accurate weather reports, the lawsuit affirms that the app collects and then sells this data to third-party companies... Citing a recent investigation by The New York Times that revealed more than 75 companies silently collecting location data (if you haven’t seen it yet, it’s worth a read), the lawsuit is basing its case on California’s Unfair Competition Law... the California Consumer Privacy Act, which is set to go into effect in 2020, would make it harder for companies to blindly profit off customer data... This lawsuit hopes to fine the Weather Company up to $2,500 for each violation of the Unfair Competition Law. With more than 200 million downloads and a reported 45+ million users..."

Long-term readers remember that a 2007 data breach at IBM prompted this blog. It's not only internet service providers that collect consumers' location data. Advertisers, retailers, and data brokers want it, too.

Second, Burger King ran a national "Whopper Detour" promotion last month, which offered customers a one-cent Whopper burger if they went near a competitor's store. News 5, the ABC News affiliate in Cleveland, reported:

"If you download the Burger King mobile app and drive to a McDonald’s store, you can get the penny burger until December 12, 2018, according to the fast-food chain. You must be within 600 feet of a McDonald's to claim your discount, and no, McDonald's will not serve you a Whopper — you'll have to order the sandwich in the Burger King app, then head to the nearest participating Burger King location to pick it up. More information about the deal can be found on the app on Apple and Android devices."

Next, the relevant portions from Burger King's privacy policy for its mobile apps (emphasis added):

"We collect information you give us when you use the Services. For example, when you visit one of our restaurants, visit one of our websites or use one of our Services, create an account with us, buy a stored-value card in-restaurant or online, participate in a survey or promotion, or take advantage of our in-restaurant Wi-Fi service, we may ask for information such as your name, e-mail address, year of birth, gender, street address, or mobile phone number so that we can provide Services to you. We may collect payment information, such as your credit card number, security code and expiration date... We also may collect information about the products you buy, including where and how frequently you buy them... we may collect information about your use of the Services. For example, we may collect: 1) Device information - such as your hardware model, IP address, other unique device identifiers, operating system version, and settings of the device you use to access the Services; 2) Usage information - such as information about the Services you use, the time and duration of your use of the Services and other information about your interaction with content offered through a Service, and any information stored in cookies and similar technologies that we have set on your device; and 3) Location information - such as your computer’s IP address, your mobile device’s GPS signal or information about nearby WiFi access points and cell towers that may be transmitted to us..."

So, for the low, low price of one hamburger, participants in this promotion gave RBI, the parent company which owns Burger King, perpetual access to their real-time location data. And, since RBI knows when, where, and how long its customers visit competitors' fast-food stores, it also knows similar details about everywhere else they go -- including school, work, doctors, hospitals, and more. A sweet deal for RBI. A poor deal for consumers.
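For the technically curious, the mechanics behind such a geofenced offer are simple: the app compares the phone's reported coordinates against a list of competitor store locations. Here is a minimal sketch in Python; the coordinates, helper names, and store list are hypothetical, and the 600-foot radius simply mirrors the promotion described above.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_FEET = 20_902_231  # mean Earth radius expressed in feet
GEOFENCE_RADIUS_FEET = 600      # the promotion's advertised trigger distance


def distance_feet(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two lat/lon points, in feet."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_FEET * asin(sqrt(a))


def offer_unlocked(user_lat: float, user_lon: float, competitor_stores: list) -> bool:
    """True if the user's reported location is within the geofence of any store."""
    return any(
        distance_feet(user_lat, user_lon, lat, lon) <= GEOFENCE_RADIUS_FEET
        for lat, lon in competitor_stores
    )


if __name__ == "__main__":
    # Hypothetical coordinates: a user standing roughly 475 feet from a store.
    stores = [(41.8827, -87.6233)]
    print(offer_unlocked(41.8840, -87.6233, stores))  # True  (within 600 feet)
    print(offer_unlocked(41.9000, -87.6233, stores))  # False (over a mile away)
```

The privacy cost lies in the data collection that powers the check: precise location fixes that, under the policy quoted above, RBI may retain and combine with other information.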

Expect to see more corporate promotions like this, which privacy advocates call "surveillance capitalism."

Consumers' real-time location data is very valuable. Don't give it away for free. If you decide to share it, demand a fair, ongoing payment in exchange. Read privacy and terms-of-use policies before downloading mobile apps, so you don't get abused or taken. Opinions? Thoughts?


The Privacy And Data Security Issues With Medical Marijuana

In the United States, some states have enacted legislation making medical marijuana legal -- despite it being illegal at a federal level. This situation presents privacy issues for both retailers and patients.

In her "Data Security And Privacy" podcast series, privacy consultant Rebecca Harold (@PrivacyProf) interviewed a patient cannabis advocate about privacy and data security issues:

"Most people assume that their data is safe in cannabis stores & medical cannabis dispensaries. Or they believe if they pay in cash there will be no record of their cannabis purchase. Those are incorrect beliefs. How do dispensaries secure & share data? Who WANTS that data? What security is needed? Some in government, law enforcement & employers want data about state legal marijuana and medical cannabis purchases. Michelle Dumay, Cannabis Patient Advocate, helps cannabis dispensaries & stores to secure their customers’ & patients’ data & privacy. Michelle learned through experience getting treatment for her daughter that most medical cannabis dispensaries are not compliant with laws governing the security and privacy of patient data... In this episode, we discuss information security & privacy practices of cannabis shops, risks & what needs to be done when it comes to securing data and understanding privacy laws."

Many consumers know that the Health Insurance Portability and Accountability Act (HIPAA) governs how patients' privacy must be protected and which businesses must comply with that law.

Poor data security (e.g., data breaches, unauthorized recording of patients inside or outside of dispensaries) can result in the misuse of patients' personal and medical information by bad actors and others. Downstream consequences can be negative, such as employers using the data to decline job applications.

After listening to the episode, it seems reasonable for consumers to assume that traditional information industry players (e.g., credit reporting agencies, advertisers, data brokers, law enforcement, government intelligence agencies, etc.) all want marijuana purchase data. Note the use of "consumers," and not only "patients," since about 10 states have legalized recreational marijuana.

Listen to an encore presentation of the "Medical Cannabis Patient Privacy And Data Security" episode.


Report: Navient Tops List Of Student Loan Complaints

The Consumer Financial Protection Bureau (CFPB), a federal government agency in the United States, collects complaints about banks and other financial institutions. That includes lenders of student loans.

The CFPB and private-sector firms analyze these complaints, looking for patterns. Forbes magazine reported:

"The team at Make Lemonade analyzed these complaints [submitted during 2018], and found that there were 8,752 related to student loans. About 64% were related to federal student loans and 36% were related to private student loans. Nearly 67% of complaints were related to an issue with a student loan lender or student loan servicer."

"Navient, one of the nation's largest student loan servicers, ranked highest in terms of student loan complaints. In 2018, student loan borrowers submitted 4,032 complaints about Navient to the CFPB, which represents 46% of all student loan complaints. AES/PHEAA and Nelnet, two other major student loan servicers, received approximately 20% and 7%, respectively."

When looking for a student loan, wise consumers shop around and do their research. Some lenders are better than others. The Forbes article is very helpful, as it contains links to additional resources and information for consumers.

Learn more about the CFPB and its complaint database, which is designed to help consumers and regulators.


After Promises To Stop, Mobile Providers Continued Sales Of Location Data About Consumers. What You Can Do To Protect Your Privacy

Sadly, history repeats itself. First, the history: after getting caught selling consumers' real-time GPS location data without notice nor consumers' consent, in 2018 mobile providers promised to stop the practice. The Ars Technica blog reported in June, 2018:

"Verizon and AT&T have promised to stop selling their mobile customers' location information to third-party data brokers following a security problem that leaked the real-time location of US cell phone users. Senator Ron Wyden (D-Ore.) recently urged all four major carriers to stop the practice, and today he published responses he received from Verizon, AT&T, T-Mobile USA, and SprintWyden's statement praised Verizon for "taking quick action to protect its customers' privacy and security," but he criticized the other carriers for not making the same promise... AT&T changed its stance shortly after Wyden's statement... Senator Wyden recognized AT&T's change on Twitter and called on T-Mobile and Sprint to follow suit."

Kudos to Senator Wyden. The other mobile providers soon complied... sort of.

Second, some background: real-time location data is very valuable stuff. It indicates where you are as you (with your phone or other mobile devices) move about the physical world in your daily routine. No delays. No lag. Yes, there are appropriate uses for real-time GPS location data -- such as by law enforcement to quickly find a kidnapped person or child before further harm happens. But, do any and all advertisers need real-time location data about consumers? Data brokers? Others?

I think not. Domestic violence and stalking victims probably would not want their, nor their children's, real-time location data resold publicly. Most parents would not want their children's location data resold publicly. Most patients probably would not want their location data broadcast every time they visit their physician, specialist, rehab, or a hospital. Corporate executives, government officials, and attorneys conducting sensitive negotiations probably wouldn't want their location data collected and resold, either.

So, most consumers probably don't want their real-time location data resold publicly. Well, some of you make location-specific announcements via posts on social media. That's your choice, but I conclude that most people don't. Consumers want control over their location information so they can decide if, when, and with whom to share it. The mass collection and sales of consumers' real-time location data by mobile providers prevents choice -- and it violates persons' privacy.

Third, fast forward seven months from 2018. TechCrunch reported on January 9th:

"... new reporting by Motherboard shows that while [reseller] LocationSmart faced the brunt of the criticism [in 2018], few focused on the other big player in the location-tracking business, Zumigo. A payment of $300 and a phone number was enough for a bounty hunter to track down the participating reporter by obtaining his location using Zumigo’s location data, which was continuing to pay for access from most of the carriers. Worse, Zumigo sold that data on — like LocationSmart did with Securus — to other companies, like Microbilt, a Georgia-based credit reporting company, which in turn sells that data on to other firms that want that data. In this case, it was a bail bond company, whose bounty hunter was paid by Motherboard to track down the reporter — with his permission."

"Everyone seemed to drop the ball. Microbilt said the bounty hunter shouldn’t have used the location data to track the Motherboard reporter. Zumigo said it didn’t mind location data ending up in the hands of the bounty hunter, but still cut Microbilt’s access. But nobody quite dropped the ball like the carriers, which said they would not to share location data again."

The TechCrunch article rightly held offending mobile providers accountable. Last year, T-Mobile's chief executive, John Legere, tweeted a promise to stop selling customers' location data to third-party aggregators. Last week, he tweeted that T-Mobile would finish winding down its location aggregator agreements in March -- doing it, he said, "the right way" so that services like emergency assistance would not be disrupted.

The right way? In my view, real-time location data never should have been collected and resold. Almost a year after reports first surfaced, T-Mobile is finally getting around to stopping the practice and terminating its relationships with location data resellers -- two months from now. Why not announce this slow wind-down last year when the issue first surfaced? "Emergency assistance" is the reason we are supposed to believe. Yeah, right.

The TechCrunch article rightly took AT&T and Verizon to task, too. Good. I strongly encourage everyone to read the entire TechCrunch article.

What can consumers make of this? There seem to be several takeaways:

  1. Transparency is needed, since corporate privacy policies don't list all (or often any) business partners. This lack of transparency provides an easy way for mobile providers to resume location data sales without notice to anyone and without consumers' consent,
  2. Corporate executives will say anything in tweets/social media. A healthy dose of skepticism by consumers and regulators is wise,
  3. Consumers can't trust mobile providers. They are happy to make money selling consumers' real-time location data, regardless of consumers' desire that their data not be collected and sold,
  4. Data brokers and credit reporting agencies want consumers' location data,
  5. To ensure privacy, consumers also must take action: adjust the privacy settings on your phones to limit or deny mobile apps access to your location data. I did. It's not hard. Do it today, and
  6. Oversight is needed, since a) mobile providers have, at best, sloppy to minimal oversight and internal processes to prevent location data sales; and b) data brokers and others are readily available to enable and facilitate location data transactions.

I cannot over-emphasize #5 above. What issues or takeaways do you see? What are your opinions about real-time location data?