2 Healthcare Software Providers Agree To Settlement With 16 States' Attorneys General To Resolve Charges About 2015 Data Breach

The Attorney General's Office for the State of Arizona announced last month a major settlement agreement with two healthcare software providers, Medical Informatics Engineering Inc. and its subsidiary NoMoreClipboard, LLC (hereafter referred to jointly as "MIE"), following a massive data breach at MIE in 2015. The press release by AG Mike Brnovich stated:

"The settlement resolves a bipartisan lawsuit filed by Arizona and 15 other states against MIE relating to a 2015 data breach, which was the first such multistate lawsuit involving claims under the federal Health Insurance Portability and Accountability Act ("HIPAA"). As a result of the settlement, MIE will pay $900,000 to the states, and it has agreed to a comprehensive injunction requiring the implementation of significant data-security improvements."

The case was filed in the U.S. District Court for the Northern District of Indiana, where MIE is headquartered. States involved in the joint lawsuit and settlement included Arizona, Arkansas, Connecticut, Florida, Indiana, Iowa, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Nebraska, North Carolina, Tennessee, West Virginia, and Wisconsin.

The data breach occurred between May 7, 2015, and May 26, 2015, when hackers broke into WebChart, a web application by MIE, and stole:

"... the electronic Protected Health Information ("ePHI") of more than 3.9 million individuals, including roughly 26,000 Arizonans. Stolen ePHI included names, telephone numbers, mailing addresses, usernames, hashed passwords, security questions and answers, spousal information (name and potentially date of birth), email addresses, dates of birth, Social Security numbers, lab results, health insurance policy information, diagnoses, disability codes, doctors’ names, medical conditions, and children’s names and birth statistics."

The consent order and judgment is available here. Indiana’s share was $174,745.29. Indiana AG Curtis Hill said:

"Hoosier consumers trust us to look out for their interests... Once again, we have acted on their behalf to pursue the appropriate penalties and remedies available under the law. We hope our proactive measures serve to motivate all companies doing business in Indiana to exercise the highest possible ethics and the utmost diligence in making sure their systems are safe and secure."


Google Home Devices Recorded Users' Conversations. Legal Questions Result. Google Says It Is Investigating

Many consumers love the hands-free convenience of smart speakers. However, there are risks with the technology. BBC News reported on Thursday:

"Belgian broadcaster VRT exposed the recordings made by Google Home devices in Belgium and the Netherlands... VRT said the majority of the recordings it reviewed were short clips logged by the Google Home devices as owners used them. However, it said, 153 were "conversations that should never have been recorded" because the wake phrase of "OK Google" was not given. These unintentionally recorded exchanges included: a) blazing rows; b) bedroom chatter; c) parents talking to their children; d) phone calls exposing confidential information. It said it believed the devices logged these conversations because users said a word or phrase that sounded similar to "OK Google" that triggered the device..."

So, conversations that shouldn't have been recorded were recorded by Google Home devices. Consumers use the devices to perform and control a variety of tasks, such as entertainment (e.g., music, movies, games), internet searches (e.g., cooking recipes), security systems and cameras, thermostats, window blinds and shades, appliances (e.g., coffee makers), online shopping, and more.

The device software doesn't seem accurate, since it mistook similar-sounding phrases for wake phrases. Google calls these errors "false accepts." Google replied in a blog post:

"We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards... We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google."

"The Google Assistant only sends audio to Google after your device detects that you’re interacting with the Assistant—for example, by saying “Hey Google” or by physically triggering the Google Assistant... Rarely, devices that have the Google Assistant built in may experience what we call a “false accept.” This means that there was some noise or words in the background that our software interpreted to be the hotword (like “Ok Google”). We have a number of protections in place to prevent false accepts from occurring in your home... We also provide you with tools to manage and control the data stored in your account. You can turn off storing audio data to your Google account completely, or choose to auto-delete data after every 3 months or 18 months..."

To be fair, Google is not alone. Amazon Alexa devices also record and archive users' conversations. Would you want your bedroom chatter recorded (and stored indefinitely)? Or your conversations with your children? Many persons work remotely from home, so would you want business conversations with coworkers recorded? I think not. Very troubling news.

And, there is more.

This data security incident confirms that human workers listen to recordings by Google Assistant devices. Those workers can be employees or outsourced contractors. Who are these contractors, by name? What methods does Google employ to confirm privacy compliance by contractors? So many unanswered questions.

Also, according to U.S. News & World Report:

"Google's recording feature can be turned off, but doing so means Assistant loses some of its personalized touch. People who turn off the recording feature lose the ability for the Assistant to recognize individual voices and learn your voice pattern. Assistant recording is actually turned off by default — but the technology prompts users to turn on recording and other tools in order to get personalized features."

So, to get the full value of the technology, users must enable recordings. That sounds a lot like surveillance by design. Not good. You'd think that Google's software developers would have built a standard vocabulary, or dictionary, in several languages (recorded by beta-test participants) to test the accuracy of the Assistant software, rather than review users' actual conversations. I guess they found it easier, faster, and cheaper to snoop on users.

Since Google already scans the contents of Gmail users' email messages, maybe this is simply technology creep and Google assumed nobody would mind human reviews of Assistant recordings.

About the review of recordings by human workers, the M.I.T. Technology Review said:

"Legally questionable: Because Google doesn’t inform users that humans review recordings in this way, and thus doesn’t seek their explicit consent for the practice, it’s quite possible that it could be breaking EU data protection regulations. We have asked Google for a response and will update if we hear back."

So, it will be interesting to see what European Union regulators have to say about the recordings and human reviews.

To summarize: consumers have willingly installed perpetual surveillance devices in their homes. What are your views of this data security incident? Do you enable recordings on your smart speakers? Should human workers have access to archives of your recorded conversations?


Tech Expert Concluded Google Chrome Browser Operates A Lot Like Spy Software

Most consumers use web browsers every day. Which browsers are better for your online privacy? You may be interested in this analysis by a tech expert:

"... I've been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads... My tests of Chrome vs. Firefox [browsers] unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker "cookies" that Chrome would have ushered right onto my computer but were automatically blocked by Firefox... Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service's log-in pages."

"And that's not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you're logged in to the browser, and Google might be tapping into your Web activity to target ads. Don't recall signing in? I didn't, either. Chrome recently started doing that automatically when you use Gmail... I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn't cause anybody's browsing history to be "synced" unless they specifically opted in — but I found mine was being sent to Google and don't recall ever asking for extra surveillance..."

Also:

"Google's product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they're working on new ones for cookies. But they also said they have to get the right balance with a "healthy Web ecosystem" (read: ad business). Firefox's product managers told me they don't see privacy as an "option" relegated to controls. They've launched a war on surveillance, starting last month with "enhanced tracking protection" that blocks nosy cookies by default on new Firefox installations..."

This tech expert concluded:

"It turns out, having the world's biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop. It made me decide to ditch Chrome for a new version of nonprofit Mozilla's Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine."

Regular readers of this blog are aware of how Google tracks consumers' online purchases, the worst mobile apps for privacy, and privacy alternatives such as the Brave web browser, the DuckDuckGo search engine, virtual private network (VPN) software, and more. Yes, you can use the Firefox browser on your Apple iPhone. I do.

Me? I've used the Firefox browser since about 2010 on my (Windows) laptop, and the DuckDuckGo search engine since 2013. I stopped using Bing, Yahoo, and Google search engines in 2013. While Firefox installs with Google as the default search engine, you can easily switch it to DuckDuckGo. I did. I am very happy with the results.

Which web browser and search engine do you use? What do you do to protect your online privacy?


Aggression Detectors: What They Are, Who Uses Them, And Why

Like most people, you probably have not heard of "aggression detectors." What are these devices? Who makes them? Who uses these devices and why? What consumers are affected?

To answer these questions, ProPublica explained who makes the devices and why:

"In response to mass shootings, some schools and hospitals are installing microphones equipped with algorithms. The devices purport to identify stress and anger before violence erupts... By deploying surveillance technology in public spaces like hallways and cafeterias, device makers and school officials hope to anticipate and prevent everything from mass shootings to underage smoking... Besides Sound Intelligence, South Korea-based Hanwha Techwin, formerly part of Samsung, makes a similar “scream detection” product that’s been installed in American schools. U.K.-based Audio Analytic used to sell its aggression- and gunshot-detection software to customers in Europe and the United States... Sound Intelligence CEO Derek van der Vorst said security cameras made by Sweden-based Axis Communications account for 90% of the detector’s worldwide sales, with privately held Louroe making up the other 10%... Mounted inconspicuously on the ceiling, Louroe’s smoke-detector-sized microphones measure aggression on a scale from zero to one. Users choose threshold settings. Any time they’re exceeded for long enough, the detector alerts the facility’s security apparatus, either through an existing surveillance system or a text message pinpointing the microphone that picked up the sound..."

The microphone-equipped sensors have been installed in a variety of industries. The Sound Intelligence website listed prisons, schools, public transportation, banks, healthcare institutes, retail stores, public spaces, and more. Louroe Electronics' site included a similar list plus law enforcement.

The ProPublica article also discussed several key issues. First, sensor accuracy and its own tests:

"... ProPublica’s analysis, as well as the experiences of some U.S. schools and hospitals that have used Sound Intelligence’s aggression detector, suggest that it can be less than reliable. At the heart of the device is what the company calls a machine learning algorithm. Our research found that it tends to equate aggression with rough, strained noises in a relatively high pitch, like [a student's] coughing. A 1994 YouTube clip of abrasive-sounding comedian Gilbert Gottfried ("Is it hot in here or am I crazy?") set off the detector, which analyzes sound but doesn’t take words or meaning into account... Sound Intelligence and Louroe said they prefer whenever possible to fine-tune sensors at each new customer’s location over a period of days or weeks..."

Second, accuracy concerns:

"[Sound Intelligence CEO] Van der Vorst acknowledged that the detector is imperfect and confirmed our finding that it registers rougher tones as aggressive. He said he “guarantees 100%” that the system will at times misconstrue innocent behavior. But he’s more concerned about failing to catch indicators of violence, and he said the system gives schools and other facilities a much-needed early warning system..."

This is interesting and troubling. Sound Intelligence's position seems to suggest that it is okay for the sensor to misidentify innocent persons as aggressive in order to avoid failing to identify truly aggressive persons seeking to do harm. That sounds like the old saying: the ends justify the means. Not good. The harm to innocent persons matters, especially when they are young students.

Yesterday's blog post described a far better corporate approach. Citing current inaccuracies and biases in the technology, a police body camera maker assembled an ethics board to help guide its decisions regarding the technology, and then followed that board's recommendation not to implement facial recognition in its devices. Only when the inaccuracies and biases are resolved would it implement facial recognition.

What ethics boards have Sound Intelligence, Louroe, and other aggression detector makers utilized?

Third, the use of aggression detectors raises the issue of notice. Are there physical postings on-site at schools, hospitals, healthcare facilities, and other locations? Notice seems appropriate, especially since almost all entities provide notice (e.g., terms of service, privacy policy) for visitors to their websites.

Fourth, privacy concerns:

"Although a Louroe spokesman said the detector doesn’t intrude on student privacy because it only captures sound patterns deemed aggressive, its microphones allow administrators to record, replay and store those snippets of conversation indefinitely..."

I encourage parents of school-age children to read the entire ProPublica article. Concerned parents may demand explanations by school officials about the surveillance activities and devices used within their children's schools. Teachers may also be concerned. Patients at healthcare facilities may also be concerned.

Concerned persons may seek answers to several issues:

  • The vendor selection process, which aggression detector devices were selected, and why
  • Evidence supporting the accuracy of aggression detectors used
  • The school's/hospital's policy, if it has one, covering surveillance devices; plus any posted notices
  • The treatment and rights of persons (e.g., students, patients, visitors, staff) wrongly identified by aggression detector devices
  • Approaches by the vendor and school to improve device accuracy for both types of errors: a) wrongly identified persons, and b) failures to identify truly aggressive or threatening persons
  • How long the school and/or vendor archive recorded conversations
  • What persons have access to the archived recordings
  • The data security methods used by the school and by the vendor to prevent unauthorized access and abuse of archived recordings
  • All entities, by name, with which the school and/or vendor share archived recordings

What are your opinions of aggression detectors? Of device inaccuracy? Of the privacy concerns?


Police Body Cam Maker Says It Won't Use Facial Recognition Due To Problems With The Technology

We've all heard of the following three technologies: police body cameras, artificial intelligence, and facial recognition software. Across the nation, some police departments use body cameras.

Do the three technologies go together and work well together? The Washington Post reported:

"Axon, the country’s biggest seller of police body cameras, announced that it accepts the recommendation of an ethics board and will not use facial recognition in its devices... the company convened the independent board last year to assess the possible consequences and ethical costs of artificial intelligence and facial-recognition software. The board’s first report, published June 27, concluded that “face recognition technology is not currently reliable enough to ethically justify its use” — guidance that Axon plans to follow."

So, a major U.S. corporation assembled an ethics board to guide its activities. Good. That's not something you read about often. Then, the same corporation followed that board's advice. Even better.

Why reject using facial recognition with body cameras? Axon explained in a statement:

"Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board's recommendation, Axon will not be commercializing face matching products on our body cameras at this time. We do believe face matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms as the board recommends. Our AI team will continue to evaluate the state of face recognition technologies and will keep the board informed about our research..."

Two types of inaccuracies occur with facial recognition software: i) persons falsely identified (a/k/a "false positives"); and ii) persons not identified who should have been (a/k/a "false negatives"). The ethics board's report provided detailed explanations:

"The truth is that current technology does not perform as well on people of color compared to whites, on women compared to men, or young people compared to older people, to name a few disparities. These disparities exist in both directions — a greater false positive rate and false negative rate."

The ethics board's report also explained the problem of bias:

"One cause of these biases is statistically unrepresentative training data — the face images that engineers use to “train” the face recognition algorithm. These images are unrepresentative for a variety of reasons but in part because of decisions that have been made for decades that have prioritized certain groups at the cost of others. These disparities make real-world face recognition deployment a complete nonstarter for the Board. Until we have something approaching parity, this technology should remain on the shelf. Policing today already exhibits all manner of disparities (particularly racial). In this undeniable context, adding a tool that will exacerbate this disparity would be unacceptable..."

So, well-meaning software engineers can create bias in their algorithms by using sets of images that are not representative of the population. The ethics board's 42-page report, titled "First Report Of The Axon A.I. & Policing Technology Ethics Board" (Adobe PDF; 3.1 Megabytes), listed six general conclusions:

"1: Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras. At the least, face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups. Whether face recognition on body-worn cameras can ever be ethically justifiable is an issue the Board has begun to discuss in the context of the use cases outlined in Part IV.A, and will take up again if and when these prerequisites are met."

"2: When assessing face recognition algorithms, rather than talking about “accuracy,” we prefer to discuss false positive and false negative rates. Our tolerance for one or the other will depend on the use case."

"3: The Board is unwilling to endorse the development of face recognition technology of any sort that can be completely customized by the user. It strongly prefers a model in which the technologies that are made available are limited in what functions they can perform, so as to prevent misuse by law enforcement."

"4: No jurisdiction should adopt face recognition technology without going through open, transparent, democratic processes, with adequate opportunity for genuinely representative public analysis, input, and objection."

"5: Development of face recognition products should be premised on evidence-based benefits. Unless and until those benefits are clear, there is no need to discuss costs or adoption of any particular product."

"6: When assessing the costs and benefits of potential use cases, one must take into account both the realities of policing in America (and in other jurisdictions) and existing technological limitations."

The board included persons with legal, technology, law enforcement, and civil rights backgrounds, plus members from the affected communities. Axon management listened to the report's conclusions and is following the board's recommendations (emphasis added):

"Respond publicly to this report, including to the Board’s conclusions and recommendations regarding face recognition technology. Commit, based on the concerns raised by the Board, not to proceed with the development of face matching products, including adding such capabilities to body-worn cameras or to Axon Evidence (Evidence.com)... Invest company resources to work, in a transparent manner and in tandem with leading independent researchers, to ensure training data are statistically representative of the appropriate populations and that algorithms work equally well across different populations. Continue to comply with the Board’s Operating Principles, including by involving the Board in the earliest possible stages of new or anticipated products. Work with the Board to produce products and services designed to improve policing transparency and democratic accountability, including by developing products in ways that assure audit trails or that collect information that agencies can release to the public about their use of Axon products..."

Admirable. Encouraging. The Washington Post reported:

"San Francisco in May became the first U.S. city to ban city police and agencies from using facial-recognition software... Somerville, Massachusetts became the second, with other cities, including Berkeley and Oakland, Calif., considering similar measures..."

Clearly, this topic bears monitoring. Consumers and government officials are concerned about accuracy and bias. So, too, are some corporations.

And more news seems likely. Will other technology companies and local governments utilize similar A.I. ethics boards? Will schools, healthcare facilities, and other customers of surveillance devices demand products whose accuracy and freedom from bias are supported by evidence?


Digital Jail: How Electronic Monitoring Drives Defendants Into Debt

[Editor's note: today's guest post, by reporters at ProPublica, discusses the convergence of law enforcement, outsourcing, smart devices, surveillance, "offender funded" programs, and "e-gentrification." It is reprinted with permission.]

By Ava Kofman, ProPublica

On Oct. 12, 2018, Daehaun White walked free, or so he thought. A guard handed him shoelaces and the $19 that had been in his pocket at the time of his booking, along with a letter from his public defender. The lanky 19-year-old had been sitting for almost a month in St. Louis’ Medium Security Institution, a city jail known as the Workhouse, after being pulled over for driving some friends around in a stolen Chevy Cavalier. When the police charged him with tampering with a motor vehicle — driving a car without its owner’s consent — and held him overnight, he assumed he would be released by morning. He told the police that he hadn’t known that the Chevy, which a friend had lent him a few hours earlier, was stolen. He had no previous convictions. But the $1,500 he needed for the bond was far beyond what he or his family could afford. It wasn’t until his public defender, Erika Wurst, persuaded the judge to lower the amount to $500 cash, and a nonprofit fund, the Bail Project, paid it for him, that he was able to leave the notoriously grim jail. “Once they said I was getting released, I was so excited I stopped listening,” he told me recently. He would no longer have to drink water blackened with mold or share a cell with rats, mice and cockroaches. He did a round of victory pushups and gave away all of the snack cakes he had been saving from the cafeteria.

When he finally read Wurst’s letter, however, he realized there was a catch. Even though Wurst had argued against it, the judge, Nicole Colbert-Botchway, had ordered him to wear an ankle monitor that would track his location at every moment using GPS. For as long as he would wear it, he would be required to pay $10 a day to a private company, Eastern Missouri Alternative Sentencing Services, or EMASS. Just to get the monitor attached, he would have to report to EMASS and pay $300 up front — enough to cover the first 25 days, plus a $50 installation fee.

White didn’t know how to find that kind of money. Before his arrest, he was earning minimum wage as a temp, wrapping up boxes of shampoo. His father was largely absent, and his mother, Lakisha Thompson, had recently lost her job as the housekeeping manager at a Holiday Inn. Raising Daehaun and his four siblings, she had struggled to keep up with the bills. The family bounced between houses and apartments in northern St. Louis County, where, as a result of Jim Crow redlining, most of the area’s black population lives. In 2014, they were living on Canfield Drive in Ferguson when Michael Brown was shot and killed there by a police officer. During the ensuing turmoil, Thompson moved the family to Green Bay, Wisconsin. White felt out of place. He was looked down on for his sagging pants, called the N-word when riding his bike. After six months, he moved back to St. Louis County on his own to live with three of his siblings and stepsiblings in a gray house with vinyl siding.

When White got home on the night of his release, he was so overwhelmed to see his family again that he forgot about the letter. He spent the next few days hanging out with his siblings, his mother, who had returned to Missouri earlier that year, and his girlfriend, Demetria, who was seven months pregnant. He didn’t report to EMASS.

What he didn’t realize was that he had failed to meet a deadline. Typically, defendants assigned to monitors must pay EMASS in person and have the device installed within 24 hours of their release from jail. Otherwise, they have to return to court to explain why they’ve violated the judge’s orders. White, however, wasn’t called back for a hearing. Instead, a week after he left the Workhouse, Colbert-Botchway issued a warrant for his arrest.

Three days later, a large group of police officers knocked on Thompson’s door, looking for information about an unrelated case, a robbery. White and his brother had been making dinner with their mother, and the officers asked them for identification. White’s name matched the warrant issued by Colbert-Botchway. “They didn’t tell me what the warrant was for,” he said. “Just that it was for a violation of my release.” He was taken downtown and held for transfer back to the Workhouse. “I kept saying to myself, ’Why am I locked up?’” he recalled.

The next morning, Thompson called the courthouse to find the answer. She learned that her son had been jailed over his failure to acquire and pay for his GPS monitor. To get him out, she needed to pay EMASS on his behalf.

This seemed absurd to her. When Daehaun was 13, she had worn an ankle monitor after violating probation for a minor theft, but the state hadn’t required her to cover the cost of her own supervision. “This is a 19-year-old coming out of the Workhouse,” she told me recently. “There’s no way he has $300 saved.” Thompson felt that the court was forcing her to choose between getting White out of jail and supporting the rest of her family.

Over the past half-century, the number of people behind bars in the United States jumped by more than 500%, to 2.2 million. This extraordinary rise, often attributed to decades of “tough on crime” policies and harsh sentencing laws, has ensured that even as crime rates have dropped since the 1990s, the number of people locked up and the average length of their stay have increased. According to the Bureau of Justice Statistics, the cost of keeping people in jails and prisons soared to $87 billion in 2015 from $19 billion in 1980, in current dollars.

In recent years, politicians on both sides of the aisle have joined criminal-justice reformers in recognizing mass incarceration as both a moral outrage and a fiscal sinkhole. As ankle bracelets have become compact and cost-effective, legislators have embraced them as an enlightened alternative. More than 125,000 people in the criminal-justice system were supervised with monitors in 2015, compared with just 53,000 people in 2005, according to the Pew Charitable Trusts. Although no current national tally is available, data from several cities — Austin, Texas; Indianapolis; Chicago; and San Francisco — show that this number continues to rise. Last December, the First Step Act, which includes provisions for home detention, was signed into law by President Donald Trump with support from the private prison giants GEO Group and CoreCivic. These corporations dominate the so-called community-corrections market — services such as day-reporting and electronic monitoring — that represents one of the fastest-growing revenue sectors of their industry.

By far the most decisive factor promoting the expansion of monitors is the financial one. The United States government pays for monitors for some of those in the federal criminal-justice system and for tens of thousands of immigrants supervised by Immigration and Customs Enforcement. But states and cities, which incur around 90% of the expenditures for jails and prisons, are increasingly passing the financial burden of the devices onto those who wear them. It costs St. Louis roughly $90 a day to detain a person awaiting trial in the Workhouse, where in 2017 the average stay was 291 days. When individuals pay EMASS $10 a day for their own supervision, it costs the city nothing. A 2014 study by NPR and the Brennan Center found that, with the exception of Hawaii, every state required people to pay at least part of the costs associated with GPS monitoring. Some probation offices and sheriffs run their own monitoring programs — renting the equipment from manufacturers, hiring staff and collecting fees directly from participants. Others have outsourced the supervision of defendants, parolees and probationers to private companies.

“There are a lot of judges who reflexively put people on monitors, without making much of a pretense of seriously weighing it at all,” said Chris Albin-Lackey, a senior legal adviser with Human Rights Watch who has researched private-supervision companies. “The limiting factor is the cost it might impose on the public, but when that expense is sourced out, even that minimal brake on judicial discretion goes out the window.”

Nowhere is the pressure to adopt monitors more pronounced than in places like St. Louis: cash-strapped municipalities with large populations of people awaiting trial. Nationwide on any given day, half a million people sit in crowded and expensive jails because, like Daehaun White, they cannot purchase their freedom.

As the movement to overhaul cash bail has challenged the constitutionality of jailing these defendants, judges and sheriffs have turned to monitors as an appealing substitute. In San Francisco, the number of people released from jail onto electronic monitors tripled after a 2018 ruling forced courts to release more defendants without bail. In Marion County, Indiana, where jail overcrowding is routine, roughly 5,000 defendants were put on monitors last year. “You would be hard-pressed to find bail-reform legislation in any state that does not include the possibility of electronic monitoring,” said Robin Steinberg, the chief executive of the Bail Project.

Yet like the system of wealth-based detention they are meant to help reform, ankle monitors often place poor people in special jeopardy. Across the country, defendants who have not been convicted of a crime are put on “offender funded” payment plans for monitors that sometimes cost more than their bail. And unlike bail, they don’t get the payment back, even if they’re found innocent. Although a federal survey shows that nearly 40% of Americans would have trouble finding $400 to cover an emergency, companies and courts routinely threaten to lock up defendants if they fall behind on payment. In Greenville, South Carolina, pretrial defendants can be sent back to jail when they fall three weeks behind on fees. (An officer for the Greenville County Detention Center defended this practice on the grounds that participants agree to the costs in advance.) In Mohave County, Arizona, pretrial defendants charged with sex offenses have faced rearrest if they fail to pay for their monitors, even if they prove that they can’t afford them. “We risk replacing an unjust cash-bail system,” Steinberg said, “with one just as unfair, inhumane and unnecessary.”

Many local judges, including in St. Louis, do not conduct hearings on a defendant’s ability to pay for private supervision before assigning them to it; those who do often overestimate poor people’s financial means. Without judicial oversight, defendants are vulnerable to private-supervision companies that set their own rates and charge interest when someone can’t pay up front. Some companies even give their employees bonuses for hitting collection targets.

It’s not only debt that can send defendants back to jail. People who may not otherwise be candidates for incarceration can be punished for breaking the lifestyle rules that come with the devices. A survey in California found that juveniles awaiting trial or on probation face especially difficult rules; in one county, juveniles on monitors were asked to follow more than 50 restrictions, including not participating “in any social activity.” For this reason, many advocates describe electronic monitoring as a “net-widener”: Far from serving as an alternative to incarceration, it ends up sweeping more people into the system.

Dressed in a baggy yellow City of St. Louis Corrections shirt, White was walking to the van that would take him back to the Workhouse after his rearrest, when a guard called his name and handed him a bus ticket home. A few hours earlier, his mom had persuaded her sister to lend her the $300 that White owed EMASS. Wurst, his public defender, brought the receipt to court.

The next afternoon, White hitched a ride downtown to the EMASS office, where one of the company’s bond-compliance officers, Nick Buss, clipped a black box around his left ankle. Based in the majority white city of St. Charles, west of St. Louis, EMASS has several field offices throughout eastern Missouri. A former probation and parole officer, Michael Smith, founded the company in 1991 after Missouri became one of the first states to allow private companies to supervise some probationers. (Smith and other EMASS officials declined to comment for this story.)

The St. Louis area has made national headlines for its “offender funded” model of policing and punishment. Stricken by postindustrial decline and the 2008 financial crisis, its municipalities turned to their police departments and courts to make up for shortfalls in revenue. In 2015, the Ferguson Report by the United States Department of Justice put hard numbers to what black residents had long suspected: The police were targeting them with disproportionate arrests, traffic tickets and excessive fines.

EMASS may have saved the city some money, but it also created an extraordinary and arbitrary-seeming new expense for poor defendants. When cities cover the cost of monitoring, they often pay private contractors $2 to $3 a day for the same equipment and services for which EMASS charges defendants $10 a day. To come up with the money, EMASS clients told me, they had to find second jobs, take their children out of day care and cut into disability checks. Others hurried to plead guilty for no better reason than that being on probation was cheaper than paying for a monitor.

At the downtown office, White signed a contract stating that he would charge his monitor for an hour and a half each day and “report” to EMASS with $70 each week. He could shower, but was not to bathe or swim (the monitor is water-resistant, not waterproof). Interfering with the monitor’s functioning was a felony.

White assumed that GPS supervision would prove a minor annoyance. Instead, it was a constant burden. The box was bulky and the size of a fist, so he couldn’t hide it under his jeans. Whenever he left the house, people stared. There were snide comments (“nice bracelet”) and cutting jokes. His brothers teased him about having a babysitter. “I’m nobody to watch,” he insisted.

The biggest problem was finding work. Confident and outgoing, White had never struggled to land jobs; after dropping out of high school in his junior year, he flipped burgers at McDonald’s and Steak ’n Shake. To pay for the monitor, he applied to be a custodian at Julia Davis Library, a cashier at Home Depot, a clerk at Menards. The conversation at Home Depot had gone especially well, White thought, until the interviewer casually asked what was on his leg.

To help improve his chances, he enrolled in Mission: St. Louis, a job-training center for people reentering society. One afternoon in January, he and a classmate role-played how to talk to potential employers about criminal charges. White didn’t know how much detail to go into. Should he tell interviewers that he was bringing his pregnant girlfriend some snacks when he was pulled over? He still isn’t sure, because a police officer came looking for him midway through the class. The battery on his monitor had died. The officer sent him home, and White missed the rest of the lesson.

With all of the restrictions and rules, keeping a job on a monitor can be as difficult as finding one. The hours for weekly check-ins at the downtown EMASS office — 1 p.m. to 6 p.m. on Tuesdays and Wednesdays, and 1 p.m. until 5 p.m. on Mondays — are inconvenient for those who work. In 2011, the National Institute of Justice surveyed 5,000 people on electronic monitors and found that 22% said they had been fired or asked to leave a job because of the device. Juawanna Caves, a young St. Louis native and mother of two, was placed on a monitor in December after being charged with unlawful use of a weapon. She said she stopped showing up to work as a housekeeper when her co-workers made her uncomfortable by asking questions and later lost a job at a nursing home because too many exceptions had to be made for her court dates and EMASS check-ins.

Perpetual surveillance also takes a mental toll. Nearly everyone I spoke to who wore a monitor described feeling trapped, as though they were serving a sentence before they had even gone to trial. White was never really sure about what he could or couldn’t do under supervision. In January, when his girlfriend had their daughter, Rylan, White left the hospital shortly after the birth, under the impression that he had a midnight curfew. Later that night, he let his monitor die so that he could sneak back before sunrise to see the baby again.

EMASS makes its money from defendants. But it gets its power over them from judges. It was in 2012 that the judges of the St. Louis court started to use the company’s services — which previously involved people on probation for misdemeanors — for defendants awaiting trial. Last year, the company supervised 239 defendants in the city of St. Louis on GPS monitors, according to numbers provided by EMASS to the court. The alliance with the courts gives the company not just a steady stream of business but a reliable means of recouping debts: Unlike, say, a credit-card company, which must file a civil suit to collect from overdue customers, EMASS can initiate criminal-court proceedings, threatening defendants with another stay in the Workhouse.

In early April, I visited Judge Rex Burlison in his chambers on the 10th floor of the St. Louis civil courts building. A few months earlier, Burlison, who has short gray hair and light blue eyes, had been elected by his peers as presiding judge, overseeing the city’s docket, budget and operations, including the contract with EMASS. It was one of the first warm days of the year, and from the office window I could see sunlight glimmering on the silver Gateway Arch.

I asked Burlison about the court’s philosophy for using pretrial GPS. He stressed that while each case was unique and subject to the judge’s discretion, monitoring was most commonly used for defendants who posed a flight risk, endangered public safety or had an alleged victim. Judges vary in how often they order defendants to wear monitors, and critics have attacked the inconsistency. Colbert-Botchway, the judge who put White on a monitor, regularly made pretrial GPS a condition of release, according to public defenders. (Colbert-Botchway declined to comment.) But another St. Louis city judge, David Roither, told me, “I really don’t use it very often because people here are too poor to pay for it.”

Whenever a defendant on a monitor violates a condition of release, whether related to payment or a curfew or something else, EMASS sends a letter to the court. Last year, Burlison said, the court received two to three letters a week from EMASS about violations. In response, the judge usually calls the defendant in for a hearing. As far as he knew, Burlison said, judges did not incarcerate people simply for failing to pay EMASS debts. “Why would you?” he asked me. When people were put back in jail, he said, there were always other factors at play, such as the defendant missing a hearing. (Issuing a warrant for White’s arrest without a hearing, he acknowledged after looking at the docket, was not the court’s standard practice.)

The contract with EMASS allows the court to assign indigent defendants to the company to oversee “at no cost.” Yet neither Burlison nor any of the other current or former judges I spoke with recalled waiving fees when ordering someone to wear an ankle monitor. When I asked Burlison why he didn’t, he said that he was concerned that if he started to make exceptions on the basis of income, the company might stop providing ankle-monitoring services in St. Louis.

“People get arrested because of life choices,” Burlison said. “Whether they’re good for the charge or not, they’re still arrested and have to deal with it, and part of dealing with it is the finances.” To release defendants without monitors simply because they can’t afford the fee, he said, would be to disregard the safety of their victims or the community. “We can’t just release everybody because they’re poor,” he continued.

But many people in the Workhouse awaiting trial are poor. In January, civil rights groups filed suit against the city and the court, claiming that the St. Louis bail system violated the Constitution, in part by discriminating against those who can’t afford to post bail. That same month, the Missouri Supreme Court announced new rules that urged local courts to consider releasing defendants without monetary conditions and to waive fees for poor people placed on monitors. Shortly before the rules went into effect, on July 1, Burlison said that the city intends to shift the way ankle monitors are distributed and plans to establish a fund to help indigent defendants pay for their ankle bracelets. But he said he didn’t know how much money would be in the fund or whether it was temporary or permanent. The need for funding could grow quickly. The pending bail lawsuit has temporarily spurred the release of more defendants from custody, and as a result, public defenders say, the demand for monitors has increased.

Judges are anxious about what people released without posting bail might do once they get out. Several told me that monitors may ensure that the defendants return to court. Not unlike doctors who order a battery of tests for a mildly ill patient to avoid a potential malpractice suit, judges seem to view monitors as a precaution against their faces appearing on the front page of the newspaper. “Every judge’s fear is to let somebody out on recognizance and he commits murder, and then everyone asks, ‘How in the hell was this person let out?’” said Robert Dierker, who served as a judge in St. Louis from 1986 to 2017 and now represents the city in the bail lawsuit. “But with GPS, you can say, ‘Well, I have him on GPS, what else can I do?’”

Critics of monitors contend that their public-safety appeal is illusory: If defendants are intent on harming someone or skipping town, the bracelet, which can be easily removed with a pair of scissors, would not stop them. Studies showing that people tracked by GPS appear in court more reliably are scarce, and research about its effectiveness as a deterrent is inconclusive.

“The fundamental question is, What purpose is electronic monitoring serving?” said Blake Strode, the executive director of ArchCity Defenders, a nonprofit civil rights law firm in St. Louis that is one of several firms representing the plaintiffs in the bail lawsuit. “If the only purpose it’s serving is to make judges feel better because they don’t want to be on the hook if something goes wrong, then that’s not a sensible approach. We should not simply be monitoring for monitoring’s sake.”

Electronic monitoring was first conceived in the early 1960s by Ralph and Robert Gable, identical twins studying at Harvard under the psychologists Timothy Leary and B.F. Skinner, respectively. Influenced in part by Skinner’s theories of positive reinforcement, the Gables rigged up some surplus missile-tracking equipment to monitor teenagers on probation; those who showed up at the right places at the right times were rewarded with movie tickets, limo rides and other prizes.

Although this round-the-clock monitoring was intended as a tool for rehabilitation, observers and participants alike soon recognized its potential to enhance surveillance. All but two of the 16 volunteers in their initial study dropped out, finding the two bulky radio transmitters oppressive. “They felt like it was a prosthetic conscience, and who would want Mother all the time along with you?” Robert Gable told me. Psychology Today labeled the invention a “belt from Big Brother.”

The reality of electronic monitoring today is that Big Brother is watching some groups more than others. No national statistics are available on the racial breakdown of Americans wearing ankle monitors, but all indications suggest that mass supervision, like mass incarceration, disproportionately affects black people. In Cook County, Illinois, for instance, black people make up 24% of the population, and 67% of those on monitors. The sociologist Simone Browne has connected contemporary surveillance technologies like GPS monitors to America’s long history of controlling where black people live, move and work. In her 2015 book, “Dark Matters,” she traces the ways in which “surveillance is nothing new to black folks,” from the branding of enslaved people and the shackling of convict laborers to Jim Crow segregation and the home visits of welfare agencies. These historical inequities, Browne notes, influence where and on whom new tools like ankle monitors are imposed.

For some black families, including White’s, monitoring stretches across generations. Annette Taylor, the director of Ripple Effect, an advocacy group for prisoners and their families based in Champaign, Illinois, has seen her ex-husband, brother, son, nephew and sister’s husband wear ankle monitors over the years. She had to wear one herself, about a decade ago, she said, for driving with a suspended license. “You’re making people a prisoner of their home,” she told me. When her son was paroled and placed on house arrest, he couldn’t live with her, because he was forbidden to associate with people convicted of felonies, including his stepfather, who was also on house arrest.

Some people on monitors are further constrained by geographic restrictions — areas of the city or neighborhood that they can’t enter without triggering an alarm. James Kilgore, a research scholar at the University of Illinois at Champaign-Urbana, has cautioned that these exclusionary zones could lead to “e-gentrification,” effectively keeping people out of more-prosperous neighborhoods. In 2016, after serving four years in prison for drug conspiracy, Bryan Otero wore a monitor as a condition of parole. He commuted from the Bronx to jobs at a restaurant and a department store in Manhattan, but he couldn’t visit his family or doctor because he was forbidden to enter a swath of Manhattan between 117th Street and 131st Street. “All my family and childhood friends live in that area,” he said. “I grew up there.”

Michelle Alexander, a legal scholar and columnist for The Times, has argued that monitoring engenders a new form of oppression under the guise of progress. In her 2010 book, “The New Jim Crow,” she wrote that the term “mass incarceration” should refer to the “system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls — walls that are invisible to the naked eye but function nearly as effectively as Jim Crow laws once did at locking people of color into a permanent second-class citizenship.”

As the cost of monitoring continues to fall, those who are required to submit to it may worry less about the expense and more about the intrusive surveillance. The devices, some of which are equipped with two-way microphones, can give corrections officials unprecedented access to the private lives not just of those monitored but also of their families and friends. GPS location data appeals to the police, who can use it to investigate crimes. Already the goal is both to track what individuals are doing and to anticipate what they might do next. BI Incorporated, an electronic-monitoring subsidiary of GEO Group, has the ability to assign risk scores to the behavioral patterns of those monitored, so that law enforcement can “address potential problems before they happen.” Judges leery of recidivism have begun to embrace risk-assessment tools. As a result, defendants who have yet to be convicted of an offense in court may be categorized by their future chances of reoffending.

The combination of GPS location data with other tracking technologies such as automatic license-plate readers represents an uncharted frontier for finer-grained surveillance. In some cities, police have concentrated these tools in neighborhoods of color. A CityLab investigation found that Baltimore police were more likely to deploy the Stingray — the controversial and secretive cellphone tracking technology — where African Americans lived. In the aftermath of Freddie Gray’s death in 2015, the police spied on Black Lives Matter protesters with face recognition technology. Given this pattern, the term “electronic monitoring” may soon refer not just to a specific piece of equipment but to an all-encompassing strategy.

If the evolution of the criminal-justice system is any guide, it is very likely that the ankle bracelet will go out of fashion. Some GPS monitoring vendors have already started to offer smartphone applications that verify someone’s location through voice and face recognition. These apps, with names like Smart-LINK and Shadowtrack, promise to be cheaper and more convenient than a boxy bracelet. They’re also less visible, mitigating the stigma and normalizing surveillance. While reducing the number of people in physical prison, these seductive applications could, paradoxically, increase its reach. For the nearly 4.5 million Americans on probation or parole, it is not difficult to imagine a virtual prison system as ubiquitous — and invasive — as Instagram or Facebook.

On January 24, exactly three months after White had his monitor installed, his public defender successfully argued in court for its removal. His phone service had been shut off because he had fallen behind on the bill, so his mother told him the good news over video chat.

When White showed up to EMASS a few days later to have the ankle bracelet removed, he said, one of the company’s employees told him that he couldn’t take off his monitor until he paid his debt. White offered him the $35 in his wallet — all the money he had. It wasn’t enough. The employee explained that he needed to pay at least half of the $700 he owed. Somewhere in the contract he had signed months earlier, White had agreed to pay his full balance “at the time of removal.” But as White saw it, the court that had ordered the monitor’s installation was now ordering its removal. Didn’t that count?

“That’s the only thing that’s killing me,” White told me a few weeks later, in early March. “Why are you all not taking it off?” We were in his brother’s room, which, unlike White’s down the hall, had space for a wobbly chair. White sat on the bed, his head resting against the frame, while his brother sat on the other end by the TV, mumbling commands into a headset for the fantasy video game Fortnite. By then, the prosecutor had offered White two to three years of probation in exchange for a plea. (White is waiting to hear if he has been accepted into the city’s diversion program for “youthful offenders,” which would allow him to avoid pleading and wipe the charges from his record in a year.)

White was wearing a loosefitting Nike track jacket and red sweats that bunched up over the top of his monitor. He had recently stopped charging it, and so far, the police hadn’t come knocking. “I don’t even have to have it on,” he said, looking down at his ankle. “But without a job, I can’t get it taken off.” In the last few weeks, he had sold his laptop, his phone and his TV. That cash went to rent, food and his daughter, and what was left barely made a dent in what he owed EMASS.

It was a Monday — a check-in day — but he hadn’t been reporting for the past couple of weeks. He didn’t see the point; he didn’t have the money to get the monitor removed and the office was an hour away by bus. I offered him a ride.

EMASS check-ins take place in a three-story brick building with a low-slung facade draped in ivy. The office doesn’t take cash payments, and a Western Union is conveniently located next door. The other men in the waiting room were also wearing monitors. When it was White’s turn to check in, Buss, the bond-compliance officer, unclipped the band from his ankle and threw the device into a bin, White said. He wasn’t sure why EMASS had now softened its approach, but his debts nonetheless remained.

Buss calculated the money White owed going back to November: $755, plus 10% annual interest. Over the next nine months, EMASS expected him to make monthly payments that would add up to $850 — more than the court had required for his bond. White looked at the receipt and shook his head. “I get in trouble for living,” he said as he walked out of the office. “For being me.”

ProPublica is a Pulitzer Prize-winning investigative newsroom.


Evite Admitted Data Breach, But Didn't Disclose The Number Of Users Affected

Evite, the online social and invitations site, disclosed last month a data breach affecting some of its users:

"We became aware of a data security incident involving potential unauthorized access to our systems in April 2019. We engaged one of the leading data security firms and launched a thorough investigation. The investigation potentially traced the incident to malicious activity starting on February 22, 2019. On May 14, 2019, we concluded that an unauthorized party had acquired an inactive data storage file associated with our user accounts... Upon discovering the incident, we took steps to understand the nature and scope of the issue, and brought in external forensic consultants that specialize in cyber-attacks. We coordinated with law enforcement regarding the incident, and are working with leading security experts to address any vulnerabilities..."

Evite was founded in 1998, so the pool of potentially affected users could be large. The breach announcement did not disclose the number of users affected.

The Evite breach announcement also said, "No user information more recent than 2013 was contained in the file" which was accessed/stolen by unauthorized persons. Evite said it has notified affected users, and has reset the passwords of affected users. The Evite system will prompt affected users to create new passwords when signing into the service.

The announcement listed the data elements accessed/stolen: names, usernames, email addresses, and passwords. If users also entered their birth dates, phone numbers, and mailing addresses, then those data elements were also accessed/stolen. Social Security numbers were not affected since Evite doesn't collect that data. Evite said payment information (e.g., credit cards, debit cards, bank accounts, etc.) was not affected because:

"We do not store financial or payment information. If you opted to store your payment card in your account, your payment information is maintained by and stored on the internal systems of our third-party vendor."

Thank goodness for small favors. The Evite disclosure did not explain why passwords were not encrypted, nor whether passwords or other data elements would be encrypted in the future. As with any data breach, context matters. ZDNet reported:

"... a hacker named Gnosticplayers put up for sale the customer data of six companies, including Evite. The hacker claimed to be selling ten million Evite user records that included full names, email addresses, IP addresses, and cleartext passwords. ZDNet reached out to notify Evite of the hack and that its data was being sold on the dark web on April 15; however, the company never returned our request for comment... Back in April, the data of 10 million Evite users was put up for sale on a dark web marketplace for ₿0.2419 (~$1,900). The same hacker has breached, stolen, and put up for sale the details of over one billion users from many other companies, including other major online services, such as Canva, 500px, UnderArmor, ShareThis, GfyCat, Ge.tt, and others."

The incident is another reminder of the high value of consumers' personal data, and that hackers take action quickly to use or sell stolen data.


FTC Urged To Rule On Legality Of 'Secret Surveillance Scores' Used To Vary Prices By Each Online Shopper

Nobody wants to pay too much for a product. If you like online shopping, you may have been charged higher prices than your neighbors. Gizmodo reported:

"... researchers have documented and studied the use of so-called "surveillance scoring," the shadowy, but widely adopted practice of using computer algorithms that, in commerce, result in customers automatically paying different prices for the same product. The term also encompasses tactics used by employers and landlords to deny applicants jobs and housing, respectively, based on suggestions an algorithm spits out. Now experts allege that much of this surveillance scoring behavior is illegal, and they’re asking the Federal Trade Commission (FTC) to investigate."

"In a 38-page petition filed last week, the Consumer Education Foundation (CEF), a California nonprofit with close ties to the group Consumer Watchdog, asked the FTC to explore whether the use of surveillance scores constitutes “unfair or deceptive practices” under the Federal Trade Commission Act..."

The petition is part of a “Represent Consumers” (RC) program.

Many travelers have experienced dynamic pricing, where airlines vary fares based upon market conditions: when demand rises, prices go up; when demand falls or many seats remain unsold (excess supply), prices go down. But that dynamic pricing does not vary by traveler: everyone shopping at the same moment sees the same fare.
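A toy model makes the distinction concrete (purely illustrative; the function and its weights are invented, not any airline's actual algorithm). Market-level dynamic pricing keys the fare only to demand and remaining inventory, so every shopper querying at the same moment is quoted the same number:

```python
def market_fare(base: float, demand_ratio: float, seats_left: int, capacity: int) -> float:
    """Toy market-level dynamic pricing: inputs describe the market, not the shopper."""
    load = 1 - seats_left / capacity  # fuller plane -> higher fare
    return round(base * (0.8 + 0.6 * demand_ratio + 0.4 * load), 2)

# Two travelers checking the same flight at the same moment see identical fares,
# because nothing about the individual shopper enters the calculation.
fare_a = market_fare(base=200, demand_ratio=0.9, seats_left=20, capacity=180)
fare_b = market_fare(base=200, demand_ratio=0.9, seats_left=20, capacity=180)
assert fare_a == fare_b
```

Per-person "surveillance" pricing, by contrast, would add shopper-specific inputs (a profile or score) to that calculation, so the same seat could be quoted at different prices to different people.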

Pricing by each person raises concerns of price discrimination. The legal definition of price discrimination in the United States:

"A seller charging competing buyers different prices for the same "commodity" or discriminating in the provision of "allowances" — compensation for advertising and other services — may be violating the Robinson-Patman Act... Price discriminations are generally lawful, particularly if they reflect the different costs of dealing with different buyers or are the result of a seller's attempts to meet a competitor's offering... There are two legal defenses to these types of alleged Robinson-Patman violations: (1) the price difference is justified by different costs in manufacture, sale, or delivery (e.g., volume discounts), or (2) the price concession was given in good faith to meet a competitor's price."

Airlines have wanted to extend dynamic pricing to each person, and “surveillance scores” seem perfectly suited for the task. The RC petition is packed with information that helps consumers understand the extent of these business practices. First, the petition described the industry involved:

"Surveillance scoring starts with "analytics companies," the true number of which is unknown... these firms amass thousands or even tens of thousands of demographic and lifestyle data points about consumers, with the help of an estimated 121 data brokers and aggregators... The analytics firms use algorithms to categorize, grade, or assign a numerical value to a consumer based on the consumer’s estimated predicted behavior. That score then dictates how a company will treat a consumer. Consumers deemed to be less valuable are treated poorly, while consumers with better “grades” get preferential treatment..."
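The mechanics described above — thousands of data points reduced by an algorithm to a single grade that dictates treatment — can be sketched with a toy weighted model. The fields, weights, and threshold here are hypothetical; real analytics firms use far larger, undisclosed feature sets:

```python
def consumer_score(profile: dict[str, float], weights: dict[str, float]) -> float:
    """Toy surveillance score: a weighted sum of demographic/lifestyle data points."""
    return sum(weights.get(key, 0.0) * value for key, value in profile.items())

# Hypothetical data points a broker might supply about one shopper.
profile = {"est_income_k": 85, "returns_per_year": 1, "zip_median_rent": 1.4}
# Hypothetical weights: higher income and rent raise the score; returns lower it.
weights = {"est_income_k": 0.5, "returns_per_year": -3.0, "zip_median_rent": 2.0}

score = consumer_score(profile, weights)
# A company could then gate treatment on the score, e.g. a preferential tier.
tier = "preferred" if score > 40 else "standard"
```

The consumer never sees the score, the weights, or the threshold, which is precisely the opacity the petition asks the FTC to examine.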

Second, the RC petition cited a study which identified 44 different types of proprietary surveillance scores used by industry participants to predict consumer behavior. Some of the score types (emphasis added):

"The Medication Adherence Score, which predicts whether a consumer is likely to follow a medication regimen; The Health Risk Score, which predicts how much a specific patient will cost an insurance company; The Consumer Profitability Score, which predicts which households may be profitable for a company and hence desirable customers; The Job Security Score, which predicts a person’s future income and ability to pay for things; The Churn Score, which predicts whether a consumer is likely to move her business to another company; The Discretionary Spending Index, which scores how much extra cash a particular consumer might be able to spend on non-necessities; The Invitation to Apply Score, which predicts how likely a consumer is to respond to a sales offer; The Charitable Donor Score, which predicts how likely a household is to make significant charitable donations; and The Pregnancy Predictor Score, which predicts the likelihood of someone getting pregnant."

It is important to note that the RC petition does not call for a halt in the collection of personal data about consumers. Rather, it asks the FTC, "to investigate and prohibit the targeting of consumers’ private data against them after it has been collected." Clarity is needed about what is, and is not, legal when consumers' personal data is used against them.

Third, the RC petition also cited published studies about pricing discrimination:

"An early seminal study of price discrimination published by researchers at Northeastern University in 2014 (Northeastern Price Discrimination Study) examined the pricing practices of e-commerce websites. The researchers developed a software-based methodology for measuring price discrimination and tested it with 300 real-world users who shopped on 16 popular e-commerce websites. Of ten different general retailers tested in 2014, only one -- Home Depot -- was confirmed to be engaging in price discrimination. Home Depot quoted prices to mobile-device users that were approximately $100 more than those quoted to desktop users. The researchers were unable to ascertain why... The Northeastern Price Discrimination Study also found that “human shoppers got worse bargains on a number of websites,” compared to an automated shopping browser that did not have any personal data trail associated with it, validating that Home Depot was considering shoppers’ personal data when setting prices online."

So, concerns about price discrimination aren't simply theory. Related to that, the RC petition cited its own research:

"... researchers at Northeastern University developed an online tool to “expose how websites personalize prices.” The Price Discrimination Tool (PDT) is a plug-in extension used on the Google Chrome browser that allows any Internet user to perform searches on five websites to see if the user is being charged a different price based on whatever information the companies have about that particular user. The PDT uses a remote computer server that is anonymous -- it has no personal data profile... The PDT then displays the price results from the human shopper’s search and those obtained by the remote anonymous computer server. Our own testing using the PDT revealed that Home Depot continues to offer different prices to human shoppers. For example, a search on Home Depot’s website for “white paint” reveals price discrimination. Of the 24 search results on the first page, Home Depot quoted us higher prices for six tubs of white paint than it quoted the anonymous computer... Our testing also revealed similar price discrimination on Home Depot’s website for light bulbs, toilet paper, toilet paper holders, caulk guns, halogen floor lamps and screw drivers... We also detected price discrimination on Walmart’s website using the PDT. Our testing revealed price discrimination on Walmart’s website for items such as paper towels, highlighters, pens, paint and toilet paper roll holders."

The RC petition listed examples: the Home Depot site quoted $59.87 for a five-gallon bucket of paint to the anonymous user, and $62.96 for the same product to a researcher. Another example: the site quoted $10.26 for a toilet-paper holder to the anonymous user, and $20.89 for the same product to a researcher -- double the price. Price differences per person ranged from small to huge.
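The size of those gaps is easy to quantify from the petition's own figures. A minimal sketch (the prices are the ones cited above; the function name is illustrative):

```python
def markup_percent(anonymous_price: float, tracked_price: float) -> float:
    """Percentage markup quoted to the tracked (human) shopper
    relative to the anonymous baseline price."""
    return (tracked_price - anonymous_price) / anonymous_price * 100

# Figures cited in the RC petition for Home Depot's website:
paint = markup_percent(59.87, 62.96)      # five-gallon bucket of paint
holder = markup_percent(10.26, 20.89)     # toilet-paper holder

print(f"paint markup: {paint:.1f}%")      # a modest ~5% markup
print(f"holder markup: {holder:.1f}%")    # roughly double the price
```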

Besides concerns about price discrimination, the RC petition discussed "discriminatory customer service," and the data analytics firms allegedly involved:

"Zeta Global sells customer value scores that will determine, among other things, the quality of customer service a consumer receives from one of Zeta’s corporate clients. Zeta Global “has a database of more than 700 million people, with an average of over 2,500 pieces of data per person,” from which it creates the scores. The scores are based on data “such as the number of times a customer has dialed a call center and whether that person has browsed a competitor’s website or searched certain keywords in the past few days.” Based on that score, Zeta will recommend to its clients, which include wireless carriers, whether to respond to one customer more quickly than to others.

"Kustomer Inc.: Customer-service platform Kustomer Inc. uses customer value scores to enable retailers and other businesses to treat customer service inquiries differently..."

"Opera Solutions: describes itself as “a global provider of advanced analytics software solutions that address the persistent problem of scaling Big Data analytics.” Opera Solutions generates customer value scores for its clients (including airlines, retailers and banks)..."

The petition cited examples of "discriminatory customer service," which include denied product returns, or customers shunted to less helpful customer service options. Plus, there are accuracy concerns:

"Considering that credit scores -- the existence of which has been public since 1970 -- are routinely based on credit reports found to contain errors that harm consumers’ financial standing, it is highly likely that Secret Surveillance Scores are based on inaccurate or outdated information. Since the score and the erroneous data upon which it relies are secret, there is no way to correct an error, assuming the consumer was aware of it."

Regular readers of this blog are already aware of errors in reports from credit reporting agencies. A copy of the RC petition is also available here (Adobe PDF, 3.2 Mbytes).

What immediately becomes clear while reading the petition is the massive amount of personal data collected about consumers to create these proprietary scores. Consumers have no way of knowing about, nor challenging the accuracy of, the scores when they are used against them. So, not only has an industry arisen which profits by acquiring and then selling, trading, analyzing, and/or using consumers' data; there is little to no accountability.

In other words, the playing field is heavily tilted for corporations and against consumers.

This is also a reminder of why telecommunications companies fought hard for the repeals of broadband privacy and net neutrality, both of which the U.S. Federal Communications Commission (FCC) delivered in 2017 under the leadership of FCC Chairman Ajit Pai, a Trump appointee. Repeal of the former consumer protection allows unrestricted collection of consumers' data, plus new revenue streams from selling the data collected to analytics firms, data brokers, and business partners.

Repeal of the second consumer protection allows internet and cable providers to price content using whatever criteria they choose. You see a rudimentary version of this pricing in a business practice called "zero rating." An example: streaming a movie via a provider's internet service counts against a data cap while the same movie viewed through the same provider's cable subscription does not. Yet, the exact same movie is delivered through the exact same cable (or fiber) internet connection.
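In accounting terms, zero rating is simple: bytes from favored sources are excluded before usage is checked against the cap. A minimal sketch of how a provider's billing logic might work, with hypothetical source names:

```python
# Hypothetical sketch of zero-rated data-cap accounting.
ZERO_RATED_SOURCES = {"own_cable_app"}  # the provider's own service

def billable_bytes(usage_by_source: dict) -> int:
    """Total bytes that count against the customer's data cap."""
    return sum(n for src, n in usage_by_source.items()
               if src not in ZERO_RATED_SOURCES)

usage = {
    "third_party_streaming": 3_000_000_000,  # counts against the cap
    "own_cable_app": 3_000_000_000,          # same movie, same wire -- exempt
}
print(billable_bytes(usage))  # only the third-party traffic is billable
```

The point of the sketch is that the exemption is a pure business-rule lookup; nothing about the network delivery differs between the two entries.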

Smart readers will immediately realize that a possible next step is per-person zero rating. Streaming a movie might count against your data cap but not against your neighbor's. Who would know? Oversight and consumer protections are needed.

What are your opinions of secret surveillance scores?


Celebrating 12 Years Online!

Twelve years ago today, I started the I've Been Mugged blog. During the first few years, the blog served as a tool to organize news, resources, and observations about data breaches, fraud alerts, credit reports, and credit monitoring services. All of this was new, as I struggled with how to respond to a former employer's data breach.

Over time, the I've Been Mugged blog expanded to cover privacy, surveillance, internet-connected devices, payments processors, energy providers, travel, and more. Many new technologies have emerged with sensors and cameras that collect data about consumers. The good news: there's plenty to blog about. The bad news: there's plenty to blog about.

Along the way, several guest authors have contributed. Thanks to them all. Bill Seebeck is still missed. I valued greatly his deep experience in banking and public relations. ProPublica has emerged as a new source of content.

Next, I'd like to thank all I've Been Mugged readers. I am grateful for your readership and for the comments you have submitted. We have explored together many interesting topics.

And, I especially want to thank my wife, Alison. Without her support and flexibility, I couldn't write I've Been Mugged. Last, some humor is below. Enjoy!

Searching for an affordable vacation package.


Fracking Companies Lost on Trespassing, but a Court Just Gave Them a Different Win

[Editor's note: today's guest post by ProPublica discusses business practices within the fracking industry. It is reprinted with permission.]

By Ken Ward Jr., The Charleston Gazette-Mail

A week after the West Virginia Supreme Court unanimously upheld the property rights of landowners battling one natural gas giant, the same court tossed out a challenge filed by another group of landowners against a different natural gas company.

In the latest case, decided earlier this month, the court upheld a lower court ruling that threw out a collection of lawsuits alleging dust, traffic and noise from gas operations were creating a nuisance for nearby landowners.

Charlie Burd, executive director of the Independent Oil and Gas Association of West Virginia, said the latest ruling lets “Wall Street know capital investment in oil and natural gas is welcome in West Virginia” and increases the possibility of more such investments in drilling and in so-called “downstream” chemical and manufacturing plants related to the gas industry.

In the property rights case last week, the justices set a clear legal standard that natural gas companies can’t trespass on a person’s land, without permission, to tap into gas reserves from neighboring tracts. In Monday’s case, the justices didn’t articulate a new legal precedent.

The mixed messages of the two cases show that “this is new litigation and the theories are evolving,” said Anthony Majestro, a lawyer who represented residents who lost their nuisance action before the Supreme Court.

“As the Marcellus shale drilling has expanded, there have been conflicts between surface owners and the companies that are drilling,” Majestro said. “Absent some legal requirement to require the industry to be good neighbors, I’m afraid we’ll continue to have these situations.”

Majestro’s clients were a group of residents in the Cherry Camp area of Harrison County, in north-central West Virginia. They wanted Antero Resources, the state’s largest gas company, to compensate them for unbearable traffic, “constant dust” that hangs in the air and settles on homes and vehicles, disruptive heavy equipment noise and bright lights that shine into their homes day and night.

The case focused on two dozen wells and a compressor station on six pads. The plaintiffs argued that their lives were being interfered with by Antero’s production of gas from beneath their property, even though the wells were on neighboring land, not on their own properties.

Across West Virginia’s gas-producing region, many residents own the surface of the land where they live, but don’t hold the minerals located beneath. Often, rights to the natural gas were signed over decades ago, long before drilling and gas production of the size and scope now conducted was even dreamed of.

The two court cases were featured last year as part of a series of stories by the Gazette-Mail and ProPublica that explored the impacts of the growth of natural gas on West Virginia communities.

In some ways, the Antero case was more complex than the earlier matter, in which the state court ruled clearly for Doddridge County residents Beth Crowder and David Wentz in their dispute with EQT Corp., West Virginia’s second-largest gas producer.

EQT had built a well pad and pipelines on Crowder and Wentz’s property to reach natural gas not located beneath their farm, but under neighboring tracts, including some that were thousands of feet away. Modern natural gas drilling uses horizontal techniques so that a smaller number of larger wells can reach much greater amounts of gas.

Justice John Hutchison wrote the court’s 5-0 decision against EQT, including a new point of law that sets a precedent that calls what the company did trespassing and forbids it from being done in the future.

The ruling in the Antero case was a split, 3-2 decision, and the opinion by Justice Evan Jenkins included no new points of law setting precedent for future cases.

Instead, his opinion was based on the view that Antero had gas leases that created a right for it to do whatever was “reasonably necessary” to get at its mineral holdings.

Antero spokeswoman Stephanie Iaquinta said, “We appreciate the court’s thorough review of this important matter and its decision.”

Chief Justice Beth Walker wrote a concurring opinion, pointing out that the majority decision wasn’t necessarily getting to the heart of the matter: whether the kinds of gas industry impacts complained about by the Harrison County residents constitute a legal nuisance.

And Justice Margaret Workman wrote a strongly worded dissent, saying that the court had not only ducked the central legal issue in the case, but that it had usurped the authority of a jury to decide if the facts of how Antero operates should be deemed to be “reasonably necessary” to produce natural gas.

“For a century, the tenor of our mineral easement case law, in each temporal and technological ideation, has been that there must be a balance of the rights of surface owners and mineral owners,” Workman wrote. “Rather than making any attempt to establish legal guidance for that goal in this new context, the majority endorses a gross inequity that effectively gives this new industrialization carte blanche to operate without any regard for the rights of those who live on the land.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.



Walmart To Pay $282 Million To Settle Bribery Charges By Regulators In The United States

The U.S. Securities And Exchange Commission (SEC) announced on June 20th a settlement agreement to resolve charges that Walmart violated:

"... the Foreign Corrupt Practices Act (FCPA) by failing to operate a sufficient anti-corruption compliance program for more than a decade as the retailer experienced rapid international growth... According to the SEC’s order, Walmart failed to sufficiently investigate or mitigate certain anti-corruption risks and allowed subsidiaries in Brazil, China, India, and Mexico to employ third-party intermediaries who made payments to foreign government officials without reasonable assurances that they complied with the FCPA. The SEC’s order details several instances when Walmart planned to implement proper compliance and training only to put those plans on hold or otherwise allow deficient internal accounting controls to persist even in the face of red flags and corruption allegations."

Walmart agreed to pay more than $144 million to settle the SEC’s charges and about $138 million to resolve parallel criminal charges by the U.S. Department of Justice (DOJ), for a combined total of more than $282 million. The settlements cover activities by the retailer's foreign subsidiaries in Brazil, China, India, and Mexico.

The DOJ announcement on June 20th stated:

"According to Walmart’s admissions, from 2000 until 2011, certain Walmart personnel responsible for implementing and maintaining the company’s internal accounting controls related to anti-corruption were aware of certain failures involving these controls, including relating to potentially improper payments to government officials in certain Walmart foreign subsidiaries, but nevertheless failed to implement sufficient controls that, among other things, would have ensured: (a) that sufficient anti-corruption-related due diligence was conducted on all third-party intermediaries (TPIs) who interacted with foreign officials; (b) that sufficient anti-corruption-related internal accounting controls concerning payments to TPIs existed; (c) that proof was required that TPIs had performed services before Walmart paid them; (d) that TPIs had written contracts that included anti-corruption clauses; (e) that donations ostensibly made to foreign government agencies were not converted to personal use by foreign officials; and (f) that policies covering gifts, travel and entertainment sufficiently addressed giving things of value to foreign officials and were implemented. Even though senior Walmart personnel responsible for implementing and maintaining the company’s internal accounting controls related to anti-corruption knew of these issues, Walmart did not begin to change its internal accounting controls related to anti-corruption to comply with U.S. criminal laws until 2011... In a number of instances, insufficiencies in Walmart’s anti-corruption-related internal accounting controls in these foreign subsidiaries were reported to senior Walmart employees and executives. The internal control failures allowed the foreign subsidiaries in Mexico, India, Brazil and China to open stores faster than they would have with sufficient internal accounting controls related to anti-corruption. 
Consequently, Walmart earned additional profits through these subsidiaries by opening some of its stores faster..."

So, to fast-track store openings, company executives allegedly made secret payments to third-party intermediaries who passed the money on to the government officials who approve permits. CBS News reported:

"... the payments to the intermediary were recorded as payments to a construction company, even though there were numerous "red flags" to indicate that the intermediary was actually a government official... The federal agreement does not identify the intermediary, but describes her in some detail: It says she became known inside Walmart Brazil as a "sorceress" or "genie" for her "ability to acquire permits quickly by 'sort(ing) things out like magic.' " The plea agreement also includes a provision barring the Brazilian subsidiary from making public claims or issuing press releases contradicting the facts outlined under the plea agreement."

Walmart is not alone regarding FCPA violations. According to the SEC, several companies agreed to settlement agreements and payments during 2019:

Readers of this blog may remember that Fresenius paid $3.5 million last year to resolve HIPAA violations from five small data breaches during 2012. And, last week a whistleblower report discussed Cognizant's content moderation work as a Facebook subcontractor.

Notable companies with SEC settlement agreements and payments during 2018:


Facebook Announced New Financial Services Offering Available in 2020

On Tuesday, Facebook announced its first financial services offering, which will be available in 2020:

"... we’re sharing plans for Calibra, a newly formed Facebook subsidiary whose goal is to provide financial services that will let people access and participate in the Libra network. The first product Calibra will introduce is a digital wallet for Libra, a new global currency powered by blockchain technology. The wallet will be available in Messenger, WhatsApp and as a standalone app — and we expect to launch in 2020... Calibra will let you send Libra to almost anyone with a smartphone, as easily and instantly as you might send a text message and at low to no cost. And, in time, we hope to offer additional services for people and businesses, like paying bills with the push of a button, buying a cup of coffee with the scan of a code or riding your local public transit..."

Long before the announcement, consumers crafted interesting nicknames for the financial service, such as #FaceCoin and #Zuckbucks. Good to see people with a sense of humor.

On a more serious topic, after multiple data breaches and privacy snafus at Facebook (plus repeated promises by CEO Zuckerberg that his company will do better), many people are understandably concerned about data security and privacy. Facebook's announcement also addressed security and privacy:

"... Calibra will have strong protections... We’ll be using all the same verification and anti-fraud processes that banks and credit cards use, and we’ll have automated systems that will proactively monitor activity to detect and prevent fraudulent behavior... We’ll also take steps to protect your privacy. Aside from limited cases, Calibra will not share account information or financial data with Facebook or any third party without customer consent. This means Calibra customers’ account information and financial data will not be used to improve ad targeting on the Facebook family of products. The limited cases where this data may be shared reflect our need to keep people safe, comply with the law and provide basic functionality to the people who use Calibra. Calibra will use Facebook data to comply with the law, secure customers’ accounts, mitigate risk and prevent criminal activity."

So, the new Calibra subsidiary promised that it won't share users' account information with Facebook's core social networking service, except when it will -- to "comply with the law." That leaves Calibra customers to trust Facebook's wall separating its business units, and "provide basic functionality to the people who use Calibra" sounds like a loophole broad enough to justify almost any data sharing. The announcement encourages interested persons to sign up for email updates.

Tech and financial experts quickly weighed in on the announcement and its promises. TechCrunch explained why Facebook created a new business subsidiary. After Calibra's Tuesday announcement:

"... critics started harping about the dangers of centralizing control of tomorrow’s money in the hands of a company with a poor track record of privacy and security. Facebook anticipated this, though, and created a subsidiary called Calibra to run its crypto dealings and keep all transaction data separate from your social data. Facebook shares control of Libra with 27 other Libra Association founding members, and as many as 100 total when the token launches in the first half of 2020. Each member gets just one vote on the Libra council, so Facebook can’t hijack the token’s governance even though it invented it."
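The one-member-one-vote claim is easy to put into numbers. A quick sketch using the member counts given in the announcement (28 founding members including Facebook/Calibra, up to 100 at launch):

```python
def voting_share(members: int, votes_held: int = 1) -> float:
    """Share of Libra council votes held, as a percentage."""
    return votes_held / members * 100

# Facebook/Calibra holds one vote, like every other member:
print(f"{voting_share(28):.1f}%")   # ~3.6% among 28 founding members
print(f"{voting_share(100):.1f}%")  # 1.0% if membership grows to 100
```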

TechCrunch also explained the risks to Calibra customers:

"... that leaves one giant vector for abuse of Libra: the developer platform... Apparently Facebook has already forgotten how allowing anyone to build on the Facebook app platform and its low barriers to “innovation” are exactly what opened the door for Cambridge Analytica to hijack 87 million people’s personal data and use it for political ad targeting. But in this case, it won’t be users’ interests and birthdays that get grabbed. It could be hundreds or thousands of dollars’ worth of Libra currency that’s stolen. A shady developer could build a wallet that just cleans out a user’s account or funnels their coins to the wrong recipient, mines their purchase history for marketing data or uses them to launder money..."

During the coming months, hopefully Calibra will disclose the controls it will implement on the developer platform to prevent abuses, theft, and fraud.

Readers wanting to learn more should read the Libra White Paper, which provides more details about the companies involved:

"The Libra Association is an independent, not-for-profit membership organization headquartered in Geneva, Switzerland. The association’s purpose is to coordinate and provide a framework for governance for the network... Members of the Libra Association will consist of geographically distributed and diverse businesses, nonprofit and multilateral organizations, and academic institutions. The initial group of organizations that will work together on finalizing the association’s charter and become “Founding Members” upon its completion are, by industry:

1. Payments: Mastercard, PayPal, PayU (Naspers’ fintech arm), Stripe, Visa
2. Technology and marketplaces: Booking Holdings, eBay, Facebook/Calibra, Farfetch, Lyft, Mercado Pago, Spotify AB, Uber Technologies, Inc.
3. Telecommunications: Iliad, Vodafone Group
4. Blockchain: Anchorage, Bison Trails, Coinbase, Inc., Xapo Holdings Limited
5. Venture Capital: Andreessen Horowitz, Breakthrough Initiatives, Ribbit Capital, Thrive Capital, Union Square Ventures
6. Nonprofit and multilateral organizations, and academic institutions: Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking"

Yes, the ride-hailing company, Uber, is involved. Yes, the same ride-hailing service which paid $148 million to settle lawsuits over its coverup of a 2016 data breach. Yes, the same ride-hailing service with a history of data security, compliance, cultural, and privacy snafus. This suggests -- for better or worse -- that in the future consumers will be able to pay for Uber rides using the Libra network.

Calibra hopes to have about 100 members in the Libra Association by the service launch in 2020. Clearly, there will be plenty more news to come. Below are draft screen images of the new app.

Early version of screen images of the Calibra mobile app.