145 posts categorized "Surveillance"

ExpressVPN Survey Indicates Americans Care About Privacy. Some Have Already Taken Action

ExpressVPN published the results of its privacy survey. The survey, commissioned by ExpressVPN and conducted by Propeller Insights, included a representative sample of about 1,000 adults in the United States.

Overall, 29.3% of survey respondents said they had used a virtual private network (VPN) or a proxy network. Survey respondents cited three broad reasons for using a VPN service: 1) to avoid surveillance, 2) to access content, and 3) to stay safe online. Detailed survey results about surveillance concerns:

"The most popular reasons to use a VPN are related to surveillance, with 41.7% of respondents aiming to protect against sites seeing their IP, 26.4% to prevent their internet service provider (ISP) from gathering information, and 16.6% to shield against their local government."

Who performs the surveillance matters to consumers. People are more concerned with surveillance by companies than by law enforcement agencies within the U.S. government:

"Among the respondents, 15.9% say they fear the FBI surveillance, and only 6.4% fear the NSA spying on them. People are by far most worried about information gathering by ISPs (23.2%) and Facebook (20.5%). Google spying is more of a worry for people (5.9%) than snooping by employers (2.6%) or family members (5.1%)."

Concerns with internet service providers (ISPs) are not surprising since these telecommunications companies enjoy a unique position enabling them to track all online activities by consumers. Concerns about Facebook are not surprising since it tracks both users and non-users, similar to advertising networks. The "protect against sites seeing their IP" finding suggests that consumers, or at least VPN users, want to protect themselves and their devices against advertisers, advertising networks, and privacy-busting mobile apps which track their geo-location.

Detailed survey results about content access concerns:

"... 26.7% use [a VPN service] to access their corporate or academic network, 19.9% to access content otherwise not available in their region, and 16.9% to circumvent censorship."

The survey also found that consumers generally trust their mobile devices:

"Only 30.5% of Android users are “not at all” or “not very” confident in their devices. iOS fares slightly better, with 27.4% of users expressing a lack of confidence."

The survey uncovered views about government intervention and policies:

"Net neutrality continues to be popular (70% more respondents support it rather then don’t), but 51.4% say they don’t know enough about it to form an opinion... 82.9% also believe Congress should enact laws to require tech companies to get permission before collecting personal data. Even more, 85.2% believe there should be fines for companies that lose users’ data, and 90.2% believe there should be further fines if the data is misused. Of the respondents, 47.4% believe Congress should go as far as breaking up Facebook and Google."

The survey also captured views about smart devices (e.g., doorbells, voice assistants, smart speakers) installed in many consumers' homes, since these devices are equipped with always-on cameras and/or microphones:

"... 85% of survey respondents say they are extremely (24.7%), very (23.4%), or somewhat (28.0%) concerned about smart devices monitoring their personal habits... Almost a quarter (24.8%) of survey respondents do not own any smart devices at all, while almost as many (24.4%) always turn off their devices’ microphones if they are not using them. However, one-fifth (21.2%) say they always leave the microphone on. The numbers are similar for camera use..."

There are more statistics and findings in the full survey report by ExpressVPN. I encourage everyone to read it.


Researcher Uncovers Several Browser Extensions That Track Users' Online Activity And Share Data

Many consumers prefer web browsers since websites contain full content and functionality, versus the slices of sites available in mobile apps. A researcher has found that as many as four million consumers have been affected by browser extensions (the optional add-ons for web browsers) which collected sensitive personal and financial information.

Ars Technica reported about DataSpii, the name of the online privacy issue:

"The term DataSpii was coined by Sam Jadali, the researcher who discovered—or more accurately re-discovered—the browser extension privacy issue. Jadali intended for the DataSpii name to capture the unseen collection of both internal corporate data and personally identifiable information (PII).... DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics..."

At first glance, this may not sound important, but it is. Why? First, the data collected included the most sensitive and personal information:

"Home and business surveillance videos hosted on Nest and other security services; tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive, Intuit.com, and other online services; vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers; patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services; travel itineraries hosted on Priceline, Booking.com, and airline websites; Facebook Messenger attachments..."

I'll bet you thought your Facebook Messenger stuff was truly private. Second, because even password-protected content wasn't entirely safe:

"... the published URLs wouldn’t open a page unless the person following them supplied an account password or had access to the private network that hosted the content. But even in these cases, the combination of the full URL and the corresponding page name sometimes divulged sensitive internal information. DataSpii is known to have affected 50 companies..."
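The researcher's point bears a concrete illustration: even when a URL won't open without a password, the URL string itself can divulge sensitive details. A minimal Python sketch, using entirely hypothetical URLs and domain names:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URLs of the kind a leaky extension could collect. Even
# without any password, the path and query string alone can expose PII.
collected_urls = [
    "https://ehr.example.com/patients/jane-doe/visit?doctor=dr-smith&dob=1980-04-02",
    "https://files.example.com/onedrive/acme-corp/2019-tax-return.pdf?share=abc123",
]

def extract_leaks(url: str) -> dict:
    """Pull out the path segments and query parameters embedded in a URL."""
    parts = urlparse(url)
    return {
        "path_segments": [s for s in parts.path.split("/") if s],
        "params": parse_qs(parts.query),
    }

leaks = [extract_leaks(u) for u in collected_urls]
# The first URL reveals a patient name, a doctor, and a date of birth;
# the second reveals a company name and a tax document -- no login required.
```

The sketch only shows why URL collection is itself a data leak; the actual URLs published via Nacho Analytics were, per Ars Technica, far more sensitive.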

Ars Technica also reported:

"Principals with both Nacho Analytics and the browser extensions say that any data collection is strictly "opt in." They also insist that links are anonymized and scrubbed of sensitive data before being published. Ars, however, saw numerous cases where names, locations, and other sensitive data appeared directly in URLs, in page titles, or by clicking on the links. The privacy policies for the browser extensions do give fair warning that some sort of data collection will occur..."

So, the data collection may be legal, but is it ethical -- especially if the anonymization is only partial? After the researcher's report went public, many of the suspect browser extensions were deleted from online stores. However, extensions already installed locally in users' browsers can still collect data:

"Beginning on July 3—about 24 hours after Jadali reported the data collection to Google—Fairshare Unlock, SpeakIt!, Hover Zoom, PanelMeasurement, Branded Surveys, and Panel Community Surveys were no longer available in the Chrome Web Store... While the notices say the extensions violate the Chrome Web Store policy, they make no mention of data collection nor of the publishing of data by Nacho Analytics. The toggle button in the bottom-right of the notice allows users to "force enable" the extension. Doing so causes browsing data to be collected just as it was before... In response to follow-up questions from Ars, a Google representative didn't explain why these technical changes failed to detect or prevent the data collection they were designed to stop... But removing an extension from an online marketplace doesn't necessarily stop the problems. Even after the removals of Super Zoom in February or March, Jadali said, code already installed by the Chrome and Firefox versions of the extension continued to collect visited URL information..."

Since browser developers haven't remotely disabled leaky extensions already installed, and since online stores can't seem to consistently police browser extensions for privacy compliance, the burden falls upon consumers. The Ars Technica report lists the leaky browser extensions by name.

The bottom line: browser extensions can easily compromise your online privacy and security. That means, as with any other software, wise consumers read independent online reviews first, read the developer's terms of use and privacy policy before installing a browser extension, and use a privacy-focused web browser.

Consumer Reports advises consumers to: a) install browser extensions only from companies you trust, and b) uninstall browser extensions you don't need or use. For consumers who don't know how, the Consumer Reports article also lists step-by-step instructions to uninstall browser extensions in the Google Chrome, Firefox, Safari, and Internet Explorer web browsers.


FBI Seeks To Monitor Twitter, Facebook, Instagram, And Other Social Media Accounts For Violent Threats

The U.S. Federal Bureau of Investigation (FBI) issued on July 8th a Request For Proposals (RFP) seeking quotes from technology companies to build a "Social Media Alerting" tool, which would enable the FBI to monitor accounts on several social media services in real time for violent threats. The RFP, which was amended on August 7th, stated:

"The purpose of this procurement is to acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources. A subscription to this service shall grant the Federal Bureau of Investigation (FBI) access to tools that will allow for the exploitation of lawfully collected/acquired data from social media platforms that will be stored, vetted and formatted by a vendor... This synopsis and solicitation is being issued as Request for Proposal (RFP) number DJF194750PR0000369 and... This announcement is supplemented by a detailed RFP Notice, an SF-33 document, an accompanying Statement of Objectives (SOO) and associated FBI documents..."

"Proactively identify" suggests the usage of software algorithms or artificial intelligence (AI). And, the vendor selected will archive the collected data for an undisclosed period of time. The RFP also stated:

"Background: The use of social media platforms, by terrorist groups, domestic threats, foreign intelligence services, and criminal organizations to further their illegal activity creates a demonstrated need for tools to properly identify the activity and react appropriately. With increased use of social media platforms by subjects of current FBI investigations and individuals that pose a threat to the United States, it is critical to obtain a service which will allow the FBI to identify relevant information from Twitter, Facebook, Instagram, and other Social media platforms in a timely fashion. Consequently, the FBI needs near real time access to a full range of social media exchanges..."

For context, in 2016 the FBI attempted to force Apple to build "backdoor software" to unlock an alleged terrorist's iPhone in California. The FBI later found an offshore technology company to build its backdoor.

The documents indicate that the FBI wants its staff to use the tool at both headquarters and field-office locations globally, and with mobile devices. The SOO document stated:

"FBI personnel are deployed internationally and sometimes in areas of press censorship. A social media exploitation tool with international reach and paired with a strong language translation capability, can become crucial to their operations and more importantly their safety. The functions of most value to these individuals is early notification, broad international reach, instant translation, and the mobility of the needed capability."

The SOO also explained the data elements to be collected:

"3.3.2.2.1 Obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups through the corroboration of multiple social media sources... Items of interest in this context are social networks, user IDs, emails, IP addresses and telephone numbers, along with likely additional account with similar IDs or aliases... Any connectivity between aliases and their relationship must be identifiable through active link analysis mapping..."
"3.3.3.2.1 Online media is monitored based on location, determined by the users’ delineation or the import of overlays from existing maps (neighborhood, city, county, state or country). These must allow for customization as AOR sometimes cross state or county lines..."
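The "active link analysis mapping" the SOO calls for can be illustrated with a toy sketch: treat each alias as a node and link any two aliases that share an identifier such as an email address or phone number. This is a hypothetical simplification for illustration, not any vendor's actual method:

```python
from itertools import combinations

# Hypothetical accounts: each alias maps to identifiers (email, phone).
accounts = {
    "alias_a": {"email": "x@example.com", "phone": "555-0100"},
    "alias_b": {"email": "x@example.com", "phone": "555-0199"},
    "alias_c": {"email": "y@example.com", "phone": "555-0100"},
    "alias_d": {"email": "z@example.com", "phone": "555-0142"},
}

def link_aliases(accounts: dict) -> set:
    """Return pairs of aliases that share at least one identifier."""
    links = set()
    for (a, ids_a), (b, ids_b) in combinations(accounts.items(), 2):
        if set(ids_a.values()) & set(ids_b.values()):
            links.add(tuple(sorted((a, b))))
    return links

# alias_a links to alias_b (shared email) and alias_c (shared phone);
# alias_d shares nothing and stays unconnected.
links = link_aliases(accounts)
```

Real link-analysis tools build much larger graphs over many identifier types, but the connecting principle is the same: shared identifiers imply a relationship worth mapping.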

While the document mentioned "user IDs" and didn't mention passwords, the implication seems clear that the FBI wants both in order to access and monitor social media accounts in real time. And, the "other Social Media platforms" statement raises questions. Which specific services does that refer to? Why list only the three largest platforms by name?

As this FBI project proceeds, let's hope that the full list of social sites includes 8Chan, Reddit, Stormfront, and similar others. Why? In a study released in November of 2018, the Center for Strategic and International Studies (CSIS) found:

"Right-wing extremism in the United States appears to be growing. The number of terrorist attacks by far-right perpetrators rose over the past decade, more than quadrupling between 2016 and 2017. The recent pipe bombs and the October 27, 2018, synagogue attack in Pittsburgh are symptomatic of this trend. U.S. federal and local agencies need to quickly double down to counter this threat. There has also been a rise in far-right attacks in Europe, jumping 43 percent between 2016 and 2017... Of particular concern are white supremacists and anti-government extremists, such as militia groups and so-called sovereign citizens interested in plotting attacks against government, racial, religious, and political targets in the United States... There also is a continuing threat from extremists inspired by the Islamic State and al-Qaeda. But the number of attacks from right-wing extremists since 2014 has been greater than attacks from Islamic extremists. With the rising trend in right-wing extremism, U.S. federal and local agencies need to shift some of their focus and intelligence resources to penetrating far-right networks and preventing future attacks. To be clear, the terms “right-wing extremists” and “left-wing extremists” do not correspond to political parties in the United States..."

The CSIS study also noted:

"... right-wing terrorism commonly refers to the use or threat of violence by sub-national or non-state entities whose goals may include racial, ethnic, or religious supremacy; opposition to government authority; and the end of practices like abortion... Left-wing terrorism, on the other hand, refers to the use or threat of violence by sub-national or non-state entities that oppose capitalism, imperialism, and colonialism; focus on environmental or animal rights issues; espouse pro-communist or pro-socialist beliefs; or support a decentralized sociopolitical system like anarchism."

Terrorism is terrorism. All of it needs to be prosecuted: left-wing, right-wing, domestic, and foreign. (This prosecutor is doing the right thing.) It seems wise to monitor the platforms where suspects congregate.

This project also raises questions about the effectiveness of monitoring social media. Will this really work? Digital Trends reported:

"Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and crucially, what you like to buy. Sometimes, an algorithm can get your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters... But preventing mass shootings before they happen raises thorny legal questions: how do you determine if someone is just angry online rather than someone who could actually carry out a shooting? Can you arrest someone if a computer thinks they’ll eventually become a shooter?"

Some social media users have already experienced inaccuracies (failures?) when sites present irrelevant advertisements and/or political party messaging based upon supposedly accurate software algorithms. The Digital Trends article also dug deeper:

"A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018... Twitter also frequently works to help facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if they did, that would mean flagging 166,513 people to the FBI – far more people than the agency could ever investigate."

Then, there is the problem of interpreting the content of users' social media posts:

"Even if someone does post to social media immediately before they decide to unleash violence, it’s often not something that would trip up either Twitter or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – once calling the food served there “overprices” and a second that told people to read a 19th-century pro-fascist book that’s popular with white nationalists."

Also, Amazon got caught up in the hosting mess with 8Chan. So, there is more news to come.

Last, this blog post explored the problems with emotion recognition by facial-recognition software. Let's hope this FBI project is not a waste of taxpayers' hard-earned money.


Emotion Recognition: Facial Recognition Software Based Upon Valid Science or Malarkey?

The American Civil Liberties Union (ACLU) reported:

"Emotion recognition is a hot new area, with numerous companies peddling products that claim to be able to read people’s internal emotional states, and artificial intelligence (A.I.) researchers looking to improve computers’ ability to do so. This is done through voice analysis, body language analysis, gait analysis, eye tracking, and remote measurement of physiological signs like pulse and breathing rates. Most of all, though, it’s done through analysis of facial expressions.

A new study, however, strongly suggests that these products are built on a bed of intellectual quicksand... after reviewing over 1,000 scientific papers in the psychological literature, these experts came to a unanimous conclusion: there is no scientific support for the common assumption “that a person’s emotional state can be readily inferred from his or her facial movements.” The scientists conclude that there are three specific misunderstandings “about how emotions are expressed and perceived in facial movements.” The link between facial expressions and emotions is not reliable (i.e., the same emotions are not always expressed in the same way), specific (the same facial expressions do not reliably indicate the same emotions), or generalizable (the effects of different cultures and contexts has not been sufficiently documented)."

Another reason why this is important:

"... an entire industry of automated purported emotion-reading technologies is quickly emerging. As we wrote in our recent paper on “Robot Surveillance,” the market for emotion recognition software is forecast to reach at least $3.8 billion by 2025. Emotion recognition (aka “affect recognition” or “affective computing”) is already being incorporated into products for purposes such as marketing, robotics, driver safety, and audio “aggression detectors.”

Regular readers of this blog are familiar with aggression detectors and the variety of industries where the technology is already deployed. And, one police body-cam maker says it won't deploy facial recognition in its products due to problems with the technology.

Yes, reliability matters -- especially when used for surveillance purposes. Nobody wants law enforcement making decisions about persons based upon software built using unreliable or fake science masquerading as reliable, valid science. Nobody wants education and school officials making decisions about students using unreliable software. Nobody wants hospital administrators and physicians making decisions about patients based upon unreliable software.

What are your opinions?


Tech Expert Concluded Google Chrome Browser Operates A Lot Like Spy Software

Many consumers still use web browsers. Which ones are better for your online privacy? You may be interested in this analysis by a tech expert:

"... I've been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads... My tests of Chrome vs. Firefox [browsers] unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker "cookies" that Chrome would have ushered right onto my computer but were automatically blocked by Firefox... Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service's log-in pages."

"And that's not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you're logged in to the browser, and Google might be tapping into your Web activity to target ads. Don't recall signing in? I didn't, either. Chrome recently started doing that automatically when you use Gmail... I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn't cause anybody's browsing history to be "synced" unless they specifically opted in — but I found mine was being sent to Google and don't recall ever asking for extra surveillance..."
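The behavior Fowler observed, a page on one domain setting cookies for Facebook and Google, is the classic third-party tracker request. At its simplest, a tracker blocker classifies a request as third-party by comparing registrable domains. A rough sketch with hypothetical hostnames (real blockers use the Public Suffix List and curated tracker lists, not this two-label shortcut):

```python
def registrable_domain(host: str) -> str:
    # Simplification: take the last two DNS labels. Real blockers consult
    # the Public Suffix List to handle suffixes like .co.uk correctly.
    return ".".join(host.split(".")[-2:])

def is_third_party(page_host: str, request_host: str) -> bool:
    """A request is third-party when it goes to a different registrable domain."""
    return registrable_domain(page_host) != registrable_domain(request_host)

# Hypothetical requests observed while loading a login page.
page = "studentaid.example.gov"
requests = ["studentaid.example.gov", "fonts.example.gov",
            "www.facebook.com", "ads.doubleclick.net"]
third_party = [r for r in requests if is_third_party(page, r)]
```

Under this (admittedly crude) test, the Facebook and DoubleClick requests are flagged while same-site requests pass, which is roughly the decision Firefox's enhanced tracking protection makes by default and Chrome, at the time of Fowler's test, did not.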

Also:

"Google's product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they're working on new ones for cookies. But they also said they have to get the right balance with a "healthy Web ecosystem" (read: ad business). Firefox's product managers told me they don't see privacy as an "option" relegated to controls. They've launched a war on surveillance, starting last month with "enhanced tracking protection" that blocks nosy cookies by default on new Firefox installations..."

This tech expert concluded:

"It turns out, having the world's biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop. It made me decide to ditch Chrome for a new version of nonprofit Mozilla's Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine."

Regular readers of this blog are aware of how Google tracks consumers' online purchases, the worst mobile apps for privacy, and privacy alternatives such as the Brave web browser, the DuckDuckGo search engine, virtual private network (VPN) software, and more. Yes, you can use the Firefox browser on your Apple iPhone. I do.

Me? I've used the Firefox browser since about 2010 on my (Windows) laptop, and the DuckDuckGo search engine since 2013. I stopped using Bing, Yahoo, and Google search engines in 2013. While Firefox installs with Google as the default search engine, you can easily switch it to DuckDuckGo. I did. I am very happy with the results.

Which web browser and search engine do you use? What do you do to protect your online privacy?


Aggression Detectors: What They Are, Who Uses Them, And Why

Like most people, you probably have not heard of "aggression detectors." What are these devices? Who makes them? Who uses these devices and why? Which consumers are affected?

To answer these questions, ProPublica explained who makes the devices and why:

"In response to mass shootings, some schools and hospitals are installing microphones equipped with algorithms. The devices purport to identify stress and anger before violence erupts... By deploying surveillance technology in public spaces like hallways and cafeterias, device makers and school officials hope to anticipate and prevent everything from mass shootings to underage smoking... Besides Sound Intelligence, South Korea-based Hanwha Techwin, formerly part of Samsung, makes a similar “scream detection” product that’s been installed in American schools. U.K.-based Audio Analytic used to sell its aggression- and gunshot-detection software to customers in Europe and the United States... Sound Intelligence CEO Derek van der Vorst said security cameras made by Sweden-based Axis Communications account for 90% of the detector’s worldwide sales, with privately held Louroe making up the other 10%... Mounted inconspicuously on the ceiling, Louroe’s smoke-detector-sized microphones measure aggression on a scale from zero to one. Users choose threshold settings. Any time they’re exceeded for long enough, the detector alerts the facility’s security apparatus, either through an existing surveillance system or a text message pinpointing the microphone that picked up the sound..."
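The Louroe description, a score from zero to one, a user-chosen threshold, and an alert when the threshold is exceeded "for long enough," maps onto very simple logic. A hypothetical sketch (the threshold and duration values are illustrative, not the vendor's):

```python
def should_alert(scores, threshold=0.7, min_consecutive=3):
    """Alert when the aggression score stays above the threshold for
    min_consecutive samples in a row (hypothetical parameters)."""
    run = 0
    for score in scores:
        run = run + 1 if score > threshold else 0
        if run >= min_consecutive:
            return True
    return False

# A brief spike (a cough, say) resets the run and is ignored;
# a sustained run of high scores trips the alert.
spike = should_alert([0.2, 0.9, 0.3, 0.1])
sustained = should_alert([0.4, 0.8, 0.85, 0.9, 0.5])
```

The sketch also shows where the accuracy problem lives: everything depends on whether the score itself distinguishes aggression from, per ProPublica's testing, a student's cough or an abrasive-sounding comedian.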

The microphone-equipped sensors have been installed in a variety of industries. The Sound Intelligence website listed prisons, schools, public transportation, banks, healthcare institutes, retail stores, public spaces, and more. Louroe Electronics' site included a similar list plus law enforcement.

The ProPublica article also discussed several key issues. First, sensor accuracy and its own tests:

"... ProPublica’s analysis, as well as the experiences of some U.S. schools and hospitals that have used Sound Intelligence’s aggression detector, suggest that it can be less than reliable. At the heart of the device is what the company calls a machine learning algorithm. Our research found that it tends to equate aggression with rough, strained noises in a relatively high pitch, like [a student's] coughing. A 1994 YouTube clip of abrasive-sounding comedian Gilbert Gottfried ("Is it hot in here or am I crazy?") set off the detector, which analyzes sound but doesn’t take words or meaning into account... Sound Intelligence and Louroe said they prefer whenever possible to fine-tune sensors at each new customer’s location over a period of days or weeks..."

Second, accuracy concerns:

"[Sound Intelligence CEO] Van der Vorst acknowledged that the detector is imperfect and confirmed our finding that it registers rougher tones as aggressive. He said he “guarantees 100%” that the system will at times misconstrue innocent behavior. But he’s more concerned about failing to catch indicators of violence, and he said the system gives schools and other facilities a much-needed early warning system..."

This is interesting and troubling. Sound Intelligence's position seems to suggest that it is okay for the sensor to misidentify innocent persons as aggressive in order to avoid failing to identify truly aggressive persons seeking to do harm. That sounds like the old saying: the ends justify the means. Not good. The harms against innocent persons matter, especially when they are young students.

Yesterday's blog post described a far better corporate approach. Based upon current inaccuracies and biases with the technology, a police body camera maker assembled an ethics board to help guide its decisions regarding the technology, and then followed that board's recommendation not to implement facial recognition in its devices. Only when the inaccuracies and biases are resolved would it implement facial recognition.

What ethics boards have Sound Intelligence, Louroe, and other aggression detector makers utilized?

Third, the use of aggression detectors raises the issue of notice. Are there physical postings on-site at schools, hospitals, healthcare facilities, and other locations? Notice seems appropriate, especially since almost all entities provide notice (e.g., terms of service, privacy policy) for visitors to their websites.

Fourth, privacy concerns:

"Although a Louroe spokesman said the detector doesn’t intrude on student privacy because it only captures sound patterns deemed aggressive, its microphones allow administrators to record, replay and store those snippets of conversation indefinitely..."

I encourage parents of school-age children to read the entire ProPublica article. Concerned parents may demand explanations by school officials about the surveillance activities and devices used within their children's schools. Teachers may also be concerned. Patients at healthcare facilities may also be concerned.

Concerned persons may seek answers to several issues:

  • The vendor selection process, which aggression detector devices were selected, and why
  • Evidence supporting the accuracy of aggression detectors used
  • The school's/hospital's policy, if it has one, covering surveillance devices; plus any posted notices
  • The treatment and rights of wrongly identified persons (e.g., students, patients, visitors, staff) by aggression detector devices
  • Approaches by the vendor and school to improve device accuracy for both types of errors: a) wrongly identified persons, and b) failures to identify truly aggressive or threatening persons
  • How long the school and/or vendor archive recorded conversations
  • What persons have access to the archived recordings
  • The data security methods used by the school and by the vendor to prevent unauthorized access and abuse of archived recordings
  • All entities, by name, which the school and/or vendor share archived recordings with

What are your opinions of aggression detectors? Of device inaccuracy? Of the privacy concerns?


Police Body Cam Maker Says It Won't Use Facial Recognition Due To Problems With The Technology

We've all heard of the following three technologies: police body cameras, artificial intelligence, and facial recognition software. Across the nation, some police departments use body cameras.

Do the three technologies go together -- work well together? The Washington Post reported:

"Axon, the country’s biggest seller of police body cameras, announced that it accepts the recommendation of an ethics board and will not use facial recognition in its devices... the company convened the independent board last year to assess the possible consequences and ethical costs of artificial intelligence and facial-recognition software. The board’s first report, published June 27, concluded that “face recognition technology is not currently reliable enough to ethically justify its use” — guidance that Axon plans to follow."

So, a major U.S. corporation assembled an ethics board to guide its activities. Good. That's not something you read about often. Then, the same corporation followed that board's advice. Even better.

Why reject using facial recognition with body cameras? Axon explained in a statement:

"Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board's recommendation, Axon will not be commercializing face matching products on our body cameras at this time. We do believe face matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms as the board recommends. Our AI team will continue to evaluate the state of face recognition technologies and will keep the board informed about our research..."

Two types of inaccuracies occur with facial recognition software: i) persons falsely identified (a/k/a "false positives"); and ii) persons who should have been identified but were not (a/k/a "false negatives"). The ethics board's report provided detailed explanations:

"The truth is that current technology does not perform as well on people of color compared to whites, on women compared to men, or young people compared to older people, to name a few disparities. These disparities exist in both directions — a greater false positive rate and false negative rate."
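The distinction the board draws can be made concrete with a short Python sketch. The counts below are invented for illustration only, not data from the report; the sketch simply shows how the same software can produce different false positive and false negative rates for different demographic groups:

```python
# Illustrative only: hypothetical match results for two demographic groups.
# Counts are invented for this sketch, not taken from the board's report.

def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate) from confusion counts."""
    fpr = fp / (fp + tn)   # innocent people wrongly "matched"
    fnr = fn / (fn + tp)   # true matches the software missed
    return fpr, fnr

groups = {
    # group: (true positives, false positives, true negatives, false negatives)
    "group_a": (90, 5, 95, 10),
    "group_b": (80, 15, 85, 20),
}

for name, counts in groups.items():
    fpr, fnr = error_rates(*counts)
    print(f"{name}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```

In this made-up example, group_b suffers both more wrongful matches and more missed matches than group_a, which is exactly the "disparities in both directions" the board describes.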

The ethics board's report also explained the problem of bias:

"One cause of these biases is statistically unrepresentative training data — the face images that engineers use to “train” the face recognition algorithm. These images are unrepresentative for a variety of reasons but in part because of decisions that have been made for decades that have prioritized certain groups at the cost of others. These disparities make real-world face recognition deployment a complete nonstarter for the Board. Until we have something approaching parity, this technology should remain on the shelf. Policing today already exhibits all manner of disparities (particularly racial). In this undeniable context, adding a tool that will exacerbate this disparity would be unacceptable..."

So, well-meaning software engineers can create bias in their algorithms by using sets of training images that are not representative of the population. The ethics board's 42-page report, titled "First Report Of The Axon A.I. & Policing Technology Ethics Board" (Adobe PDF; 3.1 Megabytes), listed six general conclusions:

"1: Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras. At the least, face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups. Whether face recognition on body-worn cameras can ever be ethically justifiable is an issue the Board has begun to discuss in the context of the use cases outlined in Part IV.A, and will take up again if and when these prerequisites are met."

"2: When assessing face recognition algorithms, rather than talking about “accuracy,” we prefer to discuss false positive and false negative rates. Our tolerance for one or the other will depend on the use case."

"3: The Board is unwilling to endorse the development of face recognition technology of any sort that can be completely customized by the user. It strongly prefers a model in which the technologies that are made available are limited in what functions they can perform, so as to prevent misuse by law enforcement."

"4: No jurisdiction should adopt face recognition technology without going through open, transparent, democratic processes, with adequate opportunity for genuinely representative public analysis, input, and objection."

"5: Development of face recognition products should be premised on evidence-based benefits. Unless and until those benefits are clear, there is no need to discuss costs or adoption of any particular product."

"6: When assessing the costs and benefits of potential use cases, one must take into account both the realities of policing in America (and in other jurisdictions) and existing technological limitations."

The board included persons with legal, technology, law enforcement, and civil rights backgrounds; plus members from the affected communities. Axon management listened to the report's conclusions and is following the board's recommendations (emphasis added):

"Respond publicly to this report, including to the Board’s conclusions and recommendations regarding face recognition technology. Commit, based on the concerns raised by the Board, not to proceed with the development of face matching products, including adding such capabilities to body-worn cameras or to Axon Evidence (Evidence.com)... Invest company resources to work, in a transparent manner and in tandem with leading independent researchers, to ensure training data are statistically representative of the appropriate populations and that algorithms work equally well across different populations. Continue to comply with the Board’s Operating Principles, including by involving the Board in the earliest possible stages of new or anticipated products. Work with the Board to produce products and services designed to improve policing transparency and democratic accountability, including by developing products in ways that assure audit trails or that collect information that agencies can release to the public about their use of Axon products..."

Admirable. Encouraging. The Washington Post reported:

"San Francisco in May became the first U.S. city to ban city police and agencies from using facial-recognition software... Somerville, Massachusetts became the second, with other cities, including Berkeley and Oakland, Calif., considering similar measures..."

Clearly, this topic bears monitoring. Consumers and government officials are concerned about accuracy and bias. So, too, are some corporations.

And more news seems likely. Will other technology companies and local governments use similar A.I. ethics boards? Will schools, healthcare facilities, and other customers of surveillance devices demand evidence that products are accurate and free of bias?


Digital Jail: How Electronic Monitoring Drives Defendants Into Debt

[Editor's note: today's guest post, by reporters at ProPublica, discusses the convergence of law enforcement, outsourcing, smart devices, surveillance, "offender funded" programs, and "e-gentrification." It is reprinted with permission.]

By Ava Kofman, ProPublica

On Oct. 12, 2018, Daehaun White walked free, or so he thought. A guard handed him shoelaces and the $19 that had been in his pocket at the time of his booking, along with a letter from his public defender. The lanky 19-year-old had been sitting for almost a month in St. Louis’ Medium Security Institution, a city jail known as the Workhouse, after being pulled over for driving some friends around in a stolen Chevy Cavalier. When the police charged him with tampering with a motor vehicle — driving a car without its owner’s consent — and held him overnight, he assumed he would be released by morning. He told the police that he hadn’t known that the Chevy, which a friend had lent him a few hours earlier, was stolen. He had no previous convictions. But the $1,500 he needed for the bond was far beyond what he or his family could afford. It wasn’t until his public defender, Erika Wurst, persuaded the judge to lower the amount to $500 cash, and a nonprofit fund, the Bail Project, paid it for him, that he was able to leave the notoriously grim jail. “Once they said I was getting released, I was so excited I stopped listening,” he told me recently. He would no longer have to drink water blackened with mold or share a cell with rats, mice and cockroaches. He did a round of victory pushups and gave away all of the snack cakes he had been saving from the cafeteria.

When he finally read Wurst’s letter, however, he realized there was a catch. Even though Wurst had argued against it, the judge, Nicole Colbert-Botchway, had ordered him to wear an ankle monitor that would track his location at every moment using GPS. For as long as he would wear it, he would be required to pay $10 a day to a private company, Eastern Missouri Alternative Sentencing Services, or EMASS. Just to get the monitor attached, he would have to report to EMASS and pay $300 up front — enough to cover the first 25 days, plus a $50 installation fee.

White didn’t know how to find that kind of money. Before his arrest, he was earning minimum wage as a temp, wrapping up boxes of shampoo. His father was largely absent, and his mother, Lakisha Thompson, had recently lost her job as the housekeeping manager at a Holiday Inn. Raising Daehaun and his four siblings, she had struggled to keep up with the bills. The family bounced between houses and apartments in northern St. Louis County, where, as a result of Jim Crow redlining, most of the area’s black population lives. In 2014, they were living on Canfield Drive in Ferguson when Michael Brown was shot and killed there by a police officer. During the ensuing turmoil, Thompson moved the family to Green Bay, Wisconsin. White felt out of place. He was looked down on for his sagging pants, called the N-word when riding his bike. After six months, he moved back to St. Louis County on his own to live with three of his siblings and stepsiblings in a gray house with vinyl siding.

When White got home on the night of his release, he was so overwhelmed to see his family again that he forgot about the letter. He spent the next few days hanging out with his siblings, his mother, who had returned to Missouri earlier that year, and his girlfriend, Demetria, who was seven months pregnant. He didn’t report to EMASS.

What he didn’t realize was that he had failed to meet a deadline. Typically, defendants assigned to monitors must pay EMASS in person and have the device installed within 24 hours of their release from jail. Otherwise, they have to return to court to explain why they’ve violated the judge’s orders. White, however, wasn’t called back for a hearing. Instead, a week after he left the Workhouse, Colbert-Botchway issued a warrant for his arrest.

Three days later, a large group of police officers knocked on Thompson’s door, looking for information about an unrelated case, a robbery. White and his brother had been making dinner with their mother, and the officers asked them for identification. White’s name matched the warrant issued by Colbert-Botchway. “They didn’t tell me what the warrant was for,” he said. “Just that it was for a violation of my release.” He was taken downtown and held for transfer back to the Workhouse. “I kept saying to myself, ’Why am I locked up?’” he recalled.

The next morning, Thompson called the courthouse to find the answer. She learned that her son had been jailed over his failure to acquire and pay for his GPS monitor. To get him out, she needed to pay EMASS on his behalf.

This seemed absurd to her. When Daehaun was 13, she had worn an ankle monitor after violating probation for a minor theft, but the state hadn’t required her to cover the cost of her own supervision. “This is a 19-year-old coming out of the Workhouse,” she told me recently. “There’s no way he has $300 saved.” Thompson felt that the court was forcing her to choose between getting White out of jail and supporting the rest of her family.

Over the past half-century, the number of people behind bars in the United States jumped by more than 500%, to 2.2 million. This extraordinary rise, often attributed to decades of “tough on crime” policies and harsh sentencing laws, has ensured that even as crime rates have dropped since the 1990s, the number of people locked up and the average length of their stay have increased. According to the Bureau of Justice Statistics, the cost of keeping people in jails and prisons soared to $87 billion in 2015 from $19 billion in 1980, in current dollars.

In recent years, politicians on both sides of the aisle have joined criminal-justice reformers in recognizing mass incarceration as both a moral outrage and a fiscal sinkhole. As ankle bracelets have become compact and cost-effective, legislators have embraced them as an enlightened alternative. More than 125,000 people in the criminal-justice system were supervised with monitors in 2015, compared with just 53,000 people in 2005, according to the Pew Charitable Trusts. Although no current national tally is available, data from several cities — Austin, Texas; Indianapolis; Chicago; and San Francisco — show that this number continues to rise. Last December, the First Step Act, which includes provisions for home detention, was signed into law by President Donald Trump with support from the private prison giants GEO Group and CoreCivic. These corporations dominate the so-called community-corrections market — services such as day-reporting and electronic monitoring — that represents one of the fastest-growing revenue sectors of their industry.

By far the most decisive factor promoting the expansion of monitors is the financial one. The United States government pays for monitors for some of those in the federal criminal-justice system and for tens of thousands of immigrants supervised by Immigration and Customs Enforcement. But states and cities, which incur around 90% of the expenditures for jails and prisons, are increasingly passing the financial burden of the devices onto those who wear them. It costs St. Louis roughly $90 a day to detain a person awaiting trial in the Workhouse, where in 2017 the average stay was 291 days. When individuals pay EMASS $10 a day for their own supervision, it costs the city nothing. A 2014 study by NPR and the Brennan Center found that, with the exception of Hawaii, every state required people to pay at least part of the costs associated with GPS monitoring. Some probation offices and sheriffs run their own monitoring programs — renting the equipment from manufacturers, hiring staff and collecting fees directly from participants. Others have outsourced the supervision of defendants, parolees and probationers to private companies.

“There are a lot of judges who reflexively put people on monitors, without making much of a pretense of seriously weighing it at all,” said Chris Albin-Lackey, a senior legal adviser with Human Rights Watch who has researched private-supervision companies. “The limiting factor is the cost it might impose on the public, but when that expense is sourced out, even that minimal brake on judicial discretion goes out the window.”

Nowhere is the pressure to adopt monitors more pronounced than in places like St. Louis: cash-strapped municipalities with large populations of people awaiting trial. Nationwide on any given day, half a million people sit in crowded and expensive jails because, like Daehaun White, they cannot purchase their freedom.

As the movement to overhaul cash bail has challenged the constitutionality of jailing these defendants, judges and sheriffs have turned to monitors as an appealing substitute. In San Francisco, the number of people released from jail onto electronic monitors tripled after a 2018 ruling forced courts to release more defendants without bail. In Marion County, Indiana, where jail overcrowding is routine, roughly 5,000 defendants were put on monitors last year. “You would be hard-pressed to find bail-reform legislation in any state that does not include the possibility of electronic monitoring,” said Robin Steinberg, the chief executive of the Bail Project.

Yet like the system of wealth-based detention they are meant to help reform, ankle monitors often place poor people in special jeopardy. Across the country, defendants who have not been convicted of a crime are put on “offender funded” payment plans for monitors that sometimes cost more than their bail. And unlike bail, they don’t get the payment back, even if they’re found innocent. Although a federal survey shows that nearly 40% of Americans would have trouble finding $400 to cover an emergency, companies and courts routinely threaten to lock up defendants if they fall behind on payment. In Greenville, South Carolina, pretrial defendants can be sent back to jail when they fall three weeks behind on fees. (An officer for the Greenville County Detention Center defended this practice on the grounds that participants agree to the costs in advance.) In Mohave County, Arizona, pretrial defendants charged with sex offenses have faced rearrest if they fail to pay for their monitors, even if they prove that they can’t afford them. “We risk replacing an unjust cash-bail system,” Steinberg said, “with one just as unfair, inhumane and unnecessary.”

Many local judges, including in St. Louis, do not conduct hearings on a defendant’s ability to pay for private supervision before assigning them to it; those who do often overestimate poor people’s financial means. Without judicial oversight, defendants are vulnerable to private-supervision companies that set their own rates and charge interest when someone can’t pay up front. Some companies even give their employees bonuses for hitting collection targets.

It’s not only debt that can send defendants back to jail. People who may not otherwise be candidates for incarceration can be punished for breaking the lifestyle rules that come with the devices. A survey in California found that juveniles awaiting trial or on probation face especially difficult rules; in one county, juveniles on monitors were asked to follow more than 50 restrictions, including not participating “in any social activity.” For this reason, many advocates describe electronic monitoring as a “net-widener”: Far from serving as an alternative to incarceration, it ends up sweeping more people into the system.

Dressed in a baggy yellow City of St. Louis Corrections shirt, White was walking to the van that would take him back to the Workhouse after his rearrest, when a guard called his name and handed him a bus ticket home. A few hours earlier, his mom had persuaded her sister to lend her the $300 that White owed EMASS. Wurst, his public defender, brought the receipt to court.

The next afternoon, White hitched a ride downtown to the EMASS office, where one of the company’s bond-compliance officers, Nick Buss, clipped a black box around his left ankle. Based in the majority white city of St. Charles, west of St. Louis, EMASS has several field offices throughout eastern Missouri. A former probation and parole officer, Michael Smith, founded the company in 1991 after Missouri became one of the first states to allow private companies to supervise some probationers. (Smith and other EMASS officials declined to comment for this story.)

The St. Louis area has made national headlines for its “offender funded” model of policing and punishment. Stricken by postindustrial decline and the 2008 financial crisis, its municipalities turned to their police departments and courts to make up for shortfalls in revenue. In 2015, the Ferguson Report by the United States Department of Justice put hard numbers to what black residents had long suspected: The police were targeting them with disproportionate arrests, traffic tickets and excessive fines.

EMASS may have saved the city some money, but it also created an extraordinary and arbitrary-seeming new expense for poor defendants. When cities cover the cost of monitoring, they often pay private contractors $2 to $3 a day for the same equipment and services for which EMASS charges defendants $10 a day. To come up with the money, EMASS clients told me, they had to find second jobs, take their children out of day care and cut into disability checks. Others hurried to plead guilty for no better reason than that being on probation was cheaper than paying for a monitor.

At the downtown office, White signed a contract stating that he would charge his monitor for an hour and a half each day and “report” to EMASS with $70 each week. He could shower, but was not to bathe or swim (the monitor is water-resistant, not waterproof). Interfering with the monitor’s functioning was a felony.

White assumed that GPS supervision would prove a minor annoyance. Instead, it was a constant burden. The box was bulky and the size of a fist, so he couldn’t hide it under his jeans. Whenever he left the house, people stared. There were snide comments ("nice bracelet") and cutting jokes. His brothers teased him about having a babysitter. “I’m nobody to watch,” he insisted.

The biggest problem was finding work. Confident and outgoing, White had never struggled to land jobs; after dropping out of high school in his junior year, he flipped burgers at McDonald’s and Steak ’n Shake. To pay for the monitor, he applied to be a custodian at Julia Davis Library, a cashier at Home Depot, a clerk at Menards. The conversation at Home Depot had gone especially well, White thought, until the interviewer casually asked what was on his leg.

To help improve his chances, he enrolled in Mission: St. Louis, a job-training center for people reentering society. One afternoon in January, he and a classmate role-played how to talk to potential employers about criminal charges. White didn’t know how much detail to go into. Should he tell interviewers that he was bringing his pregnant girlfriend some snacks when he was pulled over? He still isn’t sure, because a police officer came looking for him midway through the class. The battery on his monitor had died. The officer sent him home, and White missed the rest of the lesson.

With all of the restrictions and rules, keeping a job on a monitor can be as difficult as finding one. The hours for weekly check-ins at the downtown EMASS office — 1 p.m. to 6 p.m. on Tuesdays and Wednesdays, and 1 p.m. until 5 p.m. on Mondays — are inconvenient for those who work. In 2011, the National Institute of Justice surveyed 5,000 people on electronic monitors and found that 22% said they had been fired or asked to leave a job because of the device. Juawanna Caves, a young St. Louis native and mother of two, was placed on a monitor in December after being charged with unlawful use of a weapon. She said she stopped showing up to work as a housekeeper when her co-workers made her uncomfortable by asking questions and later lost a job at a nursing home because too many exceptions had to be made for her court dates and EMASS check-ins.

Perpetual surveillance also takes a mental toll. Nearly everyone I spoke to who wore a monitor described feeling trapped, as though they were serving a sentence before they had even gone to trial. White was never really sure about what he could or couldn’t do under supervision. In January, when his girlfriend had their daughter, Rylan, White left the hospital shortly after the birth, under the impression that he had a midnight curfew. Later that night, he let his monitor die so that he could sneak back before sunrise to see the baby again.

EMASS makes its money from defendants. But it gets its power over them from judges. It was in 2012 that the judges of the St. Louis court started to use the company’s services — which previously involved people on probation for misdemeanors — for defendants awaiting trial. Last year, the company supervised 239 defendants in the city of St. Louis on GPS monitors, according to numbers provided by EMASS to the court. The alliance with the courts gives the company not just a steady stream of business but a reliable means of recouping debts: Unlike, say, a credit-card company, which must file a civil suit to collect from overdue customers, EMASS can initiate criminal-court proceedings, threatening defendants with another stay in the Workhouse.

In early April, I visited Judge Rex Burlison in his chambers on the 10th floor of the St. Louis civil courts building. A few months earlier, Burlison, who has short gray hair and light blue eyes, had been elected by his peers as presiding judge, overseeing the city’s docket, budget and operations, including the contract with EMASS. It was one of the first warm days of the year, and from the office window I could see sunlight glimmering on the silver Gateway Arch.

I asked Burlison about the court’s philosophy for using pretrial GPS. He stressed that while each case was unique and subject to the judge’s discretion, monitoring was most commonly used for defendants who posed a flight risk, endangered public safety or had an alleged victim. Judges vary in how often they order defendants to wear monitors, and critics have attacked the inconsistency. Colbert-Botchway, the judge who put White on a monitor, regularly made pretrial GPS a condition of release, according to public defenders. (Colbert-Botchway declined to comment.) But another St. Louis city judge, David Roither, told me, “I really don’t use it very often because people here are too poor to pay for it.”

Whenever a defendant on a monitor violates a condition of release, whether related to payment or a curfew or something else, EMASS sends a letter to the court. Last year, Burlison said, the court received two to three letters a week from EMASS about violations. In response, the judge usually calls the defendant in for a hearing. As far as he knew, Burlison said, judges did not incarcerate people simply for failing to pay EMASS debts. “Why would you?” he asked me. When people were put back in jail, he said, there were always other factors at play, like the defendant’s missing a hearing, for instance. (Issuing a warrant for White’s arrest without a hearing, he acknowledged after looking at the docket, was not the court’s standard practice.)

The contract with EMASS allows the court to assign indigent defendants to the company to oversee “at no cost.” Yet neither Burlison nor any of the other current or former judges I spoke with recalled waiving fees when ordering someone to wear an ankle monitor. When I asked Burlison why he didn’t, he said that he was concerned that if he started to make exceptions on the basis of income, the company might stop providing ankle-monitoring services in St. Louis.

“People get arrested because of life choices,” Burlison said. “Whether they’re good for the charge or not, they’re still arrested and have to deal with it, and part of dealing with it is the finances.” To release defendants without monitors simply because they can’t afford the fee, he said, would be to disregard the safety of their victims or the community. “We can’t just release everybody because they’re poor,” he continued.

But many people in the Workhouse awaiting trial are poor. In January, civil rights groups filed suit against the city and the court, claiming that the St. Louis bail system violated the Constitution, in part by discriminating against those who can’t afford to post bail. That same month, the Missouri Supreme Court announced new rules that urged local courts to consider releasing defendants without monetary conditions and to waive fees for poor people placed on monitors. Shortly before the rules went into effect, on July 1, Burlison said that the city intends to shift the way ankle monitors are distributed and plans to establish a fund to help indigent defendants pay for their ankle bracelets. But he said he didn’t know how much money would be in the fund or whether it was temporary or permanent. The need for funding could grow quickly. The pending bail lawsuit has temporarily spurred the release of more defendants from custody, and as a result, public defenders say, the demand for monitors has increased.

Judges are anxious about what people released without posting bail might do once they get out. Several told me that monitors may ensure that the defendants return to court. Not unlike doctors who order a battery of tests for a mildly ill patient to avoid a potential malpractice suit, judges seem to view monitors as a precaution against their faces appearing on the front page of the newspaper. “Every judge’s fear is to let somebody out on recognizance and he commits murder, and then everyone asks, ’How in the hell was this person let out?’” said Robert Dierker, who served as a judge in St. Louis from 1986 to 2017 and now represents the city in the bail lawsuit. “But with GPS, you can say, ’Well, I have him on GPS, what else can I do?’”

Critics of monitors contend that their public-safety appeal is illusory: If defendants are intent on harming someone or skipping town, the bracelet, which can be easily removed with a pair of scissors, would not stop them. Studies showing that people tracked by GPS appear in court more reliably are scarce, and research about its effectiveness as a deterrent is inconclusive.

“The fundamental question is, What purpose is electronic monitoring serving?” said Blake Strode, the executive director of ArchCity Defenders, a nonprofit civil rights law firm in St. Louis that is one of several firms representing the plaintiffs in the bail lawsuit. “If the only purpose it’s serving is to make judges feel better because they don’t want to be on the hook if something goes wrong, then that’s not a sensible approach. We should not simply be monitoring for monitoring’s sake.”

Electronic monitoring was first conceived in the early 1960s by Ralph and Robert Gable, identical twins studying at Harvard under the psychologists Timothy Leary and B.F. Skinner, respectively. Influenced in part by Skinner’s theories of positive reinforcement, the Gables rigged up some surplus missile-tracking equipment to monitor teenagers on probation; those who showed up at the right places at the right times were rewarded with movie tickets, limo rides and other prizes.

Although this round-the-clock monitoring was intended as a tool for rehabilitation, observers and participants alike soon recognized its potential to enhance surveillance. All but two of the 16 volunteers in their initial study dropped out, finding the two bulky radio transmitters oppressive. “They felt like it was a prosthetic conscience, and who would want Mother all the time along with you?” Robert Gable told me. Psychology Today labeled the invention a “belt from Big Brother.”

The reality of electronic monitoring today is that Big Brother is watching some groups more than others. No national statistics are available on the racial breakdown of Americans wearing ankle monitors, but all indications suggest that mass supervision, like mass incarceration, disproportionately affects black people. In Cook County, Illinois, for instance, black people make up 24% of the population, and 67% of those on monitors. The sociologist Simone Browne has connected contemporary surveillance technologies like GPS monitors to America’s long history of controlling where black people live, move and work. In her 2015 book, “Dark Matters,” she traces the ways in which “surveillance is nothing new to black folks,” from the branding of enslaved people and the shackling of convict laborers to Jim Crow segregation and the home visits of welfare agencies. These historical inequities, Browne notes, influence where and on whom new tools like ankle monitors are imposed.

For some black families, including White’s, monitoring stretches across generations. Annette Taylor, the director of Ripple Effect, an advocacy group for prisoners and their families based in Champaign, Illinois, has seen her ex-husband, brother, son, nephew and sister’s husband wear ankle monitors over the years. She had to wear one herself, about a decade ago, she said, for driving with a suspended license. “You’re making people a prisoner of their home,” she told me. When her son was paroled and placed on house arrest, he couldn’t live with her, because he was forbidden to associate with people convicted of felonies, including his stepfather, who was also on house arrest.

Some people on monitors are further constrained by geographic restrictions — areas in the city or neighborhood that they can’t go without triggering an alarm. James Kilgore, a research scholar at the University of Illinois at Champaign-Urbana, has cautioned that these exclusionary zones could lead to “e-gentrification,” effectively keeping people out of more-prosperous neighborhoods. In 2016, after serving four years in prison for drug conspiracy, Bryan Otero wore a monitor as a condition of parole. He commuted from the Bronx to jobs at a restaurant and a department store in Manhattan, but he couldn’t visit his family or doctor because he was forbidden to enter a swath of Manhattan between 117th Street and 131st Street. “All my family and childhood friends live in that area,” he said. “I grew up there.”

Michelle Alexander, a legal scholar and columnist for The Times, has argued that monitoring engenders a new form of oppression under the guise of progress. In her 2010 book, “The New Jim Crow,” she wrote that the term “mass incarceration” should refer to the “system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls — walls that are invisible to the naked eye but function nearly as effectively as Jim Crow laws once did at locking people of color into a permanent second-class citizenship.”

As the cost of monitoring continues to fall, those who are required to submit to it may worry less about the expense and more about the intrusive surveillance. The devices, some of which are equipped with two-way microphones, can give corrections officials unprecedented access to the private lives not just of those monitored but also of their families and friends. GPS location data appeals to the police, who can use it to investigate crimes. Already the goal is both to track what individuals are doing and to anticipate what they might do next. BI Incorporated, an electronic-monitoring subsidiary of GEO Group, has the ability to assign risk scores to the behavioral patterns of those monitored, so that law enforcement can “address potential problems before they happen.” Judges leery of recidivism have begun to embrace risk-assessment tools. As a result, defendants who have yet to be convicted of an offense in court may be categorized by their future chances of reoffending.

The combination of GPS location data with other tracking technologies such as automatic license-plate readers represents an uncharted frontier for finer-grained surveillance. In some cities, police have concentrated these tools in neighborhoods of color. A CityLab investigation found that Baltimore police were more likely to deploy the Stingray — the controversial and secretive cellphone tracking technology — where African Americans lived. In the aftermath of Freddie Gray’s death in 2015, the police spied on Black Lives Matter protesters with face recognition technology. Given this pattern, the term “electronic monitoring” may soon refer not just to a specific piece of equipment but to an all-encompassing strategy.

If the evolution of the criminal-justice system is any guide, it is very likely that the ankle bracelet will go out of fashion. Some GPS monitoring vendors have already started to offer smartphone applications that verify someone’s location through voice and face recognition. These apps, with names like Smart-LINK and Shadowtrack, promise to be cheaper and more convenient than a boxy bracelet. They’re also less visible, mitigating the stigma and normalizing surveillance. While reducing the number of people in physical prison, these seductive applications could, paradoxically, increase its reach. For the nearly 4.5 million Americans on probation or parole, it is not difficult to imagine a virtual prison system as ubiquitous — and invasive — as Instagram or Facebook.

On January 24, exactly three months after White had his monitor installed, his public defender successfully argued in court for its removal. His phone service had been shut off because he had fallen behind on the bill, so his mother told him the good news over video chat.

When White showed up to EMASS a few days later to have the ankle bracelet removed, he said, one of the company’s employees told him that he couldn’t take off his monitor until he paid his debt. White offered him the $35 in his wallet — all the money he had. It wasn’t enough. The employee explained that he needed to pay at least half of the $700 he owed. Somewhere in the contract he had signed months earlier, White had agreed to pay his full balance “at the time of removal.” But as White saw it, the court that had ordered the monitor’s installation was now ordering its removal. Didn’t that count?

“That’s the only thing that’s killing me,” White told me a few weeks later, in early March. “Why are you all not taking it off?” We were in his brother’s room, which, unlike White’s down the hall, had space for a wobbly chair. White sat on the bed, his head resting against the frame, while his brother sat on the other end by the TV, mumbling commands into a headset for the fantasy video game Fortnite. By then, the prosecutor had offered White two to three years of probation in exchange for a plea. (White is waiting to hear if he has been accepted into the city’s diversion program for “youthful offenders,” which would allow him to avoid pleading and wipe the charges from his record in a year.)

White was wearing a loosefitting Nike track jacket and red sweats that bunched up over the top of his monitor. He had recently stopped charging it, and so far, the police hadn’t come knocking. “I don’t even have to have it on,” he said, looking down at his ankle. “But without a job, I can’t get it taken off.” In the last few weeks, he had sold his laptop, his phone and his TV. That cash went to rent, food and his daughter, and what was left barely made a dent in what he owed EMASS.

It was a Monday — a check-in day — but he hadn’t been reporting for the past couple of weeks. He didn’t see the point; he didn’t have the money to get the monitor removed and the office was an hour away by bus. I offered him a ride.

EMASS check-ins take place in a three-story brick building with a low-slung facade draped in ivy. The office doesn’t take cash payments, and a Western Union is conveniently located next door. The other men in the waiting room were also wearing monitors. When it was White’s turn to check in, Buss, the bond-compliance officer, unclipped the band from his ankle and threw the device into a bin, White said. He wasn’t sure why EMASS had now softened its approach, but his debts nonetheless remained.

Buss calculated the money White owed going back to November: $755, plus 10% annual interest. Over the next nine months, EMASS expected him to make monthly payments that would add up to $850 — more than the court had required for his bond. White looked at the receipt and shook his head. “I get in trouble for living,” he said as he walked out of the office. “For being me.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


FTC Urged To Rule On Legality Of 'Secret Surveillance Scores' Used To Vary Prices By Each Online Shopper

Nobody wants to pay too much for a product. If you like online shopping, you may have been charged higher prices than your neighbors. Gizmodo reported:

"... researchers have documented and studied the use of so-called "surveillance scoring," the shadowy, but widely adopted practice of using computer algorithms that, in commerce, result in customers automatically paying different prices for the same product. The term also encompasses tactics used by employers and landlords to deny applicants jobs and housing, respectively, based on suggestions an algorithm spits out. Now experts allege that much of this surveillance scoring behavior is illegal, and they’re asking the Federal Trade Commission (FTC) to investigate."

"In a 38-page petition filed last week, the Consumer Education Foundation (CEF), a California nonprofit with close ties to the group Consumer Watchdog, asked the FTC to explore whether the use of surveillance scores constitute “unfair or deceptive practices” under the Federal Trade Commission Act..."

The petition is part of a "Represent Consumers" (RC) program.

Many travelers have experienced dynamic pricing, where airlines vary fares based upon market conditions: when demand increases, prices go up; when demand decreases, prices go down. Similarly, when there are many unsold seats (i.e., plenty of excess supply), prices go down. But that dynamic pricing does not vary for each traveler.
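To make the distinction concrete, traditional dynamic pricing can be sketched as a fare that moves only with how full the flight is, identically for every traveler. The formula and numbers below are invented for illustration, not any airline's actual model:

```python
def dynamic_price(base_fare: float, seats_sold: int, seats_total: int) -> float:
    """Toy dynamic-pricing model: one fare for everyone, driven
    only by load factor (hypothetical 0.8x-1.4x multiplier)."""
    load_factor = seats_sold / seats_total  # demand proxy: share of seats already sold
    multiplier = 0.8 + 0.6 * load_factor   # empty flight is cheap, full flight is dear
    return round(base_fare * multiplier, 2)
```

Note that the shopper's identity never appears as an input; per-person pricing, by contrast, would take it as one.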

Pricing by each person raises concerns of price discrimination. The legal definition of price discrimination in the United States:

"A seller charging competing buyers different prices for the same "commodity" or discriminating in the provision of "allowances" — compensation for advertising and other services — may be violating the Robinson-Patman Act... Price discriminations are generally lawful, particularly if they reflect the different costs of dealing with different buyers or are the result of a seller's attempts to meet a competitor's offering... There are two legal defenses to these types of alleged Robinson-Patman violations: (1) the price difference is justified by different costs in manufacture, sale, or delivery (e.g., volume discounts), or (2) the price concession was given in good faith to meet a competitor's price."

Airlines have wanted to extend dynamic pricing to each person, and "surveillance scores" seem perfectly suited for the task. The RC petition is packed with information that is instructive about the extent of these business practices. First, the petition described the industry involved:

"Surveillance scoring starts with "analytics companies," the true number of which is unknown... these firms amass thousands or even tens of thousands of demographic and lifestyle data points about consumers, with the help of an estimated 121 data brokers and aggregators... The analytics firms use algorithms to categorize, grade, or assign a numerical value to a consumer based on the consumer’s estimated predicted behavior. That score then dictates how a company will treat a consumer. Consumers deemed to be less valuable are treated poorly, while consumers with better “grades” get preferential treatment..."
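The pattern the petition describes can be sketched in a few lines: data points in, a numeric "value" out, and treatment tiered by the result. Everything here (the attributes, weights, and tier cutoffs) is invented for illustration; the real formulas are proprietary and secret, which is precisely the petition's complaint:

```python
def customer_value_score(profile: dict) -> int:
    """Reduce a consumer's data points to a single 0-100 'value' score.
    Attributes and weights are hypothetical."""
    weights = {"est_income": 0.001, "purchases_per_month": 5, "support_calls": -10}
    raw = sum(weight * profile.get(attr, 0) for attr, weight in weights.items())
    return max(0, min(100, round(raw)))

def service_tier(score: int) -> str:
    """Treatment follows the score: lower-valued consumers wait longer."""
    if score >= 60:
        return "priority"
    if score >= 30:
        return "standard"
    return "deprioritized"
```

The consumer never sees the score, the weights, or the data behind them, yet all three decide how the company treats her.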

Second, the RC petition cited a study which identified 44 different types of proprietary surveillance scores used by industry participants to predict consumer behavior. Some of the score types (emphasis added):

  • The Medication Adherence Score, which predicts whether a consumer is likely to follow a medication regimen;
  • The Health Risk Score, which predicts how much a specific patient will cost an insurance company;
  • The Consumer Profitability Score, which predicts which households may be profitable for a company and hence desirable customers;
  • The Job Security Score, which predicts a person’s future income and ability to pay for things;
  • The Churn Score, which predicts whether a consumer is likely to move her business to another company;
  • The Discretionary Spending Index, which scores how much extra cash a particular consumer might be able to spend on non-necessities;
  • The Invitation to Apply Score, which predicts how likely a consumer is to respond to a sales offer;
  • The Charitable Donor Score, which predicts how likely a household is to make significant charitable donations; and
  • The Pregnancy Predictor Score, which predicts the likelihood of someone getting pregnant.

It is important to note that the RC petition does not call for a halt in the collection of personal data about consumers. Rather, it asks the FTC, "to investigate and prohibit the targeting of consumers’ private data against them after it has been collected." Clarity is needed about what is, and is not, legal when consumers' personal data is used against them.

Third, the RC petition also cited published studies about pricing discrimination:

"An early seminal study of price discrimination published by researchers at Northeastern University in 2014 (Northeastern Price Discrimination Study) examined the pricing practices of e-commerce websites. The researchers developed a software-based methodology for measuring price discrimination and tested it with 300 real-world users who shopped on 16 popular e-commerce websites. Of ten different general retailers tested in 2014, only one -- Home Depot -- was confirmed to be engaging in price discrimination. Home Depot quoted prices to mobile-device users that were approximately $100 more than those quoted to desktop users. The researchers were unable to ascertain why... The Northeastern Price Discrimination Study also found that “human shoppers got worse bargains on a number of websites,” compared to an automated shopping browser that did not have any personal data trail associated with it, validating that Home Depot was considering shoppers’ personal data when setting prices online."

So, concerns about price discrimination aren't simply theory. Related to that, the RC petition cited its own research:

"... researchers at Northeastern University developed an online tool to “expose how websites personalize prices.” The Price Discrimination Tool (PDT) is a plug-in extension used on the Google Chrome browser that allows any Internet user to perform searches on five websites to see if the user is being charged a different price based on whatever information the companies have about that particular user. The PDT uses a remote computer server that is anonymous -- it has no personal data profile... The PDT then displays the price results from the human shopper’s search and those obtained by the remote anonymous computer server. Our own testing using the PDT revealed that Home Depot continues to offer different prices to human shoppers. For example, a search on Home Depot’s website for “white paint” reveals price discrimination. Of the 24 search results on the first page, Home Depot quoted us higher prices for six tubs of white paint than it quoted the anonymous computer... Our testing also revealed similar price discrimination on Home Depot’s website for light bulbs, toilet paper, toilet paper holders, caulk guns, halogen floor lamps and screw drivers... We also detected price discrimination on Walmart’s website using the PDT. Our testing revealed price discrimination on Walmart’s website for items such as paper towels, highlighters, pens, paint and toilet paper roll holders."
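The comparison at the heart of that methodology can be sketched as follows; this is a simplified reconstruction for illustration, not the PDT's actual code:

```python
def flag_price_discrimination(human_prices: dict, anon_prices: dict) -> dict:
    """Compare prices quoted to a logged-in shopper against those quoted
    to an anonymous control with no personal data profile. Items where
    the human was quoted more are flagged with the price difference."""
    flagged = {}
    for item, anon_price in anon_prices.items():
        human_price = human_prices.get(item)
        if human_price is not None and human_price > anon_price:
            flagged[item] = round(human_price - anon_price, 2)
    return flagged
```

Feeding in the petition's paint example ($62.96 quoted to the researcher versus $59.87 to the anonymous server) would flag a $3.09 difference.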

The RC petition listed examples: the Home Depot site quoted $59.87 for a five-gallon bucket of paint to the anonymous user, and $62.96 for the same product to a researcher. Another example: the site quoted $10.26 for a toilet-paper holder to the anonymous user, and $20.89 for the same product to a researcher -- double the price. Price differences per person ranged from small to huge.

Besides concerns about price discrimination, the RC petition discussed "discriminatory customer service," and the data analytics firms allegedly involved:

"Zeta Global sells customer value scores that will determine, among other things, the quality of customer service a consumer receives from one of Zeta’s corporate clients. Zeta Global “has a database of more than 700 million people, with an average of over 2,500 pieces of data per person,” from which it creates the scores. The scores are based on data “such as the number of times a customer has dialed a call center and whether that person has browsed a competitor’s website or searched certain keywords in the past few days.” Based on that score, Zeta will recommend to its clients, which include wireless carriers, whether to respond to one customer more quickly than to others.

"Kustomer Inc.: Customer-service platform Kustomer Inc. uses customer value scores to enable retailers and other businesses to treat customer service inquiries differently..."

"Opera Solutions: describes itself as “a global provider of advanced analytics software solutions that address the persistent problem of scaling Big Data analytics.” Opera Solutions generates customer value scores for its clients (including airlines, retailers and banks)..."

The petition cited examples of "discriminatory customer service," including denied product returns and customers shunted to less-helpful customer service options. Plus, there are accuracy concerns:

"Considering that credit scores – the existence of which has been public since 1970 – are routinely based on credit reports found to contain errors that harm consumers’ financial standing, it is highly likely that Secret Surveillance Scores are based on inaccurate or outdated information. Since the score and the erroneous data upon which it relies are secret, there is no way to correct an error, assuming the consumer was aware of it."

Regular readers of this blog are already aware of errors in reports from credit reporting agencies. A copy of the RC petition is also available here (Adobe PDF, 3.2 Mbytes).

What immediately becomes clear while reading the petition is the massive amount of personal data collected about consumers to create these proprietary scores. Consumers have no way of knowing, nor of challenging, the accuracy of the scores when they are used against them. So, not only has an industry arisen which profits by acquiring and then selling, trading, analyzing, and/or using consumers' data; there is also little to no accountability.

In other words, the playing field is heavily tilted for corporations and against consumers.

This is also a reminder of why telecommunications companies fought hard for the repeals of broadband privacy and net neutrality, both of which the U.S. Federal Communications Commission (FCC) delivered in 2017 under the leadership of FCC Chairman Ajit Pai, a Trump appointee. Repeal of the former consumer protection allows unrestricted collection of consumers' data, plus new revenue streams from selling the collected data to analytics firms, data brokers, and business partners.

Repeal of the second consumer protection allows internet and cable providers to price content using whatever criteria they choose. You see a rudimentary version of this pricing in a business practice called "zero rating." An example: streaming a movie via a provider's internet service counts against a data cap while the same movie viewed through the same provider's cable subscription does not. Yet, the exact same movie is delivered through the exact same cable (or fiber) internet connection.

Smart readers will immediately realize that a possible next step is per-person zero rating. Streaming a movie might count against your data cap but not against your neighbor's. Who would know? Oversight and consumer protections are needed.
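A hypothetical per-person zero-rating scheme is easy to sketch. The accounting below is invented to illustrate the concern, not any provider's actual billing:

```python
def overage_gb(cap_gb: float, usage_gb: dict, zero_rated: set) -> float:
    """Count only non-zero-rated traffic against this subscriber's cap;
    return gigabytes over the cap (0.0 if under)."""
    counted = sum(gb for service, gb in usage_gb.items() if service not in zero_rated)
    return max(0.0, round(counted - cap_gb, 2))
```

Two subscribers with identical usage but different zero-rated sets would get different bills, and neither would have any way to know why.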

What are your opinions of secret surveillance scores?


CBP Breach Disclosed Images Of Travelers' Faces And Vehicle License Plates. Many Unanswered Questions

A security breach at a vendor used by U.S. Customs and Border Protection (CBP) has disclosed images of both travelers and vehicle license plates. The Washington Post reported:

"Customs officials said in a statement Monday that the images, which included photos of people’s faces and license plates, had been compromised as part of an attack on a federal subcontractor. CBP makes extensive use of cameras and video recordings at airports and land border crossings, where images of vehicles are captured. Those images are used as part of a growing agency facial-recognition program designed to track the identity of people entering and exiting the United States. Fewer than 100,000 people were impacted, said CBP... Officials said the stolen information did not include other identifying information, and no passport or other travel document photos were compromised..."

Reportedly, CBP learned about the breach on May 31. The newspaper also reported:

"CBP said copies of “license plate images and traveler images collected by CBP” had been transferred to the subcontractor’s company network, violating the agency’s security and privacy rules. The subcontractor’s network was then attacked and breached. No CBP systems were compromised, the agency said."

A reporter posted on Twitter the brief statement by CBP, which was sent to selected news organizations:

"On May 31, 2019, CBP learned that a subcontractor, in violation of CBP policies and without CBP's authorization or knowledge, had transferred copies of license plate images and traveler images collected by CBP to the subcontractor's company network. The subcontractor's network was subsequently compromised by a malicious cyber-attack. No CBP systems were compromised.

Initial information indicates that the subcontractor violated mandatory security and privacy controls outlined in their contract. As of today, none of the image data has been identified on the Dark Web or internet. CBP has alerted Members of Congress and is working closely with other law enforcement agencies and cybersecurity entities, and its own Office of Professional Responsibility to actively investigate the incident. CBP will unwaveringly work with all partners to determine the extent of the breach and the appropriate response. CBP has removed from service all equipment related to the breach and is closely monitoring all CBP work by the contractor..."

Well, that brief statement is a start... a small start. This security breach is very troubling for several reasons.

First, it seems that CBP was unaware of the contractual violation (e.g., the downloaded images) until it was informed of the data breach. That suggests an inadequate contractual agreement between the vendor and CBP, or failures by CBP to monitor and enforce its contracts. That also raises more questions:

  • Which executives at the vendor will be reprimanded for this violation, and when?
  • Why did CBP fail to identify the download violation?
  • What changes are underway to prevent future violations?
  • Why is CBP continuing to use a vendor known to have severely violated its contractual agreement?
  • What other vendors have violated CBP contracts?

Second, CBP refused to disclose the name of the vendor. Why? What does the secrecy accomplish? Its own statement described the breach as a "malicious cyber-attack," which seems to warrant disclosure. Were CBP executives caught unprepared?

Thankfully, reporters at the Washington Post continued investigating:

"... a Microsoft Word document of CBP’s public statement, sent Monday to Washington Post reporters, included the name “Perceptics” in the title: “CBP Perceptics Public Statement.” Perceptics representatives did not immediately respond to requests for comment... reporters at The Register, a British technology news site, reported late last month that a large haul of breached data from the firm Perceptics was being offered as a free download on the dark web."

So, we don't know for sure if Perceptics was the CBP vendor. However, the May 23rd article in The Register indicates that Perceptics executives were already aware of the breach. CBP executives should have known about the breach on May 23, too, since the article mentioned both entities. Then, why did the CBP statement say it learned of the breach on May 31st? Something here smells -- arrogance, incompetence, or both.

Third, a check at press time of the CBP website and newsroom failed to find any mentions of the security breach. CBP executives have had since May 31st (or since May 23rd) to respond, so why send a statement only to select news organizations? Why not publish that statement on its website, too? Were CBP executives caught unprepared and then rushed a haphazard response? When will the breach investigation report be released?

This is troubling. It suggests either arrogance or unpreparedness. As a taxpayer, my money funds CBP activities. I want to know that my money is being spent effectively.

Fourth, the lack of a detailed breach announcement means many related questions remain unanswered:

  • When will CBP notify affected persons? If the vendor will notify affected persons, then CBP must disclose the vendor's name in advance.
  • What assistance (e.g., free credit monitoring) will CBP provide affected persons?
  • What is the status of the post-breach investigation? It helps to know how attackers broke in so effective fixes can be implemented.
  • What other data elements were accessed/stolen? Metadata (e.g., image date and timestamp, border crossing GPS location, entering or exiting USA, vehicle brand and model, number and ages of any passengers in vehicles, etc.) attached to the images can be just as damaging.
  • Were any data elements encrypted? If not, why not?
  • Can facial images be matched to vehicle plate images, and/or to other data elements? If so, this creates more problems for impacted persons.
  • When will fixes be implemented so this doesn't happen again?
  • Exactly how many persons were affected, and in what states? Local states' breach notification laws may apply.
  • How many of the affected persons are U.S. citizens? If the 100,000 estimate applies to only affected U.S. citizens, then we need to know the true total number of persons impacted by the breach.
  • Does the 100,000 estimate refer to facial images only? If so, then exactly how many vehicle license plate images were disclosed?

The statement of "fewer than 100,000 persons impacted" seems vague. A breach investigation should determine two fairly precise items: the number of facial images accessed/stolen, and the number of license plate images accessed/stolen.

Plus, it seems wise to assume more data was stolen during the breach. Why? Consider this report by The Atlantic:

"I would be cautious about assuming this data breach contains only photo data," said Chad Loder, the CEO of Habitu8, a cybersecurity firm that trains other companies on security awareness. The full scope of the breach may be much larger than what CBP revealed in its original statement, he said. In recent years, CBP has asked travelers for fingerprints, facial data, and, recently, even social-media accounts. "If CBP’s contractor was targeted specifically, it’s unlikely that the attacker would have stopped with just photo data..."

If social media passwords were stolen, then affected persons need to know so they can change online passwords. And, elected officials are also asking questions. The Hill reported:

"House Homeland Security Committee Chairman Bennie Thompson (D-Miss.) announced on Monday that his committee would hold hearings next month to examine the collection of biometric information by the Department of Homeland Security (DHS), which includes CBP... Homeland Security Committee ranking member Mike Rogers (R-Ala.), used the breach to criticize DHS’s handling of cybersecurity challenges, saying in a statement to The Hill that "the agency is ill-equipped to handle emerging cyberthreats"... Representative Cedric Richmond (D-La.), the chairman of the House Homeland Security subcommittee on cybersecurity, also called for more answers about the breach, which he said would inform Congress's next steps... Senator Brian Schatz (D-Hawaii), the ranking member of the Senate Commerce Subcommittee on Communications, Technology, Innovation and the Internet, said he thinks the breach merits an investigation by the Office of the Inspector General."

Good suggestion by Senator Schatz. Clearly, there's plenty more news to come. Plenty.


How Google Tracks All Of Your Online Purchases. Its Reasons Are Unclear

Google tracks all of your online purchases. How? ExpressVPN reported:

"Initially stumbled across by a CNBC reporter, a "Google Purchases" page keeps track of all digital receipts sent to your Gmail account from as far back as 2012. The page is not limited to purchases made directly from Google, either. From flight tickets to Amazon purchases to food delivery services, if the receipt went to your Gmail, it’s on the list. Google takes the name, date, and other specifics surrounding the purchase and records them in a list on the page."
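Mechanically, this kind of harvesting only requires pattern-matching the receipt emails already sitting in an inbox. Here is a minimal sketch; the regex and field names are invented for illustration, since Google's actual parser is not public:

```python
import re
from typing import Optional

# Hypothetical receipt pattern; real receipts vary widely by merchant.
RECEIPT_RE = re.compile(
    r"Order #(?P<order_id>\w+)\s+Item: (?P<item>[^\n]+)\s+Total: \$(?P<total>[\d.]+)"
)

def extract_purchase(email_body: str) -> Optional[dict]:
    """Pull a purchase record out of a receipt email, or None if no match."""
    match = RECEIPT_RE.search(email_body)
    if match is None:
        return None
    return {
        "order_id": match.group("order_id"),
        "item": match.group("item").strip(),
        "total": float(match.group("total")),
    }
```

Run over years of archived mail, even a crude extractor like this would reconstruct a detailed purchase history without the sender or recipient ever opting in.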

The tracking is a reminder of the privileged position email providers, like internet service providers (ISPs), enjoy with access to so much of users' online activities. Consumers' purchase receipts can include very sensitive information such as foods, medicine, and medical devices -- for parents and/or their children; or bookings for upcoming travel indicating when a home will be vacant; or purchases of medical marijuana, D-I-Y guns, and/or internet-connected adult toys. The bottom line: some consumers may not want their purchase data collected (nor shared with other companies by Google).

Now that you're aware of the tracking, something to consider the next time a cashier at a brick-and-mortar retail store asks: paper or email receipt? I always choose paper. You might, too.

To view your Google Purchase page, visit http://myaccount.google.com/purchases and sign in. Only you can view your purchases page.

The available privacy solutions are ugly. One option is to switch to an email provider that doesn't track you. If you decide to stay with Gmail, the only fix is a manual process that will cost you hours or days of wading through your archive and deleting emails:

"... the only way to remove a purchase from the list is to find and manually delete the email that contains the original receipt. Worse still, you can’t turn off tracking, and there’s no way to delete the list en masse. This process is incredibly tedious... Even more perplexing is that there’s no clear purpose for the collection of this data... the logic behind this reasoning is strange, the info is hiding in Google’s Account page, and it’s not exactly easy to access for users who want to “view and keep track of purchases.” And seeing as this page isn’t really being promoted to its users..."

Google said it is doing more for its customers regarding privacy. Last month, The Washington Post reported:

"... One executive after another at Google’s I/O conference in its hometown of Mountain View, California emphasized new privacy settings in products like search, maps, thermostats and updated mobile phone software. "We strongly believe that privacy and security are for everyone, not just a few," Google CEO Sundar Pichai said.

Said product manager Stephanie Cuthbertson, who introduced a new version of the Android mobile operating system: "You should always be in control of what you share and who you share it with."... Google also committed to improved privacy controls of its Nest-connected home devices, including the ability of users to delete their audio files. Some users have reported having hackers eavesdropping through their Nest devices."

Hmmm. It seems more privacy and control does not extend to Gmail users' purchase data. What are your opinions?

[Editor's note: this page was revised Monday evening to fix a typo and to include the link to the Google Purchases page.]


Technology And Human Rights Organizations Sent Joint Letter Urging House Representatives Not To Fund 'Invasive Surveillance' Tech Instead of A Border Wall

More than two dozen technology and human rights organizations sent a joint letter Tuesday to leaders in the House of Representatives, urging them not to fund "invasive surveillance technologies" as a replacement for a physical wall or barrier along the southern border of the United States. The joint letter cited five concerns:

"1. Risk-based targeting: The proposal calls for “an expansion of risk-based targeting of passengers and cargo entering the United States.” We are concerned that this includes the expansion of programs — proven to be ineffective and to exacerbate racial profiling — that use mathematical analytics to make targeting determinations. All too often, these systems replicate the biases of their programmers, burden vulnerable communities, lack democratic transparency, and encourage the collection and analysis of ever-increasing amounts of data... 3. Biometrics: The proposal calls for “new cutting edge technology” at the border. If that includes new face surveillance like that deployed at international airline departures, it should not. Senator Jeff Merkley and the Congressional Black Caucus have expressed serious concern that facial recognition technology would place “disproportionate burdens on communities of color and could stifle Americans’ willingness to exercise their first amendment rights in public.” In addition, use of other biometrics, including iris scans and voice recognition, also raise significant privacy concerns... 5. Biometric and DNA data: We oppose biometric screening at the border and the collection of immigrants’ DNA, and fear this may be another form of “new cutting edge technology” under consideration. We are concerned about the threat that any collected biometric data will be stolen or misused, as well as the potential for such programs to be expanded far beyond their original scope..."

The letter was sent to Speaker Nancy Pelosi, Minority Leader Kevin McCarthy, Majority Leader Steny Hoyer, Minority Whip Steve Scalise, House Appropriations Committee Chair Nita Lowey, and House Appropriations Committee Ranking Member Kay Granger.

Twenty-seven organizations signed the joint letter, including Fight for the Future, the Electronic Frontier Foundation, the American Civil Liberties Union (ACLU), the American-Arab Anti-Discrimination Committee, the Center for Media Justice, the Project On Government Oversight, and others. Read the entire letter.

Earlier this month, a structural and civil engineer cited several reasons why a physical wall won't work and would be vastly more expensive than the $5.7 billion requested.

Clearly, there are distinct advantages and disadvantages to each of the border-protection solutions the House and the President are considering. It is a complex problem. The advantages and disadvantages of all proposals need to be clear, transparent, and understood by taxpayers prior to any final decisions.


Facebook Paid Teens To Install Unauthorized Spyware On Their Phones. Plenty Of Questions Remain

Facebook logo While today is the 15th anniversary of Facebook, more important news dominates. Last week featured plenty of news about Facebook. TechCrunch reported on Tuesday:

"Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe... Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits."

So, teenagers installed surveillance software on their phones and tablets, to spy for Facebook on themselves, Facebook's competitors, and others. This is huge news for several reasons. First, the "Facebook Research" app is VPN (Virtual Private Network) software which:

"... lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy..."

Reportedly, the Research app collected massive amounts of information: private messages in social media apps, chats from instant messaging apps, photos/videos sent to others, emails, web searches, web browsing activity, and geo-location data. So, a very intrusive app. And, after being forced to remove one intrusive app from Apple's store, Facebook continued anyway -- with another app that performed the same function. Not good.

Second, there is the moral issue of using the youngest users as spies... persons who arguably have the least experience and skills at reading complex documents: corporate terms-of-use and privacy policies. I wonder how many teenagers notified their friends of the spying and data collection. How many teenagers fully understood what they were doing? How many parents were aware of the activity and payments? How many parents notified the parents of their children's friends? How many teens installed the spyware on both their iPhones and iPads? Lots of unanswered questions.

Third, Apple responded quickly. TechCrunch reported Wednesday morning:

"... Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down... Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store."

Facebook's usage of the Enterprise Certificate is significant. TechCrunch also published a statement by Apple:

"We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization... Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked..."

So, the Research app violated Apple's policy. Not good. The app also performed functions similar to the banned Onavo VPN app. Worse. This sounds like an end-run to me. As punishment for its end-run actions, Apple temporarily disabled Facebook's certificates for its internal corporate apps.

Axios described very well Facebook's behavior:

"Facebook took a program designed to let businesses internally test their own app and used it to monitor most, if not everything, a user did on their phone — a degree of surveillance barred in the official App Store."

And the animated Facebook image in the Axios article sure looks like a liar-liar-logo-on-fire image. LOL! Pure gold! Seriously, Facebook's behavior indicates questionable ethics, and/or an expectation of not getting caught. Reportedly, the Facebook internal apps shut down by the certificate revocation included shuttle schedules, campus maps, and company calendars. After that, some Facebook employees discussed quitting.

And, it raises more questions. Which Facebook executives approved Project Atlas? What advice did Facebook's legal staff provide prior to approval? Was that advice followed or ignored?

Google logo Fourth, TechCrunch also reported:

"Facebook’s Research program will continue to run on Android."

What? So, Google devices were involved, too. Is this spy program okay with Google executives? A follow-up report on Wednesday by TechCrunch:

"Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple... Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app..."

Oy! So, Google operates like Facebook. Also reported by TechCrunch:

"The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices..."

So, Google will terminate its spy program on Apple devices but, like Facebook, continue it on Android devices. Hmmmmm. Well, that answers some questions. I guess Google executives are okay with this spy program. More questions remain.

Fifth, Facebook tried to defend the Research app and its actions in an internal memo to employees. On Thursday, TechCrunch tore apart the claims in an internal Facebook memo from vice president Pedro Canahuati. Chiefly:

"Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause and CentreCode to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly... Facebook claims it wasn’t “spying,” yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not specify data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity.” The parental consent form from Facebook and Applause lists none of the specific types of data collected..."

So, Research app participants (e.g., teenagers, parents) couldn't discuss nor warn their friends (and their friends' parents) about the data collection. I strongly encourage everyone to read the entire TechCrunch analysis. It is eye-opening.

Sixth, a reader shared concerns about whether Facebook's actions violated federal laws. Did Project Atlas violate the Digital Millennium Copyright Act (DMCA); specifically the "anti-circumvention" provision, which prohibits avoiding the security protections in software? Did it violate the Computer Fraud and Abuse Act? What about breach-of-contract and fraud laws? What about states' laws? So, one could ask similar questions about Google's actions, too.

I am not an attorney. Hopefully, some attorneys will weigh in on these questions. Probably, some skilled attorneys will investigate various legal options.

All of this is very disturbing. Is this what consumers can expect of Silicon Valley firms? Is this the best tech firms can do? Is this the low level the United States has sunk to? Kudos to the TechCrunch staff for some excellent reporting.

What are your opinions of Project Atlas? Of Facebook's behavior? Of Google's?


Companies Want Your Location Data. Recent Examples: The Weather Channel And Burger King

Weather Channel logo It is easy to find examples where companies use mobile apps to collect consumers' real-time GPS location data, so they can archive and resell that information later for additional profits. First, ExpressVPN reported:

"The city of Los Angeles is suing the Weather Company, a subsidiary of IBM, for secretly mining and selling user location data with the extremely popular Weather Channel App. Stating that the app unfairly manipulates users into enabling their location settings for more accurate weather reports, the lawsuit affirms that the app collects and then sells this data to third-party companies... Citing a recent investigation by The New York Times that revealed more than 75 companies silently collecting location data (if you haven’t seen it yet, it’s worth a read), the lawsuit is basing its case on California’s Unfair Competition Law... the California Consumer Privacy Act, which is set to go into effect in 2020, would make it harder for companies to blindly profit off customer data... This lawsuit hopes to fine the Weather Company up to $2,500 for each violation of the Unfair Competition Law. With more than 200 million downloads and a reported 45+ million users..."
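How a court would count "violations" is not specified in the report, but the numbers cited above make a rough, purely illustrative upper bound easy to compute:

```python
# Hypothetical upper bound on Unfair Competition Law exposure,
# assuming (for illustration only) that every reported user
# counted as exactly one $2,500 violation.
fine_per_violation = 2_500        # dollars, per the lawsuit
reported_users = 45_000_000       # the reported "45+ million users"

max_exposure = fine_per_violation * reported_users
print(f"${max_exposure:,}")  # → $112,500,000,000
```

That figure is not a prediction -- actual penalties depend on how violations are counted and proven -- but it shows why a per-violation fine structure gets companies' attention.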

Long-term readers remember that a data breach at IBM in 2007 prompted this blog. It's not only internet service providers which collect consumers' location data. Advertisers, retailers, and data brokers want it, too.

Burger King logo Second, Burger King ran a national "Whopper Detour" promotion last month which offered customers a one-cent Whopper burger if they went near a competitor's store. News 5, the ABC News affiliate in Cleveland, reported:

"If you download the Burger King mobile app and drive to a McDonald’s store, you can get the penny burger until December 12, 2018, according to the fast-food chain. You must be within 600 feet of a McDonald's to claim your discount, and no, McDonald's will not serve you a Whopper — you'll have to order the sandwich in the Burger King app, then head to the nearest participating Burger King location to pick it up. More information about the deal can be found on the app on Apple and Android devices."
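Burger King hasn't published how its app implements the 600-foot check, but a geofence like this is typically a straightforward distance test against a list of competitor store coordinates. A minimal sketch, using the standard haversine formula (all function and parameter names here are my own, not Burger King's):

```python
import math

def distance_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in feet (haversine)."""
    earth_radius_feet = 20_902_231  # Earth's mean radius (~6,371 km) in feet
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = (math.sin(d_phi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
    return 2 * earth_radius_feet * math.asin(math.sqrt(a))

def near_competitor(user, store, radius_feet=600):
    """True if the user's GPS fix is within the promotion radius of a store."""
    return distance_feet(*user, *store) <= radius_feet
```

For example, two points about 0.001 degrees of latitude apart (roughly 360 feet) would qualify, while points a half mile apart would not. The privacy point stands regardless of implementation: to run this check at all, the app needs continuous access to the phone's location.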

Next, the relevant portions from Burger King's privacy policy for its mobile apps (emphasis added):

"We collect information you give us when you use the Services. For example, when you visit one of our restaurants, visit one of our websites or use one of our Services, create an account with us, buy a stored-value card in-restaurant or online, participate in a survey or promotion, or take advantage of our in-restaurant Wi-Fi service, we may ask for information such as your name, e-mail address, year of birth, gender, street address, or mobile phone number so that we can provide Services to you. We may collect payment information, such as your credit card number, security code and expiration date... We also may collect information about the products you buy, including where and how frequently you buy them... we may collect information about your use of the Services. For example, we may collect: 1) Device information - such as your hardware model, IP address, other unique device identifiers, operating system version, and settings of the device you use to access the Services; 2) Usage information - such as information about the Services you use, the time and duration of your use of the Services and other information about your interaction with content offered through a Service, and any information stored in cookies and similar technologies that we have set on your device; and 3) Location information - such as your computer’s IP address, your mobile device’s GPS signal or information about nearby WiFi access points and cell towers that may be transmitted to us..."

So, for the low, low price of one hamburger, participants in this promotion gave RBI, the parent company which owns Burger King, perpetual access to their real-time location data. And, since RBI knows when, where, and how long its customers visit competitors' fast-food stores, it also knows similar details about everywhere else they go -- including school, work, doctors, hospitals, and more. Sweet deal for RBI. A poor deal for consumers.

Expect to see more corporate promotions like this, which privacy advocates call "surveillance capitalism."

Consumers' real-time location data is very valuable. Don't give it away for free. If you decide to share it, demand a fair, ongoing payment in exchange. Read privacy and terms-of-use policies before downloading mobile apps, so you don't get abused or taken. Opinions? Thoughts?


The Privacy And Data Security Issues With Medical Marijuana

In the United States, some states have enacted legislation making medical marijuana legal -- despite it being illegal at a federal level. This situation presents privacy issues for both retailers and patients.

In her "Data Security And Privacy" podcast series, privacy consultant Rebecca Harold (@PrivacyProf) interviewed a patient cannabis advocate about privacy and data security issues:

"Most people assume that their data is safe in cannabis stores & medical cannabis dispensaries. Or they believe if they pay in cash there will be no record of their cannabis purchase. Those are incorrect beliefs. How do dispensaries secure & share data? Who WANTS that data? What security is needed? Some in government, law enforcement & employers want data about state legal marijuana and medical cannabis purchases. Michelle Dumay, Cannabis Patient Advocate, helps cannabis dispensaries & stores to secure their customers’ & patients’ data & privacy. Michelle learned through experience getting treatment for her daughter that most medical cannabis dispensaries are not compliant with laws governing the security and privacy of patient data... In this episode, we discuss information security & privacy practices of cannabis shops, risks & what needs to be done when it comes to securing data and understanding privacy laws."

Many consumers know that the Health Insurance Portability and Accountability Act (HIPAA) governs how patients' privacy is protected, and which businesses must comply with that law.

Poor data security (e.g., data breaches, unauthorized recording of patients inside or outside of dispensaries) can result in the misuse of patients' personal and medical information by bad actors and others. Downstream consequences can be negative, such as employers using the data to decline job applications.

After listening to the episode, it seems reasonable for consumers to assume that traditional information industry players (e.g., credit reporting agencies, advertisers, data brokers, law enforcement, government intelligence agencies, etc.) all want marijuana purchase data. Note the use of "consumers," and not only "patients," since about 10 states have legalized recreational marijuana.

Listen to an encore presentation of the "Medical Cannabis Patient Privacy And Data Security" episode.


A Series Of Recent Events And Privacy Snafus At Facebook Cause Multiple Concerns. Does Facebook Deserve Users' Data?

Facebook logo So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix accessed Facebook users' private messages, nor asked for that access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management with its partners and developers? That's an understatement. During March and April of 2018 we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly resell later) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened so it doesn't happen again? This post-breach response seems sloppy. And, the "bug" description seems disingenuous. Anytime persons -- in this case developers -- have access to data they shouldn't have, it is a data breach.
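Facebook hasn't published the code behind this "photo API bug," but permission-scope bugs of the kind described often come down to a dropped filter condition. A hypothetical sketch (all names invented for illustration) of how an app granted access to timeline photos could end up receiving Marketplace, Stories, and never-posted uploads too:

```python
from dataclasses import dataclass

@dataclass
class Photo:
    owner_id: int
    posted: bool      # shared by the user vs. uploaded but never posted
    audience: str     # e.g. "timeline", "marketplace", "stories"

def photos_for_app(photos, user_id, granted_scopes):
    """Intended behavior: only timeline photos the user actually posted."""
    if "user_photos" not in granted_scopes:
        return []
    return [p for p in photos
            if p.owner_id == user_id and p.posted and p.audience == "timeline"]

def photos_for_app_buggy(photos, user_id, granted_scopes):
    """The bug: the audience/posted filter is dropped, so every photo the
    user ever uploaded -- posted or not -- leaks to the third-party app."""
    if "user_photos" not in granted_scopes:
        return []
    return [p for p in photos if p.owner_id == user_id]
```

Whatever the actual cause, a test suite comparing the two functions' outputs against the documented permission would have caught the difference -- which is why the apparent absence of that QA step is so striking.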

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage nor control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. Reports surfaced of many users experiencing repeated bogus Friend Requests, due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook approached several major banks, asking them to share their customers' detailed financial information in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN App was removed from the Apple App store because the app violated data-collection policies. 9 To 5 Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content, nor stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users (emphasis added):

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to operate several types of attacks (e.g., spam, phishing, etc.) against Facebook users. The stolen data allows foreign spy agencies to collect useful information to target persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggest not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question (emphasis added):

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and WhatsApp accounts, since Facebook operates those services, too. If you want to quit the service but suffer with FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And, your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, nor use its voting and ballot previews content. Long ago I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post on Facebook links to posts in this blog, since it is helpful information for many Facebook users. In what ways have you reduced your usage of Facebook?


China Blamed For Cyberattack In The Gigantic Marriott-Starwood Hotels Data Breach

Marriott International logo An update on the gigantic Marriott-Starwood data breach where details about 500 million guests were stolen. The New York Times reported that the cyberattack:

"... was part of a Chinese intelligence-gathering effort that also hacked health insurers and the security clearance files of millions more Americans, according to two people briefed on the investigation. The hackers, they said, are suspected of working on behalf of the Ministry of State Security, the country’s Communist-controlled civilian spy agency... While American intelligence agencies have not reached a final assessment of who performed the hacking, a range of firms brought in to assess the damage quickly saw computer code and patterns familiar to operations by Chinese actors... China has reverted over the past 18 months to the kind of intrusions into American companies and government agencies that President Barack Obama thought he had ended in 2015 in an agreement with Mr. Xi. Geng Shuang, a spokesman for China’s Ministry of Foreign Affairs, denied any knowledge of the Marriott hacking..."

Why would any country's intelligence agency want to hack a hotel chain's database? The Times explained:

"The Marriott database contains not only credit card information but passport data. Lisa Monaco, a former homeland security adviser under Mr. Obama, noted last week at a conference that passport information would be particularly valuable in tracking who is crossing borders and what they look like, among other key data."

Also, context matters. First, this corporate acquisition was (thankfully) blocked:

"The effort to amass Americans’ personal information so alarmed government officials that in 2016, the Obama administration threatened to block a $14 billion bid by China’s Anbang Insurance Group Co. to acquire Starwood Hotel & Resorts Worldwide, according to one former official familiar with the work of the Committee on Foreign Investments in the United States, a secretive government body that reviews foreign acquisitions..."

Later that year, Marriott Hotels acquired Starwood for $13.6 billion. Second, remember the massive government data breach in 2014 at the Office of Personnel Management (OPM). The New York Times added that the Marriott breach:

"... was only part of an aggressive operation whose centerpiece was the 2014 hacking into the Office of Personnel Management. At the time, the government bureau loosely guarded the detailed forms that Americans fill out to get security clearances — forms that contain financial data; information about spouses, children and past romantic relationships; and any meetings with foreigners. Such information is exactly what the Chinese use to root out spies, recruit intelligence agents and build a rich repository of Americans’ personal data for future targeting..."

Not good. And, this is not the first time concerns about China have been raised. Reports surfaced in 2016 about malware installed in the firmware of smartphones running the Android operating system (OS) software. In 2015, China enacted a new "secure and controllable" security law, which many security experts viewed then as a method to ensure that back doors were built into computing products and devices during the manufacturing and assembly process.

And, even if China's MSS didn't do this massive cyberattack, it could have been another country's intelligence agency. Not good either.

Regardless of who the attackers were, this incident is a huge reminder to executives in government and in the private sector to secure their computer systems. Hopefully, executives at major hotel chains -- especially those frequented by government officials and military members -- now realize that their systems are high-value targets.


You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder. It’s a potentially serious disorder that, left untreated, can raise the risk of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
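The Medicare usage rule quoted above is concrete enough to express as a simple check. This is only an illustrative sketch of the stated threshold (4+ hours on at least 70% of nights in a 30-day window); the function name and structure are mine, not Medicare's actual algorithm:

```python
def meets_medicare_compliance(nightly_hours):
    """Check one 30-day window of CPAP usage against the rule described
    above: at least 4 hours of use on at least 70% of the nights."""
    assert len(nightly_hours) == 30, "expects exactly one 30-day window"
    compliant_nights = sum(1 for hours in nightly_hours if hours >= 4)
    return compliant_nights / len(nightly_hours) >= 0.70

# 21 of 30 nights at 4+ hours sits exactly at the 70% floor.
print(meets_medicare_compliance([5] * 21 + [2] * 9))   # True
print(meets_medicare_compliance([5] * 20 + [2] * 10))  # False
```

Note that under a rule like this, a patient could use the machine for 3 hours and 59 minutes every single night and still be deemed non-compliant.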

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.
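For context, the markups alleged above can be computed directly from the two figures reported in the suit. A back-of-the-envelope sketch using only those numbers:

```python
# Figures reported in the Neufeld suit: what Cigna billed the patient
# versus what the supplier was actually paid for the same item.
items = {
    "disposable filter": {"billed_to_patient": 25.68, "paid_to_supplier": 7.50},
    "face mask": {"billed_to_patient": 147.78, "paid_to_supplier": 95.00},
}

for name, p in items.items():
    markup = (p["billed_to_patient"] - p["paid_to_supplier"]) / p["paid_to_supplier"]
    print(f"{name}: {markup:.0%} markup over what the supplier was paid")
```

By this arithmetic, the filter was marked up roughly 242 percent and the mask roughly 56 percent over the supplier's price.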

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



Google Admitted Tracking Users' Location Even When Phone Setting Disabled

If you own, or are considering buying, a smartphone running Google's Android operating system (OS), take note. ZDNet reported (emphasis added):

"Phones running Android have been gathering data about a user's location and sending it back to Google when connected to the internet, with Quartz first revealing the practice has been occurring since January 2017. According to the report, Android phones and tablets have been collecting the addresses of nearby cellular towers and sending the encrypted data back, even when the location tracking function is disabled by the user... Google does not make this explicitly clear in its Privacy Policy, which means Android users that have disabled location tracking were still being tracked by the search engine giant..."

This is another reminder of the cost of free services and/or cheaper smartphones. You're gonna be tracked... extensively... whether you want it or not. The term "surveillance capitalism" is often used to describe this business model.

A reader shared a blunt assessment, "There is no way to avoid being Google’s property (a/k/a its bitch) if you use an Android phone." Harsh, but accurate. What is your opinion?


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators prioritize possible terrorism content -- removing posts automatically only at very high confidence -- and the announcement also mentioned an appeals process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leaned heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts by a variety of bad actors. Hmmmm... sounds good, but... it makes one wonder about the digital arms race happening. If the bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, then not good.

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey due to:

"... a $10,000 to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus Rift in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries, which value citizens' privacy, will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published a detailed, comprehensive "Delay, Deny, and Deflect" investigative report based upon interviews of at least 50 persons:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth and its failure to identify and respond to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and Democratic politicians. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described the history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that investing more equals doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported on 13 historical national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company wanting its social networking service to be a source of reliable, unbiased news. This news is not good for a company wanting its users to accept it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.