
Report: Auto Emergency Braking With Pedestrian Detection Systems Fail When Needed Most

The American Automobile Association (AAA) reported new research results from tests of automatic emergency braking with pedestrian detection systems in automobiles. The AAA found that these systems work inconsistently and failed when most needed: at night. Chief findings from the report:

"... automatic emergency braking systems with pedestrian detection perform inconsistently, and proved to be completely ineffective at night. An alarming result, considering 75% of pedestrian fatalities occur after dark. The systems were also challenged by real-world situations, like a vehicle turning right into the path of an adult. AAA’s testing found that in this simulated scenario, the systems did not react at all, colliding with the adult pedestrian target every time..."

The testing was performed jointly with the Automobile Club of Southern California's Automotive Research Center in Los Angeles, California. Track testing was conducted on closed surface streets on the grounds of the Auto Club Speedway in Fontana, California. Four test vehicles were used: a 2019 Chevy Malibu, a 2019 Honda Accord, a 2019 Tesla Model 3, and a 2019 Toyota Camry. The testing included four scenarios:

  1. "An adult crossing in front of a vehicle traveling at 20 mph and 30 mph during the day and at 25 mph at night;
  2. A child darting out from between two parked cars in front of a vehicle traveling at 20 mph and 30 mph;
  3. A vehicle turning right onto an adjacent road with an adult crossing at the same time; and
  4. Two adults standing along the side of the road with their backs to traffic, with a vehicle approaching at 20 mph and 30 mph."

For scenario #1, with the vehicle moving at 20 mph, a collision resulted 60 percent of the time (i.e., the systems avoided a collision only 40 percent of the time). For scenario #2, a collision occurred 89 percent of the time at 20 mph. For scenario #3, collisions resulted 100 percent of the time. For scenario #4, a collision resulted 80 percent of the time at 20 mph. Additional test results:

"... the systems were ineffective in all scenarios where the vehicle was traveling at 30 mph. At night, none of the systems detected or reacted to the adult pedestrian."

The October 2019 "Automatic Emergency Braking With Pedestrian Detection" AAA report is available here (Adobe PDF).


Google Has Started Home Deliveries Of Packages By Drones

MediaPost reported:

"The first drone home deliveries of packages from Walgreens have started from Wing, the Alphabet subsidiary. Wing recently received an expanded Air Carrier Certificate from the Federal Aviation Administration allowing the first commercial air delivery service by drone directly to homes in the U.S. The FAA permissions are the first allowing multiple pilots to oversee multiple unmanned aircraft making commercial deliveries to the general public simultaneously. Collaborating with Federal Express and Virginia retailer Sugar Magnolia, Wing began delivering over-the-counter medication, gifts and snacks to residents of Christiansburg, Virginia. FedEx completed the first scheduled ecommerce drone delivery on Friday [October 18th]..."


UPS Announces Expansion Of Its Drone Delivery Program

Last week, UPS announced an expansion of its B2B drone delivery program, UPS Flight Forward. The expansion included three items focused on the healthcare industry. First, UPS began a:

"... new drone delivery service in support of the University of Utah Health hospital campuses, in partnership with Matternet. The University of Utah campus program will involve drone deliveries of samples and other cargo, similar to the program originally introduced at WakeMed Hospital in North Carolina."

The second item included an agreement:

"... with CVS Health to develop a variety of drone delivery use cases for business-to consumer applications. The program will include evaluation of delivery of prescriptions and retail products to the homes of CVS customers."

The third item included a partnership:

"... with wholesale pharmaceutical distributor AmerisourceBergen... The collaboration will initially deploy the UPS Flight Forward drone airline to transport certain pharmaceuticals, supplies and records to qualifying medical campuses served by AmerisourceBergen across the United States, with plans to then expand its use to other sites of care."

UPS Chief Strategy and Transformation Officer Scott Price said:

“When we launched UPS Flight Forward, we said we would move quickly to scale this business – now the country’s first and only fully-certified drone airline... We started with a hospital campus environment and are now expanding scale and use-cases. UPS Flight Forward will work with new customers in other industries to design additional solutions for a wide array of last-mile and urgent delivery challenges.”


The National Auto Surveillance Database You Haven't Heard About Has Plenty Of Privacy Issues

Some consumers have heard of Automated License Plate Recognition (ALPR) cameras, the high-speed, computer-controlled technology that automatically reads and records vehicle license plates. Local governments have installed ALPR cameras on stationary objects such as street-light poles, traffic lights, overpasses, highway exit ramps, and electronic toll collection (ETC) gantries.

Mobile ALPR cameras have been installed on police cars and/or police surveillance vans. The Houston Police Department explained in this 2016 video how it uses the technology. Last year, a blog post discussed ALPR usage in San Diego and its data-sharing with Vigilant Solutions.

What you probably don't know: the auto repossession industry also uses the technology. Many "repo men" have ALPR cameras installed on their vehicles. The data they collect is fed into a massive, nationwide, and privately-owned database which archives license-plate images. Reporters at Motherboard obtained a private demo of the database tool to understand its capabilities.

The demo included tracking a license plate with the vehicle owner's consent. Vice reported:

"This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN's database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network... DRN has more than 600 of these "affiliates" collecting data, according to the contract. These affiliates are paid a monthly bonus for gathering the data..."

Affiliates are repo men and others who both use the database tool and upload images to it. DRN even offers financing to help affiliates buy ALPR cameras. The image on the right was taken from the DRN site on September 20, 2019.

When consumers fail to pay their bills, lenders and insurance companies have valid needs to retrieve (or repossess) the unpaid assets. Lenders hire repo men, who then use the DRN database to find vehicles they've been hired to repossess. Those applications are valid, but there are plenty of privacy issues and opportunities for abuse.

Plenty.

First, the data collection is indiscriminate and broad. As repo men (and women) drive through cities and towns to retrieve wanted vehicles, the ALPR cameras mounted on their cars scan all nearby vehicles, both moving and parked. Scans are not limited to vehicles they've been hired to repossess, nor to vehicles of known or suspected criminals. So, innocent consumers are caught up in the massive data collection. According to Vice:

"... in fact, the vast majority of vehicles captured are connected to innocent people. DRN claims to have more than 9 billion license plate scans, according to a DRN contract obtained by Motherboard..."

Second, the data is archived forever. That can provide a very detailed history of a vehicle's (or a person's) movements:

"The results popped up: dozens of sightings, spanning years. The system could see photos of the car parked outside the owner's house; the car in another state as its driver went to visit family; and the car parked in other spots in the owner's city... Some showed the car's location as recently as a few weeks before."

Third, to facilitate searches, metadata is automatically attached to the images: GPS coordinates, date, time, day of week, and more. The metadata provides a detailed history of each vehicle's (or person's) movements: where and when a vehicle travels, patterns such as which days of the week certain locations are visited, and how long the vehicle is parked at specific locations. Vice explained:

"The data is easy to query, according to a DRN training video obtained by Motherboard. The system adds a "tag" to each result, categorising what sort of location the vehicle was likely spotted at, such as "workplace" or "home."

So, DRN can help users associate specific addresses (work, home, school, doctor's office, etc.) with specific vehicles. How accurate might those tags be? While such tagging might help repo men and insurance companies spot fraud, such as out-of-state registered vehicles whose owners are trying to avoid detection and/or higher premiums, it raises other concerns.
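To make the tagging idea concrete, here is a minimal sketch of how timestamped, geotagged plate scans could be categorized by visit patterns. To be clear, this is a hypothetical illustration, not DRN's actual (unpublished) algorithm; the sightings, addresses, and thresholds below are invented:

```python
from datetime import datetime

# Hypothetical plate sightings for one vehicle: (timestamp, location) pairs,
# as ALPR cameras might record them while driving past parked cars.
sightings = [
    (datetime(2019, 9, 3, 2, 15), "123 Oak St"),    # overnight
    (datetime(2019, 9, 4, 23, 40), "123 Oak St"),   # overnight
    (datetime(2019, 9, 4, 10, 5), "500 Main St"),   # weekday business hours
    (datetime(2019, 9, 5, 14, 30), "500 Main St"),  # weekday business hours
]

def tag_location(timestamps):
    """Guess a location category from when the vehicle is repeatedly seen there."""
    overnight = sum(1 for t in timestamps if t.hour >= 22 or t.hour <= 5)
    workday = sum(1 for t in timestamps if 9 <= t.hour <= 17 and t.weekday() < 5)
    if overnight > workday:
        return "home"
    if workday > overnight:
        return "workplace"
    return "unknown"

by_location = {}
for ts, loc in sightings:
    by_location.setdefault(loc, []).append(ts)

for loc, times in by_location.items():
    print(loc, "->", tag_location(times))  # 123 Oak St -> home, 500 Main St -> workplace
```

Even this toy version shows why the metadata matters more than any single photo: a handful of repeated, timestamped sightings is enough to label where someone sleeps and works.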

Fourth, consumers -- vehicle owners -- have no control over the data describing them. Vehicle owners cannot opt out of the data collection, and they cannot review or correct errors in their DRN profiles.

That sounds out of control to me.

The persons whom the archived data directly describes have no say. None. That's a huge concern.

Also, I wonder about single females -- victims of domestic violence -- who have protective orders for their safety. Some states, such as Massachusetts, have Address Confidentiality Programs (ACPs) to protect victims of domestic violence, sexual assault, and stalkers. Does DRN accommodate ACP programs? And if so, how? And if not, why not? How does DRN prevent perps from using its database tool? (Yes, DRN access is an issue. Keep reading.) The Vice report didn't say. Hopefully, future reporting will discuss this.

Fifth, DRN is robust. It can be used to track vehicles in near real time:

"DRN charges $20 to look up a license plate, or $70 for a "live alert", according to the contract. With a live alert, a user can enter a license plate they wish to receive updates on; when the DRN system spots the vehicle, it'll send an email to the user with the newly discovered location."

That makes DRN highly appealing to both valid users (e.g., police, repo men, insurance companies, private investigators) and bad actors posing as valid users. Who might those bad actors be? The Electronic Frontier Foundation (EFF) warned:

"Taken in the aggregate, ALPR data can paint an intimate portrait of a driver’s life and even chill First Amendment protected activity. ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of religious worship."

Sixth is the problem of access. Anybody can use DRN. According to Vice:

"... a private investigator, or a repo man, or an insurance company does not need a warrant to search for someone's movements over years; they just need to pay to access the DRN system, or find someone willing to share or leverage their access..."

Users simply need to comply with DRN's policies. The company says that a) users can use its database tool only for certain applications, and b) its contract prohibits users from sharing search results with third parties. We consumers have only DRN's word that it enforces its policies and that users comply. As we have seen with Facebook data breaches, it is easy for bad actors to pose as valid users in order to do end runs around such policies.

What are your opinions of ALPR cameras and DRN?


Survey: Consumers Use Smart Home Devices Despite Finding Them 'Creepy'

Last month, Selligent Marketing Cloud announced the results of a global survey about how consumers view various brands. Some of the findings covered smart speakers and voice assistants. Key findings:

"Sixty-nine percent of surveyed consumers find it “creepy” when they receive ads based on unprompted cues from voice assistants like Apple’s Siri, Amazon’s Alexa and Google Home. Fifty-one percent are worried that voice assistants are listening to conversations without their consent."

Regarding voice assistants, younger consumers are more likely to believe they are being listened to without their knowledge: 58 percent of Gen-Z respondents (ages 18-24) held this view, versus 36 percent of Baby Boomers (ages 55-75). Key findings about privacy and social media: 41 percent of respondents said they have reduced their use of social media due to privacy concerns, and 32 percent said they quit at least one social media platform within the last 12 months.

Selligent surveyed 5,000 consumers in North America and Western Europe. The company provides services to help B2C marketers. To learn more, see the Selligent "Global Connected Consumer Index."


Report: World Shipments Of Smart Home Devices Forecasted To Grow To 815 Million In 2019, And To 1.39 Billion In 2023

A report by International Data Corporation (IDC) forecasts worldwide shipments of devices for smart homes to grow 23.5% in 2019 over 2018, to nearly 815 million units. The report also forecasts a 14.4 percent compound annual growth rate (CAGR), reaching about 1.39 billion shipments in 2023.
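For context, the two forecasts are consistent with each other: compounding 815 million shipments at 14.4 percent per year over the four years from 2019 to 2023 yields roughly 1.39 billion. A quick check:

```python
# Check IDC's figures: 815M shipments in 2019 growing at a 14.4% CAGR through 2023.
base_2019 = 815_000_000
cagr = 0.144
years = 4  # 2019 -> 2023

forecast_2023 = base_2019 * (1 + cagr) ** years
print(f"{forecast_2023 / 1e9:.2f} billion")  # ~1.40 billion, consistent with IDC's ~1.39 billion
```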

According to the announcement about the report:

"Video entertainment devices are expected to maintain the largest volume of shipments, accounting for 29.9% of all shipments in 2023... Home monitoring/security devices like smart cameras and smart locks will account for 22.1% of the shipments in 2023... Growth in smart speakers and displays is expected to slow to single digits in the next few years... as the installed base of these devices approaches saturation and consumers look to other form factors to access smart assistants in the home, such as thermostats, appliances, and TVs to name a few."

The report, titled "Worldwide Quarterly Smart Home Device Tracker," includes familiar products such as Amazon Echo, Google Home, Philips Hue bulbs, smart speakers, smart thermostats, and connected doorbells. The report covers Asia/Pacific, Canada, Central and Eastern Europe, China, Japan, Latin America, the Middle East and Africa, the United States, and Western Europe.

Surveys in 2018 found that most consumers are satisfied with in-home voice-controlled assistants, and that performance issues hinder trust and device adoption. A survey in 2017 found that 90 percent of consumers want security built into smart-home devices. Also in 2017, researchers warned that a hacked Amazon Echo could be turned into an always-on surveillance device.

And, consumers should use these privacy tips for smart speakers in their homes.

Today's smart homes contain a variety of internet-connected appliances -- televisions, utility meters, hot water heaters, thermostats, refrigerators, security systems, solar panels -- and internet-connected devices you might not expect:  mouse traps, water bowls and feeders for your pets, wine bottles, crock pots, toy dolls, trash/recycle bins, vibrators, orgasm trackers, and adult sex toys. It is a connected world, indeed.


Survey Asked Americans Which They Consider Safer: Self-Driving Ride-Shares Or Solo Ride-Shares With Human Drivers

Many consumers use ride-sharing services, such as Lyft and Uber. We all have heard about self-driving cars. A polling firm asked consumers a very relevant question: "Which ride is trusted more? Would you rather take a rideshare alone or a self-driving car?" The results may surprise you.

The questions are relevant given news reports about sexual assaults and kidnappings by ride-sharing drivers and imposters. A pedestrian death involving a self-driving ride-sharing car highlighted the ethical issues about whom machines should save when fatal crashes can't be avoided. Developers have admitted that self-driving cars can be hacked by bad actors, just like other computers and mobile devices. And, new car buyers stated clear preferences when considering self-driving (a/k/a autonomous) vehicles versus standard vehicles with self-driving modes.

Using Google Consumer Surveys, The Zebra surveyed 2,000 persons in the United States in August 2019 and found:

"53 percent of people felt safer taking a self-driving car than driver-operated rideshare alone; Baby Boomers (age 55-plus) were the only age group to prefer a solo Uber ride over a driverless car; Gen Z (ages 18–24) were most open to driverless rideshares: 40 percent said they were willing to hail a ride from one."

Founded 7 years ago, The Zebra describes itself as "the nation's leading insurance comparison site." The survey also found:

"... Baby Boomers were the only group to trust solo ridesharing more than they would a ride in a self-driving car... despite women being subjected to higher rates of sexual violence, the poll found women were only slightly more likely than men to choose a self-driving car over ridesharing alone (53 percent of women compared to 52 percent of men).

It seems safe to assume: trust it or not, the tech is coming. Quickly. What are your opinions?


Global Study: New Car Buyers Still Prefer Standard Cars Over Self-Driving Cars

A recent worldwide study found that new car buyers continue to enjoy and prefer the experience of driving. When asked whether they would consider fully self-driving cars (a/k/a autonomous vehicles) or vehicles with autonomous modes, drivers stated clear preferences. Key findings by Ipsos Mobility:

"1) Roughly half of new car buyers have some familiarity with autonomous mode; Familiarity highest in China and Japan; 2) On a global basis, 36% would consider a vehicle with autonomous mode however, only 12% would Definitely Consider; 3) If given the choice, only 6% of new car buyers would purchase a fully autonomous vehicle while the majority (57%) would purchase a vehicle with an autonomous mode and 37% would just purchase a standard vehicle..."

To summarize: it's the driving experience which matters.

Ipsos surveyed 20,000 drivers across ten countries: Brazil, China, France, Germany, Italy, Japan, Russia, Spain, the United Kingdom, and the United States. The 2019 "Global Mobility Navigator Syndicated Study" includes three modules: a) Autonomous and Advanced Features, b) Electric Vehicles (Needs & Intentions), and c) Shared Mobility (Car Sharing & Ride Hailing). The above findings are from the first module.

Secondary findings about autonomous features in cars:

"The auto industry is also battling an awareness issue with the new technology. Globally, only 15% said they knew a fair amount about Autonomous mode... while there are enjoyment factors to consider in the autonomous future, there are also safety concerns for consumers. The study revealed one is pedestrian safety as well as other vehicles, while the driver’s own safety is a slightly lower concern. Meanwhile, if a driver did use the autonomous mode, 44% state they would still remain focused on the road. This implies a tremendous lack of trust in the system’s ability to safely self-drive. Another big worry for consumers is the security of the vehicle’s data. A strong concern was the possibility of someone hacking into their self-driving system and causing an accident."

The report listed 16 features for "connected cars," including Predicting The Traffic, Advanced Drive Assist Systems, Search For Nearby Parking Lots, Automated Parking, Smart Refueling/Recharging, and more. Additional details about the report and features are available here.


Privacy Tips For The Smart Speakers In Your Home

Many consumers love the hands-free convenience of smart speakers in their homes. The appeal includes several applications: stream music, plan travel, manage your grocery list, get briefed on news headlines, buy movie tickets, hear jokes, get sports scores, and more. Like any other internet-connected device, it's wise to know and use the device's security settings if you value your, your children's, and your guests' privacy.

In the August issue of its print magazine, Consumer Reports (CR) advises the following settings for your smart speakers:

"Protect Your Privacy
If keeping a speaker with a microphone in your home makes you uneasy, you have reason to be. Amazon, Apple, and Google all collect recorded snippets of consumers' commands to improve their voice-computing technology. But they also offer ways to mute the mic when it's not in use. The Amazon Echo has an On/Off button on top of the device. The Google Home's mute button is on the back. And Apple's HomePod requires a voice command: "Hey, Siri, stop listening." (You then use a button to turn the device back on.) For a third-party speaker, consult the owner's manual for instructions."

To learn more, the CR site offers several related resources.


Waymo To Test Driverless Cars In Rainy Florida. Expanded Data Set Available

Waymo, formerly the Google self-driving car project, announced it will test vehicles in rainy Florida:

"... we’re bringing both Waymo vehicles — including our Chrysler Pacificas and a Jaguar I-Pace — to the state to begin heavy rain testing. During the summer months of Hurricane Season, Miami is one of the wettest cities in the U.S., averaging an annual 61.9 inches of rain and experiencing some of the most intense weather conditions in the country. Heavy rain can create a lot of noise for our sensors. Wet roads also may result in other road users behaving differently. Testing allows us to understand the unique driving conditions, and get a better handle on how rain affects our own vehicle movements, too."

"First, we’re spending several weeks driving on a closed course in Naples where we will rigorously test our sensor suite — which includes lidar, cameras, and radar — during the rainiest season in the south. Later in the month, we’ll bring our vehicles to public roads in Miami. They’ll be manually operated by our trained test drivers which will give us the opportunity to collect data of real-world driving situations in heavy rain. Additionally, Florida residents will start seeing a few of our vehicles on highways between Orlando, Tampa, Fort Myers and Miami as we learn about Florida roads."

Prior test locations included: a) Novi, Michigan; b) Kirkland, Washington; c) San Francisco, California; and d) Phoenix, Arizona.

In related news, Waymo announced the availability of an expanded dataset for academic researchers. TechCrunch reported:

"Waymo is opening up its significant stores of autonomous driving data with a new Open Data Set it’s making available for the purposes of research. The data set isn’t for commercial use, but its definition of “research” is fairly broad, and includes researchers at other companies as well as academics... The Waymo Open Data set tries to fill in some of these gaps for their research peers by providing data collected from 1,000 driving segments done by its autonomous vehicles on roads, with each segment representing 20 seconds of continuous driving. It includes a range of different driving conditions, including at night, during rain, at dusk and more. The segments include data collected from five of Waymo’s own proprietary lidars, as well as five standard cameras that face front and to the sides, providing a 360-degree view captured in high resolution, as well as synchronization Waymo uses to fuse lidar and imaging data. Objects, including vehicles, pedestrians, cyclists and signage is all labeled."


ExpressVPN Survey Indicates Americans Care About Privacy. Some Have Already Taken Action

ExpressVPN published the results of its privacy survey. The survey, commissioned by ExpressVPN and conducted by Propeller Insights, included a representative sample of about 1,000 adults in the United States.

Overall, 29.3% of survey respondents said they had used a virtual private network (VPN) or a proxy network. Survey respondents cited three broad reasons for using a VPN service: 1) to avoid surveillance, 2) to access content, and 3) to stay safe online. Detailed survey results about surveillance concerns:

"The most popular reasons to use a VPN are related to surveillance, with 41.7% of respondents aiming to protect against sites seeing their IP, 26.4% to prevent their internet service provider (ISP) from gathering information, and 16.6% to shield against their local government."

Who performs the surveillance matters to consumers. People are more concerned with surveillance by companies than by law enforcement agencies within the U.S. government:

"Among the respondents, 15.9% say they fear the FBI surveillance, and only 6.4% fear the NSA spying on them. People are by far most worried about information gathering by ISPs (23.2%) and Facebook (20.5%). Google spying is more of a worry for people (5.9%) than snooping by employers (2.6%) or family members (5.1%).

Concerns about internet service providers (ISPs) are not surprising, since these telecommunications companies enjoy a unique position enabling them to track all online activities by consumers. Concerns about Facebook are not surprising either, since it tracks both users and non-users, similar to advertising networks. The "protect against sites seeing their IP" finding suggests that consumers, or at least VPN users, want to protect themselves and their devices against advertisers, advertising networks, and privacy-busting mobile apps which track their geolocation.
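To make the "sites seeing their IP" point concrete: every server you contact sees the public IP address your traffic arrives from, and echoing it back is trivial. A minimal sketch using api.ipify.org, a public IP-echo service; with a VPN active, the address printed would be the VPN exit server's, not yours:

```python
import urllib.request

# Any website can read the public IP address your connection arrives from.
# With a VPN enabled, this prints the VPN server's address instead of yours.
with urllib.request.urlopen("https://api.ipify.org") as resp:
    print("Your apparent public IP:", resp.read().decode())
```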

Detailed survey results about content access concerns:

"... 26.7% use [a VPN service] to access their corporate or academic network, 19.9% to access content otherwise not available in their region, and 16.9% to circumvent censorship."

The survey also found that consumers generally trust their mobile devices:

" Only 30.5% of Android users are “not at all” or “not very” confident in their devices. iOS fares slightly better, with 27.4% of users expressing a lack of confidence."

The survey uncovered views about government intervention and policies:

"Net neutrality continues to be popular (70% more respondents support it rather then don’t), but 51.4% say they don’t know enough about it to form an opinion... 82.9% also believe Congress should enact laws to require tech companies to get permission before collecting personal data. Even more, 85.2% believe there should be fines for companies that lose users’ data, and 90.2% believe there should be further fines if the data is misused. Of the respondents, 47.4% believe Congress should go as far as breaking up Facebook and Google."

The survey also captured views about the smart devices (e.g., doorbells, voice assistants, smart speakers) installed in many consumers' homes, which are equipped with always-on cameras and/or microphones:

"... 85% of survey respondents say they are extremely (24.7%), very (23.4%), or somewhat (28.0%) concerned about smart devices monitoring their personal habits... Almost a quarter (24.8%) of survey respondents do not own any smart devices at all, while almost as many (24.4%) always turn off their devices’ microphones if they are not using them. However, one-fifth (21.2%) say they always leave the microphone on. The numbers are similar for camera use..."

There are more statistics and findings in the entire survey report by ExpressVPN. I encourage everyone to read it.


Automated Following: The Technology For Platoons Of Self-Driving Trucks

The MediaPost Connected Thinking blog reported:

"At the Automated Vehicle Symposium in Orlando [in July], one company involved in automated vehicle technology unveiled its vision for using a single driver to drive a pair of vehicles. The approach, named Automated Following, is an advanced platooning system created by Peloton Technology. It uses vehicle-to-vehicle (V2V) technology to let a lead driver control the vehicle and one that is following, in this case large trucks... Platooning works by utilizing V2V communications and radar-based active braking systems, combined with vehicle control algorithms, according to Peloton. The system connects a fully automated follow truck with a driver-controlled lead truck. The V2V link lets the human driven lead truck guide the steering, acceleration and braking of the follow truck..."

To learn more, I visited the Peloton Technology website. The Platoon-Pro section of the site lists the benefits below:

Platooning benefits, from the Platoon-Pro page of the Peloton Technology website (July 20, 2019).

While it's good to read about specific estimates of fuel savings, I was hoping to also read similar estimates about decreased crashes and/or decreased severity of crashes. The page simply listed the safety features.

The site's home page features a "Safety & Platoon" video explaining how a 2-truck platoon might operate. On an interstate highway, both trucks are manned by human drivers. (What happened to the single-driver benefit?) The video also shows what happens when a passenger vehicle briefly "cuts in" between the two trucks of a platoon.

According to the video, the drivers can vary the distance between two trucks in a platoon. That seems to be a good feature.

The technology raises several questions. First, the video features a "cut in" with a small car. What happens when a larger vehicle, such as a bus, cuts in? What happens when several (large) vehicles cut in? Second, just because we humans can do something doesn't mean we should do it. Today's 2-truck platoons could expand to 4- or 5-truck platoons. One wonders about the wisdom. Are highways, country roads, and city streets designed to accommodate truck platoons this large?

Third, my impression: a 2-truck platoon sounds like a short train. In the near future, motorists will have to navigate between and around platoons of self-driving tractor-trailer trucks. Are motorists ready for this? Historically, auto drivers have had difficulty with traditional railroad crossings. The technology seems to require plenty of testing.

Another way of asking the question: is this what we want on our streets and highways, given that existing railroads are already designed for trains, which are, in effect, long platoons?

Fourth, security matters. What's being done to prevent the technology from being abused? Automated following technology in the hands of bad guys could enable terrorists to deliver platoons of car bombs, or platoons of small boats armed with bombs. So, security (against hacking and against theft) is even more of an issue.

What are your opinions?


Google Home Devices Recorded Users' Conversations. Legal Questions Result. Google Says It Is Investigating

Many consumers love the hands-free convenience of smart speakers. However, there are risks with the technology. BBC News reported on Thursday:

"Belgian broadcaster VRT exposed the recordings made by Google Home devices in Belgium and the Netherlands... VRT said the majority of the recordings it reviewed were short clips logged by the Google Home devices as owners used them. However, it said, 153 were "conversations that should never have been recorded" because the wake phrase of "OK Google" was not given. These unintentionally recorded exchanges included: a) blazing rows; b) bedroom chatter; c) parents talking to their children; d) phone calls exposing confidential information. It said it believed the devices logged these conversations because users said a word or phrase that sounded similar to "OK Google" that triggered the device..."

So, conversations that shouldn't have been recorded were recorded by Google Home devices. Consumers use the devices to perform and control a variety of tasks: entertainment (e.g., music, movies, games), internet searches (e.g., cooking recipes), security systems and cameras, thermostats, window blinds and shades, appliances (e.g., coffee makers), online shopping, and more.

The device software doesn't seem accurate, since it mistook similar-sounding phrases for wake phrases. Google calls these errors "false accepts." Google replied in a blog post:

"We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards... We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google."

"The Google Assistant only sends audio to Google after your device detects that you’re interacting with the Assistant—for example, by saying “Hey Google” or by physically triggering the Google Assistant... Rarely, devices that have the Google Assistant built in may experience what we call a “false accept.” This means that there was some noise or words in the background that our software interpreted to be the hotword (like “Ok Google”). We have a number of protections in place to prevent false accepts from occurring in your home... We also provide you with tools to manage and control the data stored in your account. You can turn off storing audio data to your Google account completely, or choose to auto-delete data after every 3 months or 18 months..."

To be fair, Google is not alone. Amazon Alexa devices also record and archive users' conversations. Would you want your bedroom chatter recorded (and stored indefinitely)? Or your conversations with your children? Many persons work remotely from home, so would you want business conversations with coworkers recorded? I think not. Very troubling news.

And, there is more.

This data security incident confirms that human workers listen to recordings by Google Assistant devices. Those workers can be employees or outsourced contractors. Who are these contractors, by name? What methods does Google employ to confirm privacy compliance by contractors? So many unanswered questions.

Also, according to U.S. News & World Report:

"Google's recording feature can be turned off, but doing so means Assistant loses some of its personalized touch. People who turn off the recording feature lose the ability for the Assistant to recognize individual voices and learn your voice pattern. Assistant recording is actually turned off by default — but the technology prompts users to turn on recording and other tools in order to get personalized features."

So, to get the full value of the technology, users must enable recordings. That sounds a lot like surveillance by design. Not good. You'd think that Google's software developers would have built a standard vocabulary, or dictionary, in several languages (recorded by beta-test participants) to test the accuracy of the Assistant software, rather than review users' actual conversations. I guess they found it easier, faster, and cheaper to snoop on users.

Since Google already scans the contents of Gmail users' email messages, maybe this is simply technology creep and Google assumed nobody would mind human reviews of Assistant recordings.

About the review of recordings by human workers, the M.I.T. Technology Review said:

"Legally questionable: Because Google doesn’t inform users that humans review recordings in this way, and thus doesn’t seek their explicit consent for the practice, it’s quite possible that it could be breaking EU data protection regulations. We have asked Google for a response and will update if we hear back."

So, it will be interesting to see what European Union regulators have to say about the recordings and human reviews.

To summarize: consumers have willingly installed perpetual surveillance devices in their homes. What are your views of this data security incident? Do you enable recordings on your smart speakers? Should human workers have access to archives of your recorded conversations?


Aggression Detectors: What They Are, Who Uses Them, And Why

Like most people, you probably have not heard of "aggression detectors." What are these devices? Who makes them? Who uses these devices, and why? Which consumers are affected?

To answer these questions, ProPublica explained who makes the devices and why:

"In response to mass shootings, some schools and hospitals are installing microphones equipped with algorithms. The devices purport to identify stress and anger before violence erupts... By deploying surveillance technology in public spaces like hallways and cafeterias, device makers and school officials hope to anticipate and prevent everything from mass shootings to underage smoking... Besides Sound Intelligence, South Korea-based Hanwha Techwin, formerly part of Samsung, makes a similar “scream detection” product that’s been installed in American schools. U.K.-based Audio Analytic used to sell its aggression- and gunshot-detection software to customers in Europe and the United States... Sound Intelligence CEO Derek van der Vorst said security cameras made by Sweden-based Axis Communications account for 90% of the detector’s worldwide sales, with privately held Louroe making up the other 10%... Mounted inconspicuously on the ceiling, Louroe’s smoke-detector-sized microphones measure aggression on a scale from zero to one. Users choose threshold settings. Any time they’re exceeded for long enough, the detector alerts the facility’s security apparatus, either through an existing surveillance system or a text message pinpointing the microphone that picked up the sound..."

The microphone-equipped sensors have been installed in a variety of industries. The Sound Intelligence website listed prisons, schools, public transportation, banks, healthcare institutes, retail stores, public spaces, and more. Louroe Electronics' site included a similar list, plus law enforcement.

The ProPublica article also discussed several key issues. First, sensor accuracy and its own tests:

"... ProPublica’s analysis, as well as the experiences of some U.S. schools and hospitals that have used Sound Intelligence’s aggression detector, suggest that it can be less than reliable. At the heart of the device is what the company calls a machine learning algorithm. Our research found that it tends to equate aggression with rough, strained noises in a relatively high pitch, like [a student's] coughing. A 1994 YouTube clip of abrasive-sounding comedian Gilbert Gottfried ("Is it hot in here or am I crazy?") set off the detector, which analyzes sound but doesn’t take words or meaning into account... Sound Intelligence and Louroe said they prefer whenever possible to fine-tune sensors at each new customer’s location over a period of days or weeks..."

Second, accuracy concerns:

"[Sound Intelligence CEO] Van der Vorst acknowledged that the detector is imperfect and confirmed our finding that it registers rougher tones as aggressive. He said he “guarantees 100%” that the system will at times misconstrue innocent behavior. But he’s more concerned about failing to catch indicators of violence, and he said the system gives schools and other facilities a much-needed early warning system..."

This is interesting and troubling. Sound Intelligence's position seems to suggest that it is okay for the sensor to misidentify innocent persons as aggressive in order to avoid failing to identify truly aggressive persons seeking to do harm. That sounds like the old saying: the ends justify the means. Not good. The harm to innocent persons matters, especially when they are young students.

Yesterday's blog post described a far better corporate approach. Citing current inaccuracies and biases in the technology, a police body camera maker assembled an ethics board to help guide its decisions regarding the technology, and then followed that board's recommendation not to implement facial recognition in its devices until the inaccuracies and biases are resolved.

What ethics boards have Sound Intelligence, Louroe, and other aggression detector makers utilized?

Third, the use of aggression detectors raises the issue of notice. Are there physical postings on-site at schools, hospitals, healthcare facilities, and other locations? Notice seems appropriate, especially since almost all entities provide notice (e.g., terms of service, privacy policy) for visitors to their websites.

Fourth, privacy concerns:

"Although a Louroe spokesman said the detector doesn’t intrude on student privacy because it only captures sound patterns deemed aggressive, its microphones allow administrators to record, replay and store those snippets of conversation indefinitely..."

I encourage parents of school-age children to read the entire ProPublica article. Concerned parents may demand explanations by school officials about the surveillance activities and devices used within their children's schools. Teachers may also be concerned. Patients at healthcare facilities may also be concerned.

Concerned persons may seek answers to several issues:

  • The vendor selection process, which aggression detector devices were selected, and why
  • Evidence supporting the accuracy of aggression detectors used
  • The school's/hospital's policy, if it has one, covering surveillance devices; plus any posted notices
  • The treatment and rights of persons (e.g., students, patients, visitors, staff) wrongly identified by aggression detector devices
  • Approaches by the vendor and school to improve device accuracy for both types of errors: a) wrongly identified persons, and b) failures to identify truly aggressive or threatening persons
  • How long the school and/or vendor archive recorded conversations
  • What persons have access to the archived recordings
  • The data security methods used by the school and by the vendor to prevent unauthorized access and abuse of archived recordings
  • All entities, by name, with which the school and/or vendor share archived recordings

What are your opinions of aggression detectors? Of device inaccuracy? Of the privacy concerns?


Police Body Cam Maker Says It Won't Use Facial Recognition Due To Problems With The Technology

We've all heard of the following three technologies: police body cameras, artificial intelligence, and facial recognition software. Across the nation, some police departments use body cameras.

Do the three technologies go together -- work well together? The Washington Post reported:

"Axon, the country’s biggest seller of police body cameras, announced that it accepts the recommendation of an ethics board and will not use facial recognition in its devices... the company convened the independent board last year to assess the possible consequences and ethical costs of artificial intelligence and facial-recognition software. The board’s first report, published June 27, concluded that “face recognition technology is not currently reliable enough to ethically justify its use” — guidance that Axon plans to follow."

So, a major U.S. corporation assembled an ethics board to guide its activities. Good. That's not something you read about often. Then, the same corporation followed that board's advice. Even better.

Why reject using facial recognition with body cameras? Axon explained in a statement:

"Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board's recommendation, Axon will not be commercializing face matching products on our body cameras at this time. We do believe face matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms as the board recommends. Our AI team will continue to evaluate the state of face recognition technologies and will keep the board informed about our research..."

Two types of inaccuracies occur with facial recognition software: i) persons falsely identified (a/k/a "false positives"); and ii) persons who should have been identified but were not (a/k/a "false negatives"). The ethics board's report provided detailed explanations:

"The truth is that current technology does not perform as well on people of color compared to whites, on women compared to men, or young people compared to older people, to name a few disparities. These disparities exist in both directions — a greater false positive rate and false negative rate."

The ethics board's report also explained the problem of bias:

"One cause of these biases is statistically unrepresentative training data — the face images that engineers use to “train” the face recognition algorithm. These images are unrepresentative for a variety of reasons but in part because of decisions that have been made for decades that have prioritized certain groups at the cost of others. These disparities make real-world face recognition deployment a complete nonstarter for the Board. Until we have something approaching parity, this technology should remain on the shelf. Policing today already exhibits all manner of disparities (particularly racial). In this undeniable context, adding a tool that will exacerbate this disparity would be unacceptable..."

So, well-meaning software engineers can create bias in their algorithms by using sets of images that are not representative of the population. The ethics board's 42-page report, titled "First Report Of The Axon A.I. & Policing Technology Ethics Board" (Adobe PDF; 3.1 megabytes), listed six general conclusions:

"1: Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras. At the least, face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups. Whether face recognition on body-worn cameras can ever be ethically justifiable is an issue the Board has begun to discuss in the context of the use cases outlined in Part IV.A, and will take up again if and when these prerequisites are met."

"2: When assessing face recognition algorithms, rather than talking about “accuracy,” we prefer to discuss false positive and false negative rates. Our tolerance for one or the other will depend on the use case."

"3: The Board is unwilling to endorse the development of face recognition technology of any sort that can be completely customized by the user. It strongly prefers a model in which the technologies that are made available are limited in what functions they can perform, so as to prevent misuse by law enforcement."

"4: No jurisdiction should adopt face recognition technology without going through open, transparent, democratic processes, with adequate opportunity for genuinely representative public analysis, input, and objection."

"5: Development of face recognition products should be premised on evidence-based benefits. Unless and until those benefits are clear, there is no need to discuss costs or adoption of any particular product."

"6: When assessing the costs and benefits of potential use cases, one must take into account both the realities of policing in America (and in other jurisdictions) and existing technological limitations."

The board included persons with legal, technology, law enforcement, and civil rights backgrounds; plus members from the affected communities. Axon management listened to the report's conclusions and is following the board's recommendations (emphasis added):

"Respond publicly to this report, including to the Board’s conclusions and recommendations regarding face recognition technology. Commit, based on the concerns raised by the Board, not to proceed with the development of face matching products, including adding such capabilities to body-worn cameras or to Axon Evidence (Evidence.com)... Invest company resources to work, in a transparent manner and in tandem with leading independent researchers, to ensure training data are statistically representative of the appropriate populations and that algorithms work equally well across different populations. Continue to comply with the Board’s Operating Principles, including by involving the Board in the earliest possible stages of new or anticipated products. Work with the Board to produce products and services designed to improve policing transparency and democratic accountability, including by developing products in ways that assure audit trails or that collect information that agencies can release to the public about their use of Axon products..."

Admirable. Encouraging. The Washington Post reported:

"San Francisco in May became the first U.S. city to ban city police and agencies from using facial-recognition software... Somerville, Massachusetts became the second, with other cities, including Berkeley and Oakland, Calif., considering similar measures..."

Clearly, this topic bears monitoring. Consumers and government officials are concerned about accuracy and bias. So, too, are some corporations.

And, more news seems likely. Will other technology companies and local governments utilize similar A.I. ethics boards? Will schools, healthcare facilities, and other customers of surveillance devices demand products with accuracy and without bias supported by evidence?


Digital Jail: How Electronic Monitoring Drives Defendants Into Debt

[Editor's note: today's guest post, by reporters at ProPublica, discusses the convergence of law enforcement, outsourcing, smart devices, surveillance, "offender funded" programs, and "e-gentrification." It is reprinted with permission.]

By Ava Kofman, ProPublica

On Oct. 12, 2018, Daehaun White walked free, or so he thought. A guard handed him shoelaces and the $19 that had been in his pocket at the time of his booking, along with a letter from his public defender. The lanky 19-year-old had been sitting for almost a month in St. Louis’ Medium Security Institution, a city jail known as the Workhouse, after being pulled over for driving some friends around in a stolen Chevy Cavalier. When the police charged him with tampering with a motor vehicle — driving a car without its owner’s consent — and held him overnight, he assumed he would be released by morning. He told the police that he hadn’t known that the Chevy, which a friend had lent him a few hours earlier, was stolen. He had no previous convictions. But the $1,500 he needed for the bond was far beyond what he or his family could afford. It wasn’t until his public defender, Erika Wurst, persuaded the judge to lower the amount to $500 cash, and a nonprofit fund, the Bail Project, paid it for him, that he was able to leave the notoriously grim jail. “Once they said I was getting released, I was so excited I stopped listening,” he told me recently. He would no longer have to drink water blackened with mold or share a cell with rats, mice and cockroaches. He did a round of victory pushups and gave away all of the snack cakes he had been saving from the cafeteria.

When he finally read Wurst’s letter, however, he realized there was a catch. Even though Wurst had argued against it, the judge, Nicole Colbert-Botchway, had ordered him to wear an ankle monitor that would track his location at every moment using GPS. For as long as he would wear it, he would be required to pay $10 a day to a private company, Eastern Missouri Alternative Sentencing Services, or EMASS. Just to get the monitor attached, he would have to report to EMASS and pay $300 up front — enough to cover the first 25 days, plus a $50 installation fee.

White didn’t know how to find that kind of money. Before his arrest, he was earning minimum wage as a temp, wrapping up boxes of shampoo. His father was largely absent, and his mother, Lakisha Thompson, had recently lost her job as the housekeeping manager at a Holiday Inn. Raising Daehaun and his four siblings, she had struggled to keep up with the bills. The family bounced between houses and apartments in northern St. Louis County, where, as a result of Jim Crow redlining, most of the area’s black population lives. In 2014, they were living on Canfield Drive in Ferguson when Michael Brown was shot and killed there by a police officer. During the ensuing turmoil, Thompson moved the family to Green Bay, Wisconsin. White felt out of place. He was looked down on for his sagging pants, called the N-word when riding his bike. After six months, he moved back to St. Louis County on his own to live with three of his siblings and stepsiblings in a gray house with vinyl siding.

When White got home on the night of his release, he was so overwhelmed to see his family again that he forgot about the letter. He spent the next few days hanging out with his siblings, his mother, who had returned to Missouri earlier that year, and his girlfriend, Demetria, who was seven months pregnant. He didn’t report to EMASS.

What he didn’t realize was that he had failed to meet a deadline. Typically, defendants assigned to monitors must pay EMASS in person and have the device installed within 24 hours of their release from jail. Otherwise, they have to return to court to explain why they’ve violated the judge’s orders. White, however, wasn’t called back for a hearing. Instead, a week after he left the Workhouse, Colbert-Botchway issued a warrant for his arrest.

Three days later, a large group of police officers knocked on Thompson’s door, looking for information about an unrelated case, a robbery. White and his brother had been making dinner with their mother, and the officers asked them for identification. White’s name matched the warrant issued by Colbert-Botchway. “They didn’t tell me what the warrant was for,” he said. “Just that it was for a violation of my release.” He was taken downtown and held for transfer back to the Workhouse. “I kept saying to myself, ’Why am I locked up?’” he recalled.

The next morning, Thompson called the courthouse to find the answer. She learned that her son had been jailed over his failure to acquire and pay for his GPS monitor. To get him out, she needed to pay EMASS on his behalf.

This seemed absurd to her. When Daehaun was 13, she had worn an ankle monitor after violating probation for a minor theft, but the state hadn’t required her to cover the cost of her own supervision. “This is a 19-year-old coming out of the Workhouse,” she told me recently. “There’s no way he has $300 saved.” Thompson felt that the court was forcing her to choose between getting White out of jail and supporting the rest of her family.

Over the past half-century, the number of people behind bars in the United States jumped by more than 500%, to 2.2 million. This extraordinary rise, often attributed to decades of “tough on crime” policies and harsh sentencing laws, has ensured that even as crime rates have dropped since the 1990s, the number of people locked up and the average length of their stay have increased. According to the Bureau of Justice Statistics, the cost of keeping people in jails and prisons soared to $87 billion in 2015 from $19 billion in 1980, in current dollars.

In recent years, politicians on both sides of the aisle have joined criminal-justice reformers in recognizing mass incarceration as both a moral outrage and a fiscal sinkhole. As ankle bracelets have become compact and cost-effective, legislators have embraced them as an enlightened alternative. More than 125,000 people in the criminal-justice system were supervised with monitors in 2015, compared with just 53,000 people in 2005, according to the Pew Charitable Trusts. Although no current national tally is available, data from several cities — Austin, Texas; Indianapolis; Chicago; and San Francisco — show that this number continues to rise. Last December, the First Step Act, which includes provisions for home detention, was signed into law by President Donald Trump with support from the private prison giants GEO Group and CoreCivic. These corporations dominate the so-called community-corrections market — services such as day-reporting and electronic monitoring — that represents one of the fastest-growing revenue sectors of their industry.

By far the most decisive factor promoting the expansion of monitors is the financial one. The United States government pays for monitors for some of those in the federal criminal-justice system and for tens of thousands of immigrants supervised by Immigration and Customs Enforcement. But states and cities, which incur around 90% of the expenditures for jails and prisons, are increasingly passing the financial burden of the devices onto those who wear them. It costs St. Louis roughly $90 a day to detain a person awaiting trial in the Workhouse, where in 2017 the average stay was 291 days. When individuals pay EMASS $10 a day for their own supervision, it costs the city nothing. A 2014 study by NPR and the Brennan Center found that, with the exception of Hawaii, every state required people to pay at least part of the costs associated with GPS monitoring. Some probation offices and sheriffs run their own monitoring programs — renting the equipment from manufacturers, hiring staff and collecting fees directly from participants. Others have outsourced the supervision of defendants, parolees and probationers to private companies.
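
The incentive becomes concrete with a rough back-of-the-envelope comparison, sketched in Python below using only the figures reported above (the variable names are mine):

    # Rough comparison using only the figures reported above.
    JAIL_COST_PER_DAY = 90        # what St. Louis spends per detainee per day
    AVG_STAY_DAYS = 291           # average Workhouse stay in 2017
    MONITOR_FEE_PER_DAY = 10      # what EMASS charges the defendant, not the city

    city_cost_jail = JAIL_COST_PER_DAY * AVG_STAY_DAYS            # $26,190
    defendant_cost_monitor = MONITOR_FEE_PER_DAY * AVG_STAY_DAYS  # $2,910

    print(f"City pays for an average pretrial jail stay: ${city_cost_jail:,}")
    print("City pays when the defendant wears a monitor: $0")
    print(f"Defendant pays EMASS over the same period: ${defendant_cost_monitor:,}")

Every day on a monitor rather than in the Workhouse converts roughly $90 of public spending into a $10 private fee paid by the defendant.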

“There are a lot of judges who reflexively put people on monitors, without making much of a pretense of seriously weighing it at all,” said Chris Albin-Lackey, a senior legal adviser with Human Rights Watch who has researched private-supervision companies. “The limiting factor is the cost it might impose on the public, but when that expense is sourced out, even that minimal brake on judicial discretion goes out the window.”

Nowhere is the pressure to adopt monitors more pronounced than in places like St. Louis: cash-strapped municipalities with large populations of people awaiting trial. Nationwide on any given day, half a million people sit in crowded and expensive jails because, like Daehaun White, they cannot purchase their freedom.

As the movement to overhaul cash bail has challenged the constitutionality of jailing these defendants, judges and sheriffs have turned to monitors as an appealing substitute. In San Francisco, the number of people released from jail onto electronic monitors tripled after a 2018 ruling forced courts to release more defendants without bail. In Marion County, Indiana, where jail overcrowding is routine, roughly 5,000 defendants were put on monitors last year. “You would be hard-pressed to find bail-reform legislation in any state that does not include the possibility of electronic monitoring,” said Robin Steinberg, the chief executive of the Bail Project.

Yet like the system of wealth-based detention they are meant to help reform, ankle monitors often place poor people in special jeopardy. Across the country, defendants who have not been convicted of a crime are put on “offender funded” payment plans for monitors that sometimes cost more than their bail. And unlike bail, they don’t get the payment back, even if they’re found innocent. Although a federal survey shows that nearly 40% of Americans would have trouble finding $400 to cover an emergency, companies and courts routinely threaten to lock up defendants if they fall behind on payment. In Greenville, South Carolina, pretrial defendants can be sent back to jail when they fall three weeks behind on fees. (An officer for the Greenville County Detention Center defended this practice on the grounds that participants agree to the costs in advance.) In Mohave County, Arizona, pretrial defendants charged with sex offenses have faced rearrest if they fail to pay for their monitors, even if they prove that they can’t afford them. “We risk replacing an unjust cash-bail system,” Steinberg said, “with one just as unfair, inhumane and unnecessary.”

Many local judges, including in St. Louis, do not conduct hearings on a defendant’s ability to pay for private supervision before assigning them to it; those who do often overestimate poor people’s financial means. Without judicial oversight, defendants are vulnerable to private-supervision companies that set their own rates and charge interest when someone can’t pay up front. Some companies even give their employees bonuses for hitting collection targets.

It’s not only debt that can send defendants back to jail. People who may not otherwise be candidates for incarceration can be punished for breaking the lifestyle rules that come with the devices. A survey in California found that juveniles awaiting trial or on probation face especially difficult rules; in one county, juveniles on monitors were asked to follow more than 50 restrictions, including not participating “in any social activity.” For this reason, many advocates describe electronic monitoring as a “net-widener”: Far from serving as an alternative to incarceration, it ends up sweeping more people into the system.

Dressed in a baggy yellow City of St. Louis Corrections shirt, White was walking to the van that would take him back to the Workhouse after his rearrest, when a guard called his name and handed him a bus ticket home. A few hours earlier, his mom had persuaded her sister to lend her the $300 that White owed EMASS. Wurst, his public defender, brought the receipt to court.

The next afternoon, White hitched a ride downtown to the EMASS office, where one of the company’s bond-compliance officers, Nick Buss, clipped a black box around his left ankle. Based in the majority white city of St. Charles, west of St. Louis, EMASS has several field offices throughout eastern Missouri. A former probation and parole officer, Michael Smith, founded the company in 1991 after Missouri became one of the first states to allow private companies to supervise some probationers. (Smith and other EMASS officials declined to comment for this story.)

The St. Louis area has made national headlines for its “offender funded” model of policing and punishment. Stricken by postindustrial decline and the 2008 financial crisis, its municipalities turned to their police departments and courts to make up for shortfalls in revenue. In 2015, the Ferguson Report by the United States Department of Justice put hard numbers to what black residents had long suspected: The police were targeting them with disproportionate arrests, traffic tickets and excessive fines.

EMASS may have saved the city some money, but it also created an extraordinary and arbitrary-seeming new expense for poor defendants. When cities cover the cost of monitoring, they often pay private contractors $2 to $3 a day for the same equipment and services for which EMASS charges defendants $10 a day. To come up with the money, EMASS clients told me, they had to find second jobs, take their children out of day care and cut into disability checks. Others hurried to plead guilty for no better reason than that being on probation was cheaper than paying for a monitor.

At the downtown office, White signed a contract stating that he would charge his monitor for an hour and a half each day and “report” to EMASS with $70 each week. He could shower, but was not to bathe or swim (the monitor is water-resistant, not waterproof). Interfering with the monitor’s functioning was a felony.

White assumed that GPS supervision would prove a minor annoyance. Instead, it was a constant burden. The box was bulky and the size of a fist, so he couldn’t hide it under his jeans. Whenever he left the house, people stared. There were snide comments ("nice bracelet") and cutting jokes. His brothers teased him about having a babysitter. “I’m nobody to watch,” he insisted.

The biggest problem was finding work. Confident and outgoing, White had never struggled to land jobs; after dropping out of high school in his junior year, he flipped burgers at McDonald’s and Steak ’n Shake. To pay for the monitor, he applied to be a custodian at Julia Davis Library, a cashier at Home Depot, a clerk at Menards. The conversation at Home Depot had gone especially well, White thought, until the interviewer casually asked what was on his leg.

To help improve his chances, he enrolled in Mission: St. Louis, a job-training center for people reentering society. One afternoon in January, he and a classmate role-played how to talk to potential employers about criminal charges. White didn’t know how much detail to go into. Should he tell interviewers that he was bringing his pregnant girlfriend some snacks when he was pulled over? He still isn’t sure, because a police officer came looking for him midway through the class. The battery on his monitor had died. The officer sent him home, and White missed the rest of the lesson.

With all of the restrictions and rules, keeping a job on a monitor can be as difficult as finding one. The hours for weekly check-ins at the downtown EMASS office — 1 p.m. to 6 p.m. on Tuesdays and Wednesdays, and 1 p.m. until 5 p.m. on Mondays — are inconvenient for those who work. In 2011, the National Institute of Justice surveyed 5,000 people on electronic monitors and found that 22% said they had been fired or asked to leave a job because of the device. Juawanna Caves, a young St. Louis native and mother of two, was placed on a monitor in December after being charged with unlawful use of a weapon. She said she stopped showing up to work as a housekeeper when her co-workers made her uncomfortable by asking questions and later lost a job at a nursing home because too many exceptions had to be made for her court dates and EMASS check-ins.

Perpetual surveillance also takes a mental toll. Nearly everyone I spoke to who wore a monitor described feeling trapped, as though they were serving a sentence before they had even gone to trial. White was never really sure about what he could or couldn’t do under supervision. In January, when his girlfriend had their daughter, Rylan, White left the hospital shortly after the birth, under the impression that he had a midnight curfew. Later that night, he let his monitor die so that he could sneak back before sunrise to see the baby again.

EMASS makes its money from defendants. But it gets its power over them from judges. It was in 2012 that the judges of the St. Louis court started to use the company’s services — which previously involved people on probation for misdemeanors — for defendants awaiting trial. Last year, the company supervised 239 defendants in the city of St. Louis on GPS monitors, according to numbers provided by EMASS to the court. The alliance with the courts gives the company not just a steady stream of business but a reliable means of recouping debts: Unlike, say, a credit-card company, which must file a civil suit to collect from overdue customers, EMASS can initiate criminal-court proceedings, threatening defendants with another stay in the Workhouse.

In early April, I visited Judge Rex Burlison in his chambers on the 10th floor of the St. Louis civil courts building. A few months earlier, Burlison, who has short gray hair and light blue eyes, had been elected by his peers as presiding judge, overseeing the city’s docket, budget and operations, including the contract with EMASS. It was one of the first warm days of the year, and from the office window I could see sunlight glimmering on the silver Gateway Arch.

I asked Burlison about the court’s philosophy for using pretrial GPS. He stressed that while each case was unique and subject to the judge’s discretion, monitoring was most commonly used for defendants who posed a flight risk, endangered public safety or had an alleged victim. Judges vary in how often they order defendants to wear monitors, and critics have attacked the inconsistency. Colbert-Botchway, the judge who put White on a monitor, regularly made pretrial GPS a condition of release, according to public defenders. (Colbert-Botchway declined to comment.) But another St. Louis city judge, David Roither, told me, “I really don’t use it very often because people here are too poor to pay for it.”

Whenever a defendant on a monitor violates a condition of release, whether related to payment or a curfew or something else, EMASS sends a letter to the court. Last year, Burlison said, the court received two to three letters a week from EMASS about violations. In response, the judge usually calls the defendant in for a hearing. As far as he knew, Burlison said, judges did not incarcerate people simply for failing to pay EMASS debts. “Why would you?” he asked me. When people were put back in jail, he said, there were always other factors at play, like the defendant’s missing a hearing, for instance. (Issuing a warrant for White’s arrest without a hearing, he acknowledged after looking at the docket, was not the court’s standard practice.)

The contract with EMASS allows the court to assign indigent defendants to the company to oversee “at no cost.” Yet neither Burlison nor any of the other current or former judges I spoke with recalled waiving fees when ordering someone to wear an ankle monitor. When I asked Burlison why he didn’t, he said that he was concerned that if he started to make exceptions on the basis of income, the company might stop providing ankle-monitoring services in St. Louis.

“People get arrested because of life choices,” Burlison said. “Whether they’re good for the charge or not, they’re still arrested and have to deal with it, and part of dealing with it is the finances.” To release defendants without monitors simply because they can’t afford the fee, he said, would be to disregard the safety of their victims or the community. “We can’t just release everybody because they’re poor,” he continued.

But many people in the Workhouse awaiting trial are poor. In January, civil rights groups filed suit against the city and the court, claiming that the St. Louis bail system violated the Constitution, in part by discriminating against those who can’t afford to post bail. That same month, the Missouri Supreme Court announced new rules that urged local courts to consider releasing defendants without monetary conditions and to waive fees for poor people placed on monitors. Shortly before the rules went into effect, on July 1, Burlison said that the city intends to shift the way ankle monitors are distributed and plans to establish a fund to help indigent defendants pay for their ankle bracelets. But he said he didn’t know how much money would be in the fund or whether it was temporary or permanent. The need for funding could grow quickly. The pending bail lawsuit has temporarily spurred the release of more defendants from custody, and as a result, public defenders say, the demand for monitors has increased.

Judges are anxious about what people released without posting bail might do once they get out. Several told me that monitors may ensure that the defendants return to court. Not unlike doctors who order a battery of tests for a mildly ill patient to avoid a potential malpractice suit, judges seem to view monitors as a precaution against their faces appearing on the front page of the newspaper. “Every judge’s fear is to let somebody out on recognizance and he commits murder, and then everyone asks, ‘How in the hell was this person let out?’” said Robert Dierker, who served as a judge in St. Louis from 1986 to 2017 and now represents the city in the bail lawsuit. “But with GPS, you can say, ‘Well, I have him on GPS, what else can I do?’”

Critics of monitors contend that their public-safety appeal is illusory: If defendants are intent on harming someone or skipping town, the bracelet, which can be easily removed with a pair of scissors, would not stop them. Studies showing that people tracked by GPS appear in court more reliably are scarce, and research about its effectiveness as a deterrent is inconclusive.

“The fundamental question is, What purpose is electronic monitoring serving?” said Blake Strode, the executive director of ArchCity Defenders, a nonprofit civil rights law firm in St. Louis that is one of several firms representing the plaintiffs in the bail lawsuit. “If the only purpose it’s serving is to make judges feel better because they don’t want to be on the hook if something goes wrong, then that’s not a sensible approach. We should not simply be monitoring for monitoring’s sake.”

Electronic monitoring was first conceived in the early 1960s by Ralph and Robert Gable, identical twins studying at Harvard under the psychologists Timothy Leary and B.F. Skinner, respectively. Influenced in part by Skinner’s theories of positive reinforcement, the Gables rigged up some surplus missile-tracking equipment to monitor teenagers on probation; those who showed up at the right places at the right times were rewarded with movie tickets, limo rides and other prizes.

Although this round-the-clock monitoring was intended as a tool for rehabilitation, observers and participants alike soon recognized its potential to enhance surveillance. All but two of the 16 volunteers in their initial study dropped out, finding the two bulky radio transmitters oppressive. “They felt like it was a prosthetic conscience, and who would want Mother all the time along with you?” Robert Gable told me. Psychology Today labeled the invention a “belt from Big Brother.”

The reality of electronic monitoring today is that Big Brother is watching some groups more than others. No national statistics are available on the racial breakdown of Americans wearing ankle monitors, but all indications suggest that mass supervision, like mass incarceration, disproportionately affects black people. In Cook County, Illinois, for instance, black people make up 24% of the population, and 67% of those on monitors. The sociologist Simone Browne has connected contemporary surveillance technologies like GPS monitors to America’s long history of controlling where black people live, move and work. In her 2015 book, “Dark Matters,” she traces the ways in which “surveillance is nothing new to black folks,” from the branding of enslaved people and the shackling of convict laborers to Jim Crow segregation and the home visits of welfare agencies. These historical inequities, Browne notes, influence where and on whom new tools like ankle monitors are imposed.

For some black families, including White’s, monitoring stretches across generations. Annette Taylor, the director of Ripple Effect, an advocacy group for prisoners and their families based in Champaign, Illinois, has seen her ex-husband, brother, son, nephew and sister’s husband wear ankle monitors over the years. She had to wear one herself, about a decade ago, she said, for driving with a suspended license. “You’re making people a prisoner of their home,” she told me. When her son was paroled and placed on house arrest, he couldn’t live with her, because he was forbidden to associate with people convicted of felonies, including his stepfather, who was also on house arrest.

Some people on monitors are further constrained by geographic restrictions — areas of the city or neighborhood that they can’t enter without triggering an alarm. James Kilgore, a research scholar at the University of Illinois at Champaign-Urbana, has cautioned that these exclusionary zones could lead to “e-gentrification,” effectively keeping people out of more-prosperous neighborhoods. In 2016, after serving four years in prison for drug conspiracy, Bryan Otero wore a monitor as a condition of parole. He commuted from the Bronx to jobs at a restaurant and a department store in Manhattan, but he couldn’t visit his family or doctor because he was forbidden to enter a swath of Manhattan between 117th Street and 131st Street. “All my family and childhood friends live in that area,” he said. “I grew up there.”

Michelle Alexander, a legal scholar and columnist for The Times, has argued that monitoring engenders a new form of oppression under the guise of progress. In her 2010 book, “The New Jim Crow,” she wrote that the term “mass incarceration” should refer to the “system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls — walls that are invisible to the naked eye but function nearly as effectively as Jim Crow laws once did at locking people of color into a permanent second-class citizenship.”

As the cost of monitoring continues to fall, those who are required to submit to it may worry less about the expense and more about the intrusive surveillance. The devices, some of which are equipped with two-way microphones, can give corrections officials unprecedented access to the private lives not just of those monitored but also of their families and friends. GPS location data appeals to the police, who can use it to investigate crimes. Already the goal is both to track what individuals are doing and to anticipate what they might do next. BI Incorporated, an electronic-monitoring subsidiary of GEO Group, has the ability to assign risk scores to the behavioral patterns of those monitored, so that law enforcement can “address potential problems before they happen.” Judges leery of recidivism have begun to embrace risk-assessment tools. As a result, defendants who have yet to be convicted of an offense in court may be categorized by their future chances of reoffending.

The combination of GPS location data with other tracking technologies such as automatic license-plate readers represents an uncharted frontier for finer-grained surveillance. In some cities, police have concentrated these tools in neighborhoods of color. A CityLab investigation found that Baltimore police were more likely to deploy the Stingray — the controversial and secretive cellphone tracking technology — where African Americans lived. In the aftermath of Freddie Gray’s death in 2015, the police spied on Black Lives Matter protesters with face recognition technology. Given this pattern, the term “electronic monitoring” may soon refer not just to a specific piece of equipment but to an all-encompassing strategy.

If the evolution of the criminal-justice system is any guide, it is very likely that the ankle bracelet will go out of fashion. Some GPS monitoring vendors have already started to offer smartphone applications that verify someone’s location through voice and face recognition. These apps, with names like Smart-LINK and Shadowtrack, promise to be cheaper and more convenient than a boxy bracelet. They’re also less visible, mitigating the stigma and normalizing surveillance. While reducing the number of people in physical prison, these seductive applications could, paradoxically, increase its reach. For the nearly 4.5 million Americans on probation or parole, it is not difficult to imagine a virtual prison system as ubiquitous — and invasive — as Instagram or Facebook.

On January 24, exactly three months after White had his monitor installed, his public defender successfully argued in court for its removal. His phone service had been shut off because he had fallen behind on the bill, so his mother told him the good news over video chat.

When White showed up to EMASS a few days later to have the ankle bracelet removed, he said, one of the company’s employees told him that he couldn’t take off his monitor until he paid his debt. White offered him the $35 in his wallet — all the money he had. It wasn’t enough. The employee explained that he needed to pay at least half of the $700 he owed. Somewhere in the contract he had signed months earlier, White had agreed to pay his full balance “at the time of removal.” But as White saw it, the court that had ordered the monitor’s installation was now ordering its removal. Didn’t that count?

“That’s the only thing that’s killing me,” White told me a few weeks later, in early March. “Why are you all not taking it off?” We were in his brother’s room, which, unlike White’s down the hall, had space for a wobbly chair. White sat on the bed, his head resting against the frame, while his brother sat on the other end by the TV, mumbling commands into a headset for the fantasy video game Fortnite. By then, the prosecutor had offered White two to three years of probation in exchange for a plea. (White is waiting to hear if he has been accepted into the city’s diversion program for “youthful offenders,” which would allow him to avoid pleading and wipe the charges from his record in a year.)

White was wearing a loosefitting Nike track jacket and red sweats that bunched up over the top of his monitor. He had recently stopped charging it, and so far, the police hadn’t come knocking. “I don’t even have to have it on,” he said, looking down at his ankle. “But without a job, I can’t get it taken off.” In the last few weeks, he had sold his laptop, his phone and his TV. That cash went to rent, food and his daughter, and what was left barely made a dent in what he owed EMASS.

It was a Monday — a check-in day — but he hadn’t been reporting for the past couple of weeks. He didn’t see the point; he didn’t have the money to get the monitor removed and the office was an hour away by bus. I offered him a ride.

EMASS check-ins take place in a three-story brick building with a low-slung facade draped in ivy. The office doesn’t take cash payments, and a Western Union is conveniently located next door. The other men in the waiting room were also wearing monitors. When it was White’s turn to check in, Buss, the bond-compliance officer, unclipped the band from his ankle and threw the device into a bin, White said. He wasn’t sure why EMASS had now softened its approach, but his debts nonetheless remained.

Buss calculated the money White owed going back to November: $755, plus 10% annual interest. Over the next nine months, EMASS expected him to make monthly payments that would add up to $850 — more than the court had required for his bond. White looked at the receipt and shook his head. “I get in trouble for living,” he said as he walked out of the office. “For being me.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Several States Strengthened Their Data Breach Notification Laws in 2019

Legislatures in several states are improving their existing data breach notification laws to provide stronger protections for consumers.

Fully appreciating the changes requires an understanding of the current legal landscape. The National Conference of State Legislatures summarized the current status:

"All 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches of information involving personally identifiable information. Security breach laws typically have provisions regarding who must comply with the law (e.g., businesses, data/information brokers, government entities, etc.); definitions of “personal information” (e.g., name combined with SSN, drivers license or state ID, account numbers, etc.); what constitutes a breach (e.g., unauthorized acquisition of data); requirements for notice (e.g., timing or method of notice, who must be notified); and exemptions (e.g., for encrypted information)."

The increased legislative activity comes in the aftermath of the massive Equifax breach in 2017, which affected 145.5 million persons. 2018 was worse: more than one billion consumer accounts were affected by multiple data breaches.

Many of the improvements require faster notice to affected persons, so consumers can check their bank and card statements for fraudulent activity and take other security actions. Without prompt notice, fraud can continue and more money can be stolen.

Now, the legislative activity in selected states.

First, legislators amended the requirements of the Maryland Personal Information Protection Act (MPIPA) via House Bill 1154. Maryland Governor Larry Hogan approved the changes, which will go into effect on October 1, 2019. A summary of the changes:

  • Requires businesses that own or license "computerized data that includes personal information of an individual residing in the State" to conduct a good-faith investigation, when they discover or are notified of a breach, to determine whether the personal information has been misused,
  • Requires notification of affected persons within 45 days, and
  • Requires businesses to maintain, for three years, records of their breach investigation and of any determination that notification of affected persons is not required.

Second, Massachusetts Governor Charlie Baker signed legislation in January, which went into effect on April 11, 2019. Changes in the new law: no fees for consumers to place, lift, or remove security freezes; credit monitoring required when Social Security numbers are disclosed in a breach; and an expanded list of requirements when businesses provide notice to the Massachusetts Attorney General and to the Massachusetts Office of Consumer Affairs and Business Regulation (OCABR).

Third, New Jersey amended its breach law. SC Magazine summarized the changes:

"The new law expands the definition of what constitutes personal information that, if exposed in a breach, would require a company to issue a notification. Once S-52 takes effect on Sept. 1, 2019, personal information will also include a “user name, email address, or any other account holder identifying information, in combination with any password or security questions and answer…” the law states."

Fourth, Oregon Governor Kate Brown signed into law Senate Bill 684 on May 24, 2019. The JD Supra site reported:

"The most significant changes are around service providers, who will take on an independent obligation to notify the state Attorney General (AG) about data security breaches. A handful of other, more subtle changes are also included in the amendments, which take effect January 1, 2020... The obligation that service providers notify the AG is triggered by breaches affecting the personal information of over 250 Oregon consumers, or when the number cannot be determined... The new obligation increases the number of parties involved in incident response and notice decisions... This round of amendments adds user names, combined with password or other means of authentication, to the list of notice-triggering personal information... One other amendment also touches service providers. Where previously service providers had to notify business customers “as soon as practicable” after discovering a breach, the amendments set a deadline of 10 days."

Many companies outsource back-office work to vendors. So, the Oregon law keeps pace with common business practices. Readers wanting to learn more can read this blog's Outsourcing section.

A new, separate bill in Oregon covers internet-connected devices, also called the Internet of Things (IoT). Many consumers have installed IoT devices in their homes. According to JD Supra:

"The Oregon connected device security law is largely consistent with California’s new connected device security law, and both take effect January 1, 2020. Both require that manufacturers equip IoT devices with reasonable security features. Under either statute that can mean setting unique passwords for each unit shipped, or requiring end users to set a new password when they first access the device, in order to access the devices remotely from outside the devices’ local area network. This is a floor, not a ceiling, and both laws leave room for other security features..."

When manufacturers sell IoT devices all configured with the same universal password, it is a huge security problem. Bad actors can remotely access consumers' IoT devices to commit identity theft, fraud, and more. Consumers require greater protection, and the new IoT law is a good first step. Readers wanting to learn more can read this blog's Internet of Things section.
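
As a concrete illustration, here is a minimal Python sketch of the two compliance paths both statutes allow, as described above: a unique factory-set password per unit, or a forced password change before remote access is enabled. All names here are hypothetical, not from either law:

    import secrets

    def factory_assign_unique_password() -> str:
        # Path 1: each unit leaves the factory with its own random password,
        # typically printed on the unit's label.
        return secrets.token_urlsafe(12)

    class ConnectedDevice:
        # Path 2: no shared default password; remote access (from outside
        # the local network) stays disabled until the user sets a password.
        def __init__(self):
            self.password = None
            self.remote_access_enabled = False

        def first_time_setup(self, new_password: str) -> None:
            if len(new_password) < 8:
                raise ValueError("password too short")
            self.password = new_password
            self.remote_access_enabled = True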

Fifth, Washington Governor Jay Inslee signed HB 1071 on May 7, which expanded the state’s data breach notification law. The changes become effective March 1, 2020. The National Law Review reported that breach:

"... notices must be provided no more than thirty days after the organization discovers the breach. This applies to notices sent to affected consumers as well as to the state’s Attorney General. The threshold requirement for notice to the Attorney General remains the same—it is only required if 500 or more Washington residents were affected by the breach."

The new law in Washington also expanded the list of sensitive data elements comprising "personal information" when combined with a person's name: birth date; "unique private key used to authenticate" electronic records; passport, military, and student ID numbers; health insurance policy or identification number; medical history, health conditions, diagnoses, and treatments; and biometric data (e.g., fingerprints, retina scans, voiceprints, etc.).
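
Taken together, Washington's amended rules reduce to a simple decision procedure. Below is an illustrative Python sketch of the summary above; the field names are invented, and this is not legal advice:

    from dataclasses import dataclass

    @dataclass
    class Breach:
        wa_residents_affected: int
        days_since_discovery: int

    def washington_notice_duties(breach: Breach) -> list:
        duties = []
        # Notices are due no more than 30 days after discovery.
        if breach.days_since_discovery <= 30:
            duties.append("notify affected consumers within the 30-day window")
        else:
            duties.append("30-day notice deadline already missed")
        # Attorney General notice applies only at 500+ affected residents.
        if breach.wa_residents_affected >= 500:
            duties.append("notify the Washington Attorney General")
        return duties

    print(washington_notice_duties(Breach(wa_residents_affected=750,
                                          days_since_discovery=5)))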

As more states announce amended breach notification laws, this blog will cover those actions.


Your Medical Devices Are Not Keeping Your Health Data to Themselves

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Derek Kravitz and Marshall Allen, ProPublica

Medical devices are gathering more and more data from their users, whether it’s their heart rates, sleep patterns or the number of steps taken in a day. Insurers and medical device makers say such data can be used to vastly improve health care.

But the data that’s generated can also be used in ways that patients don’t necessarily expect. It can be packaged and sold for advertising. It can be anonymized and used by customer support and information technology companies. Or it can be shared with health insurers, who may use it to deny reimbursement. Privacy experts warn that data gathered by insurers could also be used to rate individuals’ health care costs and potentially raise their premiums.

Patients typically have to give consent for their data to be used — so-called “donated data.” But some patients said they weren’t aware that their information was being gathered and shared. And once the data is shared, it can be used in a number of ways. Here are a few of the most popular medical devices that can share data with insurers:

Continuous Positive Airway Pressure, or CPAP, Machines

What Are They?

One of the more popular devices for those with sleep apnea, CPAP machines are covered by insurers after a sleep study confirms the diagnosis. These units, which deliver pressurized air through masks worn by patients as they sleep, collect data and transmit it wirelessly.

What Do They Collect?

It depends on the unit, but CPAP machines can collect data on the number of hours a patient uses the device, the number of interruptions in sleep and the amount of air that leaks from the mask.

Who Gets the Info?

The data may be transmitted to the makers or suppliers of the machines. Doctors may use it to assess whether the therapy is effective. Health insurers may receive the data to track whether patients are using their CPAP machines as directed. They may refuse to reimburse the costs of the machine if the patient doesn’t use it enough. The device maker ResMed said in a statement that patients may withdraw their consent to have their data shared.

Heart Monitors

What Are They?

Heart monitors, oftentimes small, battery-powered devices worn on the body and attached to the skin with electrodes, measure and record the heart’s electrical signals, typically over a few days or weeks, to detect things like irregular heartbeats or abnormal heart rhythms. Some devices implanted under the skin can last up to five years.

What Do They Collect?

Wearable ones include Holter monitors, wired external devices that attach to the skin, and event recorders, which can track slow or fast heartbeats and fainting spells. Data can also be shared from implanted pacemakers, which keep the heart beating properly for those with arrhythmias.

Who Gets the Info?

Low resting heart rates or other abnormal heart conditions are commonly used by insurance companies to place patients in more expensive rate classes. Children undergoing genetic testing are sometimes outfitted with heart monitors before their diagnosis, increasing the odds that their data is used by insurers. This sharing is the most common complaint cited by the World Privacy Forum, a consumer rights group.

Blood Glucose Monitors

What Are They?

Millions of Americans who have diabetes are familiar with blood glucose meters, or glucometers, which take a blood sample on a strip of paper and analyze it for glucose, or sugar, levels. This allows patients and their doctors to monitor their diabetes so they don’t have complications like heart or kidney disease. Blood glucose meters are used by the more than 1.2 million Americans with Type 1 diabetes, which is usually diagnosed in children, teens and young adults.

What Do They Collect?

Blood sugar monitors measure the concentration of glucose in a patient’s blood, a key indicator of proper diabetes management.

Who Gets the Info?

Diabetes monitoring equipment is sold directly to patients, but many still rely on insurer-provided devices. To get reimbursement for blood glucose meters, health insurers will typically ask for at least a month’s worth of blood sugar data.

Lifestyle Monitors

What Are They?

Step counters, medication alerts and trackers, and in-home cameras are among the devices in the increasingly crowded lifestyle health industry.

What Do They Collect?

Many health data research apps are made up of “donated data,” which is provided by consumers and falls outside the federal guidelines requiring that the sharing of personal health data be disclosed and that the data be anonymized to protect the identity of the patient. This data includes everything from counters for the number of steps you take, the calories you eat and the number of flights of stairs you climb to more traditional health metrics, such as pulse and heart rates.

Who Gets the Info?

It varies by device. But the makers of the Fitbit step counter, for example, say they never sell customer personal data or share personal information unless a user requests it; it is part of a legal process; or it is provided on a “confidential basis” to a third-party customer support or IT provider. That said, Fitbit allows users who give consent to share data “with a health insurer or wellness program,” according to a statement from the company.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder. Left untreated, the potentially serious disorder can raise the risk of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
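
That usage rule reduces to a small amount of arithmetic. Here is a minimal Python sketch of the check, assuming a simple list of nightly usage hours (the data layout is invented for illustration):

    def meets_medicare_adherence(nightly_hours) -> bool:
        # nightly_hours[i] = hours of CPAP use on night i, for the first
        # 90 nights after receiving the device.
        first_90 = nightly_hours[:90]
        for start in range(max(1, len(first_90) - 29)):
            window = first_90[start:start + 30]
            if len(window) < 30:
                break
            nights_ok = sum(1 for h in window if h >= 4.0)
            if nights_ok / 30 >= 0.70:   # at least 21 of 30 nights
                return True
        return False

    # 21 compliant nights out of 30 is exactly 70 percent, so this passes:
    print(meets_medicare_adherence([4.5] * 21 + [0.0] * 9))   # True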

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.
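
The arithmetic behind those allegations is straightforward; a quick Python calculation of the markups implied by the figures above:

    items = {
        "disposable filter": (25.68, 7.50),   # (billed to patient, paid to supplier)
        "face mask": (147.78, 95.00),
    }
    for name, (billed, paid) in items.items():
        print(f"{name}: billed ${billed:.2f}, supplier paid ${paid:.2f}, "
              f"markup {billed / paid:.1f}x")
    # disposable filter: billed $25.68, supplier paid $7.50, markup 3.4x
    # face mask: billed $147.78, supplier paid $95.00, markup 1.6x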

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020. Any outstanding issues need to be resolved before then. One such issue is the "trolley problem": a situation where a fatal vehicle crash cannot be avoided and the self-driving car must decide whether to save the passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new. Related issues have surfaced before, and some experts have called for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must be programmed with moral preferences that dictate which person to sacrifice.

The answer is tricky. You might assume: always save the driver, since nobody would buy a self-driving car that would kill its owner. What if the pedestrian is crossing against a 'do not cross' signal within a crosswalk? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elderly, or pregnant? Or doctors? Does it matter if the passenger is older than the pedestrians?

To understand what the public wants (and expects) in self-driving cars, also known as autonomous vehicles (AVs), researchers from MIT asked consumers in a massive, online global survey. The survey included 2 million people from 233 countries and territories, and presented 13 accident scenarios with nine varying factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."

Besides recording the accident choices, the researchers also collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants, in order to identify clusters: countries, territories, or regions whose people share similar "moral preferences."
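
Identifying such clusters is a standard statistical exercise: represent each country as a vector of preference scores and group similar vectors together. Below is a minimal sketch of that idea using scipy's hierarchical clustering; the countries' preference numbers are fabricated for illustration and are not the study's data.

    # Minimal sketch of country-level clustering, in the spirit of the study.
    # The preference vectors below are fabricated for illustration only.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    countries = ["USA", "France", "Japan", "Brazil"]
    # Columns: invented scores for (spare young, spare more lives, spare lawful)
    prefs = np.array([
        [0.75, 0.80, 0.55],
        [0.70, 0.85, 0.60],
        [0.40, 0.70, 0.75],
        [0.65, 0.75, 0.45],
    ])

    # Ward linkage on Euclidean distances between the preference vectors.
    tree = linkage(prefs, method="ward")
    labels = fcluster(tree, t=2, criterion="maxclust")  # ask for two clusters

    for country, label in zip(countries, labels):
        print(country, "-> cluster", label)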

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."

These findings are huge. They make one question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in one cluster may resent an AV maker forcing another cluster's moral preferences upon them. Some clusters or countries may demand vehicles matching their own moral preferences.

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

Those are several important limitations to remember, and there are more. The study didn't address self-driving trucks. Should an AV tractor-trailer semi -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other vehicles are motorcycles, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the pleasure craft matter? What if the other craft is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? What happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. AV makers (e.g., of autos, ships, and trucks) need to explicitly state what their vehicles will (and won't) do, rather than hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries and regions have varying preferences about online privacy, information sharing, and corporate surveillance. What are your opinions?