104 posts categorized "Internet of Things"

ExpressVPN Survey Indicates Americans Care About Privacy. Some Have Already Taken Action

ExpressVPN published the results of its privacy survey. The survey, commissioned by ExpressVPN and conducted by Propeller Insights, included a representative sample of about 1,000 adults in the United States.

Overall, 29.3% of survey respondents said they had used a virtual private network (VPN) or a proxy network. Survey respondents cited three broad reasons for using a VPN service: 1) to avoid surveillance, 2) to access content, and 3) to stay safe online. Detailed survey results about surveillance concerns:

"The most popular reasons to use a VPN are related to surveillance, with 41.7% of respondents aiming to protect against sites seeing their IP, 26.4% to prevent their internet service provider (ISP) from gathering information, and 16.6% to shield against their local government."

Who performs the surveillance matters to consumers. People are more concerned with surveillance by companies than by law enforcement agencies within the U.S. government:

"Among the respondents, 15.9% say they fear the FBI surveillance, and only 6.4% fear the NSA spying on them. People are by far most worried about information gathering by ISPs (23.2%) and Facebook (20.5%). Google spying is more of a worry for people (5.9%) than snooping by employers (2.6%) or family members (5.1%).

Concerns with internet service providers (ISPs) are not surprising, since these telecommunications companies enjoy a unique position enabling them to track all online activities by consumers. Concerns about Facebook are not surprising either, since it tracks both users and non-users, similar to advertising networks. The "protect against sites seeing their IP" finding suggests that consumers, or at least VPN users, want to protect themselves and their devices against advertisers, advertising networks, and privacy-busting mobile apps that track their geo-location.

Detailed survey results about content access concerns:

"... 26.7% use [a VPN service] to access their corporate or academic network, 19.9% to access content otherwise not available in their region, and 16.9% to circumvent censorship."

The survey also found that consumers generally trust their mobile devices:

" Only 30.5% of Android users are “not at all” or “not very” confident in their devices. iOS fares slightly better, with 27.4% of users expressing a lack of confidence."

The survey uncovered views about government intervention and policies:

"Net neutrality continues to be popular (70% more respondents support it rather then don’t), but 51.4% say they don’t know enough about it to form an opinion... 82.9% also believe Congress should enact laws to require tech companies to get permission before collecting personal data. Even more, 85.2% believe there should be fines for companies that lose users’ data, and 90.2% believe there should be further fines if the data is misused. Of the respondents, 47.4% believe Congress should go as far as breaking up Facebook and Google."

The survey also examined views about smart devices (e.g., doorbells, voice assistants, smart speakers) installed in many consumers' homes, since these devices are equipped with always-on cameras and/or microphones:

"... 85% of survey respondents say they are extremely (24.7%), very (23.4%), or somewhat (28.0%) concerned about smart devices monitoring their personal habits... Almost a quarter (24.8%) of survey respondents do not own any smart devices at all, while almost as many (24.4%) always turn off their devices’ microphones if they are not using them. However, one-fifth (21.2%) say they always leave the microphone on. The numbers are similar for camera use..."

There are more statistics and findings in the full survey report by ExpressVPN. I encourage everyone to read it.


Automated Following: The Technology For Platoons Of Self-Driving Trucks

The MediaPost Connected Thinking blog reported:

"At the Automated Vehicle Symposium in Orlando [in July], one company involved in automated vehicle technology unveiled its vision for using a single driver to drive a pair of vehicles. The approach, named Automated Following, is an advanced platooning system created by Peloton Technology. It uses vehicle-to-vehicle (V2V) technology to let a lead driver control the vehicle and one that is following, in this case large trucks... Platooning works by utilizing V2V communications and radar-based active braking systems, combined with vehicle control algorithms, according to Peloton. The system connects a fully automated follow truck with a driver-controlled lead truck. The V2V link lets the human driven lead truck guide the steering, acceleration and braking of the follow truck..."

To learn more, I visited the Peloton Technology website. The Platoon-Pro section of the site lists the benefits below:

Platooning benefits. Platoon-Pro section of the Peloton Technology website. July 20, 2019.

While it's good to read about specific estimates of fuel savings, I was hoping to also read similar estimates about decreased crashes and/or decreased severity of crashes. The page simply listed the safety features.

The site's home page features a "Safety & Platoon" video explaining how a 2-truck platoon might operate. On an interstate highway, both trucks are manned by human drivers. (What happened to the single-driver benefit?) The video also shows what happens when a passenger vehicle briefly "cuts in" between a 2-truck platoon:

According to the video, the drivers can vary the distance between two trucks in a platoon. That seems to be a good feature.

The technology raises several questions. First, the video features a "cut in" with a small car. What happens when a larger vehicle, such as a bus, cuts in? What happens when several (large) vehicles cut in? Second, just because we humans can do something doesn't mean we should do it. Today's 2-truck platoons could expand to 4- or 5-truck platoons in the near future. One wonders about the wisdom of that. Are highways, country roads, and city streets designed to accommodate truck platoons that large?

Third, my impression: a 2-truck platoon sounds like a short train. In the near future, motorists will have to navigate between and around platoons of self-driving tractor-trailer trucks. Are motorists ready for this? Historically, auto drivers have had difficulty with traditional railroad crossings. The technology seems to require plenty of testing.

Another way of asking the question: is this what we want on our streets and highways, given that existing railroads were already designed for trains, which are essentially long platoons on rails?

Fourth, security matters. What's being done to prevent the technology from being abused? Automated following technology in the wrong hands could enable terrorists to deliver platoons of car bombs, or platoons of small boats armed with bombs. So, security (against both hacking and theft) is even more of an issue.

What are your opinions?


Google Home Devices Recorded Users' Conversations. Legal Questions Result. Google Says It Is Investigating

Many consumers love the hands-free convenience of smart speakers. However, there are risks with the technology. BBC News reported on Thursday:

"Belgian broadcaster VRT exposed the recordings made by Google Home devices in Belgium and the Netherlands... VRT said the majority of the recordings it reviewed were short clips logged by the Google Home devices as owners used them. However, it said, 153 were "conversations that should never have been recorded" because the wake phrase of "OK Google" was not given. These unintentionally recorded exchanges included: a) blazing rows; b) bedroom chatter; c) parents talking to their children; d) phone calls exposing confidential information. It said it believed the devices logged these conversations because users said a word or phrase that sounded similar to "OK Google" that triggered the device..."

So, conversations that shouldn't have been recorded were recorded by Google Home devices. Consumers use the devices to perform and control a variety of tasks, such as entertainment (e.g., music, movies, games), internet searches (e.g., cooking recipes), security systems and cameras, thermostats, window blinds and shades, appliances (e.g., coffee makers), online shopping, and more.

The device software doesn't seem accurate, since it mistook similar-sounding phrases for wake phrases. Google calls these errors "false accepts." Google replied in a blog post:

"We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards... We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google."

"The Google Assistant only sends audio to Google after your device detects that you’re interacting with the Assistant—for example, by saying “Hey Google” or by physically triggering the Google Assistant... Rarely, devices that have the Google Assistant built in may experience what we call a “false accept.” This means that there was some noise or words in the background that our software interpreted to be the hotword (like “Ok Google”). We have a number of protections in place to prevent false accepts from occurring in your home... We also provide you with tools to manage and control the data stored in your account. You can turn off storing audio data to your Google account completely, or choose to auto-delete data after every 3 months or 18 months..."

To be fair, Google is not alone. Amazon Alexa devices also record and archive users' conversations. Would you want your bedroom chatter recorded (and stored indefinitely)? Or your conversations with your children? Many persons work remotely from home, so would you want business conversations with coworkers recorded? I think not. Very troubling news.

And, there is more.

This data security incident confirms that human workers listen to recordings by Google Assistant devices. Those workers can be employees or outsourced contractors. Who are these contractors, by name? What methods does Google employ to confirm privacy compliance by contractors? So many unanswered questions.

Also, according to U.S. News & World Report:

"Google's recording feature can be turned off, but doing so means Assistant loses some of its personalized touch. People who turn off the recording feature lose the ability for the Assistant to recognize individual voices and learn your voice pattern. Assistant recording is actually turned off by default — but the technology prompts users to turn on recording and other tools in order to get personalized features."

So, to get the full value of the technology, users must enable recordings. That sounds a lot like surveillance by design. Not good. You'd think that Google's software developers would have built a standard vocabulary, or dictionary, in several languages (with beta-test participants) to test the accuracy of the Assistant software, rather than review users' actual conversations. I guess they viewed it as easier, faster, and cheaper to snoop on users.

Since Google already scans the contents of Gmail users' email messages, maybe this is simply technology creep and Google assumed nobody would mind human reviews of Assistant recordings.

About the review of recordings by human workers, the M.I.T. Technology Review said:

"Legally questionable: Because Google doesn’t inform users that humans review recordings in this way, and thus doesn’t seek their explicit consent for the practice, it’s quite possible that it could be breaking EU data protection regulations. We have asked Google for a response and will update if we hear back."

So, it will be interesting to see what European Union regulators have to say about the recordings and human reviews.

To summarize: consumers have willingly installed perpetual surveillance devices in their homes. What are your views of this data security incident? Do you enable recordings on your smart speakers? Should human workers have access to archives of your recorded conversations?


Aggression Detectors: What They Are, Who Uses Them, And Why

Like most people, you probably have not heard of "aggression detectors." What are these devices? Who makes them? Who uses these devices, and why? Which consumers are affected?

To answer these questions, ProPublica explained who makes the devices and why:

"In response to mass shootings, some schools and hospitals are installing microphones equipped with algorithms. The devices purport to identify stress and anger before violence erupts... By deploying surveillance technology in public spaces like hallways and cafeterias, device makers and school officials hope to anticipate and prevent everything from mass shootings to underage smoking... Besides Sound Intelligence, South Korea-based Hanwha Techwin, formerly part of Samsung, makes a similar “scream detection” product that’s been installed in American schools. U.K.-based Audio Analytic used to sell its aggression- and gunshot-detection software to customers in Europe and the United States... Sound Intelligence CEO Derek van der Vorst said security cameras made by Sweden-based Axis Communications account for 90% of the detector’s worldwide sales, with privately held Louroe making up the other 10%... Mounted inconspicuously on the ceiling, Louroe’s smoke-detector-sized microphones measure aggression on a scale from zero to one. Users choose threshold settings. Any time they’re exceeded for long enough, the detector alerts the facility’s security apparatus, either through an existing surveillance system or a text message pinpointing the microphone that picked up the sound..."

The microphone-equipped sensors have been installed in a variety of industries. The Sound Intelligence website listed prisons, schools, public transportation, banks, healthcare institutes, retail stores, public spaces, and more. Louroe Electronics' site included a similar list, plus law enforcement.

The ProPublica article also discussed several key issues. First, sensor accuracy, based on ProPublica's own tests:

"... ProPublica’s analysis, as well as the experiences of some U.S. schools and hospitals that have used Sound Intelligence’s aggression detector, suggest that it can be less than reliable. At the heart of the device is what the company calls a machine learning algorithm. Our research found that it tends to equate aggression with rough, strained noises in a relatively high pitch, like [a student's] coughing. A 1994 YouTube clip of abrasive-sounding comedian Gilbert Gottfried ("Is it hot in here or am I crazy?") set off the detector, which analyzes sound but doesn’t take words or meaning into account... Sound Intelligence and Louroe said they prefer whenever possible to fine-tune sensors at each new customer’s location over a period of days or weeks..."

Second, accuracy concerns:

"[Sound Intelligence CEO] Van der Vorst acknowledged that the detector is imperfect and confirmed our finding that it registers rougher tones as aggressive. He said he “guarantees 100%” that the system will at times misconstrue innocent behavior. But he’s more concerned about failing to catch indicators of violence, and he said the system gives schools and other facilities a much-needed early warning system..."

This is interesting and troubling. Sound Intelligence's position seems to be that it is okay for the sensor to misidentify innocent persons as aggressive in order to avoid failing to identify truly aggressive persons seeking to do harm. That sounds like the old saying: the ends justify the means. Not good. The harms against innocent persons matter, especially when they are young students.

Yesterday's blog post described a far better corporate approach. A police body camera maker assembled an ethics board to help guide its decisions regarding facial recognition, and then, based upon the technology's current inaccuracies and biases, followed that board's recommendation not to implement facial recognition in its devices. When the inaccuracies and biases are resolved, then it may implement facial recognition.

What ethics boards have Sound Intelligence, Louroe, and other aggression detector makers utilized?

Third, the use of aggression detectors raises the issue of notice. Are there physical postings on-site at schools, hospitals, healthcare facilities, and other locations? Notice seems appropriate, especially since almost all entities provide notice (e.g., terms of service, privacy policy) for visitors to their websites.

Fourth, privacy concerns:

"Although a Louroe spokesman said the detector doesn’t intrude on student privacy because it only captures sound patterns deemed aggressive, its microphones allow administrators to record, replay and store those snippets of conversation indefinitely..."

I encourage parents of school-age children to read the entire ProPublica article. Concerned parents may demand explanations by school officials about the surveillance activities and devices used within their children's schools. Teachers may also be concerned. Patients at healthcare facilities may also be concerned.

Concerned persons may seek answers to several issues:

  • The vendor selection process, which aggression detector devices were selected, and why
  • Evidence supporting the accuracy of aggression detectors used
  • The school's/hospital's policy, if it has one, covering surveillance devices; plus any posted notices
  • The treatment and rights of persons (e.g., students, patients, visitors, staff) wrongly identified by aggression detector devices
  • Approaches by the vendor and school to improve device accuracy for both types of errors: a) wrongly identified persons, and b) failures to identify truly aggressive or threatening persons
  • How long the school and/or vendor archive recorded conversations
  • What persons have access to the archived recordings
  • The data security methods used by the school and by the vendor to prevent unauthorized access and abuse of archived recordings
  • All entities, by name, which the school and/or vendor share archived recordings with

What are your opinions of aggression detectors? Of device inaccuracy? Of the privacy concerns?


Police Body Cam Maker Says It Won't Use Facial Recognition Due To Problems With The Technology

We've all heard of the following three technologies: police body cameras, artificial intelligence, and facial recognition software. Across the nation, some police departments use body cameras.

Do the three technologies work well together? The Washington Post reported:

"Axon, the country’s biggest seller of police body cameras, announced that it accepts the recommendation of an ethics board and will not use facial recognition in its devices... the company convened the independent board last year to assess the possible consequences and ethical costs of artificial intelligence and facial-recognition software. The board’s first report, published June 27, concluded that “face recognition technology is not currently reliable enough to ethically justify its use” — guidance that Axon plans to follow."

So, a major U.S. corporation assembled an ethics board to guide its activities. Good. That's not something you read about often. Then, the same corporation followed that board's advice. Even better.

Why reject using facial recognition with body cameras? Axon explained in a statement:

"Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board's recommendation, Axon will not be commercializing face matching products on our body cameras at this time. We do believe face matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms as the board recommends. Our AI team will continue to evaluate the state of face recognition technologies and will keep the board informed about our research..."

Two types of inaccuracies occur with facial recognition software: i) persons falsely identified (a/k/a "false positives"); and ii) persons who should have been identified but were not (a/k/a "false negatives"). The ethics board's report provided detailed explanations:

"The truth is that current technology does not perform as well on people of color compared to whites, on women compared to men, or young people compared to older people, to name a few disparities. These disparities exist in both directions — a greater false positive rate and false negative rate."

The ethics board's report also explained the problem of bias:

"One cause of these biases is statistically unrepresentative training data — the face images that engineers use to “train” the face recognition algorithm. These images are unrepresentative for a variety of reasons but in part because of decisions that have been made for decades that have prioritized certain groups at the cost of others. These disparities make real-world face recognition deployment a complete nonstarter for the Board. Until we have something approaching parity, this technology should remain on the shelf. Policing today already exhibits all manner of disparities (particularly racial). In this undeniable context, adding a tool that will exacerbate this disparity would be unacceptable..."

So, well-meaning software engineers can create bias in their algorithms by using sets of images that are not representative of the population. The ethics board's 42-page report, titled "First Report Of The Axon A.I. & Policing Technology Ethics Board" (Adobe PDF; 3.1 megabytes), listed six general conclusions:

"1: Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras. At the least, face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups. Whether face recognition on body-worn cameras can ever be ethically justifiable is an issue the Board has begun to discuss in the context of the use cases outlined in Part IV.A, and will take up again if and when these prerequisites are met."

"2: When assessing face recognition algorithms, rather than talking about “accuracy,” we prefer to discuss false positive and false negative rates. Our tolerance for one or the other will depend on the use case."

"3: The Board is unwilling to endorse the development of face recognition technology of any sort that can be completely customized by the user. It strongly prefers a model in which the technologies that are made available are limited in what functions they can perform, so as to prevent misuse by law enforcement."

"4: No jurisdiction should adopt face recognition technology without going through open, transparent, democratic processes, with adequate opportunity for genuinely representative public analysis, input, and objection."

"5: Development of face recognition products should be premised on evidence-based benefits. Unless and until those benefits are clear, there is no need to discuss costs or adoption of any particular product."

"6: When assessing the costs and benefits of potential use cases, one must take into account both the realities of policing in America (and in other jurisdictions) and existing technological limitations."

The board included persons with legal, technology, law enforcement, and civil rights backgrounds; plus members from the affected communities. Axon management listened to the report's conclusions and is following the board's recommendations (emphasis added):

"Respond publicly to this report, including to the Board’s conclusions and recommendations regarding face recognition technology. Commit, based on the concerns raised by the Board, not to proceed with the development of face matching products, including adding such capabilities to body-worn cameras or to Axon Evidence (Evidence.com)... Invest company resources to work, in a transparent manner and in tandem with leading independent researchers, to ensure training data are statistically representative of the appropriate populations and that algorithms work equally well across different populations. Continue to comply with the Board’s Operating Principles, including by involving the Board in the earliest possible stages of new or anticipated products. Work with the Board to produce products and services designed to improve policing transparency and democratic accountability, including by developing products in ways that assure audit trails or that collect information that agencies can release to the public about their use of Axon products..."

Admirable. Encouraging. The Washington Post reported:

"San Francisco in May became the first U.S. city to ban city police and agencies from using facial-recognition software... Somerville, Massachusetts became the second, with other cities, including Berkeley and Oakland, Calif., considering similar measures..."

Clearly, this topic bears monitoring. Consumers and government officials are concerned about accuracy and bias. So, too, are some corporations.

And, more news seems likely. Will other technology companies and local governments utilize similar A.I. ethics boards? Will schools, healthcare facilities, and other customers of surveillance devices demand products with accuracy and without bias supported by evidence?


Digital Jail: How Electronic Monitoring Drives Defendants Into Debt

[Editor's note: today's guest post, by reporters at ProPublica, discusses the convergence of law enforcement, outsourcing, smart devices, surveillance, "offender funded" programs, and "e-gentrification." It is reprinted with permission.]

By Ava Kofman, ProPublica

On Oct. 12, 2018, Daehaun White walked free, or so he thought. A guard handed him shoelaces and the $19 that had been in his pocket at the time of his booking, along with a letter from his public defender. The lanky 19-year-old had been sitting for almost a month in St. Louis’ Medium Security Institution, a city jail known as the Workhouse, after being pulled over for driving some friends around in a stolen Chevy Cavalier. When the police charged him with tampering with a motor vehicle — driving a car without its owner’s consent — and held him overnight, he assumed he would be released by morning. He told the police that he hadn’t known that the Chevy, which a friend had lent him a few hours earlier, was stolen. He had no previous convictions. But the $1,500 he needed for the bond was far beyond what he or his family could afford. It wasn’t until his public defender, Erika Wurst, persuaded the judge to lower the amount to $500 cash, and a nonprofit fund, the Bail Project, paid it for him, that he was able to leave the notoriously grim jail. “Once they said I was getting released, I was so excited I stopped listening,” he told me recently. He would no longer have to drink water blackened with mold or share a cell with rats, mice and cockroaches. He did a round of victory pushups and gave away all of the snack cakes he had been saving from the cafeteria.

When he finally read Wurst’s letter, however, he realized there was a catch. Even though Wurst had argued against it, the judge, Nicole Colbert-Botchway, had ordered him to wear an ankle monitor that would track his location at every moment using GPS. For as long as he would wear it, he would be required to pay $10 a day to a private company, Eastern Missouri Alternative Sentencing Services, or EMASS. Just to get the monitor attached, he would have to report to EMASS and pay $300 up front — enough to cover the first 25 days, plus a $50 installation fee.

White didn’t know how to find that kind of money. Before his arrest, he was earning minimum wage as a temp, wrapping up boxes of shampoo. His father was largely absent, and his mother, Lakisha Thompson, had recently lost her job as the housekeeping manager at a Holiday Inn. Raising Daehaun and his four siblings, she had struggled to keep up with the bills. The family bounced between houses and apartments in northern St. Louis County, where, as a result of Jim Crow redlining, most of the area’s black population lives. In 2014, they were living on Canfield Drive in Ferguson when Michael Brown was shot and killed there by a police officer. During the ensuing turmoil, Thompson moved the family to Green Bay, Wisconsin. White felt out of place. He was looked down on for his sagging pants, called the N-word when riding his bike. After six months, he moved back to St. Louis County on his own to live with three of his siblings and stepsiblings in a gray house with vinyl siding.

When White got home on the night of his release, he was so overwhelmed to see his family again that he forgot about the letter. He spent the next few days hanging out with his siblings, his mother, who had returned to Missouri earlier that year, and his girlfriend, Demetria, who was seven months pregnant. He didn’t report to EMASS.

What he didn’t realize was that he had failed to meet a deadline. Typically, defendants assigned to monitors must pay EMASS in person and have the device installed within 24 hours of their release from jail. Otherwise, they have to return to court to explain why they’ve violated the judge’s orders. White, however, wasn’t called back for a hearing. Instead, a week after he left the Workhouse, Colbert-Botchway issued a warrant for his arrest.

Three days later, a large group of police officers knocked on Thompson’s door, looking for information about an unrelated case, a robbery. White and his brother had been making dinner with their mother, and the officers asked them for identification. White’s name matched the warrant issued by Colbert-Botchway. “They didn’t tell me what the warrant was for,” he said. “Just that it was for a violation of my release.” He was taken downtown and held for transfer back to the Workhouse. “I kept saying to myself, ’Why am I locked up?’” he recalled.

The next morning, Thompson called the courthouse to find the answer. She learned that her son had been jailed over his failure to acquire and pay for his GPS monitor. To get him out, she needed to pay EMASS on his behalf.

This seemed absurd to her. When Daehaun was 13, she had worn an ankle monitor after violating probation for a minor theft, but the state hadn’t required her to cover the cost of her own supervision. “This is a 19-year-old coming out of the Workhouse,” she told me recently. “There’s no way he has $300 saved.” Thompson felt that the court was forcing her to choose between getting White out of jail and supporting the rest of her family.

Over the past half-century, the number of people behind bars in the United States jumped by more than 500%, to 2.2 million. This extraordinary rise, often attributed to decades of “tough on crime” policies and harsh sentencing laws, has ensured that even as crime rates have dropped since the 1990s, the number of people locked up and the average length of their stay have increased. According to the Bureau of Justice Statistics, the cost of keeping people in jails and prisons soared to $87 billion in 2015 from $19 billion in 1980, in current dollars.

In recent years, politicians on both sides of the aisle have joined criminal-justice reformers in recognizing mass incarceration as both a moral outrage and a fiscal sinkhole. As ankle bracelets have become compact and cost-effective, legislators have embraced them as an enlightened alternative. More than 125,000 people in the criminal-justice system were supervised with monitors in 2015, compared with just 53,000 people in 2005, according to the Pew Charitable Trusts. Although no current national tally is available, data from several cities — Austin, Texas; Indianapolis; Chicago; and San Francisco — show that this number continues to rise. Last December, the First Step Act, which includes provisions for home detention, was signed into law by President Donald Trump with support from the private prison giants GEO Group and CoreCivic. These corporations dominate the so-called community-corrections market — services such as day-reporting and electronic monitoring — that represents one of the fastest-growing revenue sectors of their industry.

By far the most decisive factor promoting the expansion of monitors is the financial one. The United States government pays for monitors for some of those in the federal criminal-justice system and for tens of thousands of immigrants supervised by Immigration and Customs Enforcement. But states and cities, which incur around 90% of the expenditures for jails and prisons, are increasingly passing the financial burden of the devices onto those who wear them. It costs St. Louis roughly $90 a day to detain a person awaiting trial in the Workhouse, where in 2017 the average stay was 291 days. When individuals pay EMASS $10 a day for their own supervision, it costs the city nothing. A 2014 study by NPR and the Brennan Center found that, with the exception of Hawaii, every state required people to pay at least part of the costs associated with GPS monitoring. Some probation offices and sheriffs run their own monitoring programs — renting the equipment from manufacturers, hiring staff and collecting fees directly from participants. Others have outsourced the supervision of defendants, parolees and probationers to private companies.

“There are a lot of judges who reflexively put people on monitors, without making much of a pretense of seriously weighing it at all,” said Chris Albin-Lackey, a senior legal adviser with Human Rights Watch who has researched private-supervision companies. “The limiting factor is the cost it might impose on the public, but when that expense is sourced out, even that minimal brake on judicial discretion goes out the window.”

Nowhere is the pressure to adopt monitors more pronounced than in places like St. Louis: cash-strapped municipalities with large populations of people awaiting trial. Nationwide on any given day, half a million people sit in crowded and expensive jails because, like Daehaun White, they cannot purchase their freedom.

As the movement to overhaul cash bail has challenged the constitutionality of jailing these defendants, judges and sheriffs have turned to monitors as an appealing substitute. In San Francisco, the number of people released from jail onto electronic monitors tripled after a 2018 ruling forced courts to release more defendants without bail. In Marion County, Indiana, where jail overcrowding is routine, roughly 5,000 defendants were put on monitors last year. “You would be hard-pressed to find bail-reform legislation in any state that does not include the possibility of electronic monitoring,” said Robin Steinberg, the chief executive of the Bail Project.

Yet like the system of wealth-based detention they are meant to help reform, ankle monitors often place poor people in special jeopardy. Across the country, defendants who have not been convicted of a crime are put on “offender funded” payment plans for monitors that sometimes cost more than their bail. And unlike bail, they don’t get the payment back, even if they’re found innocent. Although a federal survey shows that nearly 40% of Americans would have trouble finding $400 to cover an emergency, companies and courts routinely threaten to lock up defendants if they fall behind on payment. In Greenville, South Carolina, pretrial defendants can be sent back to jail when they fall three weeks behind on fees. (An officer for the Greenville County Detention Center defended this practice on the grounds that participants agree to the costs in advance.) In Mohave County, Arizona, pretrial defendants charged with sex offenses have faced rearrest if they fail to pay for their monitors, even if they prove that they can’t afford them. “We risk replacing an unjust cash-bail system,” Steinberg said, “with one just as unfair, inhumane and unnecessary.”

Many local judges, including in St. Louis, do not conduct hearings on a defendant’s ability to pay for private supervision before assigning them to it; those who do often overestimate poor people’s financial means. Without judicial oversight, defendants are vulnerable to private-supervision companies that set their own rates and charge interest when someone can’t pay up front. Some companies even give their employees bonuses for hitting collection targets.

It’s not only debt that can send defendants back to jail. People who may not otherwise be candidates for incarceration can be punished for breaking the lifestyle rules that come with the devices. A survey in California found that juveniles awaiting trial or on probation face especially difficult rules; in one county, juveniles on monitors were asked to follow more than 50 restrictions, including not participating “in any social activity.” For this reason, many advocates describe electronic monitoring as a “net-widener”: Far from serving as an alternative to incarceration, it ends up sweeping more people into the system.

Dressed in a baggy yellow City of St. Louis Corrections shirt, White was walking to the van that would take him back to the Workhouse after his rearrest, when a guard called his name and handed him a bus ticket home. A few hours earlier, his mom had persuaded her sister to lend her the $300 that White owed EMASS. Wurst, his public defender, brought the receipt to court.

The next afternoon, White hitched a ride downtown to the EMASS office, where one of the company’s bond-compliance officers, Nick Buss, clipped a black box around his left ankle. Based in the majority-white city of St. Charles, west of St. Louis, EMASS has several field offices throughout eastern Missouri. A former probation and parole officer, Michael Smith, founded the company in 1991 after Missouri became one of the first states to allow private companies to supervise some probationers. (Smith and other EMASS officials declined to comment for this story.)

The St. Louis area has made national headlines for its “offender funded” model of policing and punishment. Stricken by postindustrial decline and the 2008 financial crisis, its municipalities turned to their police departments and courts to make up for shortfalls in revenue. In 2015, the Ferguson Report by the United States Department of Justice put hard numbers to what black residents had long suspected: The police were targeting them with disproportionate arrests, traffic tickets and excessive fines.

EMASS may have saved the city some money, but it also created an extraordinary and arbitrary-seeming new expense for poor defendants. When cities cover the cost of monitoring, they often pay private contractors $2 to $3 a day for the same equipment and services for which EMASS charges defendants $10 a day. To come up with the money, EMASS clients told me, they had to find second jobs, take their children out of day care and cut into disability checks. Others hurried to plead guilty for no better reason than that being on probation was cheaper than paying for a monitor.

At the downtown office, White signed a contract stating that he would charge his monitor for an hour and a half each day and “report” to EMASS with $70 each week. He could shower, but was not to bathe or swim (the monitor is water-resistant, not waterproof). Interfering with the monitor’s functioning was a felony.

White assumed that GPS supervision would prove a minor annoyance. Instead, it was a constant burden. The box was bulky and the size of a fist, so he couldn’t hide it under his jeans. Whenever he left the house, people stared. There were snide comments (“nice bracelet”) and cutting jokes. His brothers teased him about having a babysitter. “I’m nobody to watch,” he insisted.

The biggest problem was finding work. Confident and outgoing, White had never struggled to land jobs; after dropping out of high school in his junior year, he flipped burgers at McDonald’s and Steak ’n Shake. To pay for the monitor, he applied to be a custodian at Julia Davis Library, a cashier at Home Depot, a clerk at Menards. The conversation at Home Depot had gone especially well, White thought, until the interviewer casually asked what was on his leg.

To help improve his chances, he enrolled in Mission: St. Louis, a job-training center for people reentering society. One afternoon in January, he and a classmate role-played how to talk to potential employers about criminal charges. White didn’t know how much detail to go into. Should he tell interviewers that he was bringing his pregnant girlfriend some snacks when he was pulled over? He still isn’t sure, because a police officer came looking for him midway through the class. The battery on his monitor had died. The officer sent him home, and White missed the rest of the lesson.

With all of the restrictions and rules, keeping a job on a monitor can be as difficult as finding one. The hours for weekly check-ins at the downtown EMASS office — 1 p.m. to 6 p.m. on Tuesdays and Wednesdays, and 1 p.m. until 5 p.m. on Mondays — are inconvenient for those who work. In 2011, the National Institute of Justice surveyed 5,000 people on electronic monitors and found that 22% said they had been fired or asked to leave a job because of the device. Juawanna Caves, a young St. Louis native and mother of two, was placed on a monitor in December after being charged with unlawful use of a weapon. She said she stopped showing up to work as a housekeeper when her co-workers made her uncomfortable by asking questions and later lost a job at a nursing home because too many exceptions had to be made for her court dates and EMASS check-ins.

Perpetual surveillance also takes a mental toll. Nearly everyone I spoke to who wore a monitor described feeling trapped, as though they were serving a sentence before they had even gone to trial. White was never really sure about what he could or couldn’t do under supervision. In January, when his girlfriend had their daughter, Rylan, White left the hospital shortly after the birth, under the impression that he had a midnight curfew. Later that night, he let his monitor die so that he could sneak back before sunrise to see the baby again.

EMASS makes its money from defendants. But it gets its power over them from judges. It was in 2012 that the judges of the St. Louis court started to use the company’s services — which previously involved people on probation for misdemeanors — for defendants awaiting trial. Last year, the company supervised 239 defendants in the city of St. Louis on GPS monitors, according to numbers provided by EMASS to the court. The alliance with the courts gives the company not just a steady stream of business but a reliable means of recouping debts: Unlike, say, a credit-card company, which must file a civil suit to collect from overdue customers, EMASS can initiate criminal-court proceedings, threatening defendants with another stay in the Workhouse.

In early April, I visited Judge Rex Burlison in his chambers on the 10th floor of the St. Louis civil courts building. A few months earlier, Burlison, who has short gray hair and light blue eyes, had been elected by his peers as presiding judge, overseeing the city’s docket, budget and operations, including the contract with EMASS. It was one of the first warm days of the year, and from the office window I could see sunlight glimmering on the silver Gateway Arch.

I asked Burlison about the court’s philosophy for using pretrial GPS. He stressed that while each case was unique and subject to the judge’s discretion, monitoring was most commonly used for defendants who posed a flight risk, endangered public safety or had an alleged victim. Judges vary in how often they order defendants to wear monitors, and critics have attacked the inconsistency. Colbert-Botchway, the judge who put White on a monitor, regularly made pretrial GPS a condition of release, according to public defenders. (Colbert-Botchway declined to comment.) But another St. Louis city judge, David Roither, told me, “I really don’t use it very often because people here are too poor to pay for it.”

Whenever a defendant on a monitor violates a condition of release, whether related to payment or a curfew or something else, EMASS sends a letter to the court. Last year, Burlison said, the court received two to three letters a week from EMASS about violations. In response, the judge usually calls the defendant in for a hearing. As far as he knew, Burlison said, judges did not incarcerate people simply for failing to pay EMASS debts. “Why would you?” he asked me. When people were put back in jail, he said, there were always other factors at play, like the defendant’s missing a hearing, for instance. (Issuing a warrant for White’s arrest without a hearing, he acknowledged after looking at the docket, was not the court’s standard practice.)

The contract with EMASS allows the court to assign indigent defendants to the company to oversee “at no cost.” Yet neither Burlison nor any of the other current or former judges I spoke with recalled waiving fees when ordering someone to wear an ankle monitor. When I asked Burlison why he didn’t, he said that he was concerned that if he started to make exceptions on the basis of income, the company might stop providing ankle-monitoring services in St. Louis.

“People get arrested because of life choices,” Burlison said. “Whether they’re good for the charge or not, they’re still arrested and have to deal with it, and part of dealing with it is the finances.” To release defendants without monitors simply because they can’t afford the fee, he said, would be to disregard the safety of their victims or the community. “We can’t just release everybody because they’re poor,” he continued.

But many people in the Workhouse awaiting trial are poor. In January, civil rights groups filed suit against the city and the court, claiming that the St. Louis bail system violated the Constitution, in part by discriminating against those who can’t afford to post bail. That same month, the Missouri Supreme Court announced new rules that urged local courts to consider releasing defendants without monetary conditions and to waive fees for poor people placed on monitors. Shortly before the rules went into effect, on July 1, Burlison said that the city intends to shift the way ankle monitors are distributed and plans to establish a fund to help indigent defendants pay for their ankle bracelets. But he said he didn’t know how much money would be in the fund or whether it was temporary or permanent. The need for funding could grow quickly. The pending bail lawsuit has temporarily spurred the release of more defendants from custody, and as a result, public defenders say, the demand for monitors has increased.

Judges are anxious about what people released without posting bail might do once they get out. Several told me that monitors may ensure that the defendants return to court. Not unlike doctors who order a battery of tests for a mildly ill patient to avoid a potential malpractice suit, judges seem to view monitors as a precaution against their faces appearing on the front page of the newspaper. “Every judge’s fear is to let somebody out on recognizance and he commits murder, and then everyone asks, ‘How in the hell was this person let out?’” said Robert Dierker, who served as a judge in St. Louis from 1986 to 2017 and now represents the city in the bail lawsuit. “But with GPS, you can say, ‘Well, I have him on GPS, what else can I do?’”

Critics of monitors contend that their public-safety appeal is illusory: If defendants are intent on harming someone or skipping town, the bracelet, which can be easily removed with a pair of scissors, would not stop them. Studies showing that people tracked by GPS appear in court more reliably are scarce, and research about its effectiveness as a deterrent is inconclusive.

“The fundamental question is, What purpose is electronic monitoring serving?” said Blake Strode, the executive director of ArchCity Defenders, a nonprofit civil rights law firm in St. Louis that is one of several firms representing the plaintiffs in the bail lawsuit. “If the only purpose it’s serving is to make judges feel better because they don’t want to be on the hook if something goes wrong, then that’s not a sensible approach. We should not simply be monitoring for monitoring’s sake.”

Electronic monitoring was first conceived in the early 1960s by Ralph and Robert Gable, identical twins studying at Harvard under the psychologists Timothy Leary and B.F. Skinner, respectively. Influenced in part by Skinner’s theories of positive reinforcement, the Gables rigged up some surplus missile-tracking equipment to monitor teenagers on probation; those who showed up at the right places at the right times were rewarded with movie tickets, limo rides and other prizes.

Although this round-the-clock monitoring was intended as a tool for rehabilitation, observers and participants alike soon recognized its potential to enhance surveillance. All but two of the 16 volunteers in their initial study dropped out, finding the two bulky radio transmitters oppressive. “They felt like it was a prosthetic conscience, and who would want Mother all the time along with you?” Robert Gable told me. Psychology Today labeled the invention a “belt from Big Brother.”

The reality of electronic monitoring today is that Big Brother is watching some groups more than others. No national statistics are available on the racial breakdown of Americans wearing ankle monitors, but all indications suggest that mass supervision, like mass incarceration, disproportionately affects black people. In Cook County, Illinois, for instance, black people make up 24% of the population, and 67% of those on monitors. The sociologist Simone Browne has connected contemporary surveillance technologies like GPS monitors to America’s long history of controlling where black people live, move and work. In her 2015 book, “Dark Matters,” she traces the ways in which “surveillance is nothing new to black folks,” from the branding of enslaved people and the shackling of convict laborers to Jim Crow segregation and the home visits of welfare agencies. These historical inequities, Browne notes, influence where and on whom new tools like ankle monitors are imposed.

For some black families, including White’s, monitoring stretches across generations. Annette Taylor, the director of Ripple Effect, an advocacy group for prisoners and their families based in Champaign, Illinois, has seen her ex-husband, brother, son, nephew and sister’s husband wear ankle monitors over the years. She had to wear one herself, about a decade ago, she said, for driving with a suspended license. “You’re making people a prisoner of their home,” she told me. When her son was paroled and placed on house arrest, he couldn’t live with her, because he was forbidden to associate with people convicted of felonies, including his stepfather, who was also on house arrest.

Some people on monitors are further constrained by geographic restrictions — areas in the city or neighborhood that they can’t go without triggering an alarm. James Kilgore, a research scholar at the University of Illinois at Champaign-Urbana, has cautioned that these exclusionary zones could lead to “e-gentrification,” effectively keeping people out of more-prosperous neighborhoods. In 2016, after serving four years in prison for drug conspiracy, Bryan Otero wore a monitor as a condition of parole. He commuted from the Bronx to jobs at a restaurant and a department store in Manhattan, but he couldn’t visit his family or doctor because he was forbidden to enter a swath of Manhattan between 117th Street and 131st Street. “All my family and childhood friends live in that area,” he said. “I grew up there.”

Michelle Alexander, a legal scholar and columnist for The Times, has argued that monitoring engenders a new form of oppression under the guise of progress. In her 2010 book, “The New Jim Crow,” she wrote that the term “mass incarceration” should refer to the “system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls — walls that are invisible to the naked eye but function nearly as effectively as Jim Crow laws once did at locking people of color into a permanent second-class citizenship.”

As the cost of monitoring continues to fall, those who are required to submit to it may worry less about the expense and more about the intrusive surveillance. The devices, some of which are equipped with two-way microphones, can give corrections officials unprecedented access to the private lives not just of those monitored but also of their families and friends. GPS location data appeals to the police, who can use it to investigate crimes. Already the goal is both to track what individuals are doing and to anticipate what they might do next. BI Incorporated, an electronic-monitoring subsidiary of GEO Group, has the ability to assign risk scores to the behavioral patterns of those monitored, so that law enforcement can “address potential problems before they happen.” Judges leery of recidivism have begun to embrace risk-assessment tools. As a result, defendants who have yet to be convicted of an offense in court may be categorized by their future chances of reoffending.

The combination of GPS location data with other tracking technologies such as automatic license-plate readers represents an uncharted frontier for finer-grained surveillance. In some cities, police have concentrated these tools in neighborhoods of color. A CityLab investigation found that Baltimore police were more likely to deploy the Stingray — the controversial and secretive cellphone tracking technology — where African Americans lived. In the aftermath of Freddie Gray’s death in 2015, the police spied on Black Lives Matter protesters with face recognition technology. Given this pattern, the term “electronic monitoring” may soon refer not just to a specific piece of equipment but to an all-encompassing strategy.

If the evolution of the criminal-justice system is any guide, it is very likely that the ankle bracelet will go out of fashion. Some GPS monitoring vendors have already started to offer smartphone applications that verify someone’s location through voice and face recognition. These apps, with names like Smart-LINK and Shadowtrack, promise to be cheaper and more convenient than a boxy bracelet. They’re also less visible, mitigating the stigma and normalizing surveillance. While reducing the number of people in physical prison, these seductive applications could, paradoxically, increase its reach. For the nearly 4.5 million Americans on probation or parole, it is not difficult to imagine a virtual prison system as ubiquitous — and invasive — as Instagram or Facebook.

On January 24, exactly three months after White had his monitor installed, his public defender successfully argued in court for its removal. His phone service had been shut off because he had fallen behind on the bill, so his mother told him the good news over video chat.

When White showed up to EMASS a few days later to have the ankle bracelet removed, he said, one of the company’s employees told him that he couldn’t take off his monitor until he paid his debt. White offered him the $35 in his wallet — all the money he had. It wasn’t enough. The employee explained that he needed to pay at least half of the $700 he owed. Somewhere in the contract he had signed months earlier, White had agreed to pay his full balance “at the time of removal.” But as White saw it, the court that had ordered the monitor’s installation was now ordering its removal. Didn’t that count?

“That’s the only thing that’s killing me,” White told me a few weeks later, in early March. “Why are you all not taking it off?” We were in his brother’s room, which, unlike White’s down the hall, had space for a wobbly chair. White sat on the bed, his head resting against the frame, while his brother sat on the other end by the TV, mumbling commands into a headset for the fantasy video game Fortnite. By then, the prosecutor had offered White two to three years of probation in exchange for a plea. (White is waiting to hear if he has been accepted into the city’s diversion program for “youthful offenders,” which would allow him to avoid pleading and wipe the charges from his record in a year.)

White was wearing a loosefitting Nike track jacket and red sweats that bunched up over the top of his monitor. He had recently stopped charging it, and so far, the police hadn’t come knocking. “I don’t even have to have it on,” he said, looking down at his ankle. “But without a job, I can’t get it taken off.” In the last few weeks, he had sold his laptop, his phone and his TV. That cash went to rent, food and his daughter, and what was left barely made a dent in what he owed EMASS.

It was a Monday — a check-in day — but he hadn’t been reporting for the past couple of weeks. He didn’t see the point; he didn’t have the money to get the monitor removed and the office was an hour away by bus. I offered him a ride.

EMASS check-ins take place in a three-story brick building with a low-slung facade draped in ivy. The office doesn’t take cash payments, and a Western Union is conveniently located next door. The other men in the waiting room were also wearing monitors. When it was White’s turn to check in, Buss, the bond-compliance officer, unclipped the band from his ankle and threw the device into a bin, White said. He wasn’t sure why EMASS had now softened its approach, but his debts nonetheless remained.

Buss calculated the money White owed going back to November: $755, plus 10% annual interest. Over the next nine months, EMASS expected him to make monthly payments that would add up to $850 — more than the court had required for his bond. White looked at the receipt and shook his head. “I get in trouble for living,” he said as he walked out of the office. “For being me.”

ProPublica is a Pulitzer Prize-winning investigative newsroom.


Several States Strengthened Their Data Breach Notification Laws in 2019

Legislatures in several states are improving their existing data breach notification laws to provide stronger protections for consumers.

To fully appreciate the changes, one needs an understanding of the existing legal landscape. The National Conference of State Legislatures summarized the current status:

"All 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches of information involving personally identifiable information. Security breach laws typically have provisions regarding who must comply with the law (e.g., businesses, data/information brokers, government entities, etc.); definitions of “personal information” (e.g., name combined with SSN, drivers license or state ID, account numbers, etc.); what constitutes a breach (e.g., unauthorized acquisition of data); requirements for notice (e.g., timing or method of notice, who must be notified); and exemptions (e.g., for encrypted information)."

The increased legislative activity comes in the aftermath of the massive Equifax breach in 2017, which affected 145.5 million persons. And 2018 was a terrible year, with more than one billion consumer accounts affected by multiple data breaches.

Many of the improvements across states require faster notification of affected persons, so consumers can check their bank and card statements for fraudulent activity and take other security measures. Without prompt notice, fraud can continue and more money can be stolen.

Now, the legislative activity in selected states.

First, legislators amended the requirements in the Maryland Personal Information Protection Act (MPIPA), or House Bill 1154. Maryland Governor Larry Hogan approved the changes, which will go into effect on October 1, 2019. A summary of the changes:

  • Requires businesses that own or license "computerized data that includes personal information of an individual residing in the State" to conduct a good-faith investigation, when they discover or are notified of a data breach, to determine whether the personal information has been or is likely to be misused,
  • Requires notification of affected persons within 45 days, and
  • Requires businesses to maintain, for three years, records of their breach investigation and of any determination that notification of affected persons is not required.

Second, Massachusetts Governor Charlie Baker signed legislation in January that went into effect on April 11, 2019. Changes in the new law: no fees for consumers to place, lift, or remove security freezes; credit monitoring required when Social Security numbers are disclosed in a breach; and an expanded list of requirements when businesses provide notice to the Massachusetts Attorney General and to the Massachusetts Office of Consumer Affairs and Business Regulation (OCABR).

Third, New Jersey amended its breach law. SC Magazine summarized the changes:

"The new law expands the definition of what constitutes personal information that, if exposed in a breach, would require a company to issue a notification. Once S-52 takes effect on Sept. 1, 2019, personal information will also include a “user name, email address, or any other account holder identifying information, in combination with any password or security questions and answer…” the law states."

Fourth, Oregon Governor Kate Brown signed into law Senate Bill 684 on May 24, 2019. The JD Supra site reported:

"The most significant changes are around service providers, who will take on an independent obligation to notify the state Attorney General (AG) about data security breaches. A handful of other, more subtle changes are also included in the amendments, which take effect January 1, 2020... The obligation that service providers notify the AG is triggered by breaches affecting the personal information of over 250 Oregon consumers, or when the number cannot be determined... The new obligation increases the number of parties involved in incident response and notice decisions... This round of amendments adds user names, combined with password or other means of authentication, to the list of notice-triggering personal information... One other amendment also touches service providers. Where previously service providers had to notify business customers “as soon as practicable” after discovering a breach, the amendments set a deadline of 10 days."

Many companies outsource back-office work to vendors. So, the Oregon law keeps pace with common business practices. Readers wanting to learn more can read this blog's Outsourcing section.

A new, separate bill in Oregon covers internet-connected devices, also called the Internet of Things (IoT). Many consumers have installed IoT devices in their homes. According to JD Supra:

"The Oregon connected device security law is largely consistent with California’s new connected device security law, and both take effect January 1, 2020. Both require that manufacturers equip IoT devices with reasonable security features. Under either statute that can mean setting unique passwords for each unit shipped, or requiring end users to set a new password when they first access the device, in order to access the devices remotely from outside the devices’ local area network. This is a floor, not a ceiling, and both laws leave room for other security features..."

Selling IoT devices all configured with the same universal default password is a huge security problem: bad actors can remotely access consumers’ devices to commit identity theft, fraud, and more. Consumers require greater protection, and the new IoT law is a good first step. Readers wanting to learn more can read this blog's Internet of Things section.
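The two compliance options in the quoted passage — a unique password for each unit shipped, or a forced password change on first access — can be sketched in miniature. This is a toy illustration, not real device firmware; the `Device` class, the `FACTORY_DEFAULT` value, and all names here are invented:

```python
import hashlib
import secrets
from typing import Optional

FACTORY_DEFAULT = "admin"  # hypothetical universal default the new laws aim to eliminate

def hash_password(password: str, salt: bytes) -> bytes:
    # Salted hash; real firmware should use a vetted KDF and never store plaintext.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Device:
    """Toy model of the two compliance options: a unique per-unit password
    set at the factory, or a forced password change on first access."""

    def __init__(self, preprovisioned_password: Optional[str] = None):
        self.salt = secrets.token_bytes(16)
        if preprovisioned_password is not None:
            # Option 1: a unique password for each unit shipped.
            self.password_hash = hash_password(preprovisioned_password, self.salt)
            self.must_change = False
        else:
            # Option 2: remote access stays disabled until the user sets a password.
            self.password_hash = None
            self.must_change = True

    def set_password(self, new_password: str) -> None:
        if new_password == FACTORY_DEFAULT:
            raise ValueError("refusing to set a known default password")
        self.salt = secrets.token_bytes(16)
        self.password_hash = hash_password(new_password, self.salt)
        self.must_change = False

    def remote_login(self, password: str) -> bool:
        # No remote access from outside the local network until a real password exists.
        if self.must_change or self.password_hash is None:
            return False
        return secrets.compare_digest(
            self.password_hash, hash_password(password, self.salt)
        )
```

Either path satisfies the same floor: no unit ever accepts a known, shared default for remote access.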

Fifth, Washington Governor Jay Inslee signed HB 1071 on May 7, 2019, which expanded the state’s data breach notification law. The changes become effective March 1, 2020. The National Law Review reported that breach:

"... notices must be provided no more than thirty days after the organization discovers the breach. This applies to notices sent to affected consumers as well as to the state’s Attorney General. The threshold requirement for notice to the Attorney General remains the same—it is only required if 500 or more Washington residents were affected by the breach."

The new law in Washington also expanded the list of sensitive data elements comprising "personal information" when combined with a person's name: birth date; "unique private key used to authenticate" electronic records; passport, military, and student ID numbers; health insurance policy or identification number; medical history, health conditions, diagnoses, and treatments; and biometric data (e.g., fingerprints, retina scans, voiceprints, etc.).

As more states announce amended breach notification laws, this blog will cover those actions.


Your Medical Devices Are Not Keeping Your Health Data to Themselves

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Derek Kravitz and Marshall Allen, ProPublica

Medical devices are gathering more and more data from their users, whether it’s their heart rates, sleep patterns or the number of steps taken in a day. Insurers and medical device makers say such data can be used to vastly improve health care.

But the data that’s generated can also be used in ways that patients don’t necessarily expect. It can be packaged and sold for advertising. It can be anonymized and used by customer support and information technology companies. Or it can be shared with health insurers, who may use it to deny reimbursement. Privacy experts warn that data gathered by insurers could also be used to rate individuals’ health care costs and potentially raise their premiums.

Patients typically have to give consent for their data to be used — so-called “donated data.” But some patients said they weren’t aware that their information was being gathered and shared. And once the data is shared, it can be used in a number of ways. Here are a few of the most popular medical devices that can share data with insurers:

Continuous Positive Airway Pressure, or CPAP, Machines

What Are They?

One of the more popular devices for those with sleep apnea, CPAP machines are covered by insurers after a sleep study confirms the diagnosis. These units, which deliver pressurized air through masks worn by patients as they sleep, collect data and transmit it wirelessly.

What Do They Collect?

It depends on the unit, but CPAP machines can collect data on the number of hours a patient uses the device, the number of interruptions in sleep and the amount of air that leaks from the mask.

Who Gets the Info?

The data may be transmitted to the makers or suppliers of the machines. Doctors may use it to assess whether the therapy is effective. Health insurers may receive the data to track whether patients are using their CPAP machines as directed. They may refuse to reimburse the costs of the machine if the patient doesn’t use it enough. The device maker ResMed said in a statement that patients may withdraw their consent to have their data shared.

Heart Monitors

What Are They?

Heart monitors, oftentimes small, battery-powered devices worn on the body and attached to the skin with electrodes, measure and record the heart’s electrical signals, typically over a few days or weeks, to detect things like irregular heartbeats or abnormal heart rhythms. Some devices implanted under the skin can last up to five years.

What Do They Collect?

Wearable ones include Holter monitors, wired external devices that attach to the skin, and event recorders, which can track slow or fast heartbeats and fainting spells. Data can also be shared from implanted pacemakers, which keep the heart beating properly for those with arrhythmias.

Who Gets the Info?

Low resting heart rates or other abnormal heart conditions are commonly used by insurance companies to place patients in more expensive rate classes. Children undergoing genetic testing are sometimes outfitted with heart monitors before their diagnosis, increasing the odds that their data is used by insurers. This sharing is the most common complaint cited by the World Privacy Forum, a consumer rights group.

Blood Glucose Monitors

What Are They?

Millions of Americans who have diabetes are familiar with blood glucose meters, or glucometers, which take a blood sample on a strip of paper and analyze it for glucose, or sugar, levels. This allows patients and their doctors to monitor their diabetes so they don’t have complications like heart or kidney disease. Blood glucose meters are used by the more than 1.2 million Americans with Type 1 diabetes, which is usually diagnosed in children, teens and young adults.

What Do They Collect?

Blood sugar monitors measure the concentration of glucose in a patient’s blood, a key indicator of proper diabetes management.

Who Gets the Info?

Diabetes monitoring equipment is sold directly to patients, but many still rely on insurer-provided devices. To get reimbursement for blood glucose meters, health insurers will typically ask for at least a month’s worth of blood sugar data.

Lifestyle Monitors

What Are They?

Step counters, medication alerts and trackers, and in-home cameras are among the devices in the increasingly crowded lifestyle health industry.

What Do They Collect?

Many health data research apps are made up of “donated data,” which is provided by consumers and falls outside of federal guidelines that require the sharing of personal health data be disclosed and anonymized to protect the identity of the patient. This data includes everything from counters for the number of steps you take, the calories you eat and the number of flights of stairs you climb to more traditional health metrics, such as pulse and heart rates.

Who Gets the Info?

It varies by device. But the makers of the Fitbit step counter, for example, say they never sell customer personal data or share personal information unless a user requests it; it is part of a legal process; or it is provided on a “confidential basis” to a third-party customer support or IT provider. That said, Fitbit allows users who give consent to share data “with a health insurer or wellness program,” according to a statement from the company.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder. Left untreated, sleep apnea can increase the risks of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
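The Medicare rule described above is concrete enough to express as a check. A hedged sketch, assuming the device reports one usage figure (hours) per night; the function name and parameters are illustrative, not Medicare's actual implementation:

```python
def medicare_compliant(nightly_hours, window=30, min_hours=4.0, min_fraction=0.70):
    """Return True if, in some consecutive 30-day window, the CPAP was
    used at least 4 hours on at least 70 percent of the nights."""
    if len(nightly_hours) < window:
        return False  # not enough nights recorded to evaluate any window
    for start in range(len(nightly_hours) - window + 1):
        nights = nightly_hours[start:start + window]
        good_nights = sum(1 for hours in nights if hours >= min_hours)
        if good_nights / window >= min_fraction:
            return True
    return False
```

A patient who logs 4+ hours on 21 of 30 consecutive nights (exactly 70%) passes; 20 of 30 does not. This is the kind of automated verdict that, per the article, insurers now compute from wirelessly transmitted usage data.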

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020. Any outstanding issues need to be resolved before then. One such issue is the "trolley problem" - a situation where a fatal vehicle crash cannot be avoided and the self-driving car must decide whether to save the passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new. There are related issues, and some experts have called for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must be programmed with moral preferences which dictate which person to sacrifice.

The answer is tricky. You might assume: always save the driver, since nobody would buy a self-driving car that would kill its owner. But what if the pedestrian is crossing in a crosswalk against a 'do not cross' signal? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elders, or pregnant? Or a doctor? Does it matter if the passenger is older than the pedestrians?
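However those questions are answered, the result is ultimately a policy encoded in software. A toy sketch of what a configurable "moral preference" policy might look like; the weights, field names, and scoring scheme are invented for illustration and do not represent any real AV maker's algorithm:

```python
# Hypothetical policy weights; in practice these would be set by
# manufacturers and regulators, not hard-coded like this.
POLICY = {
    "human": 10.0,   # strongly prefer sparing humans over animals
    "count": 1.0,    # prefer sparing more lives
    "young": 0.5,    # prefer sparing the young
    "lawful": 0.3,   # prefer sparing lawful pedestrians
}


def outcome_score(group):
    """Score the value of sparing a group of characters; higher wins."""
    score = 0.0
    for person in group:
        if person.get("species", "human") == "human":
            score += POLICY["human"]
            # Younger characters earn a larger bonus, tapering to zero at 80.
            score += POLICY["young"] * max(0, 80 - person.get("age", 40)) / 80
            if person.get("lawful", True):
                score += POLICY["lawful"]
        score += POLICY["count"]  # every life, human or not, counts
    return score


def choose(spare_a, spare_b):
    """Pick which group the planner spares under the configured policy."""
    return "A" if outcome_score(spare_a) >= outcome_score(spare_b) else "B"
```

The point of the sketch is not the particular numbers but that some set of numbers must exist: every answer to the questions above becomes a weight the software consults in a fraction of a second.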

To understand what the public wants -- and expects -- in self-driving cars, also known as autonomous vehicles (AV), researchers from MIT asked consumers in a massive online global survey of 2 million people from 233 countries and territories. It presented 13 accident scenarios built from nine varying factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."

Besides recording the accident choices, the researchers also collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants, in order to identify clusters: groups, areas, countries, territories, or regions containing people with similar "moral preferences."
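The clustering step can be pictured with a toy example: represent each country as a vector of its average preferences across the nine factors, then group similar vectors. The researchers used hierarchical clustering; the sketch below substitutes a simple k-means for illustration, and the data is invented:

```python
import random


def kmeans(vectors, k, iters=50, seed=0):
    """Group countries with similar moral-preference vectors (toy k-means)."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)  # pick k starting centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])),
            )
            clusters[nearest].append(v)
        # Recompute each centroid as the mean of its assigned vectors.
        centroids = [
            [sum(col) / len(cluster) for col in zip(*cluster)] if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return clusters
```

Run on preference vectors, groupings like the Western, Eastern, and Southern clusters described below would emerge as vectors that sit near one another.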

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."

This is huge. It makes one question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in clusters may resent an AV maker forcing its moral preferences upon them. Some clusters or countries may demand vehicles matching their moral preferences.

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

These are important limitations to remember, and there are more. The study didn't address self-driving trucks. Should an AV tractor-trailer semi -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other drivers are motorcyclists, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the pleasure craft matter? What if the other craft is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? What happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. AV makers (e.g., autos, ships, trucks) need to state explicitly what their vehicles will (and won't) do -- not hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries or regions have varying preferences about online privacy, sharing information, and corporate surveillance. What are your opinions?


Survey: Most Home Users Satisfied With Voice-Controlled Assistants. Tech Adoption Barriers Exist

Recent survey results reported by MediaPost:

"Amazon Alexa and Google Assistant have the highest satisfaction levels among mobile users, each with an 85% satisfaction rating, followed by Siri and Bixby at 78% and Microsoft’s Cortana at 77%... As found in other studies, virtual assistants are being used for a range of things, including looking up things on the internet (51%), listening to music (48%), getting weather information (46%) and setting a timer (35%)... Smart speaker usage varies, with 31% of Amazon device owners using their speaker at least a few times a week, Google Home owners 25% and Apple HomePod 18%."

Additional survey results are available at Digital Trends and Experian. PwC found:

"Only 10% of surveyed respondents were not familiar with voice-enabled products and devices. Of the 90% who were, the majority have used a voice assistant (72%). Adoption is being driven by younger consumers, households with children, and households with an income of >$100k... Despite being accessible everywhere, three out of every four consumers (74%) are using their mobile voice assistants at home..."

Consumers seem to want privacy when using voice assistants, so usage tends to occur at home and not in public places. Also:

"... the bulk of consumers have yet to graduate to more advanced activities like shopping or controlling other smart devices in the home... 50% of respondents have made a purchase using their voice assistant, and an additional 25% would consider doing so in the future. The majority of items purchased are small and quick... Usage will continue to increase but consistency must improve for wider adoption... Some consumers see voice assistants as a privacy risk... When forced to choose, 57% of consumers said they would rather watch an ad in the middle of a TV show than listen to an ad spoken by their voice assistant..."

Consumers want control over the presentation of advertisements by voice assistants. Desired control options include: skip, select, never while listening to music, only at pre-approved times, customized based upon interests, seamless integration, and matched to preferred brands. Thirty-eight percent of survey respondents said they "don't want something 'listening in' on my life all the time."

What are your preferences with voice assistants? Any privacy concerns?


No, a Teen Did Not Hack a State Election

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about the integrity and security of voting systems in the United States. It is reprinted with permission.]

By Lilia Chang, ProPublica

Headlines from Def Con, a hacking conference held this month in Las Vegas, might have left some thinking that infiltrating state election websites and affecting the 2018 midterm results would be child’s play.

Articles reported that teenage hackers at the event were able to “crash the upcoming midterm elections” and that it had taken “an 11-year-old hacker just 10 minutes to change election results.” A first-person account by a 17-year-old in Politico Magazine described how he shut down a website that would tally votes in November, “bringing the election to a screeching halt.”

But now, elections experts are raising concerns that misunderstandings about the event — many of them stoked by its organizers — have left people with a distorted sense of its implications.

On a website published before r00tz Asylum, the youth section of Def Con, organizers indicated that students would attempt to hack exact duplicates of state election websites, referring to them as “replicas” or “exact clones.” (The language was scaled back after the conference to simply say “clones.”)

Instead, students were working with look-alikes created for the event that had vulnerabilities they were coached to find. Organizers provided them with cheat sheets, and adults walked the students through the challenges they would encounter.

Josh Franklin, an elections expert formerly at the National Institute of Standards and Technology and a speaker at Def Con, called the websites “fake.”

“When I learned that they were not using exact copies and pains hadn’t been taken to more properly replicate the underlying infrastructure, I was definitely saddened,” Franklin said.

Franklin and David Becker, the executive director of the Center for Election Innovation & Research, also pointed out that while state election websites report voting results, they do not actually tabulate votes. This information is kept separately and would not be affected if hackers got into sites that display vote totals.

“It would be lunacy to directly connect the election management system, of which the tabulation system is a part of, to the internet,” Franklin said.

Jake Braun, the co-organizer of the event, defended the attention-grabbing way it was framed, saying the security issues of election websites haven’t gotten enough attention. Those questioning the technical details of the mock sites and whether their vulnerabilities were realistic are missing the point, he insisted.

“We want elections officials to start putting together communications redundancy plans so they have protocol in place to communicate with voters and the media and so on if this happens on election day,” he said.

Braun provided ProPublica with a report that r00tz plans to circulate more widely that explains the technical underpinnings of the mock websites. They were designed to be vulnerable to a SQL injection attack, a common hack, the report says.
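For readers unfamiliar with the attack, a SQL injection works by splicing untrusted user input directly into a database query. A minimal sketch (hypothetical table and queries, not the actual r00tz site code) shows both the flaw and the standard fix:

```python
import sqlite3

# Hypothetical vote-reporting table, standing in for a mock election website.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (county TEXT, votes INTEGER)")
conn.executemany("INSERT INTO results VALUES (?, ?)",
                 [("Clark", 12345), ("Washoe", 6789)])

def lookup_vulnerable(county):
    # UNSAFE: user input is pasted directly into the SQL string.
    query = f"SELECT county, votes FROM results WHERE county = '{county}'"
    return conn.execute(query).fetchall()

def lookup_safe(county):
    # SAFE: a parameterized query treats input as data, never as SQL code.
    return conn.execute(
        "SELECT county, votes FROM results WHERE county = ?", (county,)
    ).fetchall()

payload = "x' OR '1'='1"           # classic injection payload
print(lookup_vulnerable(payload))  # dumps every row in the table
print(lookup_safe(payload))        # [] -- no county has that literal name
```

As the experts quoted above note, reporting sites only display results, so even a successful injection like this could deface published totals but could not touch the actual vote tabulation.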

Franklin acknowledged that some state election reporting sites do indeed have this vulnerability, but he said that states have been aware of it for months and are in the process of protecting against it.

Becker said the details spelled out in the r00tz report would have been helpful to have from the start.

“We have to be really careful about adding to the hysteria about our election system not working or being too vulnerable because that’s exactly what someone like President Putin wants,” Becker said. Instead, Becker said that “we should find real vulnerabilities and address them as elections officials are working really hard to do.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Study: Most Consumers Fear Companies Will 'Go Too Far' With Artificial Intelligence Technologies

New research has found that consumers are conflicted about artificial intelligence (AI) technologies. A national study of 697 adults during the Spring of 2018 by Elicit Insights found:

"Most consumers are conflicted about AI. They know there are benefits, but recognize the risks, too."

Several specific findings:

  • 73 percent of survey participants (e.g., Strongly Agree, Agree) fear "some companies will go too far with AI"
  • 64 percent agreed (e.g., Strongly Agree, Agree) with the statement: "I'm concerned about how companies will use artificial intelligence and the information they have about me to engage with me"
  • "Six out of 10 Americans agree or strongly agree that AI will never be as good as human interaction. Human interaction remains sacred and there is concern with at least a third of consumers that AI won’t stay focused on mundane tasks and leave the real thinking to humans."

Many of the concerns center around control. As AI applications become smarter and more powerful, they are able to operate independently, without human -- users' -- authorization. When presented with several smart-refrigerator scenarios, the less control users had over purchases, the fewer survey participants viewed the AI as a benefit:

Smart refrigerator and food purchase scenarios. AI study by Elicit Insights.
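The sliding scale of autonomy in those scenarios boils down to one question: is the user in the loop before money is spent? A toy sketch (hypothetical function names; the study itself contains no code) illustrates the distinction respondents reacted to:

```python
# Toy model of the survey's smart-refrigerator scenarios (purely
# illustrative; invented for this post, not taken from the study).
def reorder_milk(mode, user_approves=False):
    """mode 'suggest' keeps the user in the loop; 'auto' does not."""
    if mode == "suggest":
        return "purchased" if user_approves else "skipped"
    return "purchased"  # fully autonomous: the AI buys without asking

print(reorder_milk("suggest", user_approves=False))  # skipped
print(reorder_milk("auto"))                          # purchased
```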

AI technologies can also be used to find and present possible matches for online dating services. Again, survey participants expressed similar control concerns:

Dating service scenarios. AI study by Elicit Insights.

Download Elicit Insights' complete Artificial Intelligence survey (Adobe PDF). What are your opinions? Do you prefer AI applications that operate independently, or which require your authorization?


Study: Performance Issues Impede IoT Device Trust And Usage Worldwide By Consumers

A global survey recently uncovered interesting findings about consumers' usage of, and satisfaction with, IoT (Internet of Things) devices. The survey of consumers in several countries found that 52 percent already use IoT devices, and 64 percent of users have already encountered performance issues with their devices.

Dynatrace, a software intelligence company, commissioned Opinium Research to conduct a global survey of 10,002 participants, with 2,000 in the United States, 2,000 in the United Kingdom, and 1,000 respondents each in France, Germany, Australia, Brazil, Singapore, and China. Dynatrace announced several findings, chiefly:

"On average, consumers experience 1.5 digital performance problems every day, and 62% of people fear the number of problems they encounter, and the frequency, will increase due to the rise of IoT."

That seems like plenty of poor performance. Some findings were specific to travel, healthcare, and in-home retail sectors. Regarding travel:

"The digital performance failures consumers are already experiencing with everyday technology is potentially making them wary of other uses of IoT. 85% of respondents said they are concerned that self-driving cars will malfunction... 72% feel it is likely software glitches in self-driving cars will cause serious injuries and fatalities... 84% of consumers said they wouldn’t use self-driving cars due to a fear of software glitches..."

Regarding healthcare:

"... 62% of consumers stated they would not trust IoT devices to administer medication; this sentiment is strongest in the 55+ age range, with 74% expressing distrust. There were also specific concerns about the use of IoT devices to monitor vital signs, such as heart rate and blood pressure. 85% of consumers expressed concern that performance problems with these types of IoT devices could compromise clinical data..."

Regarding in-home retail devices:

"... 83% of consumers are concerned about losing control of their smart home due to digital performance problems... 73% of consumers fear being locked in or out of the smart home due to bugs in smart home technology... 68% of consumers are worried they won’t be able to control the temperature in the smart home due to malfunctions in smart home technology... 81% of consumers are concerned that technology or software problems with smart meters will lead to them being overcharged for gas, electricity, and water."

The findings are a clear call to IoT makers to improve the performance, security, and reliability of their internet-connected devices. To learn more, download the full Dynatrace report titled, "IoT Consumer Confidence Report: Challenges for Enterprise Cloud Monitoring on the Horizon."


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, the facial recognition software by Amazon, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes aside about politicians, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

List of the 28 Congressional legislators misidentified by Amazon Rekognition in the ACLU study.

With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer at Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color. These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?
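For the record, the implied error rate cited earlier is simple arithmetic worth making explicit:

```python
# 28 false matches against the 535 members of Congress in the ACLU test.
false_matches = 28
members = 535
error_rate = false_matches / members
print(f"{error_rate:.2%}")  # 5.23%
```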

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon performed on Rekognition, with details by race, gender, and age;
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition;
  • The list of all law enforcement agencies which have used or currently use Rekognition;
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices;
  • A description of the protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons;
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA);
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses;
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. "Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time... The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."
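The dispute over the 80 versus 95 percent setting is easy to illustrate. The sketch below uses invented similarity scores (this is not Rekognition's API or its actual output); it shows only how raising the threshold shrinks the set of reported matches:

```python
# Invented similarity scores (0-100) for six candidate face matches.
candidate_scores = [98.1, 91.4, 86.7, 82.3, 79.5, 64.0]

def matches(scores, threshold):
    """Keep only candidates at or above the confidence threshold."""
    return [s for s in scores if s >= threshold]

print(len(matches(candidate_scores, 80)))  # 4 matches at the default setting
print(len(matches(candidate_scores, 95)))  # 1 match at the stricter setting
```

A stricter threshold produces fewer false matches but can also miss true ones, which is exactly why independent accuracy testing and published use policies matter.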

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend 95 percent similarity scores, but the public does not know if police departments actually use the higher setting, or consistently do so across all types of criminal investigations. Plus, the CPT report cast doubt on the human "backstop" intervention, which Amazon's reply seems to rely heavily upon.

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships, trucks, and preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important take-away. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's in a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


The DIY Revolution: Consumers Alter Or Build Items Previously Not Possible. Is It A Good Thing?

Recent advances in technology allow consumers to alter, customize, or build locally items previously not possible. These items are often referred to as Do-It-Yourself (DIY) products. You've probably heard DIY used in home repair and renovation projects on television. DIY now happens in some unexpected areas. Today's blog post highlights two areas.

DIY Glucose Monitors

Earlier this year, CNet described the bag an eight-year-old patient carries with her everywhere daily:

"... It houses a Dexcom glucose monitor and a pack of glucose tablets, which work in conjunction with the sensor attached to her arm and the insulin pump plugged into her stomach. The final item in her bag was an iPhone 5S. It's unusual for such a young child to have a smartphone. But Ruby's iPhone, which connects via Bluetooth to her Dexcom monitor, allowing [her mother] to read it remotely, illustrates the way technology has transformed the management of diabetes from an entirely manual process -- pricking fingers to measure blood sugar, writing down numbers in a notebook, calculating insulin doses and injecting it -- to a semi-automatic one..."

Some people have access to these new technologies, but many don't. Others want more connectivity and better capabilities. So, some creative "hacking" has resulted:

"There are people who are unwilling to wait, and who embrace unorthodox methods. (You can find them on Twitter via the hashtag #WeAreNotWaiting.) The Nightscout Foundation, an online diabetes community, figured out a workaround for the Pebble Watch. Groups such as Nightscout, Tidepool and OpenAPS are developing open-source fixes for diabetes that give major medical tech companies a run for their money... One major gripe of many tech-enabled diabetes patients is that the two devices they wear at all times -- the monitor and the pump -- don't talk to each other... diabetes will never be a hands-off disease to manage, but an artificial pancreas is basically as close as it gets. The FDA approved the first artificial pancreas -- the Medtronic 670G -- in October 2017. But thanks to a little DIY spirit, people have had them for years."

CNet shared the experience of another tech-enabled patient:

"Take Dana Lewis, founder of the open-source artificial pancreas system, or OpenAPS. Lewis started hacking her glucose monitor to increase the volume of the alarm so that it would wake her in the night. From there, Lewis tinkered with her equipment until she created a closed-loop system, which she's refined over time in terms of both hardware and algorithms that enable faster distribution of insulin. It has massively reduced the "cognitive burden" on her everyday life... JDRF, one of the biggest global diabetes research charities, said in October that it was backing the open-source community by launching an initiative to encourage rival manufacturers like Dexcom and Medtronic to open their protocols and make their devices interoperable."
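The closed-loop idea Lewis describes can be sketched in a few lines. The sketch below is purely illustrative (invented numbers, not OpenAPS's actual algorithm, and certainly not medical guidance); real systems also model insulin already on board, glucose trends, and hard safety limits:

```python
# Purely illustrative correction-dose sketch; NOT a real dosing algorithm.
TARGET_MG_DL = 110   # hypothetical target glucose level
ISF = 50             # hypothetical sensitivity: 1 unit drops glucose 50 mg/dL

def suggest_correction(current_mg_dl):
    """Suggest insulin units for a reading above target; never below zero."""
    excess = current_mg_dl - TARGET_MG_DL
    return max(0.0, round(excess / ISF, 2))

print(suggest_correction(210))  # 2.0 units for a reading 100 mg/dL high
print(suggest_correction(100))  # 0.0 -- no correction below target
```

Even this toy version makes the article's risk point concrete: a bad sensor reading fed into the loop produces a bad dose, which is why the safety stakes of DIY hardware are so high.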

Convenience and affordability are huge drivers. As you might have guessed, there are risks:

"Hacking a glucose monitor is not without risk -- inaccurate readings, failed alarms or the wrong dose of insulin distributed by the pump could have fatal consequences... Lewis and the OpenAPS community encourage people to embrace the build-your-own-pancreas method rather than waiting for the tech to become available and affordable."

Are DIY glucose monitors a good thing? Some patients think so as a way to achieve convenient and affordable healthcare solutions. That might lead you to conclude anything DIY is an improvement. Right? Keep reading.

DIY Guns

Got a 3-D printer? If so, then you can print your own DIY gun. How did this happen? How did the USA get here? Wired explained:

"Five years ago, 25-year-old radical libertarian Cody Wilson stood on a remote central Texas gun range and pulled the trigger on the world’s first fully 3-D-printed gun... he drove back to Austin and uploaded the blueprints for the pistol to his website, Defcad.com... In the days after that first test-firing, his gun was downloaded more than 100,000 times. Wilson made the decision to go all in on the project, dropping out of law school at the University of Texas, as if to confirm his belief that technology supersedes law..."

The law intervened. Wilson stopped, took down his site, and then pursued a legal remedy:

"Two months ago, the Department of Justice quietly offered Wilson a settlement to end a lawsuit he and a group of co-plaintiffs have pursued since 2015 against the United States government. Wilson and his team of lawyers focused their legal argument on a free speech claim: They pointed out that by forbidding Wilson from posting his 3-D-printable data, the State Department was not only violating his right to bear arms but his right to freely share information. By blurring the line between a gun and a digital file, Wilson had also successfully blurred the lines between the Second Amendment and the First."

So, now you... anybody with an internet connection and a 3-D printer (and a computer-controlled milling machine for some advanced parts)... can produce their own DIY gun. No registration required. No licenses or permits. No training required. And, that's anyone anywhere in the world.

Oh, there's more:

"The Department of Justice's surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet. In the meantime, it gives Wilson a unique license to publish data about those weapons anywhere he chooses."

As you might have guessed, Wilson is re-launching his website, but this time with blueprints for more DIY weaponry besides pistols: AR-15 rifles and other semi-automatic firearms. So, it will be easier for people to skirt federal and state gun laws. Is that a good thing?

You probably have some thoughts and concerns. I do. There are plenty of issues and questions. Are DIY products a good thing? Who is liable? How should laws be upgraded? How can society facilitate one set of DIY products and not the other? What related issues do you see? Any other notable DIY products?


North Carolina Provides Its Residents With an Opt-out From Smart Meter Installations. Will It Last?

Wise consumers know how smart utility meters operate. Unlike conventional analog meters, which must be read on-site by a technician from the utility, smart meters perform two-way digital communication with the service provider, have memory to digitally store a year's worth of your usage, and transmit your usage at regular intervals (e.g., every 15 minutes). Plus, consumers have little or no control over smart meters installed on their property.

There is some good news. Residents in North Carolina can say "no" to smart meter installations by their power company. The Charlotte Observer reported:

"Residents who say they suffer from acute sensitivity to radio-frequency waves can say no to Duke's smart meters — as long as they have a notarized doctor's note to attest to their rare condition. The N.C. Utilities Commission, which sets utility rates and rules, created the new standard on Friday, possibly making North Carolina the first state to limit the smart meter technology revolution by means of a medical opinion... Duke Energy's two North Carolina utility subsidiaries are in the midst of switching its 3.4 million North Carolina customers to smart meters..."

While it currently is free to opt out and get an analog meter instead, that could change:

"... Duke had proposed charging customers extra if they refused a smart meter. Duke wanted to charge an initial fee of $150 plus $11.75 a month to cover the expense of sending someone out to that customer's house to take a monthly meter reading. But the Utilities Commission opted to give the benefit of the doubt to customers with smart meter health issues until the Federal Communications Commission determines the health risks of the devices."

The Smart Grid Awareness blog contains more information about activities in North Carolina. There are privacy concerns with smart meters, which can be used to profile consumers with a high degree of accuracy and detail. One can easily deduce the number of persons living in the dwelling, when they are home and for how long, which electric appliances are used when they are home, the presence of security and alarm systems, and any special conditions (e.g., in-home medical equipment, baby appliances, etc.).
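To see why interval data is so revealing, consider a minimal sketch (with invented numbers and a made-up threshold) of how 15-minute meter readings can flag when someone is likely home:

```python
# Hypothetical illustration only: inferring occupancy from 15-minute smart
# meter readings (kWh per interval). The baseline and threshold are invented.
BASELOAD_KWH = 0.12  # assumed always-on draw: fridge, standby devices, etc.

def likely_home(readings):
    """Flag each interval whose usage clearly exceeds the baseline load."""
    return [usage > BASELOAD_KWH * 2 for usage in readings]

# One hour of readings: two quiet intervals, then a kettle/oven spike.
hour = [0.10, 0.11, 0.95, 0.80]
print(likely_home(hour))  # [False, False, True, True]
```

Real profiling techniques are far more sophisticated (matching appliance-specific load signatures, for example), but even this toy example shows how fine-grained readings expose daily routines.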

Other states are considering similar measures. The Kentucky Public Service Commission (PSC) will hold a public meeting on July 9th and accept public comments about planned smart meter deployments by Kentucky Utilities Co. (KU) and Louisville Gas & Electric Company (LG&E). Smart meters are being deployed in New Jersey.

When Maryland lawmakers considered legislation to provide law enforcement with access to consumers' smart meters, the Electronic Privacy Information Center (EPIC) responded with a January 16, 2018 letter outlining the privacy concerns:

"HB 56 is a sensible and effective response to an emerging privacy issue facing Maryland residents. Smart meters collect detailed personal data about the use of utility services. With a smart meter, it is possible to determine when a person is in a residence, and what they are doing. Moreover the routine collection of this data, without adequate privacy safeguards, would enable ongoing surveillance of Maryland residents without regard to any criminal suspicion."

"HB 56 does not prevent law enforcement use of data generated by smart meters; it simply requires that law enforcement follow clear procedures, subject to judicial oversight, to access the data generated by smart meters. HB 56 is an example of a model privacy law that enables innovation while safeguarding personal privacy."

That's a worthy goal for government: balancing the business sector's need to innovate with consumers' need for privacy. Is a medical opt-out sufficient? Should Fourth Amendment constitutional concerns apply? What are your opinions?


Google To Exit Weaponized Drone Contract And Pursue Other Defense Projects

Last month, protests by current and former Google employees, plus academic researchers, cited ethical and transparency concerns with the artificial intelligence (AI) help the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people. Gizmodo reported that Google plans not to renew its contract for Project Maven:

"Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract... The company plans to unveil new ethical principles about its use of AI this week... Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft... Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion."


FBI Warns Sophisticated Malware Targets Wireless Routers In Homes And Small Businesses

The U.S. Federal Bureau of Investigation (FBI) issued a Public Service Announcement (PSA) warning consumers and small businesses that "foreign cyber actors" have targeted their wireless routers. The May 25th PSA explained the threat:

"The actors used VPNFilter malware to target small office and home office routers. The malware is able to perform multiple functions, including possible information collection, device exploitation, and blocking network traffic... The malware targets routers produced by several manufacturers and network-attached storage devices by at least one manufacturer... VPNFilter is able to render small office and home office routers inoperable. The malware can potentially also collect information passing through the router. Detection and analysis of the malware’s network activity is complicated by its use of encryption and misattributable networks."

The "VPN" acronym usually refers to a Virtual Private Network. Why use the VPNfilter name for a sophisticated computer virus? Wired magazine explained:

"... the versatile code is designed to serve as a multipurpose spy tool, and also creates a network of hijacked routers that serve as unwitting VPNs, potentially hiding the attackers' origin as they carry out other malicious activities."

The FBI's PSA advised users to, a) reboot (i.e., turn off and then back on) their routers; b) disable remote management features, which attackers could exploit to gain access; and c) update their routers with the latest software and security patches. For routers purchased independently, security experts advise consumers to contact the router manufacturer's tech support or customer service site.

For routers leased or purchased from an internet service provider (ISP), consumers should contact their ISP's customer service or technical department for software updates and security patches. Example: the Verizon FiOS forums list the brands and models affected by the VPNFilter malware, since several manufacturers produce routers for the Verizon FiOS service.

It is critical for consumers to heed this PSA. The New York Times reported:

"An analysis by Talos, the threat intelligence division for the tech giant Cisco, estimated that at least 500,000 routers in at least 54 countries had been infected by the [VPNfilter] malware... A global network of hundreds of thousands of routers is already under the control of the Sofacy Group, the Justice Department said last week. That group, which is also known as A.P.T. 28 and Fancy Bear and believed to be directed by Russia’s military intelligence agency... To disrupt the Sofacy network, the Justice Department sought and received permission to seize the web domain toknowall.com, which it said was a critical part of the malware’s “command-and-control infrastructure.” Now that the domain is under F.B.I. control, any attempts by the malware to reinfect a compromised router will be bounced to an F.B.I. server that can record the I.P. address of the affected device..."

Readers wanting technical details about VPNFilter should read the Talos Intelligence blog post.

When consumers contact their ISP about router software updates, it is wise to also inquire about patches for KRACK, a vulnerability in the WPA2 Wi-Fi protocol which bad actors have exploited recently. Example: the Verizon site also provides information about KRACK.

The latest threat provides several strong reminders:

  1. The conveniences of wireless internet connectivity that consumers demand and enjoy also benefit the bad guys,
  2. The bad guys are persistent and will continue to target internet-connected devices with weak or no protection, including devices consumers fail to protect,
  3. Wireless benefits come with a responsibility for consumers to shop wisely for internet-connected devices featuring easy, continual software updates and security patches. Otherwise, that shiny new device you recently purchased is nothing more than an expensive "brick," and
  4. Manufacturers have a responsibility to provide consumers with easy, continual software updates and security patches for the internet-connected devices they sell.

What are your opinions of the VPNFilter malware? What has been your experience with securing your wireless home router?