97 posts categorized "Internet of Things"

Your Medical Devices Are Not Keeping Your Health Data to Themselves

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Derek Kravitz and Marshall Allen, ProPublica

Medical devices are gathering more and more data from their users, whether it’s their heart rates, sleep patterns or the number of steps taken in a day. Insurers and medical device makers say such data can be used to vastly improve health care.

But the data that’s generated can also be used in ways that patients don’t necessarily expect. It can be packaged and sold for advertising. It can be anonymized and used by customer support and information technology companies. Or it can be shared with health insurers, who may use it to deny reimbursement. Privacy experts warn that data gathered by insurers could also be used to rate individuals’ health care costs and potentially raise their premiums.

Patients typically have to give consent for their data to be used — so-called “donated data.” But some patients said they weren’t aware that their information was being gathered and shared. And once the data is shared, it can be used in a number of ways. Here are a few of the most popular medical devices that can share data with insurers:

Continuous Positive Airway Pressure, or CPAP, Machines

What Are They?

One of the more popular devices for those with sleep apnea, CPAP machines are covered by insurers after a sleep study confirms the diagnosis. These units, which deliver pressurized air through masks worn by patients as they sleep, collect data and transmit it wirelessly.

What Do They Collect?

It depends on the unit, but CPAP machines can collect data on the number of hours a patient uses the device, the number of interruptions in sleep and the amount of air that leaks from the mask.
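To make that concrete, here is a hypothetical example of what a single night's summary might look like once it leaves the machine; the field names and values are invented for illustration and vary by manufacturer.

```python
# Hypothetical nightly CPAP summary record; field names and values are
# invented for illustration and differ by device maker.
nightly_summary = {
    "date": "2018-11-20",
    "hours_of_use": 6.4,              # how long the machine ran overnight
    "apnea_hypopnea_index": 3.2,      # breathing interruptions per hour of sleep
    "mask_leak_liters_per_min": 18.0, # average air leak from the mask
}
print(nightly_summary)
```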

Who Gets the Info?

The data may be transmitted to the makers or suppliers of the machines. Doctors may use it to assess whether the therapy is effective. Health insurers may receive the data to track whether patients are using their CPAP machines as directed. They may refuse to reimburse the costs of the machine if the patient doesn’t use it enough. The device maker ResMed said in a statement that patients may withdraw their consent to have their data shared.

Heart Monitors

What Are They?

Heart monitors, oftentimes small, battery-powered devices worn on the body and attached to the skin with electrodes, measure and record the heart’s electrical signals, typically over a few days or weeks, to detect things like irregular heartbeats or abnormal heart rhythms. Some devices implanted under the skin can last up to five years.

What Do They Collect?

Wearable ones include Holter monitors, wired external devices that attach to the skin, and event recorders, which can track slow or fast heartbeats and fainting spells. Data can also be shared from implanted pacemakers, which keep the heart beating properly for those with arrhythmias.

Who Gets the Info?

Low resting heart rates or other abnormal heart conditions are commonly used by insurance companies to place patients in more expensive rate classes. Children undergoing genetic testing are sometimes outfitted with heart monitors before their diagnosis, increasing the odds that their data is used by insurers. This sharing is the most common complaint cited by the World Privacy Forum, a consumer rights group.

Blood Glucose Monitors

What Are They?

Millions of Americans who have diabetes are familiar with blood glucose meters, or glucometers, which take a blood sample on a test strip and analyze it for glucose, or sugar, levels. This allows patients and their doctors to monitor their diabetes so they don’t have complications like heart or kidney disease. Blood glucose meters are used by the more than 1.2 million Americans with Type 1 diabetes, which is usually diagnosed in children, teens and young adults.

What Do They Collect?

Blood sugar monitors measure the concentration of glucose in a patient’s blood, a key indicator of proper diabetes management.

Who Gets the Info?

Diabetes monitoring equipment is sold directly to patients, but many still rely on insurer-provided devices. To get reimbursement for blood glucose meters, health insurers will typically ask for at least a month’s worth of blood sugar data.

Lifestyle Monitors

What Are They?

Step counters, medication alerts and trackers, and in-home cameras are among the devices in the increasingly crowded lifestyle health industry.

What Do They Collect?

Many health data research apps are built on “donated data,” which is provided by consumers and falls outside of the federal guidelines that require the sharing of personal health data to be disclosed and the data to be anonymized to protect the identity of the patient. This data includes everything from counters for the number of steps you take, the calories you eat and the number of flights of stairs you climb to more traditional health metrics, such as pulse and heart rates.

Who Gets the Info?

It varies by device. But the makers of the Fitbit step counter, for example, say they never sell customer personal data or share personal information unless a user requests it; it is part of a legal process; or it is provided on a “confidential basis” to a third-party customer support or IT provider. That said, Fitbit allows users who give consent to share data “with a health insurer or wellness program,” according to a statement from the company.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder. It’s a potentially serious disorder that, left untreated, can increase the risk of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
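Because the 4-hour / 70-percent / 30-day rule is easy to misread, here is a minimal sketch, in Python, of how such an adherence check could be run against a log of nightly usage hours. The function and the flat list-of-hours input are illustrative assumptions, not Medicare's or any insurer's actual implementation.

```python
# A minimal sketch (not Medicare's or any insurer's actual algorithm) of the
# adherence rule described above: at least 4 hours of use on at least 70% of
# nights in some consecutive 30-day window within the first 90 days.

def meets_adherence_rule(nightly_hours, min_hours=4.0, window=30, fraction=0.7):
    """nightly_hours: hours of CPAP use per night, starting from day 1."""
    first_90 = nightly_hours[:90]
    for start in range(0, len(first_90) - window + 1):
        window_nights = first_90[start:start + window]
        compliant_nights = sum(1 for h in window_nights if h >= min_hours)
        if compliant_nights / window >= fraction:
            return True
    return False

# Example: 22 of 30 nights at 4+ hours clears the 70% threshold (22/30 = 73%).
usage = [4.5] * 22 + [2.0] * 8
print(meets_adherence_rule(usage))   # True
```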

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.
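As a quick sanity check on the scale of those markups, the figures quoted above work out to roughly 3.4 times and 1.6 times what the supplier was actually paid:

```python
# Markup implied by the amounts billed to Neufeld versus what the supplier
# was paid, using the figures quoted above.
filter_markup = 25.68 / 7.50      # disposable filter: ~3.42x
mask_markup = 147.78 / 95.00      # face mask: ~1.56x
print(round(filter_markup, 2), round(mask_markup, 2))
```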

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020, and outstanding issues need to be resolved before then. One such issue is the "trolley problem": a situation where a fatal vehicle crash cannot be avoided and the self-driving car must decide whether to save the passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new; related questions have already led some experts to call for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must be programmed with moral preferences that dictate which person to sacrifice.

The answer is tricky. You might assume: always save the driver, since nobody would buy a self-driving car that would kill its owner. But what if the pedestrian is crossing against a 'do not cross' signal within a crosswalk? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elderly, or pregnant? Or a doctor? Does it matter if the passenger is older than the pedestrians?

To understand what the public wants -- and expects -- in self-driving cars, also known as autonomous vehicles (AVs), researchers from MIT asked consumers in a massive online global survey. The survey gathered responses from 2 million people in 233 countries and territories, and presented 13 accident scenarios that varied nine factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."

Besides recording the accident choices, the researchers also collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants, in order to identify clusters: groups, areas, countries, territories, or regions containing people with similar "moral preferences."
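To make the clustering step concrete, here is a minimal sketch of the kind of analysis involved: represent each country as a vector of average preference scores and group similar countries with hierarchical clustering. The scores below are invented placeholders and the study's actual method is more involved; this only illustrates the idea.

```python
# Minimal illustration (with made-up numbers) of clustering countries by
# average "moral preference" scores, similar in spirit to the study's
# country-level analysis. Requires scipy.
from scipy.cluster.hierarchy import linkage, fcluster

# Each row: a country's average preference scores, e.g.
# [spare humans over pets, spare more lives, spare the young]
countries = ["US", "Germany", "Japan", "Brazil"]
scores = [
    [0.9, 0.8, 0.7],   # US
    [0.9, 0.8, 0.6],   # Germany
    [0.8, 0.6, 0.3],   # Japan
    [0.9, 0.7, 0.8],   # Brazil
]

# Ward-linkage hierarchical clustering, then cut the tree into 2 clusters.
tree = linkage(scores, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```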

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."

This is huge. It makes one question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in clusters may resent an AV maker forcing its moral preferences upon them. Some clusters or countries may demand vehicles matching their moral preferences.

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

Several important limitations to remember. And there are more. The study didn't address self-driving trucks. Should an AV tractor-trailer semi -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other vehicles are motorcycles, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the pleasure craft matter? What if the other craft is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? What happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. AV makers (e.g., autos, ships, trucks) need to explicitly state what their vehicles will (and won't) do. Don't hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries or regions have varying preferences about online privacy, sharing information, and corporate surveillance. What are your opinions?


Survey: Most Home Users Satisfied With Voice-Controlled Assistants. Tech Adoption Barriers Exist

Recent survey results reported by MediaPost:

"Amazon Alexa and Google Assistant have the highest satisfaction levels among mobile users, each with an 85% satisfaction rating, followed by Siri and Bixby at 78% and Microsoft’s Cortana at 77%... As found in other studies, virtual assistants are being used for a range of things, including looking up things on the internet (51%), listening to music (48%), getting weather information (46%) and setting a timer (35%)... Smart speaker usage varies, with 31% of Amazon device owners using their speaker at least a few times a week, Google Home owners 25% and Apple HomePod 18%."

Additional survey results are available at Digital Trends and Experian. PwC found:

"Only 10% of surveyed respondents were not familiar with voice-enabled products and devices. Of the 90% who were, the majority have used a voice assistant (72%). Adoption is being driven by younger consumers, households with children, and households with an income of >$100k... Despite being accessible everywhere, three out of every four consumers (74%) are using their mobile voice assistants at home..."

Consumers seem to want privacy when using voice assistants, so usage tends to occur at home and not in public places. Also:

"... the bulk of consumers have yet to graduate to more advanced activities like shopping or controlling other smart devices in the home... 50% of respondents have made a purchase using their voice assistant, and an additional 25% would consider doing so in the future. The majority of items purchased are small and quick.. Usage will continue to increase but consistency must improve for wider adoption... Some consumers see voice assistants as a privacy risk... When forced to choose, 57% of consumers said they would rather watch an ad in the middle of a TV show than listen to an ad spoken by their voice assistant..."

Consumers want control over the presentation of advertisements by voice assistants. Desired control options include skip, select, never while listening to music, only at pre-approved times, customized based upon interests, seamless integration, and match to preferred brands. And 38 percent of survey respondents said they "don't want something 'listening in' on my life all the time."

What are your preferences with voice assistants? Any privacy concerns?


No, a Teen Did Not Hack a State Election

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about the integrity and security of voting systems in the United States. It is reprinted with permission.]

By Lilia Chang, ProPublica

Headlines from Def Con, a hacking conference held this month in Las Vegas, might have left some thinking that infiltrating state election websites and affecting the 2018 midterm results would be child’s play.

Articles reported that teenage hackers at the event were able to “crash the upcoming midterm elections” and that it had taken “an 11-year-old hacker just 10 minutes to change election results.” A first-person account by a 17-year-old in Politico Magazine described how he shut down a website that would tally votes in November, “bringing the election to a screeching halt.”

But now, elections experts are raising concerns that misunderstandings about the event — many of them stoked by its organizers — have left people with a distorted sense of its implications.

In a website published before r00tz Asylum, the youth section of Def Con, organizers indicated that students would attempt to hack exact duplicates of state election websites, referring to them as “replicas” or “exact clones.” (The language was scaled back after the conference to simply say “clones.”)

Instead, students were working with look-alikes created for the event that had vulnerabilities they were coached to find. Organizers provided them with cheat sheets, and adults walked the students through the challenges they would encounter.

Josh Franklin, an elections expert formerly at the National Institute of Standards and Technology and a speaker at Def Con, called the websites “fake.”

“When I learned that they were not using exact copies and pains hadn’t been taken to more properly replicate the underlying infrastructure, I was definitely saddened,” Franklin said.

Franklin and David Becker, the executive director of the Center for Election Innovation & Research, also pointed out that while state election websites report voting results, they do not actually tabulate votes. This information is kept separately and would not be affected if hackers got into sites that display vote totals.

“It would be lunacy to directly connect the election management system, of which the tabulation system is a part of, to the internet,” Franklin said.

Jake Braun, the co-organizer of the event, defended the attention-grabbing way it was framed, saying the security issues of election websites haven’t gotten enough attention. Those questioning the technical details of the mock sites and whether their vulnerabilities were realistic are missing the point, he insisted.

“We want elections officials to start putting together communications redundancy plans so they have protocol in place to communicate with voters and the media and so on if this happens on election day,” he said.

Braun provided ProPublica with a report that r00tz plans to circulate more widely that explains the technical underpinnings of the mock websites. They were designed to be vulnerable to a SQL injection attack, a common hack, the report says.
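For readers unfamiliar with the attack class mentioned in the report, here is a minimal sketch in Python of what a SQL injection vulnerability looks like and how a parameterized query prevents it. The table and column names are invented for illustration; this is not code from the r00tz sites.

```python
# Minimal illustration of a SQL injection flaw and its standard fix.
# The "results" table and its columns are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (county TEXT, candidate TEXT, votes INTEGER)")
conn.execute("INSERT INTO results VALUES ('Clark', 'A', 100), ('Clark', 'B', 90)")

county = "Clark' OR '1'='1"   # attacker-controlled input

# VULNERABLE: user input is concatenated directly into the SQL string,
# so the injected OR clause returns every row (or worse, with other payloads).
rows = conn.execute(
    "SELECT candidate, votes FROM results WHERE county = '" + county + "'"
).fetchall()
print("vulnerable query returned", len(rows), "rows")

# SAFE: a parameterized query treats the input as data, not SQL,
# so the injected text simply matches no county.
rows = conn.execute(
    "SELECT candidate, votes FROM results WHERE county = ?", (county,)
).fetchall()
print("parameterized query returned", len(rows), "rows")
```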

Franklin acknowledged that some state election reporting sites do indeed have this vulnerability, but he said that states have been aware of it for months and are in the process of protecting against it.

Becker said the details spelled out in the r00tz report would have been helpful to have from the start.

“We have to be really careful about adding to the hysteria about our election system not working or being too vulnerable because that’s exactly what someone like President Putin wants,” Becker said. Instead, Becker said that “we should find real vulnerabilities and address them as elections officials are working really hard to do.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Study: Most Consumers Fear Companies Will 'Go Too Far' With Artificial Intelligence Technologies

New research has found that consumers are conflicted about artificial intelligence (AI) technologies. A national study of 697 adults during the Spring of 2018 by Elicit Insights found:

"Most consumers are conflicted about AI. They know there are benefits, but recognize the risks, too"

Several specific findings:

  • 73 percent of survey participants (e.g., Strongly Agree, Agree) fear "some companies will go too far with AI"
  • 64 percent agreed (e.g., Strongly Agree, Agree) with the statement: "I'm concerned about how companies will use artificial intelligence and the information they have about me to engage with me"
  • "Six out of 10 Americans agree or strongly agree that AI will never be as good as human interaction. Human interaction remains sacred and there is concern with at least a third of consumers that AI won’t stay focused on mundane tasks and leave the real thinking to humans."

Many of the concerns center on control. As AI applications become smarter and more powerful, they are able to operate independently, without human -- users' -- authorization. When presented with several smart-refrigerator scenarios, the less control users had over purchases, the fewer survey participants viewed AI as a benefit:

Smart refrigerator and food purchase scenarios. AI study by Elicit Insights.

AI technologies can also be used to find and present possible matches for online dating services. Again, survey participants expressed similar control concerns:

Dating service scenarios. AI study by Elicit Insights.

Download Elicit Insights' complete Artificial Intelligence survey (Adobe PDF). What are your opinions? Do you prefer AI applications that operate independently, or which require your authorization?


Study: Performance Issues Impede IoT Device Trust And Usage Worldwide By Consumers

A global survey recently uncovered interesting findings about consumers' usage of, and satisfaction with, IoT (Internet of Things) devices. The survey of consumers in several countries found that 52 percent already use IoT devices, and that 64 percent of users have already encountered performance issues with their devices.

Dynatrace, a software intelligence company, commissioned Opinium Research to conduct a global survey of 10,002 participants, with 2,000 in the United States, 2,000 in the United Kingdom, and 1,000 respondents each in France, Germany, Australia, Brazil, Singapore, and China. Dynatrace announced several findings, chiefly:

"On average, consumers experience 1.5 digital performance problems every day, and 62% of people fear the number of problems they encounter, and the frequency, will increase due to the rise of IoT."

That seems like plenty of poor performance. Some findings were specific to travel, healthcare, and in-home retail sectors. Regarding travel:

"The digital performance failures consumers are already experiencing with everyday technology is potentially making them wary of other uses of IoT. 85% of respondents said they are concerned that self-driving cars will malfunction... 72% feel it is likely software glitches in self-driving cars will cause serious injuries and fatalities... 84% of consumers said they wouldn’t use self-driving cars due to a fear of software glitches..."

Regarding healthcare:

"... 62% of consumers stated they would not trust IoT devices to administer medication; this sentiment is strongest in the 55+ age range, with 74% expressing distrust. There were also specific concerns about the use of IoT devices to monitor vital signs, such as heart rate and blood pressure. 85% of consumers expressed concern that performance problems with these types of IoT devices could compromise clinical data..."

Regarding in-home retail devices:

"... 83% of consumers are concerned about losing control of their smart home due to digital performance problems... 73% of consumers fear being locked in or out of the smart home due to bugs in smart home technology... 68% of consumers are worried they won’t be able to control the temperature in the smart home due to malfunctions in smart home technology... 81% of consumers are concerned that technology or software problems with smart meters will lead to them being overcharged for gas, electricity, and water."

The findings are a clear call to IoT makers to improve the performance, security, and reliability of their internet-connected devices. To learn more, download the full Dynatrace report titled, "IoT Consumer Confidence Report: Challenges for Enterprise Cloud Monitoring on the Horizon."


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, the facial recognition software by Amazon, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes aside about politicians, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

List of 28 Congressional legislators misidentified by Amazon Rekognition in the ACLU study.

With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer of Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.1 These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon has performed on Rekognition, with details by race, gender, and age,
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition,
  • The list of all law enforcement agencies which have used or currently use Rekognition,
  • Whether any law enforcement agencies which have used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices,
  • The protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons,
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA),
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses,
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch and prosecute criminals, and to protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and that the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. " Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U. had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend 95 percent similarity scores, but the public does not know if police departments actually use the higher setting, or consistently do so across all types of criminal investigations. Plus, the CPT report cast doubt on human "backstop" intervention, which Amazon's reply seems to heavily rely upon.
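To make the threshold dispute concrete, here is a minimal sketch, with made-up similarity scores, of how the cutoff changes which candidate matches get reported. The data and the function are hypothetical; the real service exposes a similar threshold parameter, but nothing below is Amazon's code.

```python
# Minimal illustration (with invented scores) of how a similarity threshold
# changes which face matches are reported. Not Amazon's code or data.

candidate_matches = [
    {"name": "mugshot_0417", "similarity": 81.2},
    {"name": "mugshot_2203", "similarity": 84.9},
    {"name": "mugshot_0098", "similarity": 96.3},
]

def matches_above(candidates, threshold):
    """Keep only candidates whose similarity score meets the threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

# The ACLU test used the default 80% threshold; Amazon says law enforcement
# should use 95%. The same candidate list produces very different results.
print(len(matches_above(candidate_matches, 80)))   # 3 potential "matches"
print(len(matches_above(candidate_matches, 95)))   # 1 potential "match"
```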

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships and trucks, or preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important takeaway. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's on a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


The DIY Revolution: Consumers Alter Or Build Items Previously Not Possible. Is It A Good Thing?

Recent advances in technology allow consumers to alter, customize, or build locally items previously not possible. These items are often referred to as Do-It-Yourself (DIY) products. You've probably heard DIY used in home repair and renovation projects on television. DIY now happens in some unexpected areas. Today's blog post highlights two of them.

DIY Glucose Monitors

Earlier this year, CNet described the bag an eight-year-old patient carries with her everywhere, every day:

"... It houses a Dexcom glucose monitor and a pack of glucose tablets, which work in conjunction with the sensor attached to her arm and the insulin pump plugged into her stomach. The final item in her bag was an iPhone 5S. It's unusual for such a young child to have a smartphone. But Ruby's iPhone, which connects via Bluetooth to her Dexcom monitor, allowing [her mother] to read it remotely, illustrates the way technology has transformed the management of diabetes from an entirely manual process -- pricking fingers to measure blood sugar, writing down numbers in a notebook, calculating insulin doses and injecting it -- to a semi-automatic one..."

Some people have access to these new technologies, but many don't. Others want more connectivity and better capabilities. So, some creative "hacking" has resulted:

"There are people who are unwilling to wait, and who embrace unorthodox methods. (You can find them on Twitter via the hashtag #WeAreNotWaiting.) The Nightscout Foundation, an online diabetes community, figured out a workaround for the Pebble Watch. Groups such as Nightscout, Tidepool and OpenAPS are developing open-source fixes for diabetes that give major medical tech companies a run for their money... One major gripe of many tech-enabled diabetes patients is that the two devices they wear at all times -- the monitor and the pump -- don't talk to each other... diabetes will never be a hands-off disease to manage, but an artificial pancreas is basically as close as it gets. The FDA approved the first artificial pancreas -- the Medtronic 670G -- in October 2017. But thanks to a little DIY spirit, people have had them for years."

CNet shared the experience of another tech-enabled patient:

"Take Dana Lewis, founder of the open-source artificial pancreas system, or OpenAPS. Lewis started hacking her glucose monitor to increase the volume of the alarm so that it would wake her in the night. From there, Lewis tinkered with her equipment until she created a closed-loop system, which she's refined over time in terms of both hardware and algorithms that enable faster distribution of insulin. It has massively reduced the "cognitive burden" on her everyday life... JDRF, one of the biggest global diabetes research charities, said in October that it was backing the open-source community by launching an initiative to encourage rival manufacturers like Dexcom and Medtronic to open their protocols and make their devices interoperable."

Convenience and affordability are huge drivers. As you might have guessed, there are risks:

"Hacking a glucose monitor is not without risk -- inaccurate readings, failed alarms or the wrong dose of insulin distributed by the pump could have fatal consequences... Lewis and the OpenAPS community encourage people to embrace the build-your-own-pancreas method rather than waiting for the tech to become available and affordable."

Are DIY glucose monitors a good thing? Some patients think so as a way to achieve convenient and affordable healthcare solutions. That might lead you to conclude anything DIY is an improvement. Right? Keep reading.

DIY Guns

Got a 3-D printer? If so, then you can print your own DIY gun. How did this happen? How did the USA get here? Wired explained:

"Five years ago, 25-year-old radical libertarian Cody Wilson stood on a remote central Texas gun range and pulled the trigger on the world’s first fully 3-D-printed gun... he drove back to Austin and uploaded the blueprints for the pistol to his website, Defcad.com... In the days after that first test-firing, his gun was downloaded more than 100,000 times. Wilson made the decision to go all in on the project, dropping out of law school at the University of Texas, as if to confirm his belief that technology supersedes law..."

The law intervened. Wilson stopped, took down his site, and then pursued a legal remedy:

"Two months ago, the Department of Justice quietly offered Wilson a settlement to end a lawsuit he and a group of co-plaintiffs have pursued since 2015 against the United States government. Wilson and his team of lawyers focused their legal argument on a free speech claim: They pointed out that by forbidding Wilson from posting his 3-D-printable data, the State Department was not only violating his right to bear arms but his right to freely share information. By blurring the line between a gun and a digital file, Wilson had also successfully blurred the lines between the Second Amendment and the First."

So, now you... anybody with an internet connection and a 3-D printer (and a computer-controlled milling machine for some advanced parts)... can produce their own DIY gun. No registration required. No licenses or permits. No training required. And that's anyone, anywhere in the world.

Oh, there's more:

"The Department of Justice's surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet. In the meantime, it gives Wilson a unique license to publish data about those weapons anywhere he chooses."

As you might have guessed, Wilson is re-launching his website, but this time with blueprints for more than pistols: AR-15 rifles and other semi-automatic weapons. So, it will be easier for people to skirt federal and state gun laws. Is that a good thing?

You probably have some thoughts and concerns. I do. There are plenty of issues and questions. Are DIY products a good thing? Who is liable? How should laws be upgraded? How can society facilitate one set of DIY products and not the other? What related issues do you see? Any other notable DIY products?


North Carolina Provides Its Residents With an Opt-out From Smart Meter Installations. Will It Last?

Wise consumers know how smart utility meters operate. Unlike conventional analog meters, which must be read manually on-site by a technician from the utility, smart meters perform two-way digital communication with the service provider, have memory to digitally store a year's worth of your usage, and transmit your usage at regular intervals (e.g., every 15 minutes). Plus, consumers have little or no control over smart meters installed on their property.

There is some good news. Residents in North Carolina can say "no" to smart meter installations by their power company. The Charlotte Observer reported:

"Residents who say they suffer from acute sensitivity to radio-frequency waves can say no to Duke's smart meters — as long as they have a notarized doctor's note to attest to their rare condition. The N.C. Utilities Commission, which sets utility rates and rules, created the new standard on Friday, possibly making North Carolina the first state to limit the smart meter technology revolution by means of a medical opinion... Duke Energy's two North Carolina utility subsidiaries are in the midst of switching its 3.4 million North Carolina customers to smart meters..."

While it currently is free to opt out and get an analog meter instead, that could change:

"... Duke had proposed charging customers extra if they refused a smart meter. Duke wanted to charge an initial fee of $150 plus $11.75 a month to cover the expense of sending someone out to that customer's house to take a monthly meter reading. But the Utilities Commission opted to give the benefit of the doubt to customers with smart meter health issues until the Federal Communications Commission determines the health risks of the devices."

The Smart Grid Awareness blog contains more information about activities in North Carolina. There are privacy concerns with smart meters. Smart meters can be used to profile consumers with a high degree of accuracy and detail. One can easily deduce the number of persons living in the dwelling, when they are home and for how long, which electric appliances they use while home, the presence of security and alarm systems, and any special conditions (e.g., in-home medical equipment, baby appliances, etc.).
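A rough sketch with invented readings shows how little analysis that profiling requires. The interval values and the 0.15 kWh baseline below are assumptions for illustration only.

    # Invented 15-minute smart-meter readings (kWh per interval) for one evening,
    # plus a naive rule that flags intervals above a baseline as "someone is home
    # and actively using appliances." All values are made up.
    readings = {
        "17:00": 0.08, "17:15": 0.09, "17:30": 0.55,
        "17:45": 0.62, "18:00": 0.60, "18:15": 0.12,
    }

    BASELINE_KWH = 0.15  # assumed "nobody actively home" load (fridge, standby devices)

    for interval, kwh in readings.items():
        status = "active occupancy likely" if kwh > BASELINE_KWH else "idle or away"
        print(f"{interval}: {kwh:.2f} kWh -> {status}")

Real profiling goes further, matching the shape of each spike against appliance signatures, but even this toy rule reveals when a household wakes up, cooks, and leaves.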

Other states are considering similar measures. The Kentucky Public Service Commission (PSC) will hold a public meeting on July 9th and accept public comments about planned smart meter deployments by Kentucky Utilities Co. (KU) and Louisville Gas & Electric Company (LG&E). Smart meters are being deployed in New Jersey.

When Maryland lawmakers considered legislation to provide law enforcement with access to consumers' smart meters, the Electronic Privacy Information Center (EPIC) responded with a January 16, 2018 letter outlining the privacy concerns:

"HB 56 is a sensible and effective response to an emerging privacy issue facing Maryland residents. Smart meters collect detailed personal data about the use of utility services. With a smart meter, it is possible to determine when a person is in a residence, and what they are doing. Moreover the routine collection of this data, without adequate privacy safeguards, would enable ongoing surveillance of Maryland residents without regard to any criminal suspicion."

"HB 56 does not prevent law enforcement use of data generated by smart meters; it simply requires that law enforcement follow clear procedures, subject to judicial oversight, to access the data generated by smart meters. HB 56 is an example of a model privacy law that enables innovation while safeguarding personal privacy."

That's a worthy goal of government: balance the competing needs of the business sector to innovate while protecting consumers' privacy. Is a medical opt-out sufficient? Should Fourth Amendment constitutional concerns apply? What are your opinions?


Google To Exit Weaponized Drone Contract And Pursue Other Defense Projects

Last month, current and former Google employees, plus academic researchers, protested the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people, citing ethical and transparency concerns. Gizmodo reported that Google plans not to renew its contract for Project Maven:

"Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract... The company plans to unveil new ethical principles about its use of AI this week... Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft... Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion."


FBI Warns Sophisticated Malware Targets Wireless Routers In Homes And Small Businesses

The U.S. Federal Bureau of Investigation (FBI) issued a Public Service Announcement (PSA) warning consumers and small businesses that "foreign cyber actors" have targeted their wireless routers. The May 25th PSA explained the threat:

"The actors used VPNFilter malware to target small office and home office routers. The malware is able to perform multiple functions, including possible information collection, device exploitation, and blocking network traffic... The malware targets routers produced by several manufacturers and network-attached storage devices by at least one manufacturer... VPNFilter is able to render small office and home office routers inoperable. The malware can potentially also collect information passing through the router. Detection and analysis of the malware’s network activity is complicated by its use of encryption and misattributable networks."

The "VPN" acronym usually refers to a Virtual Private Network. Why use the VPNfilter name for a sophisticated computer virus? Wired magazine explained:

"... the versatile code is designed to serve as a multipurpose spy tool, and also creates a network of hijacked routers that serve as unwitting VPNs, potentially hiding the attackers' origin as they carry out other malicious activities."

The FBI's PSA advised users to: a) reboot (i.e., turn off and then back on) their routers; b) disable remote management features, which attackers could abuse to gain access; and c) update their routers with the latest software and security patches. For routers purchased independently, security experts advise consumers to contact the router manufacturer's tech support or customer service site.

For routers leased or purchased from an internet service provider (ISP), consumers should contact their ISP's customer service or technical support department for software updates and security patches. Example: the Verizon FiOS forums list the brands and models affected by the VPNFilter malware, since several manufacturers produce routers for the Verizon FiOS service.

It is critical for consumers to heed this PSA. The New York Times reported:

"An analysis by Talos, the threat intelligence division for the tech giant Cisco, estimated that at least 500,000 routers in at least 54 countries had been infected by the [VPNfilter] malware... A global network of hundreds of thousands of routers is already under the control of the Sofacy Group, the Justice Department said last week. That group, which is also known as A.P.T. 28 and Fancy Bear and believed to be directed by Russia’s military intelligence agency... To disrupt the Sofacy network, the Justice Department sought and received permission to seize the web domain toknowall.com, which it said was a critical part of the malware’s “command-and-control infrastructure.” Now that the domain is under F.B.I. control, any attempts by the malware to reinfect a compromised router will be bounced to an F.B.I. server that can record the I.P. address of the affected device..."

Readers wanting technical details about VPNFilter should read the Talos Intelligence blog post.

When consumers contact their ISP about router software updates, it is wise to also inquire about patches for the KRACK vulnerability, a Wi-Fi security flaw which bad actors have used recently. Example: the Verizon site also provides information about KRACK.

The latest threat provides several strong reminders:

  1. The conveniences of wireless internet connectivity, which consumers demand and enjoy, also benefit the bad guys,
  2. The bad guys are persistent and will continue to target internet-connected devices with weak or no protection, including devices consumers fail to protect,
  3. Wireless benefits come with a responsibility for consumers to shop wisely for internet-connected devices featuring easy, continual software updates and security patches. Otherwise, that shiny new device you recently purchased is nothing more than an expensive "brick," and
  4. Manufacturers have a responsibility to provide consumers with easy, continual software updates and security patches for the internet-connected devices they sell.

What are your opinions of the VPNFilter malware? What has been your experience with securing your wireless home router?


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns with the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people. Reportedly, these are the first known mass resignations.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor want -- that their personal e-mail, photos, calendars, videos, social media posts, map usage, and more would be used for automated military applications. What are your opinions?


Report: Software Failure In Fatal Accident With Self-Driving Uber Car

TechCrunch reported:

"The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death."

The TechCrunch story provides details about which software subsystem the report said failed.
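To be clear, the public does not know what Uber's code actually looks like. But as a generic, hypothetical illustration of how a "which objects to ignore" filter can fail, consider this sketch:

    # Generic illustration only -- not Uber's software. A filter that decides which
    # detections to "attend to" can discard a real hazard if its label or
    # confidence falls on the wrong side of a rule. All values are invented.
    detections = [
        {"label": "plastic_bag", "confidence": 0.40, "distance_m": 20},
        {"label": "unknown",     "confidence": 0.55, "distance_m": 18},  # actually a pedestrian
        {"label": "vehicle",     "confidence": 0.95, "distance_m": 60},
    ]

    IGNORE_LABELS = {"plastic_bag", "unknown"}   # hypothetical "safe to ignore" list
    MIN_CONFIDENCE = 0.60                        # hypothetical attention threshold

    def should_brake(objects):
        for obj in objects:
            if obj["label"] in IGNORE_LABELS or obj["confidence"] < MIN_CONFIDENCE:
                continue  # filtered out: the planner never considers this object
            if obj["distance_m"] < 30:
                return True
        return False

    print(should_brake(detections))  # False -- the object 18 meters ahead was ignored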

Not good.

So, autonomous (self-driving) cars are only as good as the software they run, including how well that software is maintained. Anyone who has used computers during the last couple of decades has probably experienced software glitches, bugs, and failures. It happens.

This latest incident suggests self-driving cars aren't yet ready. What do you think?


Amazon's Virtual Assistant Randomly Laughs. A Fix Is Underway

You may have read or viewed news reports about random, loud laughter by Amazon's virtual assistant products. Some users reported that the laughter was unprompted and in a different voice from the standard Alexa voice. Many users were understandably spooked.

Clearly, there is a problem. According to BuzzFeed, Amazon is aware of the problem and replied to its inquiry with this statement:

"In rare circumstances, Alexa can mistakenly hear the phrase 'Alexa, laugh.' We are changing that phrase to be 'Alexa, can you laugh?' which is less likely to have false positives, and we are disabling the short utterance 'Alexa, laugh.' We are also changing Alexa’s response from simply laughter to 'Sure, I can laugh,' followed by laughter..."

Hopefully, that will fix the #AlexaLaugh bug. No doubt, there will be more news to come about this.


Security Experts: Artificial Intelligence Is Ripe For Misuse By Bad Actors

Over the years, bad actors (e.g., criminals, terrorists, rogue states, ethically-challenged business executives) have used a variety of online technologies to remotely hack computers, track users online without consent or notice, and circumvent the privacy settings consumers choose on their internet-connected devices. During the past year or two, reports surfaced about bad actors using advertising and social networking technologies to sway public opinion.

Security researchers and experts have warned in a new report that two of the newest technologies can also be used maliciously:

"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis... Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."

Companies currently use or test artificial intelligence (A.I.) to automate mundane tasks, upgrade and improve existing automated processes, and/or personalize employee (and customer) experiences in a variety of applications and business functions, including sales, customer service, and human resources. "Machine learning" refers to the development of digital systems to improve the performance of a task using experience. Both are part of a business trend often referred to as "digital transformation" or the "intelligent workplace." The CXO Talk site, featuring interviews with business leaders and innovators, is a good resource to learn more about A.I. and digital transformation.
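As a concrete, if toy, example of "improving a task using experience," the sketch below fits a tiny scikit-learn classifier on a handful of invented examples and then predicts an unseen case. The features and labels are made up purely for illustration.

    # Toy "learning from experience": show a model a few labeled examples, then
    # let it generalize to a new one. Data is invented (support tickets filed vs.
    # months as a customer, predicting churn). Requires scikit-learn.
    from sklearn.linear_model import LogisticRegression

    X = [[8, 2], [7, 3], [6, 1], [1, 24], [0, 36], [2, 18]]  # [tickets, months]
    y = [1, 1, 1, 0, 0, 0]                                   # 1 = churned, 0 = stayed

    model = LogisticRegression()
    model.fit(X, y)                  # the "experience": labeled examples
    print(model.predict([[5, 4]]))   # prediction for an unseen customer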

A survey last year of employees in the USA, France, Germany, and the United Kingdom found that they "see A.I. as the technology that will cause the most disruption to the workplace." The survey also found: 70 percent of employees surveyed expect A.I. to impact their jobs during the next ten years, half expect impacts within the next three years, and about a third see A.I. as a job creator.

This new report was authored by 26 security experts from a variety of educational institutions including American University, Stanford University, Yale University, the University of Cambridge, the University of Oxford, and others. The report cited three general ways bad actors could misuse A.I.:

"1. Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.

2. Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.

3. Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems."

So, A.I. could make it easier for the bad guys to automate labor-intensive cyber-attacks such as spear-phishing. The bad guys could also create new cyber-attacks by combining A.I. with speech synthesis. The authors of the report cited examples of more threats:

"The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones)... The use of AI to automate tasks involved in surveillance (e.g. analyzing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation..."

BBC News reported even more possible threats:

"Technologies such as AlphaGo - an AI developed by Google's DeepMind and able to outwit human Go players - could be used by hackers to find patterns in data and new exploits in code. A malicious individual could buy a drone and train it with facial recognition software to target a certain individual. Bots could be automated or "fake" lifelike videos for political manipulation. Hackers could use speech synthesis to impersonate targets."

From all of this, one can conclude that the 2016 election interference cited by intelligence officials is probably mild compared to what will come: more serious, sophisticated, and numerous attacks. The report included four high-level recommendations:

"1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges."

Download the 101-page report titled, "The Malicious Use Of Artificial Intelligence: Forecasting, Prevention, And Mitigation." A copy of the report is also available here (Adobe PDF; 1,400 k bytes).

To prepare, corporate and government executives would be wise to both harden their computer networks and (re)train their employees to recognize and guard against cyber attacks. What do you think?


Fitness Device Usage By U.S. Soldiers Reveals Sensitive Location And Movement Data

Useful technology can often have unintended consequences. The Washington Post reported about an interactive map:

"... posted on the Internet that shows the whereabouts of people who use fitness devices such as Fitbit also reveals highly sensitive information about the locations and activities of soldiers at U.S. military bases, in what appears to be a major security oversight. The Global Heat Map, published by the GPS tracking company Strava, uses satellite information to map the locations and movements of subscribers to the company’s fitness service over a two-year period, by illuminating areas of activity. Strava says it has 27 million users around the world, including people who own widely available fitness devices such as Fitbit and Jawbone, as well as people who directly subscribe to its mobile app. The map is not live — rather, it shows a pattern of accumulated activity between 2015 and September 2017... The U.S.-led coalition against the Islamic State said on Monday it is revising its guidelines on the use of all wireless and technological devices on military facilities as a result of the revelations. "

Takeaway #1: it's easier than you might think for the bad guys to track the locations and movements of high-value targets (e.g., soldiers, corporate executives, politicians, attorneys).

Takeaway #2: unintended consequences from mobile devices are not new, as CNN reported in 2015. Consumers love the convenience of their digital devices. It is wise to remember the warning from a famous economist, "There's no such thing as a free lunch."


GoPro Lays Off Workers And Exits Drone Business


TechCrunch reported that GoPro, the mobile digital camera maker:

"... plans to reduce its headcount in 2018 from 1,254 employees to fewer than 1,000. It also plans to exit the drone market and reduce CEO 2018 compensation to $1... Last week TechCrunch reported exclusively on the firings with sources telling us several hundred employees were relieved of duties though officially kept on the books until the middle of February. We were told that the bulk of the layoffs happened in the engineering department of the Karma drone... Though GoPro is clearly done producing the Karma drone, it says it intends to continue to provide service and support to Karma customers."

Reportedly, GoPro's earnings announcement projected fourth-quarter revenues of $340 million, down 37 percent from 2016. At press time, the "Shop Now" button for Karma drones was still active. It seems the company is selling off its remaining drone inventory.


Google Photos: Still Blind After All These Years

Earlier today, Wired reported:

"In 2015, a black software developer embarrassed Google by tweeting that the company’s Photos service had labeled photos of him with a black friend as "gorillas." Google declared itself "appalled and genuinely sorry." An engineer who became the public face of the clean-up operation said the label gorilla would no longer be applied to groups of images, and that Google was "working on longer-term fixes."

More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology... WIRED tested Google Photos using a collection of 40,000 images well-stocked with animals. It performed impressively at finding many creatures, including pandas and poodles. But the service reported "no results" for the search terms "gorilla," "chimp," "chimpanzee," and "monkey."
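The "workaround" Wired describes amounts to suppressing certain labels after the image classifier runs, rather than fixing the underlying model. A minimal sketch of that kind of post-processing filter, with hypothetical labels and scores:

    # Minimal sketch of the kind of post-processing workaround described above:
    # rather than fixing the underlying model, certain labels are simply suppressed
    # from search results and captions. The labels and confidences are hypothetical.
    BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

    def visible_labels(predictions):
        """Drop blocked labels from a classifier's output before showing them."""
        return [(label, score) for label, score in predictions
                if label.lower() not in BLOCKED_LABELS]

    predictions = [("gorilla", 0.91), ("panda", 0.88), ("poodle", 0.75)]
    print(visible_labels(predictions))  # [('panda', 0.88), ('poodle', 0.75)]

It keeps embarrassing results out of search, but the model underneath is no better at distinguishing people from primates than it was before.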

This is the best image-recognition fix Google can do, while it also wants consumers to trust the software in its driverless vehicles? Geez. #fubar Well, maybe this video will help Google engineers feel better: