
The DIY Revolution: Consumers Alter Or Build Items Previously Not Possible. Is It A Good Thing?

Recent advances in technology allow consumers to alter, customize, or locally build items that previously weren't possible to make on their own. These items are often referred to as Do-It-Yourself (DIY) products. You've probably heard DIY used for home repair and renovation projects on television. DIY now happens in some unexpected areas. Today's blog post highlights two of them.

DIY Glucose Monitors

Earlier this year, CNet described the bag an eight-year-old patient carries with her every day:

"... It houses a Dexcom glucose monitor and a pack of glucose tablets, which work in conjunction with the sensor attached to her arm and the insulin pump plugged into her stomach. The final item in her bag was an iPhone 5S. It's unusual for such a young child to have a smartphone. But Ruby's iPhone, which connects via Bluetooth to her Dexcom monitor, allowing [her mother] to read it remotely, illustrates the way technology has transformed the management of diabetes from an entirely manual process -- pricking fingers to measure blood sugar, writing down numbers in a notebook, calculating insulin doses and injecting it -- to a semi-automatic one..."

Some people have access to these new technologies, but many don't. Others want more connectivity and better capabilities. So, some creative "hacking" has resulted:

"There are people who are unwilling to wait, and who embrace unorthodox methods. (You can find them on Twitter via the hashtag #WeAreNotWaiting.) The Nightscout Foundation, an online diabetes community, figured out a workaround for the Pebble Watch. Groups such as Nightscout, Tidepool and OpenAPS are developing open-source fixes for diabetes that give major medical tech companies a run for their money... One major gripe of many tech-enabled diabetes patients is that the two devices they wear at all times -- the monitor and the pump -- don't talk to each other... diabetes will never be a hands-off disease to manage, but an artificial pancreas is basically as close as it gets. The FDA approved the first artificial pancreas -- the Medtronic 670G -- in October 2017. But thanks to a little DIY spirit, people have had them for years."

CNet shared the experience of another tech-enabled patient:

"Take Dana Lewis, founder of the open-source artificial pancreas system, or OpenAPS. Lewis started hacking her glucose monitor to increase the volume of the alarm so that it would wake her in the night. From there, Lewis tinkered with her equipment until she created a closed-loop system, which she's refined over time in terms of both hardware and algorithms that enable faster distribution of insulin. It has massively reduced the "cognitive burden" on her everyday life... JDRF, one of the biggest global diabetes research charities, said in October that it was backing the open-source community by launching an initiative to encourage rival manufacturers like Dexcom and Medtronic to open their protocols and make their devices interoperable."

Convenience and affordability are huge drivers. As you might have guessed, there are risks:

"Hacking a glucose monitor is not without risk -- inaccurate readings, failed alarms or the wrong dose of insulin distributed by the pump could have fatal consequences... Lewis and the OpenAPS community encourage people to embrace the build-your-own-pancreas method rather than waiting for the tech to become available and affordable."

Are DIY glucose monitors a good thing? Some patients think so, seeing them as a way to achieve convenient and affordable healthcare. That might lead you to conclude that anything DIY is an improvement. Right? Keep reading.

DIY Guns

Got a 3-D printer? If so, then you can print your own DIY gun. How did this happen? How did the USA get here? Wired explained:

"Five years ago, 25-year-old radical libertarian Cody Wilson stood on a remote central Texas gun range and pulled the trigger on the world’s first fully 3-D-printed gun... he drove back to Austin and uploaded the blueprints for the pistol to his website, Defcad.com... In the days after that first test-firing, his gun was downloaded more than 100,000 times. Wilson made the decision to go all in on the project, dropping out of law school at the University of Texas, as if to confirm his belief that technology supersedes law..."

The law intervened. Wilson stopped, took down his site, and then pursued a legal remedy:

"Two months ago, the Department of Justice quietly offered Wilson a settlement to end a lawsuit he and a group of co-plaintiffs have pursued since 2015 against the United States government. Wilson and his team of lawyers focused their legal argument on a free speech claim: They pointed out that by forbidding Wilson from posting his 3-D-printable data, the State Department was not only violating his right to bear arms but his right to freely share information. By blurring the line between a gun and a digital file, Wilson had also successfully blurred the lines between the Second Amendment and the First."

So, now you... anybody with an internet connection and a 3-D printer (and a computer-controlled milling machine for some advanced parts)... can produce a DIY gun. No registration required. No licenses or permits. No training required. And that's anyone, anywhere in the world.

Oh, there's more:

"The Department of Justice's surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet. In the meantime, it gives Wilson a unique license to publish data about those weapons anywhere he chooses."

As you might have guessed, Wilson is re-launching his website, but this time with blueprints for more DIY weaponry besides pistols: AR-15 rifles and other semi-automatic weapons. So, it will be easier for people to skirt federal and state gun laws. Is that a good thing?

You probably have some thoughts and concerns. I do. There are plenty of issues and questions. Are DIY products a good thing? Who is liable? How should laws be upgraded? How can society facilitate one set of DIY products and not the other? What related issues do you see? Any other notable DIY products?


North Carolina Provides Its Residents With an Opt-out From Smart Meter Installations. Will It Last?

Wise consumers know how smart utility meters operate. Unlike conventional analog meters which must be read manually on-site by a technician from the utility, smart meters perform two-way digital communication with the service provider, have memory to digitally store a year's worth of your usage, and transmit your usage at regular intervals (e.g., every 15 minutes). Plus, consumers have little or no control over smart meters installed on their property.

There is some good news. Residents in North Carolina can say "no" to smart meter installations by their power company. The Charlotte Observer reported:

"Residents who say they suffer from acute sensitivity to radio-frequency waves can say no to Duke's smart meters — as long as they have a notarized doctor's note to attest to their rare condition. The N.C. Utilities Commission, which sets utility rates and rules, created the new standard on Friday, possibly making North Carolina the first state to limit the smart meter technology revolution by means of a medical opinion... Duke Energy's two North Carolina utility subsidiaries are in the midst of switching its 3.4 million North Carolina customers to smart meters..."

While it currently is free to opt out and get an analog meter instead, that could change:

"... Duke had proposed charging customers extra if they refused a smart meter. Duke wanted to charge an initial fee of $150 plus $11.75 a month to cover the expense of sending someone out to that customer's house to take a monthly meter reading. But the Utilities Commission opted to give the benefit of the doubt to customers with smart meter health issues until the Federal Communications Commission determines the health risks of the devices."

The Smart Grid Awareness blog contains more information about activities in North Carolina. There are privacy concerns with smart meters. Smart meters can be used to profile consumers with a high degree of accuracy and detail. One can easily deduce the number of persons living in the dwelling, when they are home and for how long, which electric appliances are used when they are home, the presence of security and alarm systems, and any special conditions (e.g., in-home medical equipment, baby appliances, etc.).
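To illustrate that concern, below is a minimal sketch in Python -- with fabricated readings and an assumed usage threshold, not any utility's actual analytics -- of how 15-minute interval data could suggest when someone is home and active.

    # A minimal sketch of occupancy inference from smart meter interval data.
    # The readings and the 0.4 kWh "active use" threshold are made-up assumptions.
    from datetime import datetime, timedelta

    readings = [  # (timestamp, kWh used in the preceding 15 minutes) -- fabricated data
        (datetime(2018, 6, 1, 6, 0) + timedelta(minutes=15 * i), kwh)
        for i, kwh in enumerate([0.05, 0.06, 0.35, 0.60, 0.55, 0.07, 0.06, 0.50, 0.65, 0.70])
    ]

    ACTIVE_KWH = 0.4  # assumed: usage above this suggests someone is home and active

    print("Intervals suggesting someone was home and active:")
    for ts, kwh in readings:
        if kwh > ACTIVE_KWH:
            print(" ", ts.strftime("%Y-%m-%d %H:%M"), f"({kwh} kWh)")

Even this toy example shows how a year of stored interval data, transmitted every 15 minutes, becomes a detailed diary of household routines.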

Other states are considering similar measures. The Kentucky Public Service Commission (PSC) will hold a public meeting on July 9th and accept public comments about planned smart meter deployments by Kentucky Utilities Co. (KU) and Louisville Gas & Electric Company (LG&E). Smart meters are also being deployed in New Jersey.

When Maryland lawmakers considered legislation to provide law enforcement with access to consumers' smart meters, the Electronic Privacy Information Center (EPIC) responded with a January 16, 2018 letter outlining the privacy concerns:

"HB 56 is a sensible and effective response to an emerging privacy issue facing Maryland residents. Smart meters collect detailed personal data about the use of utility services. With a smart meter, it is possible to determine when a person is in a residence, and what they are doing. Moreover the routine collection of this data, without adequate privacy safeguards, would enable ongoing surveillance of Maryland residents without regard to any criminal suspicion."

"HB 56 does not prevent law enforcement use of data generated by smart meters; it simply requires that law enforcement follow clear procedures, subject to judicial oversight, to access the data generated by smart meters. HB 56 is an example of a model privacy law that enables innovation while safeguarding personal privacy."

That's a worthy goal of government: balance the competing needs of the business sector to innovate while protecting consumers' privacy. Is a medical opt-out sufficient? Should Fourth Amendment constitutional concerns apply? What are your opinions?


Google To Exit Weaponized Drone Contract And Pursue Other Defense Projects

Last month, current and former Google employees, plus academic researchers, protested, citing ethical and transparency concerns about the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people. Gizmodo reported that Google plans not to renew its contract for Project Maven:

"Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract... The company plans to unveil new ethical principles about its use of AI this week... Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft... Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion."


FBI Warns Sophisticated Malware Targets Wireless Routers In Homes And Small Businesses

The U.S. Federal Bureau of Investigation (FBI) issued a Public Service Announcement (PSA) warning consumers and small businesses that "foreign cyber actors" have targeted their wireless routers. The May 25th PSA explained the threat:

"The actors used VPNFilter malware to target small office and home office routers. The malware is able to perform multiple functions, including possible information collection, device exploitation, and blocking network traffic... The malware targets routers produced by several manufacturers and network-attached storage devices by at least one manufacturer... VPNFilter is able to render small office and home office routers inoperable. The malware can potentially also collect information passing through the router. Detection and analysis of the malware’s network activity is complicated by its use of encryption and misattributable networks."

The "VPN" acronym usually refers to a Virtual Private Network. Why use the VPNfilter name for a sophisticated computer virus? Wired magazine explained:

"... the versatile code is designed to serve as a multipurpose spy tool, and also creates a network of hijacked routers that serve as unwitting VPNs, potentially hiding the attackers' origin as they carry out other malicious activities."

The FBI's PSA advised users to: a) reboot (i.e., turn off and then back on) their routers; b) disable remote management features, which attackers could exploit to gain access; and c) update their routers with the latest software and security patches. For routers purchased independently, security experts advise consumers to contact the router manufacturer's tech support or customer service site.

For routers leased or purchased from an internet service provider (ISP), consumers should contact their ISP's customer service or technical support department for software updates and security patches. Example: the Verizon FiOS forums list the brands and models affected by the VPNFilter malware, since several manufacturers produce routers for the Verizon FiOS service.

It is critical for consumers to heed this PSA. The New York Times reported:

"An analysis by Talos, the threat intelligence division for the tech giant Cisco, estimated that at least 500,000 routers in at least 54 countries had been infected by the [VPNfilter] malware... A global network of hundreds of thousands of routers is already under the control of the Sofacy Group, the Justice Department said last week. That group, which is also known as A.P.T. 28 and Fancy Bear and believed to be directed by Russia’s military intelligence agency... To disrupt the Sofacy network, the Justice Department sought and received permission to seize the web domain toknowall.com, which it said was a critical part of the malware’s “command-and-control infrastructure.” Now that the domain is under F.B.I. control, any attempts by the malware to reinfect a compromised router will be bounced to an F.B.I. server that can record the I.P. address of the affected device..."

Readers wanting technical details about VPNFilter should read the Talos Intelligence blog post.

When consumers contact their ISP about router software updates, it is wise to also inquire about security patches for the KRACK vulnerability, which bad actors have exploited recently. Example: the Verizon site also provides information about KRACK.

The latest threat provides several strong reminders:

  1. The conveniences of wireless internet connectivity, which consumers demand and enjoy, also benefit the bad guys,
  2. The bad guys are persistent and will continue to target internet-connected devices with weak or no protection, including devices consumers fail to protect,
  3. Wireless benefits come with a responsibility for consumers to shop wisely for internet-connected devices featuring easy, continual software updates and security patches. Otherwise, that shiny new device you recently purchased is nothing more than an expensive "brick," and
  4. Manufacturers have a responsibility to provide consumers with easy, continual software updates and security patches for the internet-connected devices they sell.

What are your opinions of the VPNFilter malware? What has been your experience with securing your wireless home router?


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns about the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people. Reportedly, these are the first known mass resignations.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor want -- that their personal e-mail, photos, calendars, videos, social media activity, map usage, and more would be used for automated military applications. What are your opinions?


Report: Software Failure In Fatal Accident With Self-Driving Uber Car

TechCrunch reported:

"The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death."

The TechCrunch story provides details about which software subsystem the report said failed.

Not good.

So, autonomous or self-driving cars are only as good as the software they're programmed with (and how well that software is maintained). Anyone who has used computers during the last couple of decades has probably experienced software glitches, bugs, and failures. It happens.

This latest incident suggests self-driving cars aren't yet ready. What do you think?


Amazon's Virtual Assistant Randomly Laughs. A Fix Is Underway

You may have read or viewed news reports about random, loud laughter by Amazon's virtual assistant products. Some users reported that the laughter was unprompted and in a different voice from the standard Alexa voice. Many users were understandably spooked.

Clearly, there is a problem. According to BuzzFeed, Amazon is aware of the problem and replied to its inquiry with this statement:

"In rare circumstances, Alexa can mistakenly hear the phrase 'Alexa, laugh.' We are changing that phrase to be 'Alexa, can you laugh?' which is less likely to have false positives, and we are disabling the short utterance 'Alexa, laugh.' We are also changing Alexa’s response from simply laughter to 'Sure, I can laugh,' followed by laughter..."

Hopefully, that will fix the #AlexaLaugh bug. No doubt, there will be more news to come about this.


Security Experts: Artificial Intelligence Is Ripe For Misuse By Bad Actors

Over the years, bad actors (e.g., criminals, terrorists, rogue states, ethically-challenged business executives) have used a variety of online technologies to remotely hack computers, track users online without consent or notice, and circumvent the privacy settings consumers have set on their internet-connected devices. During the past year or two, reports surfaced about bad actors using advertising and social networking technologies to sway public opinion.

Security researchers and experts have warned in a new report that two of the newest technologies can also be used maliciously:

"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis... Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."

Companies currently use or test artificial intelligence (A.I.) to automate mundane tasks, upgrade and improve existing automated processes, and/or personalize employee (and customer) experiences in a variety of applications and business functions, including sales, customer service, and human resources. "Machine learning" refers to the development of digital systems to improve the performance of a task using experience. Both are part of a business trend often referred to as "digital transformation" or the "intelligent workplace." The CXO Talk site, featuring interviews with business leaders and innovators, is a good resource to learn more about A.I. and digital transformation.

A survey last year of employees in the USA, France, Germany, and the United Kingdom found that they "see A.I. as the technology that will cause the most disruption to the workplace." The survey also found that 70 percent of employees surveyed expect A.I. to impact their jobs during the next ten years, half expect impacts within the next three years, and about a third see A.I. as a job creator.

This new report was authored by 26 security experts from a variety of educational institutions including American University, Stanford University, Yale University, the University of Cambridge, the University of Oxford, and others. The report cited three general ways bad actors could misuse A.I.:

"1. Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.

2. Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.

3. Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems."

So, A.I. could make it easier for the bad guys to automate labor-intensive cyber-attacks such as spear-phishing. The bad guys could also create new cyber-attacks by combining A.I. with speech synthesis. The authors of the report cited examples of more threats:

"The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones)... The use of AI to automate tasks involved in surveillance (e.g. analyzing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation..."

BBC News reported even more possible threats:

"Technologies such as AlphaGo - an AI developed by Google's DeepMind and able to outwit human Go players - could be used by hackers to find patterns in data and new exploits in code. A malicious individual could buy a drone and train it with facial recognition software to target a certain individual. Bots could be automated or "fake" lifelike videos for political manipulation. Hackers could use speech synthesis to impersonate targets."

From all of this, one can conclude that the 2016 election interference cited by intelligence officials is probably mild compared to what will come: more serious, sophisticated, and numerous attacks. The report included four high-level recommendations:

"1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges."

Download the 101-page report titled "The Malicious Use Of Artificial Intelligence: Forecasting, Prevention, And Mitigation." A copy of the report is also available here (Adobe PDF; 1,400 kilobytes).

To prepare, both corporate and government executives would be wise to both harden their computer networks and (re)train their employees to recognize and guard against cyber attacks. What do you think?


Fitness Device Usage By U.S. Soldiers Reveals Sensitive Location And Movement Data

Useful technology can often have unintended consequences. The Washington Post reported about an interactive map:

"... posted on the Internet that shows the whereabouts of people who use fitness devices such as Fitbit also reveals highly sensitive information about the locations and activities of soldiers at U.S. military bases, in what appears to be a major security oversight. The Global Heat Map, published by the GPS tracking company Strava, uses satellite information to map the locations and movements of subscribers to the company’s fitness service over a two-year period, by illuminating areas of activity. Strava says it has 27 million users around the world, including people who own widely available fitness devices such as Fitbit and Jawbone, as well as people who directly subscribe to its mobile app. The map is not live — rather, it shows a pattern of accumulated activity between 2015 and September 2017... The U.S.-led coalition against the Islamic State said on Monday it is revising its guidelines on the use of all wireless and technological devices on military facilities as a result of the revelations. "

Takeaway #1: it's easier than you might think for the bad guys to track the locations and movements of high-value targets (e.g., soldiers, corporate executives, politicians, attorneys).

Takeaway #2: unintended consequences from mobile devices are not new, as CNN reported in 2015. Consumers love the convenience of their digital devices. It is wise to remember the warning from a famous economist: "There's no such thing as a free lunch."


GoPro Lays Off Workers And Exits Drone Business


TechCrunch reported that GoPro, the mobile digital camera maker:

"... plans to reduce its headcount in 2018 from 1,254 employees to fewer than 1,000. It also plans to exit the drone market and reduce CEO 2018 compensation to $1... Last week TechCrunch reported exclusively on the firings with sources telling us several hundred employees were relieved of duties though officially kept on the books until the middle of February. We were told that the bulk of the layoffs happened in the engineering department of the Karma drone... Though GoPro is clearly done producing the Karma drone, it says it intends to continue to provide service and support to Karma customers."

Reportedly, GoPro's earnings announcement projected fourth-quarter revenues of $340 million, down 37 percent from 2016. At press time, the "Shop Now" button for Karma drones was still active. It seems the company is selling off its remaining drone inventory.


Google Photos: Still Blind After All These Years

Earlier today, Wired reported:

"In 2015, a black software developer embarrassed Google by tweeting that the company’s Photos service had labeled photos of him with a black friend as "gorillas." Google declared itself "appalled and genuinely sorry." An engineer who became the public face of the clean-up operation said the label gorilla would no longer be applied to groups of images, and that Google was "working on longer-term fixes."

More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology... WIRED tested Google Photos using a collection of 40,000 images well-stocked with animals. It performed impressively at finding many creatures, including pandas and poodles. But the service reported "no results" for the search terms "gorilla," "chimp," "chimpanzee," and "monkey."

This is the best image-recognition fix Google can manage, while it also wants consumers to trust the software in its driverless vehicles? Geez. #fubar


Smart Lock Maker Suspends Operations

Otto, a smart lock maker, has suspended operations. Sam Jadallah, the firm's CEO, announced the suspension just before the Consumer Electronics Show (CES). TechCrunch reported:

"The company made the decision just ahead of the holidays, a fact that founder and CEO Sam Jadallah recently made public with a lengthy Medium post now pinned to the top of the startup’s site... Jadallah told TechCrunch that the company’s lock made it as far as the manufacturing process, and is currently sitting in a warehouse, unable to be sold by a hardware startup that is effectively no longer operating... The long and short of it is that the company was about to be acquired by someone with a lot more resources and experience in bringing a product to market, only to have the rug apparently pulled out at the last minute..."

The digital door lock market includes a variety of types and technologies, such as biometrics, face recognition, iris recognition, palm recognition, voice recognition, fingerprint recognition, keypad locks, and magnetic stripe locks. Consumer Reports rated both door locks and smart locks.

Several digital locks are available at online retail sites, including products by August, Brilong, Kwikset, Samsung, and several other makers.


The Limitations And Issues With Facial Recognition Software

We've all seen television shows where police technicians use facial recognition software to swiftly and accurately identify suspects, or catch the bad guys. How accurate is that? An article in The Guardian newspaper discussed the promises, limitations, and issues with facial recognition software used by law enforcement:

"The software, which has taken an expanding role among law enforcement agencies in the US over the last several years, has been mired in controversy because of its effect on people of color. Experts fear that the new technology may actually be hurting the communities the police claims they are trying to protect... "It’s considered an imperfect biometric," said Clare Garvie, who in 2016 created a study on facial recognition software, published by the Center on Privacy and Technology at Georgetown Law, called The Perpetual Line-Up. "There’s no consensus in the scientific community that it provides a positive identification of somebody"... [Garvie's] report found that black individuals, as with so many aspects of the justice system, were the most likely to be scrutinized by facial recognition software in cases. It also suggested that software was most likely to be incorrect when used on black individuals – a finding corroborated by the FBI's own research. This combination, which is making Lynch’s and other black Americans’ lives excruciatingly difficult, is born from another race issue that has become a subject of national discourse: the lack of diversity in the technology sector... According to a 2011 study by the National Institute of Standards and Technologies (Nist), facial recognition software is actually more accurate on Asian faces when it’s created by firms in Asian countries, suggesting that who makes the software strongly affects how it works... Law enforcement agencies often don’t review their software to check for baked-in racial bias – and there aren’t laws or regulations forcing them to."


Report: Several Impacts From Technology Changes Within The Financial Services Industry

For better or worse, the type of smart device you use can identify you in ways you may not expect. First, a report by London-based Privacy International highlighted the changes within the financial services industry:

"Financial services are changing, with technology being a key driver. It is affecting the nature of financial services from credit and lending through to insurance and even the future of money itself. The field known as “fintech” is where the attention and investment is flowing. Within it, new sources of data are being used by existing institutions and new entrants. They are using new forms of data analysis. These changes are significant to this sector and the lives of the people it serves. We are seeing dramatic changes in the ways that financial products make decisions. The nature of the decision-making is changing, transforming the products in the market and impacting on end results and bottom lines. However, this also means that treatment of individuals will change. This changing terrain of finance has implications for human rights, privacy and identity... Data that people would consider as having nothing to do with the financial sphere, such as their text-messages, is being used at an increasing rate by the financial sector...  Yet protections are weak or absent... It is essential that these innovations are subject to scrutiny... Fintech covers a broad array of sectors and technologies. A non-exhaustive list includes:

  • Alternative credit scoring (new data sources for credit scoring)
  • Payments (new ways of paying for goods and services that often have implications for the data generated)
  • Insurtech (the use of technology in the insurance sector)
  • Regtech (the use of technology to meet regulatory requirements)."

"Similarly, a breadth of technologies are used in the sector, including: Artificial Intelligence; Blockchain; the Internet of Things; Telematics and connected cars..."

While the study focused upon India and Kenya, it has implications for consumers worldwide. More observations and concerns:

"Social media is another source of data for companies in the fintech space. However, decisions are made not on just on the content of posts, but rather social media is being used in other ways: to authenticate customers via facial recognition, for instance... blockchain, or distributed ledger technology, is still best known for cryptocurrencies like BitCoin. However, the technology is being used more broadly, such as the World Bank-backed initiative in Kenya for blockchain-backed bonds10. Yet it is also used in other fields, like the push in digital identities11. A controversial example of this was a very small-scale scheme in the UK to pay benefits using blockchain technology, via an app developed by the fintech GovCoin12 (since renamed DISC). The trial raised concerns, with the BBC reporting a former member of the Government Digital Service describing this as "a potentially efficient way for Department of Work and Pensions to restrict, audit and control exactly what each benefits payment is actually spent on, without the government being perceived as a big brother13..."

Many consumers know that you can buy a wide variety of internet-connected devices for your home. That includes both devices you'd expect (e.g., televisions, printers, smart speakers and assistants, security systems, door locks and cameras, utility meters, hot water heaters, thermostats, refrigerators, robotic vacuum cleaners, lawn mowers) and devices you might not expect (e.g., sex toys, smart watches for children, mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins). Add your car or truck to the list:

"With an increasing number of sensors being built into cars, they are increasingly “connected” and communicating with actors including manufacturers, insurers and other vehicles15. Insurers are making use of this data to make decisions about the pricing of insurance, looking for features like sharp acceleration and braking and time of day16. This raises privacy concerns: movements can be tracked, and much about the driver’s life derived from their car use patterns..."

And, there are hidden prices for the convenience of making payments with your favorite smart device:

"The payments sector is a key area of growth in the fintech sector: in 2016, this sector received 40% of the total investment in fintech22. Transactions paid by most electronic means can be tracked, even those in physical shops. In the US, Google has access to 70% of credit and debit card transactions—through Google’s "third-party partnerships", the details of which have not been confirmed23. The growth of alternatives to cash can be seen all over the world... There is a concerted effort against cash from elements of the development community... A disturbing aspect of the cashless debate is the emphasis on the immorality of cash—and, by extension, the immorality of anonymity. A UK Treasury minister, in 2012, said that paying tradesman by cash was "morally wrong"26, as it facilitated tax avoidance... MasterCard states: "Contrary to transactions made with a MasterCard product, the anonymity of digital currency transactions enables any party to facilitate the purchase of illegal goods or services; to launder money or finance terrorism; and to pursue other activity that introduces consumer and social harm without detection by regulatory or police authority."27"

The report cited a loss of control by consumers over their personal information. Going forward, the report included general and actor-specific recommendations. General recommendations:

  • "Protecting the human right to privacy should be an essential element of fintech.
  • Current national and international privacy regulations should be applicable to fintech.
  • Customers should be at the centre of fintech, not their product.
  • Fintech is not a single technology or business model. Any attempt to implement or regulate fintech should take these differences into account, and be based on the type activities they perform, rather than the type of institutions involved."

Want to learn more? Follow Privacy International on Facebook, on Twitter, or read about 10 ways of "Invisible Manipulation" of consumers.


German Regulator Bans Smartwatches For Children

Parents: considering a smartwatch for your children or grandchildren? Consider the privacy implications first. Bleeping Computer reported on Friday:

"Germany's Federal Network Agency (Bundesnetzagentur), the country's telecommunications agency, has banned the sale of children's smartwatches after it classified such devices as "prohibited listening devices." The ban was announced earlier today... parents are using their children's smartwatches to listen to teachers in the classroom. Recording or listening to private conversations is against the law in Germany without the permission of all recorded persons."

Some smartwatches are designed for children as young as four years of age. Several brands are available at online retailers, such as Amazon and Best Buy.

Why the ban? Gizmodo explained:

"Saying the technology more closely resembles a “spying device” than a toy... Last month, the European Consumer Organization (BEUC) warned that smartwatches marketed to kids were a serious threat to children’s privacy. A report published by the Norwegian Consumer Council in mid-October revealed serious flaws in several of the devices that could easily allow hackers to seize control. "

Clearly, this is another opportunity for parents to carefully research and consider smart device purchases for their family, to teach their children about privacy, and to not record persons without their permission.


Security Experts: Massive Botnet Forming. A 'Botnet Storm' Coming

Online security experts have detected a massive botnet -- a network of compromised "zombie" devices -- forming. Its operator and purpose are both unknown. Check Point Software Technologies, a cyber security firm, warned in a blog post that its researchers:

"... had discovered of a brand new Botnet evolving and recruiting IoT devices at a far greater pace and with more potential damage than the Mirai botnet of 2016... Ominous signs were first picked up via Check Point’s Intrusion Prevention System (IPS) in the last few days of September. An increasing number of attempts were being made by hackers to exploit a combination of vulnerabilities found in various IoT devices.

With each passing day the malware was evolving to exploit an increasing number of vulnerabilities in Wireless IP Camera devices such as GoAhead, D-Link, TP-Link, AVTECH, NETGEAR, MikroTik, Linksys, Synology and others..."

Reportedly, the botnet has been named either "Reaper" or "IoTroop." The McClatchy news wire reported:

"A Chinese cybersecurity firm, Qihoo 360, says the botnet is swelling by 10,000 devices a day..."

Criminals use malware or computer viruses to add weakly protected or insecure internet-connected devices (commonly referred to as the Internet of Things, or IoT) in homes and businesses to the botnet. Then, criminals use botnets to overwhelm a targeted website with page requests. This type of attack, called a Distributed Denial of Service (DDoS) attack, prevents valid users from accessing the targeted site, knocking it offline. If the attack is large enough, it can disable large portions of the Internet.

A version of the attack could also include a ransom demand, where the criminals will stop the attack only after a large cash payment by the targeted company or website. With multiple sites targeted, either version of the cyber attack could have huge, negative impacts upon businesses and users.

How bad was the Mirai botnet? According to the US-CERT unit within the U.S. Department of Homeland Security:

"On September 20, 2016, Brian Krebs’ security blog was targeted by a massive DDoS attack, one of the largest on record... The Mirai malware continuously scans the Internet for vulnerable IoT devices, which are then infected and used in botnet attacks. The Mirai bot uses a short list of 62 common default usernames and passwords to scan for vulnerable devices... The purported Mirai author claimed that over 380,000 IoT devices were enslaved by the Mirai malware in the attack..."

Wired reported last year that after the attack on Krebs' blog, the Mirai botnet:

"... managed to make much of the internet unavailable for millions of people by overwhelming Dyn, a company that provides a significant portion of the US internet's backbone... Mirai disrupted internet service for more than 900,000 Deutsche Telekom customers in Germany, and infected almost 2,400 TalkTalk routers in the UK. This week, researchers published evidence that 80 models of Sony cameras are vulnerable to a Mirai takeover..."

The Wired report also explained the difficulty with identifying and cleaning infected devices:

"One reason Mirai is so difficult to contain is that it lurks on devices, and generally doesn't noticeably affect their performance. There's no reason the average user would ever think that their webcam—or more likely, a small business's—is potentially part of an active botnet. And even if it were, there's not much they could do about it, having no direct way to interface with the infected product."

If this seems scary, it is. The coming botnet storm has the potential to do lots of damage.

So, a word to the wise. Experts advise consumers to: a) disconnect the device from your network and reboot it before re-connecting it to the internet, b) buy internet-connected devices that support security software updates, c) change the passwords on your devices from the defaults to strong passwords, d) update the operating system (OS) software on your devices with security patches as soon as they are available, e) keep the anti-virus software on your devices current, and f) regularly back up the data on your devices.
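For item (c), "strong" means long and random. Here is a minimal sketch using Python's standard secrets module; the 20-character length and symbol set are illustrative assumptions, not a formal standard.

    # A minimal sketch of generating a strong replacement for a default password.
    # The length and character set are illustrative choices.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

    def strong_password(length: int = 20) -> str:
        """Return a cryptographically random password of the given length."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(strong_password())

A password like this defeats the default-credential lists that botnets such as Mirai rely on, provided it is stored somewhere safe, such as a password manager.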

US-CERT also advised consumers to:

"Disable Universal Plug and Play (UPnP) on routers unless absolutely necessary. Purchase IoT devices from companies with a reputation for providing secure devices... Understand the capabilities of any medical devices intended for at-home use. If the device transmits data or can be operated remotely, it has the potential to be infected."


Hacked Butt Plug Highlights Poor Security Of Many Mobile Devices


In a blog post on Tuesday, security researcher Giovanni Mellini discussed how easy it was to hack a Bluetooth-enabled butt plug. Why this internet-connected sex toy? Mellini explained that, in what started as a joke, he had bought a few weeks ago:

"... a Bluetooth Low Energy (BLE) butt plug to test the (in)security of BLE protocol. This caught my attention after researchers told us that a lot of sex toys use this protocol to allow remote control that is insecure by design."

Another security researcher, Simone Margaritelli, had previously discussed a BLE scanner he wrote called BLEAH, and how to use it to hack BLE-connected devices. Mellini sought to replicate Margaritelli's hack, and was successful:

"The butt plug can be remotely controlled with a mobile application called Lovense Remote (download here). With jadx you can disassemble the java application and find the Bluetooth class used to control the device. Inside you can find the strings to be sent to the toy to start vibration... So we have all the elements to hack the sex toy with BLEAH... At the end is very easy to hack BLE protocol due to poor design choices. Welcome to 2017."

Welcome, indeed, to 2017. This seems to be the year of hacked mobile devices. There have been too many news reports about devices with poor (or no) security: the encryption flaw in many home wireless routers and devices, patched Macs still vulnerable to firmware hacks, a robovac maker's plans to resell the interior home maps its devices created, a smart vibrator maker that paid hefty fines to settle allegations it tracked users without their knowledge or consent, security researchers hacking a popular smart speaker, and a bungled software update that bricked many customers' smart door locks.

In 2016, security researchers hacked an internet-connected vibrator.

And those are only some of the reports. All of this runs counter to consumers' needs. In August, a survey of consumers in six countries found that 90 percent believe it is important for smart devices to have security built in. Are device makers listening?

Newsweek reported:

"Lovense did not immediately respond to a request for comment from Newsweek but the sex toy company has spoken previously about the security of its products. "There are three layers of security," Lovense said in a statement last year. "The server side, the way we transfer information from the user’s phone to our server and on the client side. We take our customer’s private data very seriously, which is why we don’t serve any on our servers." "

I have nothing against sex toys. Use one or not. I don't care. My concern: supposedly smart devices should have robust security to protect consumers' privacy.

Smart shoppers want persons they authorize -- and not unknown hackers -- to remotely control their vibrators. Thoughts? Comments?


Experts Find Security Flaw In Wireless Encryption Software. Most Mobile Devices At Risk

Researchers have found a new security vulnerability that places most computers, smartphones, and wireless routers at risk. The vulnerability allows hackers to decrypt and eavesdrop on victims' wireless network traffic, plus inject content (e.g., malware) into users' wireless data streams. ZDNet reported yesterday:

"The bug, known as "KRACK" for Key Reinstallation Attack, exposes a fundamental flaw in WPA2, a common protocol used in securing most modern wireless networks. Mathy Vanhoef, a computer security academic, who found the flaw, said the weakness lies in the protocol's four-way handshake, which securely allows new devices with a pre-shared password to join the network... The bug represents a complete breakdown of the WPA2 protocol, for both personal and enterprise devices -- putting every supported device at risk."

Reportedly, the vulnerability was confirmed on Monday by U.S. Homeland Security's cyber-emergency unit US-CERT, which had warned vendors about two months ago.

What should consumers do? Experts advise consumers to update the software in all mobile devices connected to their home wireless router. Obviously, that means first contacting the maker of your home wireless router, or your Internet Service Provider (ISP), for software patches to fix the security vulnerability.

ZDNet also reported that the security flaw:

"... could also be devastating for IoT devices, as vendors often fail to implement acceptable security standards or update systems in the supply chain, which has already led to millions of vulnerable and unpatched Internet-of-things (IoT) devices being exposed for use by botnets."

So, plenty of home devices must also be updated. That includes both devices you'd expect (e.g., televisions, printers, smart speakers and assistants, security systems, door locks and cameras, utility meters, hot water heaters, thermostats, refrigerators, robotic vacuum cleaners, lawn mowers) and devices you might not expect (e.g., mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins). One "price" of wireless convenience is the responsibility for consumers and device makers to continually update the security software in internet-connected devices. Nobody wants their home router and devices participating in scammers' and fraudsters' botnets with malicious software.

ZDNet also listed software patches by vendor. And:

"In general, Windows and newer versions of iOS are unaffected, but the bug can have a serious impact on Android 6.0 Marshmallow and newer... At the time of writing, neither Toshiba and Samsung responded to our requests for comment..."

Hopefully, all of the internet-connected devices in your home provide for software updates. If not, then you probably have some choices ahead: whether to keep that device or upgrade to a more secure one. Comments?


Report: Patched Macs Still Vulnerable To Firmware Hacks

I've heard the erroneous assumption from consumers numerous times: "Apple-branded devices don't get computer viruses." Well, they do. Ars Technica reported about a particularly nasty hack of vulnerabilities in devices' Extensible Firmware Interface (EFI). Never heard of EFI? Well:

"An analysis by security firm Duo Security of more than 73,000 Macs shows that a surprising number remained vulnerable to such attacks even though they received OS updates that were supposed to patch the EFI firmware. On average, 4.2 percent of the Macs analyzed ran EFI versions that were different from what was prescribed by the hardware model and OS version. 47 Mac models remained vulnerable to the original Thunderstrike, and 31 remained vulnerable to Thunderstrike 2. At least 16 models received no EFI updates at all. EFI updates for other models were inconsistently successful, with the 21.5-inch iMac released in late 2015 topping the list, with 43 percent of those sampled running the wrong version."

This is very bad. EFI hacks are particularly effective and nasty because:

"... they give attackers control that starts with the very first instruction a Mac receives... the level of control attackers get far exceeds what they gain by exploiting vulnerabilities in the OS... That means an attacker who compromises a computer's EFI can bypass higher-level security controls, such as those built into the OS or, assuming one is running for extra protection, a virtual machine hypervisor. An EFI infection is also extremely hard to detect and even harder to remedy, as it can survive even after a hard drive is wiped or replaced and a clean version of the OS is installed."

EFI vulnerabilities aren't limited to Macs; devices running the Windows and Linux operating systems can also be at risk. Reportedly, the exploit requires plenty of computing resources and technical expertise, so hackers would probably pursue high-value targets (e.g., journalists, attorneys, government officials, contractors with government clearances) first.
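For the curious, here is a rough sketch of the kind of check Duo describes: comparing the firmware version a Mac reports against the version expected for its model. The expected-version table below is hypothetical, and the field name printed by system_profiler varies across macOS releases, so treat this as an illustration only.

    # A rough sketch of auditing a Mac's firmware (Boot ROM / EFI) version.
    # The EXPECTED table is hypothetical; real values come from Apple's update bundles.
    import re
    import subprocess

    EXPECTED = {"iMac16,2": "IM162.0208.B01"}  # hypothetical model -> expected firmware version

    def field(text: str, name: str) -> str:
        match = re.search(rf"{name}:\s*(.+)", text)
        return match.group(1).strip() if match else ""

    info = subprocess.run(
        ["system_profiler", "SPHardwareDataType"],
        capture_output=True, text=True, check=True,
    ).stdout

    model = field(info, "Model Identifier")
    firmware = field(info, "Boot ROM Version") or field(info, "System Firmware Version")

    expected = EXPECTED.get(model)
    if expected and firmware and firmware != expected:
        print(f"{model}: running firmware {firmware}, expected {expected} -- possibly unpatched")
    else:
        print(f"{model}: firmware {firmware} (no mismatch found against the sample table)")

A mismatch like the one flagged above is what Duo found on 4.2 percent of the Macs it analyzed, even after those machines had applied OS updates.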

The Duo Labs Report (63 pages, Adobe PDF) lists the specific MacBook, MacBookAir, and MacBookPro models at risk. The researchers shared a draft of the report with Apple before publication. The report's "Mitigation" section provides solutions, including but not limited to:

"Always deploy the full update package as released by Apple, do not remove separate packages from the bundle updater... When possible, deploy Combo OS updates instead of Delta updates... As a general rule of thumb, always run the latest version of macOS..."

Scary, huh? The nature of the attack means that hackers probably can disable the anti-virus software on your device(s), and you probably wouldn't know you've been hacked.


Experts Call For Ban of Killer Robotic Weapons

A group of 116 robotics and artificial intelligence experts from 26 countries sent a letter to the United Nations (UN) warning against the deployment of lethal autonomous weapons. The Guardian reported:

"The UN recently voted to begin formal discussions on such weapons which include drones, tanks and automated machine guns... In their letter, the [experts] warn the review conference of the convention on conventional weapons that this arms race threatens to usher in the “third revolution in warfare” after gunpowder and nuclear arms... The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, has the backing of high-profile figures in the robotics field and strongly stresses the need for urgent action..."

The letter stated in part:

"Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

"We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."

This is not science fiction. Autonomous weapons are already deployed:

"Samsung’s SGR-A1 sentry gun, which is reportedly technically capable of firing autonomously but is disputed whether it is deployed as such, is in use along the South Korean border of the 2.5m-wide Korean Demilitarized Zone. The fixed-place sentry gun, developed on behalf of the South Korean government, was the first of its kind with an autonomous system capable of performing surveillance, voice-recognition, tracking and firing with mounted machine gun or grenade launcher... The UK’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and incorporating full autonomy..."

Ban, indeed. Your thoughts? Opinions? Reaction?