79 posts categorized "Internet of Things"

GoPro Lays Off Workers And Exits Drone Business

Image of the GoPro Karma drone.

TechCrunch reported that GoPro, the action camera maker:

"... plans to reduce its headcount in 2018 from 1,254 employees to fewer than 1,000. It also plans to exit the drone market and reduce CEO 2018 compensation to $1... Last week TechCrunch reported exclusively on the firings with sources telling us several hundred employees were relieved of duties though officially kept on the books until the middle of February. We were told that the bulk of the layoffs happened in the engineering department of the Karma drone... Though GoPro is clearly done producing the Karma drone, it says it intends to continue to provide service and support to Karma customers."

Reportedly, GoPro's earnings announcement projected fourth-quarter revenues of $340 million, down 37 percent from 2016. At press time, the "Shop Now" button for Karma drones was still active. It seems the company is selling off its remaining drone inventory.


Google Photos: Still Blind After All These Years

Earlier today, Wired reported:

"In 2015, a black software developer embarrassed Google by tweeting that the company’s Photos service had labeled photos of him with a black friend as "gorillas." Google declared itself "appalled and genuinely sorry." An engineer who became the public face of the clean-up operation said the label gorilla would no longer be applied to groups of images, and that Google was "working on longer-term fixes."

More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology... WIRED tested Google Photos using a collection of 40,000 images well-stocked with animals. It performed impressively at finding many creatures, including pandas and poodles. But the service reported "no results" for the search terms "gorilla," "chimp," "chimpanzee," and "monkey."
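The "fix" Wired describes amounts to suppressing certain labels from the classifier's output rather than improving the underlying model. Below is a minimal sketch of such a post-processing blocklist, assuming a generic classifier that returns (label, confidence) pairs; the function name and labels are illustrative, not Google's actual code.

```python
# Hypothetical post-processing filter: drop blocklisted labels from a
# classifier's predictions instead of fixing the underlying model.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}  # per the Wired test

def filter_predictions(predictions):
    """predictions: list of (label, confidence) pairs from an image classifier."""
    return [(label, conf) for label, conf in predictions
            if label.lower() not in BLOCKED_LABELS]

# Example: the model still "sees" the animal, but the label never reaches the user.
print(filter_predictions([("gorilla", 0.97), ("grass", 0.61)]))  # -> [('grass', 0.61)]
```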

Is this really the best image-recognition fix Google can deliver, while it also asks consumers to trust the software in its driverless vehicles? Geez. #fubar


Smart Lock Maker Suspends Operations

Otto, a smart lock maker, has suspended operations. Sam Jadallah, the firm's CEO, announced the suspension just before the Consumer Electronics Show (CES). TechCrunch reported:

"The company made the decision just ahead of the holidays, a fact that founder and CEO Sam Jadallah recently made public with a lengthy Medium post now pinned to the top of the startup’s site... Jadallah told TechCrunch that the company’s lock made it as far as the manufacturing process, and is currently sitting in a warehouse, unable to be sold by a hardware startup that is effectively no longer operating... The long and short of it is that the company was about to be acquired by someone with a lot more resources and experience in bringing a product to market, only to have the rug apparently pulled out at the last minute..."

The digital door lock market includes a variety of types and technologies, such as biometrics, face recognition, iris recognition, palm recognition, voice recognition, fingerprint recognition, keypad locks, and magnetic stripe locks. Consumer Reports rated both door locks and smart locks.

Several digital locks are available at online retail sites, including products by August, Brilong, Kwikset, Samsung, and several other makers.


The Limitations And Issues With Facial Recognition Software

We've all seen television shows where police technicians use facial recognition software to swiftly and accurately identify suspects, or catch the bad guys. How accurate is that? An article in The Guardian newspaper discussed the promises, limitations, and issues with facial recognition software used by law enforcement:

"The software, which has taken an expanding role among law enforcement agencies in the US over the last several years, has been mired in controversy because of its effect on people of color. Experts fear that the new technology may actually be hurting the communities the police claims they are trying to protect... "It’s considered an imperfect biometric," said Clare Garvie, who in 2016 created a study on facial recognition software, published by the Center on Privacy and Technology at Georgetown Law, called The Perpetual Line-Up. "There’s no consensus in the scientific community that it provides a positive identification of somebody"... [Garvie's] report found that black individuals, as with so many aspects of the justice system, were the most likely to be scrutinized by facial recognition software in cases. It also suggested that software was most likely to be incorrect when used on black individuals – a finding corroborated by the FBI's own research. This combination, which is making Lynch’s and other black Americans’ lives excruciatingly difficult, is born from another race issue that has become a subject of national discourse: the lack of diversity in the technology sector... According to a 2011 study by the National Institute of Standards and Technologies (Nist), facial recognition software is actually more accurate on Asian faces when it’s created by firms in Asian countries, suggesting that who makes the software strongly affects how it works... Law enforcement agencies often don’t review their software to check for baked-in racial bias – and there aren’t laws or regulations forcing them to."


Report: Several Impacts From Technology Changes Within The Financial Services Industry

For better or worse, the type of smart device you use can identify you in ways you may not expect. First, a report by London-based Privacy International highlighted the changes within the financial services industry:

"Financial services are changing, with technology being a key driver. It is affecting the nature of financial services from credit and lending through to insurance and even the future of money itself. The field known as “fintech” is where the attention and investment is flowing. Within it, new sources of data are being used by existing institutions and new entrants. They are using new forms of data analysis. These changes are significant to this sector and the lives of the people it serves. We are seeing dramatic changes in the ways that financial products make decisions. The nature of the decision-making is changing, transforming the products in the market and impacting on end results and bottom lines. However, this also means that treatment of individuals will change. This changing terrain of finance has implications for human rights, privacy and identity... Data that people would consider as having nothing to do with the financial sphere, such as their text-messages, is being used at an increasing rate by the financial sector...  Yet protections are weak or absent... It is essential that these innovations are subject to scrutiny... Fintech covers a broad array of sectors and technologies. A non-exhaustive list includes:

  • Alternative credit scoring (new data sources for credit scoring)
  • Payments (new ways of paying for goods and services that often have implications for the data generated)
  • Insurtech (the use of technology in the insurance sector)
  • Regtech (the use of technology to meet regulatory requirements)."

"Similarly, a breadth of technologies are used in the sector, including: Artificial Intelligence; Blockchain; the Internet of Things; Telematics and connected cars..."

While the study focused upon India and Kenya, it has implications for consumers worldwide. More observations and concerns:

"Social media is another source of data for companies in the fintech space. However, decisions are made not on just on the content of posts, but rather social media is being used in other ways: to authenticate customers via facial recognition, for instance... blockchain, or distributed ledger technology, is still best known for cryptocurrencies like BitCoin. However, the technology is being used more broadly, such as the World Bank-backed initiative in Kenya for blockchain-backed bonds10. Yet it is also used in other fields, like the push in digital identities11. A controversial example of this was a very small-scale scheme in the UK to pay benefits using blockchain technology, via an app developed by the fintech GovCoin12 (since renamed DISC). The trial raised concerns, with the BBC reporting a former member of the Government Digital Service describing this as "a potentially efficient way for Department of Work and Pensions to restrict, audit and control exactly what each benefits payment is actually spent on, without the government being perceived as a big brother13..."

Many consumers know that you can buy a wide variety of internet-connected devices for your home. That includes both devices you'd expect (e.g., televisions, printers, smart speakers and assistants, security systems, door locks and cameras, utility meters, hot water heaters, thermostats, refrigerators, robotic vacuum cleaners, lawn mowers) and devices you might not expect (e.g., sex toys, smart watches for children, mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins). Add your car or truck to the list:

"With an increasing number of sensors being built into cars, they are increasingly “connected” and communicating with actors including manufacturers, insurers and other vehicles15. Insurers are making use of this data to make decisions about the pricing of insurance, looking for features like sharp acceleration and braking and time of day16. This raises privacy concerns: movements can be tracked, and much about the driver’s life derived from their car use patterns..."

And, there are hidden prices for the convenience of making payments with your favorite smart device:

"The payments sector is a key area of growth in the fintech sector: in 2016, this sector received 40% of the total investment in fintech22. Transactions paid by most electronic means can be tracked, even those in physical shops. In the US, Google has access to 70% of credit and debit card transactions—through Google’s "third-party partnerships", the details of which have not been confirmed23. The growth of alternatives to cash can be seen all over the world... There is a concerted effort against cash from elements of the development community... A disturbing aspect of the cashless debate is the emphasis on the immorality of cash—and, by extension, the immorality of anonymity. A UK Treasury minister, in 2012, said that paying tradesman by cash was "morally wrong"26, as it facilitated tax avoidance... MasterCard states: "Contrary to transactions made with a MasterCard product, the anonymity of digital currency transactions enables any party to facilitate the purchase of illegal goods or services; to launder money or finance terrorism; and to pursue other activity that introduces consumer and social harm without detection by regulatory or police authority."27"

The report cited a loss of control by consumers over their personal information. Going forward, the report included general and actor-specific recommendations. General recommendations:

  • "Protecting the human right to privacy should be an essential element of fintech.
  • Current national and international privacy regulations should be applicable to fintech.
  • Customers should be at the centre of fintech, not their product.
  • Fintech is not a single technology or business model. Any attempt to implement or regulate fintech should take these differences into account, and be based on the type of activities they perform, rather than the type of institutions involved."

Want to learn more? Follow Privacy International on Facebook, on Twitter, or read about 10 ways of "Invisible Manipulation" of consumers.


German Regulator Bans Smartwatches For Children

VTech Kidizoom DX smartwatch for children. Select for larger version

Parents: considering a smartwatch for your children or grandchildren? Consider the privacy implications first. Bleeping Computer reported on Friday:

"Germany's Federal Network Agency (Bundesnetzagentur), the country's telecommunications agency, has banned the sale of children's smartwatches after it classified such devices as "prohibited listening devices." The ban was announced earlier today... parents are using their children's smartwatches to listen to teachers in the classroom. Recording or listening to private conversations is against the law in Germany without the permission of all recorded persons."

Some smartwatches are designed for children as young as four years of age. Several brands are available at online retailers, such as Amazon and Best Buy.

Why the ban? Gizmodo explained:

"Saying the technology more closely resembles a “spying device” than a toy... Last month, the European Consumer Organization (BEUC) warned that smartwatches marketed to kids were a serious threat to children’s privacy. A report published by the Norwegian Consumer Council in mid-October revealed serious flaws in several of the devices that could easily allow hackers to seize control. "

Clearly, this is another opportunity for parents to carefully research and consider smart device purchases for their family, to teach their children about privacy, and to not record persons without their permission.


Security Experts: Massive Botnet Forming. A 'Botnet Storm' Coming

Online security experts have detected a massive botnet -- a network of compromised "zombie" devices -- forming. Its operator and purpose are both unknown. Check Point Software Technologies, a cyber security firm, warned in a blog post that its researchers:

"... had discovered of a brand new Botnet evolving and recruiting IoT devices at a far greater pace and with more potential damage than the Mirai botnet of 2016... Ominous signs were first picked up via Check Point’s Intrusion Prevention System (IPS) in the last few days of September. An increasing number of attempts were being made by hackers to exploit a combination of vulnerabilities found in various IoT devices.

With each passing day the malware was evolving to exploit an increasing number of vulnerabilities in Wireless IP Camera devices such as GoAhead, D-Link, TP-Link, AVTECH, NETGEAR, MikroTik, Linksys, Synology and others..."

Reportedly, the botnet has been named either "Reaper" or "IoTroop." The McClatchy news wire reported:

"A Chinese cybersecurity firm, Qihoo 360, says the botnet is swelling by 10,000 devices a day..."

Criminals use malware or computer viruses to add weakly protected or insecure Internet-connected devices (commonly referred to as the Internet of Things, or IoT) in homes and businesses to the botnet. Then, criminals use botnets to overwhelm a targeted website with page requests. This type of attack, called a Distributed Denial of Service (DDoS) attack, prevents valid users from accessing the targeted site, knocking the site offline. If the attack is large enough, it can disable large portions of the Internet.

A version of the attack could also include a ransom demand, where the criminals will stop the attack only after a large cash payment by the targeted company or website. With multiple sites targeted, either version of cyber attack could have huge, negative impacts upon businesses and users.

How bad was the Mirai botnet? According to the US-CERT unit within the U.S. Department of Homeland Security:

"On September 20, 2016, Brian Krebs’ security blog was targeted by a massive DDoS attack, one of the largest on record... The Mirai malware continuously scans the Internet for vulnerable IoT devices, which are then infected and used in botnet attacks. The Mirai bot uses a short list of 62 common default usernames and passwords to scan for vulnerable devices... The purported Mirai author claimed that over 380,000 IoT devices were enslaved by the Mirai malware in the attack..."

Wired reported last year that after the attack on Krebs' blog, the Mirai botnet:

"... managed to make much of the internet unavailable for millions of people by overwhelming Dyn, a company that provides a significant portion of the US internet's backbone... Mirai disrupted internet service for more than 900,000 Deutsche Telekom customers in Germany, and infected almost 2,400 TalkTalk routers in the UK. This week, researchers published evidence that 80 models of Sony cameras are vulnerable to a Mirai takeover..."

The Wired report also explained the difficulty with identifying and cleaning infected devices:

"One reason Mirai is so difficult to contain is that it lurks on devices, and generally doesn't noticeably affect their performance. There's no reason the average user would ever think that their webcam—or more likely, a small business's—is potentially part of an active botnet. And even if it were, there's not much they could do about it, having no direct way to interface with the infected product."

If this seems scary, it is. The coming botnet storm has the potential to do lots of damage.

So, a word to the wise. Experts advise consumers to:

  • disconnect the device from your network and reboot it before re-connecting it to the internet,
  • buy internet-connected devices that support security software updates,
  • change the passwords on your devices from the defaults to strong passwords,
  • update the operating system (OS) software on your devices with security patches as soon as they are available,
  • keep the anti-virus software on your devices current, and
  • regularly backup the data on your devices.

US-CERT also advised consumers to:

"Disable Universal Plug and Play (UPnP) on routers unless absolutely necessary. Purchase IoT devices from companies with a reputation for providing secure devices... Understand the capabilities of any medical devices intended for at-home use. If the device transmits data or can be operated remotely, it has the potential to be infected."


Hacked Butt Plug Highlights Poor Security Of Many Mobile Devices

Image of butt plug, Hush by Lovense. Click to view larger version

In a blog post on Tuesday, security researcher Giovanni Mellini discussed how easy it was to hack a Bluetooth-enabled butt plug. Why this Internet-connected sex toy? Mellini explained that what started as a joke led him to buy, a few weeks ago:

"... a Bluetooth Low Energy (BLE) butt plug to test the (in)security of BLE protocol. This caught my attention after researchers told us that a lot of sex toys use this protocol to allow remote control that is insecure by design."

Another security researcher, Simone Margaritelli, had previously discussed a BLE scanner he wrote called BLEAH and how to use it to hack BLE-connected devices. Mellini sought to replicate Margaritelli's hack, and was successful:

"The butt plug can be remotely controlled with a mobile application called Lovense Remote (download here). With jadx you can disassemble the java application and find the Bluetooth class used to control the device. Inside you can find the strings to be sent to the toy to start vibration... So we have all the elements to hack the sex toy with BLEAH... At the end is very easy to hack BLE protocol due to poor design choices. Welcome to 2017."

Welcome, indeed, to 2017. This seems to be the year of hacked mobile devices. There have been too many news reports about devices with poor (or no) security: the encryption security flaw in many home wireless routers and devices, patched Macs still vulnerable to firmware hacks, a robovac maker's plans to resell the interior home maps its devices created, a smart vibrator maker that paid hefty fines to settle allegations it tracked users without their knowledge or consent, security researchers who hacked a popular smart speaker, and a bungled software update that bricked many customers' smart door locks.

In 2016, security researchers hacked an internet-connected vibrator.

And those are only some of the reports. All of this runs counter to consumers' needs. In August, a survey of consumers in six countries found that 90 percent believe it is important for smart devices to have security built in. Are device makers listening?

Newsweek reported:

"Lovense did not immediately respond to a request for comment from Newsweek but the sex toy company has spoken previously about the security of its products. "There are three layers of security," Lovense said in a statement last year. "The server side, the way we transfer information from the user’s phone to our server and on the client side. We take our customer’s private data very seriously, which is why we don’t serve any on our servers." "

I have nothing against sex toys. Use one or not. I don't care. My concern: supposedly smart devices should have robust security to protect consumers' privacy.

Smart shoppers want persons they authorize -- and not unknown hackers -- to remotely control their vibrators. Thoughts? Comments?


Experts Find Security Flaw In Wireless Encryption Software. Most Mobile Devices At Risk

Researchers have found a new security vulnerability which places most computers, smartphones, and wireless routers at risk. The vulnerability allows hackers to decrypt and eavesdrop on victims' wireless network traffic; plus inject content (e.g., malware) into users' wireless data streams. ZDNet reported yesterday:

"The bug, known as "KRACK" for Key Reinstallation Attack, exposes a fundamental flaw in WPA2, a common protocol used in securing most modern wireless networks. Mathy Vanhoef, a computer security academic, who found the flaw, said the weakness lies in the protocol's four-way handshake, which securely allows new devices with a pre-shared password to join the network... The bug represents a complete breakdown of the WPA2 protocol, for both personal and enterprise devices -- putting every supported device at risk."

Reportedly, the vulnerability was confirmed on Monday by U.S. Homeland Security's cyber-emergency unit US-CERT, which had warned vendors about two months ago.

What should consumers do? Experts advise consumers to update the software in all mobile devices connected to their home wireless router. Obviously, that means first contacting the maker of your home wireless router, or your Internet Service Provider (ISP), for software patches to fix the security vulnerability.

ZDNet also reported that the security flaw:

"... could also be devastating for IoT devices, as vendors often fail to implement acceptable security standards or update systems in the supply chain, which has already led to millions of vulnerable and unpatched Internet-of-things (IoT) devices being exposed for use by botnets."

So, plenty of home devices must also be updated. That includes both devices you'd expect (e.g., televisions, printers, smart speakers and assistants, security systems, door locks and cameras, utility meters, hot water heaters, thermostats, refrigerators, robotic vacuum cleaners, lawn mowers) and devices you might not expect (e.g., mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins). One "price" of wireless convenience is the responsibility for consumers and device makers to continually update the security software in internet-connected devices. Nobody wants their home router and devices participating in scammers' and fraudsters' botnets with malicious software.

ZDNet also listed software patches by vendor. And:

"In general, Windows and newer versions of iOS are unaffected, but the bug can have a serious impact on Android 6.0 Marshmallow and newer... At the time of writing, neither Toshiba and Samsung responded to our requests for comment..."

Hopefully, all of the Internet-connected devices in your home provide for software updates. If not, then you probably have some choices ahead: whether to keep that device or upgrade to a more secure one. Comments?


Report: Patched Macs Still Vulnerable To Firmware Hacks

Apple Inc. logo

I've heard the erroneous assumption from consumers numerous times: "Apple-branded devices don't get computer viruses." Well, they do. Ars Technica reported about a particularly nasty hack of vulnerabilities in devices' Extensible Firmware Interface (EFI). Never heard of EFI? Well:

"An analysis by security firm Duo Security of more than 73,000 Macs shows that a surprising number remained vulnerable to such attacks even though they received OS updates that were supposed to patch the EFI firmware. On average, 4.2 percent of the Macs analyzed ran EFI versions that were different from what was prescribed by the hardware model and OS version. 47 Mac models remained vulnerable to the original Thunderstrike, and 31 remained vulnerable to Thunderstrike 2. At least 16 models received no EFI updates at all. EFI updates for other models were inconsistently successful, with the 21.5-inch iMac released in late 2015 topping the list, with 43 percent of those sampled running the wrong version."

This is very bad. EFI hacks are particularly effective and nasty because:

"... they give attackers control that starts with the very first instruction a Mac receives... the level of control attackers get far exceeds what they gain by exploiting vulnerabilities in the OS... That means an attacker who compromises a computer's EFI can bypass higher-level security controls, such as those built into the OS or, assuming one is running for extra protection, a virtual machine hypervisor. An EFI infection is also extremely hard to detect and even harder to remedy, as it can survive even after a hard drive is wiped or replaced and a clean version of the OS is installed."

Because EFI firmware is not unique to Apple, devices running the Windows and Linux operating systems are also potentially vulnerable. Reportedly, the exploit requires plenty of computing resources and technical expertise, so hackers would probably pursue high-value targets (e.g., journalists, attorneys, government officials, contractors with government clearances) first.

The Duo Labs Report (63 pages, Adobe PDF) lists the specific MacBook, MacBook Air, and MacBook Pro models at risk. The researchers shared a draft of the report with Apple before publication. The report's "Mitigation" section provides solutions, including but not limited to:

"Always deploy the full update package as released by Apple, do not remove separate packages from the bundle updater... When possible, deploy Combo OS updates instead of Delta updates... As a general rule of thumb, always run the latest version of macOS..."

Scary, huh? The nature of the attack means that hackers probably can disable the anti-virus software on your device(s), and you probably wouldn't know you've been hacked.
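One quick way to see which firmware version a Mac reports, so it can be compared against the expected version for your model in Duo's tables, is to read the hardware profile. Here is a minimal sketch using Python's standard library; the exact label ("Boot ROM Version" on Macs of that era) varies by macOS release, so treat that as an assumption to verify on your own machine.

```python
# Print the firmware (EFI / Boot ROM) version a Mac reports, so it can be
# compared against the expected version for the model in Duo's report.
# Assumes macOS; the label shown varies by OS release.
import subprocess

output = subprocess.run(
    ["system_profiler", "SPHardwareDataType"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    if "Boot ROM" in line or "Firmware Version" in line or "Model Identifier" in line:
        print(line.strip())
```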


Experts Call For Ban of Killer Robotic Weapons

116 robotics and artificial intelligence experts from 26 countries sent a letter to the United Nations (UN) warning against the deployment of lethal autonomous weapons. The Guardian reported:

"The UN recently voted to begin formal discussions on such weapons which include drones, tanks and automated machine guns... In their letter, the [experts] warn the review conference of the convention on conventional weapons that this arms race threatens to usher in the “third revolution in warfare” after gunpowder and nuclear arms... The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne on Monday, has the backing of high-profile figures in the robotics field and strongly stresses the need for urgent action..."

The letter stated in part:

"Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

"We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."

This is not science fiction. Autonomous weapons are already deployed:

"Samsung’s SGR-A1 sentry gun, which is reportedly technically capable of firing autonomously but is disputed whether it is deployed as such, is in use along the South Korean border of the 2.5m-wide Korean Demilitarized Zone. The fixed-place sentry gun, developed on behalf of the South Korean government, was the first of its kind with an autonomous system capable of performing surveillance, voice-recognition, tracking and firing with mounted machine gun or grenade launcher... The UK’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and incorporating full autonomy..."

Ban, indeed. Your thoughts? Opinions? Reaction?


'Map Your Orgasm' - A New Smart Device For Women

Recently, Mashable reported about a new smart device for women:

"The Lioness looks like a pretty standard vibrator on the outside, but inside it has four sensors that measure temperature, the force of muscle contractions, and track the movement of the device. When you’re done with your session, you can sync the Lioness with its app (available for iOS and Android). It then provides you with easy-to-read visualization of what was happening to your body while you were busy getting off. So, yes, essentially it gives you a map of your orgasm. You can also tag each session with different terms so you can track how your health, sleep, alcohol consumption, mood, etc. affect your experiences."

Gives you a map of your orgasm? That's a surprising description. Perhaps I shouldn't have been surprised. First, there were online tools such as "map my ride" and "map my run." Good stuff to help consumers stay healthy. I guess a tool resembling 'map your orgasm' was bound to happen.

Lioness sounds like a much better product name. To learn more, I visited the Lioness site. The home page featured this statement: "Don't worry, we will never share your email or spam you." That's a good start.

Privacy is important, especially with smart devices which collect intimate data about consumers. Earlier this year, news reports described a plan by a smart-device maker to resell the interior home maps its robovacs created. And, another smart vibrator maker paid hefty fines to settle allegations that it tracked users without their knowledge or consent.

A wise person once said, "the devil is in the details." The privacy policy on a company's website is a good place to hunt for details. While blogging about privacy and identity theft during the last 10 years, I've read plenty of privacy policies. Plenty. I read the Lioness Privacy Policy (dated May 1) and found some notable sections:

"This Privacy Policy applies to our vibrators and other devices (“Devices”), our websites, including but not limited to lioness.io (individually a “Site” and collectively “Sites”), the Lioness software (“Software”) and Lioness mobile applications (the “Apps”). The Devices, Sites, Software and Apps are collectively referred to in this Policy as the “Lioness Service,” and by proceeding to use the Lioness Service you consent that we may handle the data that we collect from you in accordance with this Privacy Policy."

Pretty standard stuff so far. Warning: I'm not an attorney. If you want legal advice, hire an attorney. Like you, I'm just a regular consumer trying to understand smart devices while maintaining as much privacy as possible. Additional sections in the policy I found interesting:

"Sync Your Device
When you sync your Device through an App or the Software, data recorded on your Device is transferred from your Device to our servers. This data is stored and used to provide the Lioness Service and is associated with your account. Each time a sync occurs, we log data about the transmission. Some examples of the log data are the sync time and date, device battery level, and the IP address used when syncing."

Let's unpack that. The vibrator and its mobile app record the date, time, and battery usage. Combine this with data collected from the four sensors, and Lioness will know plenty about your usage: when (date and time), location, duration, preferred movement patterns, and more. It indeed could create a map. More sections in the policy:

"WHY WE COLLECT DATA
Lioness uses your data to provide you with the best experience possible, to help you learn about your body, and to improve and protect the Lioness Service. Here are some examples: i) Contact information is used to send you notifications and to inform you about new features or products... ii) Data and logs are used in research to understand and improve the Lioness Device and Lioness Service; to troubleshoot the Lioness Service; to detect and protect against error, fraud or other criminal activity; and to enforce the Lioness Terms of Service; iii) Aggregate data that does not identify you may be used to inform the health community about trends; for marketing and promotional use..."

"Data That Could Identify You
Personally Identifiable Information (PII) is data that includes a personal identifier like your name, email or address, or data that could reasonably be linked back to you."

Hmmm. The policy does not list all data elements that personally identify you. For me, that's important to know. And, anything recorded on a smartphone can easily be linked to a person using her 10-digit phone number or the mobile device's serial number.

Informed shoppers probably want to know before purchase which other companies (e.g., business partners, affiliates, advertisers, etc.) Lioness shares data with. Its May 1, 2017 privacy policy also states:

"... companies that are contractually engaged in providing Lioness with services, such as order fulfillment, email management and credit card processing. These companies are obligated by contract to safeguard any PII they receive from us..."

"THIRD PARTIES
Lioness will not be responsible for the practices of third parties that Lioness does not own or control or individuals that Lioness does not employ or manage. The information provided by you to other third parties may be subject to their own privacy policies, which may differ from Lioness’s privacy policy. The Lioness Service may contain links to other sites, and we make every effort to only link to sites that share our high standards and respect for privacy. However, we are not responsible for the privacy practices employed by other sites..."

"DATA RETENTION
Lioness reserves the right to retain your PII for as long as your account remains active..."

So, the policy doesn't mention other companies by name. Not good. That makes it tough for consumers to make informed decisions.

Fitness tracking with the MapMyRide app

On Facebook, many of my friends regularly share visual maps of their workouts. (See example on right.) That's their freedom of choice. So, some consumers are probably wondering if Lioness offers a similar share function. Again from the privacy policy:

"Community Posts
The Lioness Service may offer discussion forums, message boards, social networking opportunities, chat pages and other public forums or features in which you may provide personal information, materials and related content. If you submit personal information when using these public features, please note that such personal information may be publicly posted and otherwise disclosed and used without limitation or restriction."

So, the policy doesn't mention literal maps, per se. Lioness might or might not provide such a sharing feature to users. The key takeaway: the responsibility rests upon the user. Don't share it if you don't want it made public.

It's probably helpful to also know that the product uses Bluetooth technology to perform data syncing. From the Lioness FAQ page:

"Wait...will there be bluetooth in my vagina?
Nope. We know that there are a lot of people who don’t like the idea of bluetooth being on while in use, so we made it so bluetooth automatically turns off when you use it."

Also, the FAQ page mentioned:

"Is my data stored securely and kept confidential?
Absolutely. We thought about privacy and security from the beginning for this product. You are the only one who can access your individual data. Everything is encrypted and we fully anonymize the data..."

That's good, but the privacy policy didn't mention data encryption. I expected it would. Not sure what to make of that.

Is the Lioness a good deal? Only you can decide for yourself -- and you should after reading both the privacy and terms-of-service policies.

Me? In my opinion, there seems to be too much wiggle-room for data sharing. The policy contains a lot of words and nothing special compared to other policies I've read. What are your opinions?


Bungled Software Update Renders Customers' Smart Door Locks Inoperable

Image of Lockstate RemoteLock 6i device. Click to view larger version

A bungled software update by Lockstate, maker of WiFi-enabled door locks, rendered many customers' locks inoperable -- or "bricked." Lockstate notified affected customers in this letter:

"Dear Lockstate Customer,
We notified you earlier today of a potential issue with your LS6i lock. We are sorry to inform you about some unfortunate news. Your lock is among a small subset of locks that had a fatal error rendering it inoperable. After a software update was sent to your lock, it failed to reconnect to our web service making a remote fix impossible..."

Many Airbnb operators use smart locks by Lockstate to secure their properties. On its website, Lockstate promotes the LS6i lock as:

"... perfect for your rental property, home or office use. This robust WiFi enabled door lock allows users to lock or unlock doors remotely, know when people unlock your door, and even receive text alerts when codes are used. Issue new codes or delete codes from your computer or phone. Even give temporary codes to guests or office personnel."

Reportedly, about 200 Airbnb customers were affected. The company said 500 locks were affected. ArsTechnica explained how the bungled software update happened:

"The failure occurred last Monday when LockState mistakenly sent some 6i lock models a firmware update developed for 7i locks. The update left earlier 6i models unable to be locked and no longer able to receive over-the-air updates."

Some affected customers shared their frustrations on the company's Twitter page. Lockstate said the affected locks can still be operated with physical keys. While that is helpful, it isn't a solution since customers rely upon the remote features. Affected customers have two repair options: 1) return the back portion of the lock (repair time about 5 to 7 days), or 2) request a replacement (response time about 14 to 18 days).

The whole situation seems to be another reminder of the limitations of smart devices that depend upon security updates delivered via firmware. And, a better disclosure letter by Lockstate would have explained the corrections to its internal systems and managerial processes that will prevent this from happening again during a future software update.

What are your opinions?


Google And Massachusetts Transportation Department Provide GPS Signals In Tunnels

Smartphone users love their phones. That includes Global Positioning System (GPS) navigation services for driving directions. However, those driving directions don't work in tunnels where phones can't get GPS signals. That is changing.

Google and the Massachusetts Department of Transportation (MassDOT) have entered a partnership to provide GPS navigation services for drivers inside tunnels. If you're familiar with Boston, then you know that portions of both Interstate 93 and the Massachusetts Turnpike include tunnels. The ABC affiliate in Boston, WCVB, reported last month that the partnership, part of the Connected Citizens Program, will:

"... install beacons inside Boston's tunnels to help GPS connection stay strong underground. Around 850 beacons are being installed, free of charge, as a part of an ongoing partnership between the state and the traffic app... Installation is scheduled to be complete by the end of July... The beacons are not limited to improving their own app's signal. As long as you are using Bluetooth, they are able to help improve any traffic app's connection."

For those unfamiliar with the technology, beacons are low-powered transmitters which, in this particular application, are installed in the tunnels' walls and provide geographic location information usable by drivers' (or passengers') smartphones passing by (assuming the phones' Bluetooth features are enabled).

Bluetooth beacons are used in a variety of applications and locations. The Privacy SOS blog explained:

"... They’re useful in places where precise location information is necessary but difficult to acquire via satellite. For that reason, they’ve been field tested in museums such as New York’s Metropolitan Museum of Art and airports like London Gatwick. At Gatwick, beacons deliver turn-by-turn directions to users’ phones to help them navigate the airport terminals..."

Within large airports such as Gatwick, the technology can present more precise geolocation data of nearby dining and shopping venues to travelers. According to Bluetooth SIG, Inc., the community of 30,000 companies that use the technology:

"The proliferation and near universal availability of Bluetooth® technology is opening up new markets at all ends of the spectrum. Beacons or iBeacons—small objects transmitting location information to smartphones and powered by Bluetooth with low energy—make the promise of a mobile wallet, mobile couponing, and location-based services possible... The retail space is the first to envision a future for beacons using for everything from in-store analytics to proximity marketing, indoor navigation and contactless payments. Think about a customer who is looking at a new TV and he/she gets a text with a 25 percent off coupon for that same TV and then pays automatically using an online account..."

iBeacons are the version for Apple-branded mobile devices. All 12 major automobile makers offer hands-free phone calling systems using the technology. And, social network giant Facebook has developed its own proprietary Bluetooth module for an undisclosed upcoming consumer electronics device.

So, the technology provides new marketing and revenue opportunities to advertisers. TechCrunch explained:

"The Beacons program isn’t looking to get help from individual-driver Wazers in this case, but is looking for cities and tunnel owners who might be fans of the service to step up and apply to its program. The program is powered by Eddystone, a Bluetooth Low Energy beacon profile created by Google that works with cheap, battery-powered BLE Waze Beacon hardware to be installed in participating tunnels. These beacons would be configured to transmit signals to Bluetooth-enabled smartphones... There is a cost to participate — each beacon is $28.50, Waze notes, and a typical installation requires around 42 beacons per mile of tunnel. But for municipalities and tunnel operators, this would actually be a service they can provide drivers, which might actually eliminate frustration and traffic..."

There are several key takeaways here:

  1. GPS navigation services can perform better in previously unavailable areas,
  2. Companies can collect (and share) more precise geolocation data about consumers and our movements,
  3. Consumers' GPS data can now be collected in previously unattainable locations,
  4. What matters aren't the transmissions by beacons, but rather the GPS and related data collected by your phone and the apps you use, which are transmitted back to the apps' developers, and then shared by developers with their business partners (e.g., mobile service providers, smartphone operating system developers, advertisers, and affiliates),
  5. You don't have to be a Google user for Google to collect GPS data about you, and
  6. Consumers can expect a coming proliferation of Bluetooth modules in a variety of locations, retail stores, and devices.

So, now you know more about how Google and other companies collect GPS data about you. After analyzing the geolocation data collected, they know not only when and where you go, but also your patterns in the physical world: where you go on certain days and times, how long you stay, where and what you've done before (and after), who you associate with, and more.

Don't like the more precise tracking? Then, don't use the Waze app or Google Maps, delete the blabbermouth apps, or turn off the Bluetooth feature on your phone.

A noted economist once said, "There is no free lunch." And that applies to GPS navigation in tunnels. The price for "free," convenient navigation services means mobile users allow companies to collect and analyze mountains of data about their movements in the physical world.

What are your opinions of GPS navigation services in tunnels? If the city or town where you live has tunnels, have beacons been installed?


Hacked Amazon Echo Converted Into Always-On Surveillance Device

Image of Amazon Echo

Wired reported how a white-hat hacker provided proof-of-concept that a popular voice-activated, smart home speaker could easily be hacked:

"... British security researcher Mark Barnes detailed a technique anyone can use to install malware on an Amazon Echo, along with his proof-of-concept code that would silently stream audio from the hacked device to his own faraway server. The technique requires gaining physical access to the target Echo, and it works only on devices sold before 2017. But there's no software fix for older units, Barnes warns, and the attack can be performed without leaving any sign of hardware intrusion."

Amazon sells both new and refurbished speakers. Newer models also include cameras. All are probably high-value targets of hackers and spy agencies.

Reportedly, Amazon has fixed the security vulnerability in newer (2017) models. The company advises customers to keep the software on their speakers current, and purchase speakers from trusted retailers. However (bold emphasis added):

"... Barnes agrees that his work should serve as a warning that Echo devices bought from someone other than Amazon—like a secondhand seller—could be compromised. But he also points out that, contrary to the implication of the company's statement, no software update will protect earlier versions of the Echo, since the problem is in the physical connection its hardware exposes.

Instead, he says that people should think twice about the security risks of using an Echo in public or semipublic places, like plans for the Wynn Hotel in Las Vegas to put an Echo in every room."

Voice-activated smart speakers in hotel lobbies and rooms. Nothing could go wrong with that. All it takes is a prior guest, or a criminal posing as hotel staff or a cleaning person, to hack and compromise one or more older devices. Will hotels install the newer devices? Will they inform guests?

For guaranteed privacy, it seems hotel guests may soon have to simply turn off (or mute) smart speakers, smart televisions, and personal assistants. Convenience definitely has its price (e.g., security and privacy). What do you think?


Survey: 90 Percent Of Consumers Want Smart Devices With Security Built In

A recent survey of consumers in six countries found that 90 percent believe it is important for smart devices to have security built into the products. Also, 78 percent said they are aware that any smart device connected to their home WiFi network is vulnerable to attacks by hackers wanting to steal personal data stored on the device.

Security importance by country. Irdeto Global Consumer IoT Security Survey. Select to view larger version

The Irdeto Global Consumer IoT Security Survey, conducted online from June 22, 2017 to July 10, 2017 by YouGov Plc for Irdeto, included 7,882 adults (aged 18 or older) in six countries: Brazil, China, Germany, India, United Kingdom, and United States. Irdeto provides security solutions to protect platforms and applications for media, entertainment, automotive and Internet-of-things (IoT) connected industries.

Additional key findings:

"... 72% of millennials (ages 18-24 years) indicated that they are aware that any smart device connected to the Wi-Fi in their home has the potential to be targeted by a hacker, compared to 82% of consumers 55+. This indicates that older generations may be more savvy about IoT security or more cautious... More than half of consumers around the globe (56%) think that it is the responsibility of both the end-user and the manufacturer of the product to prevent hacking of smart devices. Alternatively, only 15% of consumers globally think they are responsible, while 20% feel the manufacturer of the device is responsible for cybersecurity. In China, more consumers than any other country surveyed (31%) stated that it is the responsibility of manufacturers. Brazilians led all countries surveyed (23%) in the belief that it is the responsibility of the end-user to prevent hacking of connected devices... Germans expressed the least concern with nearly half (42%) stating that they are not concerned about smart devices being hacked. On the opposite end of the spectrum, Brazilian smart device owners expressed the most concern with 88% of those surveyed saying they were concerned...

And, smart device usage varies by country:

"Regarding the number of smart devices consumers own, 89% of those surveyed have at least one connected device in their home. In addition, 81% of consumers across the globe admitted to having more than one connected device in the home. India led all countries with a staggering 97% of consumers stating that they have at least one smart device in the home, compared to only 80% of US consumers..."

Read the announcement by Irdeto. View the full infographic.

Device security responsibility. Irdeto Global Consumer IoT Security Survey. Select to view larger version


Robotic Vacuum Cleaner Maker To Resell Data Collected About Customers' Home Interiors

iRobot Roomba autonomous vacuum. Click to view larger image

Do you use a robovac -- an autonomous WiFi-connected robotic vacuum cleaner -- in your home? Do you use the mobile app to control your robovac?

Gizmodo reports that iRobot, the maker of the Roomba robotic vacuum cleaner, plans to resell maps generated by robovacs to other smart-home device manufacturers:

"While it may seem like the information that a Roomba could gather is minimal, there’s a lot to be gleaned from the maps it’s constantly updating. It knows the floor plan of your home, the basic shape of everything on your floor, what areas require the most maintenance, and how often you require cleaning cycles, along with many other data points... If a company like Amazon, for example, wanted to improve its Echo smart speaker, the Roomba’s mapping info could certainly help out. Spatial mapping could improve audio performance by taking advantage of the room’s acoustics. Do you have a large room that’s practically empty? Targeted furniture ads might be quite effective. The laser and camera sensors would paint a nice portrait for lighting needs..."

Think about it. The maps identify whether you have one, none, or several sofas -- or other large furniture items. The maps also identify the size (square footage) of your home and the number of rooms. Got a hairy pet? If your robovac needs more frequent cleaning, that data is collected, too.

One can easily confirm this by reading the iRobot Privacy Policy:

"... Some of our Robots are equipped with smart technology which allows the Robots to transmit data wirelessly to the Service. For example, the Robot could collect and transmit information about the Robot’s function and use statistics, such as battery life and health, number of missions, the device identifier, and location mapping. When you register your Robot with the online App, the App will collect and maintain information about the Robot and/or App usage, feature usage, in-App transactions, technical specifications, crashes, and other information about how you use your Robot and the product App. We also collect information provided during set-up.

We use this information to collect and analyze statistics and usage data, diagnose and fix technology problems, enhance device performance, and improve user experience. We may use this information to provide you personalized communications, including marketing and promotional messages... Our Robots do not transmit this information unless you register your device online and connect to WiFi, Bluetooth, or connect to the internet via another method."

Everything seems focused upon making your robovac perform optimally. Seems. Read on:

"When you access the Service by or through a mobile device, we may receive or collect and store a unique identification numbers associated with your device or our mobile application (including, for example, a UDID, Unique ID for Advertisers (“IDFA”), Google Ad ID, or Windows Advertising ID), mobile carrier, device type, model and manufacturer, mobile device operating system brand and model, phone number, and, depending on your mobile device settings, your geographical location data, including GPS coordinates (e.g. latitude and/or longitude) or similar information regarding the location of your mobile device..."

Use the mobile app and your robovac's unique ID number can easily be associated with other data describing you, where you live, and your lifestyle. Valuable stuff.

Another important section of the privacy policy:

"We may share your personal information in the instances described... i) Other companies owned by or under common ownership as iRobot, which also includes our subsidiaries or our ultimate holding company and any subsidiaries it owns. These companies will use your personal information in the same way as we can under this Policy; ii) Third party vendors, affiliates, and other service providers that perform services on our behalf, solely in order to carry out their work for us, which may include identifying and serving targeted advertisements, providing e-commerce services, content or service fulfillment, billing, web site operation, payment processing and authorization, customer service, or providing analytics services.

Well, there seems to be plenty of wiggle room for iRobot to resell your data. And, that assumes it doesn't change its privacy policy to make resales easier. Note: this is not legal advice. If you want legal advice, hire an attorney. I am not an attorney.

The policy goes on to describe customers' choices for stopping or opting out of the collection of some data elements. If you've read it, then you know how to opt out of as much of the data collection as possible.

The whole affair highlights the fact that the data collected from different brands of smart devices in consumers' homes can be combined, massaged, and analyzed in new ways -- ways that probably are not apparent to consumers, and which reveal more about you than you might want. And, the whole affair is a reminder to read privacy policies before purchases. Know what valuable personal data you will give away for convenience.

Eyes wide open.

Got an autonomous robotic lawn mower? You might re-read the privacy policy for that, too.


The Need For A Code Of Ethics With The Internet Of Things

Earlier this week, The Atlantic website published an interview with Francine Berman, a computer-science professor at Rensselaer Polytechnic Institute, about the need for a code of ethics for connected, autonomous devices, commonly referred to as the internet-of-things (IoT). The IoT is exploding.

Experts forecast 8.4 billion connected devices in use worldwide in 2017, up 31 percent from 2016, with the installed base growing to 20.4 billion devices by 2020. Total spending on those devices will reach almost $2 trillion in 2017. North America, Western Europe, and China, which already comprise 67 percent of the installed base, will drive much of this growth.

In a February 2017 article (Adobe PDF) in the journal Communications of the Association for Computing Machinery, Berman and Vint Cerf, an engineer, discussed the need for a code of ethics:

"Last October, millions of interconnected devices infected with malware mounted a "denial-of-service" cyberattack on Dyn, a company that operates part of the Internet’s directory service. Such attacks require us to up our technical game in Internet security and safety. They also expose the need to frame and enforce social and ethical behavior, privacy, and appropriate use in Internet environments... At present, policy and laws about online privacy and rights to information are challenging to interpret and difficult to enforce. As IoT technologies become more pervasive, personal information will become more valuable to a diverse set of actors that include organizations, individuals, and autonomous systems with the capacity to make decisions about you."

Given this, it seems wise for voters to consider whether or not elected officials in state, local, and federal government understand the issues. Do they understand the issues? If they understand the issues, are they taking appropriate action? If they aren't taking appropriate action, is it due to other priorities? Or are different elected officials needed? At the federal level, recent events with broadband privacy indicate a conscious decision to ignore consumers' needs in favor of business.

In their ACM article, Berman and Cerf posed three relevant questions:

  1. "What are your rights to privacy in the internet-of-things?
  2. Who is accountable for decisions made by autonomous systems?
  3. How do we promote the ethical use of IoT technologies?"

Researchers and technologists have already raised concerns about the ethical dilemmas of self-driving cars. Recent events have also highlighted the issues.

Some background. Last October, a denial-of-service attack against a hosting service based in France utilized a network of more than 152,000 IoT devices, including closed-circuit-television (CCTV) cameras and DVRs. The fatal crash in May of a Tesla Model S car operating in auto-pilot mode and the crash in February of a Google self-driving car raised concerns. According to researchers, 75 percent of all cars shipped globally will have internet connectivity by 2020. Last month, a security expert explained the difficulty with protecting connected cars from hackers.

And after a customer posted a negative review online, a developer of connected garage-door openers disabled both the customer's device and online account. (Service was later restored.) Earlier this year, a smart TV maker paid $2.2 million to settle privacy abuse charges by the U.S. Federal Trade Commission (FTC). Consumers buy and use a wide variety of connected devices: laptops, tablets, smartphones, personal assistants, printers, lighting and temperature controls, televisions, home security systems, fitness bands, smart watches, toys, smart wine bottles, and home appliances (e.g., refrigerators, hot water heaters, coffee makers, crock pots, etc.). Devices with poor security features don't allow operating system and security software updates, don't encrypt key information such as PIN numbers and passwords, and build the software into the firmware where it cannot be upgraded. In January, the FTC filed a lawsuit against a modem/router maker alleging poor security in its products.
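
To make the "don't encrypt key information" complaint concrete, here is a minimal sketch in Python. It is purely hypothetical and reflects no particular vendor's firmware; it simply contrasts storing a PIN in plain text with storing only a random salt and a slow, salted hash, so that dumping the device's storage does not reveal the PIN itself.

    # Hypothetical sketch: why storing a raw PIN in device storage is risky.
    # Uses only the Python standard library.
    import hashlib
    import hmac
    import secrets

    # Poor practice: the PIN sits in flash as-is. Anyone who dumps the
    # firmware or storage can read it directly.
    stored_pin_plaintext = "4821"

    # Better practice: store a random salt plus a slow, salted hash.
    # A dump yields only the verifier, not the PIN.
    salt = secrets.token_bytes(16)
    verifier = hashlib.pbkdf2_hmac("sha256", b"4821", salt, 200_000)

    def check_pin(candidate: str) -> bool:
        """Verify a PIN attempt against the stored salt and verifier."""
        attempt = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 200_000)
        return hmac.compare_digest(attempt, verifier)

    print(check_pin("4821"))  # True
    print(check_pin("0000"))  # False

None of this helps, of course, if the firmware itself can never be updated; fixing a weakness like this requires the ability to ship new software to devices in the field.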

Consumers have less control over many IoT devices, such as smart utility meters, which collect information about consumers. Typically, the devices are owned and maintained by utility companies while installed in or on consumers' premises.

Now, back to the interview in The Atlantic. Professor Berman reminded us that society has met the ethical challenge before:

"Think about the Industrial Revolution: The technologies were very compelling—but perhaps the most compelling part were the social differences it created. During the Industrial Revolution, you saw a move to the cities, you saw the first child-labor laws, you saw manufacturing really come to the fore. Things were available that had not been very available before..."

Well, another revolution is upon us. This time, it includes changes brought about by the internet and the IoT. Berman explained that today's challenges include considerations:

"... we never even imagined we’d have to think about. A great example: What if self-driving cars have to make bad choices? How do they do that? Where are the ethics? And then who is accountable for the choices that are made by autonomous systems? This needs to be more of a priority, and we need to be thinking about it more broadly. We need to start designing the systems that are going to be able to support social regulation, social policy, and social practice, to bring out the best of the Internet of Things... Think about designing a car. I want to design it so it’s safe, and so that the opportunity to hack my car is minimized. If I design Internet of Things systems that are effective, provide me a lot of opportunities, and are adaptive, but I only worry about really important things like security and privacy and safety afterwards, it’s much less effective than designing them with those things in mind. We can lessen the number of unintended consequences if we start thinking from the design stage and the innovation stage how we’re going to use these technologies. Then, we put into place the corresponding social framework."

Perhaps, most importantly:

"There’s a shared responsibility between innovators, companies, the government, and the individual, to try and create and utilize a framework that assigns responsibility and accountability based on what promotes the public good."

Will we meet the challenge of this revolution? Will innovators, companies, government, and individuals share responsibility? Will we work for the public good or solely for business growth and profitability?

What do you think?


Security Experts State Privacy Issues With Proposed NHTSA Rules For Vehicle Automation

The Center For Democracy & Technology (CDT) and four cryptographers have stated their security and privacy concerns regarding proposed rules by the National Highway Traffic Safety Administration (NHTSA) for vehicle automation and communications. In a CDT blog post, Chief Technologist Joseph Lorenzo Hall described the group's concerns about NHTSA's:

"... proposed rulemaking to establish a new Federal Motor Vehicle Safety Standard (FMVSS), No. 150, which intends to mandate and standardize vehicle-to-vehicle (V2V) communications for new light vehicles... Our comments highlight our concern that NHTSA’s proposal standard may not contain adequate measures to protect consumer privacy from third parties who may choose to listen in on the Basic Safety Message (BSM) broadcast by vehicles. Inexpensive real-time tracking of vehicles is not a distant future hypothetical. Vehicle tracking will be exploited by a multitude of companies, governments, and criminal elements for a variety of purposes such as vehicle repossession, blackmail, gaining an advantage in a divorce settlement, mass surveillance, commercial espionage, organized crime, burglary, or stalking.

Our concern is that the privacy protections currently proposed for V2V communications may be easily circumvented by any party determined to perform large-scale real-time tracking of multiple vehicles at once. This poses serious costs for both individual privacy and society at large..."

FMVSS standards are regulations that automobile and vehicle manufacturers must comply with. Read the proposed FMVSS Rule 150 in the Federal Register. The proposed rule specifies how vehicles will automatically broadcast Basic Safety Messages (BSM).

The group's detailed submission (Adobe PDF) to the U.S. Department of Transportation (DOT) described specific privacy concerns. One example:

"2.1 Linking a vehicle to an individual
The NPRM proposes that vehicle location accurate to within 1.5 meters be included in every BSM. Such high accuracy is sufficient to identify a vehicle’s specific parking spot. Assuming a suburban environment where the parking spot is a driveway, this information is enough to identify the owners or tenants... Vehicles can be further disambiguated among members of a household or people sharing parking spots by when they leave and where they go. For instance, shift workers, 9-to-5 office workers, high school students, and stay-at-home parents will all have different, distinguishable patterns of vehicle use. Even among office commuters, the first few turns after leaving the driveway will be very useful for disambiguating people working at different locations..."

So, the time you leave home and the route you take can easily identify you. You don't have to be the registered owner of the car. Yes, your smartphone broadcasts to the nearest cellular tower and that identifies your location, but not as precisely. Privacy protections are needed because the bad guys -- stalkers, criminals -- could also use BSMs to spy upon individuals.
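
A rough sketch of how little data that takes follows. The coordinates below are invented, and the clustering is deliberately naive, but with 1.5-meter accuracy a handful of morning "trip start" points is enough to pin a vehicle to a single driveway:

    # Hypothetical sketch: averaging a few mornings of BSM "trip start" points.
    # The coordinates are invented; the point is that 1.5 m accuracy plus a
    # few days of observation narrows a vehicle down to one parking spot.
    from statistics import mean

    # (latitude, longitude) of the first BSM heard from one vehicle each morning
    trip_starts = [
        (42.360082, -71.058880),
        (42.360091, -71.058872),
        (42.360077, -71.058893),
        (42.360085, -71.058878),
        (42.360088, -71.058885),
    ]

    home_lat = mean(p[0] for p in trip_starts)
    home_lon = mean(p[1] for p in trip_starts)

    # Convert the north-south spread to meters (1 degree of latitude ~ 111,320 m).
    spread_m = (max(p[0] for p in trip_starts) - min(p[0] for p in trip_starts)) * 111_320

    print(f"Estimated parking spot: {home_lat:.6f}, {home_lon:.6f}")
    print(f"North-south spread: {spread_m:.1f} meters")

Combine a cluster like that with public property records, and the "owners or tenants" identification the researchers describe is straightforward.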

The security experts found the proposed BSM privacy statement by NHTSA to be one-sided and incomplete:

"The examples of third-party collection provided in paragraph (b) of the privacy statement mention only benign collection for beneficial purposes, such as accident avoidance, transit maintenance, or valuable commercial services. They selectively highlight the socially beneficial uses of V2V information without mentioning commercial services [which] may not [be] valuable for consumers; or other potential, detrimental, or even criminal uses. This is especially troubling..."

The CDT and security experts recommended that due to the privacy risks described:

"... we firmly believe that, unless a considerably more privacy-conscious proposal is put forward, consumers should be given the choice to opt-in or opt-out (without a default opt-in), and should be made clearly aware of what they are opting in to..."

I agree. A totally sensible and appropriate approach. The group's detailed submission also compared several vehicle tracking methods:

"... physically following a car or placing a GPS device on it, do not allow for mass tracking of most vehicles in a given area. Some options, such as cellphone tracking or toll collection history, require specialized access to a private infrastructure. Cellular data does not provide precise position information to just anyone who listens in... Moreover, cellular technology is evolving rapidly — today it provides more privacy than in the past... license-plate-based tracking requires a line of sight to a given vehicle, and thus is usually neither pervasive nor real-time. A vehicle can be observed driven or parked, but not tracked continuously unless followed. Only a few vehicles can be observed by a camera at any given time. Thus, license-plate-based tracking provides only episodic reports of locations for most vehicles. In contrast, because receiving the BSM does not require a line of sight and the BSM is transmitted ten times per second, multiple vehicles can be tracked simultaneously, continuously, and in real time.

The Privacy Technical Analysis Report concluded that the only option other than BSMs that may be viable for large-scale real-time tracking without any infrastructure access is via toll transponders."

License-plate tracking and the cameras used are often referred to as Automated License Plate Readers (ALPR). Law enforcement uses four types of ALPR technologies: mobile cameras, stationary cameras, semi-stationary cameras, and ALPR databases.
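
To put the quoted broadcast rate in perspective, here is a back-of-the-envelope comparison. The 10-per-second figure comes from the proposed rule; the daily driving time and ALPR sighting count are illustrative assumptions, not numbers from the report:

    # Back-of-the-envelope comparison: BSM position reports vs. ALPR sightings.
    # The BSM rate (10 per second) is from the proposed rule; the other
    # figures below are illustrative assumptions.
    bsm_per_second = 10
    driving_hours_per_day = 1.5          # assumed time on the road per day

    bsm_points_per_day = bsm_per_second * 60 * 60 * driving_hours_per_day
    alpr_sightings_per_day = 4           # hypothetical camera passes per day

    print(f"BSM position reports per day:  {bsm_points_per_day:,.0f}")
    print(f"ALPR sightings per day (est.): {alpr_sightings_per_day}")
    print(f"Ratio: roughly {bsm_points_per_day / alpr_sightings_per_day:,.0f} to 1")

Even with conservative assumptions, the BSM stream is thousands of times denser than episodic plate reads, which is exactly why the researchers single it out.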

So, BSM provides large-scale real-time tracking. And, while toll transponders provide consumers with a convenient method to pay and zoom through tolls, the technology can be used to track you. Read the full CDT blog post.


Security Expert Says Protecting Driverless Cars From Hackers Is Hard

Wired Magazine recently interviewed Charlie Miller, an automobile security expert, about the security of driverless cars. You may remember Miller. He and an associate remotely hacked a moving Jeep vehicle in 2015 to demonstrate security vulnerabilities in autos. Miller later worked for Uber, and recently joined Didi.

Wired Magazine reported:

"Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them…If a bad guy gets control of that, it’s going to be even worse."

The article highlights the security issues with driverless cars used by ride-sharing companies. Simply put, the driverless taxi or ride-share car is unattended for long periods of time. That is a huge opportunity for hackers posing as riders to directly access and hack driverless cars:

"There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network."

The article also highlights some of the differences between driverless cars used as personal vehicles versus as ride-sharing (or taxi) cars. In a driverless personal vehicle, the owner -- who is also the inattentive driver -- can regain control after a remote hack and steer/brake to safety. Not so in a driverless ride-sharing car or taxi.

Do you believe that criminals won't try to hack driverless (ride-sharing and taxi) cars? History strongly suggests otherwise. Since consumers love the convenience of pay-at-the-pump in gas stations, criminals have repeatedly installed skimming devices in unattended gas station pumps to steal drivers' debit/credit payment information. No doubt, criminals will want to hack driverless cars to steal riders' payment information.

What are your opinions of the security of driverless cars?