
Uber: President Resigns, Greyball, A Major Lawsuit, Corporate Culture, And Lingering Questions

Several executive changes are underway at Uber. The President of Uber's Ridesharing unit, Jeff Jones, resigned after only six months at the company. The Recode site posted a statement by Jones:

"Jones also confirmed the departure with a blistering assessment of the company. "It is now clear, however, that the beliefs and approach to leadership that have guided my career are inconsistent with what I saw and experienced at Uber, and I can no longer continue as president of the ride-sharing business," he said in a statement to Recode."

Prior to joining Uber, Jones had been the Chief Marketing Officer (CMO) at Target stores. Travis Kalanick, the Chief Executive Officer at Uber, disclosed that he met Jones at a Ted conference in Vancouver, British Columbia, Canada.

There have been more executive changes at Uber. The company announced on March 7 a search for a Chief Operating Officer (COO). It announced on March 14 the appointment of Zoubin Ghahramani as its new Chief Scientist, based in San Francisco. Ghahramani will lead Uber's AI Labs, the company's recently created machine learning and artificial intelligence research unit, and the associated business strategy. Ghahramani, a Professor of Information Engineering at the University of Cambridge, joined Uber when it acquired Geometric Intelligence.

In February 2017, CEO Travis Kalanick asked Amit Singhal to resign. Singhal, the company's senior vice president of engineering, had joined Uber only a month earlier, after 15 years at Google. Reportedly, Singhal was let go for failing to disclose the reasons for his departure from Google, including sexual harassment allegations.

Given these movements by executives, one might wonder what is happening at Uber. A brief review of the company's history found controversy accompanying its business practices. Earlier this month, an investigative report by The New York Times described a worldwide program by Uber executives to thwart code enforcement inspections by governments:

"The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea.

Greyball was part of a program called VTOS, short for “violation of terms of service,” which Uber created to root out people it thought were using or targeting its service improperly. The program, including Greyball, began as early as 2014 and remains in use, predominantly outside the United States. Greyball was approved by Uber’s legal team."

An example of how the program and Greyball work:

"Uber’s use of Greyball was recorded on video in late 2014, when Erich England, a code enforcement inspector in Portland, Ore., tried to hail an Uber car downtown in a sting operation against the company... officers like Mr. England posed as riders, opening the Uber app to hail a car and watching as miniature vehicles on the screen made their way toward the potential fares. But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled."

The City of Portland sued Uber in December 2014 and issued a cease-and-desist order. Uber continued operations in the city, and a pilot program in Portland began in April 2015. Later in 2015, the City of Portland authorized Uber's operations. In March 2017, Oregon Live reported a pending investigation:

"An Uber spokesman said Friday that the company has not used the Greyball program in Portland since then. Portland Commissioner Dan Saltzman said Monday that the investigation will focus on whether Uber has used Greyball, or any form of it, to obstruct the city's enforcement of its regulations. The review would examine information the companies have already provided the city, and potentially seeking additional data from them... The investigation also will affect Uber's biggest competitor, Lyft, Saltzman said, though Lyft did not operate in Portland until after its business model was legalized, and there's no indication that it similarly screened regulators... Commissioner Nick Fish earlier called for a broader investigation and said the City Council should seek subpoena powers to determine the extent of Uber's "Greyball" usage..."

This raises questions about other locations where Uber may have used its Greyball program. The San Francisco District Attorney's office is investigating, as are government officials in Sydney, Australia. Also this month, the Upstate Transportation Association (UTA), a trade group of taxi companies in New York State, asked government officials to investigate. The Albany Times Union reported:

"In a Tuesday letter to Governor Andrew Cuomo, Assembly Speaker Carl Heastie and Senate Majority Leader John Flanagan, UTA President John Tomassi accused the company of possibly having used the Greyball technology in New York to evade authorities in areas where ride-hailing is not allowed. Uber and companies like it are authorized to operate only in New York City, where they are considered black cars. But UTA’s concerns about Greyball are spurred in part by reported pick-ups in some suburban areas."

A look at Uber's operations in Chicago sheds some light on how the company operates. NBC Channel 5 reported in 2014:

"... news that President Barack Obama's former adviser and campaign strategist David Plouffe has joined the company as senior VP of policy and strategy delivers a strong message to its enemies: Uber means business. How dare you disrupt our disruption? You're going down.

Here in the Land of Lincoln, Plouffe's hiring adds another layer of awkward personal politics to the Great Uber Debate. It's an increasingly tangled web: Plouffe worked in the White House alongside Rahm Emanuel when the Chicago mayor was Chief of Staff. Emanuel, trying to strike a balance between Uber-friendly and cabbie-considerate, recently passed a bill that restricts Uber drivers from picking up passengers at O'Hare, Midway and McCormick Place... Further complicating matters, Emanuel's brother, Hollywood super-agent Ari Emanuel, has invested in Uber..."

That debate also included the Illinois Governor, as politicians try to balance the competing needs of traditional taxi companies, ride-sharing companies, and consumers. The entire situation raises questions about why there aren't Greyball investigations by more cities. Is it due to local political interference?

That isn't all. In 2014, Uber's "God View" tool raised concerns about privacy, the company's tracking of its customers, and a questionable corporate culture. At that time, an Uber executive reportedly suggested that the company hire opposition researchers to dig up dirt about its critics in the news media.

Uber's claims in January 2015 of reduced drunk-driving accidents due to its service seemed dubious after scrutiny. ProPublica explained:

"Uber reported that cities using its ridesharing service have seen a reduction in drunk driving accidents, particularly among young people. But when ProPublica data reporter Ryann Grochowski Jones took a hard look at the numbers, she found the company's claim that it had "likely prevented" 1,800 crashes over the past 2.5 years to be lacking... the first red flag was that Uber didn't include a methodology with its report. A methodology is crucial to show how the statistician did the analysis... Uber eventually sent her a copy of the methodology separately, which showed that drunk-driving accidents involving drivers under 30 dropped in California after Uber's launch. The math itself is fine, Grochowski Jones says, but Uber offers no proof that those under 30 and Uber users are actually the same population.

This seems like one of those famous moments in intro statistics courses where we talk about correlation and causality, ProPublica Editor-in-Chief Steve Engelberg says. Grochowski Jones agrees, showcasing how drowning rates are higher in the summer as are ice cream sales but clearly one doesn't cause the other."
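The correlation-versus-causation point is easy to demonstrate. The following Python sketch (my illustration, with invented numbers, not ProPublica's or Uber's data) builds two series that both depend on a shared seasonal "temperature" factor; they correlate strongly even though neither causes the other:

```python
import math
import random

random.seed(42)

months = list(range(24))
# Hypothetical seasonal driver: temperature peaks mid-year.
temperature = [20 + 10 * math.sin(2 * math.pi * m / 12) + random.gauss(0, 1)
               for m in months]

# Both series depend on temperature, not on each other.
ice_cream_sales = [50 + 3 * t + random.gauss(0, 5) for t in temperature]
drownings = [2 + 0.4 * t + random.gauss(0, 1) for t in temperature]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drownings)
print(f"correlation: {r:.2f}")  # high, despite no causal link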

Similar claims by Uber about the benefits of "surge pricing" seemed to wilt under scrutiny. ProPublica reported in October 2015:

"The company has always said the higher prices actually help passengers by encouraging more drivers to get on the road. But computer scientists from Northeastern University have found that higher prices don’t necessarily result in more drivers. Researchers Le Chen, Alan Mislove and Christo Wilson created 43 new Uber accounts and virtually hailed cars over four weeks from fixed points throughout San Francisco and Manhattan. They found that many drivers actually leave surge areas in anticipation of fewer people ordering rides. "What happens during a surge is, it just kills demand," Wilson told ProPublica."

Another surge-pricing study in 2016 concluded with a positive spin:

"... that consumers can benefit from surge pricing. They find this is the case when a market isn’t fully served by traditional taxis when demand is high. In short, if you can’t find a cab on New Year’s Eve, Daniels’ research says you’re better off with surge pricing... surge pricing allows service to expand during peak demand without creating idleness for drivers during normal demand. This means that more peak demand customers get rides, albeit at a higher price. This also means that the price during normal demand settings drops, allowing more customers service at these normal demand times."

In other words, "can benefit" doesn't ensure that riders will benefit. And "allows service to expand" doesn't ensure that service will expand during peak demand periods. "Surge pricing" does ensure higher prices. A better solution might be surge payments to drivers during peak hours to expand services. Uber will still make more money with more rides during peak periods.

The surge-pricing concept is a reminder of basic economics: when suppliers raise prices, demand decreases, and a lower price should follow. The surge price prevents that correction. And as the Northeastern study highlighted, drivers have adapted: rather than flooding into surge areas to force the higher price back down, many leave them.
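That textbook effect can be shown with a toy linear demand curve (an illustration with invented numbers, not data from the Northeastern study):

```python
def rides_demanded(price, base_demand=1000, sensitivity=40):
    """Hypothetical linear demand curve: rides requested at a given price."""
    return max(0, base_demand - sensitivity * price)

normal_price = 10.0
surge_price = normal_price * 2.1  # a 2.1x surge multiplier

normal = rides_demanded(normal_price)  # 1000 - 400 = 600 rides
surged = rides_demanded(surge_price)   # 1000 - 840 = 160 rides

print(normal, surged)
```

Unless the higher price also pulls more drivers onto the road, the surge simply prices riders out: fewer trips at a higher fare.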

And, there is more. In 2015, the State of California Labor Commission ruled that Uber drivers are employees and not independent contractors, as the company claimed. Concerns about safety and criminal background checks have been raised. Last year, BuzzFeed News analyzed ride data from Uber:

"... the company received five claims of rape and “fewer than” 170 claims of sexual assault directly related to an Uber ride as inbound tickets to its customer service database between December 2012 and August 2015. Uber provided these numbers as a rebuttal to screenshots obtained by BuzzFeed News. The images that were provided by a former Uber customer service representative (CSR) to BuzzFeed News, and subsequently confirmed by multiple other parties, show search queries conducted on Uber’s Zendesk customer support platform from December 2012 through August 2015... In one screenshot, a search query for “sexual assault” returns 6,160 Uber customer support tickets. A search for “rape” returns 5,827 individual tickets."

That news item is notable since it includes several screenshots from the company's customer support tool. Uber's response:

"The ride-hail giant repeatedly asserted that the high number of queries from the screenshots is overstated, however Uber declined BuzzFeed News’ request to grant direct access to the data, or view its data analysis procedures. When asked for any additional anonymous data on the five rape complaint tickets it claims to have received between December 2012 and August 2015, Uber declined to provide any information."

Context matters regarding both ride safety and corporate culture. A former Uber employee shared a disturbing story with allegations of sexual harassment:

"I joined Uber as a site reliability engineer (SRE) back in November 2015, and it was a great time to join as an engineer... After the first couple of weeks of training, I chose to join the team that worked on my area of expertise, and this is where things started getting weird. On my first official day rotating on the team, my new manager sent me a string of messages over company chat. He was in an open relationship, he said, and his girlfriend was having an easy time finding new partners but he wasn't. He was trying to stay out of trouble at work, he said, but he couldn't help getting in trouble, because he was looking for women to have sex with... Uber was a pretty good-sized company at that time, and I had pretty standard expectations of how they would handle situations like this. I expected that I would report him to HR, they would handle the situation appropriately, and then life would go on - unfortunately, things played out quite a bit differently. When I reported the situation, I was told by both HR and upper management that even though this was clearly sexual harassment and he was propositioning me, it was this man's first offense, and that they wouldn't feel comfortable giving him anything other than a warning and a stern talking-to... I was then told that I had to make a choice: (i) I could either go and find another team and then never have to interact with this man again, or (ii) I could stay on the team, but I would have to understand that he would most likely give me a poor performance review when review time came around, and there was nothing they could do about that. I remarked that this didn't seem like much of a choice..."

Her story seems very credible. Based upon this and other events, some industry watchers question Uber's value should it seek more investors via an initial public offering (IPO):

"Uber has hired two outside law firms to conduct investigations related to the former employee's claims. One will investigate her claims specifically, the other is conducting a broader investigation into Uber's workplace practices...Taken together, the recent reports paint a picture of a company where sexual harassment is tolerated, laws are seen as inconveniences to be circumvented, and a showcase technology effort might be based on stolen secrets. That's all bad for obvious reasons... What will Uber's valuation look like the next time it has to raise money -- or when it attempts to go public?"

To understand the "might be based on stolen secrets" reference, the San Francisco Examiner newspaper explained on March 20:

"In the past few weeks, Uber’s touted self-driving technology has come under both legal and public scrutiny after Alphabet — Google’s parent company — sued Uber over how it obtained its technology. Alphabet alleges that the technology for Otto, a self-driving truck company acquired by Uber last year, was stolen from Alphabet’s own Waymo self-driving technology... Alphabet alleges Otto founder Anthony Levandowski downloaded proprietary data from Alphabet’s self-driving files. In December 2015, Levandowski downloaded 14,000 design files onto a memory card reader and then wiped all the data from the laptop, according to the lawsuit.

The lawsuit also lays out a timeline where Levandowski and Uber were in cahoots with one another before the download operation. Alphabet alleges the two parties were in communications with each other since the summer of 2015, when Levandowski still worked for Waymo. Levandowski left Waymo in January 2016, started Otto the next month and joined Uber in August as vice president of Uber’s self-driving technology after Otto was purchased by Uber for $700 million... This may become the biggest copyright infringement case brought forth in Silicon Valley since Apple v. Microsoft in 1994, when Apple sued Microsoft over the alleged likeness in the latter’s graphic user interface."

And just this past Saturday, Uber suspended its driverless car program in Arizona after a crash. Reportedly, Uber's driverless car programs in Arizona, Pittsburgh, and San Francisco are suspended pending the results of the crash investigation.

No doubt, there will be more news about the lawsuit, safety issues, sexual harassment, Greyball, and investigations by local cities. What are your opinions?


Maker Of Smart Vibrators To Pay $3.75 Million To Settle Privacy Lawsuit

Today's smart homes contain a variety of internet-connected appliances -- televisions, utility meters, hot water heaters, thermostats, refrigerators, security systems -- and devices you might not expect to have WiFi connections: mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins. Add smart vibrators to the list.

We-Vibe, a maker of vibrators for better sex, will pay U.S. $3.75 million to settle a class-action lawsuit involving allegations that the company tracked users without their knowledge or consent. The Guardian reported:

"Following a class-action lawsuit in an Illinois federal court, We-Vibe’s parent company Standard Innovation has been ordered to pay a total of C$4m to owners, with those who used the vibrator's associated app entitled to the full amount each. Those who simply bought the vibrator can claim up to $199... the app came with a number of security and privacy vulnerabilities... The app that controls the vibrator is barely secured, allowing anyone within bluetooth range to seize control of the device. In addition, data is collected and sent back to Standard Innovation, letting the company know about the temperature of the device and the vibration intensity – which, combined, reveal intimate information about the user’s sexual habits..."

We-Vibe's products are available at the Canadian company's online store and at Amazon. This YouTube video (warning: not safe for work) promotes the company's devices. Consumers can use the smart vibrator with or without the mobile app on their smartphones. The app is available at both the Apple iTunes and Google Play online stores.

As with any other digital device, security matters. CNET reported last summer:

"... two security researchers who go by the names followr and g0ldfisk found flaws in the software that controls the [We-Vibe 4Plus] device. It could potentially let a hacker take over the vibrator while it's in use. But that's -- at this point -- only theoretical. What the researchers found more concerning was the device's use of personal data. Standard Innovation collects information on the temperature of the device and the intensity at which it's vibrating, in real time, the researchers found..."

In the September 2016 complaint (Adobe PDF; 601 K bytes), the plaintiffs sought to stop Standard Innovation from "monitoring, collecting, and transmitting consumers’ usage information," to collect damages for the alleged unauthorized data collection and privacy violations, and to reimburse users for their purchases of We-Vibe devices (because a personal vibrator with this alleged data collection is worth less than one without it). That complaint alleged:

"Unbeknownst to its customers, however, Defendant designed We-Connect to (i) collect and record highly intimate and sensitive data regarding consumers’ personal We-Vibe use, including the date and time of each use and the selected vibration settings, and (ii) transmit such usage data — along with the user’s personal email address — to its servers in Canada... By design, the defining feature of the We-Vibe device is the ability to remotely control it through We-Connect. Defendant requires customers to use We-Connect to fully access the We-Vibe’s features and functions. Yet, Defendant fails to notify or warn customers that We-Connect monitors and records, in real time, how they use the device. Nor does Defendant disclose that it transmits the collected private usage information to its servers in Canada... Defendant programmed We-Connect to secretly collect intimate details about its customers’ use of the We-Vibe, including the date and time of each use, the vibration intensity level selected by the user, the vibration mode or patterns selected by the user, and incredibly, the email address of We-Vibe customers who had registered with the App, allowing Defendant to link the usage information to specific customer accounts... In addition, Defendant designed We-Connect to surreptitiously route information from the “connect lover” feature to its servers. For instance, when partners use the “connect lover” feature and one takes remote control of the We-Vibe device or sends a [text or video chat] communication, We-Connect causes all of the information to be routed to its servers, and then collects, at a minimum, certain information about the We-Vibe, including its temperature and battery life. That is, despite promising to create “a secure connection between your smartphones,” Defendant causes all communications to be routed through its servers..."

The We-Vibe Nova product page lists ten different vibration modes (e.g., Crest, Pulse, Wave, Echo, Cha-cha-cha, etc.), or users can create their own custom modes. The settlement agreement defined two groups of affected consumers:

"... the proposed Purchaser Class, consisting of: all individuals in the United States who purchased a Bluetooth-enabled We-Vibe Brand Product before September 26, 2016. As provided in the Settlement Agreement, “We-Vibe Brand Product” means the “We-Vibe® Classic; We-Vibe® 4 Plus; We-Vibe® 4 Plus App Only; Rave by We-Vibe™ and Nova by We-Vibe™... the proposed App Class, consisting of: all individuals in the United States who downloaded the We-Connect application and used it to control a We-Vibe Brand Product before September 26, 2016."

According to the settlement agreement, affected users will be notified by e-mail, by notices in the We-Connect mobile app, via a settlement website (to be created), through a "one-time half of a page summary publication notice in People Magazine and Sports Illustrated," and by online advertisements on several sites, such as Google, YouTube, Facebook, Instagram, Twitter, and Pinterest. The settlement site will likely specify additional information, including any deadlines and additional notices.

We-Vibe announced in its blog on October 3, 2016 several security improvements:

"... we updated the We-Connect™ app and our app privacy notice. That update includes: a) Enhanced communication regarding our privacy practices and data collection – in both the onboarding process and in the app settings; b) No registration or account creation. Customers do not provide their name, email or phone number or other identifying information to use We-Connect; c) An option for customers to opt-out of sharing anonymous app usage data is available in the We-Connect settings; d) A new plain language Privacy Notice outlines how we collect and use data for the app to function and to improve We-Vibe products."

I briefly reviewed the We-Connect App Privacy Policy (dated September 26, 2016) linked from the Google Play store. When buying digital products online, the privacy policy for the mobile app often differs from the privacy policy for the website. (Informed shoppers read both.) Some key sections from the app privacy policy:

"Collection And Use of Information: You can use We-Vibe products without the We-Connect app. No information related to your use of We-Vibe products is collected from you if you don’t install and use the app."

I don't have access to the prior version of the privacy policy. That last sentence seems clear: install and use the app, and the data collection begins. It should be a huge warning to prospective users. More from the policy:

"We collect and use information for the purposes identified below... To access and use certain We-Vibe product features, the We-Connect app must be installed on an iOS or Android enabled device and paired with a We-Vibe product. We do not ask you to provide your name, address or other personally identifying information as part of the We-Connect app installation process or otherwise... The first time you launch the We-Connect app, our servers will provide you with an anonymous token. The We-Connect app will use this anonymous token to facilitate connections and share control of your We-Vibe with your partner using the Connect Lover feature... certain limited data is required for the We-Connect app to function on your device. This data is collected in a way that does not personally identify individual We-Connect app users. This data includes the type of device hardware and operating system, unique device identifier, IP address, language settings, and the date and time the We-Connect app accesses our servers. We also collect certain information to facilitate the exchange of messages between you and your partner, and to enable you to adjust vibration controls. This data is also collected in a way that does not personally identify individual We-Connect app users."

In a way that does not personally identify individuals? What way? Is that the "anonymous token" or something else? More clarity seems necessary.

Consumers should read the app privacy policy and judge for themselves. Me? I am skeptical. Why? The "unique device identifier" can be used exactly for that... to identify a specific phone. The IP address associated with each mobile device can also be used to identify specific persons. Match either number to the user's 10-digit phone number (readily available on phones), and it seems that one can easily re-assemble anonymously collected data afterwards to make it user-specific.
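The re-identification risk is mechanical. Here is a hypothetical sketch (all field names and records are invented for illustration) of how "anonymous" usage logs keyed by a persistent device identifier can be joined back to a person once any other dataset, such as an app-store receipt or an ad-network log, links that identifier to an identity:

```python
# "Anonymous" usage records: no name, but a persistent device identifier.
anonymous_usage = [
    {"device_id": "A1B2", "ip": "203.0.113.7", "event": "session", "intensity": 8},
    {"device_id": "C3D4", "ip": "198.51.100.2", "event": "session", "intensity": 3},
]

# A second dataset that ties the same identifier to a real identity.
identity_records = [
    {"device_id": "A1B2", "phone": "555-0100", "name": "Alice Example"},
]

# Index identities by device identifier, then join the two datasets.
lookup = {record["device_id"]: record for record in identity_records}

reidentified = [
    {**usage, "name": lookup[usage["device_id"]]["name"]}
    for usage in anonymous_usage
    if usage["device_id"] in lookup
]

print(reidentified)  # the "anonymous" session is now linked to a name
```

One matching record in any side dataset is enough; the "anonymity" of the usage log depends entirely on no one ever making that join.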

And since partner(s) can remotely control a user's We-Vibe device, their information is collected, too. Persons with multiple partners (and/or multiple We-Vibe devices) should thoroughly consider the implications.

The About Us page in the We-Vibe site contains this company description:

"We-Vibe designs and manufactures world-leading couples and solo vibrators. Our world-class engineers and industrial designers work closely with sexual wellness experts, doctors and consumers to design and develop intimate products that work in sync with the human body. We use state-of-the-art techniques and tools to make sure our products set new industry standards for ergonomic design and high performance while remaining eco‑friendly and body-safe."

Hmmmm. No mention of privacy or security. Hopefully, a future About Us page revision will mention both. Hopefully, no government officials use these or other branded smart sex toys. This is exactly the type of data collection spies could use to embarrass and/or blackmail targets.

The settlement is a reminder that companies are willing, eager, and happy to exploit consumers' failure to read privacy policies. A study last year found that 74 percent of consumers surveyed never read privacy policies.

All of this should be a reminder to consumers that companies highly value the information they collect about their users, and generate additional revenue streams by selling information collected to corporate affiliates, advertisers, marketing partners, and/or data brokers. Consumers' smartphones are central to that data collection.

What are your opinions of the We-Vibe settlement? Of its products and security?


Can Customs and Border Officials Search Your Phone? These Are Your Rights

[Editor's note: today's guest post is by the reporters at ProPublica. Past actions by CBP, including the search of a domestic flight, have raised privacy concerns among many citizens. Informed consumers know their privacy rights before traveling. This news article first appeared on March 13 and is reprinted with permission.]

by Patrick G. Lee, ProPublica

A NASA scientist heading home to the U.S. said he was detained in January at a Houston airport, where Customs and Border Protection officers pressured him for access to his work phone and its potentially sensitive contents.

Last month, CBP agents checked the identification of passengers leaving a domestic flight at New York's John F. Kennedy Airport during a search for an immigrant with a deportation order.

And in October, border agents seized phones and other work-related material from a Canadian photojournalist. They blocked him from entering the U.S. after he refused to unlock the phones, citing his obligation to protect his sources.

These and other recent incidents have revived confusion and alarm over what powers border officials actually have and, perhaps more importantly, how to know when they are overstepping their authority.

The unsettling fact is that border officials have long had broad powers -- many people just don't know about them. Border officials, for instance, have search powers that extend 100 air miles inland from any external boundary of the U.S. That means border agents can stop and question people at fixed checkpoints dozens of miles from U.S. borders. They can also pull over motorists whom they suspect of a crime as part of "roving" border patrol operations.

Sowing even more uneasiness, ambiguity around the agency's search powers -- especially over electronic devices -- has persisted for years as courts nationwide address legal challenges raised by travelers, privacy advocates and civil-rights groups.

We've dug out answers about the current state of play when it comes to border searches, along with links to more detailed resources.

Doesn't the Fourth Amendment protect us from "unreasonable searches and seizures"?

Yes. The Fourth Amendment to the Constitution articulates the "right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures." However, those protections are lessened when entering the country at international terminals at airports, other ports of entry and subsequently any location that falls within 100 air miles of an external U.S. boundary.

How broad is Customs and Border Protection's search authority?

According to federal statutes, regulations and court decisions, CBP officers have the authority to inspect, without a warrant, any person trying to gain entry into the country and their belongings. CBP can also question individuals about their citizenship or immigration status and ask for documents that prove admissibility into the country.

This blanket authority for warrantless, routine searches at a port of entry ends when CBP decides to undertake a more invasive procedure, such as a body cavity search. For these kinds of actions, the CBP official needs to have some level of suspicion that a particular person is engaged in illicit activity, not simply that the individual is trying to enter the U.S.

Does CBP's search authority cover electronic devices like smartphones and laptops?

Yes. CBP refers to several statutes and regulations in justifying its authority to examine "computers, disks, drives, tapes, mobile phones and other communication devices, cameras, music and other media players, and any other electronic or digital devices."

According to current CBP policy, officials should search electronic devices with a supervisor in the room, when feasible, and also in front of the person being questioned "unless there are national security, law enforcement, or other operational considerations" that take priority. For instance, if allowing a traveler to witness the search would reveal sensitive law enforcement techniques or compromise an investigation, "it may not be appropriate to allow the individual to be aware of or participate in a border search," according to a 2009 privacy impact assessment by the Department of Homeland Security.

CBP says it can conduct these searches "with or without" specific suspicion that the person who possesses the items is involved in a crime.

With a supervisor's sign-off, CBP officers can also seize an electronic device -- or a copy of the information on the device -- "for a brief, reasonable period of time to perform a thorough border search." Such seizures typically shouldn't exceed five days, although officers can apply for extensions in up to one-week increments, according to CBP policy. If a review of the device and its contents does not turn up probable cause for seizing it, CBP says it will destroy the copied information and return the device to its owner.

Can CBP really search my electronic devices without any specific suspicion that I might have committed a crime?

The Supreme Court has not directly ruled on this issue. However, a 2013 decision from the U.S. Court of Appeals for the Ninth Circuit -- one level below the Supreme Court -- provides some guidance on potential limits to CBP's search authority.

In a majority decision, the court affirmed that cursory searches of laptops -- such as having travelers turn their devices on and then examining their contents -- do not require any specific suspicion about the travelers to justify them.

The court, however, raised the bar for a "forensic examination" of the devices, such as using "computer software to analyze a hard drive." For these more powerful, intrusive and comprehensive searches, which could provide access to deleted files and search histories, password-protected information and other private details, border officials must have a "reasonable suspicion" of criminal activity -- not just a hunch.

As it stands, the 2013 appeals court decision legally applies only to the nine Western states in the Ninth Circuit, including California, Arizona, Nevada, Oregon and Washington. It's not clear whether CBP has taken the 2013 decision into account more broadly: The last time the agency publicly updated its policy for searching electronic devices was in 2009. CBP is currently reviewing that policy and there is "no specific timeline" for when an updated version might be announced, according to the agency.

"Laptop computers, iPads and the like are simultaneously offices and personal diaries. They contain the most intimate details of our lives," the court's decision said. "It is little comfort to assume that the government -- for now -- does not have the time or resources to seize and search the millions of devices that accompany the millions of travelers who cross our borders. It is the potential unfettered dragnet effect that is troublesome."

During the 2016 fiscal year, CBP officials conducted 23,877 electronic media searches, a five-fold increase from the previous year. In both the 2015 and 2016 fiscal years, the agency processed more than 380 million arriving travelers.

Am I legally required to disclose the password for my electronic device or social media, if CBP asks for it?

That's still an unsettled question, according to Liza Goitein, co-director of the Liberty and National Security Program at the Brennan Center for Justice. "Until it becomes clear that it's illegal to do that, they're going to continue to ask," she said.

The Fifth Amendment says that no one shall be made to serve as "a witness against himself" in a criminal case. Lower courts, however, have produced differing decisions on how exactly the Fifth Amendment applies to the disclosure of passwords to electronic devices.

Customs officers have the statutory authority "to demand the assistance of any person in making any arrest, search, or seizure authorized by any law enforced or administered by customs officers, if such assistance may be necessary." That statute has traditionally been invoked by immigration agents to enlist the help of local, state and other federal law enforcement agencies, according to Nathan Wessler, a staff attorney with the ACLU's Speech, Privacy and Technology Project. Whether the statute also compels individuals being interrogated by border officials to divulge their passwords has not been directly addressed by a court, Wessler said.

Even with this legal uncertainty, CBP officials have broad leverage to induce travelers to share password information, especially when someone just wants to catch their flight, get home to family or be allowed to enter the country. "Failure to provide information to assist CBP may result in the detention and/or seizure of the electronic device," according to a statement provided by CBP.

Travelers who refuse to give up passwords could also be detained for longer periods and have their bags searched more intrusively. Foreign visitors could be turned away at the border, and green card holders could be questioned and challenged about their continued legal status.

"People need to think about their own risks when they are deciding what to do. US citizens may be comfortable doing things that non-citizens aren't, because of how CBP may react," Wessler said.

What is some practical advice for protecting my digital information?

Consider which devices you absolutely need to travel with, and which ones you can leave at home. Setting a strong password and encrypting your devices are helpful in protecting your data, but you may still lose access to your devices for undefined periods should border officials decide to seize and examine their contents.

Another option is to leave all of your devices behind and carry a travel-only phone free of most personal information. However, even this approach carries risks. "We also flag the reality that if you go to extreme measures to protect your data at the border, that itself may raise suspicion with border agents," said Sophia Cope, a staff attorney at the Electronic Frontier Foundation. "It's so hard to tell what a single border agent is going to do."

The EFF has released an updated guide to data protection options for travelers.

Does CBP recognize any exceptions to what it can examine on electronic devices?

If CBP officials want to search legal documents, attorney work product or information protected by attorney-client privilege, they may have to follow "special handling procedures," according to agency policy. If there's suspicion that the information includes evidence of a crime or otherwise relates to "the jurisdiction of CBP," the border official must consult the CBP associate/assistant chief counsel before undertaking the search.

As for medical records and journalists' notes, CBP says its officers will follow relevant federal laws and agency policies in handling them. When asked for more information on these procedures, an agency spokesperson said that CBP has "specific provisions" for dealing with this kind of information, but did not elaborate further. Questions that arise regarding these potentially sensitive materials can be handled by the CBP associate/assistant chief counsel, according to CBP policy. The agency also says that it will protect business or commercial information from "unauthorized disclosure."

Am I entitled to a lawyer if I'm detained for further questioning by CBP?

No. According to a statement provided by CBP, "All international travelers arriving to the U.S. are subject to CBP processing, and travelers bear the burden of proof to establish that they are clearly eligible to enter the United States. Travelers are not entitled to representation during CBP administrative processing, such as primary and secondary inspection."

Even so, some immigration lawyers recommend that travelers carry with them the number for a legal aid hotline or a specific lawyer who will be able to help them, should they get detained for further questioning at a port of entry.

"It is good practice to ask to speak to a lawyer," said Paromita Shah, associate director at the National Immigration Project of the National Lawyers Guild. "We always encourage people to have a number where their attorney can be reached, so they can explain what is happening and their attorney can try to intervene. It's definitely true that they may not be able to get into the actual space, but they can certainly intervene."

Lawyers who fill out this form on behalf of a traveler headed into the United States might be allowed to advocate for that individual, although local practices can vary, according to Shah.

Can I record my interaction with CBP officials?

Individuals on public land are allowed to record and photograph CBP operations so long as their actions do not hinder traffic, according to CBP. However, the agency prohibits recording and photography in locations with special security and privacy concerns, including some parts of international airports and other secure port areas.

Does CBP's power to stop and question people extend beyond the border and ports of entry?

Yes. Federal statutes and regulations empower CBP to conduct warrantless searches for people traveling illegally from another country in any "railway car, aircraft, conveyance, or vehicle" within 100 air miles from "any external boundary" of the country. About two-thirds of the U.S. population live in this zone, including the residents of New York City, Los Angeles, Chicago, Philadelphia and Houston, according to the ACLU.

As a result, CBP currently operates 35 checkpoints, where its officers can stop and question motorists traveling in the U.S. about their immigration status and make "quick observations of what is in plain view" in the vehicle without a warrant, according to the agency. Even at a checkpoint, however, border officials cannot search a vehicle's contents or its occupants unless they have probable cause of wrongdoing, the agency says. Failing that, CBP officials can ask motorists to allow them to conduct a search, but travelers are not obligated to give consent.

When asked how many people were stopped at CBP checkpoints in recent years, as well as the proportion of those individuals detained for further scrutiny, CBP said it did not have the data "on hand," but that the number of people referred for secondary questioning was "minimum." At the same time, the agency says that checkpoints "have proven to be highly effective tools in halting the flow of illegal traffic into the United States."

Within 25 miles of any external boundary, CBP has the additional patrol power to enter onto private land, not including dwellings, without a warrant.

Where can CBP set up checkpoints?

CBP chooses checkpoint locations within the 100-mile zone that help "maximize border enforcement while minimizing effects on legitimate traffic," the agency says.

At airports that fall within the 100-mile zone, CBP can also set up checkpoints next to airport security to screen domestic passengers who are trying to board their flights, according to Chris Rickerd, a policy counsel at the ACLU's National Political Advocacy Department.

"When you fly out of an airport in the southwestern border, say McAllen, Brownsville or El Paso, you have Border Patrol standing beside TSA when they're doing the checks for security. They ask you the same questions as when you're at a checkpoint. 'Are you a US citizen?' They're essentially doing a brief immigration inquiry in the airport because it's part of the 100-mile zone," Rickerd said. "I haven't seen this at the northern border."

Can CBP do anything outside of the 100-mile zone?

Yes. Many of CBP's law enforcement and patrol activities, such as questioning individuals, collecting evidence and making arrests, are not subject to the 100-mile rule, the agency says. For instance, the geographical limit does not apply to stops in which border agents pull a vehicle over as part of a "roving patrol" and not a fixed checkpoint, according to Rickerd of the ACLU. In these scenarios, border agents need reasonable suspicion that an immigration violation or crime has occurred to justify the stop, Rickerd said. For stops outside the 100-mile zone, CBP agents must have probable cause of wrongdoing, the agency said.

The ACLU has sued the government multiple times for data on roving patrol and checkpoint stops. Based on an analysis of records released in response to one of those lawsuits, the ACLU found that CBP officials in Arizona failed "to record any stops that do not lead to an arrest, even when the stop results in a lengthy detention, search, and/or property damage."

The lack of detailed and easily accessible data poses a challenge to those seeking to hold CBP accountable to its duties.

"On the one hand, we fight so hard for reasonable suspicion to actually exist rather than just the whim of an officer to stop someone, but on the other hand, it's not a standard with a lot of teeth," Rickerd said. "The courts would scrutinize it to see if there's anything impermissible about what's going on. But if we don't have data, how do you figure that out?"

ProPublica is a Pulitzer Prize-winning investigative newsroom.

 


Berners-Lee: 3 Reasons Why The Internet Is In Serious Trouble

Most people love the Internet. It's a tool that has made life easier and more efficient in many ways. Even with all of those advances, the inventor of the World Wide Web listed three reasons why our favorite digital tool is in serious trouble:

  1. Consumers have lost control of their personal information
  2. It's too easy for anyone to publish misinformation online
  3. Political advertising online lacks transparency

Tim Berners-Lee explained the first reason:

"The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But, we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share..."

Given President Trump's appointees to the U.S. Federal Communications Commission (FCC), the situation will likely get worse as the FCC seeks to revoke online privacy and net neutrality protections for consumers in the United States. Berners-Lee explained the second reason:

"Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And they choose what to show us based on algorithms that learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking, or designed to appeal to our biases, can spread like wildfire..."

Fake news has become so widespread that many public libraries, schools, and colleges teach students how to recognize fake news sites and content. The problem is more widespread and isn't limited to social networking sites like Facebook promoting certain news. It also includes search engines. Readers of this blog are familiar with the DuckDuckGo search engine, used both for online privacy and to escape the filter bubble. According to its public traffic page, DuckDuckGo handles about 14 million searches daily.

Most other search engines collect information about their users and use it to serve search results related to what they've searched for previously. That's called the "filter bubble." It's great for search engines' profitability because it encourages repeat usage, but it's terrible for consumers who want unbiased and unfiltered search results.

Berners-Lee warned that online political advertising:

"... has rapidly become a sophisticated industry. The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data mean that political campaigns are now building individual adverts targeted directly at users. One source suggests that in the 2016 U.S. election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls. Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?"

What do you think of the assessment by Berners-Lee? Of his solutions? Any other issues?


Boston Public Library Offers Workshop About How To Spot Fake News

The Boston Public Library (BPL) offers a wide variety of programs, events, and workshops for the public. The Grove Hall branch is offering several sessions of a free workshop titled "Recognizing Fake News." The workshop description:

"Join us for a workshop to learn how to critically watch the news on television and online in order to detect "fake news." Using the News Literacy Project's interactive Checkology™ curriculum, leading journalists and other experts guide participants through real-life examples from the news industry."

What is fake news? The Public Libraries Association (PLA) offered this definition:

"Fake news is just as it sounds: news that is misleading and not based on fact or, simply put, fake. Unfortunately, the literal definition of fake news is the least complicated aspect of this complex topic. Unlike satire news... fake news has the intention of disseminating false information, not for comedy, but for consumption. And without the knowledge of appropriately identifying fake news, these websites can do an effective job of tricking the untrained eye into believing it’s a credible source. Indeed, its intention is deception.

To be sure, fake news is nothing new... The Internet, particularly social media, has completely manipulated the landscape of how information is born, consumed, and shared. No longer is content creation reserved for official publishing houses or media outlets. For better or for worse, anybody can form a platform on the Internet and gain a following. In truth, we all have the ability to create viral news—real or fake—with a simple tweet or Facebook post."

The News Literacy Project is a nonpartisan national nonprofit organization that works with educators and journalists to teach middle school and high school students how to distinguish fact from fiction.

The upcoming workshop sessions at the BPL Grove Hall branch are tomorrow, March 11 at 3:00 pm, and Wednesday, March 29 at 1:00 pm. Participants will learn about the four main types of content (news, opinion, entertainment, and advertising), and the decision processes journalists use to decide which news to publish. The workshop presents real examples enabling participants to test their skills at recognizing the four types of content and "fake news."

While much of the workshop content is targeted at students, adults can also benefit. Nobody wants to be duped by fake or misleading news. Nobody wants to mistake advertising or opinion for news. The sessions include opportunities for participants to ask questions. The workshop lasts about an hour and registration is not required.

Many public libraries across the nation offer various workshops about how to spot "fake news," including Athens (Georgia), Austin (Texas), Bellingham (Washington), Chicago (Illinois), Clifton Park (New York), Davenport (Iowa), Elgin (Illinois), Oakland (California), San Jose (California), and Topeka (Kansas). Some colleges and universities offer similar workshops, including American University and Cornell University. Some workshops included panelists or speakers from local news organizations.

The BPL Grove Hall branch is located at 41 Geneva Avenue in the Roxbury section of Boston. The branch's phone is (617) 427-3337.

Have you attended a "fake news" workshop at a local public library in your town or city? If so, share your experience below.


WikiLeaks Claimed CIA Lost Control Of Its Hacking Tools For Phones And Smart TVs

A hacking division of the Central Intelligence Agency (CIA) has collected an arsenal of hundreds of tools to control a variety of smartphones and smart televisions, including devices made by Apple, Google, Microsoft, Samsung and others. WikiLeaks made this claim in a press release on Tuesday, March 7, announcing its release of:

"... 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence in Langley, Virginia... Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized "zero day" exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive."

WikiLeaks used the code name "Vault 7" to identify this release of its first set of documents, and claimed its source for the documents was a former government hacker or contractor. It also said that its source wanted to encourage a public debate about the CIA's capabilities, which allegedly overlap with the National Security Agency (NSA) causing waste.

The announcement also included statements allegedly describing the CIA's capabilities:

"CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation)... By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI), had over 5000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other "weaponized" malware... The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone. Despite iPhone's minority share (14.5%) of the global smart phone market in 2016, a specialized unit in the CIA's Mobile Development Branch produces malware to infest, control and exfiltrate data from iPhones and other Apple products running iOS, such as iPads."

CIA's capabilities reportedly include the "Weeping Angel" program:

"... developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization. The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS. After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server."

Besides phones and smart televisions, WikiLeaks claimed the agency seeks to hack internet-connected cars and trucks:

"As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations."

No doubt security experts will analyze the documents for veracity during the coming weeks and months. The whole situation is reminiscent of the 2013 disclosures about broad surveillance programs by the National Security Agency (NSA). You can read more about the disclosures by WikiLeaks at the Guardian UK, CBS News, the McClatchy DC news wire, and at Consumer Reports.


FCC Announced Approval Of LTE-U Mobile Devices

On Wednesday, the Office of Engineering and Technology (OET) within the U.S. Federal Communications Commission announced the authorization of unlicensed wireless (a/k/a LTE-U) devices to operate in the 5 GHz band:

"This action follows a collaborative industry process to ensure LTE-U [can coexist] with Wi-Fi and other unlicensed devices operating in the 5 GHz band. The Commission’s provisions for unlicensed devices are designed to prevent harmful interference to radio communications services and stipulate that these devices must accept any harmful interference they receive. Industry has developed various standards within the framework of these rules such as Wi-Fi, Bluetooth and Zigbee that are designed to coexist in shared spectrum. These and other unlicensed technologies have been deployed extensively and are used by consumers and industry for a wide variety of applications.

LTE-U is a specification that was developed and supported by a group of companies within the LTE-U Forum... The LTE-U devices that were certified today have been tested to show they meet all of the FCC’s rules. We understand that the LTE-U devices were evaluated successfully under the co-existence test plan. However, this is not an FCC requirement and similar to conformity testing for private sector standards the co-existence test results are not included in the FCC’s equipment certification records."

Computerworld explained in 2015 the strain on existing wireless capacity and why several technology companies pursued the technology:

"According to the wireless providers and Qualcomm, the technology will make use of the existing unlicensed spectrum most commonly used for Wi-Fi. LTE-U is designed to deliver a similar capability as Wi-Fi, namely short-range connectivity to mobile devices.

As billions of mobile devices and Web video continue to strain wireless networks and existing spectrum allocations, the mobile ecosphere is looking for good sources of spectrum. The crunch is significant, and tangible solutions take a long time to develop... as former FCC Chairman Julius Genachowski and FCC Commissioner Robert McDowell recently remarked, “mobile data traffic in the U.S. will grow sevenfold between 2014 and 2019” while “wearable and connected devices in the U.S. will double” in that same period."

Some cable companies, such as Comcast, opposed LTE-U based upon concerns about the technology conflicting with existing home WiFi. According to Computerworld:

"In real-world tests so far, LTE-U delivers better performance than Wi-Fi, doesn’t degrade nearby Wi-Fi performance and may in fact improve the performance of nearby Wi-Fi networks."

Reportedly, in August 2016 Verizon viewed the testing as "fundamentally unfair and biased." Ajit Pai, the new FCC Chairman, said in a statement on Wednesday:

"LTE-U allows wireless providers to deliver mobile data traffic using unlicensed spectrum while sharing the road, so to speak, with Wi-Fi. The excellent staff of the FCC’s Office of Engineering and Technology has certified that the LTE-U devices being approved today are in compliance with FCC rules. And voluntary industry testing has demonstrated that both these devices and Wi-Fi operations can co-exist in the 5 GHz band. This heralds a technical breakthrough in the many shared uses of this spectrum.

This is a great deal for wireless consumers, too. It means they get to enjoy the best of both worlds: a more robust, seamless experience when their devices are using cellular networks and the continued enjoyment of Wi-Fi, one of the most creative uses of spectrum in history..."


EU Privacy Watchdogs Ask Microsoft For Explanations About Data Collection From Users

A privacy watchdog group in the European Union (EU) is concerned about Microsoft's privacy and data collection practices. The group, comprising 28 national agencies and referred to as the Article 29 Working Party, sent a letter to Microsoft asking for explanations about privacy concerns with the company's Windows 10 operating system.

The February 2017 letter to Brendon Lynch, Chief Privacy Officer, and to Satya Nadella, Chief Executive Officer, was a follow-up to a prior letter sent in January. The February letter explained:

"Following the launch of Windows 10, a new version of the Windows operating system, a number of concerns have been raised, in the media and in signals from concerned citizens to the data protection authorities, regarding protection of your users’ personal data... the Working Party expressed significant concerns about the default installation settings and an apparent lack of control for a user to prevent collection or further processing of data, as well as concerns about the scope of data that are being collected and further processed... "

While Microsoft has been cooperative so far, the group detailed specific privacy concerns:

"... user consent can only be valid if fully informed, freely given and specific. Whilst it is clear that the proposed new express installation screen will present users with five options to limit or switch off certain kinds of data processing it is not clear to what extent both new and existing users will be informed about the specific data that are being collected and processed under each of the functionalities. The proposed new explanation when, for example, a user switches the level of telemetry data from 'full' to 'basic' that Microsoft will collect 'less data' is insufficient without further explanation. Such information currently is also not available in the current version of the privacy policy.

Additionally, the purposes for which Microsoft collects personal data have to be specified, explicit and legitimate, and the data may not be further processed in a way incompatible with those purposes. Microsoft processes data collected through Windows 10 for different purposes, including personalised advertising. Microsoft should clearly explain what kinds of personal data are processed for what purposes. Without such information, consent cannot be informed, and therefore, not valid..."

Visit this EU link for more information about the Article 29 Working Party, or download the Article 29 Working Party letter to Microsoft (Adobe PDF).


GOP Legislation In Congress To Revoke Consumer Privacy Protections

The MediaPost Policy Blog reported:

"Republican Senator Jeff Flake, who opposes the Federal Communications Commission's broadband privacy rules, says he's readying a resolution to rescind them, Politico reports. Flake's confirmation to Politico comes days after Rep. Marsha Blackburn (R-Tennessee), the head of the House Communications Subcommittee, said she intends to work with the Senate to revoke the privacy regulations."

Blackburn's name is familiar. She was a key part of the 2014 GOP effort to keep in place state laws that limit broadband competition by preventing citizens from forming local broadband providers. To get both higher speeds and lower prices than corporate internet service providers (ISPs) offer, many people want to form local broadband providers. They can't, because 20 states have laws preventing broadband competition. A worldwide study in 2014 found that consumers in the United States get poor broadband value: they pay more and get slower speeds. The only U.S. consumers getting good value were community broadband customers. In June 2014, the FCC announced plans to challenge these restrictive state laws that limit competition and keep Internet prices high. That FCC effort failed. To encourage competition and lower prices, several Democratic representatives introduced the Community Broadband Act in 2015. That legislation went nowhere in a GOP-controlled Congress.

Pause for a moment and let that sink in. Blackburn and other GOP representatives have pursued policies under which we consumers all pay more for broadband due to the lack of competition. The GOP, a party that supposedly dislikes regulation and prefers free-market competition, is happy to do the opposite to help its corporate donors. The GOP, a party that historically has promoted states' rights, now uses state laws to restrict the freedoms of constituents at the city, town, and local levels. And that includes rural constituents.

Too many GOP voters seem oblivious to this. Why Democrats failed to capitalize on this broadband issue, especially during the Presidential campaign last year, is puzzling. Everyone needs broadband: work, play, school, travel, entertainment.

Now, back to the effort to revoke the FCC's broadband privacy rules. Several cable, telecommunications, and advertising lobbies sent a letter in January asking Congress to remove the broadband privacy rules. That letter said in part:

"... in adopting new broadband privacy rules late last year, the Federal Communications Commission (“FCC”) took action that jeopardizes the vibrancy and success of the internet and the innovations the internet has and should continue to offer. While the FCC’s Order applies only to Internet Service Providers (“ISPs”), the onerous and unnecessary rules it adopted establish a very harmful precedent for the entire internet ecosystem. We therefore urge Congress to enact a resolution of disapproval pursuant to the Congressional Review Act (“CRA”) vitiating the Order."

The new privacy rules by the FCC require broadband providers (a/k/a ISPs) to:

  • obtain affirmative "opt-in" consent from consumers before using and sharing consumers' sensitive information;
  • treat specified types of information as sensitive (e.g., geo-location, financial information, health information, children's information, Social Security numbers, web browsing history, app usage history, and the content of communications);
  • stop using and sharing information about consumers who have opted out of information sharing;
  • meet transparency requirements to clearly notify customers about the information collection and sharing, and how to change their opt-in or opt-out preferences;
  • avoid "take-it-or-leave-it" offers, where ISPs refuse to serve customers who don't consent to the information collection and sharing; and
  • comply with "reasonable data security practices and guidelines" to protect the sensitive information collected and shared.
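To make the opt-in versus opt-out distinction concrete, here is a minimal illustrative sketch in Python. The category names and the `ConsentLedger` class are hypothetical, invented for illustration; they are not any real ISP system or regulatory API.

```python
# Illustrative sketch of the FCC rules' opt-in/opt-out distinction.
# Category names and the ConsentLedger class are hypothetical.

SENSITIVE = {
    "geo-location", "financial", "health", "children",
    "ssn", "web-browsing-history", "app-usage", "communications-content",
}

class ConsentLedger:
    def __init__(self):
        self.opt_in = set()   # categories the customer affirmatively allowed
        self.opt_out = set()  # categories the customer refused

    def may_share(self, category):
        if category in SENSITIVE:
            # Sensitive data requires affirmative opt-in consent.
            return category in self.opt_in
        # Non-sensitive data may be shared unless the customer opted out.
        return category not in self.opt_out

ledger = ConsentLedger()
print(ledger.may_share("geo-location"))  # False: no opt-in was given
ledger.opt_in.add("geo-location")
print(ledger.may_share("geo-location"))  # True
```

The key asymmetry the rules impose: sensitive categories default to "no" until the customer says yes, while other categories default to "yes" until the customer says no.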

The new FCC privacy rules are common sense stuff, but clearly these companies view common-sense methods as a burden. They want to use consumers' information however they please without limits, and without consideration for consumers' desire to control their own personal information. And, GOP representatives in Congress are happy to oblige these companies in this abuse.

Alarmingly, there is more. Lots more.

The GOP-led Congress also seeks to roll back consumer protections in banking and financial services. According to Consumer Reports, the issue arose earlier this month in:

"... a memo by House Financial Services Committee Chairman Rep. Jeb Hensarling (R-Tex), which was leaked to the press yesterday... The fate of the database was first mentioned [February 9th] when Bloomberg reported on a memo by Hensarling, an outspoken critic of the CFPB. The memo outlined a new version of the Financial CHOICE Act (Creating Hope and Opportunity for Investors, Consumers and Entrepreneurs), a bill originally advanced by the House Financial Services Committee in September. The new bill would lead to the repeal of the Consumer Complaint Database. It would also eliminate the CFPB's authority to punish unfair, deceptive or abusive practices among banks and other lenders, and it would allow the President to handpick—and fire—the bureau's director at will."

Banks have paid billions in fines to resolve a variety of allegations and complaints about wrongdoing. Consumers have often been abused by banks. You may remember the massive $185 million fine for the phony accounts scandal at Wells Fargo. Or, you may remember consumers forced to use prison-release cards. Or, maybe you experienced debt collection scams. And, this blog has covered extensively much of the great work by the CFPB which has helped consumers.

Do these two pieces of legislation bother you? I sincerely hope that they do. Contact your elected officials today and demand that they support the FCC privacy rules.


Espionage Groups Target Apple Devices With New Malware

ZDNet reported about a group performing multiple online espionage campaigns which targeted:

"... Mac users with malware designed to steal passwords, take screenshots, and steal backed-up iPhone data. This malware, discovered by cybersecurity researchers at Bitdefender, is thought to be linked to the APT28 group, which was accused of interfering in the United States presidential election. Bitdefender notes a number of similarities between the malware attacks against Macs -- which have been taking place since September 2016 -- and previous campaigns by the group, believed to be closely linked to Russian military intelligence and also dubbed Fancy Bear. Known as Xagent, the new form of malware targets victims running Mac OS X and installs a modular backdoor onto the system which enables the perpetrators to carry out cyberespionage activities... Xagent is also capable of stealing iPhone backups stored on a compromised Mac, an action which opens up even more capabilities for conducting cyberespionage, providing the perpetrators with access to additional files..."


Travelers Face Privacy Issues When Crossing Borders

If you travel for business, pleasure, or both then today's blog post will probably interest you. Wired Magazine reported:

"In the weeks since President Trump’s executive order ratcheted up the vetting of travelers from majority Muslim countries, or even people with Muslim-sounding names, passengers have experienced what appears from limited data to be a “spike” in cases of their devices being seized by customs officials. American Civil Liberties Union attorney Nathan Wessler says the group has heard scattered reports of customs agents demanding passwords to those devices, and even social media accounts."

Devices include smartphones, laptops, and tablets. Many consumers realize that relinquishing passwords to social networking sites (e.g., Facebook, Instagram, etc.) discloses sensitive information not just about themselves, but also about all of their friends, family, classmates, neighbors, and coworkers -- anyone they are connected with online. The "Bring Your Own Device" policies of many companies and employers mean that employees (and contractors) can use their personal devices in the workplace and/or connect them remotely to company networks. Those connected devices can easily divulge company trade secrets and other sensitive information when seized by Customs and Border Protection (CBP) agents for analysis and data collection.

Plus, professionals such as attorneys and consultants are required to protect their clients' sensitive information. These professionals, who also must travel, require data security and privacy for business.

Wired also reported:

"In fact, US Customs and Border Protection has long considered US borders and airports a kind of loophole in the Constitution’s Fourth Amendment protections, one that allows them wide latitude to detain travelers and search their devices. For years, they’ve used that opportunity to hold border-crossers on the slightest suspicion, and demand access to their computers and phones with little formal cause or oversight.

Even citizens are far from immune. CBP detainees from journalists to filmmakers to security researchers have all had their devices taken out of their hands by agents."

For travelers wanting privacy, what are the options? Remain at home? This may not be an option for workers who must travel for business. Leave your devices at home? Again, impractical for many. The Wired article provided several suggestions, including:

"If customs officials do take your devices, don’t make their intrusion easy. Encrypt your hard drive with tools like BitLocker, TrueCrypt, or Apple’s Filevault, and choose a strong passphrase. On your phone—preferably an iPhone, given Apple’s track record of foiling federal cracking—set a strong PIN and disable Siri from the lockscreen by switching off “Access When Locked” under the Siri menu in Settings.

Remember also to turn your devices off before entering customs: Hard drive encryption tools only offer full protection when a computer is fully powered down. If you use TouchID, your iPhone is safest when it’s turned off, too..."

What are the consequences when travelers refuse to disclose passwords or decrypt their devices? Ars Technica also explored the issues:

"... Ars spoke with several legal experts, and contacted CBP itself (which did not provide anything beyond previously-published policies). The short answer is: your device probably will be seized (or "detained" in CBP parlance), and you might be kept in physical detention—although no one seems to be sure exactly for how long.

An unnamed CBP spokesman told The New York Times on Tuesday that such electronic searches are extremely rare: he said that 4,444 cellphones and 320 other electronic devices were inspected in 2015, or 0.0012 percent of the 383 million arrivals (presuming that all those people had one device)... The most recent public document to date on this topic appears to be an August 2009 Department of Homeland Security paper entitled "Privacy Impact Assessment for the Border Searches of Electronic Devices." That document states that "For CBP, the detention of devices ordinarily should not exceed five (5) days, unless extenuating circumstances exist." The policy also states that CBP or Immigration and Customs Enforcement "may demand technical assistance, including translation or decryption," citing a federal law, 19 US Code Section 507."

The Electronic Frontier Foundation (EFF) collects stories from travelers who've been detained and had their devices seized. Clearly, we will hear a lot more in the future about these privacy issues. What are your opinions of this?


Survey: Internet of Evil Things Report

A recent survey of information technology (IT) professionals by Pwnie Express, an information security vendor, found that connected devices bring risks into corporate networks and IT professionals are not keeping up: 90 percent of the IT professionals surveyed view connected devices as a security threat to their corporate systems and networks, and 66 percent aren't sure how many connected devices are in their organizations.

These findings have huge implications as the installed base of connected devices (a/k/a the "Internet of Things," or IoT) takes off. Experts forecast 8.4 billion connected devices in use worldwide in 2017, up 31 percent from 2016, growing to 20.4 billion devices by 2020. Total spending on those devices will reach almost $2 trillion in 2017. The regions driving this growth are North America, Western Europe, and China, which already comprise 67 percent of the installed base.

Key results from the latest survey by Pwnie Express:

"One in five of the survey respondents (20%) said their IoT devices were hit with ransomware attacks last year. 16 percent of respondents say they experienced Man-in-the-middle attacks through IoT devices. Devices continue to lend themselves to problematic configurations. The default network from common routers “linksys” and “Netgear” were two of the top 10 most common “open default” wireless SSID’s (named networks), and the hotspot network built-in for the configuration and setup of HP printers - “hpsetup”- is #2."

An SSID, or Service Set Identifier, is the name a wireless network broadcasts. Manufacturers ship them with default names, which the bad guys often look for to find open, unprotected networks. While businesses purchase and deploy a variety of connected devices (e.g., smart meters, manufacturing field devices, process sensors for electrical generating plants, real-time location devices for healthcare) and some for "smart buildings" (e.g., LED lighting, HVAC sensors, security systems), other devices are brought into the workplace by workers.
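As a toy illustration of why default SSIDs matter, a script can flag networks still broadcasting factory names. The default-name list below is a small illustrative sample (drawing on the names the report mentions), not an exhaustive or authoritative list:

```python
# Flag wireless networks that still broadcast common factory-default SSIDs.
# DEFAULT_SSIDS is a small illustrative sample, not an exhaustive list.

DEFAULT_SSIDS = {"linksys", "netgear", "hpsetup", "dlink", "default"}

def flag_default_networks(observed_ssids):
    """Return observed SSIDs that match known factory-default names."""
    return sorted(s for s in observed_ssids if s.lower() in DEFAULT_SSIDS)

nearby = ["linksys", "SmithFamily5G", "hpsetup", "CoffeeShopGuest"]
print(flag_default_networks(nearby))  # ['hpsetup', 'linksys']
```

A network still named "linksys" or "hpsetup" is a strong hint that other factory defaults, including the admin password, were never changed either.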

Most companies have Bring Your Own Device (BYOD) policies allowing employees to bring and use personal devices (e.g., phones, tablets, smart watches, fitness bands) in the workplace. The risk for corporate IT professionals arises when employees, contractors, and consultants bring those personal devices into the workplace and connect them to corporate networks. A mobile device infected with malware from a wireless home network, or from a public hot-spot (e.g., airport, restaurant), can easily introduce that malware into office networks.

Consumers connect a wide variety of items to their wireless home networks: laptops, tablets, smartphones, printers, lighting and temperature controls, televisions, home security systems, fitness bands, smart watches, toys, smart wine bottles, and home appliances (e.g., refrigerators, hot water heaters, coffee makers, crock pots, etc.). Devices with poor security features don't allow operating system and security software updates, don't encrypt key information such as PIN numbers and passwords, and build the software into the firmware where it cannot be upgraded. Last month, the U.S. Federal Trade Commission (FTC) filed a lawsuit against a modem/router maker alleging poor security in its products.

Security experts advise consumers to perform several steps to protect their wireless home networks: change the SSID name, change all default passwords, enable strong encryption (WPA2, rather than the older, broken WEP), create a separate password for guests, and enable a firewall. While security experts have warned consumers for years, too many still don't heed the advice.

The survey respondents identified the top connected device threats:

"1. Misconfigured healthcare, security, and IoT devices will provide another route for ransomware and malware to cause harm and affect organizations.

2. Unresolved vulnerabilities or the misconfiguration of popular connected devices, spurred by the vulnerabilities being publicized by botnets, including Mirai and newer, “improved” versions, in the hands of rogue actors will compromise the security of organizations purchasing these devices.

3. Mobile phones will be the attack vector of the future, becoming an extra attack surface and another mode of rogue access points taking advantage of unencrypted Netgear, AT&T, and hpsetup wireless networks to set up man-in-the-middle attacks."

The survey included more than 800 IT security professionals in several industries: financial services, hospitality, retail, manufacturing, professional services, technology, healthcare, energy and more. Download the "2017 Internet of Evil Things Report" by Pwnie.


Facebook Doesn't Tell Users Everything it Really Knows About Them

[Editor's note: today's guest post is by reporters at ProPublica. I've posted it because, a) many consumers don't know how their personal information is bought, sold, and used by companies and social networking sites; b) the USA is a capitalist society and the sensitive personal data that describes consumers is consumers' personal property; c) a better appreciation of "a" and "b" will hopefully encourage more consumers to be less willing to trade their personal property for convenience, and to demand better privacy protections from products, services, software, apps, and devices; and d) when lobbyists and politicians act to erode consumers' property and privacy rights, hopefully more consumers will respond and act. Facebook is not the only social networking site that trades consumers' information. This news story is reprinted with permission.]

by Julia Angwin, Terry Parris Jr. and Surya Mattu, ProPublica

Facebook has long let users see all sorts of things the site knows about them, like whether they enjoy soccer, have recently moved, or like Melania Trump.

But the tech giant gives users little indication that it buys far more sensitive data about them, including their income, the types of restaurants they frequent and even how many credit cards are in their wallets.

Since September, ProPublica has been encouraging Facebook users to share the categories of interest that the site has assigned to them. Users showed us everything from "Pretending to Text in Awkward Situations" to "Breastfeeding in Public." In total, we collected more than 52,000 unique attributes that Facebook has used to classify users.

Facebook's site says it gets information about its users "from a few different sources."

What the page doesn't say is that those sources include detailed dossiers obtained from commercial data brokers about users' offline lives. Nor does Facebook show users any of the often remarkably detailed information it gets from those brokers.

"They are not being honest," said Jeffrey Chester, executive director of the Center for Digital Democracy. "Facebook is bundling a dozen different data companies to target an individual customer, and an individual should have access to that bundle as well."

When asked this week about the lack of disclosure, Facebook responded that it doesn't tell users about the third-party data because it's widely available and was not collected by Facebook.

"Our approach to controls for third-party categories is somewhat different than our approach for Facebook-specific categories," said Steve Satterfield, a Facebook manager of privacy and public policy. "This is because the data providers we work with generally make their categories available across many different ad platforms, not just on Facebook."

Satterfield said users who don't want that information to be available to Facebook should contact the data brokers directly. He said users can visit a page in Facebook's help center, which provides links to the opt-outs for six data brokers that sell personal data to Facebook.

Limiting commercial data brokers' distribution of your personal information is no simple matter. For instance, opting out of Oracle's Datalogix, which provides about 350 types of data to Facebook according to our analysis, requires "sending a written request, along with a copy of government-issued identification" in postal mail to Oracle's chief privacy officer.

Users can ask data brokers to show them the information stored about them. But that can also be complicated. One Facebook broker, Acxiom, requires people to send the last four digits of their social security number to obtain their data. Facebook changes its providers from time to time so members would have to regularly visit the help center page to protect their privacy.

One of us actually tried to do what Facebook suggests. While writing a book about privacy in 2013, reporter Julia Angwin tried to opt out from as many data brokers as she could. Of the 92 brokers she identified that accepted opt-outs, 65 of them required her to submit a form of identification such as a driver's license. In the end, she could not remove her data from the majority of providers.

ProPublica's experiment to gather Facebook's ad categories from readers was part of our Black Box series, which explores the power of algorithms in our lives. Facebook uses algorithms not only to determine the news and advertisements that it displays to users, but also to categorize its users in tens of thousands of micro-targetable groups.

Our crowd-sourced data showed us that Facebook's categories range from innocuous groupings of people who like southern food to sensitive categories such as "Ethnic Affinity," which categorizes people based on their affinity for African-Americans, Hispanics and other ethnic groups. Advertisers can target ads toward a group, or exclude ads from being shown to a particular group.

Last month, after ProPublica bought a Facebook ad in its housing categories that excluded African-Americans, Hispanics and Asian-Americans, the company said it would build an automated system to help it spot ads that illegally discriminate.

Facebook has been working with data brokers since 2012, when it signed a deal with Datalogix. This prompted Chester, the privacy advocate at the Center for Digital Democracy, to file a complaint with the Federal Trade Commission alleging that Facebook had violated a consent decree with the agency on privacy issues. The FTC has never publicly responded to that complaint, and Facebook subsequently signed deals with five other data brokers.

To find out exactly what type of data Facebook buys from brokers, we downloaded a list of 29,000 categories that the site provides to ad buyers. Nearly 600 of the categories were described as being provided by third-party data brokers. (Most categories were described as being generated by clicking pages or ads on Facebook.)

The categories from commercial data brokers were largely financial, such as "total liquid investible assets $1-$24,999," "People in households that have an estimated household income of between $100K and $125K," or even "Individuals that are frequent transactor at lower cost department or dollar stores."

We compared the data broker categories with the crowd-sourced list of what Facebook tells users about themselves. We found none of the data broker information among any of the tens of thousands of "interests" that Facebook showed users.

Our tool also allowed users to react to the categories they were placed in as being "wrong," "creepy" or "spot on." The category that received the most votes for "wrong" was "Farmville slots." The category that got the most votes for "creepy" was "Away from family." And the category that was rated most "spot on" was "NPR."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Health App Developer Settles With FTC For Deceptive Marketing Claims

The U.S. Federal Trade Commission (FTC) announced a settlement agreement with Aura Labs, Inc. regarding alleged deceptive claims about its product: the Instant Blood Pressure App. Aura sold the app from at least June 2014 to at least July 31, 2015 at the Apple App Store and at the Google Play marketplace for $3.99 (or $4.99). Sales of the app totaled about $600,000 during this period. Ryan Archdeacon, the Chief Executive Officer and President of Aura, was named as a co-defendant in the suit.

The FTC alleged that the defendants violated the FTC Act. The complaint alleged deceptive marketing claims by Aura about its blood pressure app:

"Although Defendants represent that the Instant Blood Pressure App measures blood pressure as accurately as a traditional blood pressure cuff and serves as a replacement for a traditional cuff, in fact, studies demonstrate clinically and statistically significant deviations between the App’s measurements and those from a traditional blood pressure cuff."

iMedicalApps reported on March 2, 2016:

"A study presented today at the American Heart Association EPI & Lifestyle (AHA EPI) meeting in Phoenix has shown the shocking inaccuracy of a popular medical app, Instant Blood Pressure... Back in 2014, we raised concerns about the Instant Blood Pressure medical app which claimed to measure blood pressure just by having users put their finger over their smartphone’s camera and microphone over their heart presumably to use something akin to a pulse wave velocity... Dr. Timothy Plante, a fellow in general internal medicine at Johns Hopkins, led the study in which a total of 85 participants were recruited to test the accuracy of the Instant Blood Pressure app... When looking at individuals with low blood pressure or high blood pressure, they found that the Instant Blood Pressure app gave falsely normal values. In other words, someone with high blood pressure who used the app would be falsely reassured their blood pressure was normal... the sensitivity for high blood pressure was an abysmal 20%. These results, while striking, should not be surprising. This medical app had no publicly available validation data, despite reassurance from the developer back in 2014 that such data was forthcoming. The use of things like pulse wave velocity as surrogates for blood pressure has been tried and is fraught with problems..."

The FTC complaint listed the problems with an online review posted in the Apple App Store:

"Defendant Ryan Archdeacon left the following review of the Instant Blood Pressure App in the Apple App Store: "Great start by ARCHIE1986 – Version – 1.0.1 – Jun 11, 2014. This app is a breakthrough for blood pressure monitoring. There are some kinks to work out and you do need to pay close attention to the directions in order to get a successful measurement but all-in-all it’s a breakthrough product. For those having connection problems, consider trying again. I have experienced a similar issue. It is also great that the developer is committed to continual improvements. This is a great start!!!" That the review was left by the Chief Executive Officer and President of Aura was not disclosed to consumers and would materially affect the weight and credibility consumers assigned to the endorsement."

The complaint also cited problems with endorsements posted at Aura's web site:

"At times material to this Complaint, the What People Think portion of Defendants’ website contained three endorsements, including the following endorsement from relatives of Aura’s Chairman of the Board and co-founder Aaron Giroux: "This is such a smart idea that will benefit many of us in monitoring our health in an easy and convenient way." That the endorsement was left by relatives of Aura’s Chairman of the Board and co-founder Aaron Giroux was not disclosed to consumers and would materially affect the weight and credibility consumers assigned to the endorsement."

Terms of the settlement prohibit the defendants from making such unsubstantiated claims in the future, and require them to refund money to affected customers, reimburse plaintiffs for the costs of this lawsuit, and satisfy additional unspecified items. The FTC announcement also stated that the court order imposed:

"... a judgment of $595,945.27, which is suspended based on the defendants’ inability to pay. The full amount will become due, however, if they are later found to have misrepresented their financial condition."

Copies of the complaint are available at the FTC site and here (Adobe PDF). Kudos to the FTC for its enforcement action. Product claims and endorsements should be truthful and accurate. And consumers still need to do research before purchasing. Just because there's an app for it doesn't mean the results promised are guaranteed.

Got an unresolved problem with a product, service, or app? Consumers can file a complaint online with the FTC. What are your opinions of the Aura-FTC settlement? Of claims by app developers?


Big Data Brokers: Failing With Privacy

You may not know that hedge funds, in both the United Kingdom and in the United States, buy and sell a variety of information from data brokers: mobile app purchases, credit card purchases, posts at social networking sites, and lots more. You can bet that a lot of that mobile information includes geo-location data. The problem: consumers' privacy isn't protected consistently.

The industry claims the information sold is anonymous (i.e., it doesn't identify specific persons), but researchers have found it easy to de-anonymize the information. The Financial Times reported:

"The “alternative data” industry, which sells information such as app downloads and credit card purchases to investment groups, is failing to adequately erase personal details before sharing the material... big data is seen as an increasingly attractive source of information for asset managers seeking a vital investment edge, with data providers selling everything from social media chatter and emailed receipts to federal lobbying data and even satellite images from space..."

One part of the privacy problem:

“The vendors claim to strip out all the personal information, but we occasionally find phone numbers, zip codes and so on,” said Matthew Granade, chief market intelligence officer at Steven Cohen’s Point72. “It’s a big enough deal that we have a couple of full-time tech people wash the data ourselves.” The head of another major hedge fund said that even when personal information had been scrubbed from a data set, it was far too easy to restore..."
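Leftover fields like phone numbers and zip codes matter because they enable linkage attacks: joining an "anonymized" dataset against a public one on shared quasi-identifiers re-identifies people. Here is a minimal sketch; all records, names, and field layouts below are made up for illustration:

```python
# Minimal linkage-attack sketch: joining an "anonymized" purchase feed
# against a public directory on leftover quasi-identifiers (zip + phone).
# All records here are fabricated for illustration.

anonymized_purchases = [
    {"zip": "02139", "phone": "555-0142", "merchant": "Acme Pharmacy"},
    {"zip": "10001", "phone": "555-0199", "merchant": "Downtown Deli"},
]

public_directory = [
    {"name": "J. Doe", "zip": "02139", "phone": "555-0142"},
    {"name": "R. Roe", "zip": "94105", "phone": "555-0007"},
]

def reidentify(purchases, directory):
    """Match 'anonymous' rows to named people via shared quasi-identifiers."""
    index = {(p["zip"], p["phone"]): p["name"] for p in directory}
    return [
        (index[(row["zip"], row["phone"])], row["merchant"])
        for row in purchases
        if (row["zip"], row["phone"]) in index
    ]

print(reidentify(anonymized_purchases, public_directory))
# [('J. Doe', 'Acme Pharmacy')]
```

The point is that "we stripped the names" offers little protection: any combination of residual fields that is unique to a person works as well as a name once an attacker has a second dataset to join against.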

A second part of the privacy problem:

“... there is no overarching US privacy law to protect consumers, with standards set individually by different states, industries and even companies, according to Albert Gidari, director of privacy at the Stanford Center for Internet and Society..."

The third part of the privacy problem: consumers are too willing to trade personal information for convenience.


How To Spot Fake News And Not Get Duped

You may have heard about the "pizzagate" conspiracy -- fake news about a supposed child-sex ring operating from a pizzeria in Washington, DC. A heavily armed citizen drove from North Carolina to the pizzeria to investigate the bogus child-sex ring supposedly run by Presidential candidate Hillary Clinton. The reality: there was no sex ring. That citizen had been duped by fake news. Shots were fired, and thankfully nobody was hurt.

CBS News reported that the pizzagate conspiracy had been promoted by Michael G. Flynn, son of retired General Michael T. Flynn, Donald Trump's pick for national security adviser. As a result, the younger Flynn resigned Tuesday from President-Elect Trump's transition team.

I use the phrase "fake news" for several types of misleading content: propaganda, unproven or fact-free conspiracy theories, disinformation, and clickbait. The pizzagate incident highlighted two issues: a) fake news has consequences, and b) many people don't know how to distinguish real news from fake news. So, while political operatives reportedly have used a combination of fake news, ads, and social media to both encourage supporters to vote and discourage opponents from voting, there clearly are other real-life consequences.

To help people spot fake news, NPR reported:

"Stopping the proliferation of fake news isn't just the responsibility of the platforms used to spread it. Those who consume news also need to find ways of determining if what they're reading is true. We offer several tips below. The idea is that people should have a fundamental sense of media literacy. And based on a study recently released by Stanford University researchers, many people don't."

The report is enlightening. In the "Evaluating Information: The Cornerstone of Civic Online Reasoning" report, researchers at Stanford University tested 7,804 students in 12 states between January 2015 and June 2016. They found:

"... at each level—middle school, high school, and college—these variations paled in comparison to a stunning and dismaying consistency. Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak. Our “digital natives” may be able to flit between Facebook and Twitter while simultaneously uploading a selfie to Instagram and texting a friend. But when it comes to evaluating information that flows through social media channels, they are easily duped... We would hope that middle school students could distinguish an ad from a news story. By high school, we would hope that students reading about gun laws would notice that a chart came from a gun owners’ political action committee. And, in 2016, we would hope college students, who spend hours each day online, would look beyond a .org URL and ask who’s behind a site that presents only one side of a contentious issue. But in every case and at every level, we were taken aback by students’ lack of preparation... Many [people] assume that because young people are fluent in social media they are equally savvy about what they find there. Our work shows the opposite."

This is important for both individuals and the future of the nation because:

"For every challenge facing this nation, there are scores of websites pretending to be something they are not. Ordinary people once relied on publishers, editors, and subject matter experts to vet the information they consumed. But on the unregulated Internet, all bets are off... Never have we had so much information at our fingertips. Whether this bounty will make us smarter and better informed or more ignorant and narrow-minded will depend on our awareness of this problem and our educational response to it. At present, we worry that democracy is threatened by the ease at which disinformation about civic issues is allowed to spread and flourish."

While the study focused upon students, older persons have been duped, too. The suspect in the pizzeria incident was 28 years old. The Stanford report focused upon what teachers and educators can do to better prepare students. According to the researchers, additional solutions are forthcoming.

What can you do to spot fake news? Don't wait for sites and/or social media to do it for you. Become a smarter consumer. The NPR report suggested:

  1. Pay attention to the domain and URL
  2. Read the "About Us" section of the site
  3. Look at the quotes in a story
  4. Look at who said the quotes

All of the suggestions require readers to take the time to understand the website, publication, and/or publisher. A little skepticism is healthy. Also verify that the persons quoted are who the article claims they are, and that any images used actually relate to the event.
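The first suggestion, checking the domain and URL, can be partially automated. Here is a minimal sketch; the suspect-domain list and the `.com.co` heuristic are illustrative assumptions, not an authoritative blocklist:

```python
from urllib.parse import urlparse

# Hypothetical examples of look-alike domains that mimic real news outlets.
SUSPECT_DOMAINS = {"abcnews.com.co", "usatoday.com.co"}

def check_domain(url):
    """Return the hostname of an article URL and a rough 'suspicious' flag."""
    host = urlparse(url).hostname or ""
    suspicious = host in SUSPECT_DOMAINS or host.endswith(".com.co")
    return host, suspicious

print(check_domain("http://abcnews.com.co/some-story"))  # flags the look-alike domain
print(check_domain("http://www.reuters.com/article/x"))  # a news wire domain passes
```

A heuristic like this only narrows the field; readers still have to do the "About Us" and quote-checking steps by hand.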

We all have to be smarter consumers of news in order to stay informed and meet our civic duties, which include voting. Nobody wants to vote for politicians who don't represent their interests because they've been duped. To the above list, I would add:

  • Read news wires. These sites carry the raw, unfiltered news about who, when, where, and what happened. Some suggested sources: Associated Press (AP), Reuters, and United Press International (UPI)
  • Learn to recognize advertisements
  • Learn the differences between different types of content: news, opinion, analysis, satire/humor, and entertainment. Reputable sites will label them to help readers.

If you don't know the differences and can't spot each type, then you are likely to get duped.


Millions Of Android Smartphones And Apps Infected With New Malware, And Accounts Breached

Security researchers at Check Point Software Technologies have identified malware infecting an average of 13,000 Android phones daily. More than 1 million Android phones have already been infected. Researchers named the new malware "Gooligan." Check Point explained in a blog post:

"Our research exposes how the malware roots infected devices and steals authentication tokens that can be used to access data from Google Play, Gmail, Google Photos, Google Docs, G Suite, Google Drive, and more. Gooligan is a new variant of the Android malware campaign found by our researchers in the SnapPea app last year... Gooligan potentially affects devices on Android 4 (Jelly Bean, KitKat) and 5 (Lollipop), which is over 74% of in-market devices today. About 57% of these devices are located in Asia and about 9% are in Europe... We found traces of the Gooligan malware code in dozens of legitimate-looking apps on third-party Android app stores. These stores are an attractive alternative to Google Play because many of their apps are free, or offer free versions of paid apps. However, the security of these stores and the apps they sell aren’t always verified... Logs collected by Check Point researchers show that every day Gooligan installs at least 30,000 apps fraudulently on breached devices or over 2 million apps since the campaign began..."

[Chart: Check Point chart about Gooligan malware]
This Telegraph UK news story listed 24 device manufacturers affected: Archos, Broadcom, Bullitt, CloudProject, Gigaset, HTC, Huaqin, Huawei, Intel, Lenovo, Pantech, Positivio, Samsung, Unitech, and others. The Check Point announcement listed more than 80 fake mobile apps infected with the Gooligan malware: Billiards, Daily Racing, Fingerprint unlock, Hip Good, Hot Photo, Memory Booster, Multifunction Flashlight, Music Cloud, Perfect Cleaner, PornClub, Puzzle Bubble-Pet Paradise, Sex Photo, Slots Mania, StopWatch, Touch Beauty, WiFi Enhancer, WiFi Master, and many more.

Check Point is working closely with the security team at Google. Adrian Ludwig, Google’s director of Android security, issued a statement:

"Since 2014, the Android security team has been tracking a family of malware called 'Ghost Push,' a vast collection of 'Potentially Harmful Apps' (PHAs) that generally fall into the category of 'hostile downloaders.' These apps are most often downloaded outside of Google Play and after they are installed, Ghost Push apps try to download other apps. For over two years, we’ve used Verify Apps to notify users before they install one of these PHAs and let them know if they’ve been affected by this family of malware... Several Ghost Push variants use publicly known vulnerabilities that are unpatched on older devices to gain privileges that allow them to install applications without user consent. In the last few weeks, we've worked closely with Check Point... to investigate and protect users from one of these variants. Nicknamed ‘Gooligan’, this variant used Google credentials on older versions of Android to generate fraudulent installs of other apps... Because Ghost Push only uses publicly known vulnerabilities, devices with up-to-date security patches have not been affected... We’ve taken multiple steps to protect devices and user accounts, and to disrupt the behavior of the malware as well. Verified Boot [https://source.android.com/security/verifiedboot/], which is enabled on newer devices including those that are compatible with Android 6.0, prevents modification of the system partition. Adopted from ChromeOS, Verified Boot makes it easy to remove Ghost Push... We’ve removed apps associated with the Ghost Push family from Google Play. We also removed apps that benefited from installs delivered by Ghost Push to reduce the incentive for this type of abuse in the future."

[Chart: How the Gooligan malware works, by Check Point]
Android device users can also have their devices infected by phishing scams, where criminals send text and email messages containing links to infected mobile apps. News about this latest malware comes at a time when some consumers are already worried about the security of Android devices.

Recently, there were reports of surveillance malware installed in the firmware of some Android devices, and the Quadrooter security flaw affecting 900 million Android phones and tablets. Last month, Google quietly dropped its ban on personally identifiable web tracking.

News about this latest malware also highlights the problems with Google's security model. We know from prior reports that manufacturers and wireless carriers don't provide OS updates for all Android phones. Hopefully, last month's introduction of the Pixel phone will address those problems; a stronger announcement would also have highlighted security improvements.

For the Gooligan malware, Check Point has developed a website where consumers can determine if their Google accounts have already been compromised: https://gooligan.checkpoint.com/. Check Point advised consumers with compromised accounts:

"1. A clean installation of an operating system on your mobile device is required (a process called “flashing”). As this is a complex process, we recommend powering off your device and approaching a certified technician, or your mobile service provider, to request that your device be “re-flashed.”

2. Change your Google account passwords immediately after this process."
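For step 2, a strong replacement password is better generated locally than invented by hand. A minimal sketch using only Python's standard library:

```python
import secrets
import string

def make_password(length=16):
    """Generate a random password from letters, digits, and punctuation,
    using the cryptographically secure 'secrets' module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # e.g. a 16-character random string
```

A password manager accomplishes the same thing and also stores the result safely.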

A word to the wise: a) shop for apps only at trustworthy, reputable sites; b) download and install all operating-system security patches to protect your devices and your information; and c) avoid buying cheap phones that lack operating system software updates and security patches.


Can Apple Move iPhone Production To The United States?

President-elect Donald Trump and his incoming administration have promised to "make America great again." That promise included a key policy position: move manufacturing -- and its jobs -- back to the United States; in particular, move production of Apple iPhones to the USA:

"we have to bring Apple — and other companies like Apple — back to the United States. We have to do it. And that’s one of my real dreams for the country, to get … them back. We have a great capacity in this country."

Well, can it be done? And if so, what might the consequences be?

Nikkei Asia Review reported:

"Key Apple assembler Hon Hai Precision Industry, also known as Foxconn Technology Group, has been studying the possibility of moving iPhone production to the United States... Apple asked both Foxconn and Pegatron, the two iPhone assemblers, in June to look into making iPhones in the United States..."

Experts warn that moving production is complex and difficult. Not only must assembly operations be relocated, but new facilities must be located and built, nearby suppliers and transport services found or moved, and contracts negotiated. During the globalization trend of the last 35 years, many manufacturing facilities in the USA were closed, destroyed, or replaced with other businesses, and the remaining facilities may be technologically obsolete. After solving these issues, production workers must still be hired.

With any major change, there often are unintended consequences. A possible consequence:

"Making iPhones in the U.S. means the cost will more than double... According to research company IHS Markit, it costs about $225 for Apple to make an iPhone 7 with a 32GB memory, while the unsubsidized price for such a handset is $649..."

Prices for unlocked iPhone 7 phones with 32 GB on eBay range from $700 to $1,000. The 128 GB and 256 GB versions cost even more. Would consumers be willing to pay higher prices -- say, 50 percent more, or even double?
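The pricing implication follows directly from the IHS Markit figures. A quick back-of-the-envelope calculation, assuming (a simplification on my part) that Apple would either keep its current dollar margin or its current markup ratio on a doubled manufacturing cost:

```python
cost = 225   # IHS Markit estimate to build an iPhone 7 (32 GB), in USD
price = 649  # unsubsidized retail price, in USD
markup = price / cost  # roughly 2.88x

us_cost = cost * 2  # "more than double," per the report

# Same absolute margin: the price rises only by the cost increase.
same_margin_price = price + (us_cost - cost)  # 874
# Same markup ratio: the retail price roughly doubles.
same_markup_price = round(us_cost * markup)   # 1298
print(same_margin_price, same_markup_price)
```

So even the gentler scenario adds roughly $225 to the sticker price, while preserving the markup ratio pushes a 32 GB iPhone toward $1,300.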


Phone Calls, Apple iCloud, Cloud Services, And Your Privacy

A security firm has found a hidden feature that threatens the privacy of Apple iPhone and iCloud users. Forbes magazine reported:

"Whilst it was well-known that iCloud backups would store call logs, contacts and plenty of other valuable data, users should be concerned to learn that their communications records are consistently being sent to Apple servers without explicit permission, said Elcomsoft CEO Vladimir Katalov. Even if those backups are disabled, he added, the call logs continue making their way to the iCloud, Katalov said... All FaceTime calls are logged in the iCloud too, whilst as of iOS 10 incoming missed calls from apps like WhatsApp and Skype are uploaded..."

Reportedly, the feature is automatic and the only option for users wanting privacy is to not use Apple iCloud services. That's not user-friendly.

Should you switch from Apple iCloud to another commercial cloud service? Privacy risks are not unique to Apple iCloud. Duane Morris LLP explained the risks of using cloud services such as Dropbox, SecuriSync, Citrix ShareFile, and Rackspace:

"Users of electronic file sharing and storage service providers are vulnerable to hacking... Dropbox as just one example: If a hacker was to get their hands on your encryption key, which is possible since Dropbox stores the keys for all of its users, hackers can then steal your personal information stored on Dropbox. Just recently, Dropbox reported that more than 68 million users’ email addresses and passwords were hacked and leaked onto the Internet... potentially even more concerning is the fact that because these service providers own their own servers, they also own any information residing on them. Hence, they can legally access any data on their servers at any time. Additionally, many of these companies house their servers outside of the United States, which means the use, operation, content and security of such servers may not be protected by U.S. law. Furthermore, consider the policies regarding the sharing of your information with third parties. Among others, Dropbox has said that if subpoenaed, it will voluntarily disclose your information to a third party, such as the Internal Revenue Service."

Regular readers of this blog know what that means. Many government entities besides the IRS, such as law enforcement and intelligence agencies, issue subpoenas.

This highlights the double-edged sword of syncing and file-sharing across multiple devices (e.g., phone, laptop, desktop, tablet). Sure, it is a huge benefit to have all of your files, music, videos, contacts, and data easily and conveniently available regardless of which device you use. But along with that benefit come privacy and security risks: data stored in cloud services is vulnerable to hacking and subject to government warrants, subpoenas, and court actions. As Duane Morris LLP emphasized, that holds whether your data is encrypted or not -- when the provider holds the keys.
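One mitigation for the provider-held-key problem is to encrypt files locally, with a key the service never sees, before uploading; the provider then stores only ciphertext it cannot read or hand over in usable form. The toy sketch below illustrates the idea with a SHA-256-derived keystream. It is NOT real cryptography -- in practice you would use a vetted cipher (e.g., AES-GCM from a maintained crypto library):

```python
import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream. Illustration only --
    do NOT use for real secrets; use a vetted AEAD cipher instead."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

key = b"user-held key, never sent to the cloud service"
plaintext = b"tax-return.pdf contents"
ciphertext = toy_stream_cipher(key, plaintext)  # upload only this
restored = toy_stream_cipher(key, ciphertext)   # XOR is symmetric: same call decrypts
print(restored == plaintext)  # True
```

The point is architectural, not cryptographic: if only you hold the key, a breach or subpoena at the provider yields ciphertext, not your files.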

Also, Forbes magazine reported:

"Katalov believes automated iCloud storage of up-to-date logs would be beneficial for law enforcement wanting to get access to valuable iPhone data. And, he claimed, Apple hadn’t properly disclosed just what data was being stored in the iCloud and, therefore, what information law enforcement could demand."

Well, law enforcement, intelligence agencies, and cyber-criminals now know what information to demand.