123 posts categorized "Surveillance"

Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the use of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about retail stores tracking shoppers via WiFi connections. And in 2018, smart mannequins are also being used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows the users it identifies to accept or decline a name label for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching even for users who declined or disabled the display of name labels in photos and videos. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
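To make the matching step concrete, here is a rough sketch of how that kind of comparison generally works: a face is reduced to a numeric "code" (an embedding), and an unknown face is compared against a database of named codes. This is purely illustrative; the similarity measure, threshold, and record shape below are my assumptions, not Facebook's actual system.

```typescript
// Illustrative only: face "codes" modeled as numeric embedding vectors.
// The threshold and similarity measure are assumptions, not Facebook's system.
type FaceRecord = { name: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Compare an unknown face code against a database of named codes.
// Returns the best-matching name, or null when nothing clears the threshold --
// the case where, per the quote above, the data is supposedly deleted.
function matchFace(unknown: number[], database: FaceRecord[], threshold = 0.8): string | null {
  let bestName: string | null = null;
  let bestScore = -Infinity;
  for (const record of database) {
    const score = cosineSimilarity(unknown, record.embedding);
    if (score > bestScore) {
      bestScore = score;
      bestName = record.name;
    }
  }
  return bestScore >= threshold ? bestName : null;
}
```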

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in 2017 Apple launched its iPhone X with the Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a brick-and-mortar store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent by shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


The Wireless Carrier With At Least 8 'Hidden Spy Hubs' Helping The NSA

During the late 1970s and 1980s, AT&T conducted an iconic “reach out and touch someone” advertising campaign to encourage consumers to call their friends, family, and classmates. Back then, it was old school -- landlines. The campaign ranked #80 on Ad Age's list of the 100 top ad campaigns from the last century.

Now, we learn a little more about how extensive the surveillance activities at AT&T facilities are in helping law enforcement reach out and touch persons. Yesterday, the Intercept reported:

"The NSA considers AT&T to be one of its most trusted partners and has lauded the company’s “extreme willingness to help.” It is a collaboration that dates back decades. Little known, however, is that its scope is not restricted to AT&T’s customers. According to the NSA’s documents, it values AT&T not only because it "has access to information that transits the nation," but also because it maintains unique relationships with other phone and internet providers. The NSA exploits these relationships for surveillance purposes, commandeering AT&T’s massive infrastructure and using it as a platform to covertly tap into communications processed by other companies.”

The new report describes in detail the activities at eight AT&T facilities in major cities across the United States. Consumers who use other branded wireless service providers are also affected:

"Because of AT&T’s position as one of the U.S.’s leading telecommunications companies, it has a large network that is frequently used by other providers to transport their customers’ data. Companies that “peer” with AT&T include the American telecommunications giants Sprint, Cogent Communications, and Level 3, as well as foreign companies such as Sweden’s Telia, India’s Tata Communications, Italy’s Telecom Italia, and Germany’s Deutsche Telekom."

It was five years ago this month that the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, the Guardian UK newspaper reported about a court order allowing the NSA to spy on U.S. citizens. The revelations continued, and by 2016 we'd learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences from broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, data collection programs that included ordinary (e.g., innocent) citizens besides legal targets, and how most hi-tech and telecommunications companies assisted the government with its spy programs. We knew before that AT&T was probably the best collaborator, and now we know more about why.

Content vacuumed up during the surveillance includes consumers' phone calls, text messages, e-mail messages, and internet activity. The latest report by the Intercept also described:

"The messages that the NSA had unlawfully collected were swept up using a method of surveillance known as “upstream,” which the agency still deploys for other surveillance programs authorized under both Section 702 of FISA and Executive Order 12333. The upstream method involves tapping into communications as they are passing across internet networks – precisely the kind of electronic eavesdropping that appears to have taken place at the eight locations identified by The Intercept."

Former NSA contractor Edward Snowden also commented about the report on Twitter.


Apple To Close Security Hole Law Enforcement Frequently Used To Access iPhones

You may remember. In 2016, the U.S. Department of Justice attempted to force Apple Computer to build a back door into its devices so law enforcement could access suspects' iPhones. After Apple refused, the government found a vendor to do the hacking for them. In 2017, multiple espionage campaigns targeted Apple devices with new malware.

Now, we learn a future Apple operating system (iOS) software update will close a security hole frequently used by law enforcement. Reuters reported that the future iOS update will include default settings to terminate communications through the USB port when the device hasn't been unlocked within the past hour. Reportedly, that change may reduce access by 90 percent.

Kudos to the executives at Apple for keeping customers' privacy foremost.


Google To Exit Weaponized Drone Contract And Pursue Other Defense Projects

Last month, protests by current and former Google employees, plus academic researchers, cited ethical and transparency concerns about the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people. Gizmodo reported that Google plans not to renew its contract for Project Maven:

"Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract... The company plans to unveil new ethical principles about its use of AI this week... Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft... Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion."


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."
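To picture the hover-swap the EFF describes, here is a simplified sketch of how such a script could behave in the browser. The "data-dest" attribute name and the tracking URLs are hypothetical; this is an illustration of the technique, not Facebook's actual code.

```typescript
// A simplified sketch of the hover-swap behavior the EFF describes.
// The "data-dest" attribute and the shim/tracking URLs are hypothetical.
document.querySelectorAll<HTMLAnchorElement>("a[data-dest]").forEach((link) => {
  const shimUrl = link.href;                     // initially points at l.example.com/l.php?u=...
  const realUrl = link.dataset.dest ?? shimUrl;  // the destination the user actually wants

  // On hover, show the innocuous-looking real destination in the status bar.
  link.addEventListener("mouseover", () => {
    link.href = realUrl;
  });

  // On click, quietly fire a background request to the tracking endpoint anyway.
  link.addEventListener("click", () => {
    navigator.sendBeacon(shimUrl);               // fire-and-forget; navigation proceeds to realUrl
  });
});
```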

Lovely. And, Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns about the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Maven, a weaponized drone program to identify people. Reportedly, these were the first known mass resignations at the company.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor want -- that their personal e-mail, photos, calendars, videos, social media activity, map usage, and more would be used for automated military applications. What are your opinions?


San Diego Police Widely Share Data From License Plate Database

Many police departments use automated license plate reader (ALPR or LPR) technology to monitor the movements of drivers and their vehicles. The surveillance has several implications beyond the extensive data collection.

The Voice of San Diego reported that the San Diego Police Department shares its database of ALPR data with many other agencies:

"SDPD shares that database with the San Diego sector of Border Patrol – and with another 600 agencies across the country, including other agencies within the Department of Homeland Security. The nationwide database is enabled by Vigilant Solutions, a private company that provides data management and software services to agencies across the country for ALPR systems... A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data..."

San Diego's ALPR database is fed by a network of cameras which record images plus the date, time and GPS location of the cars that pass by them. So, the associated metadata for each database record probably includes the license plate number, license plate state, vehicle owner, GPS location, travel direction, date and time, road/street/highway name or number, and the LPR device ID number.
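For illustration, a single capture in such a database might look roughly like the record below. This is a hypothetical shape based on the fields listed above; the field names are my assumptions, and the actual Vigilant schema is not described in the public documents.

```typescript
// Hypothetical shape of one ALPR capture, based on the fields listed above.
// Field names are assumptions, not the vendor's actual schema. Owner identity
// is typically joined later from DMV records rather than captured by the camera.
interface AlprCapture {
  plateNumber: string;      // e.g. "7ABC123"
  plateState: string;       // e.g. "CA"
  capturedAt: string;       // ISO 8601 date and time of the read
  latitude: number;         // GPS position of the reader at capture time
  longitude: number;
  headingDegrees?: number;  // travel direction, if recorded
  roadName?: string;        // street/highway derived from the GPS fix
  readerId: string;         // which LPR device made the capture
}

const exampleCapture: AlprCapture = {
  plateNumber: "7ABC123",
  plateState: "CA",
  capturedAt: "2018-05-14T09:32:05Z",
  latitude: 32.7157,
  longitude: -117.1611,
  headingDegrees: 270,
  roadName: "Harbor Dr",
  readerId: "unit-42-front",
};
```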

Information about San Diego's ALPR activities became public after a data request from the Electronic Frontier Foundation (EFF), a digital privacy organization. ALPRs are a popular tool, and were used in about 38 states in 2014. Typically, the surveillance collects data about both criminals and innocent drivers.

There are several valid applications: find stolen vehicles, find stolen license plates, find wanted vehicles (e.g., abductions), execute search warrants, and find wanted parolees. Some ALPR devices are stationary (e.g., mounted on street lights), while others are mounted on (marked and unmarked) patrol cars. Both deployments scan moving vehicles, while the latter also facilitates the scanning of parked vehicles.

Earlier this year, the EFF issued hundreds of similar requests across the country to learn how law enforcement currently uses ALPR technology. The ALPR training manual for the Elk Grove, Illinois PD listed the data archival policies for several states: New Jersey - 5 years, Vermont - 18 months, Utah - 9 months,  Minnesota - 48 hours, Arkansas - 150 days, New Hampshire - not allowed, and California - no set time. The document also stated that more than "50 million captures" are added each month to the Vigilant database. And, the Elk Grove PD seems to broadly share its ALPR data with other police departments and agencies.

The SDPD website includes a "License Plate Recognition: Procedures" document (Adobe PDF), dated May 2015, which describes its ALPR usage and policies:

"The legitimate law enforcement purposes of LPR systems include the following: 1) Locating stolen, wanted, or subject of investigation vehicles; 2) Locating witnesses and victims of a violent crime; 3) Locating missing or abducted children and at risk individuals.

LPR Strategies: 1) LPR equipped vehicles should be deployed as frequently as possible to maximize the utilization of the system; 2) Regular operation of LPR should be considered as a force multiplying extension of an officer’s regular patrol efforts to observe and detect vehicles of interest and specific wanted vehicles; 3) LPR may be legitimately used to collect data that is within public view, but should not be used to gather intelligence of First Amendment activities; 4) Reasonable suspicion or probable cause is not required for the operation of LPR equipment; 5) Use of LPR equipped cars to conduct license plate canvasses and grid searches is encouraged, particularly for major crimes or incidents as well as areas that are experiencing any type of crime series... LPR data will be retained for a period of one year from the time the LPR record was captured by the LPR device..."

The document does not describe its data security methods to protect this sensitive information from breaches, hacks, and unauthorized access. Perhaps most importantly, the 2015 SDPD document describes the data sharing policy:

"Law enforcement officers shall not share LPR data with commercial or private entities or individuals. However, law enforcement officers may disseminate LPR data to government entities with an authorized law enforcement or public safety purpose for access to such data."

However, the Voice of San Diego reported:

"A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data... SDPD’s sharing doesn’t stop at Border Patrol. The list of agencies with near immediate access to the travel habits of San Diegans includes law enforcement partners you might expect, like the Carlsbad Police Department – with which SDPD has for years shared license plate reader data, through a countywide arrangement overseen by SANDAG – but also obscure agencies like the police department in Meigs, Georgia, population 1,038, and a private group that is not itself a police department, the Missouri Police Chiefs Association..."

So, the accuracy of the 2015 document is questionable, if it isn't already obsolete. Moreover, what's really critical are the data retention and sharing policies of Vigilant and other agencies.


Oakland Law Mandates 'Technology Impact Reports' By Local Government Agencies Before Purchasing Surveillance Equipment

Popular tools used by law enforcement to track the movements of persons include stingrays (fake cellular phone towers) and automated license plate readers (ALPRs). Historically, the technologies have often been deployed without notice to track both the bad guys (e.g., criminals and suspects) and innocent citizens.

To better balance the privacy needs of citizens versus the surveillance needs of law enforcement, some areas are implementing new laws. The East Bay Times reported about a new law in Oakland:

"... introduced at Tuesday’s city council meeting, creates a public approval process for surveillance technologies used by the city. The rules also lay a groundwork for the City Council to decide whether the benefits of using the technology outweigh the cost to people’s privacy. Berkeley and Davis have passed similar ordinances this year.

However, Oakland’s ordinance is unlike any other in the nation in that it requires any city department that wants to purchase or use the surveillance technology to submit a "technology impact report" to the city’s Privacy Advisory Commission, creating a “standardized public format” for technologies to be evaluated and approved... city departments must also submit a “surveillance use policy” to the Privacy Advisory Commission for consideration. The approved policy must be adopted by the City Council before the equipment is to be used..."

Reportedly, the city council will review the ordinance a second time before final passage.

The Northern California chapter of the American Civil Liberties Union (ACLU) discussed the problem, the need for transparency, and legislative actions:

"Public safety in the digital era must include transparency and accountability... the ACLU of California and a diverse coalition of civil rights and civil liberties groups support SB 1186, a bill that helps restores power at the local level and makes sure local voices are heard... the use of surveillance technology harms all Californians and disparately harms people of color, immigrants, and political activists... The Oakland Police Department concentrated their use of license plate readers in low income and minority neighborhoods... Across the state, residents are fighting to take back ownership of their neighborhoods... Earlier this year, Alameda, Culver City, and San Pablo rejected license plate reader proposals after hearing about the Immigration & Customs Enforcement (ICE) data [sharing] deal. Communities are enacting ordinances that require transparency, oversight, and accountability for all surveillance technologies. In 2016, Santa Clara County, California passed a groundbreaking ordinance that has been used to scrutinize multiple surveillance technologies in the past year... SB 1186 helps enhance public safety by safeguarding local power and ensuring transparency, accountability... SB 1186 covers the broad array of surveillance technologies used by police, including drones, social media surveillance software, and automated license plate readers. The bill also anticipates – and covers – AI-powered predictive policing systems on the rise today... Without oversight, the sensitive information collected by local governments about our private lives feeds databases that are ripe for abuse by the federal government. This is not a hypothetical threat – earlier this year, ICE announced it had obtained access to a nationwide database of location information collected using license plate readers – potentially sweeping in the 100+ California communities that use this technology. Many residents may not be aware their localities also share their information with fusion centers, federal-state intelligence warehouses that collect and disseminate surveillance data from all levels of government.

Statewide legislation can build on the nationwide Community Control Over Police Surveillance (CCOPS) movement, a reform effort spearheaded by 17 organizations, including the ACLU, that puts local residents and elected officials in charge of decisions about surveillance technology. If passed in its current form, SB 1186 would help protect Californians from intrusive, discriminatory, and unaccountable deployment of law enforcement surveillance technology."

Is there similar legislation in your state?


How Facebook Tracks Its Users, And Non-Users, Around the Internet

Many Facebook users wrongly believe that the social networking service doesn't track them around the internet when they aren't signed in. Also, many non-users of Facebook wrongly believe that they are not tracked.

Earlier this month, Consumer Reports explained the tracking:

"As you travel through the web, you’re likely to encounter Facebook Like or Share buttons, which the company calls Social Plugins, on all sorts of pages, from news outlets to shopping sites. Click on a Like button and you can see the number on the page’s counter increase by one; click on a Share button and a box opens up to let you post a link to your Facebook account.

But that’s just what’s happening on the surface. "If those buttons are on the page, regardless of whether you touch them or not, Facebook is collecting data," said Casey Oppenheim, co-founder of data security firm Disconnect."

This blog discussed social plugins back in 2010. However, the tracking includes more technologies:

"... every web page contains little bits of code that request the pictures, videos, and text that browsers need to display each item on the page. These requests typically go out to a wide swath of corporate servers—including Facebook—in addition to the website’s owner. And such requests can transmit data about the site you’re on, the browser you are using, and more. Useful data gets sent to Facebook whether you click on one of its buttons or not. If you click, Facebook finds out about that, too. And it learns a bit more about your interests.

In addition to the buttons, many websites also incorporate a Facebook Pixel, a tiny, transparent image file the size of just one of the millions of pixels on a typical computer screen. The web page makes a request for a Facebook Pixel, just as it would request a Like button. No user will ever notice the picture, but the request to get it is packaged with information... Facebook explains what data can be collected using a Pixel, such as products you’ve clicked on or added to a shopping cart, in its documentation for advertisers. Web developers can control what data is collected and when it is transmitted... Even if you’re not logged in, the company can still associate the data with your IP address and all the websites you’ve been to that contain Facebook code."
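For readers who want to picture the mechanics, here is a simplified sketch of how a tracking-pixel request can carry page and event data along with it. The endpoint and parameter names are made up for illustration and are not Facebook's actual API.

```typescript
// Illustrative only: how a one-pixel image request can carry page and event data.
// The endpoint and parameter names below are made up, not Facebook's actual API.
function firePixel(endpoint: string, event: string, extra: Record<string, string>): void {
  const params = new URLSearchParams({
    ev: event,                        // e.g. "PageView" or "AddToCart"
    dl: document.location.href,       // the page the visitor is on
    rl: document.referrer,            // where the visitor came from
    ...extra,
  });
  const img = new Image(1, 1);        // an invisible 1x1 image
  img.src = `${endpoint}?${params.toString()}`;  // loading it is the tracking request
}

// Hypothetical usage on a retailer's product page:
firePixel("https://tracker.example.com/tr", "AddToCart", { productId: "sku-123" });
```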

The article also explains "re-targeting," and how consumers who don't purchase anything at an online retail site will later see advertisements -- around the internet and not solely on the Facebook site -- about the items they viewed but did not purchase. Then, there is the database it assembles:

"In materials written for its advertisers, Facebook explains that it sorts consumers into a wide variety of buckets based on factors such as age, gender, language, and geographic location. Facebook also sorts its users based on their online activities—from buying dog food, to reading recipes, to tagging images of kitchen remodeling projects, to using particular mobile devices. The company explains that it can even analyze its database to build “look-alike” audiences that are similar... Facebook can show ads to consumers on other websites and apps as well through the company’s Audience Network."

So, several technologies are used to track both Facebook users and non-users, and assemble a robust, descriptive database. And, some website operators collaborate to facilitate the tracking, which is invisible to most users. Neat, eh?

Like it or not, internet users are automatically included in the tracking and data collection. Can you opt out? Consumer Reports also warns:

"The biggest tech companies don’t give you strong tools for opting out of data collection, though. For instance, privacy settings may let you control whether you see targeted ads, but that doesn’t affect whether a company collects and stores information about you."

Given this, one can conclude that Facebook is really a massive advertising network masquerading as a social networking service.

To minimize the tracking, consumers can: disable the Facebook API platform on their Facebook accounts, use the new tools (e.g., see these step-by-step instructions) from Facebook to review and disable the apps with access to their data, use ad-blocking software (e.g., Adblock Plus, Ghostery), use the opt-out mechanisms offered by the major data brokers, use the OptOutPrescreen.com site to stop pre-approved credit offers, and use VPN software and services.

If you use the Firefox web browser, configure it for Private Browsing and install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? Several web browsers offer Incognito Mode. And, you might try the Privacy Badger add-on instead. I've used it happily for years.

To combat "canvas fingerprinting" (e.g., tracking users by identifying the unique attributes of your computer, browser, and software), security experts have advised consumers to use different web browsers. For example, you'd use one browser only for online banking, and a different web browser for surfing the internet. However,  this security method may not work much longer given the rise of cross-browser fingerprinting.

It seems that an arms race is underway between software that helps users maintain their privacy online and technologies advertisers use to defeat that privacy. Would Facebook and its affiliates/partners use cross-browser fingerprinting? My guess: yes it would, just like any other advertising network.

What do you think?


The 'CLOUD Act' - What It Is And What You Need To Know

Chances are, you probably have not heard of the "CLOUD Act." I hadn't heard about it until recently. A draft of the legislation is available on the website for U.S. Senator Orrin Hatch (Republican - Utah).

Many people who already use cloud services to store and back up data might assume: if it has to do with the cloud, then it must be good. Such an assumption would be foolish. The full name of the bill: the "Clarifying Lawful Overseas Use of Data Act." What problem does this bill solve? Senator Hatch stated last month why he thinks this bill is needed:

"... the Supreme Court will hear arguments in a case... United States v. Microsoft Corp., colloquially known as the Microsoft Ireland case... The case began back in 2013, when the US Department of Justice asked Microsoft to turn over emails stored in a data center in Ireland. Microsoft refused on the ground that US warrants traditionally have stopped at the water’s edge. Over the last few years, the legal battle has worked its way through the court system up to the Supreme Court... The issues the Microsoft Ireland case raises are complex and have created significant difficulties for both law enforcement and technology companies... law enforcement officials increasingly need access to data stored in other countries for investigations, yet no clear enforcement framework exists for them to obtain overseas data. Meanwhile, technology companies, who have an obligation to keep their customers’ information private, are increasingly caught between conflicting laws that prohibit disclosure to foreign law enforcement. Equally important, the ability of one nation to access data stored in another country implicates national sovereignty... The CLOUD Act bridges the divide that sometimes exists between law enforcement and the tech sector by giving law enforcement the tools it needs to access data throughout the world while at the same time creating a commonsense framework to encourage international cooperation to resolve conflicts of law. To help law enforcement, the bill creates incentives for bilateral agreements—like the pending agreement between the US and the UK—to enable investigators to seek data stored in other countries..."

Senators Coons, Graham, and Whitehouse support the CLOUD Act, along with House Representatives Collins, Jeffries, and others. The American Civil Liberties Union (ACLU) opposes the bill and warned:

"Despite its fluffy sounding name, the recently introduced CLOUD Act is far from harmless. It threatens activists abroad, individuals here in the U.S., and would empower Attorney General Sessions in new disturbing ways... the CLOUD Act represents a dramatic change in our law, and its effects will be felt across the globe... The bill starts by giving the executive branch dramatically more power than it has today. It would allow Attorney General Sessions to enter into agreements with foreign governments that bypass current law, without any approval from Congress. Under these agreements, foreign governments would be able to get emails and other electronic information without any additional scrutiny by a U.S. judge or official. And, while the attorney general would need to consider a country’s human rights record, he is not prohibited from entering into an agreement with a country that has committed human rights abuses... the bill would for the first time allow these foreign governments to wiretap in the U.S. — even in cases where they do not meet Wiretap Act standards. Paradoxically, that would give foreign governments the power to engage in surveillance — which could sweep in the information of Americans communicating with foreigners — that the U.S. itself would not be able to engage in. The bill also provides broad discretion to funnel this information back to the U.S., circumventing the Fourth Amendment. This information could potentially be used by the U.S. to engage in a variety of law enforcement actions."

Given that warning, I read the draft legislation. One portion immediately struck me:

"A provider of electronic communication service or remote computing service shall comply with the obligations of this chapter to preserve, backup, or disclose the contents of a wire or electronic communication and any record or other information pertaining to a customer or subscriber within such provider’s possession, custody, or control, regardless of whether such communication, record, or other information is located within or outside of the United States."

While I am not an attorney, this bill definitely sounds like an end-run around the Fourth Amendment. The review process is largely governed by the House of Representatives, a body not known for internet knowledge or savvy. The bill also smells like an attack on internet services consumers regularly use for privacy, such as search engines that don't collect or archive search data, and Virtual Private Networks (VPNs).

Today, for online privacy many consumers in the United States use VPN software and services provided by vendors located offshore. Why? Despite a national poll in 2017 which found the Republican rollback of FCC broadband privacy rules to be very unpopular among consumers, the Republican-led Congress proceeded with that rollback, and President Trump signed the privacy-rollback legislation on April 3, 2017. Hopefully, skilled and experienced privacy attorneys will continue to review and monitor the draft legislation.

The ACLU emphasized in its warning:

"Today, the information of global activists — such as those that fight for LGBTQ rights, defend religious freedom, or advocate for gender equality are protected from being disclosed by U.S. companies to governments who may seek to do them harm. The CLOUD Act eliminates many of these protections and replaces them with vague assurances, weak standards, and largely unenforceable restrictions... The CLOUD Act represents a major change in the law — and a major threat to our freedoms. Congress should not try to sneak it by the American people by hiding it inside of a giant spending bill. There has not been even one minute devoted to considering amendments to this proposal. Congress should robustly debate this bill and take steps to fix its many flaws, instead of trying to pull a fast one on the American people."

I agree. It seems like this bill creates far more problems than it solves. Plus, something this important should be openly and thoroughly discussed, not buried in a spending bill. What do you think?


Fitness Device Usage By U.S. Soldiers Reveals Sensitive Location And Movement Data

Useful technology can often have unintended consequences. The Washington Post reported about an interactive map:

"... posted on the Internet that shows the whereabouts of people who use fitness devices such as Fitbit also reveals highly sensitive information about the locations and activities of soldiers at U.S. military bases, in what appears to be a major security oversight. The Global Heat Map, published by the GPS tracking company Strava, uses satellite information to map the locations and movements of subscribers to the company’s fitness service over a two-year period, by illuminating areas of activity. Strava says it has 27 million users around the world, including people who own widely available fitness devices such as Fitbit and Jawbone, as well as people who directly subscribe to its mobile app. The map is not live — rather, it shows a pattern of accumulated activity between 2015 and September 2017... The U.S.-led coalition against the Islamic State said on Monday it is revising its guidelines on the use of all wireless and technological devices on military facilities as a result of the revelations. "

Takeaway #1: it's easier than you might think for the bad guys to track the locations and movements of high-value targets (e.g., soldiers, corporate executives, politicians, attorneys).

Takeaway #2: unintended consequences from mobile devices are nothing new, as CNN reported in 2015. Consumers love the convenience of their digital devices. It is wise to remember the warning from a famous economist: "There's no such thing as a free lunch."


Wisconsin Employer To Offer Its Employees ID Microchip Implants

A Wisconsin company said that, starting August 1, it will offer its employees the option of having microchip identification implants. The company, Three Square Market (32M), will allow employees with the microchip implants to make purchases in the employee break room, open locked doors, log in to computers, use the copy machine, and perform related office tasks.

Each microchip, about the size of a grain of rice, would be implanted under the skin of an employee's hand. The microchips use radio-frequency identification (RFID), a technology that has existed for a while and has been used in a variety of devices: employee badges, payment cards, passports, package tracking, and more. Each microchip electronically stores identification information about the user and communicates via near-field communication (NFC). Instead of swiping a payment card, employee badge, or smartphone, the employee can unlock a device by waving a hand near a chip reader attached to that device. Purchases in the employee break room can be made by waving a hand near a self-serve kiosk.
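Conceptually, the reader-side logic can be as simple as checking a scanned tag's stored ID against an access list, as in this hypothetical sketch (the tag IDs and reader names are made up). Note that every scan can also be logged, which is where the tracking concerns below come in.

```typescript
// A hypothetical sketch of the reader-side logic: check the scanned tag ID
// against an access list. The IDs and reader names below are made up.
const authorizedTagIds = new Set<string>([
  "04:A2:2E:1B:7C:80",
  "04:7F:09:C3:11:2D",
]);

function handleTagScan(tagId: string, readerId: string): boolean {
  const granted = authorizedTagIds.has(tagId);
  // The privacy implication: every scan, granted or not, can be logged with a
  // timestamp and a location (which reader was touched).
  console.log(`${new Date().toISOString()} reader=${readerId} tag=${tagId} granted=${granted}`);
  return granted;
}

// Hypothetical usage: the break-room kiosk reports a scan.
handleTagScan("04:A2:2E:1B:7C:80", "breakroom-kiosk");
```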

Reportedly, 32M would be the first employer in the USA to microchip its employees. CBS News reported in April about Epicenter, a startup based in Sweden:

"The [implant] injections have become so popular that workers at Epicenter hold parties for those willing to get implanted... Epicenter, which is home to more than 100 companies and some 2,000 workers, began implanting workers in January 2015. Now, about 150 workers have [chip implants]... as with most new technologies, it raises security and privacy issues. While biologically safe, the data generated by the chips can show how often an employee comes to work or what they buy. Unlike company swipe cards or smartphones, which can generate the same data, a person cannot easily separate themselves from the chip."

In an interview with Saint Paul-based KSTP, Todd Westby, the Chief Executive Officer at 32M, described the optional microchip program as:

"... the next thing that's inevitably going to happen, and we want to be a part of it..."

To implement its microchip implant program, 32M has partnered with Sweden-based BioHax International. Westby explained in a company announcement:

"Eventually, this technology will become standardized allowing you to use this as your passport, public transit, all purchasing opportunities... We see chip technology as the next evolution in payment systems, much like micro markets have steadily replaced vending machines... it is important that 32M continues leading the way with advancements such as chip implants..."

"Mico markets" are small stores located within employers' offices; typically the break rooms where employees relax and/or purchase food. 32M estimates 20,000 micro markets nationwide in the USA. According to its website, the company serves markets in North America, Europe, Asia, and Australia. 32M believes that micro markets, aided by chip implants and self-serve kiosk, offer employers greater employee productivity with lower costs.

Yes, the chip implants are similar to the chip implants many pet owners have inserted to identify their dogs or cats. 32M expects 50 employees to enroll in its chip implant program.

Reportedly, companies in Belgium and Sweden already use chip implants to identify employees. 32M's announcement did not list the data elements each employee's microchip would contain, nor whether the data in the microchips would be encrypted. Historically, unencrypted data stored by RFID technology has been vulnerable to skimming attacks by criminals using portable or hand-held RFID readers. Stolen information could be used to clone devices and commit identity theft and fraud.

Some states, such as Washington and California, passed anti-skimming laws. Prior government-industry workshops about RFID usage focused upon consumer products, and not employment concerns. Earlier this year, lawmakers in Nevada introduced legislation making it illegal to require employees to accept microchip implants.

A BBC News reporter discussed in 2015 what it is like to be "chipped." And as CBS News reported:

"... hackers could conceivably gain huge swathes of information from embedded microchips. The ethical dilemmas will become bigger the more sophisticated the microchips become. The data that you could possibly get from a chip that is embedded in your body is a lot different from the data that you can get from a smartphone..."

Example: if employers install RFID readers so employees can unlock bathroom doors, then employers can track when, where, how often, and for how long employees use the bathrooms. How does that sound?

Hopefully, future announcements by 32M will discuss the security features and protections. What are your opinions? Are you willing to be an office cyborg? Should employees have a choice, or should employers be able to force their employees to accept microchip implants? How do you feel about your employer tracking what you eat and drink via purchases with your chip implant?

Many employers publish social media policies covering what employees should (shouldn't, or can't) publish online. Should employers have microchip implant policies, too? If so, what should these policies state?


Microsoft Fights Foreign Cyber Criminals And Spies

The Daily Beast explained how Microsoft fights cyber criminals and spies, some of whom have alleged ties to the Kremlin:

"Last year attorneys for the software maker quietly sued the hacker group known as Fancy Bear in a federal court outside Washington DC, accusing it of computer intrusion, cybersquatting, and infringing on Microsoft’s trademarks. The action, though, is not about dragging the hackers into court. The lawsuit is a tool for Microsoft to target what it calls “the most vulnerable point” in Fancy Bear’s espionage operations: the command-and-control servers the hackers use to covertly direct malware on victim computers. These servers can be thought of as the spymasters in Russia's cyber espionage, waiting patiently for contact from their malware agents in the field, then issuing encrypted instructions and accepting stolen documents.

Since August, Microsoft has used the lawsuit to wrest control of 70 different command-and-control points from Fancy Bear. The company’s approach is indirect, but effective. Rather than getting physical custody of the servers, which Fancy Bear rents from data centers around the world, Microsoft has been taking over the Internet domain names that route to them. These are addresses like “livemicrosoft[.]net” or “rsshotmail[.]com” that Fancy Bear registers under aliases for about $10 each. Once under Microsoft’s control, the domains get redirected from Russia’s servers to the company’s, cutting off the hackers from their victims, and giving Microsoft a omniscient view of that servers’ network of automated spies."
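The technique is essentially DNS sinkholing: once a seized domain resolves to a server the defender controls, every infected machine's check-in lands there instead of with the attackers. Here is a minimal, hypothetical sketch of such a logging server; the port and behavior are illustrative assumptions, not Microsoft's actual setup.

```typescript
// A minimal, hypothetical sketch of a sinkhole: once a seized domain resolves
// to this server, infected machines' check-ins are logged here instead of
// reaching the attackers. Port and response behavior are made up for illustration.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Each incoming request reveals a victim: source IP, requested host, and path.
  console.log(
    `${new Date().toISOString()} ${req.socket.remoteAddress} ${req.headers.host ?? ""}${req.url ?? ""}`
  );
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(""); // reply with nothing useful, so the malware receives no instructions
});

server.listen(8080, () => console.log("sinkhole listening on port 8080"));
```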

Kudos to Microsoft and its attorneys.


Russian Cyber Attacks Against US Voting Systems Wider Than First Thought

Cyber attacks upon electoral systems in the United States are wider than originally thought. The attacks occurred in at least 39 states. A Bloomberg report described online attacks in Illinois as an example:

"... investigators found evidence that cyber intruders tried to delete or alter voter data. The hackers accessed software designed to be used by poll workers on Election Day, and in at least one state accessed a campaign finance database. Details of the wave of attacks, in the summer and fall of 2016... In early July 2016, a contractor who works two or three days a week at the state board of elections detected unauthorized data leaving the network, according to Ken Menzel, general counsel for the Illinois board of elections. The hackers had gained access to the state’s voter database, which contained information such as names, dates of birth, genders, driver’s licenses and partial Social Security numbers on 15 million people, half of whom were active voters. As many as 90,000 records were ultimately compromised..."

Politicians have emphasized that the point of the disclosures isn't to embarrass any specific state, but to alert the public to past activities and to the ongoing threat. The Intercept reported:

"Russian military intelligence executed a cyberattack on at least one U.S. voting software supplier and sent spear-phishing emails to more than 100 local election officials just days before last November’s presidential election, according to a highly classified intelligence report obtained by The Intercept.

The top-secret National Security Agency document, which was provided anonymously to The Intercept and independently authenticated, analyzes intelligence very recently acquired by the agency about a months-long Russian intelligence cyber effort against elements of the U.S. election and voting infrastructure. The report, dated May 5, 2017, is the most detailed U.S. government account of Russian interference in the election that has yet come to light."

Spear-phishing is a tactic in which criminals send malware-laden e-mail messages to targeted individuals, whose names and demographic details may have been collected from social networking sites and other sources. The spam e-mail uses those details to pretend to be valid e-mail from a coworker, business associate, or friend. When the target opens the e-mail attachment, their computer and network are often infected with malware that collects and transmits log-in credentials to the criminals, or that remotely takes over the target's computer (e.g., ransomware) and demands a ransom payment. Stolen log-in credentials are how criminals steal consumers' money by breaking into online bank accounts.

The Intercept report explained how the elections systems hackers adopted this tactic:

"... the Russian plan was simple: pose as an e-voting vendor and trick local government employees into opening Microsoft Word documents invisibly tainted with potent malware that could give hackers full control over the infected computers. But in order to dupe the local officials, the hackers needed access to an election software vendor’s internal systems to put together a convincing disguise. So on August 24, 2016, the Russian hackers sent spoofed emails purporting to be from Google to employees of an unnamed U.S. election software company... The spear-phishing email contained a link directing the employees to a malicious, faux-Google website that would request their login credentials and then hand them over to the hackers. The NSA identified seven “potential victims” at the company. While malicious emails targeting three of the potential victims were rejected by an email server, at least one of the employee accounts was likely compromised, the agency concluded..."

Experts believe the voting equipment company targeted was VR Systems, based in Florida. Reportedly, its electronic voting services and equipment are used in eight states. VR Systems posted online a Frequently Asked Questions document (Adobe PDF) about the cyber attacks against elections systems:

"Recent reports indicate that cyber actors impersonated VR Systems and other elections companies. Cyber actors sent an email from a fake account to election officials in an unknown number of districts just days before the 2016 general election. The fraudulent email asked recipients to open an attachment, which would then infect their computer, providing a gateway for more mischief... Because the spear-phishing email did not originate from VR Systems, we do not know how many jurisdictions were potentially impacted. Many election offices report that they never received the email or it was caught by their spam filters before it could reach recipients. It is our understanding that all jurisdictions, including VR Systems customers, have been notified by law enforcement agencies if they were a target of this spear-phishing attack... In August, a small number of phishing emails were sent to VR Systems. These emails were captured by our security protocols and the threat was neutralized. No VR Systems employee’s email was compromised. This prevented the cyber actors from accessing a genuine VR Systems email account. As such, the cyber actors, as part of their late October spear-phishing attack, resorted to creating a fake account to use in that spear-phishing campaign."

It is good news that VR Systems protected its employees' e-mail accounts. Let's hope that those employees were equally diligent about protecting their personal e-mail accounts and home computers, networks, and phones. We all know that employees often work from home.

The Intercept report highlighted a fact about life on the internet, which all internet users should know: stolen log-in credentials are highly valued by criminals:

"Jake Williams, founder of computer security firm Rendition Infosec and formerly of the NSA’s Tailored Access Operations hacking team, said stolen logins can be even more dangerous than an infected computer. “I’ll take credentials most days over malware,” he said, since an employee’s login information can be used to penetrate “corporate VPNs, email, or cloud services,” allowing access to internal corporate data. The risk is particularly heightened given how common it is to use the same password for multiple services. Phishing, as the name implies, doesn’t require everyone to take the bait in order to be a success — though Williams stressed that hackers “never want just one” set of stolen credentials."

So, a word to the wise for all internet users: don't use the same log-in credentials at multiple sites. Don't open e-mail attachments from strangers. If you weren't expecting an e-mail attachment from a coworker, friend, or business associate, call them on the phone first and verify that they indeed sent it to you. The internet has become a dangerous place.


60 Minutes Re-Broadcast Its 2014 Interview With FBI Director Comey

Last night, the 60 Minutes television show re-broadcast its 2014 interview with former Federal Bureau of Investigation (FBI) Director James Comey. The interview is important for several reasons.

Politically liberal people have criticized Comey for mentioning to Congress, just before the 2016 election, the FBI investigation of former Secretary of State Hillary Clinton's private e-mail server. Many believe that Comey's comments helped candidate Donald Trump win the Presidential election. Politically conservative people criticized Comey for not recommending prosecution of former Secretary Clinton.

The interview is a reminder of history, and that reality is often far more nuanced and complicated. Back in 2004, when the George W. Bush administration sought a re-authorization of warrantless e-mail and phone searches, 60 Minutes explained:

"At the time, Comey was in charge at the Justice Department because Attorney General John Ashcroft was in intensive care with near fatal pancreatitis. When Comey refused to sign off, the president's Chief of Staff Andy Card headed to the hospital to get Ashcroft's OK."

In the 2014 interview, Comey described his concerns in 2004 about key events:

"... [the government] cannot read your emails or listen to your calls without going to a federal judge, making a showing of probable cause that you are a terrorist, an agent of a foreign power, or a serious criminal of some sort, and get permission for a limited period of time to intercept those communications. It is an extremely burdensome process. And I like it that way... I was the deputy attorney general of the United States. We were not going to authorize, reauthorize or participate in activities that did not have a lawful basis."

During the interview in 2014 by 60 Minutes, then FBI Director Comey warned all Americans:

"I believe that Americans should be deeply skeptical of government power. You cannot trust people in power. The founders knew that. That's why they divided power among three branches, to set interest against interest... The promise I've tried to honor my entire career, that the rule of law and the design of the founders, right, the oversight of courts and the oversight of Congress will be at the heart of what the FBI does. The way you'd want it to be..."

The interview highlighted the letter Comey kept on his desk as a cautionary reminder of the excesses of government. That letter concerned former FBI Director J. Edgar Hoover's investigations and excessive surveillance of the late Dr. Martin Luther King, Jr. Is Comey the villain that people on both sides of the political spectrum claim? Again, history is far more complicated and nuanced than that.

So, history is complex and nuanced... far more than a simplistic, self-serving tweet can convey.

Many have paid close attention for years. After the Snowden disclosures in 2013 about broad, warrantless searches and data collection programs by government intelligence agencies, Comey urged all U.S. citizens in 2014 to participate in a national discussion about the balance between privacy and surveillance.

You can read the full transcript of the 2014 60 Minutes interview, watch this preview on YouTube, or watch last night's re-broadcast of the 2014 interview by 60 Minutes.


Berners-Lee: 3 Reasons Why The Internet Is In Serious Trouble

Most people love the Internet. It's a tool that has made life easier and more efficient in many ways. Even with all of those advances, the inventor of the World Wide Web listed three reasons why our favorite digital tool is in serious trouble:

  1. Consumers have lost control of their personal information
  2. It's too easy for anyone to publish misinformation online
  3. Political advertising online lacks transparency

Tim Berners-Lee explained the first reason:

"The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But, we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share..."

Given President Trump's appointees to the U.S. Federal Communications Commission (FCC), matters will likely get worse as the agency seeks to revoke online privacy and net neutrality protections for consumers in the United States. Berners-Lee explained the second reason:

"Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And they choose what to show us based on algorithms that learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking, or designed to appeal to our biases, can spread like wildfire..."

Fake news has become so widespread that many public libraries, schools, and colleges teach students how to recognize fake news sites and content. The problem isn't limited to social networking sites like Facebook promoting certain news; it also includes search engines. Readers of this blog are familiar with the DuckDuckGo search engine, used both for online privacy and to escape the filter bubble. According to its public traffic page, DuckDuckGo handles about 14 million searches daily.

Most other search engines collect information about their users and use it to serve search results related to what they've searched for previously. That's called the "filter bubble." It's great for search engines' profitability because it encourages repeat usage, but it's terrible for consumers wanting unbiased and unfiltered search results.
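To make the "filter bubble" concrete, here is a toy sketch of personalized re-ranking. It is a hypothetical illustration, not any search engine's actual algorithm: results whose topic matches a user's prior clicks get boosted, so prior behavior shapes what the user sees next.

```python
from collections import Counter

def personalized_rank(results, click_history):
    """Toy illustration of a 'filter bubble': boost results whose topic the
    user has clicked before, so prior behavior shapes what appears next.
    Real search engines use far more signals than this."""
    clicks = Counter(click_history)
    def score(result):
        base = result["relevance"]             # relevance to the query, 0..1
        boost = clicks[result["topic"]] * 0.1  # personal bias term
        return base + boost
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Story A", "topic": "politics-left",  "relevance": 0.70},
    {"title": "Story B", "topic": "politics-right", "relevance": 0.75},
]
# A user who clicked left-leaning stories seven times now sees Story A first,
# even though Story B was more relevant to the query itself.
print(personalized_rank(results, ["politics-left"] * 7))
```

Repeat that loop billions of times a day and each user's results drift toward whatever they already clicked.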

Berners-Lee warned that online political advertising:

"... has rapidly become a sophisticated industry. The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data mean that political campaigns are now building individual adverts targeted directly at users. One source suggests that in the 2016 U.S. election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls. Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?"

What do you think of the assessment by Berners-Lee? Of his solutions? Any other issues?


WikiLeaks Claimed CIA Lost Control Of Its Hacking Tools For Phones And Smart TVs

Central Intelligence Agency logo According to a Tuesday, March 7 press release by WikiLeaks, a hacking division of the Central Intelligence Agency (CIA) has collected an arsenal of hundreds of tools to control a variety of smartphones and smart televisions, including devices made by Apple, Google, Microsoft, Samsung, and others. WikiLeaks claimed the agency lost control of that arsenal, and announced the release of:

"... 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence in Langley, Virginia... Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized "zero day" exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive."

WikiLeaks used the code name "Vault 7" to identify this first set of documents, and claimed its source was a former government hacker or contractor. It also said that its source wanted to encourage a public debate about the CIA's capabilities, which allegedly overlap with those of the National Security Agency (NSA), causing waste.

The announcement also included statements allegedly describing the CIA's capabilities:

"CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation)... By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI), had over 5000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other "weaponized" malware... The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone. Despite iPhone's minority share (14.5%) of the global smart phone market in 2016, a specialized unit in the CIA's Mobile Development Branch produces malware to infest, control and exfiltrate data from iPhones and other Apple products running iOS, such as iPads."

CIA's capabilities reportedly include the "Weeping Angel" program:

"... developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization. The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS. After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server."

Besides phones and smart televisions, WikiLeaks claimed the agency seeks to hack internet-connected cars and trucks:

"As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations."

No doubt security experts will analyze the documents for veracity during the coming weeks and months. The whole situation is reminiscent of the disclosures in 2013 about broad surveillance programs by the National Security Agency (NSA). You can read more about yesterday's disclosures by WikiLeaks at the Guardian UK, CBS News, the McClatchy DC news wire, and at Consumer Reports.


Advocacy Groups And Legal Experts Denounce DHS Proposal Requiring Travelers To Disclose Social Media Credentials

U.S. Department of Homeland Security logo Several dozen human rights organizations, civil liberties advocates, and legal experts published an open letter on February 21, 2017 condemning a proposal by the U.S. Department of Homeland Security to require the social media credentials (e.g., usernames and passwords) of all travelers from majority-Muslim countries. This letter was sent after testimony before Congress by Homeland Security Secretary John Kelly. NBC News reported on February 8:

"Homeland Security Secretary John Kelly told Congress on Tuesday the measure was one of several being considered to vet refugees and visa applicants from seven Muslim-majority countries. "We want to get on their social media, with passwords: What do you do, what do you say?" he told the House Homeland Security Committee. "If they don't want to cooperate then you don't come in."

His comments came the same day judges heard arguments over President Donald Trump's executive order temporarily barring entry to most refugees and travelers from Syria, Iraq, Iran, Somalia, Sudan, Libya and Yemen. Kelly, a Trump appointee, stressed that asking for people's passwords was just one of "the things that we're thinking about" and that none of the suggestions were concrete."

The letter, available at the Center For Democracy & Technology (CDT) website, stated in part (bold emphasis added):

"The undersigned coalition of human rights and civil liberties organizations, trade associations, and experts in security, technology, and the law expresses deep concern about the comments made by Secretary John Kelly at the House Homeland Security Committee hearing on February 7th, 2017, suggesting the Department of Homeland Security could require non-citizens to provide the passwords to their social media accounts as a condition of entering the country.

We recognize the important role that DHS plays in protecting the United States’ borders and the challenges it faces in keeping the U.S. safe, but demanding passwords or other account credentials without cause will fail to increase the security of U.S. citizens and is a direct assault on fundamental rights.

This proposal would enable border officials to invade people’s privacy by examining years of private emails, texts, and messages. It would expose travelers and everyone in their social networks, including potentially millions of U.S. citizens, to excessive, unjustified scrutiny. And it would discourage people from using online services or taking their devices with them while traveling, and would discourage travel for business, tourism, and journalism."

The letter was signed by about 75 organizations and individuals, including the American Civil Liberties Union, the American Library Association, the American Society of Journalists & Authors, the American Society of News Editors, Americans for Immigrant Justice, the Brennan Center for Justice at NYU School of Law, Electronic Frontier Foundation, Human Rights Watch, Immigrant Legal Resource Center, National Hispanic Media Coalition, Public Citizen, Reporters Without Borders, the World Privacy Forum, and many more.

The letter is also available here (Adobe PDF).


High Tech Companies And A Muslim Registry

Since the Snowden disclosures in 2013, there have been plenty of news reports about how technology companies have assisted the U.S. government with surveillance programs. These activities included National Security Agency (NSA) surveillance programs that swept up innocent citizens, bulk collection of phone-call metadata, warrantless NSA searches of citizens' phone calls and emails, facial image collection, identification of the NSA's closest corporate collaborator, fake cell phone towers (a/k/a 'stingrays') used by both federal agencies and local police departments, and automated license plate readers used to track drivers.

You may also remember that, after Apple's refusal to build a backdoor into its smartphones, the U.S. Federal Bureau of Investigation bought a hacking tool from a third party. Several tech companies built the Reform Government Surveillance site, while others actively pursue "surveillance capitalism" business goals.

During the 2016 political campaign, candidate (and now President-elect) Donald Trump said he would require all Muslims in the United States to register. Mr. Trump's words matter greatly given his lack of government experience; his words are all voters had to rely upon.

So, The Intercept asked several technology companies a key question about the next logical step: whether or not they are willing to help build and implement a Muslim registry:

"Every American corporation, from the largest conglomerate to the smallest firm, should ask itself right now: Will we do business with the Trump administration to further its most extreme, draconian goals? Or will we resist? This question is perhaps most important for the country’s tech companies, which are particularly valuable partners for a budding authoritarian."

The companies queried included IBM, Microsoft, Google, Facebook, Twitter, and others. What's been the response? Well, IBM focused on other areas of collaboration:

"Shortly after the election, IBM CEO Ginni Rometty wrote a personal letter to President-elect Trump in which she offered her congratulations, and more importantly, the services of her company. The six different areas she identified as potential business opportunities between a Trump White House and IBM were all inoffensive and more or less mundane, but showed a disturbing willingness to sell technology to a man with open interest in the ways in which technology can be abused: Mosque surveillance, a “virtual wall” with Mexico, shutting down portions of the internet on command, and so forth."

The response from most other companies has been crickets. So far, only Twitter has flatly refused, and the company included with its reply a link to its blog post about developer policies:

"Recent reports about Twitter data being used for surveillance, however, have caused us great concern. As a company, our commitment to social justice is core to our mission and well established. And our policies in this area are long-standing. Using Twitter’s Public APIs or data products to track or profile protesters and activists is absolutely unacceptable and prohibited.

To be clear: We prohibit developers using the Public APIs and Gnip data products from allowing law enforcement — or any other entity — to use Twitter data for surveillance purposes. Period. The fact that our Public APIs and Gnip data products provide information that people choose to share publicly does not change our policies in this area. And if developers violate our policies, we will take appropriate action, which can include suspension and termination of access to Twitter’s Public APIs and data products.

We have an internal process to review use cases for Gnip data products when new developers are onboarded and, where appropriate, we may reject all or part of a requested use case..."

Recently, a Trump-Pence supporter floated this trial balloon to justify such a registry:

"A prominent supporter of Donald J. Trump drew concern and condemnation from advocates for Muslims’ rights on Wednesday after he cited World War II-era Japanese-American internment camps as a “precedent” for an immigrant registry suggested by a member of the president-elect’s transition team. The supporter, Carl Higbie, a former spokesman for Great America PAC, an independent fund-raising committee, made the comments in an appearance on “The Kelly File” on Fox News...

“We’ve done it based on race, we’ve done it based on religion, we’ve done it based on region,” Mr. Higbie said. “We’ve done it with Iran back — back a while ago. We did it during World War II with Japanese.”

You can read the replies from nine technology companies at the Intercept site. Will other companies besides Twitter show that they have a spine? Whether or not such a registry ultimately violates the U.S. Constitution, we will definitely hear a lot more about this subject in the near future.


Some Android Phones Infected With Surveillance Malware Installed In Firmware

Security analysts recently discovered surveillance malware in some inexpensive smartphones that run the Android operating system (OS) software. The malware secretly transmits information about the device owner and usage to servers in China. The surveillance malware was installed in the phones' firmware. The New York Times reported:

"... you can get a smartphone with a high-definition display, fast data service and, according to security contractors, a secret feature: a backdoor that sends all your text messages to China every 72 hours. Security contractors recently discovered pre-installed software in some Android phones... International customers and users of disposable or prepaid phones are the people most affected by the software... The Chinese company that wrote the software, Shanghai Adups Technology Company, says its code runs on more than 700 million phones, cars and other smart devices. One American phone manufacturer, BLU Products, said that 120,000 of its phones had been affected and that it had updated the software to eliminate the feature."

Shanghai ADUPS Technology Company (ADUPS) is privately owned and based in Shanghai, China. According to Bloomberg, ADUPS:

"... provides professional Firmware Over-The-Air (FOTA) update services. The company offers a cloud-based service, which includes cloud hosts and CDN service, as well as allows manufacturers to update all their device models. It serves smart device manufacturers, mobile operators, and semiconductor vendors worldwide."

Firmware is a special type of software, stored in read-only memory (ROM) chips, that operates a device and controls how it monitors and manipulates data. Kryptowire, a security firm, discovered the malware. The Kryptowire report identified:

"... several models of Android mobile devices that contained firmware that collected sensitive personal data about their users and transmitted this sensitive data to third-party servers without disclosure or the users' consent. These devices were available through major US-based online retailers (Amazon, BestBuy, for example)... These devices actively transmitted user and device information including the full-body of text messages, contact lists, call history with full telephone numbers, unique device identifiers including the International Mobile Subscriber Identity (IMSI) and the International Mobile Equipment Identity (IMEI). The firmware could target specific users and text messages matching remotely defined keywords. The firmware also collected and transmitted information about the use of applications installed on the monitored device, bypassed the Android permission model, executed remote commands with escalated (system) privileges, and was able to remotely reprogram the devices.

The firmware that shipped with the mobile devices and subsequent updates allowed for the remote installation of applications without the users' consent and, in some versions of the software, the transmission of fine-grained device location information... Our findings are based on both code and network analysis of the firmware. The user and device information was collected automatically and transmitted periodically without the users' consent or knowledge. The collected information was encrypted with multiple layers of encryption and then transmitted over secure web protocols to a server located in Shanghai. This software and behavior bypasses the detection of mobile anti-virus tools because they assume that software that ships with the device is not malware and thus, it is white-listed."

So, the malware was powerful, sophisticated, and impossible for consumers to detect.
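Technically inclined owners can at least look for the update component itself. The following is a minimal sketch, not a definitive test: it assumes the Android SDK's adb tool is installed, that USB debugging is enabled on the phone, and that an affected update service ships under a package name containing "adups" or "fota" (a pattern drawn from press coverage of affected devices, not a guarantee). A clean result does not prove a phone is safe.

```python
import subprocess

# Assumptions: the phone is connected with USB debugging enabled, and the
# Android SDK's adb tool is on the PATH. The name patterns below come from
# press coverage of affected devices and are NOT a definitive test; a clean
# result does not prove a phone is free of this firmware.
SUSPECT_PATTERNS = ("adups", "fota")

def list_packages():
    """Return the package names installed on the connected device."""
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line looks like "package:com.example.app"
    return [line.split(":", 1)[1].strip() for line in out.splitlines() if ":" in line]

if __name__ == "__main__":
    suspects = [p for p in list_packages()
                if any(pattern in p.lower() for pattern in SUSPECT_PATTERNS)]
    print("Possible firmware-update components:", suspects or "none found")
```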

This incident provides several reminders. First, there were efforts earlier this year by the U.S. Federal Bureau of Investigation (FBI) to force Apple to build "back doors" into its phones for law enforcement. Reportedly, it is unclear which law enforcement or intelligence services utilized the data streams produced by the surveillance malware. It is probably wise to assume that the Ministry of State Security, China's intelligence agency, had or has access to them.

Second, the incident highlights supply-chain concerns raised in 2015 about computer products manufactured in China. Third, it shows how easily consumers' privacy can be compromised at any point in a product's supply chain: manufacturing, assembly, transport, and retail sale.

Fourth, the incident highlights Android phone security issues raised earlier this year. We know from prior reports that manufacturers and wireless carriers don't provide OS updates for all Android phones. Fifth, the incident highlights the need for automakers and software developers to ensure the security of both connected cars and driverless cars.

Sixth, the incident raises questions about how and whether President-elect Donald J. Trump and his incoming administration will address this trade issue with China. The Trump-Pence campaign site stated about trade with China:

"5. Instruct the Treasury Secretary to label China a currency manipulator.

6. Instruct the U.S. Trade Representative to bring trade cases against China, both in this country and at the WTO. China's unfair subsidy behavior is prohibited by the terms of its entrance to the WTO.

7. Use every lawful presidential power to remedy trade disputes if China does not stop its illegal activities, including its theft of American trade secrets - including the application of tariffs consistent with Section 201 and 301 of the Trade Act of 1974 and Section 232 of the Trade Expansion Act of 1962..."

This incident places consumers in a difficult spot. According to the New York Times:

"Because Adups has not published a list of affected phones, it is not clear how users can determine whether their phones are vulnerable. “People who have some technical skills could,” Mr. Karygiannis, the Kryptowire vice president, said. “But the average consumer? No.” Ms. Lim [an attorney that represents Adups] said she did not know how customers could determine whether they were affected."

Until these supply-chain security issues get resolved, it is probably wise for consumers to ask where an Android phone was made before buying it. There are plenty of customer-service sites that help existing Android phone owners determine the country their device was made in. Example: Samsung phone info. A small sketch for checking the device itself follows.
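For owners comfortable with a command line, the phone's own build properties offer another starting point. This is a minimal sketch, assuming the Android SDK's adb tool is installed and USB debugging is enabled; the properties identify the brand, model, and firmware build, not the literal factory, so treat the output as a starting point for research rather than a verdict.

```python
import subprocess

# Standard Android build properties. They identify the brand, model, and
# firmware build -- not the literal country of manufacture -- so treat the
# output as a starting point for research, not a verdict.
PROPERTIES = [
    "ro.product.manufacturer",
    "ro.product.model",
    "ro.build.fingerprint",
]

def get_prop(name: str) -> str:
    result = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    for prop in PROPERTIES:
        print(f"{prop}: {get_prop(prop)}")
```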

Should consumers avoid buying Android phones made in China, or Android phones with firmware made in China? That's a decision only you can make for yourself. Me? When I changed wireless carriers in July, I switched from an inexpensive Android phone I'd bought several years ago to an Apple iPhone.

What are your thoughts about the surveillance malware? Would you buy an Android phone?