
Russian Cyber Attacks Against US Voting Systems Wider Than First Thought

Cyber attacks upon electoral systems in the United States were more widespread than originally thought, occurring in at least 39 states. A Bloomberg report described online attacks in Illinois as an example:

"... investigators found evidence that cyber intruders tried to delete or alter voter data. The hackers accessed software designed to be used by poll workers on Election Day, and in at least one state accessed a campaign finance database. Details of the wave of attacks, in the summer and fall of 2016... In early July 2016, a contractor who works two or three days a week at the state board of elections detected unauthorized data leaving the network, according to Ken Menzel, general counsel for the Illinois board of elections. The hackers had gained access to the state’s voter database, which contained information such as names, dates of birth, genders, driver’s licenses and partial Social Security numbers on 15 million people, half of whom were active voters. As many as 90,000 records were ultimately compromised..."

Politicians have emphasized that the point of the disclosures isn't to embarrass any specific state, but to alert the public to past activities and to the ongoing threat. The Intercept reported:

"Russian military intelligence executed a cyberattack on at least one U.S. voting software supplier and sent spear-phishing emails to more than 100 local election officials just days before last November’s presidential election, according to a highly classified intelligence report obtained by The Intercept.

The top-secret National Security Agency document, which was provided anonymously to The Intercept and independently authenticated, analyzes intelligence very recently acquired by the agency about a months-long Russian intelligence cyber effort against elements of the U.S. election and voting infrastructure. The report, dated May 5, 2017, is the most detailed U.S. government account of Russian interference in the election that has yet come to light."

Spear-phishing is a tactic in which criminals send malware-laden e-mail messages to targeted individuals, whose names and demographic details may have been collected from social networking sites and other sources. The e-mail uses those details to pose as valid e-mail from a coworker, business associate, or friend. When the target opens the e-mail attachment, malware often infects their computer and network to collect and transmit log-in credentials to the criminals, or to remotely take over the target's computer (e.g., ransomware) and demand a ransom payment. Stolen log-in credentials are how criminals steal consumers' money by breaking into online bank accounts.
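To illustrate one common defense against this tactic, here is a minimal, hypothetical sketch of a sender check that flags e-mail from domains closely resembling a trusted domain (the domain names and threshold are assumptions for the example, not taken from any real filter):

```python
from difflib import SequenceMatcher

# Hypothetical list of domains this organization trusts.
TRUSTED_DOMAINS = {"vendor-example.com", "google.com"}

def sender_domain(from_address):
    """Extract the domain portion of an e-mail address."""
    return from_address.rsplit("@", 1)[-1].lower()

def is_suspicious(from_address, threshold=0.8):
    """Flag senders whose domain is not trusted but closely
    resembles a trusted domain (a common spoofing tactic)."""
    domain = sender_domain(from_address)
    if domain in TRUSTED_DOMAINS:
        return False
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return True  # lookalike, e.g. "goog1e.com"
    return False

print(is_suspicious("admin@goog1e.com"))   # lookalike of google.com
print(is_suspicious("alice@google.com"))   # trusted sender
```

Real mail filters use far more signals (SPF, DKIM, sender reputation), but the sketch shows why a lookalike domain alone can be enough to warrant suspicion.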

The Intercept report explained how the elections systems hackers adopted this tactic:

"... the Russian plan was simple: pose as an e-voting vendor and trick local government employees into opening Microsoft Word documents invisibly tainted with potent malware that could give hackers full control over the infected computers. But in order to dupe the local officials, the hackers needed access to an election software vendor’s internal systems to put together a convincing disguise. So on August 24, 2016, the Russian hackers sent spoofed emails purporting to be from Google to employees of an unnamed U.S. election software company... The spear-phishing email contained a link directing the employees to a malicious, faux-Google website that would request their login credentials and then hand them over to the hackers. The NSA identified seven “potential victims” at the company. While malicious emails targeting three of the potential victims were rejected by an email server, at least one of the employee accounts was likely compromised, the agency concluded..."

Experts believe the voting equipment company targeted was VR Systems, based in Florida. Reportedly, its electronic voting services and equipment are used in eight states. VR Systems posted online a Frequently Asked Questions document (Adobe PDF) about the cyber attacks against elections systems:

"Recent reports indicate that cyber actors impersonated VR Systems and other elections companies. Cyber actors sent an email from a fake account to election officials in an unknown number of districts just days before the 2016 general election. The fraudulent email asked recipients to open an attachment, which would then infect their computer, providing a gateway for more mischief... Because the spear-phishing email did not originate from VR Systems, we do not know how many jurisdictions were potentially impacted. Many election offices report that they never received the email or it was caught by their spam filters before it could reach recipients. It is our understanding that all jurisdictions, including VR Systems customers, have been notified by law enforcement agencies if they were a target of this spear-phishing attack... In August, a small number of phishing emails were sent to VR Systems. These emails were captured by our security protocols and the threat was neutralized. No VR Systems employee’s email was compromised. This prevented the cyber actors from accessing a genuine VR Systems email account. As such, the cyber actors, as part of their late October spear-phishing attack, resorted to creating a fake account to use in that spear-phishing campaign."

It is good news that VR Systems protected its employees' e-mail accounts. Let's hope that those employees were equally diligent about protecting their personal e-mail accounts and home computers, networks, and phones. We all know employees often work from home.

The Intercept report highlighted a fact about life on the internet, which all internet users should know: stolen log-in credentials are highly valued by criminals:

"Jake Williams, founder of computer security firm Rendition Infosec and formerly of the NSA’s Tailored Access Operations hacking team, said stolen logins can be even more dangerous than an infected computer. “I’ll take credentials most days over malware,” he said, since an employee’s login information can be used to penetrate “corporate VPNs, email, or cloud services,” allowing access to internal corporate data. The risk is particularly heightened given how common it is to use the same password for multiple services. Phishing, as the name implies, doesn’t require everyone to take the bait in order to be a success — though Williams stressed that hackers “never want just one” set of stolen credentials."

So, a word to the wise for all internet users: don't use the same log-in credentials at multiple sites. Don't open e-mail attachments from strangers. If you weren't expecting an e-mail attachment from a coworker/friend/business associate, call them on the phone first and verify that they indeed sent an attachment to you. The internet has become a dangerous place.


60 Minutes Re-Broadcast Its 2014 Interview With FBI Director Comey

Last night, the 60 Minutes television show re-broadcast its 2014 interview with former Federal Bureau of Investigation (FBI) Director James Comey. The interview is important for several reasons.

Politically liberal people have criticized Comey for mentioning to Congress, just before the 2016 election, the FBI investigation of former Secretary of State Hillary Clinton's private e-mail server. Many believe that Comey's comments helped candidate Donald Trump win the Presidential election. Politically conservative people criticized Comey for not recommending prosecution of former Secretary Clinton.

The interview is a reminder that history, and reality, are often far more nuanced and complicated. Back in 2004, when the George W. Bush administration sought re-authorization of warrantless e-mail/phone searches, 60 Minutes explained:

"At the time, Comey was in charge at the Justice Department because Attorney General John Ashcroft was in intensive care with near fatal pancreatitis. When Comey refused to sign off, the president's Chief of Staff Andy Card headed to the hospital to get Ashcroft's OK."

In the 2014 interview, Comey described his concerns in 2004 about key events:

"... [the government] cannot read your emails or listen to your calls without going to a federal judge, making a showing of probable cause that you are a terrorist, an agent of a foreign power, or a serious criminal of some sort, and get permission for a limited period of time to intercept those communications. It is an extremely burdensome process. And I like it that way... I was the deputy attorney general of the United States. We were not going to authorize, reauthorize or participate in activities that did not have a lawful basis."

During the interview in 2014 by 60 Minutes, then FBI Director Comey warned all Americans:

"I believe that Americans should be deeply skeptical of government power. You cannot trust people in power. The founders knew that. That's why they divided power among three branches, to set interest against interest... The promise I've tried to honor my entire career, that the rule of law and the design of the founders, right, the oversight of courts and the oversight of Congress will be at the heart of what the FBI does. The way you'd want it to be..."

The interview highlighted the letter Comey kept on his desk as a cautionary reminder of the excesses of government. That letter was about former FBI Director J. Edgar Hoover's investigations and excessive surveillance of the late Dr. Martin Luther King, Jr. Is Comey the villain that people on both sides of the political spectrum claim? History is far more complicated and nuanced than that.

So, history is complex and nuanced... far more than a simplistic, self-serving tweet.

Many have paid close attention for years. After the Snowden disclosures in 2013 about broad, warrantless searches and data collection programs by government intelligence agencies, in 2014 Comey urged all USA citizens to participate in a national discussion about the balance between privacy and surveillance.

You can read the full transcript of the 60 Minutes interview in 2014, watch this preview on YouTube, or watch last night's re-broadcast by 60 Minutes of the 2014 interview.


Berners-Lee: 3 Reasons Why The Internet Is In Serious Trouble

Most people love the Internet. It's a tool that has made life easier and more efficient in many ways. Even with all of those advances, the inventor of the World Wide Web listed three reasons why our favorite digital tool is in serious trouble:

  1. Consumers have lost control of their personal information
  2. It's too easy for anyone to publish misinformation online
  3. Political advertising online lacks transparency

Tim Berners-Lee explained the first reason:

"The current business model for many websites offers free content in exchange for personal data. Many of us agree to this – albeit often by accepting long and confusing terms and conditions documents – but fundamentally we do not mind some information being collected in exchange for free services. But, we’re missing a trick. As our data is then held in proprietary silos, out of sight to us, we lose out on the benefits we could realise if we had direct control over this data and chose when and with whom to share it. What’s more, we often do not have any way of feeding back to companies what data we’d rather not share..."

Given President Trump's appointees to the U.S. Federal Communications Commission (FCC), it will likely get worse as the FCC seeks to revoke online privacy and net neutrality protections for consumers in the United States. Berners-Lee explained the second reason:

"Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And they choose what to show us based on algorithms that learn from our personal data that they are constantly harvesting. The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or fake news, which is surprising, shocking, or designed to appeal to our biases, can spread like wildfire..."

Fake news has become so widespread that many public libraries, schools, and colleges teach students how to recognize fake news sites and content. The problem isn't limited to social networking sites like Facebook promoting certain news; it also includes search engines. Readers of this blog are familiar with the DuckDuckGo search engine, used both for online privacy and to escape the filter bubble. According to its public traffic page, DuckDuckGo gets about 14 million searches daily.

Most other search engines collect information about their users and use it to serve search results related to what they've searched for previously. That's called the "filter bubble." It's great for search engines' profitability because it encourages repeat usage, but it is terrible for consumers wanting unbiased and unfiltered search results.
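The mechanism behind a filter bubble is easy to sketch. In this hypothetical, greatly simplified example, results are re-ranked by each user's past click history, so two users running the same query see different orderings (the topics, titles, and click counts are invented for illustration):

```python
# A hypothetical, simplified illustration of a "filter bubble":
# results are re-ranked by each user's past click history.

def personalized_rank(results, click_history):
    """Sort results so topics the user clicked before come first.

    results: list of (title, topic) tuples, in neutral order.
    click_history: dict mapping topic -> number of past clicks.
    """
    return sorted(results,
                  key=lambda r: click_history.get(r[1], 0),
                  reverse=True)

results = [("Candidate A wins debate", "politics"),
           ("New phone released", "gadgets"),
           ("Stock market update", "finance")]

# Two users, same query, different histories.
gadget_fan = {"gadgets": 12, "finance": 1}
news_junkie = {"politics": 30}

print(personalized_rank(results, gadget_fan)[0][0])   # "New phone released"
print(personalized_rank(results, news_junkie)[0][0])  # "Candidate A wins debate"
```

Production ranking algorithms are vastly more complex, but the feedback loop is the same: what you clicked before shapes what you are shown next, which shapes what you click.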

Berners-Lee warned that online political advertising:

"... has rapidly become a sophisticated industry. The fact that most people get their information from just a few platforms and the increasing sophistication of algorithms drawing upon rich pools of personal data mean that political campaigns are now building individual adverts targeted directly at users. One source suggests that in the 2016 U.S. election, as many as 50,000 variations of adverts were being served every single day on Facebook, a near-impossible situation to monitor. And there are suggestions that some political adverts – in the US and around the world – are being used in unethical ways – to point voters to fake news sites, for instance, or to keep others away from the polls. Targeted advertising allows a campaign to say completely different, possibly conflicting things to different groups. Is that democratic?"

What do you think of the assessment by Berners-Lee? Of his solutions? Any other issues?


WikiLeaks Claimed CIA Lost Control Of Its Hacking Tools For Phones And Smart TVs

A hacking division of the Central Intelligence Agency (CIA) has collected an arsenal of hundreds of tools to control a variety of smartphones and smart televisions, including devices made by Apple, Google, Microsoft, Samsung and others. WikiLeaks claimed the agency lost control of this arsenal in a Tuesday, March 7 press release announcing the release of:

"... 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence in Langley, Virginia... Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized "zero day" exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive."

WikiLeaks used the code name "Vault 7" to identify this first release of documents, and claimed its source was a former government hacker or contractor. It also said that its source wanted to encourage a public debate about the CIA's capabilities, which allegedly overlap with those of the National Security Agency (NSA), causing waste.

The announcement also included statements allegedly describing the CIA's capabilities:

"CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation)... By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI), had over 5000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other "weaponized" malware... The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone. Despite iPhone's minority share (14.5%) of the global smart phone market in 2016, a specialized unit in the CIA's Mobile Development Branch produces malware to infest, control and exfiltrate data from iPhones and other Apple products running iOS, such as iPads."

The CIA's capabilities reportedly include the "Weeping Angel" program:

"... developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization. The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS. After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server."

Besides phones and smart televisions, WikiLeaks claimed the agency seeks to hack internet-connected cars and trucks:

"As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations."

No doubt that during the coming weeks and months security experts will analyze the documents for veracity. The whole situation is reminiscent of the disclosures in 2013 about broad surveillance programs by the National Security Agency (NSA). You can read more about yesterday's disclosures by WikiLeaks at the Guardian UK, CBS News, the McClatchy DC news wire, and at Consumer Reports.


Advocacy Groups And Legal Experts Denounce DHS Proposal Requiring Travelers To Disclose Social Media Credentials

Several dozen human rights organizations, civil liberties advocates, and legal experts published an open letter on February 21, 2017 condemning a proposal by the U.S. Department of Homeland Security to require the social media credentials (e.g., usernames and passwords) of all travelers from several majority-Muslim countries. The letter was sent after testimony before Congress by Homeland Security Secretary John Kelly. NBC News reported on February 8:

"Homeland Security Secretary John Kelly told Congress on Tuesday the measure was one of several being considered to vet refugees and visa applicants from seven Muslim-majority countries. "We want to get on their social media, with passwords: What do you do, what do you say?" he told the House Homeland Security Committee. "If they don't want to cooperate then you don't come in."

His comments came the same day judges heard arguments over President Donald Trump's executive order temporarily barring entry to most refugees and travelers from Syria, Iraq, Iran, Somalia, Sudan, Libya and Yemen. Kelly, a Trump appointee, stressed that asking for people's passwords was just one of "the things that we're thinking about" and that none of the suggestions were concrete."

The letter, available at the Center For Democracy & Technology (CDT) website, stated in part (bold emphasis added):

"The undersigned coalition of human rights and civil liberties organizations, trade associations, and experts in security, technology, and the law expresses deep concern about the comments made by Secretary John Kelly at the House Homeland Security Committee hearing on February 7th, 2017, suggesting the Department of Homeland Security could require non-citizens to provide the passwords to their social media accounts as a condition of entering the country.

We recognize the important role that DHS plays in protecting the United States’ borders and the challenges it faces in keeping the U.S. safe, but demanding passwords or other account credentials without cause will fail to increase the security of U.S. citizens and is a direct assault on fundamental rights.

This proposal would enable border officials to invade people’s privacy by examining years of private emails, texts, and messages. It would expose travelers and everyone in their social networks, including potentially millions of U.S. citizens, to excessive, unjustified scrutiny. And it would discourage people from using online services or taking their devices with them while traveling, and would discourage travel for business, tourism, and journalism."

The letter was signed by about 75 organizations and individuals, including the American Civil Liberties Union, the American Library Association, the American Society of Journalists & Authors, the American Society of News Editors, Americans for Immigrant Justice, the Brennan Center for Justice at NYU School of Law, Electronic Frontier Foundation, Human Rights Watch, Immigrant Legal Resource Center, National Hispanic Media Coalition, Public Citizen, Reporters Without Borders, the World Privacy Forum, and many more.

The letter is also available here (Adobe PDF).


High Tech Companies And A Muslim Registry

Since the Snowden disclosures in 2013, there have been plenty of news reports about how technology companies have assisted the U.S. government with surveillance programs. Some of these activities included NSA surveillance programs that swept up innocent citizens, bulk collection of phone call metadata, warrantless searches by the NSA of citizens' phone calls and emails, facial image collection, identification of the NSA's closest corporate collaborator, fake cell phone towers (a/k/a 'stingrays') used by both federal government agencies and local police departments, and automated license plate readers to track drivers.

You may also remember, after Apple Computer's refusal to build a backdoor into its smartphones, the U.S. Federal Bureau of Investigation bought a hacking tool from a third party. Several tech companies built the reform government surveillance site, while others actively pursue "Surveillance Capitalism" business goals.

During the 2016 political campaign, candidate (and now President-elect) Donald Trump said he would require all Muslims in the United States to register. Mr. Trump's words matter greatly given his lack of government experience. His words are all voters had to rely upon.

So, The Intercept asked several technology companies a key question about the next logical step: whether or not they are willing to help build and implement a Muslim registry:

"Every American corporation, from the largest conglomerate to the smallest firm, should ask itself right now: Will we do business with the Trump administration to further its most extreme, draconian goals? Or will we resist? This question is perhaps most important for the country’s tech companies, which are particularly valuable partners for a budding authoritarian."

The companies queried included IBM, Microsoft, Google, Facebook, Twitter, and others. What's been the response? Well, IBM focused on other areas of collaboration:

"Shortly after the election, IBM CEO Ginni Rometty wrote a personal letter to President-elect Trump in which she offered her congratulations, and more importantly, the services of her company. The six different areas she identified as potential business opportunities between a Trump White House and IBM were all inoffensive and more or less mundane, but showed a disturbing willingness to sell technology to a man with open interest in the ways in which technology can be abused: Mosque surveillance, a “virtual wall” with Mexico, shutting down portions of the internet on command, and so forth."

The response from many other companies has mostly been crickets. So far, only executives at Twitter have flatly refused, including with their reply a link to a blog post about developer policies:

"Recent reports about Twitter data being used for surveillance, however, have caused us great concern. As a company, our commitment to social justice is core to our mission and well established. And our policies in this area are long-standing. Using Twitter’s Public APIs or data products to track or profile protesters and activists is absolutely unacceptable and prohibited.

To be clear: We prohibit developers using the Public APIs and Gnip data products from allowing law enforcement — or any other entity — to use Twitter data for surveillance purposes. Period. The fact that our Public APIs and Gnip data products provide information that people choose to share publicly does not change our policies in this area. And if developers violate our policies, we will take appropriate action, which can include suspension and termination of access to Twitter’s Public APIs and data products.

We have an internal process to review use cases for Gnip data products when new developers are onboarded and, where appropriate, we may reject all or part of a requested use case..."

Recently, a Trump-Pence supporter floated this trial balloon to justify such a registry:

"A prominent supporter of Donald J. Trump drew concern and condemnation from advocates for Muslims’ rights on Wednesday after he cited World War II-era Japanese-American internment camps as a “precedent” for an immigrant registry suggested by a member of the president-elect’s transition team. The supporter, Carl Higbie, a former spokesman for Great America PAC, an independent fund-raising committee, made the comments in an appearance on “The Kelly File” on Fox News...

“We’ve done it based on race, we’ve done it based on religion, we’ve done it based on region,” Mr. Higbie said. “We’ve done it with Iran back — back a while ago. We did it during World War II with Japanese.”

You can read the replies from nine technology companies at the Intercept site. Will other companies besides Twitter show that they have a spine? Whether or not such a registry ultimately violates the U.S. Constitution, we will definitely hear a lot more about this subject in the near future.


Some Android Phones Infected With Surveillance Malware Installed In Firmware

Security analysts recently discovered surveillance malware in some inexpensive smartphones that run the Android operating system (OS) software. The malware secretly transmits information about the device owner and usage to servers in China. The surveillance malware was installed in the phones' firmware. The New York Times reported:

"... you can get a smartphone with a high-definition display, fast data service and, according to security contractors, a secret feature: a backdoor that sends all your text messages to China every 72 hours. Security contractors recently discovered pre-installed software in some Android phones... International customers and users of disposable or prepaid phones are the people most affected by the software... The Chinese company that wrote the software, Shanghai Adups Technology Company, says its code runs on more than 700 million phones, cars and other smart devices. One American phone manufacturer, BLU Products, said that 120,000 of its phones had been affected and that it had updated the software to eliminate the feature."

Shanghai ADUPS Technology Company (ADUPS) is privately owned and based in Shanghai, China. According to Bloomberg, ADUPS:

"... provides professional Firmware Over-The-Air (FOTA) update services. The company offers a cloud-based service, which includes cloud hosts and CDN service, as well as allows manufacturers to update all their device models. It serves smart device manufacturers, mobile operators, and semiconductor vendors worldwide."

Firmware is a special type of software, stored in read-only memory (ROM) chips, that operates a device, including how it controls, monitors, and manipulates data within the device. Kryptowire, a security firm, discovered the malware. The Kryptowire report identified:

"... several models of Android mobile devices that contained firmware that collected sensitive personal data about their users and transmitted this sensitive data to third-party servers without disclosure or the users' consent. These devices were available through major US-based online retailers (Amazon, BestBuy, for example)... These devices actively transmitted user and device information including the full-body of text messages, contact lists, call history with full telephone numbers, unique device identifiers including the International Mobile Subscriber Identity (IMSI) and the International Mobile Equipment Identity (IMEI). The firmware could target specific users and text messages matching remotely defined keywords. The firmware also collected and transmitted information about the use of applications installed on the monitored device, bypassed the Android permission model, executed remote commands with escalated (system) privileges, and was able to remotely reprogram the devices.

The firmware that shipped with the mobile devices and subsequent updates allowed for the remote installation of applications without the users' consent and, in some versions of the software, the transmission of fine-grained device location information... Our findings are based on both code and network analysis of the firmware. The user and device information was collected automatically and transmitted periodically without the users' consent or knowledge. The collected information was encrypted with multiple layers of encryption and then transmitted over secure web protocols to a server located in Shanghai. This software and behavior bypasses the detection of mobile anti-virus tools because they assume that software that ships with the device is not malware and thus, it is white-listed."

So, the malware was powerful, sophisticated, and impossible for consumers to detect.
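Kryptowire's network analysis hinged on spotting regular, unexplained transmissions, such as the reported 72-hour upload cycle. A hypothetical sketch of that idea, flagging destination hosts contacted at suspiciously regular intervals (the host names, timestamps, and tolerance value are invented for illustration), could look like:

```python
# Hypothetical sketch of the network-analysis idea: flag destination
# hosts that a device contacts at suspiciously regular intervals,
# a pattern consistent with scheduled data exfiltration.

from collections import defaultdict

def periodic_hosts(connections, tolerance=5.0):
    """connections: list of (timestamp_seconds, host) tuples.
    Returns hosts whose inter-connection gaps are nearly constant."""
    by_host = defaultdict(list)
    for ts, host in sorted(connections):
        by_host[host].append(ts)

    flagged = []
    for host, times in by_host.items():
        if len(times) < 3:
            continue  # need several samples to see a rhythm
        gaps = [b - a for a, b in zip(times, times[1:])]
        if max(gaps) - min(gaps) <= tolerance:
            flagged.append(host)
    return flagged

# Device traffic log: the rogue host is contacted every ~72 hours
# (259,200 seconds); the CDN traffic is irregular and harmless.
log = [(0, "bad.example.cn"), (259200, "bad.example.cn"),
       (518401, "bad.example.cn"),
       (100, "cdn.example.com"), (90000, "cdn.example.com")]
print(periodic_hosts(log))  # ['bad.example.cn']
```

Real traffic analysis must also account for encrypted payloads and legitimate periodic services (update checks, sync), which is why firmware-level malware that ships pre-installed is so hard for consumers to spot.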

This incident provides several reminders. First, there were efforts earlier this year by the U.S. Federal Bureau of Investigation (FBI) to force Apple to build "back doors" into its phones for law enforcement. Reportedly, it is unclear which law enforcement or intelligence services utilized the data streams produced by the surveillance malware. It is probably wise to assume that the Ministry of State Security, China's intelligence agency, had or has access to those data streams.

Second, the incident highlights supply chain concerns raised in 2015 about computer products manufactured in China. Third, the incident indicates how easily consumers' privacy can be compromised by data breaches during a product's supply chain: manufacturing, assembly, transport, and retail sale.

Fourth, the incident highlights Android phone security issues raised earlier this year. We know from prior reports that manufacturers and wireless carriers don't provide OS updates for all Android phones. Fifth, the incident highlights the need for automakers and software developers to ensure the security of both connected cars and driverless cars.

Sixth, the incident raises questions about what, if anything, President-elect Donald J. Trump and his incoming administration will do about this trade issue with China. The Trump-Pence campaign site stated about trade with China:

"5. Instruct the Treasury Secretary to label China a currency manipulator.

6. Instruct the U.S. Trade Representative to bring trade cases against China, both in this country and at the WTO. China's unfair subsidy behavior is prohibited by the terms of its entrance to the WTO.

7. Use every lawful presidential power to remedy trade disputes if China does not stop its illegal activities, including its theft of American trade secrets - including the application of tariffs consistent with Section 201 and 301 of the Trade Act of 1974 and Section 232 of the Trade Expansion Act of 1962..."

This incident places consumers in a difficult spot. According to the New York Times:

"Because Adups has not published a list of affected phones, it is not clear how users can determine whether their phones are vulnerable. “People who have some technical skills could,” Mr. Karygiannis, the Kryptowire vice president, said. “But the average consumer? No.” Ms. Lim [an attorney who represents Adups] said she did not know how customers could determine whether they were affected."

Until these supply-chain security issues are resolved, it is probably wise for consumers to ask before purchase where an Android phone was made. Plenty of customer service sites can help existing Android phone owners determine the country where their device was made. Example: Samsung phone info.
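For readers with some technical skill, a phone's build properties can reveal its manufacturer and firmware origin. Below is a minimal sketch, assuming you have captured the output of `adb shell getprop` from a connected phone; it parses those properties with Python and flags any that mention the firmware vendor. The device name, property names, and vendor string here are illustrative only, not a definitive way to detect this particular malware.

```python
import re

# Sample output in the format produced by `adb shell getprop`.
# Device, property names, and values are hypothetical; real output
# contains hundreds of lines and varies by manufacturer.
GETPROP_DUMP = """\
[ro.product.manufacturer]: [ExampleCo]
[ro.product.model]: [X1]
[ro.build.fingerprint]: [ExampleCo/X1/X1:6.0/MRA58K/001:user/release-keys]
[ro.example.fota.vendor]: [adups]
"""

def parse_getprop(dump):
    """Parse `getprop`-style output into a dict of property -> value."""
    return {m.group(1): m.group(2)
            for m in re.finditer(r"\[([^\]]+)\]: \[([^\]]*)\]", dump)}

props = parse_getprop(GETPROP_DUMP)
print("Manufacturer:", props.get("ro.product.manufacturer"))
print("Build fingerprint:", props.get("ro.build.fingerprint"))

# Flag any property whose name or value mentions the firmware vendor.
suspect = {k: v for k, v in props.items()
           if "adups" in k.lower() or "adups" in v.lower()}
print("Possible Adups-related properties:", suspect)
```

The build fingerprint and manufacturer fields at least tell you who built the device and its firmware; as the Kryptowire executive noted above, anything beyond that is out of reach for the average consumer.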

Should consumers avoid buying Android phones made in China, or Android phones with firmware made in China? That's a decision only you can make for yourself. Me? When I changed wireless carriers in July, I switched from an inexpensive Android phone I'd bought several years ago to an Apple iPhone.

What are your thoughts about the surveillance malware? Would you buy an Android phone?


Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking

[Editor's Note: Today's guest post was originally published by ProPublica on October 21, 2016. It is reprinted with permission.]

Google logo by Julia Angwin, ProPublica

When Google bought the advertising network DoubleClick in 2007, Google founder Sergey Brin said that privacy would be the company's "number one priority when we contemplate new kinds of advertising products."

And, for nearly a decade, Google did in fact keep DoubleClick's massive database of web-browsing records separate by default from the names and other personally identifiable information Google has collected from Gmail and its other login accounts.

But this summer, Google quietly erased that last privacy line in the sand -- literally crossing out the lines in its privacy policy that promised to keep the two pots of data separate by default. In its place, Google substituted new language that says browsing habits "may be" combined with what the company learns from the use of Gmail and other tools.

The change is enabled by default for new Google accounts. Existing users were prompted to opt in to the change this summer.

Revised Google privacy terms

The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on the keywords they used in their Gmail. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.

The move is a sea change for Google and a further blow to the online ad industry's longstanding contention that web tracking is mostly anonymous. In recent years, Facebook, offline data brokers and others have increasingly sought to combine their troves of web tracking data with people's real names. But until this summer, Google held the line.

"The fact that DoubleClick data wasn't being regularly connected to personally identifiable information was a really significant last stand," said Paul Ohm, faculty director of the Center on Privacy and Technology at Georgetown Law.

"It was a border wall between being watched everywhere and maintaining a tiny semblance of privacy," he said. "That wall has just fallen."

Google spokeswoman Andrea Faville emailed a statement describing Google's change in privacy policy as an update to adjust to the "smartphone revolution":

"We updated our ads system, and the associated user controls, to match the way people use Google today: across many different devices," Faville wrote. She added that the change "is 100% optional -- if users do not opt-in to these changes, their Google experience will remain unchanged." (Read Google's entire statement.)

Existing Google users were prompted to opt in to the new tracking this summer through a request with titles such as "Some new features for your Google account."

The "new features" received little scrutiny at the time. Wired wrote that it "gives you more granular control over how ads work across devices." In a personal tech column, the New York Times also described the change as "new controls for the types of advertisements you see around the web."

Connecting web browsing habits to personally identifiable information has long been controversial.

Privacy advocates raised a ruckus in 1999 when DoubleClick purchased a data broker that assembled people's names, addresses and offline interests. The merger could have allowed DoubleClick to combine its web browsing information with people's names. After an investigation by the Federal Trade Commission, DoubleClick sold the broker at a loss.

In response to the controversy, the nascent online advertising industry formed the Network Advertising Initiative in 2000 to establish ethical codes. The industry promised to provide consumers with notice when their data was being collected, and options to opt out.

Most online ad tracking remained essentially anonymous for some time after that. When Google bought DoubleClick in 2007, for instance, the company's privacy policy stated:

"DoubleClick's ad-serving technology will be targeted based only on the non-personally-identifiable information."

In 2012, Google changed its privacy policy to allow it to share data about users between different Google services -- such as Gmail and search. But it kept data from DoubleClick -- whose tracking technology is enabled on half of the top 1 million websites -- separate.

But the era of social networking has ushered in a new wave of identifiable tracking, in which services such as Facebook and Twitter have been able to track logged-in users when they shared an item from another website.

Two years ago, Facebook announced that it would track its users by name across the Internet when they visit websites containing Facebook buttons such as "Share" and "Like" -- even when users don't click on the button. (Here's how you can opt out of the targeted ads generated by that tracking).

Offline data brokers also started to merge their mailing lists with online shoppers. "The marriage of online and offline is the ad targeting of the last 10 years on steroids," said Scott Howe, chief executive of broker firm Acxiom.

To opt-out of Google's identified tracking, visit the Activity controls on Google's My Account page, and uncheck the box next to "Include Chrome browsing history and activity from websites and apps that use Google services." You can also delete past activity from your account.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


FBI Director Calls For A National Discussion About Encryption Versus Safety. What Next?

During a recent speech in San Francisco at the American Bar Association's annual conference, Federal Bureau of Investigation (FBI) Director James Comey suggested a national discussion about encryption versus safety. Comey said that during the past 10 months, the FBI was able to access only 650 of 5,000 electronic devices. And, the agency's inability to access devices will get worse as more people use encryption. So, United States citizens should discuss and decide what balance is desired between privacy and law enforcement's ability to access devices.

I agree. That is a valuable conversation, and it needs to happen. So far, the discussion has been sporadic, prompted largely by former National Security Agency (NSA) contractor Edward Snowden's 2013 disclosures about a secret court order allowing NSA spy programs targeting U.S. citizens. In June, the Electronic Frontier Foundation (EFF) concluded:

"The Snowden leaks caused a sea change in the policy landscape related to surveillance. EFF worked with dozens of coalition partners across the political spectrum to pass the USA Freedom Act, the first piece of legislation to rein in NSA spying in over thirty years—a bill that would have been unthinkable without the Snowden leaks. They also set the stage for a major showdown in Congress over Section 702 of the FISA Amendments Act, the controversial section of law set to expire in 2017 that the government claims authorizes much of the NSA’s Internet surveillance... Perhaps most importantly, the Snowden leaks published over the last three years have helped to realign a broken relationship between the intelligence community and the public. Whistleblowers often serve as a last-resort failsafe when there are no other methods of bringing accountability to secretive processes. The Snowden leaks have helped illuminate how the NSA was operating outside the law with near impunity, and this in turn drove an international conversation about the dangers of near-omniscient surveillance of our digital communications."

However, the situation is far from resolved. Many surveillance programs still operate.

Moreover, who will participate in the discussion -- lawyers or the general population? Director Comey's suggestion was to a room full of lawyers. Plenty of non-lawyers are interested in this discussion.

After the initial Snowden disclosures, a mentor reminded me: "you just can't run away from the Fourth Amendment." Persons and companies need to be able to protect their personal and intellectual property. So, an expectation of privacy is reasonable and necessary. There are plenty of benefits to privacy, so the erosion of these rights by surveillance programs is not a good thing.

You may be surprised to know that the encryption-versus-safety conversation has already begun. An essay in April in the Yale Law Journal by Robert S. Litt, the General Counsel for the Office of the Director of National Intelligence, stated:

"First, I am not proposing a comprehensive theory of Fourth Amendment law. Rather, I want to offer some tentative observations that might be explored in shaping a productive response to the challenges that modern technology creates for existing legal doctrine. In particular, I would like to suggest that the concept of “reasonable expectation of privacy” as a kind of gatekeeper for Fourth Amendment analysis should be revisited.

Second, these thoughts are not informed by deep research into the intent of the Framers, or close analysis of case law or academic scholarship. Rather, they derive from almost forty years of experience in law enforcement and intelligence... I find it hard to understand the alchemy by which information that you choose to disclose to a third party develops an expectation of privacy because you have chosen to disclose a lot of that information. That seems counter-intuitive to say the least..."

"... I suggest that—at least in the context of government acquisition of digital data—we should think about eliminating the separate inquiry into whether there was a “reasonable expectation of privacy” as a gatekeeper for Fourth Amendment analysis. In an era in which huge amounts of data are flowing across the Internet; in which people expose previously unimagined quantities and kinds of information through social media; in which private companies monetize information derived from search requests and GPS location; and in which our cars, dishwashers, and even light bulbs are connected to the Internet, trying to parse out the information in which we do and do not have a reasonable expectation of privacy strikes me as a difficult and sterile task of line-drawing. Rather, we should simply accept that any acquisition of digital information by the Government implicates Fourth Amendment interests...."

"... I agree with those who criticize the broad proposition that any information that is disclosed to third parties is outside the protection of the Fourth Amendment. Courts can appropriately take into account whether information is content or non-content information, whether it is publicly disclosed through social media or is stored in the equivalent of the cloud, or whether its exposure is “voluntary” only in the most technical sense because of the demands of modern technology. But we should not be viewing this analysis of privacy interests as an on/off switch to determine whether or not the Fourth Amendment applies, as today’s third-party doctrine does, but as more of a rheostat to identify the degree of protection that would ensure that the collection and use of that data is reasonable. So the flip-side of my argument is that even where there is a substantial privacy interest in digital data, we should not default immediately to the rule that a warrant is required unless we can fit the collection of such data into one of the twentieth-century exceptions to the warrant requirement..."

I have attempted to highlight relevant sections, but you should read Litt's entire analysis. Cindy Cohn, the Executive Director of the EFF, wrote a rebuttal in July:

"... Mr. Litt makes two initial statements with which I agree. First, he notes that the “reasonable expectation of privacy” test currently employed in Fourth Amendment jurisprudence is a poor test for the digital age. Second, he states that the “third-party doctrine”—under which an individual who voluntarily provides information to a third party loses any reasonable expectation of privacy in that information—should not be an on-off switch for the Fourth Amendment... From there, however, our paths diverge quite sharply.

Mr. Litt argues that since the “reasonable expectation of privacy” formulation is not well suited to digital surveillance, it should simply be eliminated. This would leave a “reasonableness” balancing test to carry the entire weight of the Fourth Amendment’s protection against governmental intrusions. He says that a court in each case should balance the “actual harm” suffered by the individual affected by the surveillance with the governmental interests in conducting the surveillance. This argument throws the baby out with the bathwater. By abandoning the “reasonable expectation of privacy” standard without a suitable replacement, Mr. Litt also implicitly suggests abandoning the foundational constitutional protection against general warrants, as well as the rule that a warrantless search of someone with a reasonable expectation of privacy is per se unconstitutional unless an exception applies..."

"Under current doctrine, since Americans have a reasonable expectation of privacy in the content of their communications, full-content searching is per se unconstitutional unless an exception to the warrant requirement applies. None does. In order to prevail, therefore, the government must convince the Supreme Court to read a broad national security “special needs” exception into the Fourth Amendment authorizing mass, suspicionless seizure and full-content searches of millions of nonsuspect Americans’ most private international and domestic communications. That is a tall order... Such a large implied exception does not readily align with history: the Fourth Amendment contains no national security exception, even though it was adopted in the shadow of the Revolutionary War. Further, the Fourth Amendment was expressly intended to prevent general warrants. The FISA Court of Review—where the government alone presents its case and the arguments and decisions are kept secret—has recognized some form of a national security exception..."

"Moreover, Mr. Litt’s balancing test is unbalanced at its inception. According to his argument, courts can only evaluate the “actual harm” to a single person from mass surveillance because his reformulation retains the caselaw holding that Fourth Amendment rights are personal and cannot be asserted vicariously. Meanwhile, Mr. Litt’s formulation would allow the government to present its interest broadly without also showing “actual” increased safety of Americans as a result of the surveillance, much less the individual safety of the plaintiff."

"More importantly, Mr. Litt’s central claim is that there can be no actual harm when a person’s communications are seized by the government and searched, even with content searching, as long as computers but not humans conduct the search. He says that communications are “unseen and unknown” until they turn up in search results that are shown to a human... This argument—what I call the “human-eyes” theory of the Fourth Amendment—is where we most seriously disagree. Mr. Litt’s “human-eyes” theory would effectively authorize a surveillance state in which a person’s every action and interaction could be technologically monitored and algorithmically analyzed without violating the Fourth Amendment..."

Again, I have tried to highlight relevant sections, but you should read all of Cohn's rebuttal and her summary. This is important stuff. People are thinking about how to modify the Fourth Amendment of the U.S. Constitution.

Both essays are a good start to the encryption-versus-safety discussion, but so far it seems confined to attorneys. Both essays appeared in a legal journal, and Director Comey's speech was to a room full of attorneys. One should not have to be an attorney to understand these issues, and any legislation resulting from the discussions would affect all citizens. So, the discussion needs to be more inclusive; it needs to happen in a way that engages the broader population.

Major newspapers have a role in making this happen. Politicians have a responsibility, too. Senator Ron Wyden (Democrat-Oregon) has been one of too few voices warning citizens. More politicians need to step up their game, or get out of the way for ones willing to do so.

What are your opinions of the encryption versus safety discussion? Of the essays by Litt and Cohn?


Senate Narrowly Rejected Bill To Expand Government Surveillance

While consumers may have been distracted by votes in the U.S. Senate about gun reform or the sit-in within the U.S. House, a key vote also happened last week regarding government surveillance. The U.S. Senate narrowly voted down a bill to grant expanded surveillance powers to the Federal Bureau of Investigation.

According to Reuters, the legislation sought to:

"... broaden the type of telephone and internet records the FBI could request from companies such as the Google unit of Alphabet Inc and Verizon Communications Inc without a warrant... filed as an amendment to a criminal justice funding bill, would widen the FBI’s authority to use so-called National Security Letters, which do not require a warrant and whose very existence is usually a secret. Such letters can compel a company to hand over a user's phone billing records. Under the Senate's change, the FBI would be able to demand electronic communications transaction records such as time stamps of emails and the emails' senders and recipients, in addition to some information about websites a person visits and social media log-in data. It would not enable the FBI to use national security letters to obtain the actual content of electronic communications."

Perhaps more importantly, the bill would have made:

"... permanent a provision of the USA Patriot Act that lets the intelligence community conduct surveillance on “lone wolf” suspects who do not have confirmed ties to a foreign terrorist group. That provision, which the Justice Department said last year had never been used, expires in December 2019."

Senate Amendment 4787 was introduced by Senators John McCain and Richard Burr. It failed 58-38, two votes short of the 60 needed to advance. Before the vote on Wednesday, Senator Ron Wyden (Dem.-Oregon) had warned:

"If this proposal passes, FBI agents will be able to demand the records of what websites you look at online, who you email and chat with, and your text message logs, with no judicial oversight whatsoever. The reality is the FBI already has the power to demand these electronic records with a court order under the Patriot Act. In emergencies the FBI can even obtain the records right away and go to a judge after the fact. This isn’t about giving law-enforcement new tools, it’s about the FBI not wanting to do paperwork.”

Yep. That rejected bill sounds like an erosion of privacy rights. Senate Majority Leader Mitch McConnell (Rep.-Kentucky) has already filed a motion to reconsider the amendment.


The Third Anniversary of Leaks About NSA Surveillance Programs

Three years ago today, the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, the Guardian UK newspaper reported about a court order allowing the NSA to spy on U.S. citizens. The Electronic Frontier Foundation (EFF) summarized events from 2013:

"It started with a secret order written by the FISA court authorizing the mass surveillance of Verizon Business telephone records—an order that members of Congress quickly confirmed was similar to orders that had been issued every 3 months for years. Over the next year, we saw a steady drumbeat of damning evidence, creating a detailed, horrifying picture of an intelligence agency unrestrained by Congress and shielded from public oversight by a broken classification system. The leaks were thanks in large part to whistleblower Edward Snowden, who has been living in Russia for the last three years, unable to return to the United States for fear of spending his life behind bars..."

Since then, we've learned plenty about how extensive the government surveillance apparatus is and how little oversight it receives. We've also learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences of broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, and data collection programs that swept up ordinary, innocent citizens besides legal targets. And while most hi-tech and telecommunications companies assisted the government with its spy programs, AT&T was probably the best collaborator. A scary, extensive list, eh?

Would the public have learned about all of this without the Snowden leaks? I doubt it. So, thanks to Edward Snowden.

And, this list doesn't include the attempt by the Justice Department to force a hi-tech company to build a "back door" into its products to break encryption. It's been a busy three years. The EFF concluded:

"The Snowden leaks caused a sea change in the policy landscape related to surveillance. EFF worked with dozens of coalition partners across the political spectrum to pass the USA Freedom Act, the first piece of legislation to rein in NSA spying in over thirty years—a bill that would have been unthinkable without the Snowden leaks. They also set the stage for a major showdown in Congress over Section 702 of the FISA Amendments Act, the controversial section of law set to expire in 2017 that the government claims authorizes much of the NSA’s Internet surveillance... Perhaps most importantly, the Snowden leaks published over the last three years have helped to realign a broken relationship between the intelligence community and the public. Whistleblowers often serve as a last-resort failsafe when there are no other methods of bringing accountability to secretive processes. The Snowden leaks have helped illuminate how the NSA was operating outside the law with near impunity, and this in turn drove an international conversation about the dangers of near-omniscient surveillance of our digital communications."

It's not over. The EFF compiled a list of 65 things we know thanks to the Snowden leaks, and a timeline of NSA domestic surveillance. And, Vice News has uncovered some of the documents that highlight the discussions among NSA and government officials about the privacy and Constitutional issues Mr. Snowden raised at the agency before the leaks:

"What's remarkable about this FOIA release, however, is that the NSA has admitted that it altered emails related to its discussions about Snowden. In a letter disclosed to VICE News Friday morning, Justice Department attorney Brigham Bowen said, "Due to a technical flaw in an operating system, some timestamps in email headers were unavoidably altered. Another artifact from this technical flaw is that the organizational designators for records from that system have been unavoidably altered to show the current organizations for the individuals in the To/From/CC lines of the header for the overall email, instead of the organizational designators correct at the time the email was sent."

Because none of the people interviewed by the NSA in the wake of the leaks said that "Snowden mentioned a specific NSA program," and "many" of the people interviewed "affirmed that he never complained about any NSA program," the NSA's counterintelligence chief concluded that these conversations about the Constitution and privacy did not amount to raising concerns about the NSA's spying activities. That was the basis for the agency's public assertions... In April 2014, the month after he testified before the European Parliament, Snowden again challenged the NSA's public narrative about his failure to raise concerns at the agency. In advance of the publication of the Vanity Fair story, the magazine posted a preview online on April 8. "The NSA... not only knows I raised complaints, but that there is evidence that I made my concerns known to the NSA's lawyers, because I did some of it through e-mail," he said."

The Vice News article also discussed the lack of whistle-blower protections for contractors like Mr. Snowden.

Citizens give their government certain powers to act on their behalf. Implicit in that decision is trust. Entrusted with those powers, a government (in a democracy) has an obligation to be transparent with its citizens.


Pending Rule 41 Changes Facilitate Government Spying, So Senators Introduce Legislation To Protect Citizens

Late last week, MacDailyNews reported (links added):

"U.S. Senators Ron Wyden, D-Ore., and Rand Paul, R-Ky., yesterday introduced the Stopping Mass Hacking (SMH) Act to protect millions of law-abiding Americans from government hacking. The Stopping Mass Hacking (SMH) Act prevents recently approved changes to Rule 41 from going into effect. The changes would allow the government to get a single warrant to hack an unlimited number of Americans’ computers if their computers had been affected by criminals, possibly without notifying the victims."

This news story caught my attention because you don't often see Senators Wyden and Paul working together. It raises several questions: what is so important? What is going on?

Last summer, this blog briefly discussed the Rule 41 changes the U.S. Justice Department (DOJ) sought. The rule governs how prosecutors obtain search, seizure, and arrest warrants in criminal cases. Given sophisticated computer viruses (e.g., malware) that can take over multiple computers in multiple areas and coordinate attacks by those infected computers (a/k/a botnets), the DOJ sought changes allowing judges to approve warrants when the botnet's location is unknown or in another area, state, or jurisdiction. The Tech Dirt blog covered this well on April 29:

"The DOJ is one step closer to being allowed to remotely access computers anywhere in the world using a normal search warrant issued by a magistrate judge. The proposed amendments to Rule 41 remove jurisdiction limitations, which would allow the FBI to obtain a search warrant in, say, Virginia, and use it to "search" computers across the nation using Network Investigative Techniques (NITs)."

The Tech Dirt blog post also published the relevant section of the pending Rule 41 changes approved by the U.S. Supreme Court (SCOTUS):

"Rule 41. Search and Seizure

(b) Venue for a Warrant Application. At the request of a federal law enforcement officer or an attorney for the government:

(6) a magistrate judge with authority in any district where activities related to a crime may have occurred has authority to issue a warrant to use remote access to search electronic storage media and to seize or copy electronically stored information located within or outside that district if:

(A) the district where the media or information is located has been concealed through technological means; or

(B) in an investigation of a violation of 18 U.S.C. § 1030(a)(5), the media are protected computers that have been damaged without authorization and are located in five or more districts."

The document also says the following about electronic searches:

"(f) Executing and Returning the Warrant.
(1) Warrant to Search for and Seize a Person or Property.
* * * * *
(C) Receipt. The officer executing the warrant must give a copy of the warrant and a receipt for the property taken to the person from whom, or from whose premises, the property was taken or leave a copy of the warrant and receipt at the place where the officer took the property. For a warrant to use remote access to search electronic storage media and seize or copy electronically stored information, the officer must make reasonable efforts to serve a copy of the warrant and receipt on the person whose property was searched or who possessed the information that was seized or copied. Service may be accomplished by any means, including electronic means, reasonably calculated to reach that person."

So, the remote, electronic searching of computers doesn't target only the computers of the defendant suspected of committing a crime, but it also targets innocent people whose computers may or may not have been infected by the computer virus or botnet. How? Government prosecutors can easily craft broad warrants, and/or computer-illiterate judges can approve them.

And, innocent people won't necessarily receive any notice (e.g., the "reasonable efforts") about remote electronic searches of their devices (e.g., desktops, laptops, phones or tablets) located inside or outside their homes. And, that notice might be after the remote electronic searches were completed. Huh? When the government performs broad searches like this, that is called surveillance... spying.

Were you aware of Rule 41? Of the pending changes? Probably not. And, you'd probably agree that innocent persons' computers shouldn't be searched; and if so, advance notice should be provided. This troubles me and I hope that it troubles you, too.

I also find it troubling that the proposed Rule 41 changes were neither discussed nor debated publicly in Congress. Using the proposed Rule 41 changes, the government has found a slick, stealthy way to gain broader powers to spy on U.S. citizens while conveniently ignoring the Fourth Amendment of the U.S. Constitution.

Senator Paul said in a statement:

"The Fourth Amendment wisely rejected general warrants and requires individualized suspicion before the government can forcibly search private information. I fear this rule change will make it easier for the government to search innocent Americans’ computers and undermine the requirement for individual suspicion..."

Senator Wyden said in a statement:

"This is a dramatic expansion of the government’s hacking and surveillance authority. Such a substantive change with an enormous impact on Americans’ constitutional rights should be debated by Congress, not maneuvered through an obscure bureaucratic process... Unless Congress acts before December 1, Americans’ security and privacy will be thrown out the window and hacking victims will find themselves hacked again - this time by their own government."

Proponents of the Rule 41 changes will often argue that the changes are needed to fight child predators and terrorists. A wise person once told me, "you can't just run away from the Fourth Amendment." The ends don't justify the means.

The Computer & Communications Industry Association (CCIA) said:

"The proposed rule change has gone largely unnoticed by the public via a behind-the-scenes process usually reserved for procedural updates. The CCIA has voiced its concern about the government’s requested change for the past two years and we invite other technology advocates to join us in supporting this important legislation... We welcome Senators Wyden and Paul’s efforts to prevent this highly controversial rule change from taking effect. They recognize that the far-reaching implications of the government’s proposed changes merit the full attention of their colleagues in Congress. There are Constitutional, international, and technological questions that ought to be addressed transparently... The government’s proposal is a substantive expansion of its ability to conduct electronic searches, and it deserves a public debate in Congress..."

Peter Goldberger, the Co-Chair of Committee on Rules of Procedure at the National Association of Criminal Defense Lawyers (NACDL) said:

"This is a significant and substantive change to the law masquerading as a procedural rule change.. While it is surely possible to craft a constitutional procedure for digital searches, the rule making process is not sufficient for addressing such fundamental constitutional questions. Only a comprehensive legislative approach, crafted after full public hearings, could possibly deal with all the complex aspects of this issue."

You can read the Stopping Mass Hacking Act (Adobe PDF) text. It's short. I wish that it went further and, a) cited prior legal cases to prevent the remote electronic searches of innocent persons' devices, b) included stronger language to protect innocent persons from the burden of responding to court orders, subpoenas, and searches, and c) prevented the government from hiring a third party to perform the remote electronic searches.

So, now you know. Thankfully, Senators Wyden and Paul are paying attention and have decided to work together. The seriousness of the issue demands it. Senators Tammy Baldwin (D-Wisconsin), Steve Daines (R-Montana), and Jon Tester (D-Montana) are co-sponsors of the Senate bill. Contact your Senators and ask why they do not support the Stopping Mass Hacking (SMH) Act. Then, contact your Representative and demand that he or she support a similar bill in the House of Representatives. Tell them that changes in the law should not masquerade as procedural rule changes.

Opinions? Comments?


Surveillance Capitalism: A Profitable Business Google And Microsoft Agree About

The Guardian reported a major shift at both Google and Microsoft. The tech giants have agreed not to sue each other and to focus upon competing in the marketplace:

"This is a gentleman’s agreement. The specifics are secret, but the message on both sides is that the deal reflects a change in management philosophy. Microsoft’s new chief Satya Nadella is eager to push the vision of a dynamic, collaborative Microsoft, partnering with everyone from Apple to Salesforce."

Microsoft wants to operate in the marketplace that Google already operates in:

"... Microsoft today is facing a very different business ecosystem to the one it dominated in the 1990s. It needs to adapt... what Satya Nadella describes as “systems of intelligence”... cloud-enabled digital feedback loops. They rely on the continuous flow of data from people, places and things, connected to a web of activity. And they promise unprecedented power to reason, predict and gain insight..."

How this relates to "surveillance capitalism":

"For emeritus Harvard Business School professor Shoshana Zuboff, this gets to the core of the Google-Microsoft deal. Zuboff is a leading critic of what she calls “surveillance capitalism”, the monetization of free behavioral data acquired through surveillance and sold on to entities with an interest in your future behavior..."

Whether you call it "systems of intelligence" or "surveillance capitalism," it shouldn't be a surprise. There has long been government surveillance for intelligence and security applications, and for political control. It is more than technologies such as e-mail trackers, canvas fingerprinting, voice-activated interfaces, and targeted advertising (a/k/a behavioral advertising). It is more than companies collaborating with government. It is more than smart meters that automatically collect and wirelessly transmit your water, gas, and electricity consumption.
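The smart-meter example shows how much behavior leaks from supposedly neutral telemetry. Here is a minimal sketch, using entirely made-up hourly readings and an assumed baseload figure, of how interval data alone can suggest when a home is occupied:

```python
# Hypothetical hourly electricity readings (kWh) for one household.
# Any hour not listed drew negligible power.
readings = {
    0: 0.2, 6: 0.3, 7: 1.1, 8: 0.9, 9: 0.2, 17: 0.3,
    18: 1.4, 19: 1.6, 20: 1.2, 22: 0.5, 23: 0.2,
}

BASELOAD = 0.4  # assumed idle draw: fridge, routers, standby devices

# Hours where consumption exceeds the baseload imply someone is home
# and active -- no camera or microphone required.
occupied_hours = sorted(h for h, kwh in readings.items() if kwh > BASELOAD)
print(occupied_hours)  # → [7, 8, 18, 19, 20, 22]
```

Even this toy inference reveals a morning routine and an evening at home; finer-grained interval data can reveal far more.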

This latest news makes much clearer how companies plan to use the combination of cloud-computing services and Internet-of-Things devices installed in smart homes and public spaces.


The Information The FBI Found After Unlocking The San Bernardino Attacker's iPhone

Remember the Federal Bureau of Investigation (FBI) lawsuit using a 227-year-old law to force Apple Inc. to build "back door" software to unlock an iPhone in California? The FBI said it couldn't unlock the phone, claimed the iPhone had important information on it, but later withdrew its lawsuit after it hired an unnamed third party to hack the iPhone. After all of this, you're probably wondering what information the FBI found on that unlocked iPhone.

Guess what they found? Nothing. Nada. Zilch. Zip. Squat. CNN reported:

"Hacking the San Bernardino terrorist's iPhone has produced data the FBI didn't have before and has helped the investigators answer some remaining questions in the ongoing probe, U.S. law enforcement officials say... Investigators are now more confident that terrorist Syed Farook didn't make contact with another plotter during an 18-minute gap that the FBI said was missing from their time line of the attackers' whereabouts after the mass shooting... The phone didn't contain evidence of contacts with other ISIS supporters or the use of encrypted communications during the period the FBI was concerned about."

More confident? Either you're confident or you aren't. That's like being pregnant. You can't be more pregnant. But hey... you gotta love those unnamed sources. Sometimes they're accurate, and other times not.

Let's translate this into plain English. The attacker's phone contained nothing, which the FBI spun as valuable. Wow! That's like saying the bulk collection (e.g., spying) of all U.S. citizens' phone calls and emails was valuable because not finding anything proved they were not doing anything criminal.

Wow! The arrogance. The waste of time, money, and resources. It takes a brass set of balls to spin crap like this and keep a straight face.

Yet, the legal wrangling ain't over. An FBI versus Apple lawsuit in Brooklyn continues. And, as CNN reported:

"Apple and the FBI are squaring off again Tuesday in testimony at a House hearing on encryption..."

Yesterday's blog post discussed everything that is wrong with the Burr-Feinstein draft anti-encryption proposal circulating in the U.S. Senate. The FBI must be feeling pretty cocky, since two Senators have its back while ignoring the consequences.

What are your opinions?


5 Things Wrong With the Burr-Feinstein Anti-Encryption Bill

If you haven't heard, two U.S. Senators proposed a bill that forces technology companies to assist law enforcement and break the encryption built into their products and services. The Just Security blog analyzed the proposed bill, called the Compliance with Court Orders Act of 2016 (CCOA).

The CCOA draft was written by Senators Richard Burr (R-NC) and Dianne Feinstein (D-Calif.), leaders of the Senate Intelligence Committee. Its chief provisions:

"Upon receipt of a court order or warrant for “information or data” sought by a federal, state, local, or tribal government in specific types of investigations or prosecutions, the CCOA requires covered entities to give the government the information or data in an “intelligible” (i.e., unencrypted) format, or to provide any “necessary” technical assistance to render it intelligible. The CCOA only kicks in if the data is “unintelligible” (i.e., encrypted) due to “a feature, product, or service” that is “owned, controlled, created, or provided” by the entity (or by a third party on its behalf). The bill says that no government officer can dictate or prohibit specific design requirements to comply with the law."

Covered entities include tech companies: software developers, device manufacturers, communications providers (wired and wireless), and "remote computing services (RCS)." There are several major things wrong with this proposed legislation:

"In short, the bill prohibits covered entities from designing encryption and other security features so encrypted data is accessible only to the user, not law enforcement nor the entity itself. This is what I would call “effective encryption,” but law enforcement derisively calls “warrant-proof” encryption."

Effective encryption makes sense. It is precisely what both consumers and businesses need to keep sensitive information, proprietary data, and banking transactions private and secure. The Burr-Feinstein proposed bill forces tech companies to build products and services with weaker security:

"...The CCOA would prohibit covered entities in the US from implementing state-of-the-art data security in their products and services... effectively outlaw such cornerstone security concepts as end-to-end encryption, forward secrecy, and HTTPS, which encrypts web traffic against hackers, state-sponsored attackers, and other snoops... It makes covered “license distributors” responsible for the compliance of the software being distributed, meaning Apple’s and Google’s app stores would be on the hook for ensuring every app on offer has weak enough security to meet government standards. It would chill innovation by rendering it largely pointless to work on making software and hardware more secure, because only CCOA-compliant security architectures would be legal."

Think of CCOA-compliant security architectures as GovtOS. The government is forcing tech companies to build a GovtOS. That's wrong. Some of the things wrong with the CCOA:

"2. It can’t stop terrorists and criminals from hiding their activities. The joke in the infosec community used to be that “when crypto is outlawed, only outlaws will use crypto.” The joke’s on Burr and Feinstein... Not only are effective encryption offerings readily available from entities based outside the US, there are already millions upon millions of devices, apps, and software programs presently in use that employ the encryption to be banned going forward. The crypto cat is out of the bag, as New America’s Open Technology Institute put it, and law enforcement’s alarmist and unsupported “going dark” rhetoric can’t hide that fact."

"3. There is no “middle ground” on encryption. This one-sided bill tries to hold itself out as the “middle ground” on encryption... But as cryptography experts have repeatedly explained over the last two decades, there is no middle ground on this issue. Mandating a means of access for law enforcement simply isn’t “appropriate” data security. It is a vulnerability, whose use can’t be limited to “good guys” bearing a court order. This was true 20 years ago and it’s still true today."

That's why many security experts call the CCOA an "anti-encryption" proposal. There's plenty more that's wrong with the CCOA. Read the entire Just Security article.

The CCOA is myopic and wrong. It forces tech companies to build inferior products and services with weaker security; and places U.S.-based tech companies at a disadvantage in the world market. It forces tech companies to do, for free, the investigative work law enforcement should do themselves. The CCOA forces tech companies to build GovtOS, regardless of the negative economic consequences to industry and jobs.
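To see why "effective encryption" leaves a vendor with nothing to hand over, consider a minimal sketch. This is a toy cipher built from Python's standard library for illustration only (real products use vetted ciphers such as AES-GCM); the point it demonstrates is that the key is derived from the user's passphrase and never exists on the vendor's side:

```python
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # The key exists only where the passphrase is typed in.
    # The vendor never sees it, so a court order cannot produce it.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy counter-mode stream cipher: HMAC-SHA256 generates a keystream
    # that is XORed with the data. Encryption and decryption are the
    # same operation. Illustration only -- not production cryptography.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
ciphertext = keystream_xor(key, b"sensitive banking records")

# The right passphrase recovers the data; a wrong one yields noise.
assert keystream_xor(key, ciphertext) == b"sensitive banking records"
wrong = derive_key("password123", salt)
assert keystream_xor(wrong, ciphertext) != b"sensitive banking records"
```

Under a CCOA-style mandate, a design like this would be illegal: the vendor would be required to retain some means of rendering the ciphertext "intelligible," which is exactly the vulnerability security experts warn about.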

If the CCOA bothers you (and I sincerely hope that it does), tell your elected representatives.


FBI Bought Tool To Hack San Bernardino Attacker's iPhone. Plans Brooklyn Court Action To Force Apple To Unlock iPhone

A previous blog post discussed the assistance the U.S. Federal Bureau of Investigation (FBI) has received from an undisclosed company after abandoning its lawsuit against Apple, Inc. regarding the San Bernardino attackers. There have been two important developments this week.

First, CNN reported on Thursday about the hacking method:

"FBI Director James Comey said Wednesday that the government had purchased "a tool" from a private party in order to unlock the iPhone used by one of the San Bernardino shooters... FBI Director James Comey said Wednesday that the government had purchased "a tool" from a private party in order to unlock the iPhone used by one of the San Bernardino shooters."

FBI Director James Comey did not disclose the name of the tool nor the company's name. The CNN news story also discussed whether or not the government will inform Apple about the hacking method:

"Comey said the government was currently considering whether to tell Apple how it pulled off the hack. "We tell Apple, then they're going to fix it, then we're back where we started from," he said. "We may end up there, we just haven't decided yet."

Second, NBC News reported today that the government plans legal action in Brooklyn to force Apple to unlock an iPhone:

"The Justice Department notified a federal judge Friday that it intends to pursue a lawsuit in Brooklyn against Apple, seeking to force the company to open the iPhone of a convicted New York drug dealer. In February, the judge denied the FBI's request to force Apple to open the New York phone, but the Justice Department appealed that ruling... The method a third party provided to open the San Bernardino phone won't work on the Brooklyn phone, federal officials said. "

So the legal fight will continue to force a tech company to build "back door" software into its product. Three things seem clear: a) the FBI wants an updated legal precedent (rather than a 227-year-old law) to force any tech company to build "back door" software into its products and services; b) the FBI believes that it has a stronger case in Brooklyn. Having hacked an iPhone in California, it can argue with more credibility in court why it needs Apple's help in Brooklyn; and c) if successful in court in Brooklyn, the FBI gets investigative tools for free rather than having to pay.

Obviously, news about this story will continue to break. There is so much unknown and undisclosed.


Why iPhones Are Now Less Secure, And How This Affects Everyone

Tuesday's blog post discussed the announcement by the U.S. Department of Justice (DOJ) that it had withdrawn its lawsuit against Apple, Inc. because the Federal Bureau of Investigation (FBI), with the help of an unnamed third party, had successfully unlocked the San Bernardino attacker's iPhone and accessed the information in the device. That blog post also discussed several related issues and implications. The government did not disclose the exact method it used to unlock the iPhone.

Today's blog post explores another related issue: whether the government will inform Apple of the vulnerability. With information about the vulnerability, Apple can improve the security of its iPhones. That will help all iPhone users better protect their privacy. The Washington Post reported:

"The FBI plans to classify this access method and to use it to break into other phones in other criminal investigations."

The article described how security research usually works. When security engineers find a vulnerability, they inform the developer so a fix can be quickly built and distributed to users. Also, other developers learn:

"Vulnerabilities are found, fixed, then published. The entire security community is able to learn from the research, and — more important — everyone is more secure as a result of the work. The FBI is doing the exact opposite... All of our iPhones remain vulnerable to this exploit."

No doubt, the FBI and other U.S. government law enforcement (and spy) agencies will use the vulnerability to unlock more iPhones. People forget that iPhones are used by:

"... elected officials and federal workers and the phones used by people who protect our nation’s critical infrastructure and carry out other law enforcement duties, including lots of FBI agents... The problem with computer vulnerabilities is that they’re general. There’s no such thing as a vulnerability that affects only one device. If it affects one copy of an application, operating system or piece of hardware, then it affects all identical copies..."

The worst-case scenario: by withholding vulnerability information, the government fosters a situation where Apple products are less secure than other brands developed abroad, whose governments freely share vulnerability information. That could negatively affect the tech company's revenues and profitability... meaning lost jobs here.

There is one tiny bit of good news in this mess (bold added):

"The FBI did the right thing by using an existing vulnerability rather than forcing Apple to create a new one, but it should be disclosed to Apple and patched immediately."

So now, the bad guys - criminals, hackers, other governments' spy agencies -- know for sure that a vulnerability exists in newer iPhones. If they look hard enough and long enough, they can find it, too. (Many of the bad guys hire skilled, experienced engineers, too.) Once found, they too can use the vulnerability to hack iPhones.

The government's decision to classify the vulnerability seems myopic at best, and at worst extremely unfriendly to users and businesses. This weakens our defenses. It does not make our defenses stronger.

The government's approach seems to be surveillance trumps privacy. You could say: surveillance by any means necessary (sorry, Malcolm) and damn the consequences. Damn the collateral damage.

Is this wise? Ethical? Is this how you want your government to operate? Was there a debate about this? Did you provide any input to your elected officials? Have they listened?


Justice Department Withdraws Lawsuit Against Apple. Confirms Third Party Successfully Unlocked Attacker's iPhone

The U.S. Justice Department (DOJ) announced on Monday its decision to withdraw its lawsuit to force Apple, Inc. to unlock an iPhone used by one of the San Bernardino attackers. U.S. Attorney Eileen M. Decker, of the Central District of California, made the two-paragraph announcement:

"The government has asked a United States Magistrate Judge in Riverside, California to vacate her order compelling Apple to assist the FBI in unlocking the iPhone that was used by one of the terrorists who murdered 14 innocent Americans in San Bernardino on December 2nd of last year. Our decision to conclude the litigation was based solely on the fact that, with the recent assistance of a third party, we are now able to unlock that iPhone without compromising any information on the phone.

We sought an order compelling Apple to help unlock the phone to fulfill a solemn commitment to the victims of the San Bernardino shooting – that we will not rest until we have fully pursued every investigative lead related to the vicious attack. Although this step in the investigation is now complete, we will continue to explore every lead, and seek any appropriate legal process, to ensure our investigation collects all of the evidence related to this terrorist attack. The San Bernardino victims deserve nothing less."

The announcement confirmed that an undisclosed third party had successfully unlocked the attacker's newer-model iPhone and retrieved information from it without triggering the auto-erase security feature. Rumors suggest that Israel-based Cellebrite is the third party assisting the Federal Bureau of Investigation (FBI). There was also speculation that the National Security Agency (NSA) assisted the FBI.

After a cancelled March 22 court hearing, the government had an April 5 deadline to provide a status to the court. In its original complaint, the government used a 227-year-old law to force the tech company to build software to unlock the newer model iPhone and bypass its security features. The judge agreed and Apple appealed the decision.

The announcement did not mention what, if any, useful information the phone revealed. The government had suspected the device may contain information about other persons working with the attackers.

The legal fight between the FBI and Apple probably is not over. The New York Times reported:

"... what happened in the San Bernardino case doesn’t mean the fight is over,” said Esha Bhandari, a staff lawyer at the American Civil Liberties Union. She notes that the government generally goes through a process whereby it decides whether to disclose information about certain vulnerabilities so that manufacturers can patch them. “I would hope they would give that information to Apple so that it can patch any weaknesses,” she said, “but if the government classifies the tool, that suggests it may not.”

Apple released a brief statement yesterday:

"From the beginning, we objected to the FBI’s demand that Apple build a backdoor into the iPhone because we believed it was wrong and would set a dangerous precedent. As a result of the government’s dismissal, neither of these occurred. This case should never have been brought.

We will continue to help law enforcement with their investigations, as we have done all along, and we will continue to increase the security of our products as the threats and attacks on our data become more frequent and more sophisticated. Apple believes deeply that people in the United States and around the world deserve data protection, security and privacy. Sacrificing one for the other only puts people and countries at greater risk..."

At least for now, engineers at Apple can refocus on improving the device's security without being forced to do investigative work the government should have done. According to TechCrunch:

"... the Department of Justice said the method only works on this phone in particular. But it’s hard to believe this argument as there’s no reason the FBI wouldn’t be able to unlock other iPhones 5c running the same version of iOS 9. Moreover, if the FBI found a software exploit, this exploit should work with all iPhones running on this version of iOS 9 (and most likely the current version of iOS, iOS 9.3)..."

What to make of these events?

If the government didn't find any useful information on the attacker's phone, then this court case has been a huge waste of time and taxpayers' money. There was speculation that the government's strategy was to gain broader legal powers to force tech companies to help it break into encrypted devices. (Reread Decker's announcement above, including "... seek any appropriate legal process...") It didn't get that legal precedent by abandoning the case.

However, two U.S. Senators have drafted proposed legislation giving federal judges such broader powers. The latest proposal was drafted by Senators Richard Burr (Rep.-North Carolina) and Dianne Feinstein (Dem.-California), leading members of the Senate Intelligence Committee. Will this proposal continue now that the government has withdrawn its lawsuit? Should this proposal continue? If it does, that bears watching. I guess the DOJ didn't want to wait for a gridlocked Congress to act next year after elections.

What are your opinions of these events?


FBI vs. Apple: Cancelled Hearing, Draft Legislation, New Decryption Capabilities, And An Outside Party

A lot happened this week. A lot. Below is a recap of key headlines and events involving Apple, Inc. and the U.S. Federal Bureau of Investigation (FBI).

Late during the day on Monday, the government's lawyers got U.S. Magistrate Sheri Pym to cancel a Tuesday March 22 hearing between Apple and the FBI about an earlier court decision forcing Apple to unlock the iPhone used by one of the San Bernardino attackers. Apple did not object to the cancelled hearing. The FBI was ordered to file a status by April 5, 2016. The government filed court papers on Monday explaining why:

"On Sunday, March 20, 2016, an outside party demonstrated to the FBI a possible method for unlocking Farook's iPhone. Testing is required whether it is a viable method that will not compromise data on Farook's iPhone. If the method is viable, it should eliminate the need for assistance from Apple Inc. set forth in the All Writs Act Order in this case."

So, on or before April 5 we will learn whether this outside party successfully demonstrated the ability to unlock and decrypt information stored on this newer-model iPhone without any loss of or damage to the information stored on it.

Are these decryption capabilities a good thing? Ars Technica reported:

"Jennifer Granick, the director of civil liberties at the Stanford Center for Internet and Society, said that these new government decryption capabilities are not good for privacy and ever-expanding government surveillance. "The DOJ doesn't want bad precedent, and I think Apple had the better side in this argument," she told Ars. "Being able to hack helps DOJ for a while. Apple could upgrade beyond the capability..."

Meanwhile, two U.S. Senators have drafted proposed legislation giving federal judges broad powers to force technology companies like Apple to help law enforcement break into encrypted devices. Prior proposals died in Congress. The latest proposal was drafted by Senators Richard Burr (Rep.-North Carolina) and Dianne Feinstein (Dem.-California), leading members of the Senate Intelligence Committee.

Who is this mysterious outside party helping the FBI unlock and decrypt information on newer model iPhones? There has been speculation that the National Security Agency (NSA) was helping the FBI. One would expect the NSA to have the decryption capabilities. BGR explored this on March 4:

"... the NSA can hack into the device but that it doesn’t want to tell that to the FBI because it never likes to reveal what it’s capable of doing. If that were the case, however, why wouldn’t the NSA help the FBI behind the scenes before the FBI went public with its request for Apple’s assistance? And besides, as The Intercept notes, “courts have affirmed the NSA’s legal right to keep its investigative methods secret.” In fact, security experts explained to Wired earlier this week that the FBI could recruit the NSA to connect the iPhone 5c to a Stingray-like rogue cellular network as it’s booting up, which could give the agency the ability to control the device before it even gets to the unlock screen..."

However, Inverse reported on Thursday who else it might be and why:

"Sun Corporation, the company currently getting rich off public speculation that it can help the FBI break into the notorious San Bernardino iPhone was not always such a fierce competitor. While it’s seen the value of its stock rise 36 percent since Reuters reported that the FBI had enlisted its subsidiary, an Israeli-firm called Cellebrite, to unlock the iPhone..."

NPR reported that it might be a publicity stunt by Cellebrite. Will the FBI meet its April 5 deadline? The NPR report discussed a possible decryption approach:

"Computer forensics researcher Jonathan Zdziarski argues that because the FBI has asked courts for only two weeks to test the viability of the new method, it's likely not highly experimental. It's also likely not something destructive, like the "decapping" method that relies on physically shaving off tiny layers of the microprocessor inside the phone to reveal a special code that would let investigators move the data and crack the passcode. The idea that's garnering the most focus is something called chip cloning, or mirroring or transplantation..."

During a press conference on Friday, FBI Director James Comey wouldn't disclose the name of the outside party. USA Today also reported:

"Law enforcement officials Thursday threw cold water on two recent theories on how the FBI was attempting to hack into an iPhone used by one of the San Bernardino terrorists... FBI Director James Comey, in response to a reporter's question at a briefing, said making a copy of the iPhone’s chip in an effort to circumvent the password lockout “doesn’t work”... A widely discussed scenario in the security world, put forward by a staff technologist at the ACLU, has been that the FBI had found a way to remove crucial chips from the iPhone, make digital copies of them and then run multiple passcode attempts against the digital copies, while keeping the phone's software itself untouched. That would avoid tripping the self-erase program built into the iPhone..."

So, who is helping the FBI -- Cellebrite, the NSA, or both? Or another entity?

Another line of speculation is that the FBI has received assistance from the NSA and has decided to use Cellebrite as a false front. Why might this be true? It allows the FBI to reveal (some) investigation methods without revealing the NSA's real methods. I'm no legal expert, but if this is true, I can't see any judge being pleased about being lied to.

We shall see on or before April 5. What are your opinions? Speculation?


Apple Engineers Consider Their Options, The FBI's Goals, And 'Warrant-Proof Phones' Spin

The encryption engineers at Apple are considering their options, should the U.S. Federal Bureau of Investigation (FBI) succeed in forcing their employer to build back doors into one or several iPhones. The New York Times reported:

"Apple employees are already discussing what they will do if ordered to help law enforcement authorities. Some say they may balk at the work, while others may even quit their high-paying jobs rather than undermine the security of the software they have already created, according to more than a half-dozen current and former Apple employees. Among those interviewed were Apple engineers who are involved in the development of mobile products and security, as well as former security engineers and executives."

One explanation for this:

“It’s an independent culture and a rebellious one,” said Jean-Louis Gassée, a venture capitalist who was once an engineering manager at Apple. “If the government tries to compel testimony or action from these engineers, good luck with that.”

The tech company estimated it would take 10 engineers about a month to develop the back-door software, which some have called "GovtOS." That estimate assumed the encryption engineers would be on staff and available. Security experts have warned that more court orders to unlock iPhones will likely follow if the FBI is successful in forcing Apple to unlock the San Bernardino attacker's phone.

Since the "back doors" are really software, that software must be developed, debugged, tested, and documented like any other. Those tasks require a broader team across multiple disciplines; all of which could be working (instead) on other projects that generate revenue. Then, multiply this by multiple unlock demands. Will the government reimburse Apple for the new, broader project team it creates to build back-door software? Will the government reimburse Apple for the opportunity cost from lost projects and revenues the team members could have completed instead? Will the government reimburse Apple for the costs of hiring engineers and workers to replace those who quit? It will be interesting to see how the financial markets evaluate all of this, if the FBI successfully forces Apple to unlock iPhones.

By using a 227-year-old law, it seems that the FBI and Director James Comey want to direct the development work of private companies to do tasks they should do themselves, while ignoring the unintended consequences to business and jobs. (Remember, experts warned in 2014 that NSA spying could cost the tech industry billions of dollars.) Has the government really thought this through? It seems like they haven't.

What are the FBI's goals? An article in Quartz suggested that the FBI is:

"... worried about is the fast-approaching future when its best hackers will be stymied by powerful corporate encryption and security systems. Federal law, in its current state, is of little help. There is no precedent that will allow the government to force a private company to change its security systems so that the FBI can get inside and take a peek. In fact, the Communications Assistance for Law Enforcement Act (CALEA) could be interpreted to restrict the government from doing so. The FBI has apparently decided that it’s time for federal law to change. So its officials have been searching for a particular case that would give them a shot at changing the established legal precedent..."

Learn more about CALEA and the FBI's attempts since 2010 to expand it. An MIT Technology Review article debunked the government's spin and fear-mongering claims about a new era of "warrant-proof phones" (e.g., newer iPhones) and "going dark." There have always been warrant-proof products and services, because analog and paper-based items historically didn't archive or store information. In that sense, government surveillance has always been "dark." While law enforcement may lose some surveillance sources to encryption, the multitude of new technologies, products, services, companies, websites, and mobile apps introduced in the past few years has given it far more sources, with far more detailed information, than it ever had. The old saying applies: they can't see the forest for the trees.

I agree. We definitely live in the golden age of surveillance.

The government's argument is also weak because it ignores that well-funded bad guys, such as drug cartels and terrorist networks, can a) purchase encrypted communications products and services outside the USA, and b) hire engineers and programmers to build and maintain their own encrypted systems.
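That last point is easy to illustrate. Strong encryption isn't a secret Apple invented; the underlying techniques are public and trivially easy to reimplement. As a minimal sketch (my own illustration, not anything from the article), here is a one-time pad in a dozen lines of standard Python. With a truly random key as long as the message, used only once, the ciphertext is mathematically unbreakable, and no court order served on any vendor can recover the message without the key:

```python
import secrets

def otp_encrypt(plaintext):
    """One-time pad: XOR the message with a random key of equal length.

    If the key is truly random, kept secret, and never reused, the
    ciphertext reveals nothing about the plaintext without the key.
    """
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext, key):
    """Reverse the XOR using the same key to recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"meet at dawn"
ciphertext, key = otp_encrypt(message)
assert otp_decrypt(ciphertext, key) == message
```

A one-time pad is impractical for everyday use (keys must be as long as the messages and exchanged securely), but that's beside the point: anyone determined to evade a mandated back door can assemble unbreakable encryption from publicly known building blocks.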

What are your opinions?