Security analysts recently discovered surveillance malware in some inexpensive smartphones that run the Android operating system (OS) software. The malware secretly transmits information about the device owner and usage to servers in China. The surveillance malware was installed in the phones' firmware. The New York Times reported:
"... you can get a smartphone with a high-definition display, fast data service and, according to security contractors, a secret feature: a backdoor that sends all your text messages to China every 72 hours. Security contractors recently discovered pre-installed software in some Android phones... International customers and users of disposable or prepaid phones are the people most affected by the software... The Chinese company that wrote the software, Shanghai Adups Technology Company, says its code runs on more than 700 million phones, cars and other smart devices. One American phone manufacturer, BLU Products, said that 120,000 of its phones had been affected and that it had updated the software to eliminate the feature."
A company profile describes Adups this way: "... provides professional Firmware Over-The-Air (FOTA) update services. The company offers a cloud-based service, which includes cloud hosts and CDN service, as well as allows manufacturers to update all their device models. It serves smart device manufacturers, mobile operators, and semiconductor vendors worldwide."
Firmware is a special type of software stored in read-only memory (ROM) chips that operates a device, including how it controls, monitors, and manipulates data within the device. Kryptowire, a security firm, discovered the malware. The Kryptowire report identified:
"... several models of Android mobile devices that contained firmware that collected sensitive personal data about their users and transmitted this sensitive data to third-party servers without disclosure or the users' consent. These devices were available through major US-based online retailers (Amazon, BestBuy, for example)... These devices actively transmitted user and device information including the full-body of text messages, contact lists, call history with full telephone numbers, unique device identifiers including the International Mobile Subscriber Identity (IMSI) and the International Mobile Equipment Identity (IMEI). The firmware could target specific users and text messages matching remotely defined keywords. The firmware also collected and transmitted information about the use of applications installed on the monitored device, bypassed the Android permission model, executed remote commands with escalated (system) privileges, and was able to remotely reprogram the devices.
The firmware that shipped with the mobile devices and subsequent updates allowed for the remote installation of applications without the users' consent and, in some versions of the software, the transmission of fine-grained device location information... Our findings are based on both code and network analysis of the firmware. The user and device information was collected automatically and transmitted periodically without the users' consent or knowledge. The collected information was encrypted with multiple layers of encryption and then transmitted over secure web protocols to a server located in Shanghai. This software and behavior bypasses the detection of mobile anti-virus tools because they assume that software that ships with the device is not malware and thus, it is white-listed."
So, the malware was powerful, sophisticated, and impossible for consumers to detect.
This incident provides several reminders. First, recall the efforts earlier this year by the U.S. Federal Bureau of Investigation (FBI) to force Apple to build "back doors" into its phones for law enforcement. It is unclear which law enforcement or intelligence services utilized the data streams produced by the surveillance malware, but it is probably wise to assume that the Ministry of State Security, China's intelligence agency, had or has access to them.
Second, the incident highlights supply chain concerns raised in 2015 about computer products manufactured in China. Third, the incident indicates how easily consumers' privacy can be compromised at any point in a product's supply chain: manufacturing, assembly, transport, and retail sale.
Fourth, the incident highlights Android phone security issues raised earlier this year. We know from prior reports that manufacturers and wireless carriers don't provide OS updates for all Android phones. Fifth, the incident highlights the need for automakers and software developers to ensure the security of both connected cars and driverless cars.
Sixth, the incident raises questions about how and what, if anything, President-elect Donald J. Trump and his incoming administration will do about this trade issue with China. The Trump-Pence campaign site stated about trade with China:
"5. Instruct the Treasury Secretary to label China a currency manipulator.
6. Instruct the U.S. Trade Representative to bring trade cases against China, both in this country and at the WTO. China's unfair subsidy behavior is prohibited by the terms of its entrance to the WTO.
7. Use every lawful presidential power to remedy trade disputes if China does not stop its illegal activities, including its theft of American trade secrets - including the application of tariffs consistent with Section 201 and 301 of the Trade Act of 1974 and Section 232 of the Trade Expansion Act of 1962..."
This incident places consumers in a difficult spot. According to the New York Times:
"Because Adups has not published a list of affected phones, it is not clear how users can determine whether their phones are vulnerable. “People who have some technical skills could,” Mr. Karygiannis, the Kryptowire vice president, said. “But the average consumer? No.” Ms. Lim [an attorney who represents Adups] said she did not know how customers could determine whether they were affected."
Until these supply-chain security issues get resolved, it is probably wise for consumers to ask before purchase where an Android phone was made. There are plenty of customer-service sites where existing Android phone owners can determine the country where their device was made. Example: Samsung phone info.
Should consumers avoid buying Android phones made in China or Android phones with firmware made in China? That's a decision only you can make for yourself. Me? When I changed wireless carriers in July, I switched an inexpensive Android phone I'd bought several years ago to an Apple iPhone.
What are your thoughts about the surveillance malware? Would you buy an Android phone?
On October 28, 2016, the Facebook social networking site introduced a new feature that provides its voting-age users with previews of ballot candidates and questions. The site presented users with the following ad:
Like other ads on the site, users can disable this one. Users who select the "Preview Your Ballot" link will next see three pop-up pages which explain the new feature:
Then, users can preview their ballot based upon where they live, which includes national candidates running for office and ballot questions. To view local candidates running for office and local ballot questions, users must provide Facebook with their complete street address:
Within the new feature, users can preview information about each candidate: Issue Positions, Endorsements, Recent Posts, and Website. "Issue Positions" links to content within the candidate's Facebook page. The "Endorsements" and "Recent Posts" selections link similarly. "Website" links to the candidate's external website. Issue Positions includes the topics you might expect: budget, civil rights, economy, education, energy, environment, foreign policy, guns, health, immigration, infrastructure, military, Social Security, taxes, terrorism, and more.
Why did Facebook introduce this new feature? According to a popup within the feature:
"You're seeing this because you may be in a state that has a voter registration deadline or election coming up. We want to help people have their voice heard in the elections this year, so we're showing this message to people who are old enough to vote - no matter who they support.
We send reminders about voting every now and then. If you'd rather not see these in the future, click or tap the in the top right corner of the reminder and select Hide Reminder, then Hide all voting reminders."
"Voting is important... we’re encouraging civic participation. We want to make it easier for people who want to participate to do so, and to have a voice in the political process... Today, we’re introducing a new feature that shows you what’s on the ballot — from candidates to ballot initiatives. We also show you where the candidates stand on the issues...Not all states in America mail out sample ballots ahead of an election. This can make it challenging to find comprehensive information about the questions you’ll be expected to consider when you walk into the voting booth. Thanks to data gathered from election officials by the nonpartisan Center for Technology and Civic Life (CTCL), we can present you with a preview of the ballot you’ll receive on November 8. If you notice an issue with the CTCL data, we’ve built in a way for you to provide feedback and help correct the dataset."
Challenging to find information? What a load of bull. The Internet makes it easy to visit websites for candidates and ballot questions. Plus, information is available in every state. Example: ballot information in Massachusetts is available on the websites of the Secretary of the Commonwealth and the City of Boston. Sample ballots were available during the primaries, too. Every state in the Union has a Secretary of State whose website you should visit anyway for elections and other information. Find your state in this list.
I first saw Facebook's new Elections Ballot feature on November 2, 2016 -- five days after the announcement, and only six days before Election Day on November 8. You'd think that Facebook would have introduced this feature sooner; ideally, as soon as the main parties had nominated their candidates. Facebook didn't. Not good. And, the feature's availability may be too late for early voters.
What else is happening with this new feature? Several items are worth mentioning. First, executives at Facebook are probably well aware that two-thirds of the site's users get their news at the site. This new feature is clearly an attempt to keep users within the Facebook bubble: increase the amount of time on site and the number of pages viewed within the site.
Second, the accuracy of the new feature is suspect. I have never shared my residential address with Facebook, so the elections feature displayed four questions when there are actually five where I live. The fifth question is a local ballot initiative. Users like me, who haven't provided street address information, may get a wrong impression of what's on their ballot -- if they fail to read the fine print. And, we know that too many consumers never read the fine print.
Third, the local candidates and ballot questions are a slick way for Facebook to force users to share their residential street address information. Fourth, the new feature is an opportunity to capture users' voting information. Of course, not the official ballots, but the next closest thing. Users can select which candidates are their Favorites and share them with their Friends: people, coworkers, classmates, family, neighbors, and others they are connected to at the site. Favoriting a candidate within this new feature seems like a pretty explicit and accurate proxy for an official ballot:
Fifth, armed with this ballot information about its users, Facebook can probably charge more to advertisers (e.g., political campaigns, political action committees, pollsters, data brokers) interested in purchasing information about voting populations and/or buying targeted ads at the site. Consider this report by BuzzFeed from November 2014:
"At some point in the next two years, the pollsters and ad makers who steer American presidential campaigns will be stumped: The nightly tracking polls are showing a dramatic swing in the opinions of the electorate, but neither of two typical factors — huge news or a major advertising buy — can explain it. They will, eventually, realize that the viral, mass conversation about politics on Facebook and other platforms has finally emerged as a third force in the core business of politics, mass persuasion.
Facebook is on the cusp — and I suspect 2016 will be the year this becomes clear — of replacing television advertising as the place where American elections are fought and won. The vast new network of some 185 million Americans opens the possibility, for instance, of a congressional candidate gaining traction without the expense of television, and of an inexpensive new viral populism. The way people share will shape the outcome of the presidential election."
It seems that day has arrived. Shape the conversation and outcome, indeed. It's all driven by data -- big data -- data mining.
Sixth, the new feature raises questions and issues for users. Should Facebook know your voting decisions? Does Facebook have a right to know your voting decisions? Has Facebook earned the right to know your voting decisions? Facebook is a money-making enterprise, so it will sell your information to as many other companies as possible. According to the October 28 announcement:
"How you vote is a personal matter, and we’ve taken steps to make sure that you have utmost control over your plan. After you make a selection, you have to choose who you want to be able to see it (“Only me” or “Friends”). For example, you may want to be private about your choice for president, but share with friends your pick for a congressional race or a ballot initiative."
The language in the announcement seems to confusingly refer to the Facebook feature as voting, when it isn't. Do all of your friends need to know your voting preferences? What about friends with Facebook profiles that are open to the general public? In the latter case, anybody wandering in can view your voting information. Is that what you really want?
Not me. What happens in the voting booth stays in the voting booth. I may express concerns on Facebook, but my final vote is private. No doubt, some consumers will share their voting preferences without considering the implications.
I visited the CTCL website and found it underwhelming and lacking the key information needed to understand what this organization really is and does. Not good.
What are your opinions of Facebook's new elections and ballot feature?
Late last month, the U.S. Federal Communications Commission (FCC) adopted new privacy rules that require high-speed Internet service providers (ISPs) to protect the privacy of their customers. The FCC announcement explained the new privacy rules:
"Opt-in: ISPs are required to obtain affirmative “opt-in” consent from consumers to use and share sensitive information. The rules specify categories of information that are considered sensitive, which include precise geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history and the content of communications.
Opt-out: ISPs would be allowed to use and share non-sensitive information unless a customer “opts-out.” All other individually identifiable customer information – for example, email address or service tier information – would be considered non-sensitive and the use and sharing of that information would be subject to opt-out consent, consistent with consumer expectations.
Exceptions to consent requirements: Customer consent is inferred for certain purposes specified in the statute, including the provision of broadband service or billing and collection. For the use of this information, no additional customer consent is required beyond the creation of the customer-ISP relationship.
Transparency requirements that require ISPs to provide customers with clear, conspicuous and persistent notice about the information they collect, how it may be used and with whom it may be shared, as well as how customers can change their privacy preferences;
A requirement that broadband providers engage in reasonable data security practices and guidelines on steps ISPs should consider taking, such as implementing relevant industry best practices, providing appropriate oversight of security practices, implementing robust customer authentication tools, and proper disposal of data consistent with FTC best practices and the Consumer Privacy Bill of Rights.
Common-sense data breach notification requirements to encourage ISPs to protect the confidentiality of customer data, and to give consumers and law enforcement notice of failures to protect such information."
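The consent tiers described above amount to a simple decision rule. The sketch below models that rule in Python; the category names paraphrase the examples in the FCC's announcement, and the function itself is purely illustrative, not real ISP or FCC code:

```python
# Hypothetical sketch of the FCC rule's consent tiers: sensitive data
# requires opt-in consent, non-sensitive data may be used unless the
# customer opts out, and consent is inferred for certain core purposes.

SENSITIVE = {
    "precise geo-location", "financial information", "health information",
    "children's information", "social security number",
    "web browsing history", "app usage history", "content of communications",
}

# Purposes for which consent is inferred from the customer-ISP
# relationship itself, per the rule's statutory exceptions
INFERRED = {"provision of broadband service", "billing and collection"}

def consent_required(data_category: str, purpose: str) -> str:
    """Return the consent tier the rules would require for this use."""
    if purpose in INFERRED:
        return "no additional consent (inferred)"
    if data_category in SENSITIVE:
        return "opt-in"      # affirmative consent required first
    return "opt-out"         # allowed unless the customer opts out

print(consent_required("web browsing history", "advertising"))   # opt-in
print(consent_required("email address", "advertising"))          # opt-out
print(consent_required("web browsing history", "billing and collection"))
```

Note that the same data category lands in different tiers depending on the purpose: browsing history used for billing needs no extra consent, but used for advertising it requires opt-in.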
The new privacy rules prohibit “take-it-or-leave-it” offers, which means an ISP cannot refuse to serve customers who don’t consent to the use and sharing of their information for commercial purposes. The new rules also addressed the desire by ISPs to charge customers more fees for privacy. According to the FCC Fact Sheet:
"Recognizing that so-called “pay for privacy” offerings raise unique considerations, the rules require heightened disclosure for plans that provide discounts or other incentives in exchange for a customer’s express affirmative consent to the use and sharing of their personal information. The Commission will determine on a case-by-case basis the legitimacy of programs that relate service price to privacy protections. Consumers should not be forced to choose between paying inflated prices and maintaining their privacy."
ISPs like Comcast, AT&T, Charter, and Verizon opposed the stricter privacy rules. Google had argued for broader opt-out provisions and privacy rules the same as for websites, not stricter. The U.S. Chamber of Commerce, a political lobbying organization, opposed the stronger privacy rules the FCC proposed in March. Last week, Reuters reported:
"The final regulation is less restrictive than the initial plan proposed by FCC chairman Tom Wheeler in March and closer to rules imposed on websites by the Federal Trade Commission. Republican commissioners said the rules unfairly give websites the ability to harvest more data than service providers and dominate digital advertising."
FCC Chairman Wheeler released a statement on October 27 about the new broadband privacy rules:
"Last week, I visited Consumer Reports’ headquarters in Yonkers, New York, where I toured their product testing facility and met with senior leadership. When looking at a smart refrigerator that collects and shares data over the Internet, the discussion turned to privacy. Who would have ever imagined that what you have in your refrigerator would be information available to AT&T, Comcast, or whoever your network provider is?
The more our economy and our lives move online, the more information about us goes over our Internet Service Provider (ISP) – and the more consumers want to know how to protect their personal information in the digital age.
Today, the Commission takes a significant step to safeguard consumer privacy in this time of rapid technological change, as we adopt rules that will allow consumers to choose how their Internet Service Provider (ISP) uses and shares their personal data.
The bottom line is that it’s your data. How it’s used and shared should be your choice."
The last sentence cannot be over-emphasized. Consumers: it is our information -- our property -- which ISPs use, sell, and make money with. Consumers should decide what data broadband and wireless providers share with marketers. Consumers must be in control.
And, there is more to come as the FCC oversees "pay-for-privacy" schemes by ISPs. So, thanks to the FCC and to Chairman Wheeler for fighting strongly for consumers' online privacy rights. What are your opinions of the new broadband privacy rules?
[Editor's Note: today's blog post is by guest author Cassie Phillips, a technology blogger who developed a special interest in cybersecurity after her webcam was hacked. While she’s interested to see how the Internet of Things changes how we use technology, she is very concerned about all the risks it poses.]
Many people and organizations have raised concerns about the potential risks related to the Internet of Things (IoT). It turns out that they were right to be concerned. Last month, the France-based hosting provider OVH fell victim to an enormous distributed denial-of-service (DDoS) attack targeting the Minecraft servers it was hosting.
DDoS attacks are attempts to make a resource (usually a website) inaccessible to its users through an inundation of requests, aiming to overburden the system. In the past, DDoS attacks were carried out by computers, with or without their owner’s consent. Hot Hardware reported:
“OVH was the victim of a wide-scale DDoS attack that was carried via a network of over 152,000 IoT devices… Of those IoT devices participating in the DDoS attack, they were primarily comprised of CCTV cameras and DVRs.”
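The mechanics of a DDoS attack can be illustrated with a toy simulation. Everything here is hypothetical except the 152,000-device count from the report above; the server capacity and request rates are made-up numbers chosen only to show how legitimate users get crowded out:

```python
# Toy DDoS simulation: a server that can answer a fixed number of
# requests per second is flooded by a botnet, so legitimate requests
# get only a proportional share of the server's capacity.

SERVER_CAPACITY = 1_000   # requests the server can answer per second
LEGIT_RATE = 200          # legitimate requests arriving per second
BOT_COUNT = 152_000       # compromised IoT devices (as in the OVH attack)
BOT_RATE = 0.05           # requests per second sent by each bot

def served_fraction(attack_on: bool) -> float:
    """Fraction of legitimate requests answered in one second."""
    bot_requests = int(BOT_COUNT * BOT_RATE) if attack_on else 0
    total = LEGIT_RATE + bot_requests
    # The server answers arrivals up to its capacity; legitimate
    # traffic receives roughly its proportional share of that capacity.
    answered = min(total, SERVER_CAPACITY)
    return (answered * LEGIT_RATE / total) / LEGIT_RATE

print(f"Normal day:   {served_fraction(False):.0%} of users served")
print(f"Under attack: {served_fraction(True):.0%} of users served")
```

Even with each bot sending only a trickle of traffic, the sheer number of hijacked devices swamps the server, which is exactly why armies of cheap cameras and DVRs are so effective.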
Before the attack on OVH, there was another DDoS attack on the website of prominent internet security researcher Brian Krebs. This attack was also carried out by IoT devices. Akamai Technologies Inc., a provider of security services worldwide for major companies, cut ties with Mr. Krebs because defending his website against such an enormous attack had become too costly. Josh Shaul, Akamai’s vice president, said it was the worst DDoS attack the company had ever seen.
These broad attacks prove that the IoT does pose a significant security risk. And DDoS attacks are by no means the only security risks that the IoT presents. Let’s look at what the IoT is, the risks it presents and, most importantly, how to ensure that any IoT devices you use are secure.
What Is the Internet of Things?
The IoT is the idea that any device can be designed to be able to connect to the internet and other devices. These devices include mobile phones, washing machines, refrigerators, coffee makers, televisions, home thermostats, motion sensors, headphones, Barbie dolls and baby monitors. There is no limit except the imagination.
There are even buildings, cars, and health-related implants (such as pacemakers) that can connect to the internet and to each other. All of these devices can exchange information and collect data, creating a huge pool of information and an enormous network.
What Risks Does the Internet of Things Pose?
As mentioned above, the IoT poses a few risks and concerns. There are four key risks associated with the IoT, with the first being reliability. IoT devices are not necessarily reliable. While this may not be a crisis if the device in question is a refrigerator, it is deadly if devices such as cars fail or are hacked.
The second major risk related to the IoT is privacy. Each device in a network of the IoT can collect and share data. As consumers, we don’t always know who gets this data and what it is used for. The data will almost certainly be used to track consumers’ behavior, allowing companies to target each consumer with tailor-made advertising. While this data probably won’t always be used for nefarious purposes, it can be used in a way that violates our right to privacy. According to Buzzfeed:
"“We were sleeping in bed, and basically heard some music coming from the nursery, but then when we went into the room the music turned off,” said the anonymous mother. They tracked the IP address that had accessed their camera and discovered a website with “thousands and thousands of pictures of cameras just like their own.” Anyone could use the site to access hacked cameras and monitors located in at least 15 different countries."
This leads to the third major risk associated with the IoT, namely security. Again, each of the IoT devices collects and transmits data. If these devices are hacked, criminals will have access to vast amounts of consumers' private information. Depending on the device, criminals can learn our routines, find out what valuables we keep in our homes, gain access to information about any security measures we use, and even collect sensitive information such as financial payment information.
Another security risk is the potential for hacking medical devices and implants. According to a report by the research and advisory firm Forrester, ransomware in medical devices is the single biggest cybersecurity threat for this year. Security researchers have already managed to hack into hospitals’ networks, pacemakers and other medical devices. This puts people’s lives at risk.
The potential for cyberattacks is the fourth major risk associated with the IoT. Because all these devices are connected, they have the potential to spread malware across homes and entire companies. However, the greatest risk lies in criminals’ ability to use our IoT devices in massive cyberattacks, such as the DDoS attack on OVH. Widespread vulnerabilities are only a few missteps away, and that is a seriously concerning fact.
How to Protect Yourself When Using IoT Devices
Given the risks listed above, it’s vital that consumers learn to protect our devices, our homes, and ourselves. The following actions are all essential to your security when using IoT devices: change the default password on every device and on your home router; install firmware and software updates as soon as they are released; disable any features, ports, and remote-access options you don’t use; and, where possible, keep IoT devices on a separate network from your computers and phones.
While it would be best if security features were built into the design of IoT devices, that’s not always the case. So it’s crucial that you implement the security ideas discussed above. Hopefully, we’ll start seeing a move toward creating an international standard for all IoT devices in the future.
Have you had any bad experiences with IoT devices? How do you think the technology is progressing? Share your thoughts in the comments section below.
[Editor's Note: Today's guest post was originally published by ProPublica on October 21, 2016. It is reprinted with permission.]
When Google bought the advertising network DoubleClick in 2007, Google founder Sergey Brin said that privacy would be the company's "number one priority when we contemplate new kinds of advertising products."
And, for nearly a decade, Google did in fact keep DoubleClick's massive database of web-browsing records separate by default from the names and other personally identifiable information Google has collected from Gmail and its other login accounts.
But this summer, Google quietly changed its privacy policy, deleting language that had promised to keep DoubleClick's browsing records separate from personally identifiable information. The change is enabled by default for new Google accounts, and existing users were prompted to opt in to it.
The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on the keywords they used in their Gmail. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.
The move is a sea change for Google and a further blow to the online ad industry's longstanding contention that web tracking is mostly anonymous. In recent years, Facebook, offline data brokers and others have increasingly sought to combine their troves of web tracking data with people's real names. But until this summer, Google held the line.
"The fact that DoubleClick data wasn't being regularly connected to personally identifiable information was a really significant last stand," said Paul Ohm, faculty director of the Center on Privacy and Technology at Georgetown Law.
"It was a border wall between being watched everywhere and maintaining a tiny semblance of privacy," he said. "That wall has just fallen."
"We updated our ads system, and the associated user controls, to match the way people use Google today: across many different devices," Google spokeswoman Andrea Faville wrote in an emailed statement. She added that the change "is 100% optional -- if users do not opt-in to these changes, their Google experience will remain unchanged." (Read Google's entire statement.)
Existing Google users were prompted to opt in to the new tracking this summer through a request with titles such as "Some new features for your Google account."
The "new features" received little scrutiny at the time. Wired wrote that it "gives you more granular control over how ads work across devices." In a personal tech column, the New York Times also described the change as "new controls for the types of advertisements you see around the web."
Connecting web browsing habits to personally identifiable information has long been controversial.
Privacy advocates raised a ruckus in 1999 when DoubleClick purchased a data broker that assembled people's names, addresses and offline interests. The merger could have allowed DoubleClick to combine its web browsing information with people's names. After an investigation by the Federal Trade Commission, DoubleClick sold the broker at a loss.
In response to the controversy, the nascent online advertising industry formed the Network Advertising Initiative in 2000 to establish ethical codes. The industry promised to provide consumers with notice when their data was being collected, and options to opt out.
Among the promises made at the time: "DoubleClick's ad-serving technology will be targeted based only on the non-personally-identifiable information."
But the era of social networking has ushered in a new wave of identifiable tracking, in which services such as Facebook and Twitter have been able to track logged-in users when they shared an item from another website.
Two years ago, Facebook announced that it would track its users by name across the Internet when they visit websites containing Facebook buttons such as "Share" and "Like" -- even when users don't click on the button. (Here's how you can opt out of the targeted ads generated by that tracking).
Offline data brokers also started to merge their mailing lists with online shoppers. "The marriage of online and offline is the ad targeting of the last 10 years on steroids," said Scott Howe, chief executive of broker firm Acxiom.
To opt out of Google's identified tracking, visit the Activity controls on Google's My Account page, and uncheck the box next to "Include Chrome browsing history and activity from websites and apps that use Google services." You can also delete past activity from your account.
ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.
According to a report by the Center on Privacy and Technology (CPT) at Georgetown Law, about 48 percent of adult Americans -- 117 million people -- are already profiled in facial-recognition databases used by law enforcement. The U.S. Federal Bureau of Investigation (FBI) maintains a facial-recognition database, and many local police departments do, too.
Issues raised by findings in the report:
"Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems. And we don’t know how any of these systems—local, state, or federal—affect racial and ethnic minorities."
Facial recognition software is not new, and the report acknowledges that its use is inevitable by law enforcement. The facts include:
"FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos. Roughly one in two American adults has their photos searched this way... Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic..."
The report does not seek to stop the use of facial-recognition software, and it acknowledges that most law enforcement personnel do not want to invade citizens' privacy. Rather, it raises concerns because the data collected primarily covers law-abiding citizens, not just criminals, and because of the lack of transparency and regulation regarding accuracy, training, and deployment. Some of the uses that raise concerns:
"Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera... at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras, bought technology that can do so, or expressed a written interest in buying it... A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible. While some agencies, like the San Diego Association of Governments, limit themselves to more targeted use of the technology, others are embracing high and very high risk deployments."
The report described specific examples of usage at the state and local levels:
"No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences. The Maricopa County Sheriff’s Office enrolled all of Honduras’ driver’s licenses and mug shots into its database. The Pinellas County Sheriff’s Office system runs 8,000 monthly searches on the faces of seven million Florida drivers—without requiring that officers have even a reasonable suspicion before running a search..."
A major concern the report discussed is the:
"... real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
Another major concern the report discussed:
"Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing. One major face recognition company, FaceFirst, publicly advertises a 95% accuracy rate but disclaims liability for failing to meet that threshold in contracts with the San Diego Association of Governments... Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time... an FBI co-authored study suggests that face recognition may be less accurate on black people..."
Regarding the lack of transparency by law enforcement:
"Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy... Maryland’s system, which includes the license photos of over two million residents, was launched in 2011. It has never been audited. The Pinellas County Sheriff’s Office system is almost 15 years old and may be the most frequently used system in the country. When asked if his office audits searches for misuse, Sheriff Bob Gualtieri replied, “No, not really.” Despite assurances to Congress, the FBI has not audited use of its face recognition system, either..."
Learn more about the expanded facial-recognition system the FBI deployed in 2014. The New York Times reported last year about some of the problems:
"Facial recognition software, which American military and intelligence agencies used for years in Iraq and Afghanistan to identify potential terrorists, is being eagerly adopted by dozens of police departments around the country to pursue drug dealers, prostitutes and other conventional criminal suspects. But because it is being used with few guidelines and with little oversight or public disclosure... Law enforcement officers say the technology is much faster than fingerprinting at identifying suspects, although it is unclear how much it is helping the police make arrests... "
The CPT report proposed several solutions to address these privacy concerns.
The year-long investigation by the CPT included more than 100 records requests to police departments around the country. Read the full report: "The Perpetual Line-up: Unregulated Police Face Recognition in America."
We know the National Security Agency (NSA) uses facial-recognition software. Some law enforcement agencies probably acquire photos and related information from it, too. If so, this should be disclosed. In 2012, the U.S. Federal Trade Commission (FTC) proposed guidelines for the use of facial recognition by social networking sites, companies, and retail stores. Since governments are supposed to report to and serve citizens, similar guidelines should apply to law enforcement.
What are your opinions of real-time facial-recognition surveillance? Of the issues raised by the CPT report?
Earlier this month, the Attorney General for the State of New York (NYSAG) announced settlement agreements with the operators of several popular websites for the illegal online tracking of children, which violated the Children's Online Privacy Protection Act (COPPA). The website operators agreed to pay a total of $835,000 in fines and to implement a comprehensive set of requirements and changes.
COPPA, passed by Congress in 1998 and updated in 2013, prohibits the unauthorized collection, use, and disclosure of children’s personal information (e.g., first name, last name, e-mail address, IP address, etc.) on websites directed to children under the age of 13, including the collection of information for tracking a child’s movements across the Internet. The 2013 update expanded the list of personal information items, and prohibits covered operators from using cookies, IP addresses, and other persistent identifiers to track users across websites for most advertising purposes, amassing profiles on individual users, and serving targeted behavioral advertisements.
The NYSAG operated a program titled "Operation Child Tracker," which analyzed the most popular children’s websites for any unauthorized tracking. The analysis found that four website operators included third-party tracking on their websites -- which is prohibited by COPPA -- and failed to properly evaluate third-party companies, such as advertisers, advertising networks, and marketers. The website operators and their properties included Viacom (websites associated with Nick Jr. and Nickelodeon), Mattel (Barbie, Hot Wheels, and American Girl), JumpStart (Neopets), and Hasbro (My Little Pony, Littlest Pet Shop, and Nerf).
Regular readers of this blog are familiar with the variety of technologies and mechanisms companies have used to track consumers online: web browser cookies, “zombie cookies,” Flash cookies, “zombie e-tags,” super cookies, “zombie databases” on mobile devices, canvas fingerprinting, and augmented reality (which tracks consumers both online and in the physical world). For example, a web browser cookie is a small text file placed by a website on a user’s computer and stored by the user’s web browser. Every time the user returns, the website retrieves all cookie files it previously stored on that user’s computer. Some website operators share the information contained in these cookies with third-party companies, such as marketing affiliates, advertisers, and tracking companies. This allows web browser cookies to be used to track a user’s browsing history across several websites.
All of this happens in the background without explicit notices in the web browser software, unless the user configures their web browser to provide notice and/or to delete all browser cookies stored. The other technologies represent alternative methods with more technical sophistication and stealth.
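The basic mechanics can be sketched with Python's standard-library cookie parser. This is a minimal illustration only; the domain and "uid" identifier below are made up:

```python
from http.cookies import SimpleCookie

# A response from a third-party ad server can carry a Set-Cookie header
# like this one, assigning the visitor a persistent identifier.
# (The domain and "uid" value here are hypothetical.)
set_cookie_header = "uid=a1b2c3d4; Domain=.adtracker.example; Path=/; Max-Age=31536000"

cookie = SimpleCookie()
cookie.load(set_cookie_header)

morsel = cookie["uid"]
print(morsel.value)      # the persistent identifier the browser echoes back
print(morsel["domain"])  # every site embedding content from this domain
                         # receives the cookie, enabling cross-site tracking
```

Because the browser automatically attaches the stored cookie to every later request matching that domain, any page that embeds an ad or tracking pixel from the same third party lets the tracker link those visits to the same identifier.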
The announcement by the NYSAG described each website operator's activities:
"Viacom operates the Nick Jr. website, at www.nickjr.com, and the Nickelodeon website, at www.nick.com... The office of the Attorney General found a variety of improper third party tracking on the Nick Jr. and Nickelodeon websites. These included:
1. Many advertisers and agencies that placed advertisements on Nick Jr. and Nickelodeon websites introduced tracking technologies of third parties that routinely engage in the type of tracking, profiling, and targeted advertising prohibited by COPPA. Viacom considered several approaches to mitigate the risk of COPPA violations from these third parties, including removing adult advertising from a child-directed section of the Nick Jr. website and monitoring advertisements for unexpected tracking... However, Viacom did not timely take either approach and did not implement sufficient safeguards for its users.
2. Some visitors to the homepage of the Nick Jr. website were served behavioral advertising and tracked through a third party advertising platform Viacom used to serve advertisements. Although Viacom considered the homepage of the Nick Jr. website to be parent-directed, and thus not covered by COPPA, the homepage had content that appealed to children. Under COPPA, website operators must treat mixed audience pages as child-directed..."
"... 26 of Mattel’s websites feature content for young children, including online games, animated cartoons, and downloadable content such as posters, computer desktop wallpaper, and pages for young children to color... The office of the Attorney General found that a variety of improper third party tracking technologies were present on Mattel’s child-directed websites and sections of websites. These included:
1. Mattel deployed a tracking technology supplied by a third party data broker across its Barbie, Hot Wheels, Fisher-Price, Monster High, Ever After High, and Thomas & Friends websites. Mattel used the tracking technology for measuring website metrics, such as the number of visitors to each site, a practice permitted under COPPA. However, the tracking technology supplied by the data broker introduced many other third party tracking technologies in a process known as “piggy backing.” Many of these third parties engage in the type of tracking, profiling, and targeted advertising prohibited by COPPA.
2. A tracking technology that Mattel deployed on the e-commerce portion of the American Girl website, which is not directed to children or covered by COPPA, was inadvertently introduced onto certain child-directed webpages of the American Girl website.
3. Mattel uploaded videos to Google’s YouTube.com, a video hosting platform, and then embedded some of these videos onto the child-directed portion of several Mattel websites, including the Barbie website. When the embedded videos were played by children, it enabled Google tracking technologies, which were used to serve behavioral advertisements."
"... several improper third party tracking technologies were present on the Neopets website, both for logged-in users under the age of 13 and users who were not logged-in. These included:
1. JumpStart failed to configure the advertising platform used to serve ads on the Neopets website in a manner that would comply with COPPA. As a result, users under the age of 13 were served behavioral advertising and tracked through the advertising platform.
2. JumpStart integrated a Facebook plug-in into the Neopets website... Facebook uses the tracking information for serving behavioral advertising, among other things, unless the website operator notifies Facebook with a COPPA flag that the website is subject to COPPA. JumpStart did not notify Facebook that the Neopets website was directed to children."
"... several improper third party tracking technologies were present on Hasbro’s child-directed websites and sections of websites. These included:
1. Hasbro engaged in an advertising campaign that tracked visitors to the Nerf section of Hasbro’s website in order to serve Hasbro advertisements to those same users as they visited other websites at a later time, a type of online behavioral advertising prohibited by COPPA known as “remarketing.”
2. Hasbro integrated a third-party plug-in into many of its websites, that allowed users to be tracked across websites and introduced other third parties that engaged in the type of tracking, profiling, and targeted advertising prohibited under COPPA.
It is important to note that Hasbro participated in a safe harbor program. A website operator that complies with the rules of an FTC-approved safe harbor program is deemed in compliance with COPPA. However, safe harbor programs rely on full disclosure of the operator’s practices and Hasbro failed to disclose the existence of the remarketing campaign through the Nerf website."
The terms of the settlement agreements require a series of changes by the website operators.
Kudos to the NYSAG office and staff for a comprehensive analysis and enforcement to protect children's online privacy. This type of analysis and enforcement is critical as companies introduce more Internet-connected toys and products classified as part of the Internet of Things (IoT).
Last week, Google settled a long-running class-action lawsuit by agreeing to a $5.5 million payment for ignoring the privacy settings used by Safari browser users. Silicon Beat reported:
"The lawsuit arose out of the 2012 discovery by a Stanford researcher that Google had used a workaround to track Safari users’ web browsing habits. Apple, which owns Safari, had built into it privacy controls that blocked certain cookies, small files that store information that can identify users or track their activities. Google used the improperly harvested user data to dramatically boost ad revenue, the lawsuit suggested. “Behaviorally targeted advertisements based on a user’s tracked internet activity generally sell for at least twice as much as non-targeted, run-of-network ads,” the suit said."
"After Google’s practice came to light, the company agreed to pay $17 million to state attorneys general over privacy violations, and another $22.5 million to the Federal Trade Commission for violating the terms of an earlier settlement. In both cases, Google denied any wrong-doing—an outcome an FTC Commissioner then described as “inexplicable.”"
According to the settlement agreement:
"Plaintiffs centrally allege in the Complaint that Defendant Google circumvented Plaintiffs' Safari and Internet Explorer browsers and defeated the default cookie settings of such browsers in violation of federal and state laws. More particularly, Plaintiffs allege that when Plaintiffs and Class Members visited a website containing an advertisement placed by certain Defendants in this case, tracking cookies were placed on Plaintiffs' computers that circumvented Plaintiffs' and Class Members' browser settings that blocked such cookies... The Settlement Class consists of all persons in the United States of America who used the Apple Safari or Microsoft Internet Explorer web browser and who visited a website from which Doubleclick.net (Google's advertisement-serving service) cookies were placed by the means alleged in the Complaint..."
The terms of the settlement agreement require Google to make payments to counsel and to several nonprofit technology and privacy advocacy groups (instead of class members): the Berkeley Center for Law & Technology, the Berkman Center for Internet & Society at Harvard University, the Center for Democracy & Technology (Privacy and Data Project), Privacy Rights Clearinghouse, and the Center for Internet & Society at Stanford University (Consumer Privacy Project).
The technology giant paid $7 million in 2013 to 38 states to settle unauthorized wireless data collection by Google Streetview cars. Also in 2013, the company admitted its Android operating-system software included code by the NSA. In 2015, Google's holding company dropped the "Don't be evil" motto.
Don't be evil? Apparently, that ship has sailed and isn't returning. "Catch us if you can" might be a more accurate motto.
Some big Internet service providers (ISPs) want consumers to pay for privacy. Earlier this month, both Comcast and the CTIA-The Wireless Association (formerly known as the Cellular Communications Industry Association) submitted comments about the broadband privacy rules proposed by the U.S. Federal Communications Commission (FCC) in April. Portions of the CTIA's comments:
"Finally, we briefly noted that allowing consumers a variety of options regarding whether to receive a discount on broadband service in exchange for personalized advertising should be preserved. Hybrid payment models have been in commerce for centuries, including advertising supported magazines, grocery store loyalty programs, and app-based discount programs for retail establishments. Many internet companies rely on use of consumer data as their sole source of income, like search engines and social networks. Such offerings can lead to significant cost savings for all consumers, enable more valuable services for consumers, and mirror much of the economic activity that consumers expect. On this point, we provided a copy of a recent report by the Information Technology & Innovation Foundation, titled “Why Broadband Discounts for Data are Pro-Consumer,” which is attached to this filing."
Let's unpack this. The industry argues that ISPs should be able to charge their customers for privacy, since many ISPs rely upon using (and reselling) their customers' information to make money. This would be an opt-out arrangement, since the default is that customers' information is used and resold. There are several problems with this approach:
The CTIA's position seems to have followed Comcast's position. Portions of August 1, 2016 comments submitted by Comcast to the FCC:
"We also urged that the Commission allow business models offering discounts or other value to consumers in exchange for allowing ISPs to use their data. As Comcast and others have argued, the FCC has no authority to prohibit or limit these types of programs. Moreover, such a prohibition would harm consumers by, among other things, depriving them of lower-priced offerings... A bargained-for exchange of information for service is a perfectly acceptable and widely used model throughout the U.S. economy, including the Internet ecosystem, and is consistent with decades of legal precedent and policy goals related to consumer protection and privacy.

Finally, we discussed how Comcast has partnered with vendors who have helped to enhance consumer data privacy, and that the Commission should be clear that any rules it adopts do not prevent ISPs from providing CPNI to a vendor based on implied consent, provided the ISP has an agreement with the vendor requiring it to safeguard the CPNI and to use it solely on behalf of and as directed by the ISP..."
The same problems I listed above also apply to Comcast's comments. This is not theory. MotherBoard reported:
"Telecom giant AT&T already offers such a [pay for privacy] plan, called “Internet Preferences,” which tempts consumers with “best pricing” if they are willing to let the company “use your individual web browsing information, like the search terms you enter and the web pages you visit, to tailor ads and offers to your interests.” Users who opt-out of "Internet Preferences," which DSLReports calls a “deep packet inspection program that tracks your browsing behavior around the internet—down to the second,” face a $30 premium on their monthly bill."
Gigaom reported:

"But $29 isn’t actually the price that AT&T charges per month for privacy. As I discussed back in May last year after I tried to sign up for AT&T’s GigaPower service to find out more about the pricing and the disclosures associated with the plan, the actual costs were closer to $44 or even $62 per month. This time around the price differentials are $44 for gigabit internet and $66 for HD TV and HBO Go plus gigabit internet."
Like anything else, the devil is in the details. Premiums of $44 and $62 monthly both sound excessive. Apparently, the more services a consumer has, the more privacy costs. Regular readers of this blog already know about CPNI notices from AT&T.
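For perspective, annualizing the monthly premiums quoted above shows how quickly the cost of privacy adds up. A quick back-of-the-envelope calculation using the figures from the articles:

```python
# Monthly premiums reported for opting out of AT&T's tracking program:
# the advertised $30, plus the larger effective differentials reported above.
monthly_premiums = {
    "advertised premium": 30,
    "gigabit internet": 44,
    "HD TV + HBO Go + gigabit internet": 66,
}

# Annualized cost of keeping your browsing data private:
for plan, monthly in monthly_premiums.items():
    print(f"{plan}: ${monthly * 12} per year")  # e.g., $30/month -> $360/year
```

Even the lowest advertised figure works out to $360 per year; the reported differentials put the real annual cost of privacy at $528 to $792.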
The problems I see with both Comcast's and the wireless industry's pay-for-privacy positions are rooted in a lack of trust. The ISP industry has a long history of abuses, customer-service failures, and a lack of transparency. Both the Gigaom and MotherBoard articles mentioned above highlight problems and failures, plus:
Consumers are rightfully wary and skeptical of pay-for-privacy schemes. Plus, consumers have no way to confirm that in a pay-for-privacy scheme their information is not being reused and resold anyway.
A solution based upon transparency would promote trust: regular privacy audits by an independent third party to verify that consumers who pay a privacy premium are getting what they paid for.
To me, the whole thing smells like another excuse for ISPs to increase prices on services that are already too expensive and too slow. What do you think?
The game's popularity exploded after a July 6 launch in Australia, New Zealand, and the United States: 7.5 million downloads during its first week; 50 million downloads from Google Play during its first month; and it was Wikipedia's most visited article by mid-July. (View the game's Wikipedia pageviews.) Everyone noticed. Early in July, a former advertising coworker joked on Facebook:
" 'How about we partner with Pokemon Go?' -- Said in every office at every agency for every client this morning."
Probably. The augmented-reality (AR) mobile game requires players to travel real-life streets to find and capture digital characters superimposed on locations and displayed on the screens of players' phones. The game's screens also display PokeStops and gyms, locations superimposed on real-life landmarks. The CNN video at the end of this blog post provides a good summary. The Apple iTunes site explains important game details:
"Search far and wide for Pokémon and items: Certain Pokémon appear near their native environment—look for Water-type Pokémon by lakes and oceans. Visit PokéStops, found at interesting places like museums, art installations, historical markers, and monuments, to stock up on Poké Balls and helpful items... As you level up, you’ll be able to catch more-powerful Pokémon to complete your Pokédex. You can add to your collection by hatching Pokémon Eggs based on the distances you walk... Take on Gym battles and defend your Gym: As your Charmander evolves to Charmeleon and then Charizard, you can battle together to defeat a Gym and assign your Pokémon to defend it against all comers."
For many players, Pokemon Go has been a nostalgic return to their youth, when Pokemon existed in cartoons, video games, and board games. Some experts have speculated that the game's popularity, as measured by daily active users, may have peaked in the United States.
What do we know so far about the AR game? What has happened since the game's launch? What happens when a mobile game combines fantasy with real-life locations? Are non-players affected? What might be the implications for future AR games? I looked for answers, found plenty, and organized my findings into good, bad, and ugly categories -- with apologies to Mr. Leone and Mr. Eastwood.
"... Pokemon Go’s game designers have perfectly executed on the “Hook Model” — a framework for gamification and getting users to come back again and again and again."
Advocates have said that the game has gotten gamers off of their couches (e.g., butts) and out into the real world to get exercise, meet people, and explore locations they probably wouldn't have visited otherwise. Sounds good.
Within the game, PokeStops and gyms are located in publicly accessible locations, such as theme parks, gardens, and museums. This has increased sales at some nearby small businesses. IGN reported on July 21:
"Bok Tower Gardens, a “contemplative garden” and National Historic Landmark located in Lake Wales, Fl, is saturated with PokeStops. The non-profit recorded a 10 to 15 percent increase in ticket sales during the first week of Pokemon Go’s release... So far, the only way to become a PokeStop or gym is to send in a request to Niantic Labs, but it isn't likely to be accepted unless the location is one of cultural significance or in a Pokemon Go deadzone."
The Twitter account Pokemon Archaeology catalogs Pokemon sightings in historic locations. The National Park Service (NPS) has welcomed gamers in many of its parks, but not at memorial sites. Some National Parks have featured programs with the game. Earlier this month, the Sleeping Bear Dunes National Lakeshore offered a new program called "Pokemon Hunt:"
"... to connect “Pokemon Go!” with real-world flora and fauna... This interactive, ranger-guided walk will allow visitors to uncover the creatures, both physical and virtual, that can be found within the National Lakeshore. They will learn how these creatures do or do not fit in with the rest of the environment, and what can be done to help them thrive. At the end of the program, visitors will be able to design their own Pokemon. “Trainers” of all ages are welcome."
Some local businesses near colleges and universities experienced increased sales from gamers. Minnesota Daily reported:
"Many local Minneapolis businesses have considered, or implemented, special promotions to attract more mobile-gamers. Last week, Sencha Tea Bar in Stadium Village released three special shakes in correspondence with the three color teams of the game — red, yellow and blue — said store manager Josh Suwaratana. Suwaratana said the store does special shakes for other occasions, so the Pokemon shakes weren’t anything out of the ordinary... Sencha is also located next to a Pokestop — a real-life location where players can obtain items in the game. Suwaratana said the proximity to the Pokestop has helped business attract players."
"... I would encourage parents to seize the opportunity for their children to capitalize on this gaming experience while at the park or when running errands. My advice is not to judge this new gaming experience as all bad and in need of limits. Rather let’s embrace a step toward video games and virtual reality that may one day be tailored to inspiring those we love with autism spectrum disorder (ASD) to leave the house and receive points/rewards/tokens for gathering information from other people they encounter in the store, at work, or at a place of leisure. To me that sounds an awful lot like what I have been trying to get them to do by learning social skills in my office each week..."
To focus the world's attention upon the impacts to citizens and children, activists have added Pokemon characters to images from war zones. C/Net reported on July 26 that Khaled Akil, a Syrian artist:
"... has taken Pokemon Go creatures and Photoshopped them into pictures of his war-torn homeland, presenting a stark contrast between the whimsy of the augmented-reality game and the sobering day-to-day realities of war... In one image, a young boy walks his bike through a street lined by bombed-out buildings, a Vaporeon by his side. In another, a Pikachu rests on a block of rubble next to a burning car... the activist group Revolutionary Forces of Syria Media Office has been tweeting poignant photos of kids holding up printouts of popular Pokemon creatures, along with their locations, which are identified as being near areas of heavy fighting, and the words 'save me'..."
To view photos, follow the links in the C/Net article to Akil's website and Instagram account.
The Niantic Terms of Service policy clearly encourages safe game play and describes players' responsibilities:
"During game play, please be aware of your surroundings and play safely. You agree that your use of the App and play of the game is at your own risk, and it is your responsibility to maintain such health, liability, hazard, personal injury, medical, life, and other insurance policies as you deem reasonably necessary for any injuries that you may incur while using the Services. You also agree not to use the App to violate any applicable law, rule, or regulation (including but not limited to the laws of trespass) or the Trainer Guidelines, and you agree not to encourage or enable any other individual to violate any applicable law, rule, or regulation or the Trainer Guidelines. Without limiting the foregoing, you agree that in conjunction with your use of the App you will not inflict emotional distress on other people, will not humiliate other people (publicly or otherwise), will not assault or threaten other people, will not enter onto private property without permission, will not impersonate any other person or misrepresent your affiliation, title, or authority, and will not otherwise engage in any activity that may result in injury, death, property damage, and/or liability of any kind."
The "Conduct, General Prohibitions, and Niantic’s Enforcement Rights" section of the policy also lists the responsibilities of players, including players will not:
"... trespass, or in any manner attempt to gain or gain access to any property or location where you do not have a right or permission to be..."
So, it is important for players to know their responsibilities. Do they? Keep reading.
Foot traffic by gamers in public parks hasn't been all good. Some gamers have ignored local laws and ordinances. WPRI in Providence, Rhode Island reported:
"Members of the East Providence Police Department said “Pokemon Go” has drawn huge crowds of people to local parks after hours... Officers say they have responded to several calls about the crowds. “They are very peaceful, they’re not causing problems, but it is in a public area – in public parks – and people who live in those areas do deserve to have their rest at night,” said Maj. William Nebus of the East Providence Police Department. “Our parks do close at 9 p.m. and just to have 200 people lurking in overnight hours is not peaceful to the residents.”"
Law enforcement in Michigan ticketed players for misdemeanors after late-night game play around 12:30 a.m. Nearby property owners have found players intrusive. There are two implications. First, it's important for players to understand and comply with local town ordinances and hour restrictions. Second, taxpayers will likely absorb the additional costs of park maintenance, clean-up, and law-enforcement patrols needed to address the increased foot traffic in local parks.
It's critical for players to remain alert. In somewhat weird news, a gamer kept playing after being stabbed by a mugger. And a North-Texas teenager was bitten by a venomous snake while playing. In Missouri, criminals staked out known PokeStops and robbed players. A gamer in Riverton, Wyoming found a dead body.
While some gamers play on foot, others drive their vehicles. As you've probably guessed, there have been auto accidents. The Atlanta Journal-Constitution reported:
"A driver, distracted by a Squirtle or a Zubat, caught a tree, instead of a Pokemon. That collision occurred last month in Auburn, N.Y., near Syracuse. A few days later, a 28-year-old driver on a highway near Seattle told officials he was focused on the hunt for Pikachu when he ran into the rear end of a Chevrolet. Another distracted driver in Baltimore smashed into a police car. A parked police car."
"Your account was permanently terminated for violations of the Pokémon GO Terms of Service. This includes, but is not limited to: falsifying your location, using emulators, modified or unofficial software and/or accessing Pokémon GO clients or backends in an unauthorized manner including through the use of third party software."
Soon after the game's debut, privacy risks were discovered:
"Security researcher Adam Reeve noted that when some users sign into Pokemon Go through Google on Apple devices, they effectively give the game and its developer full access to their Google account; this means, that at least in theory, Niantic... can access players' Gmail-based email, Google Drive based files, photos and videos stored in Google Photos, and any other content within their Google accounts. From a technical perspective, Niantic could potentially send emails on your behalf, or copy and distribute your photos. This is obviously concerning. Perhaps even scarier - and more eye-opening - is that users are accepting such permissions en masse without regard for the risks."
Since then, Niantic and The Pokemon Company notified Engadget that they had fixed the bug in a subsequent update. Regardless, the Offensive Privacy blog warned players who signed up using their Google credentials:
"... to review Google's guide on controlling and revoking app access to your account and check your account to see what permissions the game has. If it still has full access to your Google account, you can simply revoke access, then sign-in to the game again using your Google account. Your data will be safe and you can ensure your Google account is safe as well."
The Offensive Privacy blog offered privacy tips given the game's usage of smartphone cameras:
"While it's a bit outlandish to think that Niantic collects the video streams from every device, it is always a possibility that cannot be completely ruled out. This means anything your camera sees could, in theory, be stored by Niantic... I suggest some common sense tactics that apply to all cameras and video streams when using the AR mode of the game: 1) Never allow the camera to see personal ID such as your license, passport, or other sensitive document; 2) Never let the camera see a license plate or government building. This is especially true for those working in high-security environments; and 3) Avoid letting the camera see street signs, your house, house numbers, etc. It's also possible that metadata could be embedded in the image and made available if the image is shared publicly..."
Regular readers of this blog are already familiar with the privacy issues associated with metadata collection. Some players may be surprised that maintaining privacy while playing requires effort.
Yes, security researchers have already found malware embedded in a rogue version of the Pokemon Go app. So, shop wisely at reputable sites and follow these tips to avoid the malware.
One measure of popularity is parodies. There is a porn parody of the game titled, "Poke-mon Ho!" Depending upon your lifestyle, you might categorize this as "good." Yes, the parody reportedly is NSFW. No, I haven't seen it.
Some property owners view the game as inappropriate for their locations. CNN reported in July:
"The United States Holocaust Memorial Museum and Arlington National Cemetery, both in Washington, DC area, have both issued appeals for players to avoid hunting Pokemon on their sites. "Playing Pokemon Go in a memorial dedicated to the victims of Nazism is extremely inappropriate," said Andy Hollinger, director of communications at the United States Holocaust Memorial Museum in Washington, D.C., in a statement sent to CNNMoney. "We are attempting to have the Museum removed from the game," the statement said... Pokemon Go has a link set up for people to report sensitive locations and contact on its website... According to a statement from The Pokemon Company International and Niantic -- the creators of Pokemon Go -- Pokestops and gyms in the app are found at publicly accessible places. That includes historical markers, public art installations, museums, monuments -- and apparently churches."
I see two problems with the approach the game's developers used. First, the approach seems to have treated all public spaces the same, without considering the unique needs of cemeteries, memorials, and similar places. Game-play isn't appropriate everywhere. Second, Niantic's approach automatically included real-life locations as PokeStops and gyms without first obtaining the property owners' permissions. This approach places the burden on property owners (who are neither players nor participants) to opt out of the game. Not good. Maybe this was a slick attempt to force property owners to participate. Not good.
Some players have wandered onto nearby private properties. ComputerWorld reported on August 2:
"Jeffrey Marder, a resident of West Orange, N.J., found in the days after the release of the successful augmented reality game Pokémon Go, that strangers, phone in hand, had begun lingering outside his home. At least five of them knocked on Marder’s door and asked for access to his backyard to catch and add to their virtual collections of the Pokémon images, superimposed over the real world, that the game developer had placed at the residence without his permission."
Marder is part of a lawsuit alleging that the game included locations on private properties, without the owners' permissions. The Click on Detroit site reported on August 15:
"Scott Dodich and Jayme Gotts-Dodich, of St. Clair Shores, filed a class action lawsuit against Niantic, The Pokemon Company and Nintendo... The couple lives on a private cul-de-sac and alleges that over several weeks, Pokemon Go players parked their vehicles on their street and blocked driveways. The couple also alleges that players trespassed on lawns, trampled landscaping and peered into windows. The complaint also alleges that when Jayme Gotts-Dodich asked a Pokemon Go player to leave her property, the player told her to “shut up b****, or else... The suit alleges that the intentional, unauthorized placement of Pokestops and Pokemon gyms on or near private property constitutes a continuing invasion of use and enjoyment. Due to the ignored repeated requests for removal, the couple believes that Niantic is liable for nuisance and that all defendants have been unjustly enriched.”
If a disagreement arises between Niantic and a player, it may not be resolved in court in front of a jury of the gamer's peers. The Niantic Terms of Service policy strips gamers of that right:
"ARBITRATION NOTICE: EXCEPT IF YOU OPT OUT AND EXCEPT FOR CERTAIN TYPES OF DISPUTES DESCRIBED IN THE “AGREEMENT TO ARBITRATE” SECTION BELOW, YOU AGREE THAT DISPUTES BETWEEN YOU AND NIANTIC WILL BE RESOLVED BY BINDING, INDIVIDUAL ARBITRATION, AND YOU ARE WAIVING YOUR RIGHT TO A TRIAL BY JURY OR TO PARTICIPATE AS A PLAINTIFF OR CLASS MEMBER IN ANY PURPORTED CLASS ACTION OR REPRESENTATIVE PROCEEDING."
To opt out of binding arbitration, players must do so within 30 days of signing up. This BoingBoing article explained how to opt out, and the associated issues. Of course, players should read all game policies in their entirety before signing up. (You did, right?) Regular readers of this blog are familiar with the issues with binding arbitration.
Given the success so far of Pokemon Go, it seems wise to expect copycats. The Motley Fool speculated:
"Pokemon Go has added a new layer of excitement to a day at Disney World for those who seek that variety of enchantment. Disney is benefiting from the craze, even as non-players shake their heads while swerving around distracted gamers. This also could and should be just the beginning. It's only a matter of time before it rolls out its own augmented-reality app... A Disney app likely also wouldn't include a Pokemon-like battle element, at least not in terms of pitting Pluto against Yoda in combat. However, the Disney gym equivalent could be mini-game stations offering everything from speed Disney trivia matches to Virtual Magic Kingdom-type competitions... There are more than 200 Disney Store locations scattered across North America, and more than 120 overseas. These stores can also serve as character-collecting hubs, giving players a local connection for special events. It would also keep interest active outside of theme park visits..."
You can bet we'll see many more AR games with fantasy or fictional characters; probably with co-marketing agreements between AR games, movies, fast-food restaurants, toy stores, and the few remaining shopping malls. Experts estimate the global AR market will reach $117.4 billion by 2022.
It's not just fantasy characters. Experts have estimated that the augmented reality and virtual reality market within healthcare will reach $2.54 billion by 2020. Hopefully, more games (and other services) will offer opt-out mechanisms in their policies for restrictive binding arbitration clauses.
What are your opinions of Pokemon Go? Of AR games? What advantages and disadvantages have you found? Does the good outweigh the bad?
During a speech recently in San Francisco at the American Bar Association's annual conference, Federal Bureau of Investigation (FBI) Director James Comey suggested a national discussion about encryption versus safety. Comey said that during the past 10 months, the FBI was able to access only 650 of 5,000 electronic devices. And, the agency's inability to access devices will get worse as more people use encryption. So, United States citizens should discuss and decide what balance is desired between privacy and law enforcement's ability to access devices.
I agree. That is a valuable conversation that needs to happen. It should happen. So far, the discussion has been sporadic; prompted largely by former National Security Agency (NSA) contractor Edward Snowden's 2013 disclosures about secret court orders allowing NSA spy programs targeting U.S. citizens. In June, the Electronic Frontier Foundation (EFF) concluded:
"The Snowden leaks caused a sea change in the policy landscape related to surveillance. EFF worked with dozens of coalition partners across the political spectrum to pass the USA Freedom Act, the first piece of legislation to rein in NSA spying in over thirty years—a bill that would have been unthinkable without the Snowden leaks. They also set the stage for a major showdown in Congress over Section 702 of the FISA Amendments Act, the controversial section of law set to expire in 2017 that the government claims authorizes much of the NSA’s Internet surveillance... Perhaps most importantly, the Snowden leaks published over the last three years have helped to realign a broken relationship between the intelligence community and the public. Whistleblowers often serve as a last-resort failsafe when there are no other methods of bringing accountability to secretive processes. The Snowden leaks have helped illuminate how the NSA was operating outside the law with near impunity, and this in turn drove an international conversation about the dangers of near-omniscient surveillance of our digital communications."
However, the situation is far from resolved. Many surveillance programs still operate.
Moreover, who will participate in the discussion -- lawyers or the general population? Director Comey's suggestion was to a room full of lawyers. Plenty of non-lawyers are interested in this discussion.
After the initial Snowden disclosures, a mentor reminded me: "you just can't run away from the Fourth Amendment." People and companies need to be able to protect their personal and intellectual property. So, an expectation of privacy is reasonable and necessary. There are plenty of benefits to privacy, so the erosion of these rights by surveillance programs is not a good thing.
You may be surprised to know that the encryption-versus-safety conversation has already begun. An essay in April in the Yale Law Journal by Robert S. Litt, the General Counsel for the Office of the Director of National Intelligence, stated:
"First, I am not proposing a comprehensive theory of Fourth Amendment law. Rather, I want to offer some tentative observations that might be explored in shaping a productive response to the challenges that modern technology creates for existing legal doctrine. In particular, I would like to suggest that the concept of “reasonable expectation of privacy” as a kind of gatekeeper for Fourth Amendment analysis should be revisited.
Second, these thoughts are not informed by deep research into the intent of the Framers, or close analysis of case law or academic scholarship. Rather, they derive from almost forty years of experience in law enforcement and intelligence... I find it hard to understand the alchemy by which information that you choose to disclose to a third party develops an expectation of privacy because you have chosen to disclose a lot of that information. That seems counter-intuitive to say the least..."
"... I suggest that—at least in the context of government acquisition of digital data—we should think about eliminating the separate inquiry into whether there was a “reasonable expectation of privacy” as a gatekeeper for Fourth Amendment analysis. In an era in which huge amounts of data are flowing across the Internet; in which people expose previously unimagined quantities and kinds of information through social media; in which private companies monetize information derived from search requests and GPS location; and in which our cars, dishwashers, and even light bulbs are connected to the Internet, trying to parse out the information in which we do and do not have a reasonable expectation of privacy strikes me as a difficult and sterile task of line-drawing. Rather, we should simply accept that any acquisition of digital information by the Government implicates Fourth Amendment interests...."
"... I agree with those who criticize the broad proposition that any information that is disclosed to third parties is outside the protection of the Fourth Amendment. Courts can appropriately take into account whether information is content or non-content information, whether it is publicly disclosed through social media or is stored in the equivalent of the cloud, or whether its exposure is “voluntary” only in the most technical sense because of the demands of modern technology. But we should not be viewing this analysis of privacy interests as an on/off switch to determine whether or not the Fourth Amendment applies, as today’s third-party doctrine does, but as more of a rheostat to identify the degree of protection that would ensure that the collection and use of that data is reasonable. So the flip-side of my argument is that even where there is a substantial privacy interest in digital data, we should not default immediately to the rule that a warrant is required unless we can fit the collection of such data into one of the twentieth-century exceptions to the warrant requirement..."
I have attempted to highlight relevant sections, but you should read Litt's entire analysis. Cindy Cohn, the Executive Director of the EFF, wrote a rebuttal in July:
"... Mr. Litt makes two initial statements with which I agree. First, he notes that the “reasonable expectation of privacy” test currently employed in Fourth Amendment jurisprudence is a poor test for the digital age. Second, he states that the “third-party doctrine”—under which an individual who voluntarily provides information to a third party loses any reasonable expectation of privacy in that information—should not be an on-off switch for the Fourth Amendment... From there, however, our paths diverge quite sharply.
Mr. Litt argues that since the “reasonable expectation of privacy” formulation is not well suited to digital surveillance, it should simply be eliminated. This would leave a “reasonableness” balancing test to carry the entire weight of the Fourth Amendment’s protection against governmental intrusions. He says that a court in each case should balance the “actual harm” suffered by the individual affected by the surveillance with the governmental interests in conducting the surveillance. This argument throws the baby out with the bathwater. By abandoning the “reasonable expectation of privacy” standard without a suitable replacement, Mr. Litt also implicitly suggests abandoning the foundational constitutional protection against general warrants, as well as the rule that a warrantless search of someone with a reasonable expectation of privacy is per se unconstitutional unless an exception applies..."
"Under current doctrine, since Americans have a reasonable expectation of privacy in the content of their communications, full-content searching is per se unconstitutional unless an exception to the warrant requirement applies. None does. In order to prevail, therefore, the government must convince the Supreme Court to read a broad national security “special needs” exception into the Fourth Amendment authorizing mass, suspicionless seizure and full-content searches of millions of nonsuspect Americans’ most private international and domestic communications. That is a tall order... Such a large implied exception does not readily align with history: the Fourth Amendment contains no national security exception, even though it was adopted in the shadow of the Revolutionary War. Further, the Fourth Amendment was expressly intended to prevent general warrants. The FISA Court of Review—where the government alone presents its case and the arguments and decisions are kept secret—has recognized some form of a national security exception..."
"Moreover, Mr. Litt’s balancing test is unbalanced at its inception. According to his argument, courts can only evaluate the “actual harm” to a single person from mass surveillance because his reformulation retains the caselaw holding that Fourth Amendment rights are personal and cannot be asserted vicariously.20 Meanwhile, Mr. Litt’s formulation would allow the government to present its interest broadly without also showing “actual” increased safety of Americans as a result of the surveillance, much less the individual safety of the plaintiff."
"More importantly, Mr. Litt’s central claim is that there can be no actual harm when a person’s communications are seized by the government and searched, even with content searching, as long as computers but not humans conduct the search. He says that communications are “unseen and unknown” until they turn up in search results that are shown to a human... This argument—what I call the “human-eyes” theory of the Fourth Amendment—is where we most seriously disagree. Mr. Litt’s “human-eyes” theory would effectively authorize a surveillance state in which a person’s every action and interaction could be technologically monitored and algorithmically analyzed without violating the Fourth Amendment..."
Again, I have tried to highlight relevant sections, but you should read all of Cohn's rebuttal and her summary. This is important stuff. People are thinking about how to modify the Fourth Amendment of the U.S. Constitution.
Both essays are a good start to the encryption-versus-safety discussion, but the discussion seems focused upon attorneys. Both essays appeared in a legal journal, and Director Comey's speech was to a room full of attorneys. One should not have to be an attorney to understand these issues. Any legislation resulting from the discussions would affect all citizens. So, the discussion needs to be more inclusive. It needs to happen in a way that engages the broader population.
Major newspapers have a role in making this happen. Politicians have a responsibility, too. Senator Ron Wyden (Democrat, Oregon) has been one of too few voices warning citizens. More politicians need to step up their game, or get out of the way for ones willing to do so.
What are your opinions of the encryption versus safety discussion? Of the essays by Litt and Cohn?
Hulu.com, the popular TV streaming service, updated its terms of service and privacy policies, and announced the changes in an August 5, 2016 e-mail to subscribers.
The streaming TV service announced in May 2016 that its subscriber base of about 12 million had grown about 30 percent over 2015. Besides its $8 and $12 monthly subscription options, reportedly the service plans to introduce a third, cable-like bundle of channels for about $40 monthly.
The service's email message summarized the changes in its policies:
"Given our constant desire to innovate our service, we clarify that we may experiment with certain features and that the content and services may change from time to time. We provide additional details about our billing practices, including in connection with promotional offers.
We include updated instructions around cancellation and explain that if you sign up and pay for Hulu through a third party (e.g., Apple iTunes) you may need to cancel your subscription or manage your billing through such third party.
We clarify that we may communicate with you electronically and encourage you to keep copies of our electronic communications for your records.
We include an updated list of the types of technologies we or third parties may use to collect data from or about you. This data helps improve the content and advertisements provided to you.
We've likewise updated the section describing how we share information with business partners, service providers and other third parties.
We describe that you can choose to share information through sharing features we may offer, for example, through email, text message or social networks.
We provide instructions on how California residents can obtain more information about our data sharing practices in the event we were to share personal data about our users with third parties for their direct marketing purposes.
You have choices with respect to your use of our services and we include an updated and consolidated list of the various options available to you in a new section called "Your Choices, Including Opt-Out Options" (Section 6) which includes instructions about your opt-out choices related to your use of Hulu on websites, mobile devices and living room devices.
We explain that we may work with third parties who help us to establish connections across your related browsers and devices and how your opt-out choices apply."
What is a consumer to make of this? Hulu is clearly both providing notice to and obtaining consent from its subscribers to perform online experiments. Previously, social sites like OKCupid were heavily criticized for performing online experiments without notice or consent. So, it is good that Hulu provides this advance notice.
"3.10 Modification/Suspension/Discontinuation. We regularly make changes to the Services. The availability of the Content as well as Access Points through which the Services are available will change from time to time. Hulu reserves the right to replace or remove any Content and Access Points available to you through the Services, including specific titles, and to otherwise make changes in how we operate the Services... In our continued assessment of the Services, we may from time to time, with respect to any or all of our users, experiment with or otherwise offer certain features or other elements of the Services, including promotional features, user interfaces, plans, pricing, and advertisements. You acknowledge that Hulu may do so in Hulu's sole discretion at any time without notice. You also agree that Hulu will not be liable to you for any modification, suspension, or discontinuance of the Services, although if you are a Hulu subscriber and Hulu suspends or discontinues your subscription to the Services, Hulu may, in its sole discretion, provide you with a credit, refund, discount or other form of consideration (for example, we may credit additional days of service to your account) in accordance with Section 4 below. However, if Hulu terminates your account or suspends or discontinues your access to Services due to your violation of these Terms, then you will not be eligible for any such credit, refund, discount or other consideration."
Another section current and prospective subscribers may want to read closely is "13. Arbitration of Claims." While this clause is not new, it is important since it describes how disagreements between subscribers and Hulu are resolved. Basically, most disagreements would be resolved individually through binding arbitration, not in court and not through a group action:
"... If we do not reach an agreed upon solution after our discussions for at least 30 days, you and Hulu agree that any claim that either of us may have arising out of or relating to these Terms (including formation, performance, or breach of them), our relationship with each other, or use of the Services must be resolved through binding arbitration before the American Arbitration Association using its Consumer Arbitration Rules, available here. As an exception to this arbitration agreement, Hulu is happy to give you the right to pursue in small claims court any claim that is within that court's jurisdiction as long as you proceed only on an individual basis... you and Hulu agree to begin any arbitration within one year after a claim arises; otherwise, the claim is waived. You and Hulu also agree to arbitrate in each of our individual capacities only, not as a representative or member of a class, and each of us expressly waives any right to file a class action or seek relief on a class basis..."
Regular readers of this blog are familiar with the issues about binding arbitration. Companies in several industries have inserted "binding arbitration" clauses into their terms of service policies with consumers. The Public Citizen website lists the banks, retail stores, entertainment, online shopping, telecommunications, consumer electronics, software, nursing homes, and health care companies that use these clauses.
"This week, the CFPB released new research showing that banks' practice of forcing customers into binding arbitration has a wide range of downsides for consumers... The exhaustive 700+ page CFPB report shows that arbitration clauses have a broad range of negative consequences for consumers. They discourage individual consumers from pursuing claims. The CFPB found that the number of arbitrations filed by individual consumers was much lower than one would expect given the number federal lawsuits filed by those who still have that option... They squelch legitimate class-action lawsuits. Arbitration clauses generally prevent customers from joining together in class-action lawsuits... They reduce consumer protections. The way that many consumer protection laws are enforced is through civil litigation. By blocking civil suits brought by customers, financial institutions effectively give themselves an end-around against these protections... They confuse consumers. In surveys conducted by the CFPB for the report, relatively few customers understood what arbitration was, whether they were subject to it and how it works in practice... They don't lead to lower prices. The big selling point for arbitration has always been that reducing legal costs by blocking customer lawsuits would result in lower prices for consumers. But that hasn't been the case, according to the report..."
Current and prospective subscribers may or may not be comfortable giving up these rights.
"... One technology we use is called a cookie. A cookie is a small data file that is transferred to your computer’s hard disk. We may use both session cookies and persistent cookies to better understand how you interact with the Hulu Services or Hulu advertising published outside of the Hulu Services, to monitor aggregate usage by our users and web traffic routing on the Hulu Services, and to customize Content and advertising... We may collect information through other kinds of local storage (also referred to as "Flash cookies") and HTML5 local storage, including in connection with features such as volume/mute settings for the Video Player. Because these technologies are similar to browser cookies, they are sometimes called "browser cookies," even though they may be stored in different parts of your computer... Please note that disabling cookies or deleting information contained in cookies or Flash cookies may interfere with the performance and features of the Hulu Services, including the Video Player... we may use other technologies such as web beacons or pixel tags, which can be embedded in web pages, videos, or emails, to collect certain types of information from your browser or device, check whether you have viewed a particular web page, ad, or email message, and determine, among other things, the time and date on which you viewed the Content, the IP address of your computer, and the URL of the web page... Mobile Device Identifiers and Software Development Kits ("SDKs"). We may use or work with third parties including our business partners and service providers who use mobile SDKs to collect information, such as mobile identifiers (e.g., "ad-ID" or "IDFA") and information related to how mobile devices interact with the Hulu Services. An SDK is computer code that app developers can include in their apps to enable ads to be shown, data to be collected and related services and functionality to be implemented. 
A mobile SDK is in effect the mobile app version of a pixel tag or beacon..."
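The pixel tags and web beacons described above are conceptually simple: a page or email embeds a tiny, usually invisible image hosted on a tracking server, and the mere act of fetching it tells the server what was viewed, when, and from which IP address. A minimal, illustrative sketch (the host name, field names, and helper functions here are all hypothetical, not Hulu's actual implementation):

```python
# Sketch of how a tracking pixel ("web beacon") collects information.
# The tracker host and parameter names below are hypothetical examples.
from urllib.parse import urlencode, parse_qs, urlparse
from datetime import datetime, timezone

# A transparent 1x1 GIF: the smallest valid image a page or email can embed.
PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
             b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00"
             b"\x01\x00\x00\x02\x02D\x01\x00;")

def pixel_url(page, campaign):
    """Build the image URL a publisher would embed in a page or email."""
    return ("https://tracker.example.com/p.gif?"
            + urlencode({"page": page, "campaign": campaign}))

def record_hit(url, client_ip):
    """What the tracking server learns when a browser fetches the pixel."""
    params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    return {
        "page_viewed": params.get("page"),
        "campaign": params.get("campaign"),
        "ip_address": client_ip,  # identifies the viewer's network
        "viewed_at": datetime.now(timezone.utc).isoformat(),  # time and date
    }

hit = record_hit(pixel_url("/watch/show-123", "fall-promo"), "203.0.113.7")
print(hit["page_viewed"])  # the server now knows which page was viewed
```

The viewer never clicks anything; loading the page is enough, which is why the policy can promise to determine "the time and date on which you viewed the Content, the IP address of your computer, and the URL of the web page."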
"We work with a number of business partners who help us offer the Hulu Services, including for example our content licensors, distributors, and corporate owners. We may share information collected from or about you with such business partners... When you choose to share information with social networking services about your activities on the Hulu Services, including shows you watch or like on Hulu, information about you and your activities will be shared with that social network... We may share the information collected from or about you with companies that provide services to us and our business partners, including companies that assist with payment processing, analytics, data processing and management, account management, hosting, customer and technical support, marketing (e.g., email, online or direct mail communications) and other services... We may share the information collected from or about you in encrypted, aggregated, or de-identified forms with advertisers and service providers that perform advertising-related services for us and our business partners in order to tailor advertisements, measure, and improve advertising effectiveness, and enable other enhancements. This information includes your use of the Hulu Services, websites you visited, advertisements you viewed, and your other activities online... Our business partners, such as content licensors, as well as our advertisers, seek to measure the performance of their creative material across many platforms, including the Hulu Services. Accordingly, Hulu may permit the use of third-party measurement software that enables third parties (such as Nielsen) to include your viewing on the Hulu Services in calculating measurement statistics such as TV Ratings... 
If we sell all or part of our business, make a transfer of assets, or otherwise might be involved in a change of control transaction, or in the unlikely event of bankruptcy, we may transfer information from or about you to one or more third parties as part of the transaction, including the due diligence process... Third Parties When Required By Law or When Necessary to Protect Your or Our Rights. In some instances, we may disclose information from or about you without providing you with a choice. For example, we may disclose your information in the following ways: to protect the legal rights of Hulu and our affiliates or partners... and to comply with or respond to the law or legal process or a request for cooperation by a government entity, whether or not legally required..."
It is reasonable to assume that the last group includes law enforcement agencies (e.g., federal, state, local) in the United States, but the policy seems vague about whether those agencies include agencies from other countries, too. Again, (current or prospective) subscribers may want to know the specific names of the companies and entities with which their data is shared.
What are your opinions of Hulu's revised policies?
[Editor's note: this blog post is not legal advice. Consumers wanting legal advice should consult an attorney to help them fully evaluate any contracts or legal agreements.]
For more convenient access to devices and websites, many device manufacturers and online publishers encourage consumers to use items other than passwords for logins. Is this a good deal? To answer that question, one must consider what happens after a data breach in which login credentials are stolen by hackers. Typically, websites and businesses then advise affected consumers to change their passwords. However, many of the newer login items cannot be changed:
Researchers have confirmed what privacy advocates and government regulators have long suspected: Internet users often ignore online privacy policies and terms of service. And those consumers who do read policies pay insufficient attention.
In a working paper titled "The Biggest Lie On The Internet," researchers tested 543 college students (from a communications class) by having them sign up for NameDrop, a fictitious social networking site (SNS). Forty-seven percent of test participants were female, and the average age of all participants was 19. Sixty-two percent identified as Caucasian, 15 percent as Asian, 6 percent as Black, 2 percent as Hispanic/Latin, and 3 percent as mixed race/ethnicity.
Authors of the working paper were Jonathan A. Obar, a Research Associate at the Quello Center for Telecommunications Management and Law at Michigan State University, and Anne Oeldorf-Hirsch, at the University of Connecticut. The paper was submitted for peer review and to the U.S. Federal Communications Commission (FCC).
The paper did not mention if reading times varied by device (e.g., phone, tablet, laptop, desktop). The researchers identified three factors that predict policy reading times:
The researchers inserted problematic clauses into the policies which test participants should have spotted and inquired about:
"Implications were revealed as 98 percent missed NameDrop TOS 'gotcha clauses' about data sharing with the National Security Agency (NSA) and employers, and about providing a first-born child as payment for SNS access."
Only 15 percent (83 persons) expressed concerns about NameDrop's policies. Of the 83 persons who expressed concerns, 11 mentioned the NSA clause, and nine mentioned the child-assignment clause. The rest mentioned concerns about the length of the policies and the trustworthiness of the SNS.
The study also asked test participants how long they spent reading policies. The findings supported the "privacy paradox" found by other researchers:
"The paradox suggests that when asked, individuals appear to value privacy, but when behaviors are examined, individual actions suggest that privacy is not a high priority... When participants were asked to self-report their engagement with privacy and TOS policies, results suggested average reading times of approximately five minutes..."
So, test participants said they spent about five minutes reading policies, while their actual times were about a minute or less, if they read the policies at all.
By skipping online policies, most consumers have given companies the power to insert any clauses they desire into those policies. This has implications for consumers' ability to control their online reputations, protect their privacy, and resolve conflicts (e.g., binding arbitration instead of courts).
This also has implications for how governments enforce data protection for their citizens. Historically:
"... approaches to privacy and increasingly reputation protections by governments throughout the world often draw from a contentious model referred to as the 'notice and choice' privacy framework. Notice and choice evolved from the U.S. Federal Trade Commission's (FTC) Fair Information Practice Principles, developed in the 1970s to address growing information privacy concerns raised by digitization. In the early 1980s, the FIPPs were promoted by the OECD as part of an international set of privacy guidelines, contributing to the implementation of data protection laws and guidelines in the U.S., Canada, the EU, Australia, and elsewhere... The notice and choice privacy framework was designed to "put individuals in charge of the collection and use of their personal information" (Reidenberg et al, 2014: 3)..."
The researchers focused upon the:
"... notice component, noted by the FTC as "the most fundamental principle" (FTC, 1998: 7) of personal information protection... As the FTC (1998) notes, choice and related principles attempting to offer data control "are only meaningful when a consumer has notice of an entity's policies, and his or her rights with respect thereto." Notice policies typically... appear on websites, applications, are sent in the mail, provided in-person, generally when an individual connects with the entity in question for the first time, and increasingly when policies change. Despite suggestions that notice policy in particular is deeply flawed, strategies for strengthening notice policy continue to be seen as central to address, for example, privacy concerns associated with corporate and government surveillance, and consumer protection concerns about Big Data..."
So, the biggest lie on the Internet is that consumers agree to policies, something they can't meaningfully do because they haven't read them. Governments, privacy advocates, companies, and usability professionals need to find a better way, because the current approach clearly isn't working:
"The policy implications of these findings contribute to the community of critique suggesting that notice and choice policy is deeply flawed, if not an absolute failure. Transparency is a great place to start, as is notice and choice policy; however, all are terrible places to finish. They leave digital citizens with nothing more than an empty promise of protection, an impractical opportunity for data privacy self-management, and as Daniel Solove (2012) analogizes, too much homework. This doesn't even begin to address the challenges unique to children in the realm of digital reputation..."
Absolutely, since many sites allow children as young as 14 to sign up. Policy reading rates are probably worse among children ages 14 to 17.
Download the working paper: "The Biggest Lie on The Internet" (Adobe PDF). The paper is also available here. The study used students majoring in communications. I wonder if the results would have been different with business majors or law students. What do you think?
Last week, Apple Computer announced both separately and at the Worldwide Developers Conference (WWDC) many new features in iOS 10. You can read about the new features in several computing and technology publications. Today's blog post focuses upon two features with far-reaching implications: On-device Intelligence and Differential Privacy (DP). Apple said in its announcement:
"Privacy in iOS 10
Security and privacy are fundamental to the design of Apple hardware, software and services. iMessage, FaceTime and HomeKit use end-to-end encryption to protect your data by making it unreadable by Apple and others. iOS 10 uses on-device intelligence to identify the people, objects and scenes in Photos, and power QuickType suggestions. Services like Siri, Maps and News send data to Apple’s servers, but this data is not used to build user profiles.
Starting with iOS 10, Apple is using technology called Differential Privacy to help discover the usage patterns of a large number of users without compromising individual privacy. In iOS 10, this technology will help improve QuickType and emoji suggestions, Spotlight deep link suggestions and Lookup Hints in Notes."
This is great news. The Cryptography Engineering blog briefly discussed Differential Privacy and what's known from the iOS 10 Preview Guide:
"Starting with iOS 10, Apple is using Differential Privacy technology to help discover the usage patterns of a large number of users without compromising individual privacy. To obscure an individual’s identity, Differential Privacy adds mathematical noise to a small sample of the individual’s usage pattern. As more people share the same pattern, general patterns begin to emerge, which can inform and enhance the user experience. In iOS 10, this technology will help improve QuickType and emoji suggestions, Spotlight deep link suggestions and Lookup Hints in Notes"
The Naked Security blog by Sophos reported:
"At WWDC, Apple’s Craig Federighi said Apple can offer “great features and great privacy” through differential privacy. Differential privacy is actually statistical analysis that protects individual privacy, rather than a single technology. In its implementation, Apple will obscure data with multiple techniques, including hashing (turning data into unreadable characters), subsampling (using data from only a portion of users) and noise injection (adding random data to obscure real data). Apple gave one of the most influential researchers in the field of differential privacy, Aaron Roth, a chance to review some of the math involved in its implementation, quoting Roth at WWDC as saying Apple is a “clear privacy leader among technology companies today.” But not everyone is fully convinced that Apple can pull off the promise of differential privacy, at least not right away..."
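To see how noise injection can protect individuals while still revealing population-level patterns, consider a minimal sketch of the classic "randomized response" technique. This is an illustration of the general idea only, not Apple's actual implementation; the function names and the probability parameter are my own choices for the example:

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the true answer with probability p; otherwise report a
    coin flip. Any individual's report is deniable (it might be noise),
    yet the population-level rate can still be estimated."""
    if random.random() < p:
        return truth
    return random.random() < 0.5  # random answer, independent of truth

def estimate_true_rate(reports, p: float = 0.75) -> float:
    """Invert the noise: E[reported rate] = p * true_rate + (1 - p) * 0.5."""
    reported = sum(reports) / len(reports)
    return (reported - (1 - p) * 0.5) / p

# Simulate 100,000 users, 30% of whom truly use some feature.
random.seed(42)
true_rate = 0.30
reports = [randomized_response(random.random() < true_rate)
           for _ in range(100_000)]
print(estimate_true_rate(reports))  # a value near 0.30 for large samples
```

No single report in the list says anything certain about one user, but aggregating many noisy reports recovers the overall usage pattern, which is the trade-off the quoted passage describes.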
The Naked Security blog also discussed On-Device Intelligence:
"Instead of sending your data to Apple to create a personal profile of you with your information, Apple says the new versions of its operating systems – iOS 10 and the replacement for OS X, called macOS – will use on-device intelligence and “crowdsourced learning.” This means iPhones running iOS 10 can personalize your apps – like identify the people and objects in Photos, or serve you more relevant information in Maps and News – without sucking your data up to Apple’s servers."
Good! There are better, more privacy-friendly ways of delivering features. After reading this, I thought of Apple's privacy fight against the FBI. The FBI had sued Apple to force it to build a back door to unlock a user's iPhone and bypass security features the company had spent years building. On-Device Intelligence means less information transmitted to and stored in the cloud and on remote corporate servers -- a good thing for users' privacy. That suggests a right way -- a more privacy-friendly way -- to build and deliver the features consumers want and expect. Plus, iOS 10's end-to-end encryption in iMessage, FaceTime and HomeKit complements this security and privacy focus.
The marketplace is full of home automation, toys, smart products, appliances, thermostats, cable services, and music subscription offerings, many of which include voice interfaces and other features that happily send lots of consumers' information to the cloud. Most companies seem to chase and collect consumers' personal data. Kudos to Apple for placing its customers' privacy first.
You may remember this Reuters news item from March:
"Unlike Google, Amazon, and Facebook, Apple is loath to use customer data to deliver targeted advertising or personalized recommendations. Indeed, any collection of Apple customer data requires sign-off from a committee of three "privacy czars" and a top executive, according to four former employees who worked on a variety of products that went through privacy vetting.
Approval is anything but automatic: products including the Siri voice-command feature and the recently scaled-back iAd advertising network were restricted over privacy concerns, these people said."
So, Apple isn't just talking about security. The executives at Apple have aligned internal management processes, products, and service features with security and privacy by design. Impressive. Apple is leaving money on the table by keeping consumers' privacy foremost. Will other tech companies follow? Will pay-TV, wireless, telecommunications, and mobile app companies focus upon privacy-by-design? Will toy companies follow and build voice interfaces the right way?
Some companies don't want consumers to have privacy when using high-speed Internet services. Just before the long Memorial Day holiday weekend, the U.S. Chamber of Commerce (USCOC) submitted comments about the broadband privacy rules proposed by the U.S. Federal Communications Commission (FCC) in April. Portions of the USCOC's comments to the FCC:
"... the Chamber opposes the proposed broadband privacy rule because it is unnecessary, exceeds statutory authority, furthers a regulatory digital divide between edge and telecommunications providers, and threatens innovation by stifling the already thriving Internet ecosystem... I. Current broadband provider privacy practices and the market do not justify the proposed rule... II. The Commission is engaging in a regulatory overreach with its proposed rule... III. The NPRM furthers a regulatory digital divide The proposed rule creates regulatory imbalance in which broadband service providers will be subject to highly restrictive and prescriptive “opt-in” privacy regulations while other content and edge providers — like Netflix — remain under the light-touch regulatory framework of the FTC... The Chamber strongly supports voluntary self-regulation as the appropriate mechanism for online data protection... IV. The proposed FCC privacy rule threatens innovation and the current digital ecosystem..."
What is the USCOC? It is a political lobbying organization representing businesses. According to the organization's website:
"The U.S. Chamber of Commerce is the world’s largest business organization representing the interests of more than 3 million businesses of all sizes, sectors, and regions. Our members range from mom-and-pop shops and local chambers to leading industry associations and large corporations. They all share one thing—they count on the Chamber to be their voice in Washington, D.C."
The USCOC's submission claims that the FCC's proposed rules unfairly place restrictions on ISPs compared to "edge providers," or companies that produce content, and advertising networks:
"The proposed rule creates regulatory imbalance in which broadband service providers will be subject to highly-restrictive and prescriptive “opt-in” privacy regulations while other content and edge providers — like Netflix — remain under the light-touch regulatory framework of the FTC. The same customer data about Internet usage will be regulated by two very different agencies. Content and edge providers will continue to operate under FTC’s jurisdiction to regulate “unfair and deceptive” trade practices under Section 5 of the Federal Trade Commission Act. Under Section 5, in the case of unfair and deceptive trade practice violations, the FTC generally issues a cease and desist order that does not immediately impose penalties on alleged violators. This practice gives companies notice and a chance to clean up their act. Conversely, broadband providers under section 222 would not be entitled to a notice to correct mistakes and would be subject to the highly-prescriptive regulations imposed by the NPRM. The decision to regulate broadband providers under two different regulatory regimes is entirely arbitrary..."
Huh? Really? Internet access is not content. Content is content. Of course, the two should be treated differently. Internet access includes the connections for devices a consumer uses online: phones, tablets, laptops, desktops, smart televisions, smart thermostats, smart home-security systems, fitness bands, smart watches, connected refrigerators, and more. Consuming content from Netflix, or another provider, may involve a few, one, or none of these devices -- the choice of the consumer.
In its comments to the FCC, the USCOC also said:
"The Commission has also failed to offer any evidence that edge and content providers are respecting consumers’ privacy more than broadband providers or that Internet service providers have any meaningful advantage over content and edge providers with respect to personal data."
"Consumer advocacy groups disagree, pointing out that ISPs have access to all unencrypted traffic in their networks. While more sites now encrypt data than in the past, much remains unencrypted. Consider, a recent study by Upturn found that more than 85% of the top 50 sites in health, news and shopping don't fully support encryption. Upturn also noted in its report that ISPs can glean information about consumers even when they visit encrypted sites... Consumer advocacy groups also argue that broadband providers should be subject to tougher privacy rules because consumers have only limited options about which ISP to use, but many choices about which Web sites to visit."
Well said. I would add to this that the industry historically has repeatedly abused consumers' privacy. This blog has covered many of those abuses:
Historically, ISPs have sought increased revenues and viewed targeted (behavioral) advertising as the means. To do this, they partnered with several technology companies (some went out of business after class-action lawsuits) to spy on consumers without notice, without consent, and without providing opt-out mechanisms. Consumers should control their privacy, not ISPs.
Now you know who is fighting for consumers' interests, and who is not.
Three years ago today, the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, the Guardian UK newspaper reported about a court order allowing the NSA to spy on U.S. citizens. The Electronic Frontier Foundation (EFF) summarized events from 2013:
"It started with a secret order written by the FISA court authorizing the mass surveillance of Verizon Business telephone records—an order that members of Congress quickly confirmed was similar to orders that had been issued every 3 months for years. Over the next year, we saw a steady drumbeat of damning evidence, creating a detailed, horrifying picture of an intelligence agency unrestrained by Congress and shielded from public oversight by a broken classification system. The leaks were thanks in large part to whistleblower Edward Snowden, who has been living in Russia for the last three years, unable to return to the United States for fear of spending his life behind bars..."
Since then, we've learned plenty about how extensive the government surveillance apparatus is and the lack of oversight. We've also learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences from broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, the data collection programs included ordinary (e.g., innocent) citizens besides legal targets, and while most hi-tech and telecommunications companies assisted the government with its spy programs, AT&T was probably the best collaborator. A scary, extensive list, eh?
Would the public have learned about all of this without the Snowden leaks? I doubt it. So, thanks to Edward Snowden.
And, this list doesn't include the attempt by the Justice Department to force a hi-tech company to build a "back door" into its products to break encryption. It's been a busy three years. The EFF concluded:
"The Snowden leaks caused a sea change in the policy landscape related to surveillance. EFF worked with dozens of coalition partners across the political spectrum to pass the USA Freedom Act, the first piece of legislation to rein in NSA spying in over thirty years—a bill that would have been unthinkable without the Snowden leaks. They also set the stage for a major showdown in Congress over Section 702 of the FISA Amendments Act, the controversial section of law set to expire in 2017 that the government claims authorizes much of the NSA’s Internet surveillance... Perhaps most importantly, the Snowden leaks published over the last three years have helped to realign a broken relationship between the intelligence community and the public. Whistleblowers often serve as a last-resort failsafe when there are no other methods of bringing accountability to secretive processes. The Snowden leaks have helped illuminate how the NSA was operating outside the law with near impunity, and this in turn drove an international conversation about the dangers of near-omniscient surveillance of our digital communications."
It's not over. The EFF compiled a list of 65 things we know thanks to the Snowden leaks, and a timeline of NSA domestic surveillance. And, Vice News has uncovered some of the documents that highlight the discussions among NSA and government officials about the privacy and Constitutional issues Mr. Snowden raised at the agency before the leaks:
"What's remarkable about this FOIA release, however, is that the NSA has admitted that it altered emails related to its discussions about Snowden. In a letter disclosed to VICE News Friday morning, Justice Department attorney Brigham Bowen said, "Due to a technical flaw in an operating system, some timestamps in email headers were unavoidably altered. Another artifact from this technical flaw is that the organizational designators for records from that system have been unavoidably altered to show the current organizations for the individuals in the To/From/CC lines of the header for the overall email, instead of the organizational designators correct at the time the email was sent."
Because none of the people interviewed by the NSA in the wake of the leaks said that "Snowden mentioned a specific NSA program," and "many" of the people interviewed "affirmed that he never complained about any NSA program," the NSA's counterintelligence chief concluded that these conversations about the Constitution and privacy did not amount to raising concerns about the NSA's spying activities. That was the basis for the agency's public assertions... In April 2014, the month after he testified before the European Parliament, Snowden again challenged the NSA's public narrative about his failure to raise concerns at the agency. In advance of the publication of the Vanity Fair story, the magazine posted a preview online on April 8. "The NSA... not only knows I raised complaints, but that there is evidence that I made my concerns known to the NSA's lawyers, because I did some of it through e-mail," he said."
The Vice News article also discussed the lack of whistle-blower protections for contractors like Mr. Snowden.
Citizens give their government certain powers to act on their behalf. Implicit in that decision is trust. Entrusted with those powers, a government (in a democracy) has an obligation to be transparent with its citizens.
If you use Facebook.com, this is for you.
David Carroll, an associate professor of media design at Parsons School of Design, posted the warning below on Twitter. I checked my Facebook settings and this specific advertisement setting had indeed been changed. So, check yours today. It's fast and easy. It will take at most half a minute to check and change it.
What's driving this activity by the social network? The Washington Post summarized the situation well when it discussed new ad features the site introduced in 2014:
"Things are about to get better for Facebook customers! Not you. You are not a Facebook customer. Advertisers are Facebook customers. You are part of the Facebook product... Facebook, at its moneymaking core, is a system for showing ads to people... why we’re seeing this is because Facebook is not a social network. It is an advertising network... And it seems to be banking on what it always banks on: our unwillingness to change any default settings or think about the flip side of data sharing."
Now, go check and restore your ad settings to maintain privacy.