
The National Auto Surveillance Database You Haven't Heard About Has Plenty Of Privacy Issues

Some consumers have heard of Automated License Plate Recognition (ALPR) cameras, the high-speed, computer-controlled technology that automatically reads and records vehicle license plates. Local governments have installed ALPR cameras on stationary objects such as street-light poles, traffic lights, overpasses, highway exit ramps, and electronic toll collection (ETC) gantries.

Mobile ALPR cameras have been installed on police cars and/or police surveillance vans. The Houston Police Department explained in this 2016 video how it uses the technology. Last year, a blog post discussed ALPR usage in San Diego and its data-sharing with Vigilant Solutions.

What you probably don't know: the auto repossession industry also uses the technology. Many "repo men" have ALPR cameras installed on their vehicles. The data they collect is fed into a massive, nationwide, and privately-owned database which archives license-plate images. Reporters at Motherboard obtained a private demo of the database tool to understand its capabilities.

The demo included tracking a license plate with the vehicle owner's consent. Vice reported:

"This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN's database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network... DRN has more than 600 of these "affiliates" collecting data, according to the contract. These affiliates are paid a monthly bonus for gathering the data..."

Affiliates are repo men and others who both use the database tool and upload images to it. DRN even offers financing to help affiliates buy ALPR cameras; the image on the right, promoting that financing, was taken from the DRN site on September 20, 2019.

When consumers fail to pay their bills, lenders and insurance companies have valid needs to retrieve (or repossess) their unpaid assets. Lenders hire repo men, who then use the DRN database to find the vehicles they've been hired to repossess. Those applications are valid, but there are plenty of privacy issues and opportunities for abuse.

Plenty.

First, the data collection is indiscriminate and broad. As repo men (and women) drive through cities and towns to retrieve wanted vehicles, the ALPR cameras mounted on their cars scan all nearby vehicles, moving and parked alike. Scans are not limited to vehicles they've been hired to repossess, nor to vehicles of known or suspected criminals. So, innocent consumers are caught in the massive data collection. According to Vice:

"... in fact, the vast majority of vehicles captured are connected to innocent people. DRN claims to have more than 9 billion license plate scans, according to a DRN contract obtained by Motherboard..."

Second, the data is archived forever. That can provide a very detailed history of a vehicle's (or a person's) movements:

"The results popped up: dozens of sightings, spanning years. The system could see photos of the car parked outside the owner's house; the car in another state as its driver went to visit family; and the car parked in other spots in the owner's city... Some showed the car's location as recently as a few weeks before."

Third, to facilitate searches, metadata is automatically attached to the images: GPS geolocation, date, time, day of week, and more. The metadata helps provide a pretty detailed history of each vehicle's -- or person's -- movements: where and when a vehicle (or person) travels, patterns such as which days of the week certain locations are visited, and how long the vehicle (or person) parked at specific locations. Vice explained:

"The data is easy to query, according to a DRN training video obtained by Motherboard. The system adds a "tag" to each result, categorising what sort of location the vehicle was likely spotted at, such as "workplace" or "home."

So, DRN can help users associate specific addresses (work, home, school, doctors, etc.) with specific vehicles. How accurate might those automatic tags be? While the tags might help repo men and insurance companies spot fraud, such as out-of-state registered vehicles whose owners are trying to avoid detection and/or higher premiums, they raise other concerns.
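To make the tagging idea concrete, here is a minimal sketch, in Python, of how a scan database might infer a "home" or "workplace" tag from the timestamps of repeated sightings at one address. The rules and function names here are purely illustrative assumptions; DRN's actual logic is not public.

```python
from collections import Counter
from datetime import datetime

def tag_location(sighting_times):
    """Guess a location type from when a plate is repeatedly seen there.

    Hypothetical heuristic: overnight sightings suggest "home";
    weekday business-hours sightings suggest "workplace".
    """
    votes = Counter()
    for ts in sighting_times:
        if ts.hour >= 21 or ts.hour < 6:                 # overnight
            votes["home"] += 1
        elif 9 <= ts.hour < 17 and ts.weekday() < 5:     # weekday 9-to-5
            votes["workplace"] += 1
        else:
            votes["other"] += 1
    return votes.most_common(1)[0][0]

# A car seen at the same address at 11:15 PM on seven consecutive nights:
sightings = [datetime(2019, 9, day, 23, 15) for day in range(2, 9)]
print(tag_location(sightings))  # -> home
```

The point of the sketch is how little data is needed: a handful of timestamped sightings at one address is enough to label it "home."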

Fourth, consumers -- vehicle owners -- have no control over the data describing them. Vehicle owners cannot opt out of the data collection. Vehicle owners cannot review or correct any errors in their DRN profiles.

That sounds out of control to me.

The people whom the archived data directly describes have no say. None. That's a huge concern.

Also, I wonder about victims of domestic violence who have protective orders for their safety. Some states, such as Massachusetts, have Address Confidentiality Programs (ACPs) to protect victims of domestic violence, sexual assault, and stalking. Does DRN accommodate ACP programs? If so, how? If not, why not? How does DRN prevent perps from using its database tool? (Yes, DRN access is an issue. Keep reading.) The Vice report didn't say. Hopefully, future reporting will discuss this.

Fifth, DRN is robust. It can be used to track vehicles in or near real time:

"DRN charges $20 to look up a license plate, or $70 for a "live alert", according to the contract. With a live alert, a user can enter a license plate they wish to receive updates on; when the DRN system spots the vehicle, it'll send an email to the user with the newly discovered location."

That makes DRN highly appealing to both valid users (e.g., police, repo men, insurance companies, private investigators) and bad actors posing as valid users. Who might those bad actors be? The Electronic Frontier Foundation (EFF) warned:

"Taken in the aggregate, ALPR data can paint an intimate portrait of a driver’s life and even chill First Amendment protected activity. ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of religious worship."

Sixth is the problem of access. Anybody can use DRN. According to Vice:

"... a private investigator, or a repo man, or an insurance company does not need a warrant to search for someone's movements over years; they just need to pay to access the DRN system, or find someone willing to share or leverage their access..."

Users simply need to comply with DRN's policies. The company says that, a) users can use its database tool only for certain applications, and b) its contract prohibits users from sharing search results with third parties. We consumers have only DRN's word and assurances that it enforces its policies and that users comply. As we have seen with Facebook data breaches, it is easy for bad actors to pose as valid users in order to do end runs around such policies.

What are your opinions of ALPR cameras and DRN?


Survey: Consumers Use Smart Home Devices Despite Finding Them 'Creepy'

Last month, Selligent Marketing Cloud announced the results of a global survey about how consumers view various brands. Some of the findings included smart speakers or voice assistants. Key findings:

"Sixty-nine percent of surveyed consumers find it “creepy” when they receive ads based on unprompted cues from voice assistants like Apple’s Siri, Amazon’s Alexa and Google Home. Fifty-one percent are worried that voice assistants are listening to conversations without their consent."

Regarding voice assistants, younger consumers are more likely to believe they are being listened to without their knowledge: 58 percent of Gen-Z (ages 18-24) held this view, versus 36 percent of Baby Boomers (ages 55-75). Key findings about privacy and social media: 41 percent of respondents said they have reduced their use of social media due to privacy concerns, and 32 percent said they quit at least one social media platform within the last 12 months.

Selligent surveyed 5,000 consumers in North America and Western Europe. The company provides services to help B2C marketers. To learn more, see the Selligent "Global Connected Consumer Index."


Report: World Shipments Of Smart Home Devices Forecasted To Grow To 815 Million In 2019, And To 1.39 Billion in 2023

A report by the International Data Corporation (IDC) has forecasted worldwide shipments of devices for smart homes to grow 23.5% in 2019 over 2018, to nearly 815 million. The report also forecasted a 14.4 percent compound annual growth rate, reaching about 1.39 billion shipments in 2023.
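The two forecast figures are consistent with each other, as a quick back-of-the-envelope check shows: 815 million units compounding at 14.4% per year for the four years from 2019 to 2023 lands right around 1.39 billion.

```python
# Sanity-check the IDC forecast: 815M shipments in 2019 growing at a
# 14.4% compound annual rate through 2023 (four years of compounding).
base_2019 = 815          # shipments in millions
cagr = 0.144
forecast_2023 = base_2019 * (1 + cagr) ** 4
print(round(forecast_2023))  # -> 1396 million, i.e. roughly 1.39 billion
```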

According to the announcement about the report:

"Video entertainment devices are expected to maintain the largest volume of shipments, accounting for 29.9% of all shipments in 2023... Home monitoring/security devices like smart cameras and smart locks will account for 22.1% of the shipments in 2023... Growth in smart speakers and displays is expected to slow to single digits in the next few years... as the installed base of these devices approaches saturation and consumers look to other form factors to access smart assistants in the home, such as thermostats, appliances, and TVs to name a few."

The report, titled "Worldwide Quarterly Smart Home Device Tracker," includes familiar products such as Amazon Echo, Google Home, Philips Hue bulbs, smart speakers, smart thermostats, and connected doorbells. The report covers Asia/Pacific, Canada, Central and Eastern Europe, China, Japan, Latin America, the Middle East and Africa, the United States, and Western Europe.

Surveys in 2018 found that most consumers are satisfied with in-home voice-controlled assistants, and that performance issues hinder trust and device adoption. A survey in 2017 found that 90 percent of consumers want security built into smart-home devices. Also in 2017, researchers warned that a hacked Amazon Echo could be turned into an always-on surveillance device.

And, consumers should use these privacy tips for smart speakers in their homes.

Today's smart homes contain a variety of internet-connected appliances -- televisions, utility meters, hot water heaters, thermostats, refrigerators, security systems, solar panels -- and internet-connected devices you might not expect: mouse traps, water bowls and feeders for your pets, wine bottles, crock pots, toy dolls, trash/recycle bins, vibrators, orgasm trackers, and adult sex toys. It is a connected world, indeed.


3 Countries Sent A Joint Letter Asking Facebook To Delay End-To-End Encryption Until Law Enforcement Has Back-Door Access. 58 Concerned Organizations Responded

Plenty of privacy and surveillance news recently. Last week, the governments of three countries sent a joint, open letter to Facebook.com asking the social media platform to delay implementation of end-to-end encryption in its messaging apps until back-door access can be provided for law enforcement.

BuzzFeed News published the joint, open letter by U.S. Attorney General William Barr, United Kingdom Home Secretary Priti Patel, acting U.S. Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton. The letter, dated October 4th, was sent to Mark Zuckerberg, the Chief Executive Officer of Facebook. It read in part:

"OPEN LETTER: FACEBOOK’S “PRIVACY FIRST” PROPOSALS

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

In your post of 6 March 2019, “A Privacy-Focused Vision for Social Networking,” you acknowledged that “there are real safety concerns to address before we can implement end-to-end encryption across all our messaging services.” You stated that “we have a responsibility to work with law enforcement and to help prevent” the use of Facebook for things like child sexual exploitation, terrorism, and extortion. We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.

We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications. We also respect promises made by technology companies to protect users’ data. Law abiding citizens have a legitimate expectation that their privacy will be protected. However, as your March blog post recognized, we must ensure that technology companies protect their users and others affected by their users’ online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world..."

The open, joint letter is also available on the United Kingdom government site. Mr. Zuckerberg's complete March 6, 2019 post is available here.

Earlier this year, the U.S. Federal Bureau of Investigation (FBI) issued a Request For Proposals (RFP) seeking quotes from technology companies to build a real-time social media monitoring tool. It seems such a tool would have limited utility without back-door access to encrypted social media accounts.

In 2016, the Federal Bureau of Investigation (FBI) filed a lawsuit to force Apple Inc. to build "back door" software to unlock an attacker's iPhone. Apple refused, as back-door software would provide access to any iPhone, not only that particular smartphone. Ultimately, the FBI found an offshore tech company to build the backdoor. Later that year, then-FBI Director James Comey suggested a national discussion about encryption versus safety. It seems the country still hasn't had that conversation.

According to BuzzFeed, Facebook's initial response to the joint letter:

"In a three paragraph statement, Facebook said it strongly opposes government attempts to build backdoors."

We shall see if Facebook holds steady to that position. Privacy advocates quickly weighed in. The Electronic Frontier Foundation (EFF) wrote:

"This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people. Facebook should not comply. The letter comes in concert with the signing of a new agreement between the US and UK to provide access to allow law enforcement in one jurisdiction to more easily obtain electronic data stored in the other jurisdiction. But the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation... The letter focuses on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people—including journalists, human rights activists, and those at risk of abuse by intimate partners—use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations... What’s more, the backdoors into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, who might seek to spy on dissidents..."

The new agreement the EFF referred to was explained in this United Kingdom announcement:

"The world-first UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather than through governments, which can take years... The current process, which see requests for communications data from law enforcement agencies submitted and approved by central governments via Mutual Legal Assistance (MLA), can often take anywhere from six months to two years. Once in place, the Agreement will see the process reduced to a matter of weeks or even days."

"The Agreement will each year accelerate dozens of complex investigations into suspected terrorists and paedophiles... The US will have reciprocal access, under a US court order, to data from UK communication service providers. The UK has obtained assurances which are in line with the government’s continued opposition to the death penalty in all circumstances..."

On Friday, a group of 58 privacy advocates and concerned organizations from several countries sent a joint letter to Facebook regarding its end-to-end encryption plans. The Center For Democracy & Technology (CDT) posted the group's letter:

"Given the remarkable reach of Facebook’s messaging services, ensuring default end-to-end security will provide a substantial boon to worldwide communications freedom, to public safety, and to democratic values, and we urge you to proceed with your plans to encrypt messaging through Facebook products and services. We encourage you to resist calls to create so-called “backdoors” or “exceptional access” to the content of users’ messages, which will fundamentally weaken encryption and the privacy and security of all users."

It seems wise to have a conversation to discuss all of the advantages and disadvantages, and not to focus selectively upon some serious crimes while ignoring other significant risks, since back-door software can be abused like any other technology. What are your opinions?


Survey Asked Americans Which They Consider Safer: Self-Driving Ride-Shares Or Solo Ride-Shares With Human Drivers

Many consumers use ride-sharing services, such as Lyft and Uber. We all have heard about self-driving cars. A polling firm asked consumers a very relevant question: "Which ride is trusted more? Would you rather take a rideshare alone or a self-driving car?" The results may surprise you.

The questions are relevant given news reports about sexual assaults and kidnappings by ride-sharing drivers and imposters. A pedestrian death involving a self-driving ride-sharing car highlighted the ethical issues about whom machines should save when fatal crashes can't be avoided. Developers have admitted that self-driving cars can be hacked by bad actors, just like other computers and mobile devices. And, new car buyers stated clear preferences when considering self-driving (a/k/a autonomous) vehicles versus standard vehicles with self-driving modes.

Using Google Consumer Surveys, The Zebra surveyed 2,000 persons in the United States in August 2019 and found:

"53 percent of people felt safer taking a self-driving car than driver-operated rideshare alone; Baby Boomers (age 55-plus) were the only age group to prefer a solo Uber ride over a driverless car; Gen Z (ages 18–24) were most open to driverless rideshares: 40 percent said they were willing to hail a ride from one."

Founded 7 years ago, The Zebra describes itself as, "the nation's leading insurance comparison site." The survey also found:

"... Baby Boomers were the only group to trust solo ridesharing more than they would a ride in a self-driving car... despite women being subjected to higher rates of sexual violence, the poll found women were only slightly more likely than men to choose a self-driving car over ridesharing alone (53 percent of women compared to 52 percent of men)."

It seems safe to assume: trust it or not, the tech is coming. Quickly. What are your opinions?


Mashable: 7 Privacy Settings iPhone Users Should Enable Today

Most people want to get the most from their smartphones. That includes using their devices wisely and with privacy. Mashable recommended seven privacy settings for Apple iPhone users. I found the recommendations very helpful, and thought that you would, too.

Three privacy settings stood out. First, many mobile apps have:

"... access to your camera. For some of these, the reasoning is a no-brainer. You want to be able to use Snapchat filters? Fine, the app needs access to your camera. That makes sense. Other apps' reasoning for having access to your camera might be less clear. Once again, head to Settings > Privacy > Camera and review what apps you've granted camera access. See anything in there that doesn't make sense? Go ahead and disable it."

A feature most consumers probably haven't considered:

"... which apps on your phone have requested microphone access. For example, do you want Drivetime to have access to your mic? No? Because if you've downloaded it, then it might. If an app doesn't have a clear reason for needing access to your microphone, don't give it that access."

And, perhaps most importantly:

"Did you forget about your voicemail? Hackers didn't. At the 2018 DEF CON, researchers demonstrated the ability to brute force voicemail accounts and use that access to reset victims' Google and PayPal accounts... Set a random 9-digit voicemail password. Go to Settings > Phone and scroll down to "Change Voicemail Password." Your iPhone should let you choose a 9-digit code..."
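Why nine digits, and why random? Each added digit multiplies the brute-force search space by ten, and a randomly chosen code defeats guesses based on birthdays or repeated digits. A short Python sketch (not an iPhone feature, just an illustration) makes both points:

```python
import secrets

# A 4-digit PIN has 10^4 combinations; a 9-digit PIN has 10^9.
# Each extra digit multiplies a brute-force attacker's work by ten.
print(f"4-digit search space: {10**4:,}")   # 10,000
print(f"9-digit search space: {10**9:,}")   # 1,000,000,000

# Generate a random 9-digit PIN using a cryptographically secure source,
# rather than a guessable pattern like a birthday or "123456789".
pin = "".join(secrets.choice("0123456789") for _ in range(9))
print(pin)
```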

The full list is a reminder for consumers not to assume that the default settings on mobile apps you install are right for your privacy needs. Wise consumers check and make adjustments.


Privacy Study Finds Consumers Less Likely To Share Several Key Data Elements

Last month, the Advertising Research Foundation (ARF) announced the results of its 2019 Privacy Study, which was conducted in March. The survey included 1,100 consumers in the United States, weighted by age, gender, and region. Key findings, including device and internet usage:

"The key differences between 2018 and 2019 are: i) People are spending more time on their mobile devices and less time on their PCs; ii) People are spending more time checking email, banking, listening to music, buying things, playing games, and visiting social media via mobile apps; iii) In general, people are only slightly less likely to share their data than last year; iv) They are least likely to share their social security number; financial and medical information; and their home address and phone numbers; v) People seem to understand the benefits of personalized advertising, but do not value personalization highly and do not understand the technical approaches through which it is accomplished..."

Advertisers use these findings to adjust their advertising, offers, and pitches to maximize responses by consumers. More detail about the above privacy and data sharing findings:

"In general, people were slightly less likely to share their data in 2019 than they were in 2018. They were least likely to share their social security number; financial and medical information; their work address; and their home address and phone numbers in both years. They were most likely to share their gender, race, marital status, employment status, sexual orientation, religion, political affiliation, and citizenship... The biggest changes in respondents’ willingness to share their data from 2018 to 2019 were seen in their home address (-10 percentage points), spouse’s first and last name (-8 percentage points), personal email address (-7 percentage points), and first and last names (-6 percentage points)."

The researchers asked the data sharing question in two ways:

  1. "Which of the following types of information would you be willing to share with a website?"
  2. "Which of the following types of information would you be willing to share for a personalized experience?"

The survey included 20 information types for both questions. For the first question, survey respondents' willingness to share decreased for 15 of 20 information types, remained constant for two information types, and increased slightly for the remainder:

Which of the following types of information would you be willing to share with a website?

Information Type                2018 %   2019 %   Change vs. 2018
Birth Date                        71       68       (3)
Citizenship Status                82       79       (3)
Employment Status                 84       82       (2)
Financial Information             23       20       (3)
First & Last Name                 69       63       (6)
Gender                            93       93       --
Home Address                      41       31       (10)
Home Landline Phone Number        33       30       (3)
Marital Status                    89       85       (4)
Medical Information               29       26       (3)
Personal Email Address            61       54       (7)
Personal Mobile Phone Number      34       32       (2)
Place Of Birth                    62       58       (4)
Political Affiliation             76       77       1
Race or Ethnicity                 90       91       1
Religious Preference              78       79       1
Sexual Orientation                80       79       (1)
Social Security Number            10       10       --
Spouse's First & Last Name        41       33       (8)
Work Address                      33       31       (2)

The researchers asked about citizenship status due to controversy related to the upcoming 2020 Census. The researchers concluded:

"The survey finding most relevant to these proposals is that the public does not see the value of sharing data to improve personalization of advertising messages..."

Overall, it appears that consumers are getting wiser about their privacy. Consumers' willingness to share decreased for more items than it increased for. View the detailed ARF 2019 Privacy Survey (Adobe PDF).


Google Claims Blocking Cookies Is Bad For Privacy. Researchers: Nope. That Is 'Privacy Gaslighting'

The announcement by Google last week included some dubious claims, which received a fair amount of attention among privacy experts. First, a Senior Product Manager of User Privacy and Trust wrote in a post:

"Ads play a major role in sustaining the free and open web. They underwrite the great content and services that people enjoy... But the ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. The mission is clear: we need to ensure that people all around the world can continue to access ad supported content on the web while also feeling confident that their privacy is protected. As we shared in May, we believe the path to making this happen is also clear: increase transparency into how digital advertising works, offer users additional controls, and ensure that people’s choices about the use of their data are respected."

Okay, that is a fair assessment of today's internet. And, more transparency is good. Google executives are entitled to their opinions. The post also stated:

"The web ecosystem is complex... We’ve seen that approaches that don’t account for the whole ecosystem—or that aren’t supported by the whole ecosystem—will not succeed. For example, efforts by individual browsers to block cookies used for ads personalization without suitable, broadly accepted alternatives have fallen down on two accounts. First, blocking cookies materially reduces publisher revenue... Second, broad cookie restrictions have led some industry participants to use workarounds like fingerprinting, an opaque tracking technique that bypasses user choice and doesn’t allow reasonable transparency or control. Adoption of such workarounds represents a step back for user privacy, not a step forward."

So, Google claims that blocking cookies is bad for privacy. With a statement like that, the "User Privacy and Trust" title seems like an oxymoron. Maybe that's the best one can expect from a company that gets 87 percent of its revenues from advertising.

Also on August 22nd, the Director of Chrome Engineering repeated this claim and proposed new internet privacy standards:

"... we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox. Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent... some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences. First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected... Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web..."
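To illustrate the mechanism the Chrome post describes, here is a minimal sketch of fingerprinting: combine a handful of individually innocuous attributes and hash them into a stable identifier. The attribute values below are invented for demonstration; real fingerprinting scripts gather far more signals (canvas rendering, audio stack, plugins, and so on).

```python
import hashlib

# Made-up example attributes of the kind a fingerprinting script collects.
attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "installed_fonts": "Arial,Calibri,Comic Sans MS,Garamond",
    "screen": "1920x1080x24",
    "timezone": "America/New_York",
}

# The same attributes always hash to the same identifier, so the user can
# be re-identified across websites -- and, unlike a cookie, there is
# nothing stored on the device for the user to clear.
raw = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(raw.encode()).hexdigest()[:16]
print(fingerprint)
```

The key property is determinism: no data needs to be stored on the user's device, which is exactly why clearing cookies does nothing against it.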

Yes, fingerprinting is a nasty, privacy-busting technology. No argument with that. But, blocking cookies is bad for privacy? Really? Come on, let's be honest.

This dubious claim ignores corporate responsibility... that some advertisers and website operators made choices -- conscious decisions to use more invasive technologies like fingerprinting to do an end-run around users' needs, desires, and actions to regain online privacy. Sites and advertisers made those invasive-tech choices when other options were available, such as using subscription services to pay for their content.

Plus, Google's claim also ignores the push by corporate internet service providers (ISPs) which resulted in the repeal of online privacy protections for consumers thanks to a compliant, GOP-led Federal Communications Commission (FCC), which seems happy to tilt the playing field further towards corporations and against consumers. So, users are simply trying to regain online privacy.

During the past few years, both privacy-friendly web browsers (e.g., Brave, Firefox) and search engines (e.g., DuckDuckGo) have emerged to meet consumers' online privacy needs. (Well, it's not only consumers that need online privacy. Attorneys and businesses need it, too, to protect their intellectual property and proprietary business methods.) Online users demanded choice, something advertisers need to remember and value.

Privacy experts weighed in about Google's blocking-cookies-is-bad-for-privacy claim. Jonathan Mayer and Arvind Narayanan explained:

"That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight. Our high-level points are: 1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting; 2) There is little trustworthy evidence on the comparative value of tracking-based advertising; 3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical; and 4) Google is attempting a punt to the web standardization process, which will at best result in years of delay."

The researchers debunked Google's claim with more details:

"Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true. If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today."

Moreover:

"... there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections. Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general."
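To make the mechanics concrete: a third-party tracker embedded on many sites can recognize the same browser everywhere its cookie is sent back, which is what links your visits together. The toy simulation below (all domain names invented, and vastly simpler than real browsers and trackers) shows why blocking third-party cookies breaks that linkage:

```python
# Minimal simulation of third-party cookie tracking -- illustrative only.

import uuid

class TrackerServer:
    """Stands in for a third-party ad/tracker domain embedded on many sites."""
    def __init__(self):
        self.visits_by_cookie = {}  # cookie id -> list of first-party sites seen

    def handle_request(self, first_party_site, cookie=None):
        # If the browser presents no cookie, issue a fresh unique ID
        # (the Set-Cookie header in a real HTTP response).
        if cookie is None:
            cookie = str(uuid.uuid4())
        self.visits_by_cookie.setdefault(cookie, []).append(first_party_site)
        return cookie

class Browser:
    def __init__(self, block_third_party=False):
        self.block_third_party = block_third_party
        self.cookie_jar = {}  # tracker domain -> cookie value

    def visit(self, site, tracker):
        if self.block_third_party:
            return  # the embedded tracker request never carries a cookie
        stored = self.cookie_jar.get("tracker.example")
        self.cookie_jar["tracker.example"] = tracker.handle_request(site, stored)

tracker = TrackerServer()
chrome_like = Browser(block_third_party=False)
for site in ["news.example", "health.example", "shopping.example"]:
    chrome_like.visit(site, tracker)

# One cookie now links the user's visits across all three sites.
profile = next(iter(tracker.visits_by_cookie.values()))
print(profile)  # ['news.example', 'health.example', 'shopping.example']
```

With `block_third_party=True`, the tracker never receives a stable identifier, which is exactly the default protection the researchers credit to Safari and Firefox.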

Gaslighting, indeed. Online privacy is important. So, too, are consumers' choices and desires. Thanks to Mr. Mayer and Mr. Narayanan for the comprehensive response.

What are your opinions of cookie blocking? Of Google's claims?


ExpressVPN Survey Indicates Americans Care About Privacy. Some Have Already Taken Action

ExpressVPN published the results of its privacy survey. The survey, commissioned by ExpressVPN and conducted by Propeller Insights, included a representative sample of about 1,000 adults in the United States.

Overall, 29.3% of survey respondents said they had used a virtual private network (VPN) or a proxy network. Survey respondents cited three broad reasons for using a VPN service: 1) to avoid surveillance, 2) to access content, and 3) to stay safe online. Detailed survey results about surveillance concerns:

"The most popular reasons to use a VPN are related to surveillance, with 41.7% of respondents aiming to protect against sites seeing their IP, 26.4% to prevent their internet service provider (ISP) from gathering information, and 16.6% to shield against their local government."

Who performs the surveillance matters to consumers. People are more concerned with surveillance by companies than by law enforcement agencies within the U.S. government:

"Among the respondents, 15.9% say they fear the FBI surveillance, and only 6.4% fear the NSA spying on them. People are by far most worried about information gathering by ISPs (23.2%) and Facebook (20.5%). Google spying is more of a worry for people (5.9%) than snooping by employers (2.6%) or family members (5.1%)."

Concerns with internet service providers (ISPs) are not surprising, since these telecommunications companies enjoy a unique position that enables them to track all of consumers' online activities. Concerns about Facebook are not surprising, since it tracks both users and non-users, similar to advertising networks. The "protect against sites seeing their IP" finding suggests that consumers, or at least VPN users, want to protect themselves and their devices against advertisers, advertising networks, and privacy-busting mobile apps which track their geo-location.

Detailed survey results about content access concerns:

"... 26.7% use [a VPN service] to access their corporate or academic network, 19.9% to access content otherwise not available in their region, and 16.9% to circumvent censorship."

The survey also found that consumers generally trust their mobile devices:

"Only 30.5% of Android users are “not at all” or “not very” confident in their devices. iOS fares slightly better, with 27.4% of users expressing a lack of confidence."

The survey uncovered views about government intervention and policies:

"Net neutrality continues to be popular (70% more respondents support it rather than don’t), but 51.4% say they don’t know enough about it to form an opinion... 82.9% also believe Congress should enact laws to require tech companies to get permission before collecting personal data. Even more, 85.2% believe there should be fines for companies that lose users’ data, and 90.2% believe there should be further fines if the data is misused. Of the respondents, 47.4% believe Congress should go as far as breaking up Facebook and Google."

The survey found views about smart devices (e.g., door bells, voice assistants, smart speakers) installed in many consumers' homes, since these devices are equipped with always-on cameras and/or microphones:

"... 85% of survey respondents say they are extremely (24.7%), very (23.4%), or somewhat (28.0%) concerned about smart devices monitoring their personal habits... Almost a quarter (24.8%) of survey respondents do not own any smart devices at all, while almost as many (24.4%) always turn off their devices’ microphones if they are not using them. However, one-fifth (21.2%) say they always leave the microphone on. The numbers are similar for camera use..."

There are more statistics and findings in the entire survey report by ExpressVPN. I encourage everyone to read it.


Researcher Uncovers Several Browser Extensions That Track Users' Online Activity And Share Data

Many consumers use web browsers because websites offer full content and functionality, versus the pared-down versions available in mobile apps. A researcher has found that as many as four million consumers have been affected by browser extensions -- the optional add-ons for web browsers -- which collected sensitive personal and financial information.

Ars Technica reported about DataSpii, the name of the online privacy issue:

"The term DataSpii was coined by Sam Jadali, the researcher who discovered—or more accurately re-discovered—the browser extension privacy issue. Jadali intended for the DataSpii name to capture the unseen collection of both internal corporate data and personally identifiable information (PII).... DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics..."

At first glance, this may not sound important, but it is. Why? First, the data collected included the most sensitive and personal information:

"Home and business surveillance videos hosted on Nest and other security services; tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive, Intuit.com, and other online services; vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers; patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services; travel itineraries hosted on Priceline, Booking.com, and airline websites; Facebook Messenger attachments..."

I'll bet you thought your Facebook Messenger stuff was truly private. Second, because:

"... the published URLs wouldn’t open a page unless the person following them supplied an account password or had access to the private network that hosted the content. But even in these cases, the combination of the full URL and the corresponding page name sometimes divulged sensitive internal information. DataSpii is known to have affected 50 companies..."

Ars Technica also reported:

"Principals with both Nacho Analytics and the browser extensions say that any data collection is strictly "opt in." They also insist that links are anonymized and scrubbed of sensitive data before being published. Ars, however, saw numerous cases where names, locations, and other sensitive data appeared directly in URLs, in page titles, or by clicking on the links. The privacy policies for the browser extensions do give fair warning that some sort of data collection will occur..."
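It's easy to see why scrubbing URLs of sensitive data is so error-prone. The hypothetical sketch below (invented URLs and parameter names, not actual DataSpii data) strips a naive blocklist of query parameters, yet personal details survive in near-miss parameter names, URL paths, and page titles:

```python
# Why URL "anonymization" is fragile: sensitive details live in query strings,
# paths, and page titles, not just in obviously named fields.

from urllib.parse import urlsplit, parse_qs

# Hypothetical (URL, page title) pairs of the kind an extension might collect.
collected = [
    ("https://health.example/portal/patients/jane-doe/results?visit=2019-07-03",
     "Lab results - Jane Doe"),
    ("https://tax.example/return?ssn_last4=1234&year=2018",
     "2018 Tax Return"),
]

SCRUB_PARAMS = {"email", "name", "ssn"}  # a naive blocklist of "sensitive" keys

def naive_scrub(url):
    """Drop query parameters whose names exactly match the blocklist."""
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    kept = {k: v for k, v in params.items() if k not in SCRUB_PARAMS}
    query = "&".join(f"{k}={v[0]}" for k, v in kept.items())
    return parts._replace(query=query).geturl()

for url, title in collected:
    print(naive_scrub(url), "|", title)
# "ssn_last4" survives (it isn't an exact match for "ssn"), and the patient's
# name sits in the URL path and page title, untouched by query-string scrubbing.
```

Exact-match parameter scrubbing is only one of many plausible approaches, but the failure mode generalizes: any scrubber must anticipate every way a site encodes identity into a URL.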

So, the data collection may be legal, but is it ethical -- especially if the anonymization is partial? After the researcher's report went public, many of the suspect browser extensions were deleted from online stores. However, extensions already installed locally on users' browsers can still collect data:

"Beginning on July 3—about 24 hours after Jadali reported the data collection to Google—Fairshare Unlock, SpeakIt!, Hover Zoom, PanelMeasurement, Branded Surveys, and Panel Community Surveys were no longer available in the Chrome Web Store... While the notices say the extensions violate the Chrome Web Store policy, they make no mention of data collection nor of the publishing of data by Nacho Analytics. The toggle button in the bottom-right of the notice allows users to "force enable" the extension. Doing so causes browsing data to be collected just as it was before... In response to follow-up questions from Ars, a Google representative didn't explain why these technical changes failed to detect or prevent the data collection they were designed to stop... But removing an extension from an online marketplace doesn't necessarily stop the problems. Even after the removals of Super Zoom in February or March, Jadali said, code already installed by the Chrome and Firefox versions of the extension continued to collect visited URL information..."

Since browser developers haven't remotely disabled leaky browser extensions, and since online stores can't seem to consistently police extensions for privacy compliance, the burden falls upon consumers. The Ars Technica report lists the leaky browser extensions by name.

The bottom line: browser extensions can easily compromise your online privacy and security. That means, as with any other software, wise consumers should read independent online reviews first, read the developer's terms of use and privacy policy before installing the extension, and use a privacy-focused web browser.

Consumer Reports advises consumers to, a) install browser extensions only from companies you trust, and b) uninstall browser extensions you don't need or use. For consumers that don't know how, the Consumer Reports article also lists step-by-step instructions to uninstall browser extensions in Google Chrome, Firefox, Safari, and Internet Explorer branded web browsers.


FBI Seeks To Monitor Twitter, Facebook, Instagram, And Other Social Media Accounts For Violent Threats

Federal Bureau of Investigation logo The U.S. Federal Bureau of Investigation (FBI) issued on July 8th a Request For Proposals (RFP) seeking quotes from technology companies to build a "Social Media Alerting" tool, which would enable the FBI to monitor in real-time accounts in several social media services for violence threats. The RFP, which was amended on August 7th, stated:

"The purpose of this procurement is to acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources. A subscription to this service shall grant the Federal Bureau of Investigation (FBI) access to tools that will allow for the exploitation of lawfully collected/acquired data from social media platforms that will be stored, vetted and formatted by a vendor... This synopsis and solicitation is being issued as Request for Proposal (RFP) number DJF194750PR0000369 and... This announcement is supplemented by a detailed RFP Notice, an SF-33 document, an accompanying Statement of Objectives (SOO) and associated FBI documents..."

"Proactively identify" suggests the usage of software algorithms or artificial intelligence (AI). And, the vendor selected will archive the collected data for an undisclosed period of time. The RFP also stated:

"Background: The use of social media platforms, by terrorist groups, domestic threats, foreign intelligence services, and criminal organizations to further their illegal activity creates a demonstrated need for tools to properly identify the activity and react appropriately. With increased use of social media platforms by subjects of current FBI investigations and individuals that pose a threat to the United States, it is critical to obtain a service which will allow the FBI to identify relevant information from Twitter, Facebook, Instagram, and other Social media platforms in a timely fashion. Consequently, the FBI needs near real time access to a full range of social media exchanges..."

For context, in 2016 the FBI attempted to force Apple Computer to build "backdoor software" to unlock an alleged terrorist's iPhone in California. The FBI later found an offshore technology company to build its backdoor.

The documents indicate that the FBI wants its staff to use the tool at both headquarters and field-office locations globally, and with mobile devices. The SOO document stated:

"FBI personnel are deployed internationally and sometimes in areas of press censorship. A social media exploitation tool with international reach and paired with a strong language translation capability, can become crucial to their operations and more importantly their safety. The functions of most value to these individuals is early notification, broad international reach, instant translation, and the mobility of the needed capability."

The SOO also explained the data elements to be collected:

"3.3.2.2.1 Obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups through the corroboration of multiple social media sources... Items of interest in this context are social networks, user IDs, emails, IP addresses and telephone numbers, along with likely additional account with similar IDs or aliases... Any connectivity between aliases and their relationship must be identifiable through active link analysis mapping..."
"3.3.3.2.1 Online media is monitored based on location, determined by the users’ delineation or the import of overlays from existing maps (neighborhood, city, county, state or country). These must allow for customization as AOR sometimes cross state or county lines..."
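The "active link analysis mapping" the SOO describes boils down to connecting accounts that share identifiers such as emails, IP addresses, or phone numbers. A minimal sketch, with entirely invented accounts and identifiers, not the FBI's actual tooling:

```python
# Toy link analysis: aliases are linked when they share an identifier
# (email, IP address, phone number). All data here is invented.

from collections import defaultdict

accounts = {
    "@alias_one": {"mail@example.net", "203.0.113.7"},
    "@alias_two": {"mail@example.net", "+1-555-0100"},
    "@unrelated": {"other@example.org"},
}

# Invert the mapping: identifier -> accounts that used it.
by_identifier = defaultdict(set)
for account, identifiers in accounts.items():
    for ident in identifiers:
        by_identifier[ident].add(account)

def linked_accounts(start):
    """Walk the graph of accounts joined by shared identifiers."""
    seen, frontier = {start}, [start]
    while frontier:
        acct = frontier.pop()
        for ident in accounts[acct]:
            for neighbor in by_identifier[ident]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append(neighbor)
    return seen

print(sorted(linked_accounts("@alias_one")))  # ['@alias_one', '@alias_two']
```

The same graph walk works transitively: if a third alias shared a phone number with "@alias_two", it would be pulled into the cluster as well, which is why even one reused identifier can connect an entire network of accounts.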

While the document mentioned "user IDs" and didn't mention passwords, the implication seems clear that the FBI wants both in order to access and monitor social media accounts in real time. And the "other Social Media platforms" phrase raises questions. What is the full list of specific services it refers to? Why name only the three largest platforms?

As this FBI project proceeds, let's hope that the full list of social sites includes 8Chan, Reddit, Stormfront, and similar others. Why? In a study released in November of 2018, the Center for Strategic and International Studies (CSIS) found:

"Right-wing extremism in the United States appears to be growing. The number of terrorist attacks by far-right perpetrators rose over the past decade, more than quadrupling between 2016 and 2017. The recent pipe bombs and the October 27, 2018, synagogue attack in Pittsburgh are symptomatic of this trend. U.S. federal and local agencies need to quickly double down to counter this threat. There has also been a rise in far-right attacks in Europe, jumping 43 percent between 2016 and 2017... Of particular concern are white supremacists and anti-government extremists, such as militia groups and so-called sovereign citizens interested in plotting attacks against government, racial, religious, and political targets in the United States... There also is a continuing threat from extremists inspired by the Islamic State and al-Qaeda. But the number of attacks from right-wing extremists since 2014 has been greater than attacks from Islamic extremists. With the rising trend in right-wing extremism, U.S. federal and local agencies need to shift some of their focus and intelligence resources to penetrating far-right networks and preventing future attacks. To be clear, the terms “right-wing extremists” and “left-wing extremists” do not correspond to political parties in the United States..."

The CSIS study also noted:

"... right-wing terrorism commonly refers to the use or threat of violence by sub-national or non-state entities whose goals may include racial, ethnic, or religious supremacy; opposition to government authority; and the end of practices like abortion... Left-wing terrorism, on the other hand, refers to the use or threat of violence by sub-national or non-state entities that oppose capitalism, imperialism, and colonialism; focus on environmental or animal rights issues; espouse pro-communist or pro-socialist beliefs; or support a decentralized sociopolitical system like anarchism."

Terrorism is terrorism. All of it needs to be prosecuted, including left-wing, right-wing, domestic, and foreign. (This prosecutor is doing the right thing.) It seems wise to monitor the platforms where suspects congregate.

This project also raises questions about the effectiveness of monitoring social media. Will it really work? Digital Trends reported:

"Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and crucially, what you like to buy. Sometimes, an algorithm can get your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters... But preventing mass shootings before they happen raises thorny legal questions: how do you determine if someone is just angry online rather than someone who could actually carry out a shooting? Can you arrest someone if a computer thinks they’ll eventually become a shooter?"

Some social media users have already experienced inaccuracies (failures?) when sites present irrelevant advertisements and/or political party messaging based upon supposedly accurate software algorithms. The Digital Trends article also dug deeper:

"A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018... Twitter also frequently works to help facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if they did, that would mean flagging 166,513 people to the FBI – far more people than the agency could ever investigate."

Then, there is the problem of the content by users in social media posts:

"Even if someone does post to social media immediately before they decide to unleash violence, it’s often not something that would trip up either Twitter or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – once calling the food served there “overpriced” and a second post that told people to read a 19th-century pro-fascist book that’s popular with white nationalists."

Also, Amazon got caught up in the hosting mess with 8Chan. So, there is more news to come.

Last, this blog post explored the problems with emotion recognition by facial-recognition software. Let's hope this FBI project is not a waste of taxpayers' hard-earned money.


Tech Expert Concluded Google Chrome Browser Operates A Lot Like Spy Software

Many consumers still use web browsers. Which are better for your online privacy? You may be interested in this analysis by a tech expert:

"... I've been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads... My tests of Chrome vs. Firefox [browsers] unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker "cookies" that Chrome would have ushered right onto my computer but were automatically blocked by Firefox... Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service's log-in pages."

"And that's not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you're logged in to the browser, and Google might be tapping into your Web activity to target ads. Don't recall signing in? I didn't, either. Chrome recently started doing that automatically when you use Gmail... I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn't cause anybody's browsing history to be "synced" unless they specifically opted in — but I found mine was being sent to Google and don't recall ever asking for extra surveillance..."

Also:

"Google's product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they're working on new ones for cookies. But they also said they have to get the right balance with a "healthy Web ecosystem" (read: ad business). Firefox's product managers told me they don't see privacy as an "option" relegated to controls. They've launched a war on surveillance, starting last month with "enhanced tracking protection" that blocks nosy cookies by default on new Firefox installations..."
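At its core, tracking protection of this kind matches each third-party request against a blocklist of known tracking domains. The sketch below uses a tiny invented blocklist to show the decision logic, not Firefox's actual implementation:

```python
# Blocklist-based tracker blocking, reduced to its essentials. The two-entry
# blocklist below is invented for illustration.

TRACKER_BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(request_host, page_host, blocklist=TRACKER_BLOCKLIST):
    """Decide whether a request made from a page should be blocked."""
    # First-party requests (same site, or a subdomain of it) are allowed.
    if request_host == page_host or request_host.endswith("." + page_host):
        return False
    # Third-party requests are blocked if the host is a listed tracker
    # domain or any subdomain of one.
    return any(request_host == d or request_host.endswith("." + d)
               for d in blocklist)

print(is_blocked("cdn.news.example", "news.example"))      # False (first party)
print(is_blocked("pixel.tracker.example", "news.example")) # True
```

Real implementations add exceptions, entity lists (so a company's own subdomains aren't treated as third parties), and per-site overrides, but the match-against-a-list core is the same.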

This tech expert concluded:

"It turns out, having the world's biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop. It made me decide to ditch Chrome for a new version of nonprofit Mozilla's Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine."

Regular readers of this blog are aware of how Google tracks consumers' online purchases, the worst mobile apps for privacy, and privacy alternatives such as the Brave web browser, the DuckDuckGo search engine, virtual private network (VPN) software, and more. Yes, you can use the Firefox browser on your Apple iPhone. I do.

Me? I've used the Firefox browser since about 2010 on my (Windows) laptop, and the DuckDuckGo search engine since 2013. I stopped using Bing, Yahoo, and Google search engines in 2013. While Firefox installs with Google as the default search engine, you can easily switch it to DuckDuckGo. I did. I am very happy with the results.

Which web browser and search engine do you use? What do you do to protect your online privacy?


The Worst Mobile Apps For Privacy

ExpressVPN compiled its 2019 list of the four worst mobile apps for privacy. If you value your online privacy and want to protect yourself, the VPN provider advises consumers to "Delete them now." The list of apps includes both predictable items and some surprises:

"1. Angry Birds: If you were an international spying organization, which app would you target to harvest smartphone user information? If you picked Angry Birds, congratulations! You’re thinking just like the NSA and GCHQ did... what it lacks in gameplay, it certainly makes up for in leaky data... A mobile ad platform placed a code snippet in Angry Birds that allowed the company to target advertisements to users based on previously collected information. Unfortunately, the ad’s library of data was visible, meaning it was leaking user information such as phone number, call logs, location, political affiliation, sexual orientation, and marital status..."

"2. The YouVersion Bible App: The YouVersion Bible App is on more than 300 million devices around the world. It claims to be the No. 1 Bible app and comes with over 1,400 Bibles in over 1,000 languages. It also harvests data... Notable permissions the app demands are full internet access, the ability to connect and disconnect to Wi-Fi, modify stored content on the phone, track the device’s location, and read all a user’s contacts..."

Read the full list of sketchy apps at the ExpressVPN site.


How To Do Online Banking Safely And Securely

Most people love the convenience of online banking via their smartphone or other mobile device. However, it is important to do it safely and securely. How? NordVPN listed four items:

"1. Don't lose your phone: The biggest security threat of your mobile phone is also its greatest asset – its size. Phones are small, handy, beautiful, and easy to lose..."

So, keep your phone in your hand. Never place it on a table out of sight. Of course, you should lock your phone with a strong password. NordVPN commented about other locking options:

"Facial recognition: convenient but not secure, since it can sometimes be bypassed with a photograph... Fingerprints: low false-acceptance rates, perfect if you don’t often wear gloves."

More advice:

"2. Use the official banking app, not the browser... If you aren’t careful, you could download a fake banking app created by scammers to break into your account. Make sure your bank created or approves of the app you are downloading. Get it from their website. Moreover, do not use mobile browsers to log in to your bank account – they are less secure than bank-sanctioned apps..."

Obviously, you should sign out of the mobile app when finished with your online banking session. Otherwise, a thief with your stolen phone has direct access to your money and accounts. NordVPN also advises consumers to do their homework first: read app ratings and reviews before downloading any mobile apps.

Readers of this blog are probably familiar with the next item:

"4. Don’t use mobile banking on public Wi-Fi: Anyone on a public Wi-Fi network is in danger of a security breach. Most of these networks lack basic security measures and have poor router configurations and weak passwords..."

Popular places with public Wi-Fi include coffee shops, fast food restaurants, supermarkets, airports, libraries, and hotels. If you must do online banking in a public place, NordVPN advised:

"... use your cellular network instead. It’s not perfect, but it’s better than public Wi-Fi. Better yet, turn on a virtual private network (VPN) and then use public Wi-Fi..."

There you have it. Read the entire online banking article by NordVPN. Ignore this advice at your own peril.


Brave Alerts FTC On Threats From Business Practices With Big Data

The U.S. Federal Trade Commission (FTC) held a "Privacy, Big Data, And Competition" hearing on November 6-8, 2018 as part of its "Competition And Consumer Protection in the 21st Century" series of discussions. During that session, the FTC asked for input on several topics:

  1. "What is “big data”? Is there an important technical or policy distinction to be drawn between data and big data?
  2. How have developments involving data – data resources, analytic tools, technology, and business models – changed the understanding and use of personal or commercial information or sensitive data?
  3. Does the importance of data – or large, complex data sets comprising personal or commercial information – in a firm’s ordinary course operations change how the FTC should analyze mergers or firm conduct? If so, how? Does data differ in importance from other assets in assessing firm or industry conduct?
  4. What structural, behavioral or conduct remedies should the FTC consider when remedying antitrust harm in a market or industry where data or personal or commercial information are a significant product or a key competitive input?
  5. Are there policy recommendations that would facilitate competition in markets involving data or personal or commercial information that the FTC should consider?
  6. Do the presence of personal information or privacy concerns inform or change competition analysis?
  7. How do state, federal, and international privacy laws and regulations, adopted to protect data and consumers, affect competition, innovation, and product offerings in the United States and abroad?"

Brave, the developer of a web browser, submitted comments to the FTC which highlighted two concerns:

"First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate. Second, the GDPR is emerging as a de facto international standard. Whether this helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws."

A letter by Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, described in detail the company's concerns:

"The cross-use and offensive leveraging of personal information from one line of business to another is likely to have anti-competitive effects. Indeed anti-competitive practices may be inevitable when companies with Google’s degree of market dominance update their privacy policies to include the cross-use of personal information. The result is that a company can leverage all the personal information accumulated from its users in one line of business to dominate other lines of business too. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects... The result is that nascent and potential competitors will be stifled, and consumer choice will be limited... The cross-use of data between different lines of business is analogous to the tying of two products. Indeed, tying and cross-use of data can occur at the same time, as Google Chrome’s latest “auto sign in to everything” controversy illustrates..."

Historically, Google let Chrome web browser users decide whether or not to sign in for cross-device usage. The Chrome 69 update forced auto sign-in, but a Chrome 70 update restored users' choice after numerous complaints and criticism.

Regarding topic #7 by the FTC, Brave's response said:

"A de facto international standard appears to be emerging, based on the European Union’s General Data Protection Regulation (GDPR)... the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China bring more than half of global GDP under a common standard. Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies... there is an opportunity in this trend. The United States can assume the global lead by adopting the emerging GDPR standard, and by investing in world-leading regulation that pursues test cases, and defines practical standards..."

Currently, companies collect, archive, share, and sell consumers' personal information at will -- often without notice nor consent. While all 50 states and territories have breach notification laws, most states have not upgraded their breach notification laws to include biometric and passport data. While the Health Insurance Portability and Accountability Act (HIPAA) is the federal law which governs healthcare data and related breaches, many consumers share health data with social media sites -- robbing themselves of HIPAA protections.

Moreover, it's an unregulated free-for-all of data collection, archiving, and sharing by telecommunications companies after the revoking in 2017 of broadband privacy protections for consumers in the USA. Plus, laws have historically focused upon "declared data" (e.g., the data users upload or submit into websites or apps) while ignoring "inferred data" -- which is arguably just as sensitive and revealing.

Regarding future federal privacy legislation, Brave added:

"... The GDPR is compatible with a United States view of consumer protection and privacy principles. Indeed, the FTC has proposed important privacy protections to legislators in 2009, and again in 2012 and 2014, which ended up being incorporated in the GDPR. The high-level principles of the GDPR are closely aligned, and often identical to, the United States’ privacy principles... The GDPR also incorporates principles endorsed by the U.S. in the 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data; and the principles endorsed by the United States this year, in Article 19.8 (3) of the new United States-Mexico-Canada Agreement."

"The GDPR differs from established United States privacy principles in its explicit reference to “proportionality” as a precondition of data use, and in its more robust approach to data minimization and to purpose specification. In our view, a federal law should incorporate these elements too. We also recommend that federal law should adopt the GDPR definitions of concepts such as “personal data”, “legal basis” including opt-in “consent”, “processing”, “special category personal data”, ”profiling”, “data controller”, “automated decision making”, “purpose limitation”, and so forth, and tools such as data protection impact assessments, breach notification, and records of processing activities."

"In keeping with the fair information practice principles (FIPPs) of the 1974 US Privacy Act, Brave recommends that a federal law should require that the collection of personal information is subject to purpose specification. This means that personal information shall only be collected for specific and explicit purposes. Personal information should not used beyond those purposes without consent, unless a further purpose is poses no risk of harm and is compatible with the initial purpose, in which case the data subject should have the opportunity to opt-out."

Submissions by Brave and others are available to the public at the FTC website in the "Public Comments" section.


Study: Privacy Concerns Have Caused Consumers To Change How They Use The Internet

Facebook commissioned a study by the Economist Intelligence Unit (EIU) to understand "internet inclusion" globally, or how people use the Internet, the benefits received, and the obstacles experienced. The latest survey included 5,069 respondents from 100 countries in Asia-Pacific, the Americas, Europe, the Middle East, North Africa and Sub-Saharan Africa.

Overall findings in the report cited:

"... cause for both optimism and concern. We are seeing steady progress in the number and percentage of households connected to the Internet, narrowing the gender gap and improving accessibility for people with disabilities. The Internet also has become a crucial tool for employment and obtaining job-related skills. On the other hand, growth in Internet connections is slowing, especially among the lowest income countries, and efforts to close the digital divide are stalling..."

The EIU describes itself as "the world leader in global business intelligence, to help companies, governments and banks understand how the world is changing, seize opportunities created by those changes, and manage associated risks." So, any provider of social media services globally would greatly value the EIU's services.

The chart below highlights some of the benefits mentioned by survey respondents:

Chart-internet-benefits-eiu-2019

Respondents cited other benefits: almost three-quarters (74.4%) said the Internet is more effective than other methods for finding jobs; 70.5% said their job prospects have improved due to the Internet; and more. So, job seekers and employers both benefit.

Key findings regarding online privacy (emphasis added):

"... More than half (52.2%) of [survey] respondents say they are not confident about their online privacy, hardly changed from 51.5% in the 2018 survey... Most respondents are changing the way they use the Internet because they believe some information may not remain private. For example, 55.8% of respondents say they limit how much financial information they share online because of privacy concerns. This is relatively consistent across different age groups and household income levels... 42.6% say they limit how much personal health and medical information they share. Only 7.5% of respondents say privacy concerns have not changed the way they use the Internet."

So, the lack of online privacy affects how people use the internet -- for business and pleasure. The chart below highlights the types of online changes:

Chart-internet-usage-eiu-2019

Findings regarding privacy and online shopping:

"Despite lingering privacy concerns, people are increasingly shopping online. Whether this continues in the future may hinge on attitudes toward online safety and security... A majority of respondents say that making online purchases is safe and secure, but, at 58.8% it was slightly lower than the 62.1% recorded in the 2018 survey."

So, the percentage of respondents who consider online purchases safe and secure went in the wrong direction -- down. Not good. There were regional differences, too, about online privacy:

"In Europe, the share of respondents confident about their online privacy increased by 8 percentage points from the 2018 survey, probably because of the General Data Protection Regulation (GDPR), the EU’s comprehensive data privacy rules that came into force in May 2018. However, the Middle East and North Africa region saw a decline of 9 percentage points compared with the 2018 survey."

So, sensible legislation to protect consumers' online privacy can have positive impacts. There were other regional differences:

"Trust in online sources of information remained relatively stable, except in the West. Political turbulence in the US and UK may have played a role in causing the share of respondents in North America and Europe who say they trust information on government websites and apps to retreat by 10 percentage points and 6 percentage points, respectively, compared with the 2018 survey."

So, stability is important. The report's authors concluded:

"The survey also reflects anxiety about online privacy and a decline in trust in some sources of information. Indeed, trust in government information has fallen since last year in Europe and North America. The growth and importance of the digital economy will mean that alleviating these anxieties should be a priority of companies, governments, regulators and developers."

Addressing those anxieties is critical, if governments in the West are serious about facilitating business growth via consumer confidence and internet usage. Download the Inclusive Internet Index 2019 Executive Summary (Adobe PDF) report.


'Software Pirates' Stole Apple Tech To Distribute Hacked Mobile Apps To Consumers

Prior news reports highlighted the abuse of Apple's corporate digital certificates. Now, we learn that this abuse is more widespread than first thought. CNet reported:

"Pirates used Apple's enterprise developer certificates to put out hacked versions of some major apps... The altered versions of Spotify, Angry Birds, Pokemon Go and Minecraft make paid features available for free and remove in-app ads... The pirates appear to have figured out how to use digital certs to get around Apple's carefully policed App Store by saying the apps will be used only by their employees, when they're actually being distributed to everyone."

So, bad actors abuse technology intended for a company's employees to distribute apps directly to consumers. Software pirates, indeed.

To avoid hacked apps, consumers should shop wisely and download only from trusted sources. A fix is underway. According to CNet:

"Apple will reportedly take steps to fight back by requiring all app makers to use its two-factor authentication protocol from the end of February, so logging into an Apple ID will require a password and code sent to a trusted Apple device."

Let's hope that fix is sufficient.


Popular iOS Apps Record All In-App Activity Causing Privacy, Data Security, And Other Issues

As the internet has evolved, user testing and market research practices have evolved with it. This may surprise consumers. TechCrunch reported that many popular Apple mobile apps record everything customers do with the apps:

"Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play them back to see how its users interacted with the app to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers."

So, customers' entire app sessions and activities have been recorded. Of course, marketers need to understand their customers' needs, and how users interact with their mobile apps, to build better products, services, and apps. However, in doing so, some apps have introduced security vulnerabilities:

"The App Analyst... recently found Air Canada’s iPhone app wasn’t properly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles."

Not good, for a couple of reasons. First, sensitive data like payment information (e.g., credit/debit card numbers, passport numbers, bank account numbers, etc.) should be masked. Second, when sensitive information isn't masked, more data security problems arise. How long is this app usage data archived? Which employees, contractors, and business partners have access to the archive? What security methods are used to protect the archive from abuse?

In short, unauthorized persons may have access to the archives and the sensitive information they contain. For example, market researchers probably have little or no need for specific customers' payment information. Sensitive information in these archives should be encrypted to provide the best protection from abuse and from data breaches.
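The masking step itself is straightforward. Below is a minimal sketch, not any vendor's actual implementation; the regular expressions are illustrative assumptions, and a production system would use stricter detection (for example, Luhn checksums for card numbers):

```python
import re

# Illustrative patterns only -- real detection would be stricter.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,16}\b"),          # card-number-like digit runs
    re.compile(r"\b[A-Z]{1,2}\d{6,8}\b"),  # passport-like identifiers
]


def mask_sensitive(text: str) -> str:
    """Replace anything matching a sensitive pattern with asterisks of
    the same length, before the text is logged or transmitted."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub(lambda m: "*" * len(m.group()), text)
    return text
```

With a filter like this in place, `mask_sensitive("card 4111111111111111")` yields `"card ****************"`, so a session replay never carries the raw number -- the safeguard Air Canada's app reportedly lacked.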

Sadly, there is more bad news:

"Apps that are submitted to Apple’s App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user’s screen... Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either."

So, the app session recordings were done covertly... without explicit language to provide meaningful and clear notice to consumers. I encourage everyone to read the entire TechCrunch article, which also includes responses by some of the companies mentioned. In my opinion, most of the responses fell far short with lame, boilerplate statements.

All of this is very troubling. And, there is more.

The TechCrunch article didn't discuss it, but historically companies hired testing firms to recruit user test participants -- usually current and prospective customers. Test participants were paid for their time. (I know because as a former user experience professional I conducted such in-person test sessions where clients paid test participants.) Things have changed. Not only has user testing and research migrated online, but companies use automated tools to perform perpetual, unannounced user testing -- all without compensating test participants.

While change is inevitable, not all change is good. Plus, things can be done in better ways. If the test information is that valuable, then pay test participants. Otherwise, this seems like another example of corporate greed at consumers' expense. And, it's especially egregious if data transmissions of the recorded app sessions to developers' servers use up cellular data plan capacity consumers paid for. Some consumers (e.g., elders, children, the poor) cannot afford the costs of unlimited cellular data plans.

After this TechCrunch report, Apple notified developers to either stop or disclose screen recording:

"Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity... We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary..."

Good. That's a start. Still, user testing and market research is not a free pass for developers to ignore or skip data security best practices. Given these covert recorded app sessions, mobile apps must be continually tested. Otherwise, some ethically-challenged companies may re-introduce covert screen recording features. What are your opinions?


Survey: People In Relationships Spy On Cheating Partners. FTC: Singles Looking For Love Are The Biggest Target Of Scammers

Happy Valentine's Day! First, BestVPN announced the results of a survey of 1,000 adults globally about relationships and trust in today's digital age where social media usage is very popular. Key findings:

"... nearly 30% of respondents admitted to using tracking apps to catch a partner [suspected of or cheating]. After all, over a quarter of those caught cheating were busted by modern technology... 85% of those caught out in the past now take additional steps to protect their privacy, including deleting their browsing data or using a private browsing mode."

Below is an infographic with more findings from the survey.

Valentines-day-infograph-bestvpn-feb2019

Second, the U.S. Federal Trade Commission (FTC) issued a warning earlier this week about fraud affecting single persons:

"... romance scams generated more reported losses than any other consumer fraud type reported to the agency... The number of romance scams reported to the FTC has grown from 8,500 in 2015 to more than 21,000 in 2018, while reported losses to these scams more than quadrupled in recent years—from $33 million in 2015 to $143 million last year. For those who said they lost money to a romance scam, the median reported loss was $2,600, with those 70 and over reporting the biggest median losses at $10,000."

"Romance scammers often find their victims online through a dating site or app or via social media. These scammers create phony profiles that often involve the use of a stranger’s photo they have found online. The goals of these scams are often the same: to gain the victim’s trust and love in order to get them to send money through a wire transfer, gift card, or other means."

So, be careful out there. Don't cheat, and beware of scammers and dating imposters. You have been warned.


Senators Demand Answers From Facebook And Google About Project Atlas And Screenwise Meter Programs

After news reports surfaced about Facebook's Project Atlas, a secret program in which Facebook paid teenagers (and other users) to install a research app on their phones to track and collect information about their mobile usage, several United States Senators demanded explanations. Three Senators sent a joint letter on February 7, 2019 to Mark Zuckerberg, Facebook's chief executive officer.

The joint letter to Facebook (Adobe PDF format) stated, in part:

"We write concerned about reports that Facebook is collecting highly-sensitive data on teenagers, including their web browsing, phone use, communications, and locations -- all to profile their behavior without adequate disclosure, consent, or oversight. These reports fit with Longstanding concerns that Facebook has used its products to deeply intrude into personal privacy... According to a journalist who attempted to register as a teen, the linked registration page failed to impose meaningful checks on parental consent. Facebook has more rigorous mechanism to obtain and verify parental consent, such as when it is required to sign up for Messenger Kids... Facebook's monitoring under Project Atlas is particularly concerning because the data data collection performed by the research app was deeply invasive. Facebook's registration process encouraged participants to "set it and forget it," warning that if a participant disconnected from the monitoring for more than ten minutes for a few days, that they could be disqualified. Behind the scenes, the app watched everything on the phone."

The letter included another example highlighting the alleged lack of meaningful disclosures:

"... the app added a VPN connection that would automatically route all of a participant's traffic through Facebook's servers. The app installed a SSL root certificate on the participant's phone, which would allow Facebook to intercept or modify data sent to encrypted websites. As a result, Facebook would have limitless access to monitor normally secure web traffic, even allowing Facebook to watch an individual log into their bank account or exchange pictures with their family. None of the disclosures provided at registration offer a meaningful explanation about how the sensitive data is used, how long it is kept, or who within Facebook has access to it..."

The letter was signed by Senators Richard Blumenthal (Democrat, Connecticut), Edward J. Markey (Democrat, Massachusetts), and Josh Hawley (Republican, Missouri). Based upon news reports about how Facebook's research app operated with similar functionality to the Onavo VPN app, which Apple banned last year, the Senators concluded:

"Faced with that ban, Facebook appears to have circumvented Apple's attempts to protect consumers."

The joint letter also listed twelve questions the Senators want detailed answers about. Below are selected questions from that list:

"1. When did Project Atlas begin and how many individuals participated? How many participants were under age 18?"

"3. Why did Facebook use a less strict mechanism for verifying parental consent than is Required for Messenger Kids or Global Data Protection Requlation (GDPR) compliance?"

"4.What specific types of data was collected (e.g., device identifieers, usage of specific applications, content of messages, friends lists, locations, et al.)?"

"5. Did Facebook use the root certificate installed on a participant's device by the Project Atlas app to decrypt and inspect encrypted web traffic? Did this monitoring include analysis or retention of application-layer content?"

"7. Were app usage data or communications content collected by Project Atlas ever reviewed by or available to Facebook personnel or employees of Facebook partners?"

8." Given that Project Atlas acknowledged the collection of "data about [users'] activities and content within those apps," did Facebook ever collect or retain the private messages, photos, or other communications sent or received over non-Facebook products?"

"11. Why did Facebook bypass Apple's app review? Has Facebook bypassed the App Store aproval processing using enterprise certificates for any other app that was used for non-internal purposes? If so, please list and describe those apps."

Read the entire letter to Facebook (Adobe PDF format). Also on February 7th, the Senators sent a similar letter to Google (Adobe PDF format), addressed to Hiroshi Lockheimer, the Senior Vice President of Platforms & Ecosystems. It stated in part:

"TechCrunch has subsequently reported that Google maintained its own measurement program called "Screenwise Meter," which raises similar concerns as Project Atlas. The Screenwise Meter app also bypassed the App Store using an enterprise certificate and installed a VPN service in order to monitor phones... While Google has since removed the app, questions remain about why it had gone outside Apple's review process to run the monitoring program. Platforms must maintain and consistently enforce clear policies on the monitoring of teens and what constitutes meaningful parental consent..."

The letter to Google includes a similar list of eight questions the Senators seek detailed answers about. Some notable questions:

"5. Why did Google bypass App Store approval for Screenwise Meter app using enterprise certificates? Has Google bypassed the App Store approval processing using enterprise certificates for any other non-internal app? If so, please list and describe those apps."

"6. What measures did Google have in place to ensure that teenage participants in Screenwise Meter had authentic parental consent?"

"7. Given that Apple removed Onavoo protect from the App Store for violating its terms of service regarding privacy, why has Google continued to allow the Onavo Protect app to be available on the Play Store?"

The lawmakers have asked for responses by March 1st. Thanks to all three Senators for protecting consumers' -- and children's -- privacy... and for enforcing transparency and accountability.