
Study: Anonymized Data Cannot Be Totally Anonymous. And 'Homomorphic Encryption' Explained

Many online users have encountered situations where companies collect data with the promise that it is safe because the data has been anonymized -- all personally identifiable data elements have been removed. How safe is this, really? A recent study reinforced earlier findings that it isn't as safe as promised. Anonymized data can be de-anonymized, or re-identified to individual persons.

The Guardian UK reported:

"... data can be deanonymised in a number of ways. In 2008, an anonymised Netflix data set of film ratings was deanonymised by comparing the ratings with public scores on the IMDb film website in 2014; the home addresses of New York taxi drivers were uncovered from an anonymous data set of individual trips in the city; and an attempt by Australia’s health department to offer anonymous medical billing data could be reidentified by cross-referencing “mundane facts” such as the year of birth for older mothers and their children, or for mothers with many children. Now researchers from Belgium’s Université catholique de Louvain (UCLouvain) and Imperial College London have built a model to estimate how easy it would be to deanonymise any arbitrary dataset. A dataset with 15 demographic attributes, for instance, “would render 99.98% of people in Massachusetts unique”. And for smaller populations, it gets easier..."

According to the U.S. Census Bureau, the population of Massachusetts was about 6.9 million on July 1, 2018. How did this de-anonymization problem happen? Scientific American explained:

"Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model."

Data brokers, including credit-reporting agencies, have collected a massive number of demographic data attributes about every person. According to this 2018 report, Acxiom has compiled about 5,000 data elements for each of 700 million persons worldwide.

It's reasonable to assume that credit-reporting agencies and other data brokers have similar capabilities. So, data brokers' massive databases can make it relatively easy to re-identify data that was supposedly anonymized. This means consumers don't have the privacy promised.

What's the solution? Researchers suggest that data brokers must develop new anonymization methods, and rigorously test them to ensure anonymization truly works. And data brokers must be held to higher data security standards.

Any legislation serious about protecting consumers' privacy must address this, too. What do you think?


51 Corporations Tell Congress: A Federal Privacy Law Is Needed. 145 Corporations Tell The U.S. Senate: Inaction On Gun Violence Is 'Simply Unacceptable'

Last week, several of the largest corporations petitioned the United States government for federal legislation on two key topics: consumer privacy and gun reform.

First, the Chief Executive Officers (CEOs) at 51 corporations sent a jointly signed letter to leaders in Congress asking for a federal privacy law to supersede laws emerging in several states. ZD Net reported:

"The open-letter was sent on behalf of Business Roundtable, an association made up of the CEOs of America's largest companies... CEOs blamed a patchwork of differing privacy regulations that are currently being passed in multiple US states, and by several US agencies, as one of the reasons why consumer privacy is a mess in the US. This patchwork of privacy regulations is creating problems for their companies, which have to comply with an ever-increasing number of laws across different states and jurisdictions. Instead, the 51 CEOs would like one law that governs all user privacy and data protection across the US, which would simplify product design, compliance, and data management."

The letter was sent to U.S. Senate Majority Leader Mitch McConnell, U.S. Senate Minority Leader Charles E. Schumer, Senator Roger F. Wicker (Chairman of the Committee on Commerce, Science and Transportation), Nancy Pelosi (Speaker of the U.S. House of Representatives), Kevin McCarthy (Minority Leader of the U.S. House of Representatives), Frank Pallone, Jr. (Chairman of the Committee on Energy and Commerce in the U.S. House of Representatives), and other ranking politicians.

The letter stated, in part:

"Consumers should not and cannot be expected to understand rules that may change depending upon the state in which they reside, the state in which they are accessing the internet, and the state in which the company’s operation is providing those resources or services. Now is the time for Congress to act and ensure that consumers are not faced with confusion about their rights and protections based on a patchwork of inconsistent state laws. Further, as the regulatory landscape becomes increasingly fragmented and more complex, U.S. innovation and global competitiveness in the digital economy are threatened. "

That sounds fair and noble enough. After writing this blog for more than 12 years, I have learned that details matter. Who writes the proposed legislation, and the details in that legislation, matter. It is too early to tell whether any proposed legislation would be weaker or stronger than what some states have implemented.

Some of the notable companies which signed the joint letter included AT&T, Amazon, Comcast, Dell Technologies, FedEx, IBM, Qualcomm, Salesforce, SAP, Target, and Walmart. Signers from the financial services sector included American Express, Bank of America, Citigroup, JPMorgan Chase, MasterCard, State Farm Insurance, USAA, and Visa. Several notable companies did not sign the letter: Facebook, Google, Microsoft, and Verizon.

Second, The New York Times reported that executives from 145 companies sent a joint letter to members of the U.S. Senate demanding that they take action on gun violence. The letter stated, in part (emphasis added):

"... we are writing to you because we have a responsibility and obligation to stand up for the safety of our employees ,customers, and all Americans in the communities we serve across the country. Doing nothing about America's gun violence crisis is simply unacceptable and it is time to stand with the American public on gun safety. Gun violence in America is not inevitable; it's preventable. There are steps Congress can, and must take to prevent and reduce gun violence. We need our lawmakers to support common sense gun laws... we urge the Senate to stand with the American public and take action on gun safety by passing a bill to require background checks on all gun sales and a strong Red Flag law that would allow courts to issue life-saving extreme risk protection orders..."

Some of the notable companies which signed the letter included Airbnb, Bain Capital, Bloomberg LP, Conde Nast, DICK'S Sporting Goods, Gap Inc., Levi Strauss & Company, Lyft, Pinterest, Publicis Groupe, Reddit, Royal Caribbean Cruises Ltd., Twitter, Uber, and Yelp.

Earlier this year, the U.S. House of Representatives passed legislation to address gun violence. So far, the U.S. Senate has done nothing. Representative Kathy Castor (14th District of Florida) explained the actions the House took in 2019:

"The Bipartisan Background Checks Act that I championed is a commonsense step to address gun violence and establish measures that protect our community and families. America is suffering from a long-term epidemic of gun violence – each year, 120,000 Americans are injured and 35,000 die by firearms. This bill ensures that all gun sales or transfers are subject to a background check, stopping senseless violence by individuals to themselves and others... Additionally, the Democratic House passed H.R. 1112 – the Enhanced Background Checks Act of 2019 – which addresses the Charleston Loophole that currently allows gun dealers to sell a firearm to dangerous individuals if the FBI background check has not been completed within three business days. H.R. 1112 makes the commonsense and important change to extend the review period to 10 business days..."

Findings from a February, 2018 Quinnipiac national poll:

"American voters support stricter gun laws 66 - 31 percent, the highest level of support ever measured by the independent Quinnipiac University National Poll, with 50 - 44 percent support among gun owners and 62 - 35 percent support from white voters with no college degree and 58 - 38 percent support among white men... Support for universal background checks is itself almost universal, 97 - 2 percent, including 97 - 3 percent among gun owners. Support for gun control on other questions is at its highest level since the Quinnipiac University Poll began focusing on this issue in the wake of the Sandy Hook massacre: i) 67 - 29 percent for a nationwide ban on the sale of assault weapons; ii) 83 - 14 percent for a mandatory waiting period for all gun purchases. It is too easy to buy a gun in the U.S. today..."


Court Okays 'Data Scraping' By Analytics Firm Of Users' Public LinkedIn Profiles. Lots Of Consequences

Earlier this week, a Federal appeals court affirmed an August 2017 injunction which required LinkedIn, a professional networking platform owned by Microsoft Corporation, to allow hiQ Labs, Inc. to access members' profiles. This ruling has implications for everyone.

First, some background. The Naked Security blog by Sophos explained in December, 2017:

"... hiQ is a company that makes its money by “scraping” LinkedIn’s public member profiles to feed two analytical systems, Keeper and Skill Mapper. Keeper can be used by employers to detect staff that might be thinking about leaving while Skill Mapper summarizes the skills and status of current and future employees. For several years, this presented no problems until, in 2016, LinkedIn decided to offer something similar, at which point it sent hiQ and others in the sector cease and desist letters and started blocking the bots reading its pages."

So, hiQ's apps use algorithms to determine for its clients (prospective or current employers) which employees are likely to stay or leave. Gizmodo explained the law LinkedIn cited in its court arguments, namely that hiQ's:

".... practice of scraping publicly available information from their platform violated the 1986 Computer Fraud and Abuse Act (CFAA). The CFAA is infamously vaguely written and makes it illegal to access a “protected computer” without or in excess of “authorization”—opening the door to sweeping interpretations that could be used to criminalize conduct not even close to what would traditionally be understood as hacking.

Second, the latest court ruling basically said two things: a) it is legal (and doesn't violate hacking laws) for companies to scrape information contained in publicly available profiles; and b) LinkedIn must allow hiQ (and potentially other firms) to continue with data-scraping. This has plenty of implications.

This recent ruling may surprise some persons, since the issue of data scraping was supposedly settled law previously. MediaPost reported:

"Monday's ruling appears to effectively overrule a decision issued six years ago in a dispute between Craigslist and the data miner 3Taps, which also scraped publicly available listings. In that matter, 3Taps allegedly scraped real estate listings and made them available to the developers PadMapper and Lively. PadMapper allegedly meshed Craigslist's apartment listings with Google maps... U.S. District Court Judge Charles Breyer in the Northern District of California ruled in 2013 that 3Taps potentially violated the anti-hacking law by scraping listings from Craigslist after the company told it to stop doing so."

So, you can bet that both social media sites and data analytics firms closely watched and read the appeal court's ruling this week.

Third, in theory any company or agency could then legally scrape information from public profiles on the LinkedIn platform. This scraping could be done by industries and/or entities (e.g., spy agencies worldwide) which job seekers never intended nor wanted.

Many consumers simply signed up for and use LinkedIn to build professional relationships and/or to find jobs, either full-time as employees or as contractors. The 2019 social media survey by Pew Research found that 27 percent of adults in the United States use LinkedIn, with higher usage among persons with college degrees (51 percent), persons making more than $75K annually (49 percent), persons ages 25 - 29 (44 percent), persons ages 30 - 49 (37 percent), and urban residents (33 percent).

I'll bet that many LinkedIn users never imagined that their profiles would be used against them by data analytics firms. Like it or not, that is how consumers' valuable, personal data is used (abused?) by social media sites and their clients.

Fourth, the practice of data scraping has divided tech companies. Again, from the Naked Security blog post in 2017:

"Data scraping, its seems, has become a booming tech sector that increasingly divides the industry ideologically. One side believes LinkedIn is simply trying to shut down a competitor wanting to access public data LinkedIn merely displays rather than owns..."

The Electronic Frontier Foundation (EFF), the DuckDuckGo search engine, and the Internet Archive had filed an amicus brief with the appeals court before its ruling. The EFF explained the group's reasoning and urged the:

"... Court of Appeals to reject LinkedIn’s request to transform the CFAA from a law meant to target serious computer break-ins into a tool for enforcing its computer use policies. The social networking giant wants violations of its corporate policy against using automated scripts to access public information on its website to count as felony “hacking” under the Computer Fraud and Abuse Act, a 1986 federal law meant to criminalize breaking into private computer systems to access non-public information. But using automated scripts to access publicly available data is not "hacking," and neither is violating a website’s terms of use. LinkedIn would have the court believe that all "bots" are bad, but they’re actually a common and necessary part of the Internet. "Good bots" were responsible for 23 percent of Web traffic in 2016..."

So, bots are here to stay. And, it's up to LinkedIn executives to find a solution to protect their users' information.

Fifth, according to the Reuters report the court judge suggested a solution for LinkedIn by "eliminating the public access option." Hmmmm. Public, or at least broad access, is what many job seekers desire. So, a balance needs to be struck between truly "public" where anyone, anywhere worldwide could access public profiles, versus intended targets (e.g., hiring executives in potential employers in certain industries).

Sixth, what struck me about the court ruling this week was that nobody was in the court room representing the interests of LinkedIn users, of which I am one. MediaPost reported:

"The appellate court discounted LinkedIn's argument that hiQ was harming users' privacy by scraping data even when people used a "do not broadcast" setting. "There is no evidence in the record to suggest that most people who select the 'Do Not Broadcast' option do so to prevent their employers from being alerted to profile changes made in anticipation of a job search," the judges wrote. "As the district court noted, there are other reasons why users may choose that option -- most notably, many users may simply wish to avoid sending their connections annoying notifications each time there is a profile change." "

What? Really?! We LinkedIn users have a natural, vested interest in control over both our profiles and the sensitive, personal information they contain. Either somebody at LinkedIn failed to adequately represent the interests of its users, or the court didn't really listen closely nor seek out additional evidence, or both.

Maybe the "there is no evidence in the record" regarding the 'Do Not Broadcast' feature will be the basis of another appeal or lawsuit.

With this latest court ruling, we LinkedIn users have totally lost control (except for deleting or suspending our LinkedIn accounts). It makes me wonder how a court could reach its decision without hearing directly from somebody representing LinkedIn users.

Seventh, it seems that LinkedIn needs to modify its platform in three key ways:

  1. Allow its users to specify the only uses or applications (e.g., find full-time work, find contract work, build contacts in my industry or area of expertise, find/screen job candidates, advertise/promote a business, academic research, publish content, read news, dating, etc.) for which their profiles may be used. The 'Do Not Broadcast' feature is clearly not strong enough (see the sketch after this list);
  2. Allow its users to specify or approve individual users -- other actual persons who are LinkedIn users and not bots nor corporate accounts -- who can access their full, detailed profiles; and
  3. Outline in the user agreement the list of applications or uses profiles may be accessed for, so that both prospective and current LinkedIn users can make informed decisions. 
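
To illustrate what item #1 above could look like under the hood, here is a minimal Python sketch of per-use consent. The purpose names, data model, and enforcement check are hypothetical -- not LinkedIn's actual architecture:

```python
# Hypothetical per-use consent model; not LinkedIn's actual design.
from dataclasses import dataclass, field

@dataclass
class ProfileConsent:
    owner: str
    permitted_purposes: set = field(default_factory=set)  # uses the owner opted into
    approved_viewers: set = field(default_factory=set)    # individual people, not bots

    def may_access(self, viewer: str, purpose: str) -> bool:
        """Grant access only for an opted-in purpose AND an approved viewer."""
        return (purpose in self.permitted_purposes
                and viewer in self.approved_viewers)

consent = ProfileConsent(owner="jane")
consent.permitted_purposes.add("recruiting")
consent.approved_viewers.add("acme_hr_alice")

print(consent.may_access("acme_hr_alice", "recruiting"))  # True
print(consent.may_access("hiq_bot", "analytics"))         # False: never opted in
```
The point of the sketch: access would be denied by default, and granted only when both the purpose and the individual viewer were approved by the profile's owner.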

This would give LinkedIn users some control over the sensitive, personal information in their profiles. Without control, the benefits of using LinkedIn quickly diminish. And, that's enough to cause me to rethink my use of LinkedIn, and either deactivate or delete my account.

What are your opinions of this ruling? If you currently use LinkedIn, will you continue using it? If you don't use LinkedIn and were considering it, will you still consider using it?


Mashable: 7 Privacy Settings iPhone Users Should Enable Today

Most people want to get the most from their smartphones. That includes using their devices wisely and with privacy in mind. Mashable recommended seven privacy settings for Apple iPhone users. I found the recommendations very helpful, and thought that you would, too.

Three privacy settings stood out. First, many mobile apps have:

"... access to your camera. For some of these, the reasoning is a no-brainer. You want to be able to use Snapchat filters? Fine, the app needs access to your camera. That makes sense. Other apps' reasoning for having access to your camera might be less clear. Once again, head to Settings > Privacy > Camera and review what apps you've granted camera access. See anything in there that doesn't make sense? Go ahead and disable it."

A feature most consumers probably haven't considered:

"... which apps on your phone have requested microphone access. For example, do you want Drivetime to have access to your mic? No? Because if you've downloaded it, then it might. If an app doesn't have a clear reason for needing access to your microphone, don't give it that access."

And, perhaps most importantly:

"Did you forget about your voicemail? Hackers didn't. At the 2018 DEF CON, researchers demonstrated the ability to brute force voicemail accounts and use that access to reset victims' Google and PayPal accounts... Set a random 9-digit voicemail password. Go to Settings > Phone and scroll down to "Change Voicemail Password." You iPhone should let you choose a 9-digit code..."

The full list is a reminder for consumers not to assume that the default settings on the mobile apps they install are right for their privacy needs. Wise consumers check and make adjustments.


Privacy Study Finds Consumers Less Likely To Share Several Key Data Elements

Last month, the Advertising Research Foundation (ARF) announced the results of its 2019 Privacy Study, which was conducted in March. The survey included 1,100 consumers in the United States weighted by age, gender, and region. Key findings covered device and internet usage:

"The key differences between 2018 and 2019 are: i) People are spending more time on their mobile devices and less time on their PCs; ii) People are spending more time checking email, banking, listening to music, buying things, playing games, and visiting social media via mobile apps; iii) In general, people are only slightly less likely to share their data than last year. iv) They are least likely to share their social security number; financial and medical information; and their home address and phone numbers; v) People seem to understand the benefits of personalized advertising, but do not value personalization highly and do not understand the technical approaches through which it is accomplished..."

Advertisers use these findings to adjust their advertising, offers, and pitches to maximize responses by consumers. More detail about the above privacy and data sharing findings:

"In general, people were slightly less likely to share their data in 2019 than they were in 2018. They were least likely to share their social security number; financial and medical information; their work address; and their home address and phone numbers in both years. They were most likely to share their gender, race, marital status, employment status, sexual orientation, religion, political affiliation, and citizenship... The biggest changes in respondents’ willingness to share their data from 2018 to 2019 were seen in their home address (-10 percentage points), spouse’s first and last name (-8 percentage points), personal email address (-7 percentage points), and first and last names (-6 percentage points)."

The researchers asked the data sharing question in two ways:

  1. "Which of the following types of information would you be willing to share with a website?"
  2. "Which of the following types of information would you be willing to share for a personalized experience?"

The survey included 20 information types for both questions. For the first question, survey respondents' willingness to share decreased for 15 of 20 information types, remained constant for two information types, and increased slightly for the remainder:

Which of the following types of information would you be willing to share with a website?

Information Type               2018: % Respondents   2019: % Respondents   2019 H/(L) 2018
Birth Date                     71                    68                    (3)
Citizenship Status             82                    79                    (3)
Employment Status              84                    82                    (2)
Financial Information          23                    20                    (3)
First & Last Name              69                    63                    (6)
Gender                         93                    93                    --
Home Address                   41                    31                    (10)
Home Landline Phone Number     33                    30                    (3)
Marital Status                 89                    85                    (4)
Medical Information            29                    26                    (3)
Personal Email Address         61                    54                    (7)
Personal Mobile Phone Number   34                    32                    (2)
Place Of Birth                 62                    58                    (4)
Political Affiliation          76                    77                    1
Race or Ethnicity              90                    91                    1
Religious Preference           78                    79                    1
Sexual Orientation             80                    79                    (1)
Social Security Number         10                    10                    --
Spouse's First & Last Name     41                    33                    (8)
Work Address                   33                    31                    (2)
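
For readers who want to reproduce the year-over-year comparison, this short Python sketch computes the changes from the table above (a few rows shown for brevity); it matches the biggest decreases cited earlier:

```python
# Year-over-year change in willingness to share, from the ARF table above.
shares = {  # information type: (2018 %, 2019 %)
    "Home Address": (41, 31),
    "Spouse's First & Last Name": (41, 33),
    "Personal Email Address": (61, 54),
    "First & Last Name": (69, 63),
    "Gender": (93, 93),
}

deltas = {info: y2019 - y2018 for info, (y2018, y2019) in shares.items()}
for info, delta in sorted(deltas.items(), key=lambda item: item[1]):
    print(f"{info}: {delta:+d} percentage points")
```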

The researchers asked about citizenship status due to controversy related to the upcoming 2020 Census. The researchers concluded:

"The survey finding most relevant to these proposals is that the public does not see the value of sharing data to improve personalization of advertising messages..."

Overall, it appears that consumers are getting wiser about their privacy. Consumers' willingness to share decreased for more items than it increased for. View the detailed ARF 2019 Privacy Survey (Adobe PDF).


Privacy Tips For The Smart Speakers In Your Home

Many consumers love the hands-free convenience of smart speakers in their homes. The appeal includes several applications: stream music, plan travel, manage your grocery list, get briefed on news headlines, buy movie tickets, hear jokes, get sports scores, and more. Like any other internet-connected device, it's wise to know and use the device's security settings if you value your, your children's, and your guests' privacy.

In the August issue of its print magazine, Consumer Reports (CR) advises the following settings for your smart speakers:

"Protect Your Privacy
If keeping a speaker with a microphone in your home makes you uneasy, you have reason to be. Amazon, Apple, and Google all collect recorded snippets of consumers' commands to improve their voice-computing technology. But they also offer ways to mute the mic when it's not in use. The Amazon Echo has an On/Off button on top of the device. The Google Home's mute button is on the back. And Apple's HomePod requires a voice command: "Hey, Siri, stop listening." (You then use a button to turn the device back on.) For a third-party speaker, consult the owner's manual for instructions."

To learn more, the CR site offers several related resources.


Google Claims Blocking Cookies Is Bad For Privacy. Researchers: Nope. That Is 'Privacy Gaslighting'

The announcement by Google last week included some dubious claims, which received a fair amount of attention among privacy experts. First, a Senior Product Manager of User Privacy and Trust wrote in a post:

"Ads play a major role in sustaining the free and open web. They underwrite the great content and services that people enjoy... But the ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. The mission is clear: we need to ensure that people all around the world can continue to access ad supported content on the web while also feeling confident that their privacy is protected. As we shared in May, we believe the path to making this happen is also clear: increase transparency into how digital advertising works, offer users additional controls, and ensure that people’s choices about the use of their data are respected."

Okay, that is a fair assessment of today's internet. And, more transparency is good. Google executives are entitled to their opinions. The post also stated:

"The web ecosystem is complex... We’ve seen that approaches that don’t account for the whole ecosystem—or that aren’t supported by the whole ecosystem—will not succeed. For example, efforts by individual browsers to block cookies used for ads personalization without suitable, broadly accepted alternatives have fallen down on two accounts. First, blocking cookies materially reduces publisher revenue... Second, broad cookie restrictions have led some industry participants to use workarounds like fingerprinting, an opaque tracking technique that bypasses user choice and doesn’t allow reasonable transparency or control. Adoption of such workarounds represents a step back for user privacy, not a step forward."

So, Google claims that blocking cookies is bad for privacy. With a statement like that, the "User Privacy and Trust" title seems like an oxymoron. Maybe that's the best one can expect from a company that gets 87 percent of its revenues from advertising.

Also on August 22nd, the Director of Chrome Engineering repeated this claim and proposed new internet privacy standards (bold emphasis added):

"... we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox. Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent... some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences. First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected... Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web..."

Yes, fingerprinting is a nasty, privacy-busting technology. No argument with that. But, blocking cookies is bad for privacy? Really? Come on, let's be honest.
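
To make the fingerprinting mechanism concrete, here is a minimal Python sketch of the idea. The attribute values are examples; real fingerprinting scripts harvest many more signals (canvas rendering, audio stack, time zone, etc.):

```python
# Minimal sketch of browser fingerprinting: hash stable attributes
# into an identifier the user cannot clear. Attribute values are examples.
import hashlib

attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080x24",
    "installed_fonts": "Arial;Calibri;Consolas;Georgia;Verdana",
    "language": "en-US",
    "platform": "Win32",
}

# Concatenate the attributes in a fixed order and hash them.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()

print(fingerprint[:16])  # same device and browser -> same identifier across sites
```
Unlike a cookie, nothing here is stored on the user's device, so there is nothing for the user to delete.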

This dubious claim ignores corporate responsibility... that some advertisers and website operators made choices -- conscious decisions to use more invasive technologies like fingerprinting to do an end-run around users' needs, desires, and actions to regain online privacy. Sites and advertisers made those invasive-tech choices when other options were available, such as using subscription services to pay for their content.

Plus, Google's claim also ignores the push by corporate internet service providers (ISPs) which resulted in the repeal of online privacy protections for consumers thanks to a compliant, GOP-led Federal Communications Commission (FCC), which seems happy to tilt the playing field further towards corporations and against consumers. So, users are simply trying to regain online privacy.

During the past few years, both privacy-friendly web browsers (e.g., Brave, Firefox) and search engines (e.g., DuckDuckGo) have emerged to meet consumers' online privacy needs. (Well, it's not only consumers that need online privacy. Attorneys and businesses need it, too, to protect their intellectual property and proprietary business methods.) Online users demanded choice, something advertisers need to remember and value.

Privacy experts weighed in about Google's blocking-cookies-is-bad-for-privacy claim. Jonathan Mayer and Arvind Narayanan explained:

"That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight. Our high-level points are: 1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting; 2) There is little trustworthy evidence on the comparative value of tracking-based advertising; 3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical; and 4) Google is attempting a punt to the web standardization process, which will at best result in years of delay."

The researchers debunked Google's claim with more details:

"Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true. If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today."

Moreover:

"... there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections. Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general."

Gaslighting, indeed. Online privacy is important. So, too, are consumers' choices and desires. Thanks to Mr. Mayer and Mr. Narayanan for the comprehensive response.

What are your opinions of cookie blocking? Of Google's claims?


ExpressVPN Survey Indicates Americans Care About Privacy. Some Have Already Taken Action

ExpressVPN published the results of its privacy survey. The survey, commissioned by ExpressVPN and conducted by Propeller Insights, included a representative sample of about 1,000 adults in the United States.

Overall, 29.3% of survey respondents said they use or had used a virtual private network (VPN) or a proxy network. Survey respondents cited three broad reasons for using a VPN service: 1) to avoid surveillance, 2) to access content, and 3) to stay safe online. Detailed survey results about surveillance concerns:

"The most popular reasons to use a VPN are related to surveillance, with 41.7% of respondents aiming to protect against sites seeing their IP, 26.4% to prevent their internet service provider (ISP) from gathering information, and 16.6% to shield against their local government."

Who performs the surveillance matters to consumers. People are more concerned with surveillance by companies than by law enforcement agencies within the U.S. government:

"Among the respondents, 15.9% say they fear the FBI surveillance, and only 6.4% fear the NSA spying on them. People are by far most worried about information gathering by ISPs (23.2%) and Facebook (20.5%). Google spying is more of a worry for people (5.9%) than snooping by employers (2.6%) or family members (5.1%).

Concerns with internet service providers (ISPs) are not surprising, since these telecommunications companies enjoy a unique position enabling them to track all online activities by consumers. Concerns about Facebook are not surprising, since it tracks both users and non-users, similar to advertising networks. The "protect against sites seeing their IP" finding suggests that consumers, or at least VPN users, want to protect themselves and their devices against advertisers, advertising networks, and privacy-busting mobile apps which track their geo-location.
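
Curious what a VPN actually hides? One simple check is to ask a public IP-echo service for your visible address with the VPN off, then on, and compare. This sketch uses api.ipify.org, a free IP-echo endpoint; any similar service works:

```python
# Check the IP address that websites see; run once with the VPN off,
# then again with it on, and compare the two results.
import requests

def visible_ip() -> str:
    response = requests.get("https://api.ipify.org?format=json", timeout=10)
    response.raise_for_status()
    return response.json()["ip"]

print("Sites currently see you as:", visible_ip())
```
If the two runs print different addresses, sites are seeing the VPN endpoint's IP rather than yours.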

Detailed survey results about content access concerns:

"... 26.7% use [a VPN service] to access their corporate or academic network, 19.9% to access content otherwise not available in their region, and 16.9% to circumvent censorship."

The survey also found that consumers generally trust their mobile devices:

" Only 30.5% of Android users are “not at all” or “not very” confident in their devices. iOS fares slightly better, with 27.4% of users expressing a lack of confidence."

The survey uncovered views about government intervention and policies:

"Net neutrality continues to be popular (70% more respondents support it rather then don’t), but 51.4% say they don’t know enough about it to form an opinion... 82.9% also believe Congress should enact laws to require tech companies to get permission before collecting personal data. Even more, 85.2% believe there should be fines for companies that lose users’ data, and 90.2% believe there should be further fines if the data is misused. Of the respondents, 47.4% believe Congress should go as far as breaking up Facebook and Google."

The survey also captured views about smart devices (e.g., doorbells, voice assistants, smart speakers) installed in many consumers' homes, since these devices are equipped with always-on cameras and/or microphones:

"... 85% of survey respondents say they are extremely (24.7%), very (23.4%), or somewhat (28.0%) concerned about smart devices monitoring their personal habits... Almost a quarter (24.8%) of survey respondents do not own any smart devices at all, while almost as many (24.4%) always turn off their devices’ microphones if they are not using them. However, one-fifth (21.2%) say they always leave the microphone on. The numbers are similar for camera use..."

There are more statistics and findings in the entire survey report by ExpressVPN. I encourage everyone to read it.


Researcher Uncovers Several Browser Extensions That Track Users' Online Activity And Share Data

Many consumers prefer web browsers, since full websites offer more content and functionality than the slimmed-down versions in mobile apps. A researcher has found that as many as four million consumers have been affected by browser extensions -- optional add-ons for web browsers -- which collected sensitive personal and financial information.

Ars Technica reported about DataSpii, the name of the online privacy issue:

"The term DataSpii was coined by Sam Jadali, the researcher who discovered—or more accurately re-discovered—the browser extension privacy issue. Jadali intended for the DataSpii name to capture the unseen collection of both internal corporate data and personally identifiable information (PII).... DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics..."

At first glance, this may not sound important, but it is. Why? First, the data collected included the most sensitive and personal information:

"Home and business surveillance videos hosted on Nest and other security services; tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive, Intuit.com, and other online services; vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers; patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services; travel itineraries hosted on Priceline, Booking.com, and airline websites; Facebook Messenger attachments..."

I'll bet you thought your Facebook Messenger stuff was truly private. Second, because:

"... the published URLs wouldn’t open a page unless the person following them supplied an account password or had access to the private network that hosted the content. But even in these cases, the combination of the full URL and the corresponding page name sometimes divulged sensitive internal information. DataSpii is known to have affected 50 companies..."

Ars Technica also reported:

"Principals with both Nacho Analytics and the browser extensions say that any data collection is strictly "opt in." They also insist that links are anonymized and scrubbed of sensitive data before being published. Ars, however, saw numerous cases where names, locations, and other sensitive data appeared directly in URLs, in page titles, or by clicking on the links. The privacy policies for the browser extensions do give fair warning that some sort of data collection will occur..."

So, the data collection may be legal, but is it ethical -- especially if the anonymization is partial? After the researcher's report went public, many of the suspect browser extensions were deleted from online stores. However, extensions already installed locally on users' browsers can still collect data:

"Beginning on July 3—about 24 hours after Jadali reported the data collection to Google—Fairshare Unlock, SpeakIt!, Hover Zoom, PanelMeasurement, Branded Surveys, and Panel Community Surveys were no longer available in the Chrome Web Store... While the notices say the extensions violate the Chrome Web Store policy, they make no mention of data collection nor of the publishing of data by Nacho Analytics. The toggle button in the bottom-right of the notice allows users to "force enable" the extension. Doing so causes browsing data to be collected just as it was before... In response to follow-up questions from Ars, a Google representative didn't explain why these technical changes failed to detect or prevent the data collection they were designed to stop... But removing an extension from an online marketplace doesn't necessarily stop the problems. Even after the removals of Super Zoom in February or March, Jadali said, code already installed by the Chrome and Firefox versions of the extension continued to collect visited URL information..."

Since browser developers haven't remotely disabled leaky browser extensions, the burden is on consumers. The Ars Technica report lists the leaky browser extensions by name. Since online stores can't seem to consistently police browser extensions for privacy compliance, again the burden falls upon consumers.

The bottom line: browser extensions can easily compromise your online privacy and security. That means treating extensions like any other software: wise consumers read independent online reviews first, read the developer's terms of use and privacy policy before installing the browser extension, and use a privacy-focused web browser.

Consumer Reports advises consumers to, a) install browser extensions only from companies you trust, and b) uninstall browser extensions you don't need nor use. For consumers that don't know how, the Consumer Reports article also lists step-by-step instructions to uninstall browser extensions in Google Chrome, Firefox, Safari, and Internet Explorer branded web browsers.


FBI Seeks To Monitor Twitter, Facebook, Instagram, And Other Social Media Accounts For Violent Threats

The U.S. Federal Bureau of Investigation (FBI) issued on July 8th a Request For Proposals (RFP) seeking quotes from technology companies to build a "Social Media Alerting" tool, which would enable the FBI to monitor, in real time, accounts on several social media services for threats of violence. The RFP, which was amended on August 7th, stated:

"The purpose of this procurement is to acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources. A subscription to this service shall grant the Federal Bureau of Investigation (FBI) access to tools that will allow for the exploitation of lawfully collected/acquired data from social media platforms that will be stored, vetted and formatted by a vendor... This synopsis and solicitation is being issued as Request for Proposal (RFP) number DJF194750PR0000369 and... This announcement is supplemented by a detailed RFP Notice, an SF-33 document, an accompanying Statement of Objectives (SOO) and associated FBI documents..."

"Proactively identify" suggests the usage of software algorithms or artificial intelligence (AI). And, the vendor selected will archive the collected data for an undisclosed period of time. The RFP also stated:

"Background: The use of social media platforms, by terrorist groups, domestic threats, foreign intelligence services, and criminal organizations to further their illegal activity creates a demonstrated need for tools to properly identify the activity and react appropriately. With increased use of social media platforms by subjects of current FBI investigations and individuals that pose a threat to the United States, it is critical to obtain a service which will allow the FBI to identify relevant information from Twitter, Facebook, Instagram, and other Social media platforms in a timely fashion. Consequently, the FBI needs near real time access to a full range of social media exchanges..."

For context, in 2016 the FBI attempted to force Apple Computer to build "backdoor software" to unlock an alleged terrorist's iPhone in California. The FBI later found an offshore technology company to build its backdoor.

The documents indicate that the FBI wants its staff to use the tool at both headquarters and field-office locations globally, and with mobile devices. The SOO document stated:

"FBI personnel are deployed internationally and sometimes in areas of press censorship. A social media exploitation tool with international reach and paired with a strong language translation capability, can become crucial to their operations and more importantly their safety. The functions of most value to these individuals is early notification, broad international reach, instant translation, and the mobility of the needed capability."

The SOO also explained the data elements to be collected:

"3.3.2.2.1 Obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups through the corroboration of multiple social media sources... Items of interest in this context are social networks, user IDs, emails, IP addresses and telephone numbers, along with likely additional account with similar IDs or aliases... Any connectivity between aliases and their relationship must be identifiable through active link analysis mapping..."
"3.3.3.2.1 Online media is monitored based on location, determined by the users’ delineation or the import of overlays from existing maps (neighborhood, city, county, state or country). These must allow for customization as AOR sometimes cross state or county lines..."

While the document mentioned "user IDs" and didn't mention passwords, the implication seems clear that the FBI wants both in order to access and monitor in real-time social media accounts. And, the "other Social Media platforms" statement raises questions. What is the full list of specific services that refers to? Why list only the three largest platforms by name?

As this FBI project proceeds, let's hope that the full list of social sites includes 8Chan, Reddit, Stormfront, and similar others. Why? In a study released in November of 2018, the Center for Strategic and International Studies (CSIS) found:

"Right-wing extremism in the United States appears to be growing. The number of terrorist attacks by far-right perpetrators rose over the past decade, more than quadrupling between 2016 and 2017. The recent pipe bombs and the October 27, 2018, synagogue attack in Pittsburgh are symptomatic of this trend. U.S. federal and local agencies need to quickly double down to counter this threat. There has also been a rise in far-right attacks in Europe, jumping 43 percent between 2016 and 2017... Of particular concern are white supremacists and anti-government extremists, such as militia groups and so-called sovereign citizens interested in plotting attacks against government, racial, religious, and political targets in the United States... There also is a continuing threat from extremists inspired by the Islamic State and al-Qaeda. But the number of attacks from right-wing extremists since 2014 has been greater than attacks from Islamic extremists. With the rising trend in right-wing extremism, U.S. federal and local agencies need to shift some of their focus and intelligence resources to penetrating far-right networks and preventing future attacks. To be clear, the terms “right-wing extremists” and “left-wing extremists” do not correspond to political parties in the United States..."

The CSIS study also noted:

"... right-wing terrorism commonly refers to the use or threat of violence by sub-national or non-state entities whose goals may include racial, ethnic, or religious supremacy; opposition to government authority; and the end of practices like abortion... Left-wing terrorism, on the other hand, refers to the use or threat of violence by sub-national or non-state entities that oppose capitalism, imperialism, and colonialism; focus on environmental or animal rights issues; espouse pro-communist or pro-socialist beliefs; or support a decentralized sociopolitical system like anarchism."

Terrorism is terrorism. All of it needs to be prosecuted including left-, right-, domestic, and foreign. (This prosecutor is doing the right thing.) It seems wise to monitor the platform where suspects congregate.

This project also raises questions about the effectiveness of monitoring social media. Will this really work? Digital Trends reported:

"Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and crucially, what you like to buy. Sometimes, an algorithm can get your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters... But preventing mass shootings before they happen raises thorny legal questions: how do you determine if someone is just angry online rather than someone who could actually carry out a shooting? Can you arrest someone if a computer thinks they’ll eventually become a shooter?"

Some social media users have already experienced inaccuracies (failures?) when sites present irrelevant advertisements and/or political party messaging based upon supposedly accurate software algorithms. The Digital Trends article also dug deeper:

"A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018... Twitter also frequently works to help facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if they did, that would mean flagging 166,513 people to the FBI – far more people than the agency could ever investigate."

Then, there is the problem of the content by users in social media posts:

"Even if someone does post to social media immediately before they decide to unleash violence, it’s often not something that would trip up either Twitter or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – once calling the food served there “overprices” and a second that told people to read a 19th-century pro-fascist book that’s popular with white nationalists."

Also, Amazon got caught up in the hosting mess with 8Chan. So, there is more news to come.

Last, this blog post explored the problems with emotion recognition by facial-recognition software. Let's hope this FBI project is not a waste of taxpayers' hard-earned money.


White Hat Hacker: Social Media Is a 'Goldmine For Details' For Cyberattacks Targeting Companies

Many employees are their own worst enemy when they start a new job. In this Fast Company article, a white hat hacker explains the security fails by employees which compromise their employer's data security.

Stephanie “Snow” Carruthers, the chief people hacker within a group at IBM Inc., explained that hackers troll:

"... social media for photos, videos, and other clues that can help them better target your company in an attack. I know this because I’m one of them... I’m part of an elite team of hackers within IBM known as X-Force Red. Companies hire us to find gaps in their security – before the real bad guys do... Social media posts are a goldmine for details that aid in our “attacks.” What you find in the background of photos is particularly revealing... The first thing you may be surprised to know is that 75% of the time, the information I’m finding is coming from interns or new hires. Younger generations entering the workforce today have grown up on social media, and internships or new jobs are exciting updates to share. Add in the fact that companies often delay security training for new hires until weeks or months after they’ve started, and you’ve got a recipe for disaster..."

The obvious security fails include selfie photos by interns or new hires wearing their security badges, selfies showing log-in credentials on computer screens, and selfies showing passwords written on post-it notes attached to computer monitors. Less obvious security fails include group photos by interns or new hires with their work team. Group photos can help hackers identify team members to craft personalized and more effective phishing e-mails and text messages using co-workers' names, to trick recipients into opening attachments containing malware.

This highlights one business practice interns and new hires should understand. Your immediate boss or supervisor won't scour your social media accounts looking for security fails. Your employer will outsource the job to another company, which will.

If you just started a new job, don't be that clueless employee posting security fails to your social media accounts. Read and understand your employer's social media policy. If you are a manager, schedule security training for your interns and new hires ASAP.


FTC Levies $5 Billion Fine, 'New Restrictions, And Modified Corporate Structure' To Hold Facebook Accountable. Will These Actions Prevent Future Privacy Abuses?

The U.S. Federal Trade Commission (FTC) announced on July 24th a record-breaking fine against Facebook, Inc., plus new limitations on the social networking service. The FTC announcement stated:

"Facebook, Inc. will pay a record-breaking $5 billion penalty, and submit to new restrictions and a modified corporate structure that will hold the company accountable for the decisions it makes about its users’ privacy, to settle Federal Trade Commission charges that the company violated a 2012 FTC order by deceiving users about their ability to control the privacy of their personal information... The settlement order announced [on July 24th] also imposes unprecedented new restrictions on Facebook’s business operations and creates multiple channels of compliance..."

During 2018, Facebook generated after-tax profits of $22.1 billion on sales of $55.84 billion. While a $5 billion fine is a lot of money, the company can easily afford the record-breaking fine. The fine equals about one month's revenues, or a little over 4 percent of its $117 billion in assets.

The FTC announcement explained several "unprecedented" restrictions in the settlement order. First, the restrictions are designed to:

"... prevent Facebook from deceiving its users about privacy in the future, the FTC’s new 20-year settlement order overhauls the way the company makes privacy decisions by boosting the transparency of decision making... It establishes an independent privacy committee of Facebook’s board of directors, removing unfettered control by Facebook’s CEO Mark Zuckerberg over decisions affecting user privacy. Members of the privacy committee must be independent and will be appointed by an independent nominating committee. Members can only be fired by a supermajority of the Facebook board of directors."

Second, the restrictions mandated compliance officers:

"Facebook will be required to designate compliance officers who will be responsible for Facebook’s privacy program. These compliance officers will be subject to the approval of the new board privacy committee and can be removed only by that committee—not by Facebook’s CEO or Facebook employees. Facebook CEO Mark Zuckerberg and designated compliance officers must independently submit to the FTC quarterly certifications that the company is in compliance with the privacy program mandated by the order, as well as an annual certification that the company is in overall compliance with the order. Any false certification will subject them to individual civil and criminal penalties."

Third, the new order strengthens oversight:

"... The order enhances the independent third-party assessor’s ability to evaluate the effectiveness of Facebook’s privacy program and identify any gaps. The assessor’s biennial assessments of Facebook’s privacy program must be based on the assessor’s independent fact-gathering, sampling, and testing, and must not rely primarily on assertions or attestations by Facebook management. The order prohibits the company from making any misrepresentations to the assessor, who can be approved or removed by the FTC. Importantly, the independent assessor will be required to report directly to the new privacy board committee on a quarterly basis. The order also authorizes the FTC to use the discovery tools provided by the Federal Rules of Civil Procedure to monitor Facebook’s compliance with the order."

Fourth, the order included six new privacy requirements:

"i) Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data; ii) Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising; iii) Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users; iv) Facebook must establish, implement, and maintain a comprehensive data security program; v) Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and vi) Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services."

Wow! Lots of consequences when a manager builds a corporation with a "move fast and break things" culture, values, and ethics. Assistant Attorney General Jody Hunt for the Department of Justice’s Civil Division said:

"The Department of Justice is committed to protecting consumer data privacy and ensuring that social media companies like Facebook do not mislead individuals about the use of their personal information... This settlement’s historic penalty and compliance terms will benefit American consumers, and the Department expects Facebook to treat its privacy obligations with the utmost seriousness."

There is disagreement among the five FTC commissioners about the settlement, as the vote for the order was 3 - 2. FTC Commissioner Rebecca Kelly Slaughter stated in her dissent:

"My principal objections are: (1) The negotiated civil penalty is insufficient under the applicable statutory factors we are charged with weighing for order violators: injury to the public, ability to pay, eliminating the benefits derived from the violation, and vindicating the authority of the FTC; (2) While the order includes some encouraging injunctive relief, I am skeptical that its terms will have a meaningful disciplining effect on how Facebook treats data and privacy. Specifically, I cannot view the order as adequately deterrent without both meaningful limitations on how Facebook collects, uses, and shares data and public transparency regarding Facebook’s data use and order compliance; (3) Finally, my deepest concern with this order is that its release of Facebook and its officers from legal liability is far too broad..."

FTC Commissioners Noah Joshua Phillips and Christine S. Wilson, joined by FTC Chairman Joseph J. Simons, stated on July 24th in an 8-page joint statement (Adobe PDF) filed with the settlement in the U.S. District Court for the District of Columbia:

"In 2012, Facebook entered into a consent order with the FTC, resolving allegations that the company misrepresented to consumers the extent of data sharing with third-party applications and the control consumers had over that sharing. The 2012 order barred such misrepresentations... Our complaint announced today alleges that Facebook failed to live up to its commitments under that order. Facebook subsequently made similar misrepresentations about sharing consumer data with third-party apps and giving users control over that sharing, and misrepresented steps certain consumers needed to take to control [over] facial recognition technology. Facebook also allowed financial considerations to affect decisions about how it would enforce its platform policies against third-party users of data, in violation of its obligation under the 2012 order... The $5 billion penalty serves as an important deterrent to future order violations... For purposes of comparison, the EU’s General Data Protection Regulation (GDPR) is touted as the high-water mark for comprehensive privacy legislation, and the penalty the FTC has negotiated is over 20 times greater than the largest GDPR fine to date... IV. The Settlement Far Exceeds What Could be Achieved in Litigation and Gives Consumers Meaningful Protections Now... Even assuming the FTC would prevail in litigation, a court would not give the Commission carte blanche to reorganize Facebook’s governance structures and business operations as we deem fit. Instead, the court would impose the relief. Such relief would be limited to injunctive relief to remedy the specific proven violations... V. Mark Zuckerberg is Being Held Accountable and the Order Cabins His Authority Our dissenting colleagues argue that the Commission should not have settled because the Commission’s investigation provides an inadequate basis for the decision not to name Mark Zuckerberg personally as a defendant... The provisions of this Order extinguish the ability of Mr. Zuckerberg to make privacy decisions unilaterally by also vesting responsibility and accountability for those decisions within business units, DCOs, and the privacy committee... the Order significantly diminishes Mr. Zuckerberg’s power — something no government agency, anywhere in the world, has thus far accomplished. The Order requires multiple information flows and imposes a robust system of checks and balances..."

Time will tell how effective the order's restrictions and the $5 billion penalty will be. That Facebook can easily afford the penalty suggests the amount is a weak deterrent. If all or part of the penalty is tax-deductible (yes, companies have deducted fines before, directly reducing their taxes), that would weaken the deterrent further. And, if all or part of the fine is tax-deductible, then we taxpayers just paid for part of Facebook's alleged wrongdoing. I'll bet most taxpayers wouldn't want that.

Facebook stated in a July 24th news release that its second-quarter 2019 earnings included:

"... an additional $2.0 billion legal expense related to the U.S. Federal Trade Commission (FTC) settlement and a $1.1 billion income tax expense due to the developments in Altera Corp. v. Commissioner, as discussed below. As the FTC expense is not expected to be tax-deductible, it had no effect on our provision for income taxes... In July 2019, we entered into a settlement and modified consent order to resolve the inquiry of the FTC into our platform and user data practices. Among other matters, our settlement with the FTC requires us to pay a penalty of $5.0 billion and to significantly enhance our practices and processes for privacy compliance and oversight. In particular, we have agreed to implement a comprehensive expansion of our privacy program, including substantial management and board of directors oversight, stringent operational requirements and reporting obligations, and a process to regularly certify our compliance with the privacy program to the FTC. In the second quarter of 2019, we recorded an additional $2.0 billion accrual in connection with our settlement with the FTC, which is included in accrued expenses and other current liabilities on our condensed consolidated balance sheet."

"Not expected to be" is not the same as definitely not. And, business expenses reduce a company's taxable net income.

A copy of the FTC settlement order with Facebook is also available here (Adobe PDF format; 920K bytes). Plus, there is more:

"... the FTC also announced today separate law enforcement actions against data analytics company Cambridge Analytica, its former Chief Executive Officer Alexander Nix, and Aleksandr Kogan, an app developer who worked with the company, alleging they used false and deceptive tactics to harvest personal information from millions of Facebook users. Kogan and Nix have agreed to a settlement with the FTC that will restrict how they conduct any business in the future."

Cambridge Analytica was at the center of the massive Facebook data scandal disclosed in 2018, in which an app developer allegedly posed as an academic researcher in order to download profile information about millions of Facebook users who hadn't authorized that access.

What are your opinions? Hopefully, some tax experts will weigh in about the fine.


EFF Filed Lawsuit In California Against AT&T To Stop Sales Of Wireless Customers' Realtime Geolocations

The Electronic Frontier Foundation (EFF) announced on July 16th that it had filed:

"... a class action lawsuit on behalf of AT&T customers in California to stop the telecom giant and two data location aggregators from allowing numerous entities—including bounty hunters, car dealerships, landlords, and stalkers—to access wireless customers’ real-time locations without authorization. An investigation by Motherboard earlier this year revealed that any cellphone user’s precise, real-time location could be bought for just $300. The report showed that carriers, including AT&T, were making this data available to hundreds of third parties without first verifying that users had authorized such access. AT&T not only failed to obtain its customers’ express consent, making matters worse, it created an active marketplace that trades on its customers’ real-time location data..."

The lawsuit, Scott, et al. v. AT&T Inc., et al., was filed in the U.S. District Court of the Northern District of California. The suit seeks money damages and an injunction against AT&T and the named location data aggregators: LocationSmart and Zumigo. The suit alleges AT&T violated the Federal Communications Act and engaged in deceptive practices under California’s unfair competition law. It also alleges that AT&T, LocationSmart, and Zumigo have violated California’s constitutional, statutory, and common law rights to privacy. The EFF is represented by Pierce Bainbridge Beck Price & Hecht LLP.


Google Home Devices Recorded Users' Conversations. Legal Questions Result. Google Says It Is Investigating

Many consumers love the hands-free convenience of smart speakers. However, there are risks with the technology. BBC News reported on Thursday:

"Belgian broadcaster VRT exposed the recordings made by Google Home devices in Belgium and the Netherlands... VRT said the majority of the recordings it reviewed were short clips logged by the Google Home devices as owners used them. However, it said, 153 were "conversations that should never have been recorded" because the wake phrase of "OK Google" was not given. These unintentionally recorded exchanges included: a) blazing rows; b) bedroom chatter; c) parents talking to their children; d) phone calls exposing confidential information. It said it believed the devices logged these conversations because users said a word or phrase that sounded similar to "OK Google" that triggered the device..."

So, conversations that shouldn't have been recorded were recorded by Google Home devices. Consumers use the devices to perform and control a variety of tasks, such as entertainment (e.g., music, movies, games), internet searches (e.g., cooking recipes), security systems and cameras, thermostats, window blinds and shades, appliances (e.g., coffee makers), online shopping, and more.

The device software doesn't seem accurate, since it mistook similar-sounding phrases for the wake phrase. Google calls these errors "false accepts." Google replied in a blog post:

"We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards... We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google."

"The Google Assistant only sends audio to Google after your device detects that you’re interacting with the Assistant—for example, by saying “Hey Google” or by physically triggering the Google Assistant... Rarely, devices that have the Google Assistant built in may experience what we call a “false accept.” This means that there was some noise or words in the background that our software interpreted to be the hotword (like “Ok Google”). We have a number of protections in place to prevent false accepts from occurring in your home... We also provide you with tools to manage and control the data stored in your account. You can turn off storing audio data to your Google account completely, or choose to auto-delete data after every 3 months or 18 months..."

To be fair, Google is not alone. Amazon Alexa devices also record and archive users' conversations. Would you want your bedroom chatter recorded (and stored indefinitely)? Or your conversations with your children? Many persons work remotely from home, so would you want business conversations with coworkers recorded? I think not. Very troubling news.

And, there is more.

This data security incident confirms that human workers listen to recordings made by Google Assistant devices. Those workers can be employees or outsourced contractors. Who are these contractors, by name? What methods does Google employ to confirm privacy compliance by contractors? So many unanswered questions.

Also, according to U.S. News & World Report:

"Google's recording feature can be turned off, but doing so means Assistant loses some of its personalized touch. People who turn off the recording feature lose the ability for the Assistant to recognize individual voices and learn your voice pattern. Assistant recording is actually turned off by default — but the technology prompts users to turn on recording and other tools in order to get personalized features."

So, to get the full value of the technology, users must enable recordings. That sounds a lot like surveillance by design. Not good. You'd think that Google's software developers would have built a standard vocabulary, or dictionary, in several languages (recorded by consenting beta test participants) to test the accuracy of the Assistant software, rather than review users' actual conversations. I guess they considered it easier, faster, and cheaper to snoop on users.
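The kind of test imagined above is simple to describe. A minimal sketch, assuming a labeled corpus of scripted clips from consenting beta testers and any wake-word detector function (such as the hypothetical one sketched earlier):

```python
def false_accept_rate(detector, labeled_clips):
    """labeled_clips: iterable of (audio, is_wake_phrase) pairs recorded
    by consenting beta testers -- no customer recordings required."""
    negatives = [audio for audio, is_wake in labeled_clips if not is_wake]
    false_accepts = sum(1 for audio in negatives if detector(audio))
    return false_accepts / len(negatives) if negatives else 0.0

# Example usage: rate = false_accept_rate(should_wake, test_corpus)
```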

Since Google already scans the contents of Gmail users' email messages, maybe this is simply technology creep and Google assumed nobody would mind human reviews of Assistant recordings.

About the review of recordings by human workers, the M.I.T. Technology Review said:

"Legally questionable: Because Google doesn’t inform users that humans review recordings in this way, and thus doesn’t seek their explicit consent for the practice, it’s quite possible that it could be breaking EU data protection regulations. We have asked Google for a response and will update if we hear back."

So, it will be interesting to see what European Union regulators have to say about the recordings and human reviews.

To summarize: consumers have willingly installed perpetual surveillance devices in their homes. What are your views of this data security incident? Do you enable recordings on your smart speakers? Should human workers have access to archives of your recorded conversations?


Tech Expert Concluded Google Chrome Browser Operates A Lot Like Spy Software

Many consumers still do most of their online activities in a web browser. Which browsers are better for your online privacy? You may be interested in this analysis by a tech expert:

"... I've been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads... My tests of Chrome vs. Firefox [browsers] unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker "cookies" that Chrome would have ushered right onto my computer but were automatically blocked by Firefox... Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service's log-in pages."

"And that's not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you're logged in to the browser, and Google might be tapping into your Web activity to target ads. Don't recall signing in? I didn't, either. Chrome recently started doing that automatically when you use Gmail... I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn't cause anybody's browsing history to be "synced" unless they specifically opted in — but I found mine was being sent to Google and don't recall ever asking for extra surveillance..."

Also:

"Google's product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they're working on new ones for cookies. But they also said they have to get the right balance with a "healthy Web ecosystem" (read: ad business). Firefox's product managers told me they don't see privacy as an "option" relegated to controls. They've launched a war on surveillance, starting last month with "enhanced tracking protection" that blocks nosy cookies by default on new Firefox installations..."

This tech expert concluded:

"It turns out, having the world's biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop. It made me decide to ditch Chrome for a new version of nonprofit Mozilla's Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine."

Regular readers of this blog are aware of how Google tracks consumers' online purchases, the worst mobile apps for privacy, and privacy alternatives such as the Brave web browser, the DuckDuckGo search engine, virtual private network (VPN) software, and more. Yes, you can use the Firefox browser on your Apple iPhone. I do.

Me? I've used the Firefox browser since about 2010 on my (Windows) laptop, and the DuckDuckGo search engine since 2013. I stopped using Bing, Yahoo, and Google search engines in 2013. While Firefox installs with Google as the default search engine, you can easily switch it to DuckDuckGo. I did. I am very happy with the results.

Which web browser and search engine do you use? What do you do to protect your online privacy?


Digital Jail: How Electronic Monitoring Drives Defendants Into Debt

[Editor's note: today's guest post, by reporters at ProPublica, discusses the convergence of law enforcement, outsourcing, smart devices, surveillance, "offender funded" programs, and "e-gentrification." It is reprinted with permission.]

By Ava Kofman, ProPublica

On Oct. 12, 2018, Daehaun White walked free, or so he thought. A guard handed him shoelaces and the $19 that had been in his pocket at the time of his booking, along with a letter from his public defender. The lanky 19-year-old had been sitting for almost a month in St. Louis’ Medium Security Institution, a city jail known as the Workhouse, after being pulled over for driving some friends around in a stolen Chevy Cavalier. When the police charged him with tampering with a motor vehicle — driving a car without its owner’s consent — and held him overnight, he assumed he would be released by morning. He told the police that he hadn’t known that the Chevy, which a friend had lent him a few hours earlier, was stolen. He had no previous convictions. But the $1,500 he needed for the bond was far beyond what he or his family could afford. It wasn’t until his public defender, Erika Wurst, persuaded the judge to lower the amount to $500 cash, and a nonprofit fund, the Bail Project, paid it for him, that he was able to leave the notoriously grim jail. “Once they said I was getting released, I was so excited I stopped listening,” he told me recently. He would no longer have to drink water blackened with mold or share a cell with rats, mice and cockroaches. He did a round of victory pushups and gave away all of the snack cakes he had been saving from the cafeteria.

When he finally read Wurst’s letter, however, he realized there was a catch. Even though Wurst had argued against it, the judge, Nicole Colbert-Botchway, had ordered him to wear an ankle monitor that would track his location at every moment using GPS. For as long as he would wear it, he would be required to pay $10 a day to a private company, Eastern Missouri Alternative Sentencing Services, or EMASS. Just to get the monitor attached, he would have to report to EMASS and pay $300 up front — enough to cover the first 25 days, plus a $50 installation fee.

White didn’t know how to find that kind of money. Before his arrest, he was earning minimum wage as a temp, wrapping up boxes of shampoo. His father was largely absent, and his mother, Lakisha Thompson, had recently lost her job as the housekeeping manager at a Holiday Inn. Raising Daehaun and his four siblings, she had struggled to keep up with the bills. The family bounced between houses and apartments in northern St. Louis County, where, as a result of Jim Crow redlining, most of the area’s black population lives. In 2014, they were living on Canfield Drive in Ferguson when Michael Brown was shot and killed there by a police officer. During the ensuing turmoil, Thompson moved the family to Green Bay, Wisconsin. White felt out of place. He was looked down on for his sagging pants, called the N-word when riding his bike. After six months, he moved back to St. Louis County on his own to live with three of his siblings and stepsiblings in a gray house with vinyl siding.

When White got home on the night of his release, he was so overwhelmed to see his family again that he forgot about the letter. He spent the next few days hanging out with his siblings, his mother, who had returned to Missouri earlier that year, and his girlfriend, Demetria, who was seven months pregnant. He didn’t report to EMASS.

What he didn’t realize was that he had failed to meet a deadline. Typically, defendants assigned to monitors must pay EMASS in person and have the device installed within 24 hours of their release from jail. Otherwise, they have to return to court to explain why they’ve violated the judge’s orders. White, however, wasn’t called back for a hearing. Instead, a week after he left the Workhouse, Colbert-Botchway issued a warrant for his arrest.

Three days later, a large group of police officers knocked on Thompson’s door, looking for information about an unrelated case, a robbery. White and his brother had been making dinner with their mother, and the officers asked them for identification. White’s name matched the warrant issued by Colbert-Botchway. “They didn’t tell me what the warrant was for,” he said. “Just that it was for a violation of my release.” He was taken downtown and held for transfer back to the Workhouse. “I kept saying to myself, ’Why am I locked up?’” he recalled.

The next morning, Thompson called the courthouse to find the answer. She learned that her son had been jailed over his failure to acquire and pay for his GPS monitor. To get him out, she needed to pay EMASS on his behalf.

This seemed absurd to her. When Daehaun was 13, she had worn an ankle monitor after violating probation for a minor theft, but the state hadn’t required her to cover the cost of her own supervision. “This is a 19-year-old coming out of the Workhouse,” she told me recently. “There’s no way he has $300 saved.” Thompson felt that the court was forcing her to choose between getting White out of jail and supporting the rest of her family.

Over the past half-century, the number of people behind bars in the United States jumped by more than 500%, to 2.2 million. This extraordinary rise, often attributed to decades of “tough on crime” policies and harsh sentencing laws, has ensured that even as crime rates have dropped since the 1990s, the number of people locked up and the average length of their stay have increased. According to the Bureau of Justice Statistics, the cost of keeping people in jails and prisons soared to $87 billion in 2015 from $19 billion in 1980, in current dollars.

In recent years, politicians on both sides of the aisle have joined criminal-justice reformers in recognizing mass incarceration as both a moral outrage and a fiscal sinkhole. As ankle bracelets have become compact and cost-effective, legislators have embraced them as an enlightened alternative. More than 125,000 people in the criminal-justice system were supervised with monitors in 2015, compared with just 53,000 people in 2005, according to the Pew Charitable Trusts. Although no current national tally is available, data from several cities — Austin, Texas; Indianapolis; Chicago; and San Francisco — show that this number continues to rise. Last December, the First Step Act, which includes provisions for home detention, was signed into law by President Donald Trump with support from the private prison giants GEO Group and CoreCivic. These corporations dominate the so-called community-corrections market — services such as day-reporting and electronic monitoring — that represents one of the fastest-growing revenue sectors of their industry.

By far the most decisive factor promoting the expansion of monitors is the financial one. The United States government pays for monitors for some of those in the federal criminal-justice system and for tens of thousands of immigrants supervised by Immigration and Customs Enforcement. But states and cities, which incur around 90% of the expenditures for jails and prisons, are increasingly passing the financial burden of the devices onto those who wear them. It costs St. Louis roughly $90 a day to detain a person awaiting trial in the Workhouse, where in 2017 the average stay was 291 days. When individuals pay EMASS $10 a day for their own supervision, it costs the city nothing. A 2014 study by NPR and the Brennan Center found that, with the exception of Hawaii, every state required people to pay at least part of the costs associated with GPS monitoring. Some probation offices and sheriffs run their own monitoring programs — renting the equipment from manufacturers, hiring staff and collecting fees directly from participants. Others have outsourced the supervision of defendants, parolees and probationers to private companies.

“There are a lot of judges who reflexively put people on monitors, without making much of a pretense of seriously weighing it at all,” said Chris Albin-Lackey, a senior legal adviser with Human Rights Watch who has researched private-supervision companies. “The limiting factor is the cost it might impose on the public, but when that expense is sourced out, even that minimal brake on judicial discretion goes out the window.”

Nowhere is the pressure to adopt monitors more pronounced than in places like St. Louis: cash-strapped municipalities with large populations of people awaiting trial. Nationwide on any given day, half a million people sit in crowded and expensive jails because, like Daehaun White, they cannot purchase their freedom.

As the movement to overhaul cash bail has challenged the constitutionality of jailing these defendants, judges and sheriffs have turned to monitors as an appealing substitute. In San Francisco, the number of people released from jail onto electronic monitors tripled after a 2018 ruling forced courts to release more defendants without bail. In Marion County, Indiana, where jail overcrowding is routine, roughly 5,000 defendants were put on monitors last year. “You would be hard-pressed to find bail-reform legislation in any state that does not include the possibility of electronic monitoring,” said Robin Steinberg, the chief executive of the Bail Project.

Yet like the system of wealth-based detention they are meant to help reform, ankle monitors often place poor people in special jeopardy. Across the country, defendants who have not been convicted of a crime are put on “offender funded” payment plans for monitors that sometimes cost more than their bail. And unlike bail, they don’t get the payment back, even if they’re found innocent. Although a federal survey shows that nearly 40% of Americans would have trouble finding $400 to cover an emergency, companies and courts routinely threaten to lock up defendants if they fall behind on payment. In Greenville, South Carolina, pretrial defendants can be sent back to jail when they fall three weeks behind on fees. (An officer for the Greenville County Detention Center defended this practice on the grounds that participants agree to the costs in advance.) In Mohave County, Arizona, pretrial defendants charged with sex offenses have faced rearrest if they fail to pay for their monitors, even if they prove that they can’t afford them. “We risk replacing an unjust cash-bail system,” Steinberg said, “with one just as unfair, inhumane and unnecessary.”

Many local judges, including in St. Louis, do not conduct hearings on a defendant’s ability to pay for private supervision before assigning them to it; those who do often overestimate poor people’s financial means. Without judicial oversight, defendants are vulnerable to private-supervision companies that set their own rates and charge interest when someone can’t pay up front. Some companies even give their employees bonuses for hitting collection targets.

It’s not only debt that can send defendants back to jail. People who may not otherwise be candidates for incarceration can be punished for breaking the lifestyle rules that come with the devices. A survey in California found that juveniles awaiting trial or on probation face especially difficult rules; in one county, juveniles on monitors were asked to follow more than 50 restrictions, including not participating “in any social activity.” For this reason, many advocates describe electronic monitoring as a “net-widener”: Far from serving as an alternative to incarceration, it ends up sweeping more people into the system.

Dressed in a baggy yellow City of St. Louis Corrections shirt, White was walking to the van that would take him back to the Workhouse after his rearrest, when a guard called his name and handed him a bus ticket home. A few hours earlier, his mom had persuaded her sister to lend her the $300 that White owed EMASS. Wurst, his public defender, brought the receipt to court.

The next afternoon, White hitched a ride downtown to the EMASS office, where one of the company’s bond-compliance officers, Nick Buss, clipped a black box around his left ankle. Based in the majority white city of St. Charles, west of St. Louis, EMASS has several field offices throughout eastern Missouri. A former probation and parole officer, Michael Smith, founded the company in 1991 after Missouri became one of the first states to allow private companies to supervise some probationers. (Smith and other EMASS officials declined to comment for this story.)

The St. Louis area has made national headlines for its “offender funded” model of policing and punishment. Stricken by postindustrial decline and the 2008 financial crisis, its municipalities turned to their police departments and courts to make up for shortfalls in revenue. In 2015, the Ferguson Report by the United States Department of Justice put hard numbers to what black residents had long suspected: The police were targeting them with disproportionate arrests, traffic tickets and excessive fines.

EMASS may have saved the city some money, but it also created an extraordinary and arbitrary-seeming new expense for poor defendants. When cities cover the cost of monitoring, they often pay private contractors $2 to $3 a day for the same equipment and services for which EMASS charges defendants $10 a day. To come up with the money, EMASS clients told me, they had to find second jobs, take their children out of day care and cut into disability checks. Others hurried to plead guilty for no better reason than that being on probation was cheaper than paying for a monitor.

At the downtown office, White signed a contract stating that he would charge his monitor for an hour and a half each day and “report” to EMASS with $70 each week. He could shower, but was not to bathe or swim (the monitor is water-resistant, not waterproof). Interfering with the monitor’s functioning was a felony.

White assumed that GPS supervision would prove a minor annoyance. Instead, it was a constant burden. The box was bulky and the size of a fist, so he couldn’t hide it under his jeans. Whenever he left the house, people stared. There were snide comments ("nice bracelet") and cutting jokes. His brothers teased him about having a babysitter. “I’m nobody to watch,” he insisted.

The biggest problem was finding work. Confident and outgoing, White had never struggled to land jobs; after dropping out of high school in his junior year, he flipped burgers at McDonald’s and Steak ’n Shake. To pay for the monitor, he applied to be a custodian at Julia Davis Library, a cashier at Home Depot, a clerk at Menards. The conversation at Home Depot had gone especially well, White thought, until the interviewer casually asked what was on his leg.

To help improve his chances, he enrolled in Mission: St. Louis, a job-training center for people reentering society. One afternoon in January, he and a classmate role-played how to talk to potential employers about criminal charges. White didn’t know how much detail to go into. Should he tell interviewers that he was bringing his pregnant girlfriend some snacks when he was pulled over? He still isn’t sure, because a police officer came looking for him midway through the class. The battery on his monitor had died. The officer sent him home, and White missed the rest of the lesson.

With all of the restrictions and rules, keeping a job on a monitor can be as difficult as finding one. The hours for weekly check-ins at the downtown EMASS office — 1 p.m. to 6 p.m. on Tuesdays and Wednesdays, and 1 p.m. until 5 p.m. on Mondays — are inconvenient for those who work. In 2011, the National Institute of Justice surveyed 5,000 people on electronic monitors and found that 22% said they had been fired or asked to leave a job because of the device. Juawanna Caves, a young St. Louis native and mother of two, was placed on a monitor in December after being charged with unlawful use of a weapon. She said she stopped showing up to work as a housekeeper when her co-workers made her uncomfortable by asking questions and later lost a job at a nursing home because too many exceptions had to be made for her court dates and EMASS check-ins.

Perpetual surveillance also takes a mental toll. Nearly everyone I spoke to who wore a monitor described feeling trapped, as though they were serving a sentence before they had even gone to trial. White was never really sure about what he could or couldn’t do under supervision. In January, when his girlfriend had their daughter, Rylan, White left the hospital shortly after the birth, under the impression that he had a midnight curfew. Later that night, he let his monitor die so that he could sneak back before sunrise to see the baby again.

EMASS makes its money from defendants. But it gets its power over them from judges. It was in 2012 that the judges of the St. Louis court started to use the company’s services — which previously involved people on probation for misdemeanors — for defendants awaiting trial. Last year, the company supervised 239 defendants in the city of St. Louis on GPS monitors, according to numbers provided by EMASS to the court. The alliance with the courts gives the company not just a steady stream of business but a reliable means of recouping debts: Unlike, say, a credit-card company, which must file a civil suit to collect from overdue customers, EMASS can initiate criminal-court proceedings, threatening defendants with another stay in the Workhouse.

In early April, I visited Judge Rex Burlison in his chambers on the 10th floor of the St. Louis civil courts building. A few months earlier, Burlison, who has short gray hair and light blue eyes, had been elected by his peers as presiding judge, overseeing the city’s docket, budget and operations, including the contract with EMASS. It was one of the first warm days of the year, and from the office window I could see sunlight glimmering on the silver Gateway Arch.

I asked Burlison about the court’s philosophy for using pretrial GPS. He stressed that while each case was unique and subject to the judge’s discretion, monitoring was most commonly used for defendants who posed a flight risk, endangered public safety or had an alleged victim. Judges vary in how often they order defendants to wear monitors, and critics have attacked the inconsistency. Colbert-Botchway, the judge who put White on a monitor, regularly made pretrial GPS a condition of release, according to public defenders. (Colbert-Botchway declined to comment.) But another St. Louis city judge, David Roither, told me, “I really don’t use it very often because people here are too poor to pay for it.”

Whenever a defendant on a monitor violates a condition of release, whether related to payment or a curfew or something else, EMASS sends a letter to the court. Last year, Burlison said, the court received two to three letters a week from EMASS about violations. In response, the judge usually calls the defendant in for a hearing. As far as he knew, Burlison said, judges did not incarcerate people simply for failing to pay EMASS debts. “Why would you?” he asked me. When people were put back in jail, he said, there were always other factors at play, like the defendant’s missing a hearing, for instance. (Issuing a warrant for White’s arrest without a hearing, he acknowledged after looking at the docket, was not the court’s standard practice.)

The contract with EMASS allows the court to assign indigent defendants to the company to oversee “at no cost.” Yet neither Burlison nor any of the other current or former judges I spoke with recalled waiving fees when ordering someone to wear an ankle monitor. When I asked Burlison why he didn’t, he said that he was concerned that if he started to make exceptions on the basis of income, the company might stop providing ankle-monitoring services in St. Louis.

“People get arrested because of life choices,” Burlison said. “Whether they’re good for the charge or not, they’re still arrested and have to deal with it, and part of dealing with it is the finances.” To release defendants without monitors simply because they can’t afford the fee, he said, would be to disregard the safety of their victims or the community. “We can’t just release everybody because they’re poor,” he continued.

But many people in the Workhouse awaiting trial are poor. In January, civil rights groups filed suit against the city and the court, claiming that the St. Louis bail system violated the Constitution, in part by discriminating against those who can’t afford to post bail. That same month, the Missouri Supreme Court announced new rules that urged local courts to consider releasing defendants without monetary conditions and to waive fees for poor people placed on monitors. Shortly before the rules went into effect, on July 1, Burlison said that the city intends to shift the way ankle monitors are distributed and plans to establish a fund to help indigent defendants pay for their ankle bracelets. But he said he didn’t know how much money would be in the fund or whether it was temporary or permanent. The need for funding could grow quickly. The pending bail lawsuit has temporarily spurred the release of more defendants from custody, and as a result, public defenders say, the demand for monitors has increased.

Judges are anxious about what people released without posting bail might do once they get out. Several told me that monitors may ensure that the defendants return to court. Not unlike doctors who order a battery of tests for a mildly ill patient to avoid a potential malpractice suit, judges seem to view monitors as a precaution against their faces appearing on the front page of the newspaper. “Every judge’s fear is to let somebody out on recognizance and he commits murder, and then everyone asks, ’How in the hell was this person let out?’” said Robert Dierker, who served as a judge in St. Louis from 1986 to 2017 and now represents the city in the bail lawsuit. “But with GPS, you can say, ’Well, I have him on GPS, what else can I do?’”

Critics of monitors contend that their public-safety appeal is illusory: If defendants are intent on harming someone or skipping town, the bracelet, which can be easily removed with a pair of scissors, would not stop them. Studies showing that people tracked by GPS appear in court more reliably are scarce, and research about its effectiveness as a deterrent is inconclusive.

“The fundamental question is, What purpose is electronic monitoring serving?” said Blake Strode, the executive director of ArchCity Defenders, a nonprofit civil rights law firm in St. Louis that is one of several firms representing the plaintiffs in the bail lawsuit. “If the only purpose it’s serving is to make judges feel better because they don’t want to be on the hook if something goes wrong, then that’s not a sensible approach. We should not simply be monitoring for monitoring’s sake.”

Electronic monitoring was first conceived in the early 1960s by Ralph and Robert Gable, identical twins studying at Harvard under the psychologists Timothy Leary and B.F. Skinner, respectively. Influenced in part by Skinner’s theories of positive reinforcement, the Gables rigged up some surplus missile-tracking equipment to monitor teenagers on probation; those who showed up at the right places at the right times were rewarded with movie tickets, limo rides and other prizes.

Although this round-the-clock monitoring was intended as a tool for rehabilitation, observers and participants alike soon recognized its potential to enhance surveillance. All but two of the 16 volunteers in their initial study dropped out, finding the two bulky radio transmitters oppressive. “They felt like it was a prosthetic conscience, and who would want Mother all the time along with you?” Robert Gable told me. Psychology Today labeled the invention a “belt from Big Brother.”

The reality of electronic monitoring today is that Big Brother is watching some groups more than others. No national statistics are available on the racial breakdown of Americans wearing ankle monitors, but all indications suggest that mass supervision, like mass incarceration, disproportionately affects black people. In Cook County, Illinois, for instance, black people make up 24% of the population, and 67% of those on monitors. The sociologist Simone Browne has connected contemporary surveillance technologies like GPS monitors to America’s long history of controlling where black people live, move and work. In her 2015 book, “Dark Matters,” she traces the ways in which “surveillance is nothing new to black folks,” from the branding of enslaved people and the shackling of convict laborers to Jim Crow segregation and the home visits of welfare agencies. These historical inequities, Browne notes, influence where and on whom new tools like ankle monitors are imposed.

For some black families, including White’s, monitoring stretches across generations. Annette Taylor, the director of Ripple Effect, an advocacy group for prisoners and their families based in Champaign, Illinois, has seen her ex-husband, brother, son, nephew and sister’s husband wear ankle monitors over the years. She had to wear one herself, about a decade ago, she said, for driving with a suspended license. “You’re making people a prisoner of their home,” she told me. When her son was paroled and placed on house arrest, he couldn’t live with her, because he was forbidden to associate with people convicted of felonies, including his stepfather, who was also on house arrest.

Some people on monitors are further constrained by geographic restrictions — areas in the city or neighborhood that they can’t go without triggering an alarm. James Kilgore, a research scholar at the University of Illinois at Champaign-Urbana, has cautioned that these exclusionary zones could lead to “e-gentrification,” effectively keeping people out of more-prosperous neighborhoods. In 2016, after serving four years in prison for drug conspiracy, Bryan Otero wore a monitor as a condition of parole. He commuted from the Bronx to jobs at a restaurant and a department store in Manhattan, but he couldn’t visit his family or doctor because he was forbidden to enter a swath of Manhattan between 117th Street and 131st Street. “All my family and childhood friends live in that area,” he said. “I grew up there.”

Michelle Alexander, a legal scholar and columnist for The Times, has argued that monitoring engenders a new form of oppression under the guise of progress. In her 2010 book, “The New Jim Crow,” she wrote that the term “mass incarceration” should refer to the “system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls — walls that are invisible to the naked eye but function nearly as effectively as Jim Crow laws once did at locking people of color into a permanent second-class citizenship.”

As the cost of monitoring continues to fall, those who are required to submit to it may worry less about the expense and more about the intrusive surveillance. The devices, some of which are equipped with two-way microphones, can give corrections officials unprecedented access to the private lives not just of those monitored but also of their families and friends. GPS location data appeals to the police, who can use it to investigate crimes. Already the goal is both to track what individuals are doing and to anticipate what they might do next. BI Incorporated, an electronic-monitoring subsidiary of GEO Group, has the ability to assign risk scores to the behavioral patterns of those monitored, so that law enforcement can “address potential problems before they happen.” Judges leery of recidivism have begun to embrace risk-assessment tools. As a result, defendants who have yet to be convicted of an offense in court may be categorized by their future chances of reoffending.

The combination of GPS location data with other tracking technologies such as automatic license-plate readers represents an uncharted frontier for finer-grained surveillance. In some cities, police have concentrated these tools in neighborhoods of color. A CityLab investigation found that Baltimore police were more likely to deploy the Stingray — the controversial and secretive cellphone tracking technology — where African Americans lived. In the aftermath of Freddie Gray’s death in 2015, the police spied on Black Lives Matter protesters with face recognition technology. Given this pattern, the term “electronic monitoring” may soon refer not just to a specific piece of equipment but to an all-encompassing strategy.

If the evolution of the criminal-justice system is any guide, it is very likely that the ankle bracelet will go out of fashion. Some GPS monitoring vendors have already started to offer smartphone applications that verify someone’s location through voice and face recognition. These apps, with names like Smart-LINK and Shadowtrack, promise to be cheaper and more convenient than a boxy bracelet. They’re also less visible, mitigating the stigma and normalizing surveillance. While reducing the number of people in physical prison, these seductive applications could, paradoxically, increase its reach. For the nearly 4.5 million Americans on probation or parole, it is not difficult to imagine a virtual prison system as ubiquitous — and invasive — as Instagram or Facebook.

On January 24, exactly three months after White had his monitor installed, his public defender successfully argued in court for its removal. His phone service had been shut off because he had fallen behind on the bill, so his mother told him the good news over video chat.

When White showed up to EMASS a few days later to have the ankle bracelet removed, he said, one of the company’s employees told him that he couldn’t take off his monitor until he paid his debt. White offered him the $35 in his wallet — all the money he had. It wasn’t enough. The employee explained that he needed to pay at least half of the $700 he owed. Somewhere in the contract he had signed months earlier, White had agreed to pay his full balance “at the time of removal.” But as White saw it, the court that had ordered the monitor’s installation was now ordering its removal. Didn’t that count?

“That’s the only thing that’s killing me,” White told me a few weeks later, in early March. “Why are you all not taking it off?” We were in his brother’s room, which, unlike White’s down the hall, had space for a wobbly chair. White sat on the bed, his head resting against the frame, while his brother sat on the other end by the TV, mumbling commands into a headset for the fantasy video game Fortnite. By then, the prosecutor had offered White two to three years of probation in exchange for a plea. (White is waiting to hear if he has been accepted into the city’s diversion program for “youthful offenders,” which would allow him to avoid pleading and wipe the charges from his record in a year.)

White was wearing a loosefitting Nike track jacket and red sweats that bunched up over the top of his monitor. He had recently stopped charging it, and so far, the police hadn’t come knocking. “I don’t even have to have it on,” he said, looking down at his ankle. “But without a job, I can’t get it taken off.” In the last few weeks, he had sold his laptop, his phone and his TV. That cash went to rent, food and his daughter, and what was left barely made a dent in what he owed EMASS.

It was a Monday — a check-in day — but he hadn’t been reporting for the past couple of weeks. He didn’t see the point; he didn’t have the money to get the monitor removed and the office was an hour away by bus. I offered him a ride.

EMASS check-ins take place in a three-story brick building with a low-slung facade draped in ivy. The office doesn’t take cash payments, and a Western Union is conveniently located next door. The other men in the waiting room were also wearing monitors. When it was White’s turn to check in, Buss, the bond-compliance officer, unclipped the band from his ankle and threw the device into a bin, White said. He wasn’t sure why EMASS had now softened its approach, but his debts nonetheless remained.

Buss calculated the money White owed going back to November: $755, plus 10% annual interest. Over the next nine months, EMASS expected him to make monthly payments that would add up to $850 — more than the court had required for his bond. White looked at the receipt and shook his head. “I get in trouble for living,” he said as he walked out of the office. “For being me.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


Facebook Announced New Financial Services Offering Available in 2020

On Tuesday, Facebook announced its first financial services offering, which will be available in 2020:

"... we’re sharing plans for Calibra, a newly formed Facebook subsidiary whose goal is to provide financial services that will let people access and participate in the Libra network. The first product Calibra will introduce is a digital wallet for Libra, a new global currency powered by blockchain technology. The wallet will be available in Messenger, WhatsApp and as a standalone app — and we expect to launch in 2020... Calibra will let you send Libra to almost anyone with a smartphone, as easily and instantly as you might send a text message and at low to no cost. And, in time, we hope to offer additional services for people and businesses, like paying bills with the push of a button, buying a cup of coffee with the scan of a code or riding your local public transit..."

Long before the announcement, consumers crafted interesting nicknames for the financial service, such as #FaceCoin and #Zuckbucks. Good to see people with a sense of humor.

On a more serious topic, after multiple data breaches and privacy snafus at Facebook (plus repeated promises by CEO Zuckerberg that his company will do better), many people are understandably concerned about data security and privacy. Facebook's announcement also addressed security and privacy:

"... Calibra will have strong protections... We’ll be using all the same verification and anti-fraud processes that banks and credit cards use, and we’ll have automated systems that will proactively monitor activity to detect and prevent fraudulent behavior... We’ll also take steps to protect your privacy. Aside from limited cases, Calibra will not share account information or financial data with Facebook or any third party without customer consent. This means Calibra customers’ account information and financial data will not be used to improve ad targeting on the Facebook family of products. The limited cases where this data may be shared reflect our need to keep people safe, comply with the law and provide basic functionality to the people who use Calibra. Calibra will use Facebook data to comply with the law, secure customers’ accounts, mitigate risk and prevent criminal activity."

So, the new Calibra subsidiary promised that it won't share users' account information with Facebook's core social networking service, except when it will -- to "comply with the law." That leaves Calibra customers trusting Facebook's internal wall separating its business units. And "provide basic functionality to the people who use Calibra" sounds like a loophole big enough to justify almost any data sharing. Meanwhile, the announcement encourages interested persons to sign up for email updates.

Tech and financial experts quickly weighed in on the announcement and its promises. TechCrunch explained why Facebook created a new business subsidiary. After Calibra's Tuesday announcement:

"... critics started harping about the dangers of centralizing control of tomorrow’s money in the hands of a company with a poor track record of privacy and security. Facebook anticipated this, though, and created a subsidiary called Calibra to run its crypto dealings and keep all transaction data separate from your social data. Facebook shares control of Libra with 27 other Libra Association founding members, and as many as 100 total when the token launches in the first half of 2020. Each member gets just one vote on the Libra council, so Facebook can’t hijack the token’s governance even though it invented it."

TechCrunch also explained the risks to Calibra customers:

"... that leaves one giant vector for abuse of Libra: the developer platform... Apparently Facebook has already forgotten how allowing anyone to build on the Facebook app platform and its low barriers to “innovation” are exactly what opened the door for Cambridge Analytica to hijack 87 million people’s personal data and use it for political ad targeting. But in this case, it won’t be users’ interests and birthdays that get grabbed. It could be hundreds or thousands of dollars’ worth of Libra currency that’s stolen. A shady developer could build a wallet that just cleans out a user’s account or funnels their coins to the wrong recipient, mines their purchase history for marketing data or uses them to launder money..."

During the coming months, hopefully Calibra will disclose the controls it will implement on the developer platform to prevent abuses, theft, and fraud.

Readers wanting to learn more should read the Libra White Paper, which provides more details about the companies involved:

"The Libra Association is an independent, not-for-profit membership organization headquartered in Geneva, Switzerland. The association’s purpose is to coordinate and provide a framework for governance for the network... Members of the Libra Association will consist of geographically distributed and diverse businesses, nonprofit and multilateral organizations, and academic institutions. The initial group of organizations that will work together on finalizing the association’s charter and become “Founding Members” upon its completion are, by industry:

1. Payments: Mastercard, PayPal, PayU (Naspers’ fintech arm), Stripe, Visa
2. Technology and marketplaces: Booking Holdings, eBay, Facebook/Calibra, Farfetch, Lyft, Mercado Pago, Spotify AB, Uber Technologies, Inc.
3. Telecommunications: Iliad, Vodafone Group
4. Blockchain: Anchorage, Bison Trails, Coinbase, Inc., Xapo Holdings Limited
5. Venture Capital: Andreessen Horowitz, Breakthrough Initiatives, Ribbit Capital, Thrive Capital, Union Square Ventures
6. Nonprofit and multilateral organizations, and academic institutions: Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking"

Yes, the ride-hailing company, Uber, is involved. Yes, the same ride-hailing service which paid $148 million to settle lawsuits over its 2016 data breach and the subsequent coverup. Yes, the same ride-hailing service with a history of data security, compliance, cultural, and privacy snafus. This suggests -- for better or worse -- that in the future consumers will be able to pay for Uber rides using the Libra Network.

Calibra hopes to have about 100 members in the Libra Association by the service launch in 2020. Clearly, there will be plenty more news to come. Below are draft screen images of the new app.

Early screen images of the Calibra mobile app.


How Google Tracks All Of Your Online Purchases. Its Reasons Are Unclear

Google tracks all of your online purchases. How? ExpressVPN reported:

"Initially stumbled across by a CNBC reporter, a "Google Purchases" page keeps track of all digital receipts sent to your Gmail account from as far back as 2012. The page is not limited to purchases made directly from Google, either. From flight tickets to Amazon purchases to food delivery services, if the receipt went to your Gmail, it’s on the list. Google takes the name, date, and other specifics surrounding the purchase and records them in a list on the page."

The tracking is a reminder of the privileged position internet and email service providers enjoy, with access to so much of users' online activity. Consumers' purchase receipts can include very sensitive information: foods, medicine, and medical devices -- for parents and/or their children; bookings for upcoming travel indicating when a home will be vacant; or purchases of medical marijuana, D-I-Y guns, and/or internet-connected adult toys. The bottom line: some consumers may not want their purchase data collected by Google, nor shared with other companies.

Now that you're aware of the tracking, here's something to consider the next time a cashier at a brick-and-mortar retail store asks: paper or email receipt? I always choose paper. You might, too.

To view your Google Purchases page, visit http://myaccount.google.com/purchases and sign in. Only you can view your purchases page.

The privacy solutions are ugly. One option is to switch to an email provider that doesn't track you. If you decide to stay with Gmail, the only fix is a manual process -- wading through your archive and deleting receipt emails -- which can cost you several hours or days:

"... the only way to remove a purchase from the list is to find and manually delete the email that contains the original receipt. Worse still, you can’t turn off tracking, and there’s no way to delete the list en masse. This process is incredibly tedious... Even more perplexing is that there’s no clear purpose for the collection of this data... the logic behind this reasoning is strange, the info is hiding in Google’s Account page, and it’s not exactly easy to access for users who want to “view and keep track of purchases.” And seeing as this page isn’t really being promoted to its users..."

Google said it is doing more for its customers regarding privacy. Last month, The Washington Post reported:

"... One executive after another at Google’s I/O conference in its hometown of Mountain View, California emphasized new privacy settings in products like search, maps, thermostats and updated mobile phone software. "We strongly believe that privacy and security are for everyone, not just a few," Google CEO Sundar Pichai said.

Said product manager Stephanie Cuthbertson, who introduced a new version of the Android mobile operating system: "You should always be in control of what you share and who you share it with."... Google also committed to improved privacy controls of its Nest-connected home devices, including the ability of users to delete their audio files. Some users have reported having hackers eavesdropping through their Nest devices."

Hmmm. It seems the promised privacy and control do not extend to Gmail users' purchase data. What are your opinions?

[Editor's note: this page was revised Monday evening to fix a typo and to include the link to the Google Purchases page.]


The Worst Mobile Apps For Privacy

ExpressVPN compiled its 2019 list of the four worst mobile apps for privacy. If you value your online privacy and want to protect yourself, the security firm advises: "Delete them now." The list includes both predictable items and some surprises:

"1. Angry Birds: If you were an international spying organization, which app would you target to harvest smartphone user information? If you picked Angry Birds, congratulations! You’re thinking just like the NSA and GCHQ did... what it lacks in gameplay, it certainly makes up for in leaky data... A mobile ad platform placed a code snippet in Angry Birds that allowed the company to target advertisements to users based on previously collected information. Unfortunately, the ad’s library of data was visible, meaning it was leaking user information such as phone number, call logs, location, political affiliation, sexual orientation, and marital status..."

"2. The YouVersion Bible App: The YouVersion Bible App is on more than 300 million devices around the world. It claims to be the No. 1 Bible app and comes with over 1,400 Bibles in over 1,000 languages. It also harvests data... Notable permissions the app demands are full internet access, the ability to connect and disconnect to Wi-Fi, modify stored content on the phone, track the device’s location, and read all a user’s contacts..."

Read the full list of sketchy apps at the ExpressVPN site.


How To Do Online Banking Safely And Securely

Most people love the convenience of online banking via their smartphone or other mobile device. However, it is important to do it safely and securely. How? NordVPN listed four items:

"1. Don't lose your phone: The biggest security threat of your mobile phone is also its greatest asset – its size. Phones are small, handy, beautiful, and easy to lose..."

So, keep your phone in your hand, and never leave it on a table or out of your sight. Of course, you should lock your phone with a strong password. NordVPN commented about other locking options:

"Facial recognition: convenient but not secure, since it can sometimes be bypassed with a photograph... Fingerprints: low false-acceptance rates, perfect if you don’t often wear gloves."

More advice:

"2. Use the official banking app, not the browser... If you aren’t careful, you could download a fake banking app created by scammers to break into your account. Make sure your bank created or approves of the app you are downloading. Get it from their website. Moreover, do not use mobile browsers to log in to your bank account – they are less secure than bank-sanctioned apps..."

Obviously, you should sign out of the mobile app when finished with your online banking session. Otherwise, a thief with your stolen phone has direct access to your money and accounts. NordVPN also advises consumers to do their homework first: read app ratings and reviews before downloading any mobile app.
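One technical reason a bank-sanctioned app can beat a mobile browser is certificate pinning: the app ships with the fingerprint of the bank's TLS certificate and refuses to talk to any server whose certificate doesn't match, which blunts man-in-the-middle attacks on hostile networks. Here is a minimal Python sketch of the idea -- "bank.example.com" and the pinned fingerprint are placeholders, not any real bank's details:

```python
# Sketch of certificate pinning: compare a server's TLS certificate
# fingerprint against a known-good value bundled with the app.
# "bank.example.com" and KNOWN_GOOD_SHA256 are placeholders.
import hashlib
import ssl

KNOWN_GOOD_SHA256 = "0" * 64  # replace with the real certificate's SHA-256

def certificate_matches_pin(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))  # fetch the leaf certificate
    der = ssl.PEM_cert_to_DER_cert(pem)             # canonical binary form
    return hashlib.sha256(der).hexdigest() == KNOWN_GOOD_SHA256

if __name__ == "__main__":
    if not certificate_matches_pin("bank.example.com"):
        raise SystemExit("Certificate does not match the pinned fingerprint.")
```

A general-purpose mobile browser has no way to know your bank's certificate in advance for every site it visits, which is part of why a well-built, up-to-date banking app can be the safer channel.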

Readers of this blog are probably familiar with the next item:

"4. Don’t use mobile banking on public Wi-Fi: Anyone on a public Wi-Fi network is in danger of a security breach. Most of these networks lack basic security measures and have poor router configurations and weak passwords..."

Popular places with public Wi-Fi include coffee shops, fast food restaurants, supermarkets, airports, libraries, and hotels. If you must do online banking in a public place, NordVPN advised:

"... use your cellular network instead. It’s not perfect, but it’s better than public Wi-Fi. Better yet, turn on a virtual private network (VPN) and then use public Wi-Fi..."

There you have it. Read the entire online banking article by NordVPN. Ignore this advice at your own peril.