595 posts categorized "Privacy"

Some Surprising Facts About Facebook And Its Users

The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (as of March/April 2018) from 71% in 2014-2015. More teens use other social media services: YouTube (85%), Instagram (72%) and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays the items it predicts users will engage with, meaning click the "Like" button or other reaction buttons. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.
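
To make that filtering concrete, here is a minimal, purely illustrative sketch of engagement-based ranking in Python. The weights, field names, and scoring rules are invented for illustration only; Facebook's actual algorithm is proprietary and far more complex.

    # Illustrative sketch of engagement-based feed ranking. All weights and
    # fields are hypothetical -- Facebook's real algorithm is proprietary.
    def engagement_score(post, user):
        """Estimate how likely `user` is to click "Like" or otherwise react."""
        score = 0.0
        if post["author"] in user["close_friends"]:
            score += 3.0  # favor people you interact with often
        if post["topic"] in user["liked_topics"]:
            score += 2.5  # reinforce interests you already expressed
        score += 2.0 * post["like_rate"]     # posts that similar users liked
        score += 1.5 * post["comment_rate"]  # comments signal engagement
        return score

    def build_feed(posts, user, limit=25):
        """Show only the top-scoring posts; the rest are never displayed."""
        ranked = sorted(posts, key=lambda p: engagement_score(p, user), reverse=True)
        return ranked[:limit]

The point of the sketch is the last line: posts below the cutoff simply never appear, which is why a friend's post, or a news item you might disagree with, can vanish without a trace.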

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it, PBS has broadcast a two-part documentary titled, "The Facebook Dilemma" (see trailer below), which arguably could have been titled, "the dark side of sharing." The Frontline documentary rightly discusses Facebook's approaches to news, privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained that during an April:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for Mr. Zuckerberg's claims of greater transparency and control. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. Facebook acts more like a massive advertising network than simply a social media service.

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


Senator Wyden Introduces Bill To Help Consumers Regain Online Privacy And Control Over Sensitive Data

Late last week, Senator Ron Wyden (Dem - Oregon) introduced a "discussion draft" of legislation to help consumers recover online privacy and control over their sensitive personal data. Senator Wyden said:

"Today’s economy is a giant vacuum for your personal information – Everything you read, everywhere you go, everything you buy and everyone you talk to is sucked up in a corporation’s database. But individual Americans know far too little about how their data is collected, how it’s used and how it’s shared... It’s time for some sunshine on this shadowy network of information sharing. My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules with real teeth to punish companies that abuse Americans’ most private information.”

The press release by Senator Wyden's office explained the need for new legislation:

"The government has failed to respond to these new threats: a) Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities; b) Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data; c) Consumers have no effective way to control companies’ use and sharing of their data."

Consumers in the United States lost both control and privacy protections when the U.S. Federal Communications Commission (FCC), led by President Trump appointee Ajit Pai, a former Verizon lawyer, repealed both broadband privacy and net neutrality protections last year. A December 2017 study of 1,077 voters found that most want net neutrality protections. President Trump signed the privacy-rollback legislation in April 2017. A prior blog post listed many historical abuses of consumers by some internet service providers (ISPs).

With broadband privacy protections repealed, ISPs are free to collect and archive as much data about consumers as they want, without having to notify consumers, get their approval, or disclose who they share the archived data with. That's 100 percent freedom for ISPs and zero freedom for consumers.

By repealing online privacy and net neutrality protections for consumers, the FCC essentially punted responsibility to the U.S. Federal Trade Commission (FTC). According to Senator Wyden's press release:

"The FTC, the nation’s main privacy and data security regulator, currently lacks the authority and resources to address and prevent threats to consumers’ privacy: 1) The FTC cannot fine first-time corporate offenders. Fines for subsequent violations of the law are tiny, and not a credible deterrent; 2) The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money; 3) The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator; and 4) The FTC does not have enough staff, especially skilled technology experts. Currently about 50 people at the FTC police the entire technology sector and credit agencies."

This means consumers have no protections nor legal options unless the company, or website, violates its published terms and conditions and privacy policies. To solve the above gaps, Senator Wyden's new legislation, titled the Consumer Data Privacy Act (CDPA), contains several new and stronger protections. It:

"... allows consumers to control the sale and sharing of their data, gives the FTC the authority to be an effective cop on the beat, and will spur a new market for privacy-protecting services. The bill empowers the FTC to: i) Establish minimum privacy and cybersecurity standards; ii) Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives; iii) Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information. It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized; iv) Give consumers a way to review what personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it; v) Hire 175 more staff to police the largely unregulated market for private data; and vi) Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy, and security."

Permitting companies to charge consumers who opt out of data collection and sharing is a good thing. Why? Monthly payments by consumers are leverage -- a strong incentive for companies to provide better cybersecurity.

Business as usual -- cybersecurity methods by corporate executives and government enforcement -- isn't enough. The tsunami of data breaches is an indication: numerous breach events occurred during October alone, and several more notable breaches happened earlier this year.

The status quo, or business as usual, is unacceptable. Executives' behavior won't change without stronger consequences like jail time, since companies perform cost-benefit analyses regarding how much to spend on cybersecurity versus the probability of breaches and fines. Opt-outs of data collection and sharing by consumers, steeper fines, and criminal penalties could change those cost-benefit calculations.

Four former chief technologists at the FCC support Senator Wyden's legislation. Gabriel Weinberg, the Chief Executive Officer of DuckDuckGo, also supports it:

"Senator Wyden’s proposed consumer privacy bill creates needed privacy protections for consumers, mandating easy opt-outs from hidden tracking. By forcing companies that sell and monetize user data to be more transparent about their data practices, the bill will also empower consumers to make better-informed privacy decisions online, enabling companies like ours to compete on a more level playing field."

Regular readers of this blog know that the DuckDuckGo search engine (unlike the Google, Bing and Yahoo search engines) doesn't track users, doesn't collect or archive data about users and their devices, and doesn't collect or store users' search criteria. So, DuckDuckGo users can search knowing their data isn't being sold to advertisers, data brokers, and others.

Lastly, Wyden's proposed legislation includes several key definitions:

"... The term "automated decision system" means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers... The term "automated decision system impact assessment" means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision 14 system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security that includes... The term "data protection impact assessment" means a study evaluating the extent to which an information system protects the privacy and security of personal information the system processes... "

The draft legislation requires companies to perform both automated decision system impact assessments and data protection impact assessments, and requires the FTC to set the frequency and conditions for both. A copy of the CDPA draft is also available here (Adobe PDF; 67.7 k bytes).

This is a good start. It is important... critical... to hold accountable both corporate executives and the automated decision systems they approve and deploy. Based upon history, outsourcing has been one corporate tactic to manage liability by shifting it to providers. It is good to close now any loopholes through which executives could abuse artificial intelligence and related technologies to avoid responsibility.

What are your thoughts, opinions of the proposed legislation?


Aetna To Pay More Than $17 Million To Resolve 2 Privacy Breaches

Aetna inked settlement agreements with several states, including New Jersey, to resolve disclosures of sensitive patient information. According to an announcement by the Attorney General for New Jersey, the settlement agreements resolve:

"... a multi-state investigation focused on two separate privacy breaches by Aetna that occurred in 2017 – one involving a mailing that potentially revealed information about addressees’ HIV/AIDS status, the other involving a mailing that potentially revealed individuals’ involvement in a study of patients with atrial fibrillation (or AFib)..."

Connecticut, Washington, and the District of Columbia joined with New Jersey for both the investigation and settlement agreements. The multi-state investigation found:

"... that Aetna inadvertently disclosed HIV/AIDS-related information about thousands of individuals across the U.S. – including approximately 647 New Jersey residents – through a third-party mailing on July 28, 2017. The envelopes used in the mailing had an over-sized, transparent glassine address window, which revealed not only the recipients’ names and addresses, but also text that included the words “HIV Medications"... The second breach occurred in September 2017 and involved a mailing sent to 1,600 individuals concerning a study of patients with AFib. The envelopes for the mailing included the name and logo for the study – IMPACT AFib – which could have been interpreted as indicating that the addressee had an AFib diagnosis... Aetna not only violated the federal Health Insurance Portability and Accountability Act (HIPAA), but also state laws pertaining to the protected health information of individuals in general, and of persons with AIDS or HIV infection in particular..."

A class-action lawsuit filed on behalf of affected HIV/AIDS patients has been settled, pending approval from a federal court; that settlement requires Aetna to pay about $17 million to resolve the allegations. Terms of the multi-state settlement agreement require Aetna to pay a $365,211.59 civil penalty to New Jersey, and:

  • Implement policy, processes, and employee training reforms to both better protect persons' protected health information, and ensure mailings maintain persons' privacy; and
  • Hire an independent consultant to evaluate and report on its privacy protection practices, and to monitor its compliance with the terms of the settlement agreements.

In December of last year, CVS Health and Aetna announced a merger agreement under which CVS Health acquired Aetna for about $69 billion. Last week, CVS Health announced an expansion of its board of directors to include three directors from its Aetna unit. At press time, neither company's website mentioned the multi-state settlement agreement.


'Got Another Friend Request From You' Warnings Circulate On Facebook. What's The Deal?

Several people have posted warning messages in their Facebook News Feeds, such as:

"Please do not accept any new Friend requests from me"

And:

"Hi … I actually got another friend request from you yesterday … which I ignored so you may want to check your account. Hold your finger on the message until the forward button appears … then hit forward and all the people you want to forward too … I had to do the people individually. Good Luck!"

Maybe, you've seen one of these warnings. Some of my Facebook friends posted these warnings in their News Feed or in private messages via Messenger. What's happening? The fact-checking site Snopes explained:

"This message played on warnings about the phenomenon of Facebook “pirates” engaging in the “cloning” of Facebook accounts, a real (but much over-hyped) process by which scammers target existing Facebook users accounts by setting up new accounts with identical profile pictures and names, then sending out friend requests which appear to originate from those “cloned” users. Once those friend requests are accepted, the scammers can then spread messages which appear to originate from the targeted account, luring that person’s friends into propagating malware, falling for phishing schemes, or disclosing personal information that can be used for identity theft."

Hacked Versus Cloned Accounts

While everyone wants to warn their friends, it is important to do your homework first. Many Facebook users have confused "hacked" versus "cloned" accounts. A hack is when another person has stolen your password and used it to sign into your account to post fraudulent messages -- pretending to be you.

Snopes described above what a "cloned" account is... basically a second, unauthorized account. Sadly, there are plenty of online sources from which scammers can obtain stolen photos and information to create cloned accounts. One source is the multitude of massive corporate data breaches: Equifax, Nationwide, Facebook, the RNC, Uber, and others. Another source is Facebook friends with sloppy security settings on their accounts: the "Public" setting is no security. That allows scammers to access your information via your friends' wide-open accounts.

It is important to know the differences between "hacked" and "cloned" accounts. Snopes advised:

"... there would be no utility to forwarding [the above] warning to any of your Facebook friends unless you had actually received a second friend request from one of them. Moreover, even if this warning were possibly real, the optimal approach would not be for the recipient to forward it willy-nilly to every single contact on their friends list... If you have reason to believe your Facebook account might have been “cloned,” you should try sending separate private messages to a few of your Facebook friends to check whether any of them had indeed recently received a duplicate friend request from you, as well as searching Facebook for accounts with names and profile pictures identical to yours. Should either method turn up a hit, use Facebook’s "report this profile" link to have the unauthorized account deactivated."

Cloned Accounts

If you received a (second) Friend Request from a person who you are already friends with on Facebook, then that suggests a cloned account. (Cloned accounts are not new. It's one of the disadvantages of social media.) Call your friend on the phone or speak with him/her in-person to: a) tell him/her you received a second Friend Request, and b) determine whether or not he/she really sent that second Friend Request. (Yes, online privacy takes some effort.) If he/she didn't send a second Friend Request, then you know what to do: report the unauthorized profile to Facebook, and then delete the second Friend Request. Don't accept it.

If he/she did send a second Friend Request, ask why. (Let's ignore the practice by some teens to set up multiple accounts; one for parents and a second for peers.) I've had friends -- adults -- forget their online passwords, and set up a second Facebook account -- a clumsy, confusing solution. Not everyone has good online skills. Your friend will tell you which account he/she uses and which account he/she wants you to connect to. Then, un-Friend the other account.

Hacked Accounts

All Facebook users should know how to determine whether their Facebook account has been hacked. Online privacy takes effort. How to check:

  1. Sign into Facebook
  2. Select "Settings."
  3. Select "Security and Login."
  4. You will see a list of the locations where your account has been accessed. If one or more of the locations weren't you, then it's likely another person has stolen and used your password. Proceed to step #5.
  5. For each location that wasn't you, select "Not You" and then "Secure Account." Follow the online instructions displayed and change your password immediately.

I've performed this check after friends have (erroneously) informed me that my account was hacked. It wasn't.

Facebook Search and Privacy Settings

Those wanting to be proactive can search the Facebook site to find other persons using the same name. Simply enter your name in the search box. The results page lists other accounts with the same name. If you see another account using your identical profile photo (and/or other identical personal information and photos), then use Facebook's "report this profile" link to report the unauthorized account.

You can go one step further and warn your Facebook friends who have the "Public" security setting on their accounts. They may be unaware of the privacy risks, and once informed may change their security setting to "Friends Only." Hopefully, they will listen.

If they don't listen, you can suggest that he/she at a minimum change other privacy settings. Users control who can see their photos and list of friends on Facebook. To change the privacy setting, navigate to your Friends List page and select the edit icon. Then, select the "Edit Privacy" link. Next, change both privacy settings for, "Who can see your friends?" and "Who can see the people, Pages, and lists you follow?" to "Only Me." As a last resort, you can un-Friend the security neophyte, if he/she refuses to make any changes to their security settings.


Why The Recent Facebook Data Breach Is Probably Much Worse Than You First Thought

The recent data breach at Facebook may be much worse than first thought. It's not just that a known 50 million users were affected, and that 40 million more may also be affected. There's more. The New York Times reported on Tuesday:

"... the impact could be significantly bigger since those stolen credentials could have been used to gain access to so many other sites. Companies that allow customers to log in with Facebook Connect are scrambling to figure out whether their own user accounts have been compromised."

Facebook Connect, an online tool launched in 2008, allows users to sign into other apps and websites using their Facebook credentials (e.g., username, password). Many small, medium, and large businesses joined the Facebook Connect program, which offered:

"... a simple proposition: Connect to our platform, and we’ll make it faster and easier for people to use your apps... The tool was adopted by thousands of other firms, from mom-and-pop publishing companies to high-profile tech outfits like Airbnb and Uber."

Initially, Facebook Connect made online life easier and more convenient. Users could sign up for new apps and sites without having to create and remember new sign-in credentials:

"But in July 2017, that measure of security fell short. By exploiting three software bugs, attackers forged “access tokens,” digital keys used to gain entry to a user’s account. From there, the hackers were able to do anything users could do on their own Facebook accounts, including logging in to third-party apps."

On Tuesday, Facebook released a "Login Update," which said in part:

"We have now analyzed our logs for all third-party apps installed or logged in during the attack we discovered last week. That investigation has so far found no evidence that the attackers accessed any apps using Facebook Login.

Any developer using our official Facebook SDKs — and all those that have regularly checked the validity of their users’ access tokens – were automatically protected when we reset people’s access tokens. However, out of an abundance of caution, as some developers may not use our SDKs — or regularly check whether Facebook access tokens are valid — we’re building a tool to enable developers to manually identify the users of their apps who may have been affected, so that they can log them out."
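
For the curious, "checking the validity of users' access tokens" can be done with the Graph API's debug_token endpoint. Below is a minimal sketch in Python; the token values are placeholders, and error handling is reduced to the essentials.

    # Minimal sketch: ask Facebook whether a user's access token is still
    # valid, via the Graph API debug_token endpoint. Tokens are placeholders.
    import requests

    GRAPH_URL = "https://graph.facebook.com/debug_token"

    def token_is_valid(user_token, app_token):
        """Return True if Facebook reports the user's token as still valid."""
        resp = requests.get(
            GRAPH_URL,
            params={"input_token": user_token, "access_token": app_token},
            timeout=10,
        )
        resp.raise_for_status()
        return bool(resp.json().get("data", {}).get("is_valid"))

A site using Facebook Login could run a check like this at each sign-in and log the user out when the token was invalidated by Facebook's reset.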

So, there is more news to come about this. According to the New York Times, some companies' experiences so far:

"Tinder, the dating app, has found no evidence that accounts have been breached, based on the "limited information Facebook has provided," Justine Sacco, a spokeswoman for Tinder and its parent company, the Match Group, said in a statement... The security team at Uber, the ride-hailing giant, is logging some users out of their accounts to be cautious, said Melanie Ensign, a spokeswoman for Uber. It is asking them to log back in — a preventive measure that would invalidate older, stolen access tokens."


Tips For Parents To Teach Their Children Online Safety

Today's children often use mobile devices at very young ages... four, five, or six years of age. And they don't know anything about online dangers: computer viruses, stalking, cyber-bullying, identity theft, phishing scams, ransomware, and more. Nor do they know how to read terms-of-use and privacy policies. It is parents' responsibility to teach them.

NordVPN, a maker of privacy software, offers several tips to help parents teach their children about online safety:

"1. Set an example: If you want your kid to be careful and responsible online, you should start with yourself."

Children watch their parents. If you practice good online safety habits, they will learn from watching you. And:

"2. Start talking to your kid early and do it often: If your child already knows how to play a video on Youtube or is able to download a gaming app without your help, they also should learn how to do it safely. Therefore, it’s important to start explaining the basics of privacy and cybersecurity at an early age."

So, long before having the "sex talk" with their children, parents should have the online safety talk. Developing good online safety habits at a young age will help children throughout their lives, especially as adults:

"3. Explain why safe behavior matters: Give relatable examples of what personal information is – your address, social security number, phone number, account credentials, and stress why you can never share this information with strangers."

You wouldn't give this information to a stranger on a city street. The same applies online. That also means discussing social media:

"4. Social media and messaging: a) don’t accept friend requests from people you don’t know; b) never send your pictures to strangers; c) make sure only your friends can see what you post on Facebook; d) turn on timeline review to check posts you are tagged in before they appear on your Facebook timeline; e) if someone asks you for some personal information, always tell your parents; f) don’t share too much on your profile (e.g., home address, phone number, current location); and g) don’t use your social media logins to authorize apps."

These are the basics. Read the entire list of online safety tips for parents by NordVPN.


Facebook To Remove Onavo VPN App From Apple App Store

Not all Virtual Private Network (VPN) software is created equal. Some do a better job at protecting your privacy than others. Mashable reported that Facebook:

"... plans to remove its Onavo VPN app from the App Store after Apple warned the company that the app was in violation of its policies governing data gathering... For those blissfully unaware, Onavo sold itself as a virtual private network that people could run "to take the worry out of using smartphones and tablets." In reality, Facebook used data about users' internet activity collected by the app to inform acquisitions and product decisions. Essentially, Onavo allowed Facebook to run market research on you and your phone, 24/7. It was spyware, dressed up and neatly packaged with a Facebook-blue bow. Data gleaned from the app, notes the Wall Street Journal, reportedly played into the social media giant's decision to start building a rival to the Houseparty app. Oh, and its decision to buy WhatsApp."

Thanks Apple! We've all heard of the #FakeNews hashtag on social media. Yes, there is a #FakeVPN hashtag, too. So, buyer beware... online user beware.


Whirlpool's Online Product Registration: Confidentiality and Privacy Concerns

Earlier this month, my wife and I relocated to a different city within the same state to live closer to our new, 14-month-old grandson. During the move, we bought new home appliances -- a clothes washer and dryer, both made by Whirlpool -- which prompted today's blog post.

The packaging and operation instructions included two registration postcards with the model and serial numbers printed on the form. Nothing controversial about that. The registration cards included "Other Easy Ways To Register," and listed both registration websites for the United States and Canada. I tried the online registration to see what improvements or benefits Whirlpool's United States registration site might offer over the old-school snail-mail method besides speed.

The landing page includes a form for the customer's contact information, product purchased information, and future purchase plans. Pretty standard stuff. Nothing alarming there. Near the bottom of the form and just above the "Complete Registration" button are links to Whirlpool's Terms & Conditions and Privacy policies. I read both and found some surprises.

First, the site uses inconsistent nomenclature: two different policy titles. The link says "Terms & Conditions" while the title of the actual policy page states, "Terms Of Use." Which is it? Inconsistent nomenclature can confuse users. Not good. Come on, Whirlpool! This is not hard. Good website usability includes the consistent use of the same page title, so users know where they are going when they select a link, and know they've arrived at the expected destination.

Second, the Terms Of Use policy page (well, I had to pick a title so it would be clear for you) lacks a date. This can be confusing, making it difficult, if not impossible, for consumers to know and reference the exact document read, plus determine what changes, if any, were posted since the prior version. Not good. Come on, Whirlpool! Add a publication date. It's not hard.

Third, the Terms Of Use policy contained this clause:

"Whirlpool Corporation welcomes your submissions; however, any information submitted, other than your personal information (for example, your name and e-mail address), to Whirlpool Corporation through this site is the exclusive property of Whirlpool Corporation and is considered NOT to be confidential. Whirlpool Corporation does not receive the submission in confidence or under any confidential or fiduciary relationship. Whirlpool Corporation may use the submission for any purpose without restriction or compensation."

So, the Terms of Use policy is both vague and clear at the same time. It is vague because it doesn't list the exact data elements considered "personal information." Not good. This leaves consumers to guess. The policy lists only two data elements. What about the rest? Are all confidential, or only some? And if some, which ones? Here's the list I consider confidential: name, street address, country, phone number, e-mail address, IP address, device type, device model, device operating system, payment card information, billing address, and online credentials (should I create a profile at the Whirlpool site). Come on, Whirlpool! Get it together and provide the complete list of data elements you consider "personal information." It's not hard.

Fourth, the Terms Of Use policy is also clear because the sentences quoted above make Whirlpool's intentions plain: submissions to the site other than "personal information" are not confidential, and Whirlpool can do with them whatever it wants. Since the policy doesn't fully list which data elements are personal, consumers must assume anything else they submit is fair game. Not good.

Next, I read Whirlpool's Privacy policy, and hoped that it would clarify things. Thankfully, a little good news. First, the Privacy policy listed a date: May 31, 2018. Second, more inconsistent site nomenclature: the page-bottom links across the site say "Privacy Policy" while the policy page title says "Privacy Statement." I selected the "Expand All" button to view the entire policy. Third, Whirlpool's Privacy Statement listed the items considered personal information:

"- Your contact information, such as your name, email address, mailing address, and phone number
- Your billing information, such as your credit card number and billing address
- Your Whirlpool account information, including your user name, account number, and a password
- Your product and ownership information
- Your preferences, such as product wish lists, order history, and marketing preferences"

This list is a good start. A simple link to this section from the Terms Of Use policy would do wonders to clarify things. However, Whirlpool treats some key data it collects far more freely than "personal information." The Privacy Statement contains this clause:

"Whirlpool and its business partners and service providers may use a variety of technologies that automatically or passively collect information about how you interact with our Websites ("Usage Information"). Usage Information may include: (i) your IP address, which is a unique set of numbers assigned to your computer by your Internet Service Provider (ISP) (which, depending on your ISP, may be a different number every time you connect to the Internet); (ii) the type of browser and operating system you use; and (iii) other information about your online session, such as the URL you came from to get to our Websites and the date and time you visited our Websites."

And, the Privacy Statement mentions the use of several online tracking technologies:

"We use Local Shared Objects (LSOs) such as HTML5 or Flash on our Websites to store content information and preferences. Third parties with whom we partner to provide certain features on our Websites or to display advertising based upon your web browsing activity use LSOs such as HTML5 or Flash to collect and store information... Web beacons are tiny electronic image files that can be embedded within a web page or included in an e-mail message, and are usually invisible to the human eye. When we use web beacons within our web pages, the web beacons (also known as “clear GIFs” or “tracking pixels”) may tell us such things as: how many people are coming to our Websites, whether they are one-time or repeat visitors, which pages they viewed and for how long, how well certain online advertising campaigns are converting, and other similar Website usage data. When used in our e-mail communications, web beacons can tell us the time an e-mail was opened, if and how many times it was forwarded, and what links users click on from within the e- mail message."

While the "EU-US Privacy Shield" section of the privacy policy lists Whirlpool's European subsidiaries, and contains a Privacy Shield link to an external site listing the companies that are probably some of Whirlpool's service and advertising partners, the privacy policy really does not disclose all of the "third parties," "business partners," "service vendors," advertising partners, and affiliates Whirlpool shares data with. Consumers are left in the dark.

Last, the "Your Rights: Choice & Access" section of the privacy policy mentions the opt-out mechanism for consumers. While consumers can opt-out or cancel receiving marketing (e.g., promotional) messaging from Whirlpool, you can't opt-out of the data collection and archival. So, choice is limited.

Given this and the above concerns, I abandoned the product registration form. Yep. Didn't complete it. Maybe I will in the future after Whirlpool fixes things. Perhaps most importantly, today's blog post is a reminder for all consumers: always read companies' privacy and terms-of-use policies. Always. You never know what you'll find that is irksome. And, if you don't know how to read online policies, this blog has some tips and suggestions.


Keep An Eye On Facebook's Moves To Expand Its Collection Of Financial Data About Its Users

On Monday, the Wall Street Journal reported that the social media giant had approached several major banks about sharing their detailed financial information about consumers in order "to boost user engagement." Reportedly, Facebook approached JPMorgan Chase, Wells Fargo, Citigroup, and U.S. Bancorp. And, the detailed financial information sought included debit/credit/prepaid card transactions and checking account balances.

The Reuters news service also reported about the talks. The Reuters story mentioned the above banks, plus PayPal and American Express. Then, in a reply, Facebook said that the Wall Street Journal news report was wrong. TechCrunch reported:

"Facebook spokesperson Elisabeth Diana tells TechCrunch it’s not asking for credit card transaction data from banks and it’s not interested in building a dedicated banking feature where you could interact with your accounts. It also says its work with banks isn’t to gather data to power ad targeting, or even personalize content... Facebook already lets Citibank customers in Singapore connect their accounts so they can ping their bank’s Messenger chatbot to check their balance, report fraud or get customer service’s help if they’re locked out of their account... That chatbot integration, which has no humans on the other end to limit privacy risks, was announced last year and launched this March. Facebook works with PayPal in more than 40 countries to let users get receipts via Messenger for their purchases. Expansions of these partnerships to more financial services providers could boost usage of Messenger by increasing its convenience — and make it more of a centralized utility akin to China’s WeChat."

There's plenty in the TechCrunch story. Reportedly, Diana's statement said that banks approached Facebook, and that it already partners:

"... with banks and credit card companies to offer services like customer chat or account management. Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates... The idea is that messaging with a bank can be better than waiting on hold over the phone – and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences – not for advertising or anything else. A critical part of these partnerships is keeping people’s information safe and secure."

What to make of this? First, it really doesn't matter who approached whom. There's plenty of history. Way back in 2012, a German credit reporting agency approached Facebook. So, the financial sector is fully aware of the valuable data collected by Facebook.

Second, users doing business on the platform have already given Facebook permission to collect transaction data. Third, while Facebook's reply was about its users generally, its statement said "no" but sounded more like a "yes." Why? Basically, "account linking," or the convenience of purchase notifications, is the hook into collecting users' financial transaction data. Existing practices, such as fitness apps and music sharing, highlight how "account linking" is already used for data collection. Whatever users share on the platform, Facebook collects.

Fourth, the push to collect more banking data appears at best poorly timed, and at worst -- arrogant. Facebook is still trying to recover and regain users' trust after 87 million persons were affected by the massive data breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement on tech companies, like Google and Facebook. Facebook has stumbled as its screening to identify political ads by politicians has incorrectly flagged news sites. Facebook CEO Mark Zuckerberg didn't help matters with his bumbling comments while failing to explain his company's stumbles to identify and prevent fake news.

Gary Cohn, President Donald Trump's former chief economic adviser, sharply criticized social media companies, including Facebook, for allowing fake news:

"In 2008 Facebook was one of those companies that was a big platform to criticize banks, they were very out front of criticizing banks for not being responsible citizens. I think banks were more responsible citizens in 2008 than some of the social media companies are today."

So, it seems wise to keep an eye on Facebook as it attempts to expand its collection of consumers' financial information. Fifth, banks and banking executives bear some responsibility, too. A guest post on Forbes explained:

"Whether this [banking] partnership pans or not, the Facebook plans are a reminder that banks sit on mountains of wealth much more valuable than money. Because of the speed at which tech giants move, banks must now make sure their clients agree on who owns their data, consent to the use of them, and understand with who they are shared. For that, it is now or never... In the financial industry, trust between a client and his provider is of primary importance. You can’t sell a customer’s banking data in the same way you sell his or her internet surfing behavior. Finance executives understand this: they even see the appropriate use of customer data as critical to financial stability. It is now or never to define these principles on the use of customer data... It’s why we believe new binding guidelines such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act are welcome, even if they have room for improvement... A report by the US Treasury published earlier this week called on Congress to enact a federal data security and breach notification law to protect consumer financial data. The principles outlined above can serve as guidance to lawmakers drafting legislation, and bank executives considering how to respond to advances by Facebook and other big techs..."

Consumers should control their data -- especially financial data. If those rules are not put in place, then consumers have truly lost control of the sensitive personal and financial information that describes them. What are your opinions?


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, the facial recognition software by Amazon, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes aside about politicians, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

[Image: list of the 28 Congressional legislators misidentified by Amazon Rekognition in the ACLU study]

With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer at Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.1 These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon has performed on Rekognition, with details by race, gender, and age;
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition;
  • The list of all law enforcement agencies which have used or currently use Rekognition;
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices;
  • The protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons;
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA);
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses; and
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. " Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend 95 percent similarity scores, but the public does not know if police departments actually use the higher setting, or consistently do so across all types of criminal investigations. Plus, the CPT report cast doubt on human "backstop" intervention, which Amazon's reply seems to heavily rely upon.
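
For readers wondering where that "confidence threshold" actually lives, here is a minimal sketch using the AWS boto3 SDK's Rekognition client. The "mugshots" collection name and image path are placeholders; the point is that the threshold is simply a parameter the caller supplies, and nothing enforces Amazon's recommended 95 percent.

    # Minimal sketch: search a face collection with Amazon Rekognition via
    # boto3. The "mugshots" collection and image path are placeholders.
    import boto3

    client = boto3.client("rekognition")

    def find_matches(image_path, threshold=95.0):
        """Return (face_id, similarity) pairs at or above `threshold` percent."""
        with open(image_path, "rb") as f:
            response = client.search_faces_by_image(
                CollectionId="mugshots",        # placeholder collection
                Image={"Bytes": f.read()},
                FaceMatchThreshold=threshold,   # the setting at issue
                MaxFaces=5,
            )
        return [(m["Face"]["FaceId"], m["Similarity"])
                for m in response["FaceMatches"]]

At the default threshold of 80, many more candidate "matches" come back than at 95, which is how false positives like the ACLU's can multiply.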

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

[Editor's note: today's guest post, by reporters at ProPublica, explores privacy and data collection issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

To an outsider, the fancy booths at last month’s health insurance industry gathering in San Diego aren’t very compelling. A handful of companies pitching “lifestyle” data and salespeople touting jargony phrases like “social determinants of health.”

But dig deeper and the implications of what they’re selling might give many patients pause: A future in which everything you do — the things you buy, the food you eat, the time you spend watching TV — may help determine how much you pay for health insurance.

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

Patient advocates warn that using unverified, error-prone “lifestyle” data to make medical assumptions could lead insurers to improperly price plans — for instance raising rates based on false information — or discriminate against anyone tagged as high cost. And, they say, the use of the data raises thorny questions that should be debated publicly, such as: Should a person’s rates be raised because algorithms say they are more likely to run up medical bills? Such questions would be moot in Europe, where a strict law took effect in May that bans trading in personal data.

This year, ProPublica and NPR are investigating the various tactics the health insurance industry uses to maximize its profits. Understanding these strategies is important because patients — through taxes, cash payments and insurance premiums — are the ones funding the entire health care system. Yet the industry’s bewildering web of strategies and inside deals often has little to do with patients’ needs. As the series’ first story showed, contrary to popular belief, lower bills aren’t health insurers’ top priority.

Inside the San Diego Convention Center last month, there were few qualms about the way insurance companies were mining Americans’ lives for information — or what they planned to do with the data.

The sprawling convention center was a balmy draw for one of America’s Health Insurance Plans’ marquee gatherings. Insurance executives and managers wandered through the exhibit hall, sampling chocolate-covered strawberries, champagne and other delectables designed to encourage deal-making.

Up front, the prime real estate belonged to the big guns in health data: The booths of Optum, IBM Watson Health and LexisNexis stretched toward the ceiling, with flat screen monitors and some comfy seating. (NPR collaborates with IBM Watson Health on national polls about consumer health topics.)

To understand the scope of what they were offering, consider Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. (UnitedHealth Group provides financial support to NPR.) The company says it uses the information to link patients’ medical outcomes and costs to details like their level of education, net worth, family structure and race. An Optum spokesman said the socioeconomic data is de-identified and is not used for pricing health plans.

Optum’s marketing materials also boast that it now has access to even more. In 2016, the company filed a patent application to gather what people share on platforms like Facebook and Twitter, and link this material to the person’s clinical and payment information. A company spokesman said in an email that the patent application never went anywhere. But the company’s current marketing materials say it combines claims and clinical information with social media interactions.

I had a lot of questions about this and first reached out to Optum in May, but the company didn’t connect me with any of its experts as promised. At the conference, Optum salespeople said they weren’t allowed to talk to me about how the company uses this information.

It isn’t hard to understand the appeal of all this data to insurers. Merging information from data brokers with people’s clinical and payment records is a no-brainer if you overlook potential patient concerns. Electronic medical records now make it easy for insurers to analyze massive amounts of information and combine it with the personal details scooped up by data brokers.

It also makes sense given the shifts in how providers are getting paid. Doctors and hospitals have typically been paid based on the quantity of care they provide. But the industry is moving toward paying them in lump sums for caring for a patient, or for an event, like a knee surgery. In those cases, the medical providers can profit more when patients stay healthy. More money at stake means more interest in the social factors that might affect a patient’s health.

Some insurance companies are already using socioeconomic data to help patients get appropriate care, such as programs to help patients with chronic diseases stay healthy. Studies show social and economic aspects of people’s lives play an important role in their health. Knowing these personal details can help them identify those who may need help paying for medication or help getting to the doctor.

But patient advocates are skeptical health insurers have altruistic designs on people’s personal information.

The industry has a history of boosting profits by signing up healthy people and finding ways to avoid sick people — called “cherry-picking” and “lemon-dropping,” experts say. Among the classic examples: A company was accused of putting its enrollment office on the third floor of a building without an elevator, so only healthy patients could make the trek to sign up. Another tried to appeal to spry seniors by holding square dances.

The Affordable Care Act prohibits insurers from denying people coverage based on pre-existing health conditions or charging sick people more for individual or small group plans. But experts said patients’ personal information could still be used for marketing, and to assess risks and determine the prices of certain plans. And the Trump administration is promoting short-term health plans, which do allow insurers to deny coverage to sick patients.

Robert Greenwald, faculty director of Harvard Law School’s Center for Health Law and Policy Innovation, said insurance companies still cherry-pick, but now they’re subtler. The center analyzes health insurance plans to see if they discriminate. He said insurers will do things like failing to include enough information about which drugs a plan covers — which pushes sick people who need specific medications elsewhere. Or they may change the things a plan covers, or how much a patient has to pay for a type of care, after a patient has enrolled. Or, Greenwald added, they might exclude or limit certain types of providers from their networks — like those who have skill caring for patients with HIV or hepatitis C.

If there were concerns that personal data might be used to cherry-pick or lemon-drop, they weren’t raised at the conference.

At the IBM Watson Health booth, Kevin Ruane, a senior consulting scientist, told me that the company surveys 80,000 Americans a year to assess lifestyle, attitudes and behaviors that could relate to health care. Participants are asked whether they trust their doctor, have financial problems, go online, or own a Fitbit, among similar questions. The responses of hundreds of adjacent households are analyzed together to identify social and economic factors for an area.

Ruane said he has used IBM Watson Health’s socioeconomic analysis to help insurance companies assess a potential market. The ACA increased the value of such assessments, experts say, because companies often don’t know the medical history of people seeking coverage. A region with too many sick people, or with patients who don’t take care of themselves, might not be worth the risk.

Ruane acknowledged that the information his company gathers may not be accurate for every person. “We talk to our clients and tell them to be careful about this,” he said. “Use it as a data insight. But it’s not necessarily a fact.”

In a separate conversation, a salesman from a different company joked about the potential for error. “God forbid you live on the wrong street these days,” he said. “You’re going to get lumped in with a lot of bad things.”

The LexisNexis booth was emblazoned with the slogan “Data. Insight. Action.” The company said it uses 442 non-medical personal attributes to predict a person’s medical costs. Its cache includes more than 78 billion records from more than 10,000 public and proprietary sources, including people’s cellphone numbers, criminal records, bankruptcies, property records, neighborhood safety and more. The information is used to predict patients’ health risks and costs in eight areas, including how often they are likely to visit emergency rooms, their total cost, their pharmacy costs, their motivation to stay healthy and their stress levels.

People who downsize their homes tend to have higher health care costs, the company says. As do those whose parents didn’t finish high school. Patients who own more valuable homes are less likely to land back in the hospital within 30 days of their discharge. The company says it has validated its scores against insurance claims and clinical data. But it won’t share its methods and hasn’t published the work in peer-reviewed journals.

McCulley, LexisNexis’ director of strategic solutions, said predictions made by the algorithms about patients are based on the combination of the personal attributes. He gave a hypothetical example: A high school dropout who had a recent income loss and doesn’t have a relative nearby might have higher than expected health costs.

But couldn’t that same type of person be healthy? I asked.

“Sure,” McCulley said, with no apparent dismay at the possibility that the predictions could be wrong.

McCulley and others at LexisNexis insist the scores are only used to help patients get the care they need and not to determine how much someone would pay for their health insurance. The company cited three different federal laws that restricted them and their clients from using the scores in that way. But privacy experts said none of the laws cited by the company bar the practice. The company backed off the assertions when I pointed out that the laws did not seem to apply.

LexisNexis officials also said the company’s contracts expressly prohibit using the analysis to help price insurance plans. They would not provide a contract. But I knew that in at least one instance a company was already testing whether the scores could be used as a pricing tool.

Before the conference, I’d seen a press release announcing that the largest health actuarial firm in the world, Milliman, was now using the LexisNexis scores. I tracked down Marcos Dachary, who works in business development for Milliman. Actuaries calculate health care risks and help set the price of premiums for insurers. I asked Dachary if Milliman was using the LexisNexis scores to price health plans and he said: “There could be an opportunity.”

The scores could allow an insurance company to assess the risks posed by individual patients and make adjustments to protect themselves from losses, he said. For example, he said, the company could raise premiums, or revise contracts with providers.

It’s too early to tell whether the LexisNexis scores will actually be useful for pricing, he said. But he was excited about the possibilities. “One thing about social determinants data — it piques your mind,” he said.

Dachary acknowledged the scores could also be used to discriminate. Others, he said, have raised that concern. As much as there could be positive potential, he said, “there could also be negative potential.”

It’s that negative potential that still bothers data analyst Erin Kaufman, who left the health insurance industry in January. The 35-year-old from Atlanta had earned her doctorate in public health because she wanted to help people, but one day at Aetna, her boss told her to work with a new data set.

To her surprise, the company had obtained personal information from a data broker on millions of Americans. The data contained each person’s habits and hobbies, like whether they owned a gun, and if so, what type, she said. It included whether they had magazine subscriptions, liked to ride bikes or run marathons. It had hundreds of personal details about each person.

The Aetna data team merged the data with the information it had on patients it insured. The goal was to see how people’s personal interests and hobbies might relate to their health care costs. But Kaufman said it felt wrong: The information about the people who knitted or crocheted made her think of her grandmother. And the details about individuals who liked camping made her think of herself. What business did the insurance company have looking at this information? “It was a dataset that really dug into our clients’ lives,” she said. “No one gave anyone permission to do this.”

In a statement, Aetna said it uses consumer marketing information to supplement its claims and clinical information. The combined data helps predict the risk of repeat emergency room visits or hospital admissions. The information is used to reach out to members and help them and plays no role in pricing plans or underwriting, the statement said.

Kaufman said she had concerns about the accuracy of drawing inferences about an individual’s health from an analysis of a group of people with similar traits. Health scores generated from arrest records, home ownership and similar material may be wrong, she said.

Pam Dixon, executive director of the World Privacy Forum, a nonprofit that advocates for privacy in the digital age, shares Kaufman’s concerns. She points to a study by the analytics company SAS, which worked in 2012 with an unnamed major health insurance company to predict a person’s health care costs using 1,500 data elements, including the investments and types of cars people owned.

The SAS study said higher health care costs could be predicted by looking at things like ethnicity, watching TV and mail order purchases.

“I find that enormously offensive as a list,” Dixon said. “This is not health data. This is inferred data.”

Data scientist Cathy O’Neil said drawing conclusions about health risks from such data could lead to a bias against some poor people. It would be easy to infer they are prone to costly illnesses based on their backgrounds and living conditions, said O’Neil, author of the book “Weapons of Math Destruction,” which looked at how algorithms can increase inequality. That could lead to poor people being charged more, making it harder for them to get the care they need, she said. Employers, she said, could even decide not to hire people with data points that could indicate high medical costs in the future.

O’Neil said the companies should also measure how the scores might discriminate against the poor, sick or minorities.

American policymakers could do more to protect people’s information, experts said. In the United States, companies can harvest personal data unless a specific law bans it, although California just passed legislation that could create restrictions, said William McGeveran, a professor at the University of Minnesota Law School. Europe, in contrast, passed a strict law called the General Data Protection Regulation, which went into effect in May.

“In Europe, data protection is a constitutional right,” McGeveran said.

Pasquale, the University of Maryland law professor, said health scores should be treated like credit scores. Federal law gives people the right to know their credit scores and how they’re calculated. If people are going to be rated by whether they listen to sad songs on Spotify or look up information about AIDS online, they should know, Pasquale said. “The risk of improper use is extremely high. And data scores are not properly vetted and validated and available for scrutiny.”

As I reported this story I wondered how the data vendors might be using my personal information to score my potential health costs. So, I filled out a request on the LexisNexis website for the company to send me some of the personal information it has on me. A week later a somewhat creepy, 182-page walk down memory lane arrived in the mail. Federal law only requires the company to provide a subset of the information it collected about me. So that’s all I got.

LexisNexis had captured details about my life going back 25 years, many that I’d forgotten. It had my phone numbers going back decades and my home addresses going back to my childhood in Golden, Colorado. Each location had a field to show whether the address was “high risk.” Mine were all blank. The company also collects records of any liens and criminal activity, which, thankfully, I didn’t have.

My report was boring, which isn’t a surprise. I’ve lived a middle-class life and grown up in good neighborhoods. But it made me wonder: What if I had lived in “high risk” neighborhoods? Could that ever be used by insurers to jack up my rates — or to avoid me altogether?

I wanted to see more. If LexisNexis had health risk scores on me, I wanted to see how they were calculated and, more importantly, whether they were accurate. But the company told me that if it had calculated my scores it would have done so on behalf of their client, my insurance company. So, I couldn’t have them.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Facebook logo Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows the users it identifies to accept or decline name labels for each photo. Each Facebook user can adjust their privacy settings to enable or disable the addition of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching and then suppress name labels for users who have declined or disabled them. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
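Facebook's matching system is proprietary, but the general technique the spokeswoman describes -- reducing each face to a numeric encoding, then comparing new encodings against a database of named ones -- can be sketched with the open-source face_recognition Python library. The image files, names, and tolerance below are illustrative assumptions, not Facebook's actual data or parameters.

    # Generic face matching with the open-source face_recognition library.
    # This is NOT Facebook's system; filenames and tolerance are examples,
    # and each photo is assumed to contain at least one detectable face.
    import face_recognition

    # Build a tiny database of named face encodings ("faceprints").
    known_names = ["Alice", "Bob"]
    known_encodings = [
        face_recognition.face_encodings(face_recognition.load_image_file("alice.jpg"))[0],
        face_recognition.face_encodings(face_recognition.load_image_file("bob.jpg"))[0],
    ]

    # Scan a newly uploaded photo and compare every face found in it
    # against the database of named encodings.
    uploaded = face_recognition.load_image_file("uploaded_photo.jpg")
    for encoding in face_recognition.face_encodings(uploaded):
        matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
        for name, matched in zip(known_names, matches):
            if matched:
                print(f"Face matched: {name}")

Note that an encoding must be computed for every face in the photo before any match can be ruled in or out -- which illustrates the critics' point that the scanning itself happens regardless of a user's setting.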

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember, Apple launched its iPhone X late in 2017 with the Face ID feature for users to unlock their phones. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps, you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a brick-and-mortar store -- just as websites present privacy and terms-of-use policies. Otherwise, there is no notice and no informed consent by shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


New Jersey to Suspend Prominent Psychologist for Failing to Protect Patient Privacy

[Editor's note: today's guest blog post, by reporters at ProPublica, explores privacy issues within the healthcare industry. The post is reprinted with permission.]

By Charles Ornstein, ProPublica

A prominent New Jersey psychologist is facing the suspension of his license after state officials concluded that he failed to keep details of mental health diagnoses and treatments confidential when he sued his patients over unpaid bills.

The state Board of Psychological Examiners last month upheld a decision by an administrative law judge that the psychologist, Barry Helfmann, “did not take reasonable measures to protect the confidentiality of his patients’ protected health information,” Lisa Coryell, a spokeswoman for the state attorney general’s office, said in an e-mail.

The administrative law judge recommended that Helfmann pay a fine and a share of the investigative costs. The board went further, ordering that Helfmann’s license be suspended for two years, Coryell wrote. During the first year, he will not be able to practice; during the second, he can practice, but only under supervision. Helfmann also will have to pay a $10,000 civil penalty, take an ethics course and reimburse the state for some of its investigative costs. The suspension is scheduled to begin in September.

New Jersey began to investigate Helfmann after a ProPublica article published in The New York Times in December 2015 that described the lawsuits and the information they contained. The allegations involved Helfmann’s patients as well as those of his colleagues at Short Hills Associates in Clinical Psychology, a New Jersey practice where he has been the managing partner.

Helfmann is a leader in his field, serving as president of the American Group Psychotherapy Association, and as a past president of the New Jersey Psychological Association.

ProPublica identified 24 court cases filed by Short Hills Associates from 2010 to 2014 over unpaid bills in which patients’ names, diagnoses and treatments were listed in documents. The defendants included lawyers, business people and a manager at a nonprofit. In cases involving patients who were minors, the lawsuits included children’s names and diagnoses.

The information was subsequently redacted from court records after a patient counter-sued Helfmann and his partners, the psychology group and the practice’s debt collection lawyers. The patient’s lawsuit was settled.

Helfmann has denied wrongdoing, saying his former debt collection lawyers were responsible for attaching patients’ information to the lawsuits. His current lawyer, Scott Piekarsky, said he intends to file an immediate appeal before the discipline takes effect.

"The discipline imposed is ‘so disproportionate as to be shocking to one’s sense of fairness’ under New Jersey case law," Piekarsky said in a statement.

Piekarsky also noted that the administrative law judge who heard the case found no need for any license suspension and raised questions about the credibility of the patient who sued Helfmann. "We feel this is a political decision due to Dr. Helfmann’s aggressive stance" in litigation, he said.

Helfmann sued the state of New Jersey and Joan Gelber, a senior deputy attorney general, claiming that he was not provided due process and equal protection under the law. He and Short Hills Associates sued his prior debt collection firm for legal malpractice. Those cases have been dismissed, though Helfmann has appealed.

Helfmann and Short Hills Associates also are suing the patient who sued him, as well as the man’s lawyer, claiming the patient and lawyer violated a confidential settlement agreement by talking to a ProPublica reporter and sharing information with a lawyer for the New Jersey attorney general’s office without providing advance notice. In court pleadings, the patient and his lawyer maintain that they did not breach the agreement. Helfmann brought all three of these lawsuits in state court in Union County.

Throughout his career, Helfmann has been an advocate for patient privacy, helping to push a state law limiting the information an insurance company can seek from a psychologist to determine the medical necessity of treatment. He also was a plaintiff in a lawsuit against two insurance companies and a New Jersey state commission, accusing them of requiring psychologists to turn over their treatment notes in order to get paid.

"It is apparent that upholding the ethical standards of his profession was very important to him," Carol Cohen, the administrative law judge, wrote. "Having said that, it appears that in the case of the information released to his attorney and eventually put into court papers, the respondent did not use due diligence in being sure that confidential information was not released and his patients were protected."


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Researchers Find Mobile Apps Can Easily Record Screenshots And Videos of Users' Activities

New academic research highlights how easy it is for mobile apps to both spy upon consumers and violate our privacy. During a recent study to determine whether or not smartphones record users' conversations, researchers at Northeastern University (NU) found:

"... that some companies were sending screenshots and videos of user phone activities to third parties. Although these privacy breaches appeared to be benign, they emphasized how easily a phone’s privacy window could be exploited for profit."

The NU researchers tested 17,260 of the most popular mobile apps running on smartphones using the Android operating system. About 9,000 of the 17,260 apps had the ability to take screenshots. The vulnerability: screenshot and video captures could easily be used to record users' keystrokes, passwords, and related sensitive information:

"This opening will almost certainly be used for malicious purposes," said Christo Wilson, another computer science professor on the research team. "It’s simple to install and collect this information. And what’s most disturbing is that this occurs with no notification to or permission by users."

The NU researchers found one app already recording video of users' screen activity (links added):

"That app was GoPuff, a fast-food delivery service, which sent the screenshots to Appsee, a data analytics firm for mobile devices. All this was done without the awareness of app users. [The researchers] emphasized that neither company appeared to have any nefarious intent. They said that web developers commonly use this type of information to debug their apps... GoPuff has changed its terms of service agreement to alert users that the company may take screenshots of their use patterns. Google issued a statement emphasizing that its policy requires developers to disclose to users how their information will be collected."

May? A brief review of the Appsee site seems to confirm that video recordings of the screens on app users' mobile devices are integral to the service:

"RECORDING: Watch every user action and understand exactly how they use your app, which problems they're experiencing, and how to fix them.​ See the app through your users' eyes to pinpoint usability, UX and performance issues... TOUCH HEAT MAPS: View aggregated touch heatmaps of all the gestures performed in each​ ​screen in your app.​ Discover user navigation and interaction preferences... REALTIME ANALYTICS & ALERTS:Get insightful analytics on user behavior without pre-defining any events. Obtain single-user and aggregate insights in real-time..."

Sounds like a version of "surveillance capitalism" to me. According to the Appsee site, a variety of companies use the service, including eBay, Samsung, Virgin airlines, The Weather Network, and several advertising networks. Plus, the Appsee Privacy Policy dated May 23, 2018 stated:

"The Appsee SDK allows Subscribers to record session replays of their end-users' use of Subscribers' mobile applications ("End User Data") and to upload such End User Data to Appsee’s secured cloud servers."

In this scenario, GoPuff is a subscriber and consumers using the GoPuff mobile app are end users. The Appsee SDK is software code embedded within the GoPuff mobile app. The researchers said that this vulnerability, "will not be closed until the phone companies redesign their operating systems..."

Data-analytics services like Appsee raise several issues. First, there seems to be little need for digital agencies to conduct traditional eye-tracking and usability test sessions, since companies can now record, upload and archive what, when, where, and how often users swipe and select in-app content. Before, users were invited to and paid for their participation in user testing sessions.

Second, this in-app tracking and data collection amounts to perpetual, unannounced user testing. Previously, companies have gotten into plenty of trouble with their customers by performing secret user testing; especially when the service varies from the standard, expected configuration and the policies (e.g., privacy, terms of service) don't disclose it. Nobody wants to be a lab rat or crash-test dummy.

Third, surveillance agencies within several governments must be thrilled to learn of these new in-app tracking and spy tools, if they aren't already using them. A reasonable assumption is that Appsee also provides data to law enforcement upon demand.

Fourth, two of the researchers at NU are undergraduate students. Another startling disclosure:

"Coming into this project, I didn’t think much about phone privacy and neither did my friends," said Elleen Pan, who is the first author on the paper. "This has definitely sparked my interest in research, and I will consider going back to graduate school."

Given the tsunami of data breaches, privacy legislation in Europe, and demands by law enforcement for tech firms to build "back door" hacks into their mobile devices and smartphones, it is alarming that some college students "don't think much about phone privacy." This means that Pan and her classmates probably haven't read the privacy and terms-of-service policies for the apps and sites they've used. Maybe they will now.

Let's hope so.

Consumers interested in GoPuff should closely read the service's privacy and Terms of Service policies, since the latter includes dispute resolution via binding arbitration and prevents class-action lawsuits.

Hopefully, future studies about privacy and mobile apps will explore further the findings by Pan and her co-researchers. Download the study titled, "Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications" (Adobe PDF) by Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes.


The Wireless Carrier With At Least 8 'Hidden Spy Hubs' Helping The NSA

AT&T logo During the late 1970s and 1980s, AT&T conducted an iconic “reach out and touch someone” advertising campaign to encourage consumers to call their friends, family, and classmates. Back then, it was old school -- landlines. The campaign ranked #80 on Ad Age's list of the 100 top ad campaigns from the last century.

Now, we learn a little more about the pervasive surveillance activities at AT&T facilities that help law enforcement reach out and touch persons. Yesterday, the Intercept reported:

"The NSA considers AT&T to be one of its most trusted partners and has lauded the company’s “extreme willingness to help.” It is a collaboration that dates back decades. Little known, however, is that its scope is not restricted to AT&T’s customers. According to the NSA’s documents, it values AT&T not only because it "has access to information that transits the nation," but also because it maintains unique relationships with other phone and internet providers. The NSA exploits these relationships for surveillance purposes, commandeering AT&T’s massive infrastructure and using it as a platform to covertly tap into communications processed by other companies.”

The new report describes in detail the activities at eight AT&T facilities in major cities across the United States. Consumers who use other branded wireless service providers are also affected:

"Because of AT&T’s position as one of the U.S.’s leading telecommunications companies, it has a large network that is frequently used by other providers to transport their customers’ data. Companies that “peer” with AT&T include the American telecommunications giants Sprint, Cogent Communications, and Level 3, as well as foreign companies such as Sweden’s Telia, India’s Tata Communications, Italy’s Telecom Italia, and Germany’s Deutsche Telekom."

It was five years ago this month that the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, the Guardian UK newspaper reported about a court order allowing the NSA to spy on U.S. citizens. The revelations continued, and by 2016 we'd learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences from broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, how the data collection programs included ordinary (e.g., innocent) citizens besides legal targets, and how most hi-tech and telecommunications companies assisted the government with its spy programs. We knew before that AT&T was probably the best collaborator, and now we know more about why.

Content vacuumed up during the surveillance includes consumers' phone calls, text messages, e-mail messages, and internet activity. The latest report by the Intercept also described:

"The messages that the NSA had unlawfully collected were swept up using a method of surveillance known as “upstream,” which the agency still deploys for other surveillance programs authorized under both Section 702 of FISA and Executive Order 12333. The upstream method involves tapping into communications as they are passing across internet networks – precisely the kind of electronic eavesdropping that appears to have taken place at the eight locations identified by The Intercept."



Supreme Court Ruling Requires Government To Obtain Search Warrants To Collect Users' Location Data

On Friday, the Supreme Court of the United States (SCOTUS) issued a decision which requires the government to obtain warrants in order to collect information from wireless carriers such as geo-location data. 9to5Mac reported that the court case resulted from:

"... a 2010 case of armed robberies in Detroit in which prosecutors used data from wireless carriers to make a conviction. In this case, lawyers had access to about 13,000 location data points. The sticking point has been whether access and use of data like this violates the Fourth Amendment. Apple, along with Google and Facebook had previously submitted a brief to the Supreme Court arguing for privacy protection..."

The Fourth Amendment in the U.S. Constitution states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

The New York Times reported:

"The 5-to-4 ruling will protect "deeply revealing" records associated with 400 million devices, the chief justice wrote. It did not matter, he wrote, that the records were in the hands of a third party. That aspect of the ruling was a significant break from earlier decisions. The Constitution must take account of vast technological changes, Chief Justice Roberts wrote, noting that digital data can provide a comprehensive, detailed — and intrusive — overview of private affairs that would have been impossible to imagine not long ago. The decision made exceptions for emergencies like bomb threats and child abductions..."

Background regarding the Fourth Amendment:

"In a pair of recent decisions, the Supreme Court expressed discomfort with allowing unlimited government access to digital data. In United States v. Jones, it limited the ability of the police to use GPS devices to track suspects’ movements. And in Riley v. California, it required a warrant to search cellphones. Chief Justice Roberts wrote that both decisions supported the result in the new case.

The Supreme Court's decision also discussed the historical use of the "third-party doctrine" by law enforcement:

"In 1979, for instance, in Smith v. Maryland, the Supreme Court ruled that a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his landline phone. The court reasoned that the suspect had voluntarily turned over that information to a third party: the phone company. Relying on the Smith decision’s “third-party doctrine,” federal appeals courts have said that government investigators seeking data from cellphone companies showing users’ movements do not require a warrant. But Chief Justice Roberts wrote that the doctrine is of limited use in the digital age. “While the third-party doctrine applies to telephone numbers and bank records, it is not clear whether its logic extends to the qualitatively different category of cell-site records,” he wrote."

The ruling also covered the Stored Communications Act, which requires:

"... prosecutors to go to court to obtain tracking data, but the showing they must make under the law is not probable cause, the standard for a warrant. Instead, they must demonstrate only that there were “specific and articulable facts showing that there are reasonable grounds to believe” that the records sought “are relevant and material to an ongoing criminal investigation.” That was insufficient, the court ruled. But Chief Justice Roberts emphasized the limits of the decision. It did not address real-time cell tower data, he wrote, “or call into question conventional surveillance techniques and tools, such as security cameras.” "

What else this Supreme Court decision might mean:

"The decision thus has implications for all kinds of personal information held by third parties, including email and text messages, internet searches, and bank and credit card records. But Chief Justice Roberts said the ruling had limits. "We hold only that a warrant is required in the rare case where the suspect has a legitimate privacy interest in records held by a third party," the chief justice wrote. The court’s four more liberal members — Justices Ruth Bader Ginsburg, Stephen G. Breyer, Sonia Sotomayor and Elena Kagan — joined his opinion."

Dissenting opinions by conservative Justices warned of new restrictions upon law enforcement's abilities and predicted further litigation. Breitbart News focused upon divisions within the Supreme Court and the dissenting Justices' opinions, rather than a comprehensive explanation of the majority's opinion and the law. Some conservatives say that President Trump will have an opportunity to appoint two Supreme Court Justices.

Albert Gidari, the Consulting Director of Privacy at the Stanford Law Center for Internet and Society, discussed the Court's ruling:

"What a Difference a Week Makes. The government sought seven days of records from the carrier; it got two days. The Court held that seven days or more was a search and required a warrant. So can the government just ask for 6 days with a subpoena or court order under the Stored Communications Act? Here’s what Justice Roberts said in footnote 3: “[W]e need not decide whether there is a limited period for which the Government may obtain an individual’s historical CSLI free from Fourth Amendment scrutiny, and if so, how long that period might be. It is sufficient for our purposes today to hold that accessing seven days of CSLI constitutes a Fourth Amendment search.” You can bet that will be litigated in the coming years, but the real question is what will mobile carriers do in the meantime... Where You Walk and Perhaps Your Mere Presence in Public Spaces Can Be Private. The Court said this clearly: “A person does not surrender all Fourth Amendment protection by venturing into the public sphere. To the contrary, “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”” This is the most important part of the Opinion in my view. It’s potential impact is much broader than the location record at issue in the case..."

Mr. Gidari's essay explored several more issues:

  • Does the Decision Really Make a Difference to Law Enforcement?
  • Are All Business Records in the Hands of Third Parties Now Protected?
  • Does It Matter Whether You Voluntarily Give the Data to a Third Party?


Most people carry their smartphones with them 24/7 and everywhere they go. Hence, the geo-location data trail contains unique and very personal movements: where and whom you visit, how often and long you visit, who else (e.g., their smartphones) is nearby, and what you do (e.g., calls, mobile apps) at certain locations. The Supreme Court, or at least a majority of its Justices, seem to recognize and value this.

What are your opinions of the Supreme Court ruling?


Lawmakers In California Cave To Industry Lobbying, And Backtrack With Weakened Net Neutrality Bill

After the U.S. Federal Communications Commission (FCC) acted last year to repeal net neutrality rules, those protections officially expired on June 11th. Meanwhile, legislators in California have acted to protect their state's residents. In January, State Senator Scott Wiener introduced a proposed bill, which the California Senate passed three weeks ago.

Since then, some politicians have countered with a modified bill lacking strong protections. C/Net reported:

"The vote on Wednesday in a California Assembly committee hearing advanced a bill that implements some net neutrality protections, but it scaled back all the measures of the bill that had gone beyond the rules outlined in the Federal Communications Commission's 2015 regulation, which was officially taken off the books by the Trump Administration's commission last week. In a surprise move, the vote happened before the hearing officially started,..."

Wiener's original bill was considered the "gold standard" of net neutrality protections for consumers because:

"... it went beyond the FCC's 2015 net neutrality "bright line" rules by including provisions like a ban on zero-rating, a business practice that allows broadband providers like AT&T to exempt their own services from their monthly wireless data caps, while services from competitors are counted against those limits. The result is a market controlled by internet service providers like AT&T, who can shut out the competition by creating an economic disadvantage for those competitors through its wireless service plans."

State Senator Wiener summarized the modified legislation:

"It is, with the amendments, a fake net neutrality bill..."

A key supporter of the modified, weak bill was Assemblyman Miguel Santiago, a Democrat from Los Angeles. Motherboard reported:

"Spearheading the rushed dismantling of the promising law was Committee Chair Miguel Santiago, a routine recipient of AT&T campaign contributions. Santiago’s office failed to respond to numerous requests for comment from Motherboard and numerous other media outlets... Weiner told the San Francisco Chronicle that the AT&T fueled “evisceration” of his proposal was “decidedly unfair.” But that’s historically how AT&T, a company with an almost comical amount of control over state legislatures, tends to operate. The company has so much power in many states, it’s frequently allowed to quite literally write terrible state telecom law..."

Supporters of this weakened bill either forgot or ignored the results from a December 2017 study of 1,077 voters. Most consumers want net neutrality protections:

Do you favor or oppose the proposal to give ISPs the freedom to: a) provide websites the option to give their visitors the ability to download material at a higher speed, for a fee, while providing a slower speed for other websites; b) block access to certain websites; and c) charge their customers an extra fee to gain access to certain websites?
Group          Favor    Opposed    Refused/Don't Know
National       15.5%    82.9%      1.6%
Republicans    21.0%    75.4%      3.6%
Democrats      11.0%    88.5%      0.5%
Independents   14.0%    85.9%      0.1%

Why would politicians pursue weak net neutrality bills with few protections, while constituents want those protections? They are doing the bidding of the corporate internet service providers (ISPs) at the expense of their constituents. Profits before people. These politicians promote the freedom for ISPs to do as they please while restricting consumers' freedoms to use the bandwidth they've purchased however they please.

Broadcasting and Cable reported:

"These California democrats will go down in history as among the worst corporate shills that have ever held elected office," said Evan Greer of net neutrality activist group Fight for the Future. "Californians should rise up and demand that at their Assembly members represent them. The actions of this committee are an attack not just on net neutrality, but on our democracy.” According to Greer, the vote passed 8-0, with Democrats joining Republicans to amend the bill."

According to C/Net, more than 24 states are considering net neutrality legislation to protect their residents:

"... New York, Connecticut, and Maryland, are also considering legislation to reinstate net neutrality rules. Oregon and Washington state have already signed their own net neutrality legislation into law. Governors in several states, including New Jersey and Montana, have signed executive orders requiring ISPs that do business with the state adhere to net neutrality principles."

So, we have AT&T (plus politicians more interested in corporate donors than their constituents, the FCC, President Trump, and probably other telecommunications companies) to thank for this mess. What do you think?


Apple To Close Security Hole Law Enforcement Frequently Used To Access iPhones

You may remember. In 2016, the U.S. Department of Justice attempted to force Apple Computer to build a back door into its devices so law enforcement could access suspects' iPhones. After Apple refused, the government found a vendor to do the hacking for them. In 2017, multiple espionage campaigns targeted Apple devices with new malware.

Now, we learn a future Apple operating system (iOS) software update will close a security hole frequently used by law enforcement. Reuters reported that the future iOS update will include default settings to terminate communications through the USB port when the device hasn't been unlocked within the past hour. Reportedly, that change may reduce access by 90 percent.

Kudos to the executives at Apple for keeping customers' privacy foremost.


New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he writes.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many other social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."
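To make the mechanics concrete, here is a short Python sketch (mine, not the EFF's) that unwraps the example link shim quoted above by extracting its "u" query parameter:

    # Unwrapping a Facebook link shim: the real destination travels in the
    # "u" query parameter of the l.facebook.com wrapper URL.
    from urllib.parse import parse_qs, urlparse

    shim = ("https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb"
            "&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4"
            "wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU")

    params = parse_qs(urlparse(shim).query)
    print(params["u"][0])  # prints: https://eff.org/pb

The redirect itself, plus the background request fired when you click, is what gives Facebook its record of the click.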

Lovely. And, Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.