590 posts categorized "Privacy"

Tips For Parents To Teach Their Children Online Safety

Today's children often use mobile devices at very young ages... four, five, or six years of age. And they don't know anything about online dangers: computer viruses, stalking, cyber-bullying, identity theft, phishing scams, ransomware, and more. Nor do they know how to read terms-of-use and privacy policies. It is parents' responsibility to teach them.

NordVPN, a maker of privacy software, offers several tips to help parents teach their children about online safety:

"1. Set an example: If you want your kid to be careful and responsible online, you should start with yourself."

Children watch their parents. If you practice good online safety habits, they will learn from watching you. And:

"2. Start talking to your kid early and do it often: If your child already knows how to play a video on Youtube or is able to download a gaming app without your help, they also should learn how to do it safely. Therefore, it’s important to start explaining the basics of privacy and cybersecurity at an early age."

So, long before having the "sex talk" with your children, parents should have the online safety talk. Developing good online safety habits at a young age will help children throughout their lives, especially as adults:

"3. Explain why safe behavior matters: Give relatable examples of what personal information is – your address, social security number, phone number, account credentials, and stress why you can never share this information with strangers."

You wouldn't give this information to a stranger on a city street. The same applies online. That also means discussing social media:

"4. Social media and messaging: a) don’t accept friend requests from people you don’t know; b) never send your pictures to strangers; c) make sure only your friends can see what you post on Facebook; d) turn on timeline review to check posts you are tagged in before they appear on your Facebook timeline; e) if someone asks you for some personal information, always tell your parents; f) don’t share too much on your profile (e.g., home address, phone number, current location); and g) don’t use your social media logins to authorize apps."

These are the basics. Read the entire list of online safety tips for parents by NordVPN.


Facebook To Remove Onavo VPN App From Apple App Store

Not all Virtual Private Network (VPN) software is created equal. Some do a better job at protecting your privacy than others. Mashable reported that Facebook:

"... plans to remove its Onavo VPN app from the App Store after Apple warned the company that the app was in violation of its policies governing data gathering... For those blissfully unaware, Onavo sold itself as a virtual private network that people could run "to take the worry out of using smartphones and tablets." In reality, Facebook used data about users' internet activity collected by the app to inform acquisitions and product decisions. Essentially, Onavo allowed Facebook to run market research on you and your phone, 24/7. It was spyware, dressed up and neatly packaged with a Facebook-blue bow. Data gleaned from the app, notes the Wall Street Journal, reportedly played into the social media giant's decision to start building a rival to the Houseparty app. Oh, and its decision to buy WhatsApp."

Thanks, Apple! We've all heard of the #FakeNews hashtag on social media. Yes, there is a #FakeVPN hashtag, too. So, buyer beware... online user beware.


Whirlpool's Online Product Registration: Confidentiality and Privacy Concerns

Earlier this month, my wife and I relocated to a different city within the same state to live closer to our new, 14-month-old grandson. During the move, we bought new home appliances -- a clothes washer and dryer, both made by Whirlpool -- which prompted today's blog post.

The packaging and operating instructions included two registration postcards with the model and serial numbers printed on the forms. Nothing controversial about that. The registration cards also listed "Other Easy Ways To Register," including registration websites for both the United States and Canada. I tried the online registration to see what improvements or benefits Whirlpool's United States registration site might offer over the old-school snail-mail method, besides speed.

The landing page includes a form for the customer's contact information, product purchased information, and future purchase plans. Pretty standard stuff. Nothing alarming there. Near the bottom of the form and just above the "Complete Registration" button are links to Whirlpool's Terms & Conditions and Privacy policies. I read both and found some surprises.

First, the site uses inconsistent nomenclature: two different policy titles. The link says "Terms & Conditions" while the title of the actual policy page states, "Terms Of Use." Which is it? Inconsistent nomenclature can confuse users. Not good. Come on, Whirlpool! This is not hard. Good website usability includes consistent use of the same page title, so users know where they are going when they select a link, and can confirm they've arrived at the expected destination.

Second, the Terms Of Use policy page (well, I had to pick one title so it would be clear for you) lacks a date. This is confusing, making it difficult, if not impossible, for consumers to know and reference the exact document they read, and to determine what, if anything, changed since the prior version. Not good. Come on, Whirlpool! Add a publication date. It's not hard.

Third, the Terms Of Use policy contained this clause:

"Whirlpool Corporation welcomes your submissions; however, any information submitted, other than your personal information (for example, your name and e-mail address), to Whirlpool Corporation through this site is the exclusive property of Whirlpool Corporation and is considered NOT to be confidential. Whirlpool Corporation does not receive the submission in confidence or under any confidential or fiduciary relationship. Whirlpool Corporation may use the submission for any purpose without restriction or compensation."

So, the Terms Of Use policy is both vague and clear at the same time. It is vague because it doesn't list the exact data elements considered "personal information." Not good. This leaves consumers to guess. The policy cites only two data elements as examples. What about the rest? Are all confidential, or only some? And if some, which ones? Here's the list I consider confidential: name, street address, country, phone number, e-mail address, IP address, device type, device model, device operating system, payment card information, billing address, and online credentials (should I create a profile on the Whirlpool site). Come on, Whirlpool! Get it together and provide the complete list of data elements you consider "personal information." It's not hard.

Fourth, the Terms Of Use policy is also clear, in one sense: the sentences quoted above make Whirlpool's intentions plain. Submissions to the site other than "personal information" are not confidential, and Whirlpool can do with them whatever it wants. And since the policy doesn't define which data elements are personal, consumers must assume that anything beyond a name and e-mail address could be treated as non-confidential. Not good.

Next, I read Whirlpool's Privacy policy, hoping it would clarify things. Thankfully, there was a little good news. First, the Privacy policy listed a date: May 31, 2018. Second, though, more inconsistent site nomenclature: the page-bottom links across the site say "Privacy Policy" while the policy page title says "Privacy Statement." I selected the "Expand All" button to view the entire policy. Third, Whirlpool's Privacy Statement listed the items considered personal information:

"- Your contact information, such as your name, email address, mailing address, and phone number
- Your billing information, such as your credit card number and billing address
- Your Whirlpool account information, including your user name, account number, and a password
- Your product and ownership information
- Your preferences, such as product wish lists, order history, and marketing preferences"

This list is a good start. A simple link to this section from the Terms Of Use policy would do wonders to clarify things. However, Whirlpool also collects other key data, which it collects and trades more freely than "personal information." The Privacy Statement contains this clause:

"Whirlpool and its business partners and service providers may use a variety of technologies that automatically or passively collect information about how you interact with our Websites ("Usage Information"). Usage Information may include: (i) your IP address, which is a unique set of numbers assigned to your computer by your Internet Service Provider (ISP) (which, depending on your ISP, may be a different number every time you connect to the Internet); (ii) the type of browser and operating system you use; and (iii) other information about your online session, such as the URL you came from to get to our Websites and the date and time you visited our Websites."

And, the Privacy Statement mentions the use of several online tracking technologies:

"We use Local Shared Objects (LSOs) such as HTML5 or Flash on our Websites to store content information and preferences. Third parties with whom we partner to provide certain features on our Websites or to display advertising based upon your web browsing activity use LSOs such as HTML5 or Flash to collect and store information... Web beacons are tiny electronic image files that can be embedded within a web page or included in an e-mail message, and are usually invisible to the human eye. When we use web beacons within our web pages, the web beacons (also known as “clear GIFs” or “tracking pixels”) may tell us such things as: how many people are coming to our Websites, whether they are one-time or repeat visitors, which pages they viewed and for how long, how well certain online advertising campaigns are converting, and other similar Website usage data. When used in our e-mail communications, web beacons can tell us the time an e-mail was opened, if and how many times it was forwarded, and what links users click on from within the e- mail message."

While the "EU-US Privacy Shield" section of the privacy policy lists Whirlpool's European subsidiaries, and contains a Privacy Shield link to an external site listing the companies that are probably some of Whirlpool's service and advertising partners, the privacy policy really does not disclose all of the "third parties," "business partners," "service vendors," advertising partners, and affiliates Whirlpool shares data with. Consumers are left in the dark.

Last, the "Your Rights: Choice & Access" section of the privacy policy mentions the opt-out mechanism for consumers. While consumers can opt-out or cancel receiving marketing (e.g., promotional) messaging from Whirlpool, you can't opt-out of the data collection and archival. So, choice is limited.

Given this and the above concerns, I abandoned the product registration form. Yep. Didn't complete it. Maybe I will in the future, after Whirlpool fixes things. Perhaps most importantly, today's blog post is a reminder for all consumers: always read companies' privacy and terms-of-use policies. Always. You never know what you'll find that is irksome. And, if you don't know how to read online policies, this blog has some tips and suggestions.


Keep An Eye On Facebook's Moves To Expand Its Collection Of Financial Data About Its Users

On Monday, the Wall Street Journal reported that the social media giant had approached several major banks about sharing their detailed financial information about consumers in order "to boost user engagement." Reportedly, Facebook approached JPMorgan Chase, Wells Fargo, Citigroup, and U.S. Bancorp. And the detailed financial information sought included debit/credit/prepaid card transactions and checking account balances.

The Reuters news service also reported on the talks. The Reuters story mentioned the above banks, plus PayPal and American Express. Then, in a reply, Facebook said that the Wall Street Journal news report was wrong. TechCrunch reported:

"Facebook spokesperson Elisabeth Diana tells TechCrunch it’s not asking for credit card transaction data from banks and it’s not interested in building a dedicated banking feature where you could interact with your accounts. It also says its work with banks isn’t to gather data to power ad targeting, or even personalize content... Facebook already lets Citibank customers in Singapore connect their accounts so they can ping their bank’s Messenger chatbot to check their balance, report fraud or get customer service’s help if they’re locked out of their account... That chatbot integration, which has no humans on the other end to limit privacy risks, was announced last year and launched this March. Facebook works with PayPal in more than 40 countries to let users get receipts via Messenger for their purchases. Expansions of these partnerships to more financial services providers could boost usage of Messenger by increasing its convenience — and make it more of a centralized utility akin to China’s WeChat."

There's plenty in the TechCrunch story. Reportedly, Diana's statement said that banks approached Facebook, and that it already partners:

"... with banks and credit card companies to offer services like customer chat or account management. Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates... The idea is that messaging with a bank can be better than waiting on hold over the phone – and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences – not for advertising or anything else. A critical part of these partnerships is keeping people’s information safe and secure."

What to make of this? First, it really doesn't matter who approached whom. There's plenty of history. Way back in 2012, a German credit reporting agency approached Facebook. So, the financial sector is fully aware of the valuable data collected by Facebook.

Second, users doing business on the platform have already given Facebook permission to collect transaction data. Third, while Facebook's reply was about its users generally, its statement said "no" but sounded more like a "yes." Why? Basically, "account linking," with its convenience of purchase notifications, is the hook: the way into collecting users' financial transaction data. Existing practices, such as fitness apps and music sharing, show how "account linking" is already used for data collection. Whatever users share on the platform, Facebook can collect.

Fourth, the push to collect more banking data appears at best poorly timed, and at worst -- arrogant. Facebook is still trying to recover and regain users' trust after 87 million persons were affected by the massive data scandal involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement against tech companies like Google and Facebook. Facebook has stumbled as its screening to identify political ads has incorrectly flagged news sites. And CEO Mark Zuckerberg didn't help matters with his bumbling comments while failing to explain his company's stumbles in identifying and preventing fake news.

Gary Cohn, President Donald Trump's former chief economic adviser, sharply criticized social media companies, including Facebook, for allowing fake news:

"In 2008 Facebook was one of those companies that was a big platform to criticize banks, they were very out front of criticizing banks for not being responsible citizens. I think banks were more responsible citizens in 2008 than some of the social media companies are today."

So, it seems wise to keep an eye on Facebook as it attempts to expand its collection of consumers' financial information. Fifth, banks and banking executives bear some responsibility, too. A guest post on Forbes explained:

"Whether this [banking] partnership pans or not, the Facebook plans are a reminder that banks sit on mountains of wealth much more valuable than money. Because of the speed at which tech giants move, banks must now make sure their clients agree on who owns their data, consent to the use of them, and understand with who they are shared. For that, it is now or never... In the financial industry, trust between a client and his provider is of primary importance. You can’t sell a customer’s banking data in the same way you sell his or her internet surfing behavior. Finance executives understand this: they even see the appropriate use of customer data as critical to financial stability. It is now or never to define these principles on the use of customer data... It’s why we believe new binding guidelines such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act are welcome, even if they have room for improvement... A report by the US Treasury published earlier this week called on Congress to enact a federal data security and breach notification law to protect consumer financial data. The principles outlined above can serve as guidance to lawmakers drafting legislation, and bank executives considering how to respond to advances by Facebook and other big techs..."

Consumers should control their data -- especially financial data. If those rules are not put in place, then consumers have truly lost control of the sensitive personal and financial information that describes them. What are your opinions?


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, the facial recognition software by Amazon, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes about politicians aside, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

A list of the 28 misidentified Congressional legislators accompanied the ACLU study. With 535 members of Congress, the implied false-match rate was 5.23 percent (28 of 535). On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer at Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.1 These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon performed on Rekognition, with details by race, gender, and age;
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition;
  • The list of all law enforcement agencies which have used or currently use Rekognition;
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices;
  • A description of the protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons;
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA);
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses; and
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. " Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U. had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend 95 percent similarity scores, but the public does not know whether police departments actually use the higher setting, or use it consistently across all types of criminal investigations. Plus, the CPT report cast doubt on the human "backstop" intervention on which Amazon's reply seems to rely heavily.
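For readers who want to see where that threshold lives: Rekognition's face-comparison API exposes it as a parameter, and the calling agency, not Amazon, picks the value. A minimal sketch using the boto3 library follows; the image file names are placeholders, and real use requires AWS credentials.

```python
# A minimal sketch of calling Amazon Rekognition's CompareFaces API with
# boto3, showing where the similarity threshold is set. File names are
# placeholders; a real deployment needs AWS credentials configured.
import boto3

client = boto3.client("rekognition")

with open("unknown_face.jpg", "rb") as source, open("mugshot.jpg", "rb") as target:
    response = client.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        # The ACLU test used the default of 80; Amazon says it recommends 95
        # or higher for law enforcement. The caller chooses the value.
        SimilarityThreshold=95,
    )

for match in response["FaceMatches"]:
    print(f"Match with similarity {match['Similarity']:.1f}%")
```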

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

[Editor's note: today's guest post, by reporters at ProPublica, explores privacy and data collection issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

To an outsider, the fancy booths at last month’s health insurance industry gathering in San Diego aren’t very compelling. A handful of companies pitching “lifestyle” data and salespeople touting jargony phrases like “social determinants of health.”

But dig deeper and the implications of what they’re selling might give many patients pause: A future in which everything you do — the things you buy, the food you eat, the time you spend watching TV — may help determine how much you pay for health insurance.

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

Patient advocates warn that using unverified, error-prone “lifestyle” data to make medical assumptions could lead insurers to improperly price plans — for instance raising rates based on false information — or discriminate against anyone tagged as high cost. And, they say, the use of the data raises thorny questions that should be debated publicly, such as: Should a person’s rates be raised because algorithms say they are more likely to run up medical bills? Such questions would be moot in Europe, where a strict law took effect in May that bans trading in personal data.

This year, ProPublica and NPR are investigating the various tactics the health insurance industry uses to maximize its profits. Understanding these strategies is important because patients — through taxes, cash payments and insurance premiums — are the ones funding the entire health care system. Yet the industry’s bewildering web of strategies and inside deals often have little to do with patients’ needs. As the series’ first story showed, contrary to popular belief, lower bills aren’t health insurers’ top priority.

Inside the San Diego Convention Center last month, there were few qualms about the way insurance companies were mining Americans’ lives for information — or what they planned to do with the data.

The sprawling convention center was a balmy draw for one of America’s Health Insurance Plans’ marquee gatherings. Insurance executives and managers wandered through the exhibit hall, sampling chocolate-covered strawberries, champagne and other delectables designed to encourage deal-making.

Up front, the prime real estate belonged to the big guns in health data: The booths of Optum, IBM Watson Health and LexisNexis stretched toward the ceiling, with flat screen monitors and some comfy seating. (NPR collaborates with IBM Watson Health on national polls about consumer health topics.)

To understand the scope of what they were offering, consider Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. (UnitedHealth Group provides financial support to NPR.) The company says it uses the information to link patients’ medical outcomes and costs to details like their level of education, net worth, family structure and race. An Optum spokesman said the socioeconomic data is de-identified and is not used for pricing health plans.

Optum’s marketing materials also boast that it now has access to even more. In 2016, the company filed a patent application to gather what people share on platforms like Facebook and Twitter, and link this material to the person’s clinical and payment information. A company spokesman said in an email that the patent application never went anywhere. But the company’s current marketing materials say it combines claims and clinical information with social media interactions.

I had a lot of questions about this and first reached out to Optum in May, but the company didn’t connect me with any of its experts as promised. At the conference, Optum salespeople said they weren’t allowed to talk to me about how the company uses this information.

It isn’t hard to understand the appeal of all this data to insurers. Merging information from data brokers with people’s clinical and payment records is a no-brainer if you overlook potential patient concerns. Electronic medical records now make it easy for insurers to analyze massive amounts of information and combine it with the personal details scooped up by data brokers.

It also makes sense given the shifts in how providers are getting paid. Doctors and hospitals have typically been paid based on the quantity of care they provide. But the industry is moving toward paying them in lump sums for caring for a patient, or for an event, like a knee surgery. In those cases, the medical providers can profit more when patients stay healthy. More money at stake means more interest in the social factors that might affect a patient’s health.

Some insurance companies are already using socioeconomic data to help patients get appropriate care, such as programs to help patients with chronic diseases stay healthy. Studies show social and economic aspects of people’s lives play an important role in their health. Knowing these personal details can help them identify those who may need help paying for medication or help getting to the doctor.

But patient advocates are skeptical health insurers have altruistic designs on people’s personal information.

The industry has a history of boosting profits by signing up healthy people and finding ways to avoid sick people — called “cherry-picking” and “lemon-dropping,” experts say. Among the classic examples: A company was accused of putting its enrollment office on the third floor of a building without an elevator, so only healthy patients could make the trek to sign up. Another tried to appeal to spry seniors by holding square dances.

The Affordable Care Act prohibits insurers from denying people coverage based on pre-existing health conditions or charging sick people more for individual or small group plans. But experts said patients’ personal information could still be used for marketing, and to assess risks and determine the prices of certain plans. And the Trump administration is promoting short-term health plans, which do allow insurers to deny coverage to sick patients.

Robert Greenwald, faculty director of Harvard Law School’s Center for Health Law and Policy Innovation, said insurance companies still cherry-pick, but now they’re subtler. The center analyzes health insurance plans to see if they discriminate. He said insurers will do things like failing to include enough information about which drugs a plan covers — which pushes sick people who need specific medications elsewhere. Or they may change the things a plan covers, or how much a patient has to pay for a type of care, after a patient has enrolled. Or, Greenwald added, they might exclude or limit certain types of providers from their networks — like those who have skill caring for patients with HIV or hepatitis C.

If there were concerns that personal data might be used to cherry-pick or lemon-drop, they weren’t raised at the conference.

At the IBM Watson Health booth, Kevin Ruane, a senior consulting scientist, told me that the company surveys 80,000 Americans a year to assess lifestyle, attitudes and behaviors that could relate to health care. Participants are asked whether they trust their doctor, have financial problems, go online, or own a Fitbit and similar questions. The responses of hundreds of adjacent households are analyzed together to identify social and economic factors for an area.

Ruane said he has used IBM Watson Health’s socioeconomic analysis to help insurance companies assess a potential market. The ACA increased the value of such assessments, experts say, because companies often don’t know the medical history of people seeking coverage. A region with too many sick people, or with patients who don’t take care of themselves, might not be worth the risk.

Ruane acknowledged that the information his company gathers may not be accurate for every person. “We talk to our clients and tell them to be careful about this,” he said. “Use it as a data insight. But it’s not necessarily a fact.”

In a separate conversation, a salesman from a different company joked about the potential for error. “God forbid you live on the wrong street these days,” he said. “You’re going to get lumped in with a lot of bad things.”

The LexisNexis booth was emblazoned with the slogan “Data. Insight. Action.” The company said it uses 442 non-medical personal attributes to predict a person’s medical costs. Its cache includes more than 78 billion records from more than 10,000 public and proprietary sources, including people’s cellphone numbers, criminal records, bankruptcies, property records, neighborhood safety and more. The information is used to predict patients’ health risks and costs in eight areas, including how often they are likely to visit emergency rooms, their total cost, their pharmacy costs, their motivation to stay healthy and their stress levels.

People who downsize their homes tend to have higher health care costs, the company says. As do those whose parents didn’t finish high school. Patients who own more valuable homes are less likely to land back in the hospital within 30 days of their discharge. The company says it has validated its scores against insurance claims and clinical data. But it won’t share its methods and hasn’t published the work in peer-reviewed journals.

McCulley, LexisNexis’ director of strategic solutions, said predictions made by the algorithms about patients are based on the combination of the personal attributes. He gave a hypothetical example: A high school dropout who had a recent income loss and doesn’t have a relative nearby might have higher than expected health costs.

But couldn’t that same type of person be healthy? I asked.

“Sure,” McCulley said, with no apparent dismay at the possibility that the predictions could be wrong.

McCulley and others at LexisNexis insist the scores are only used to help patients get the care they need and not to determine how much someone would pay for their health insurance. The company cited three different federal laws that restricted them and their clients from using the scores in that way. But privacy experts said none of the laws cited by the company bar the practice. The company backed off the assertions when I pointed out that the laws did not seem to apply.

LexisNexis officials also said the company’s contracts expressly prohibit using the analysis to help price insurance plans. They would not provide a contract. But I knew that in at least one instance a company was already testing whether the scores could be used as a pricing tool.

Before the conference, I’d seen a press release announcing that the largest health actuarial firm in the world, Milliman, was now using the LexisNexis scores. I tracked down Marcos Dachary, who works in business development for Milliman. Actuaries calculate health care risks and help set the price of premiums for insurers. I asked Dachary if Milliman was using the LexisNexis scores to price health plans and he said: “There could be an opportunity.”

The scores could allow an insurance company to assess the risks posed by individual patients and make adjustments to protect themselves from losses, he said. For example, he said, the company could raise premiums, or revise contracts with providers.

It’s too early to tell whether the LexisNexis scores will actually be useful for pricing, he said. But he was excited about the possibilities. “One thing about social determinants data — it piques your mind,” he said.

Dachary acknowledged the scores could also be used to discriminate. Others, he said, have raised that concern. As much as there could be positive potential, he said, “there could also be negative potential.”

It’s that negative potential that still bothers data analyst Erin Kaufman, who left the health insurance industry in January. The 35-year-old from Atlanta had earned her doctorate in public health because she wanted to help people, but one day at Aetna, her boss told her to work with a new data set.

To her surprise, the company had obtained personal information from a data broker on millions of Americans. The data contained each person’s habits and hobbies, like whether they owned a gun, and if so, what type, she said. It included whether they had magazine subscriptions, liked to ride bikes or run marathons. It had hundreds of personal details about each person.

The Aetna data team merged the data with the information it had on patients it insured. The goal was to see how people’s personal interests and hobbies might relate to their health care costs. But Kaufman said it felt wrong: The information about the people who knitted or crocheted made her think of her grandmother. And the details about individuals who liked camping made her think of herself. What business did the insurance company have looking at this information? “It was a dataset that really dug into our clients’ lives,” she said. “No one gave anyone permission to do this.”

In a statement, Aetna said it uses consumer marketing information to supplement its claims and clinical information. The combined data helps predict the risk of repeat emergency room visits or hospital admissions. The information is used to reach out to members and help them and plays no role in pricing plans or underwriting, the statement said.

Kaufman said she had concerns about the accuracy of drawing inferences about an individual’s health from an analysis of a group of people with similar traits. Health scores generated from arrest records, home ownership and similar material may be wrong, she said.

Pam Dixon, executive director of the World Privacy Forum, a nonprofit that advocates for privacy in the digital age, shares Kaufman’s concerns. She points to a study by the analytics company SAS, which worked in 2012 with an unnamed major health insurance company to predict a person’s health care costs using 1,500 data elements, including the investments and types of cars people owned.

The SAS study said higher health care costs could be predicted by looking at things like ethnicity, watching TV and mail order purchases.

“I find that enormously offensive as a list,” Dixon said. “This is not health data. This is inferred data.”

Data scientist Cathy O’Neil said drawing conclusions about health risks on such data could lead to a bias against some poor people. It would be easy to infer they are prone to costly illnesses based on their backgrounds and living conditions, said O’Neil, author of the book “Weapons of Math Destruction,” which looked at how algorithms can increase inequality. That could lead to poor people being charged more, making it harder for them to get the care they need, she said. Employers, she said, could even decide not to hire people with data points that could indicate high medical costs in the future.

O’Neil said the companies should also measure how the scores might discriminate against the poor, sick or minorities.

American policymakers could do more to protect people’s information, experts said. In the United States, companies can harvest personal data unless a specific law bans it, although California just passed legislation that could create restrictions, said William McGeveran, a professor at the University of Minnesota Law School. Europe, in contrast, passed a strict law called the General Data Protection Regulation, which went into effect in May.

“In Europe, data protection is a constitutional right,” McGeveran said.

Pasquale, the University of Maryland law professor, said health scores should be treated like credit scores. Federal law gives people the right to know their credit scores and how they’re calculated. If people are going to be rated by whether they listen to sad songs on Spotify or look up information about AIDS online, they should know, Pasquale said. “The risk of improper use is extremely high. And data scores are not properly vetted and validated and available for scrutiny.”

As I reported this story I wondered how the data vendors might be using my personal information to score my potential health costs. So, I filled out a request on the LexisNexis website for the company to send me some of the personal information it has on me. A week later a somewhat creepy, 182-page walk down memory lane arrived in the mail. Federal law only requires the company to provide a subset of the information it collected about me. So that’s all I got.

LexisNexis had captured details about my life going back 25 years, many that I’d forgotten. It had my phone numbers going back decades and my home addresses going back to my childhood in Golden, Colorado. Each location had a field to show whether the address was “high risk.” Mine were all blank. The company also collects records of any liens and criminal activity, which, thankfully, I didn’t have.

My report was boring, which isn’t a surprise. I’ve lived a middle-class life and grown up in good neighborhoods. But it made me wonder: What if I had lived in “high risk” neighborhoods? Could that ever be used by insurers to jack up my rates — or to avoid me altogether?

I wanted to see more. If LexisNexis had health risk scores on me, I wanted to see how they were calculated and, more importantly, whether they were accurate. But the company told me that if it had calculated my scores it would have done so on behalf of their client, my insurance company. So, I couldn’t have them.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then lets the users identified accept or decline a name label for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching even for users who declined or disabled the display of name labels in photos and videos. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
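The matching process described above can be sketched generically: a face is reduced to a numeric vector (a "faceprint"), then compared against a database of named vectors. The example below illustrates the general technique only; Facebook's actual models, vectors, and thresholds are proprietary, and everything here is invented for illustration.

```python
# A generic sketch of face matching via embeddings: a face becomes a numeric
# vector, compared against a database of named vectors. This illustrates the
# technique in general, NOT Facebook's proprietary system; the vectors and
# threshold are invented.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical database: name -> embedding produced by some face-encoding model.
database = {
    "alice": np.array([0.11, 0.83, 0.42, 0.31]),
    "bob":   np.array([0.90, 0.05, 0.20, 0.38]),
}

def identify(unknown_embedding, threshold=0.95):
    """Return the best-matching name, or None if no match clears the threshold."""
    best_name, best_score = None, 0.0
    for name, known in database.items():
        score = cosine_similarity(unknown_embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A face scanned from a new photo, encoded by the same hypothetical model:
print(identify(np.array([0.12, 0.81, 0.40, 0.33])))  # -> "alice"
```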

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in 2017 Apple launched its iPhone X with the Face ID feature for users to unlock their phones. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a brick-and-mortar store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent for shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


New Jersey to Suspend Prominent Psychologist for Failing to Protect Patient Privacy

[Editor's note: today's guest blog post, by reporters at ProPublica, explores privacy issues within the healthcare industry. The post is reprinted with permission.]

By Charles Ornstein, ProPublica

A prominent New Jersey psychologist is facing the suspension of his license after state officials concluded that he failed to keep details of mental health diagnoses and treatments confidential when he sued his patients over unpaid bills.

The state Board of Psychological Examiners last month upheld a decision by an administrative law judge that the psychologist, Barry Helfmann, “did not take reasonable measures to protect the confidentiality of his patients’ protected health information,” Lisa Coryell, a spokeswoman for the state attorney general’s office, said in an e-mail.

The administrative law judge recommended that Helfmann pay a fine and a share of the investigative costs. The board went further, ordering that Helfmann’s license be suspended for two years, Coryell wrote. During the first year, he will not be able to practice; during the second, he can practice, but only under supervision. Helfmann also will have to pay a $10,000 civil penalty, take an ethics course and reimburse the state for some of its investigative costs. The suspension is scheduled to begin in September.

New Jersey began to investigate Helfmann after a ProPublica article published in The New York Times in December 2015 that described the lawsuits and the information they contained. The allegations involved Helfmann’s patients as well as those of his colleagues at Short Hills Associates in Clinical Psychology, a New Jersey practice where he has been the managing partner.

Helfmann is a leader in his field, serving as president of the American Group Psychotherapy Association, and as a past president of the New Jersey Psychological Association.

ProPublica identified 24 court cases filed by Short Hills Associates from 2010 to 2014 over unpaid bills in which patients’ names, diagnoses and treatments were listed in documents. The defendants included lawyers, business people and a manager at a nonprofit. In cases involving patients who were minors, the lawsuits included children’s names and diagnoses.

The information was subsequently redacted from court records after a patient counter-sued Helfmann and his partners, the psychology group and the practice’s debt collection lawyers. The patient’s lawsuit was settled.

Helfmann has denied wrongdoing, saying his former debt collection lawyers were responsible for attaching patients’ information to the lawsuits. His current lawyer, Scott Piekarsky, said he intends to file an immediate appeal before the discipline takes effect.

"The discipline imposed is ‘so disproportionate as to be shocking to one’s sense of fairness’ under New Jersey case law," Piekarsky said in a statement.

Piekarsky also noted that the administrative law judge who heard the case found no need for any license suspension and raised questions about the credibility of the patient who sued Helfmann. "We feel this is a political decision due to Dr. Helfmann’s aggressive stance" in litigation, he said.

Helfmann sued the state of New Jersey and Joan Gelber, a senior deputy attorney general, claiming that he was not provided due process and equal protection under the law. He and Short Hills Associates sued his prior debt collection firm for legal malpractice. Those cases have been dismissed, though Helfmann has appealed.

Helfmann and Short Hills Associates also are suing the patient who sued him, as well as the man’s lawyer, claiming the patient and lawyer violated a confidential settlement agreement by talking to a ProPublica reporter and sharing information with a lawyer for the New Jersey attorney general’s office without providing advance notice. In court pleadings, the patient and his lawyer maintain that they did not breach the agreement. Helfmann brought all three of these lawsuits in state court in Union County.

Throughout his career, Helfmann has been an advocate for patient privacy, helping to push a state law limiting the information an insurance company can seek from a psychologist to determine the medical necessity of treatment. He also was a plaintiff in a lawsuit against two insurance companies and a New Jersey state commission, accusing them of requiring psychologists to turn over their treatment notes in order to get paid.

"It is apparent that upholding the ethical standards of his profession was very important to him," Carol Cohen, the administrative law judge, wrote. "Having said that, it appears that in the case of the information released to his attorney and eventually put into court papers, the respondent did not use due diligence in being sure that confidential information was not released and his patients were protected."


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Researchers Find Mobile Apps Can Easily Record Screenshots And Videos of Users' Activities

New academic research highlights how easy it is for mobile apps to both spy upon consumers and violate our privacy. During a recent study to determine whether or not smartphones record users' conversations, researchers at Northeastern University (NU) found:

"... that some companies were sending screenshots and videos of user phone activities to third parties. Although these privacy breaches appeared to be benign, they emphasized how easily a phone’s privacy window could be exploited for profit."

The NU researchers tested 17,260 of the most popular mobile apps running on smartphones using the Android operating system. About 9,000 of the 17,260 apps had the ability to take screenshots. The vulnerability: screenshot and video captures could easily be used to record users' keystrokes, passwords, and related sensitive information:

"This opening will almost certainly be used for malicious purposes," said Christo Wilson, another computer science professor on the research team. "It’s simple to install and collect this information. And what’s most disturbing is that this occurs with no notification to or permission by users."

The NU researchers found one app already recording video of users' screen activity (links added):

"That app was GoPuff, a fast-food delivery service, which sent the screenshots to Appsee, a data analytics firm for mobile devices. All this was done without the awareness of app users. [The researchers] emphasized that neither company appeared to have any nefarious intent. They said that web developers commonly use this type of information to debug their apps... GoPuff has changed its terms of service agreement to alert users that the company may take screenshots of their use patterns. Google issued a statement emphasizing that its policy requires developers to disclose to users how their information will be collected."

May? A brief review of the Appsee site seems to confirm that video recordings of the screens on app users' mobile devices are integral to the service:

"RECORDING: Watch every user action and understand exactly how they use your app, which problems they're experiencing, and how to fix them.​ See the app through your users' eyes to pinpoint usability, UX and performance issues... TOUCH HEAT MAPS: View aggregated touch heatmaps of all the gestures performed in each​ ​screen in your app.​ Discover user navigation and interaction preferences... REALTIME ANALYTICS & ALERTS:Get insightful analytics on user behavior without pre-defining any events. Obtain single-user and aggregate insights in real-time..."

Sounds like a version of "surveillance capitalism" to me. According to the Appsee site, a variety of companies use the service including eBay, Samsung, Virgin airlines, The Weather Network, and several advertising networks. Plus, the Appsee Privacy Policy dated May 23, 2018 stated:

"The Appsee SDK allows Subscribers to record session replays of their end-users' use of Subscribers' mobile applications ("End User Data") and to upload such End User Data to Appsee’s secured cloud servers."

In this scenario, GoPuff is a subscriber and consumers using the GoPuff mobile app are end users. The Appsee SDK is software code embedded within the GoPuff mobile app. The researchers said that this vulnerability, "will not be closed until the phone companies redesign their operating systems..."

Data-analytics services like Appsee raise several issues. First, there seems to be little need for digital agencies to conduct traditional eye-tracking and usability test sessions, since companies can now record, upload and archive what, when, where, and how often users swipe and select in-app content. Before, users were invited to and paid for their participation in user testing sessions.

Second, this in-app tracking and data collection amounts to perpetual, unannounced user testing. Previously, companies have gotten into plenty of trouble with their customers by performing secret user testing, especially when the service varies from the standard, expected configuration and the policies (e.g., privacy, terms of service) don't disclose it. Nobody wants to be a lab rat or crash-test dummy.

Third, surveillance agencies within several governments must be thrilled to learn of these new in-app tracking and spy tools, if they aren't already using them. A reasonable assumption is that Appsee also provides data to law enforcement upon demand.

Fourth, two of the researchers at NU are undergraduate students. Another startling disclosure:

"Coming into this project, I didn’t think much about phone privacy and neither did my friends," said Elleen Pan, who is the first author on the paper. "This has definitely sparked my interest in research, and I will consider going back to graduate school."

Given the tsunami of data breaches, privacy legislation in Europe, and demands by law enforcement for tech firms to build "back door" hacks into their mobile devices and smartphones, it is alarming that some college students "don't think much about phone privacy." This means that Pan and her classmates probably haven't read the privacy and terms-of-service policies for the apps and sites they've used. Maybe they will now.

Let's hope so.

Consumers interested in GoPuff should closely read the service's privacy and Terms of Service policies, since the latter includes dispute resolution via binding arbitration and prevents class-action lawsuits.

Hopefully, future studies about privacy and mobile apps will explore further the findings by Pan and her co-researchers. Download the study titled, "Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications" (Adobe PDF) by Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes.


The Wireless Carrier With At Least 8 'Hidden Spy Hubs' Helping The NSA

AT&T logo During the late 1970s and 1980s, AT&T conducted an iconic “reach out and touch someone” advertising campaign to encourage consumers to call their friends, family, and classmates. Back then, it was old school -- landlines. The campaign ranked #80 on Ad Age's list of the 100 top ad campaigns from the last century.

Now, we learn a little more about the extent of surveillance activities at AT&T facilities that help law enforcement reach out and touch persons. Yesterday, the Intercept reported:

"The NSA considers AT&T to be one of its most trusted partners and has lauded the company’s “extreme willingness to help.” It is a collaboration that dates back decades. Little known, however, is that its scope is not restricted to AT&T’s customers. According to the NSA’s documents, it values AT&T not only because it "has access to information that transits the nation," but also because it maintains unique relationships with other phone and internet providers. The NSA exploits these relationships for surveillance purposes, commandeering AT&T’s massive infrastructure and using it as a platform to covertly tap into communications processed by other companies.”

The new report describes in detail the activities at eight AT&T facilities in major cities across the United States. Consumers who use other branded wireless service providers are also affected:

"Because of AT&T’s position as one of the U.S.’s leading telecommunications companies, it has a large network that is frequently used by other providers to transport their customers’ data. Companies that “peer” with AT&T include the American telecommunications giants Sprint, Cogent Communications, and Level 3, as well as foreign companies such as Sweden’s Telia, India’s Tata Communications, Italy’s Telecom Italia, and Germany’s Deutsche Telekom."

It was five years ago this month that the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, the Guardian UK newspaper reported about a court order allowing the NSA to spy on U.S. citizens. The revelations continued, and by 2016 we'd learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences from broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, the inclusion of ordinary (e.g., innocent) citizens besides legal targets in data collection programs, and how most hi-tech and telecommunications companies assisted the government with its spy programs. We knew before that AT&T was probably the best collaborator, and now we know more about why.

Content vacuumed up during the surveillance includes consumers' phone calls, text messages, e-mail messages, and internet activity. The latest report by the Intercept also described:

"The messages that the NSA had unlawfully collected were swept up using a method of surveillance known as “upstream,” which the agency still deploys for other surveillance programs authorized under both Section 702 of FISA and Executive Order 12333. The upstream method involves tapping into communications as they are passing across internet networks – precisely the kind of electronic eavesdropping that appears to have taken place at the eight locations identified by The Intercept."

Former NSA contractor Edward Snowden also commented about the report on Twitter.


Supreme Court Ruling Requires Government To Obtain Search Warrants To Collect Users' Location Data

On Friday, the Supreme Court of the United States (SCOTUS) issued a decision which requires the government to obtain warrants in order to collect information from wireless carriers such as geo-location data. 9to5Mac reported that the court case resulted from:

"... a 2010 case of armed robberies in Detroit in which prosecutors used data from wireless carriers to make a conviction. In this case, lawyers had access to about 13,000 location data points. The sticking point has been whether access and use of data like this violates the Fourth Amendment. Apple, along with Google and Facebook had previously submitted a brief to the Supreme Court arguing for privacy protection..."

The Fourth Amendment in the U.S. Constitution states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

The New York Times reported:

"The 5-to-4 ruling will protect "deeply revealing" records associated with 400 million devices, the chief justice wrote. It did not matter, he wrote, that the records were in the hands of a third party. That aspect of the ruling was a significant break from earlier decisions. The Constitution must take account of vast technological changes, Chief Justice Roberts wrote, noting that digital data can provide a comprehensive, detailed — and intrusive — overview of private affairs that would have been impossible to imagine not long ago. The decision made exceptions for emergencies like bomb threats and child abductions..."

Background regarding the Fourth Amendment:

"In a pair of recent decisions, the Supreme Court expressed discomfort with allowing unlimited government access to digital data. In United States v. Jones, it limited the ability of the police to use GPS devices to track suspects’ movements. And in Riley v. California, it required a warrant to search cellphones. Chief Justice Roberts wrote that both decisions supported the result in the new case.

The Supreme Court's decision also discussed historical use of the "third-party doctrine" by law enforcement:

"In 1979, for instance, in Smith v. Maryland, the Supreme Court ruled that a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his landline phone. The court reasoned that the suspect had voluntarily turned over that information to a third party: the phone company. Relying on the Smith decision’s “third-party doctrine,” federal appeals courts have said that government investigators seeking data from cellphone companies showing users’ movements do not require a warrant. But Chief Justice Roberts wrote that the doctrine is of limited use in the digital age. “While the third-party doctrine applies to telephone numbers and bank records, it is not clear whether its logic extends to the qualitatively different category of cell-site records,” he wrote."

The ruling also covered the Stored Communications Act, which requires:

"... prosecutors to go to court to obtain tracking data, but the showing they must make under the law is not probable cause, the standard for a warrant. Instead, they must demonstrate only that there were “specific and articulable facts showing that there are reasonable grounds to believe” that the records sought “are relevant and material to an ongoing criminal investigation.” That was insufficient, the court ruled. But Chief Justice Roberts emphasized the limits of the decision. It did not address real-time cell tower data, he wrote, “or call into question conventional surveillance techniques and tools, such as security cameras.” "

What else this Supreme Court decision might mean:

"The decision thus has implications for all kinds of personal information held by third parties, including email and text messages, internet searches, and bank and credit card records. But Chief Justice Roberts said the ruling had limits. "We hold only that a warrant is required in the rare case where the suspect has a legitimate privacy interest in records held by a third party," the chief justice wrote. The court’s four more liberal members — Justices Ruth Bader Ginsburg, Stephen G. Breyer, Sonia Sotomayor and Elena Kagan — joined his opinion."

Dissenting opinions by conservative Justices cited restrictions on law enforcement's abilities and predicted further litigation. Breitbart News focused upon divisions within the Supreme Court and the dissenting Justices' opinions, rather than a comprehensive explanation of the majority's opinion and the law. Some conservatives say that President Trump will have an opportunity to appoint two Supreme Court Justices.

Albert Gidari, the Consulting Director of Privacy at the Stanford Law Center for Internet and Society, discussed the Court's ruling:

"What a Difference a Week Makes. The government sought seven days of records from the carrier; it got two days. The Court held that seven days or more was a search and required a warrant. So can the government just ask for 6 days with a subpoena or court order under the Stored Communications Act? Here’s what Justice Roberts said in footnote 3: “[W]e need not decide whether there is a limited period for which the Government may obtain an individual’s historical CSLI free from Fourth Amendment scrutiny, and if so, how long that period might be. It is sufficient for our purposes today to hold that accessing seven days of CSLI constitutes a Fourth Amendment search.” You can bet that will be litigated in the coming years, but the real question is what will mobile carriers do in the meantime... Where You Walk and Perhaps Your Mere Presence in Public Spaces Can Be Private. The Court said this clearly: “A person does not surrender all Fourth Amendment protection by venturing into the public sphere. To the contrary, “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”” This is the most important part of the Opinion in my view. It’s potential impact is much broader than the location record at issue in the case..."

Mr. Gidari's essay explored several more issues:

  • Does the Decision Really Make a Difference to Law Enforcement?
  • Are All Business Records in the Hands of Third Parties Now Protected?
  • Does It Matter Whether You Voluntarily Give the Data to a Third Party?

Most people carry their smartphones with them 24/7 and everywhere they go. Hence, the geo-location data trail contains unique and very personal movements: where and whom you visit, how often and long you visit, who else (e.g., their smartphones) is nearby, and what you do (e.g., calls, mobile apps) at certain locations. The Supreme Court, or at least a majority of its Justices, seems to recognize and value this.

What are your opinions of the Supreme Court ruling?


Lawmakers In California Cave To Industry Lobbying, And Backtrack With Weakened Net Neutrality Bill

After the U.S. Federal Communications Commission (FCC) acted last year to repeal net neutrality rules, those protections officially expired on June 11th. Meanwhile, legislators in California have acted to protect their state's residents. In January, State Senator Weiner introduced a proposed bill, which the California Senate passed three weeks ago.

Since then, some politicians have countered with a modified bill lacking strong protections. C/Net reported:

"The vote on Wednesday in a California Assembly committee hearing advanced a bill that implements some net neutrality protections, but it scaled back all the measures of the bill that had gone beyond the rules outlined in the Federal Communications Commission's 2015 regulation, which was officially taken off the books by the Trump Administration's commission last week. In a surprise move, the vote happened before the hearing officially started,..."

Weiner's original bill was considered the "gold standard" of net neutrality protections for consumers because:

"... it went beyond the FCC's 2015 net neutrality "bright line" rules by including provisions like a ban on zero-rating, a business practice that allows broadband providers like AT&T to exempt their own services from their monthly wireless data caps, while services from competitors are counted against those limits. The result is a market controlled by internet service providers like AT&T, who can shut out the competition by creating an economic disadvantage for those competitors through its wireless service plans."

State Senator Weiner summarized the modified legislation:

"It is, with the amendments, a fake net neutrality bill..."

A key supporter of the modified, weak bill was Assemblyman Miguel Santiago, a Democrat from Los Angeles. Motherboard reported:

"Spearheading the rushed dismantling of the promising law was Committee Chair Miguel Santiago, a routine recipient of AT&T campaign contributions. Santiago’s office failed to respond to numerous requests for comment from Motherboard and numerous other media outlets... Weiner told the San Francisco Chronicle that the AT&T fueled “evisceration” of his proposal was “decidedly unfair.” But that’s historically how AT&T, a company with an almost comical amount of control over state legislatures, tends to operate. The company has so much power in many states, it’s frequently allowed to quite literally write terrible state telecom law..."

Supporters of this weakened bill either forgot or ignored the results from a December 2017 study of 1,077 voters. Most consumers want net neutrality protections:

Do you favor or oppose the proposal to give ISPs the freedom to: a) provide websites the option to give their visitors the ability to download material at a higher speed, for a fee, while providing a slower speed for other websites; b) block access to certain websites; and c) charge their customers an extra fee to gain access to certain websites?
Group          Favor    Opposed   Refused/Don't Know
National       15.5%    82.9%     1.6%
Republicans    21.0%    75.4%     3.6%
Democrats      11.0%    88.5%     0.5%
Independents   14.0%    85.9%     0.1%

Why would politicians pursue weak net neutrality bills with few protections, while constituents want those protections? They are doing the bidding of the corporate internet service providers (ISPs) at the expense of their constituents. Profits before people. These politicians promote the freedom for ISPs to do as they please while restricting consumers' freedoms to use the bandwidth they've purchased however they please.

Broadcasting and Cable reported:

"These California democrats will go down in history as among the worst corporate shills that have ever held elected office," said Evan Greer of net neutrality activist group Fight for the Future. "Californians should rise up and demand that at their Assembly members represent them. The actions of this committee are an attack not just on net neutrality, but on our democracy.” According to Greer, the vote passed 8-0, with Democrats joining Republicans to amend the bill."

According to C/Net, more than 24 states are considering net neutrality legislation to protect their residents:

"... New York, Connecticut, and Maryland, are also considering legislation to reinstate net neutrality rules. Oregon and Washington state have already signed their own net neutrality legislation into law. Governors in several states, including New Jersey and Montana, have signed executive orders requiring ISPs that do business with the state adhere to net neutrality principles."

So, we have AT&T (plus politicians more interested in corporate donors than their constituents, the FCC, President Trump, and probably other telecommunications companies) to thank for this mess. What do you think?


Apple To Close Security Hole Law Enforcement Frequently Used To Access iPhones

You may remember. In 2016, the U.S. Department of Justice attempted to force Apple Computer to build a back door into its devices so law enforcement could access suspects' iPhones. After Apple refused, the government found a vendor to do the hacking for them. In 2017, multiple espionage campaigns targeted Apple devices with new malware.

Now, we learn a future Apple operating system (iOS) software update will close a security hole frequently used by law enforcement. Reuters reported that the future iOS update will include default settings to terminate communications through the USB port when the device hasn't been unlocked within the past hour. Reportedly, that change may reduce access by 90 percent.

Kudos to the executives at Apple for keeping customers' privacy foremost.


New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of the Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he writes.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."

Lovely. And, Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."
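Under the hood, a link shim is ordinary URL plumbing: the real destination rides along as a percent-encoded "u" query parameter. Below is a minimal sketch in Python that unwraps the shim quoted above (shortened here for readability). It illustrates the format only; it is not how Privacy Badger itself works.

```python
from urllib.parse import urlparse, parse_qs

# The l.facebook.com shim from the EFF quote above, shortened for readability.
shim = "https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8"

# The "u" query parameter carries the percent-encoded real destination;
# parse_qs decodes it back into a plain URL.
params = parse_qs(urlparse(shim).query)
print(params["u"][0])  # -> https://eff.org/pb
```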

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.


Equifax Operates A Secondary Credit Reporting Agency, And Its Website Appears Haphazard

Equifax logo More news about Equifax, the credit reporting agency with multiple data security failures resulting in a massive data breach affecting half of the United States population. It appears that Equifax also operates a secondary credit bureau: the National Consumer Telecommunications and Utilities Exchange (NCTUE). The Krebs On Security blog explained Equifax's role:

"The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE... there are four "exchanges" that feed into the NCTUE’s system: the NCTUE itself, something called "Centralized Credit Check Systems," the New York Data Exchange (NYDE), and the California Utility Exchange. According to a partner solutions page at Verizon, the NYDE is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts. The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax)... The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC)."

This surfaced after consumers with security freezes on their credit reports at the three major credit reporting agencies (e.g., Experian, Equifax, TransUnion) found fraudulent mobile phone accounts opened in their names. This shouldn't have been possible since security freezes prevent credit reporting agencies from selling consumers' credit reports to telecommunications companies, who typically perform credit checks before opening new accounts. So, the credit information must have come from somewhere else. It turns out, the source was the NCTUE.

NCTUE logo Credit reporting agencies make money by selling consumers' credit reports to potential lenders. And credit reports from the NCTUE are easy for anyone to order:

"... the NCTUE makes it fairly easy to obtain any records they may have on Americans. Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address."

The Krebs On Security blog also explained that the site used an expired SSL certificate, which prevented it from serving web pages securely. That was simply inexcusable, poor data security.
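Checking a certificate's expiration date is trivial, which makes the lapse harder to excuse. Here is a minimal sketch in Python using only the standard library; the hostname is a placeholder, not the actual NCTUE or ESC address.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry(hostname: str, port: int = 443) -> datetime:
    """Fetch a site's TLS certificate and return its expiration date."""
    context = ssl.create_default_context()
    # Note: if the certificate has ALREADY expired, this handshake fails
    # with ssl.SSLCertVerificationError -- itself the red flag.
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # The 'notAfter' field looks like: 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return expires.replace(tzinfo=timezone.utc)

if __name__ == "__main__":
    expires = cert_expiry("www.example.com")  # placeholder hostname
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"Certificate expires {expires:%Y-%m-%d} ({days_left} days left)")
```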

A quick check of the NCTUE page on the Better Business Bureau site found 2 negative reviews and 70 complaints -- mostly about negative credit inquiries and unresolved issues. A quick check of the NCTUE Terms Of Use page found very thin usage and privacy policies lacking details about data sharing, cookies, tracking, and more. The lack of any data-sharing mentions could indicate that NCTUE will share or sell data to anyone: entities, companies, and government agencies. It also means there is no way to verify whether the NCTUE complies with its own policies. Not good.

The policy contains enough language indicating that NCTUE disclaims liability for just about everything:

"... THE NCTUE IS NOT RESPONSIBLE FOR, AND EXPRESSLY DISCLAIM, ALL LIABILITY FOR, DAMAGES OF ANY KIND ARISING OUT OF USE, REFERENCE TO, OR RELIANCE ON ANY INFORMATION CONTAINED WITHIN THE SITE. All content located at or available from the NCTUE website is provided “as is,” and NCTUE makes no representations or warranties, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title or non-infringement of proprietary rights. Without limiting the foregoing, NCTUE makes no representation or warranty that content located on the NCTUE website is free from error or suitable for any purpose; nor that the use of such content will not infringe any third party copyrights, trademarks or other intellectual property rights.

Links to Third Party Websites: Although the NCTUE website may include links providing direct access to other Internet resources, including websites, NCTUE is not responsible for the accuracy or content of information contained in these sites.."

Huh?! As is? The data NCTUE collected is being used for credit decisions. Reliability and accuracy matters. And, there are more concerns.

While at the NCTUE site, I briefly browsed the credit freeze information, which is hosted on an outsourced site, the Exchange Service Center (ESC). What's up with that? Why a separate site, and not a cohesive single site with a unified customer experience? This design gives the impression that the security freeze process was an afterthought.

Plus, the NCTUE and ESC sites present different policies (e.g., terms of use, privacy). Really? Why the complexity? Which policies rule? You'd think that the policies on both sites would be consistent and would mention each other, since consumers must use the two sites to complete security freezes. That design seems haphazard. Not good.

There's more. Rather than use standard web pages, the ESC site presents its policies in static Adobe PDF documents, making it difficult for users to follow links for more information. (Contrast those thin policies with the more comprehensive Privacy and Terms of Use policies by TransUnion.) Plus, one policy was old -- dated 2011. It seems the site hasn't been updated in seven years. What fresh hell is this? More haphazard design. Why the confusing user experience? Not good.

[Image: the confusing drop-down menu for exchanges within the security freeze process.] There's more. When placing a security freeze, the ESC site includes a drop-down menu asking consumers to pick an exchange (e.g., NCTUE, Centralized Credit Check System, California Utility Exchange, NYDE). Which menu option is the global security freeze? Is there a global option? The form page doesn't say, and it should. Why would a consumer select just one of the exchanges? Perhaps this is another slick attempt to limit the effectiveness of security freezes placed by consumers. Not good.

What can consumers make of this? First, the NCTUE site seems to be a slick way for Equifax to skirt the security freezes which consumers have placed upon their credit reports. Sounds like a definite end-run to me. Surprised? I'll bet. Angry? I'll bet, too. We consumers paid good money for security freezes on our credit reports.

Second, the combo NCTUE/ESC site seems like some legal, outsourcing ju-jitsu to avoid all liability, while still enjoying the revenues from credit-report sales. The site left me with the impression that its design, which hasn't kept pace during the years with internet best practices, was by a committee of attorneys focused upon serving their corporate clients' data collection and sharing needs while doing the absolute minimum required legally -- rather than a site focused upon the security needs of consumers. I can best describe the site using an old film-review phrase: a million monkeys with a million crayons would be hard pressed in a million years to create something this bad.

Third, credit reporting agencies get their data from a variety of sources. So, their business model is based upon data sharing. NCTUE seems designed to effectively do just that, regardless of consumers' security needs and wishes.

Fourth, this situation offers several reminders: a) just about anyone can set up and operate a credit reporting agency -- no special skills or expertise required; b) there are both national and regional credit reporting agencies; c) credit reports often contain errors; and d) credit reporting agencies historically have outsourced work, sometimes internationally -- for better or worse data security.

Fifth, now you know what criminals and fraudsters already know... how to skirt the security freezes on credit reports and gain access to consumers' sensitive information. The combo NCTUE/ESC site is definitely a high-value target for criminals.

My first impression of the NCTUE site: haphazard design making it difficult for consumers to use and to trust it. What do you think?


Oakland Law Mandates 'Technology Impact Reports' By Local Government Agencies Before Purchasing Surveillance Equipment

Popular tools used by law enforcement include stingrays (fake cellular phone towers) and automated license plate readers (ALPRs) to track the movements of persons. Historically, these technologies have often been deployed without notice to track both the bad guys (e.g., criminals and suspects) and innocent citizens.

To better balance the privacy needs of citizens versus the surveillance needs of law enforcement, some areas are implementing new laws. The East Bay Times reported about a new law in Oakland:

"... introduced at Tuesday’s city council meeting, creates a public approval process for surveillance technologies used by the city. The rules also lay a groundwork for the City Council to decide whether the benefits of using the technology outweigh the cost to people’s privacy. Berkeley and Davis have passed similar ordinances this year.

However, Oakland’s ordinance is unlike any other in the nation in that it requires any city department that wants to purchase or use the surveillance technology to submit a "technology impact report" to the city’s Privacy Advisory Commission, creating a “standardized public format” for technologies to be evaluated and approved... city departments must also submit a “surveillance use policy” to the Privacy Advisory Commission for consideration. The approved policy must be adopted by the City Council before the equipment is to be used..."

Reportedly, the city council will review the ordinance a second time before final passage.

The Northern California chapter of the American Civil Liberties Union (ACLU) discussed the problem, the need for transparency, and legislative actions:

"Public safety in the digital era must include transparency and accountability... the ACLU of California and a diverse coalition of civil rights and civil liberties groups support SB 1186, a bill that helps restores power at the local level and makes sure local voices are heard... the use of surveillance technology harms all Californians and disparately harms people of color, immigrants, and political activists... The Oakland Police Department concentrated their use of license plate readers in low income and minority neighborhoods... Across the state, residents are fighting to take back ownership of their neighborhoods... Earlier this year, Alameda, Culver City, and San Pablo rejected license plate reader proposals after hearing about the Immigration & Customs Enforcement (ICE) data [sharing] deal. Communities are enacting ordinances that require transparency, oversight, and accountability for all surveillance technologies. In 2016, Santa Clara County, California passed a groundbreaking ordinance that has been used to scrutinize multiple surveillance technologies in the past year... SB 1186 helps enhance public safety by safeguarding local power and ensuring transparency, accountability... SB 1186 covers the broad array of surveillance technologies used by police, including drones, social media surveillance software, and automated license plate readers. The bill also anticipates – and covers – AI-powered predictive policing systems on the rise today... Without oversight, the sensitive information collected by local governments about our private lives feeds databases that are ripe for abuse by the federal government. This is not a hypothetical threat – earlier this year, ICE announced it had obtained access to a nationwide database of location information collected using license plate readers – potentially sweeping in the 100+ California communities that use this technology. Many residents may not be aware their localities also share their information with fusion centers, federal-state intelligence warehouses that collect and disseminate surveillance data from all levels of government.

Statewide legislation can build on the nationwide Community Control Over Police Surveillance (CCOPS) movement, a reform effort spearheaded by 17 organizations, including the ACLU, that puts local residents and elected officials in charge of decisions about surveillance technology. If passed in its current form, SB 1186 would help protect Californians from intrusive, discriminatory, and unaccountable deployment of law enforcement surveillance technology."

Is there similar legislation in your state?


Twitter Advised Its Users To Change Their Passwords After Security Blunder

Yesterday, Twitter.com advised all of its users to change their passwords after a huge security blunder left users' passwords stored in an unprotected format in an internal log. The social networking service released a statement on May 3rd:

"We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password."

Security experts advise consumers not to use the same password at several sites or services. Repeated use of the same password makes it easy for criminals to hack into multiple sites or services.

The statement by Twitter.com also explained that it masks users' passwords:

"... through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
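For readers unfamiliar with hashing, here is a minimal sketch in Python using the open-source bcrypt package. It illustrates the process Twitter describes and, in the final comment, the kind of mistake that was reported. This is an illustration only; Twitter's actual code is not public.

```python
import bcrypt

password = b"correct horse battery staple"

# Hash the password with a per-user random salt. Only this hashed
# value should ever be written to storage or logs.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# A login attempt is validated against the hash, so the plaintext
# never needs to be stored anywhere.
assert bcrypt.checkpw(password, hashed)

# The reported bug was the equivalent of logging the plaintext
# BEFORE hashing -- e.g., log.write(password) -- which defeats
# the protection above.
```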

The good news: Twitter found the bug by itself. The not-so-good news: the statement was short on details. It did not explain the fixes that would prevent this blunder from happening again. Nor did the statement say how many users were affected. Twitter has about 330 million users, so it seems prudent to assume that all users were affected.


How to Wrestle Your Data From Data Brokers, Silicon Valley — and Cambridge Analytica

[Editor's note: today's guest post, by reporters at ProPublica, discusses data brokers you may not know, the data collected and archived about consumers, and options for consumers to (re)gain as much privacy as possible. It is reprinted with permission.]

By Jeremy B. Merrill, ProPublica

Cambridge Analytica thinks that I’m a "Very Unlikely Republican." Another political data firm, ALC Digital, has concluded I’m a "Socially Conservative," Republican, "Boomer Voter." In fact, I’m a 27-year-old millennial with no set party allegiance.

For all the fanfare, the burgeoning field of mining our personal data remains an inexact art.

One thing is certain: My personal data, and likely yours, is in more hands than ever. Tech firms, data brokers and political consultants build profiles of what they know — or think they can reasonably guess — about your purchasing habits, personality, hobbies and even what political issues you care about.

You can find out what those companies know about you but be prepared to be stubborn. Very stubborn. To demonstrate how this works, we’ve chosen a couple of representative companies from three major categories: data brokers, big tech firms and political data consultants.

Few of them make it easy. Some will show you on their websites, others will make you ask for your digital profile via the U.S. mail. And then there’s Cambridge Analytica, the controversial Trump campaign vendor that has come under intense fire in light of a report in the British newspaper The Observer and in The New York Times that the company used improperly obtained data from Facebook to help build voter profiles.

To find out what the chaps at the British data firm have on you, you’re going to need both stamps and a "cheque."

Once you see your data, you’ll have a much better understanding of how this shadowy corner of the new economy works. You’ll see what seemingly personal information they know about you … and you’ll probably have some hypotheses about where this data is coming from. You’ll also probably see some predictions about who you are that are hilariously wrong.

And if you do obtain your data from any of these companies, please let us know your thoughts at politicaldata@propublica.org. We won’t share or publish what you say (unless you tell us that it’s OK).

Cambridge Analytica and Other Political Consultants

Making statistically informed guesses about Americans’ political beliefs and pet issues is a common business these days, with dozens of firms selling data to candidates and issue groups about the purported leanings of individual American voters.

Few of these firms have to give you your data. But Cambridge Analytica is required to do so by an obscure European rule.

Cambridge Analytica:

Around the time of the 2016 election, Paul-Olivier Dehaye, a Belgian mathematician and founder of a website that helps people exercise their data protection rights called PersonalData.IO, approached me with an idea for a story. He flagged some of Cambridge Analytica’s claims about the power of its "psychographic" targeting capabilities and suggested that I demand my data from them.

So I sent off a request, following Dehaye’s coaching, and citing the UK Data Protection Act 1998, the British implementation of a little-known European Union data-protection law that grants individuals (even Americans) the right to see the data European companies compile about them.

It worked. I got back a spreadsheet of data about me. But it took months, cost ten pounds — and I had to give them a photo ID and two utility bills. Presumably they didn’t want my personal data falling into the wrong hands.

How You Can Request Your Data From Cambridge Analytica:

  1. Visit Cambridge Analytica’s website here and fill out this web form.
  2. After you submit the form, the page will immediately request that you email to data.compliance@cambridgeanalytica.org a photo ID and two copies of your utility bills or bank statements, to prove your identity. This page will also include the company’s bank account details.
  3. Find a way to send them 10 GBP. You can try wiring this from your bank, though it may cost you an additional $25 or so — or ask a friend in the UK to go to their bank and get a cashier’s check. Your American bank probably won’t let you write a GBP-denominated check. Two services I tried, Xoom and TransferWise, weren’t able to do it.
  4. Eventually, Cambridge Analytica will email you a small Excel spreadsheet of information and a letter. You might have to wait a few weeks. Celeste LeCompte, ProPublica’s vice president of business development, requested her data on March 27 and still hasn’t received it.

Because the company is based in the United Kingdom, it had no choice but to fulfill my request. In recent weeks, the firm has come under intense fire after The New York Times and the British paper The Observer disclosed that it had used improperly obtained data from Facebook to build profiles of American voters. Facebook told me that data about me was likely transmitted to Cambridge Analytica because a person with whom I am "friends" on the social network had taken the now-infamous "This Is Your Digital Life" quiz. For what it’s worth, my data shows no sign of anything derived from Facebook.

What You Might Get Back From Cambridge Analytica:

Cambridge Analytica had generated 13 data points about my views: 10 political issues, ranked by importance; two guesses at my partisan leanings (one blank); and a guess at whether I would turn out in the 2016 general election.

They told me that the lower the rank, the higher the predicted importance of the issue to me.

Alongside that data labeled "models" were two other types of data that are run-of-the-mill and widely used by political consultants. One sheet of "core data" — that is, personal info, sliced and diced a few different ways, perhaps to be used more easily as parameters for a statistical model. It included my address, my electoral district, the census tract I live in and my date of birth.

The spreadsheet included a few rows of "election returns" — previous elections in New York State in which I had voted. (Intriguingly, Cambridge Analytica missed that I had voted in 2015’s snoozefest of a vote-for-five-of-these-five judicial election. It also didn’t know about elections in which I had voted in North Carolina, where I lived before I lived in New York.)

ALC Digital

ALC Digital is another data broker, which says that its "audiences are built from multi-sourced, verified information about an individual." Its data is distributed via Oracle Data Cloud, a service that lets advertisers target specific audiences of people — like, perhaps, people who are Boomer Voters and also Republicans.

The firm brags in an Oracle document posted online about how hard it is to avoid their data collection efforts, saying, "It has no cookies to erase and can’t be ‘cleared.’ ALC Real World Data is rooted in reality, and doesn’t rely on inferences or faulty models."

How You Can Request Your Data From ALC Digital:

Here’s how to find the predictions about your political beliefs data in Oracle Data Cloud:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Click on the Partner Segments tab.
  3. Scroll on through until you find ALC Digital.

You may have to scroll for a while before you find it.

And not everyone appears to have data from ALC Digital, so don’t be shocked if you can’t find it. If you don’t, there may be other fascinating companies with data about who you are in your Oracle file.

What You Might Get Back From ALC Digital:

When I downloaded the data last year, it said I was "Socially Conservative," "Boomer Voter" — as well as a female voter and a tax reform supporter.

Recently, when I checked my data, those categories had disappeared entirely from my data. I had nothing from ALC Digital.

ALC Digital is not required to release this data. It is disclosed via the Oracle Data Cloud. Fran Green, the company’s president, said that Aristotle, a longtime political data company, “provides us with consumer data that populates these audiences.” She also said that “we do not claim to know people’s ‘beliefs.’”

Big Tech

Big tech firms like Google and Facebook tend to make their money by selling ads, so they build extensive profiles of their users’ interests and activities. They also depend on their users’ goodwill to keep us voluntarily giving them our locations, our browsing histories and plain ol’ lists of our friends and interests. (So far, these popular companies have not faced much regulation.) These firms make it easy to download the data that they keep on you.

Firms like Google and Facebook don’t sell your data — because it’s their competitive advantage. Google’s privacy page screams in 72 point type: "We do not sell your personal information to anyone." As websites that we visit frequently, they sell access to our attention, so companies that want to reach you in particular can do so via these companies’ sites or other sites that feature their ads.

Facebook

How You Can Request Your Data From Facebook:

You of course have to have a Facebook account and be logged in:

  1. Visit https://www.facebook.com/settings on your computer.
  2. Click the “Download a copy of your Facebook data” link.
  3. On the next page, click “Start My Archive.”
  4. Enter your password, then click “Start My Archive” again.
  5. You’ll get an email immediately, and another one saying “Your Facebook download is ready” when your data is ready to be downloaded. You’ll get a notification on Facebook, too. Mine took just a few minutes.
  6. Once you get that email, click the link, then click Download Archive. Then reenter your password, which will start a zip file downloading.
  7. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.” You’ll get a folder called something like “facebook-jeremybmerrill,” but, of course, with your username instead of mine.
  8. Open the folder and double-click “index.htm” to open it in your web browser. (If you’d rather script steps 7 and 8, see the sketch after this list.)
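
For the script-inclined, here is a minimal Python sketch of steps 7 and 8. The archive name below is the hypothetical example from step 7, and it assumes the zip unpacks to a folder named like the archive, as step 7 describes; substitute the name of your own download.

    # Minimal sketch of steps 7 and 8: unzip the downloaded archive, then
    # open its table of contents in your default web browser.
    import webbrowser
    import zipfile
    from pathlib import Path

    archive = Path("facebook-jeremybmerrill.zip")  # yours will carry your username
    with zipfile.ZipFile(archive) as zf:
        zf.extractall()  # step 7: creates a folder named like the archive

    folder = Path(archive.stem)  # e.g. "facebook-jeremybmerrill"
    webbrowser.open((folder / "index.htm").resolve().as_uri())  # step 8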

What You Might Get Back From Facebook:

Facebook designed its archive to first show you your profile information. That’s all information you typed into Facebook and that you probably intended to be shared with your friends. It’s no surprise that Facebook knows what city I live in or what my AIM screen name was — I told Facebook those things so that my friends would know.

But it’s a bit of a surprise that they decided to feature a list of my ex-girlfriends — what they blandly termed "Previous Relationships" — so prominently.

As you dig deeper in your archive, you’ll find more information that you gave Facebook, but that you might not have expected the social network to keep hold of for years: if you’re me, that’s the Nickelback concert I apparently RSVPed to, posts about switching high schools and instant messages from my freshman year in college.

But finally, you’ll find the creepier information: what Facebook knows about you that you didn’t tell it, on the "Ads" page. You’ll find "Ads Topics" that Facebook decided you were interested in, like Housing, ESPN or the town of Ellijay, Georgia. And, you’ll find a list of advertisers who have obtained your contact information and uploaded it to Facebook, as part of a so-called Custom Audience of specific people to whom they want to show their ads.
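
As an aside on how those Custom Audiences work: Facebook’s documented practice is that advertisers don’t upload plain email addresses. They normalize each address and hash it with SHA-256, and Facebook then matches the hashes against its own users. A minimal sketch of that preparation step (the sample address is made up):

    # How an advertiser typically prepares contact info for a Custom
    # Audience upload: trim whitespace, lowercase, then SHA-256 hash.
    # The same address always yields the same fingerprint, which is
    # what makes the matching work.
    import hashlib

    def normalize_and_hash(email: str) -> str:
        normalized = email.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    print(normalize_and_hash("  Jane.Doe@Example.com "))  # a 64-character hex digest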

You’ll find more of that creepy information on your Ads Preferences page. Despite Mark Zuckerberg telling Rep. Jerry McNerney, D-Calif., in a hearing earlier this month that “all of your information is included in your ‘download your information,’” my archive didn’t include that list of ad categories that can be used to target ads to me. (Some other types of information aren’t included in the download either, like other people’s posts you’ve liked; Facebook lists those exceptions separately, along with where to find them — which, for most, is in your Activity Log.)

This area may include Facebook’s guesses about who you are, boiled down from some of your activities. Most Americans’ archives will include a guess about their politics — Facebook says I’m a "moderate" about U.S. Politics — and some will have a guess about so-called "multicultural affinity," which Facebook insists is not a guess about your ethnicity, but rather about what sorts of content "you are interested in or will respond well to." For instance, Facebook recently added that I have a "Multicultural Affinity: African American." (I’m white — though, because Facebook’s definition of "multicultural affinity" is so strange, it’s hard to tell if this is an error on Facebook’s part.)

Facebook also doesn’t include your browsing history — the subject of back-and-forths between Mark Zuckerberg and several members of Congress. It says it keeps that data just long enough to boil it down into those “Ad Topics.”

For people without Facebook accounts, Facebook says to email datarequests@support.facebook.com or fill out an online form to download what Facebook knows about you. One puzzle here is how Facebook gathers data on people whose identities it may not know. It may know that a person using a phone from Atlanta, Georgia, has accessed a Facebook site, that the same person was in Austin, Texas, last week, and before that in Cincinnati, but it may not know that that person is me. It is, in principle, difficult for the company to hand over the data it collects about logged-out users if it doesn’t know exactly who they are.

Google

Like Facebook, Google will give you a zip archive of your data. Google’s can be much bigger, because you might have stored gigabytes of files in Google Drive or years of emails in Gmail.

But like Facebook, Google does not provide its guesses about your interests, which it uses to target ads. Those guesses are available elsewhere.

How You Can Request Your Data From Google:

  1. Visit https://takeout.google.com/settings/takeout/ to use Google’s cutely named Takeout service.
  2. You’ll have to pick which data you want to download and examine. You should definitely select My Activity, Location History and Searches. You may not want to download gigabytes of emails, if you use Gmail, since that uses a lot of space and may take a while. (That’s also information you shouldn’t be surprised that Google keeps — you left it with Gmail so that you could use Google’s search expertise to hold on to your emails.)
  3. Google will present you with a few options for how to get your archive. The defaults are fine.
  4. Within a few hours, you should get an email with the subject "Your Google data archive is ready." Click Download Archive and log in again. That should start the download of a file named something like "takeout-20180412T193535.zip."
  5. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.”
  6. You’ll get a folder called Takeout. Open the file inside it called "index.html" in your web browser to explore your archive. (See the sketch after this list for a quick way to size up what’s inside.)
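
Takeout archives can be huge, so before digging in it can help to see where the bulk of yours lives. Here is a minimal sketch, assuming you’ve unzipped the archive into the Takeout folder from step 6, that tallies each product folder’s size:

    # Tally how many megabytes each Takeout product folder holds,
    # largest first.
    import os
    from collections import defaultdict
    from pathlib import Path

    takeout = Path("Takeout")  # the folder from step 6
    sizes = defaultdict(int)
    for root, _dirs, files in os.walk(takeout):
        for name in files:
            path = Path(root) / name
            top = path.relative_to(takeout).parts[0]  # e.g. "My Activity"
            sizes[top] += path.stat().st_size

    for product, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        print(f"{product}: {size / 1_000_000:.1f} MB")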

What You Might Get Back From Google:

Once you open the index.html file, you’ll see icons for the data you chose in step 2. Try exploring "Ads" under "My Activity" — you’ll see a list of times you saw Google Ads, including on apps on your phone.

Google also includes your search history, under "Searches" — in my case, going back to 2013. Google knows what I had forgotten: I Googled a bunch of dinosaurs around Valentine’s Day that year… And it’s not just web searches: the Sound Search history reminded me that at some point, I used that service to identify Natalie Imbruglia’s song "Torn."
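
If you want to dig up a specific old search without clicking through years of history, a crude scan works regardless of the exact file format Google uses, which it has changed over the years. This sketch just looks for a keyword in every file under the Searches folder; the folder path is an assumption based on the checkbox name in step 2:

    # Crude, format-agnostic scan: print every file in the Searches
    # folder that mentions a keyword, case-insensitively.
    from pathlib import Path

    keyword = "dinosaur"  # the term from those Valentine's Day searches

    for path in Path("Takeout/Searches").rglob("*"):
        if path.is_file() and keyword in path.read_text(errors="ignore").lower():
            print(path)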

Android phone users might want to check the "Android" folder: Google keeps a list of each app you’ve used on your phone.

Most of the data contained here are records of ways you’ve directly interacted with Google — and the company really does use those records to improve how its services work. I’m glad to see my searches auto-completed, for instance.

But the company also creates data about you: Visit the company’s Ads Settings page to see some of the “topics” Google guesses you’re interested in, and which it uses to personalize the ads you see. Those topics are fairly general — it knows I’m interested in “Politics” — but the company says it has more granular classifications that it doesn’t include on the list. Those more granular, hidden classifications are on various topics, from sports to vacations to politics, where Google does generate a guess whether some people are politically “left-leaning” or “right-leaning.”

Data Brokers

Here’s who really does sell your data: data brokers like the credit reporting agency Experian and a firm named Epsilon.

These sometimes-shady firms are middlemen who buy your data from tracking firms, survey marketers and retailers, slice and dice the data into “segments,” then sell those on to advertisers.

Experian

Experian is best known as a credit reporting firm, but your credit cards aren’t all they keep track of. They told me that they “firmly believe people should be made aware of how their data is being used” — so if you print and mail them a form, they’ll tell you what data they have on you.

“Educated consumers,” they said, “are better equipped to be effective, successful participants in a world that increasingly relies on the exchange of information to efficiently deliver the products and services consumers demand.”

How You Can Request Your Data From Experian:

  1. Visit Experian’s Marketing Data Request site and print the Marketing Data Report Request form.
  2. Print a copy of your ID and proof of address.
  3. Mail it all to: Experian Marketing Services, PO Box 40, Allen, TX 75013.
  4. Wait for them to mail you something back.

What You Might Get Back From Experian:

Expect to wait a while. I’ve been waiting almost a month.

They also come up with a guess about your political views that’s integrated with Facebook — our Facebook Political Ad Collector project has found that many political candidates use Experian’s data to target their Facebook ads to likely supporters.

You should hope to find a guess about your political views that’d be useful to those candidates — as well as categories derived from your purchasing data.

Experian told me they generate the data they have about you from a long list of sources, including public records and “historical catalog purchase information” — as well as calculating it from predictive models.

Epsilon

How You Can Request Your Data From Epsilon:

  1. Visit Epsilon’s Marketing Data Summary Request form.
  2. After entering your name and address, Epsilon will ask some of those identity-verification questions that quiz you about your old addresses and cars. If your identity can’t be verified with those, Epsilon will ask you to mail in a form.
  3. Wait for Epsilon to mail you your data; it took about a week for me.

What You Might Get Back From Epsilon:

Epsilon has information on “demographics” and “lifestyle interests” — at the household level. It also includes a list of “household purchases.”

It also has data that political candidates use to target their Facebook ads; those candidates include Randy Bryce, a Wisconsin Democrat seeking his party’s nomination to run for retiring Speaker Paul Ryan’s seat, and Rep. Tulsi Gabbard, D-Hawaii.

In my case, Epsilon knows I buy clothes, books and home office supplies, among other things — but isn’t any more specific. They didn’t tell me what political beliefs they believe I hold. The company didn’t respond to a request for comment.

Oracle

Oracle’s Data Cloud aggregates data about you from Oracle’s own services, as well as so-called third-party data from other companies.

How You Can Request Your Data From Oracle:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Explore each tab, from “Basic Info” to “Hobbies & Interests” and “Partner Segments.”

Scrolling through all those pages isn’t fun: I have 84 pages, with four pieces of data on each.

You can’t search, and all the text is actually images of text. Oracle declined to say why it chose to make its site so hard to use.

What You Might Get Back From Oracle:

My Oracle profile includes nearly 1,500 data points, covering all aspects of my life, from my age to my car to how old my children are to whether I buy eggs. These profiles can even say whether you’re likely to dress your pet in a costume for Halloween. But many of the data points are off-base or contradictory.

Many companies in Oracle’s data, besides ALC Digital, offer guesses about my political views: Data from one company uploaded by AcquireWeb says that my political affiliations are as a Democrat and an Independent … but also that I’m a “Mild Republican.” Another company, an Oracle subsidiary called AddThis, says that I’m a “Liberal.” Cuebiq, which calls itself a “location intelligence” company, says I’m in a subset of “Democrats” called “Liberal Professions.”

If an advertiser wants to show an ad to Spring Break Enthusiasts, Oracle can enable that. I’m apparently a Spring Break Enthusiast. Do I buy eggs? I sure do. Data on Oracle’s site associated with AcquireWeb says I’m a cat owner …

But it also “knows” I’m a dog owner, which I’m not.

Al Gadbut, the CEO of AcquireWeb, explained that the guesses associated with his company weren’t based on my personal data, but rather on the tendencies of people in my geographical area — hence the seemingly contradictory political guesses. He said his firm doesn’t generate the data, but rather uploads it on behalf of other companies. Cuebiq’s guess was a “probabilistic inference” drawn from location data submitted by some app on my phone. Valentina Marastoni-Bieser, Cuebiq’s senior vice president of marketing, wouldn’t tell me which app it was, though.

Data for sale here includes a long list of TV shows I — supposedly — watch.

But it’s not all wrong. AddThis can tell that I’m “Young & Hip.”

Takeaways:

The above list is just a sampling of the firms that collect your data and try to draw conclusions about who you are — not just sites you visit like Facebook and controversial firms like Cambridge Analytica.

You can make some guesses as to where this data comes from — especially the more granular consumer data from Oracle. For each data point, it’s worth considering: Who’d be in a position to sell a list of what TV shows I watch, or, at least, a list of what TV shows people demographically like me watch? Who’d be in a position to sell a list of what groceries I, or people similar to me in my area, buy? Some of those companies — companies that you’re likely paying, and for whom the internet adage that “if you’re not paying, you’re the product” doesn’t hold — are likely selling data about you without your knowledge. Other data points, like the location data used by Cuebiq, can come from any number of apps or websites, so it may be difficult to figure out exactly which one passed it on.

Companies like Google and Facebook often say that they’ll let you “correct” the data they hold on you — tacitly acknowledging that they sometimes get it wrong. And if receiving relevant ads is not important to you, they’ll let you opt out entirely — or, presumably, “correct” your data to something false.

An upcoming European Union rule called the General Data Protection Regulation portends a dramatic change to how data is collected and used on the web — if only for Europeans. No such law seems likely to be passed in the U.S. in the near future.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Many People Are Concerned About Facebook. Do Any Other Tech Companies Pose Privacy Threats?

The massive data breach involving Facebook and Cambridge Analytica focused attention and privacy concerns on the social networking giant. Reports about extensive tracking of users and non-users, testimony by its CEO before the U.S. Congress, and online tools allegedly allowing advertisers to violate federal housing laws have also focused attention on Facebook.

Are there any other tech or advertising companies about which consumers should have privacy concerns? What other companies collect massive amounts of information about consumers? It seems wise to look beyond Facebook to avoid missing significant threats.

To answer these questions, the Wall Street Journal compared Facebook and Google:

"... Alphabet Inc.’s Google is a far bigger threat by many measures: the volume of information it gathers, the reach of its tracking and the time people spend on its sites and apps... It’s likely that Google has shadow profiles on at least as many people as Facebook does, says Chandler Givens, chief executive of TrackOff, which develops software to fight identity theft. Google allows everyone, whether they have a Google account or not, to opt out of its ad targeting. Yet, like Facebook, it continues to gather your data... Google Analytics is far and away the web’s most dominant analytics platform. Used on the sites of about half of the biggest companies in the U.S., it has a total reach of 30 million to 50 million sites. Google Analytics tracks you whether or not you are logged in... Google uses, among other things, our browsing and search history, apps we’ve installed, demographics such as age and gender and, from its own analytics and other sources, where we’ve shopped in the real world. Google says it doesn’t use information from “sensitive categories” such as race, religion, sexual orientation or health..."

There's plenty more, so read the entire WSJ article. A good review worthy of further discussion.
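
The WSJ’s point about Google Analytics’ reach is easy to spot-check yourself. Here is a crude sketch that fetches a page and looks for Google’s well-known tracking domains in its HTML. It will miss trackers loaded indirectly (for instance, via a tag manager), so a negative result proves nothing; example.com is just a placeholder.

    # Crude check: does a page's HTML reference Google's tracking domains?
    import urllib.request

    TRACKER_DOMAINS = ("google-analytics.com", "googletagmanager.com")

    def mentions_google_tracking(url: str) -> bool:
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="ignore")
        return any(domain in html for domain in TRACKER_DOMAINS)

    print(mentions_google_tracking("https://example.com"))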

However, more companies pose privacy threats. Equifax, one of three major credit reporting agencies, easily makes my list. Its massive data breach affected half the population in the USA, plus persons worldwide. An investigation discovered several data security failures at Equifax.

Also on my list would be the U.S. Federal Communications Commission (FCC). Using some "light touch" legal ju-jitsu and vague promises of enabling infrastructure investments, the Republican-majority Commissioners and Trump appointee Ajit Pai at the FCC revoked broadband privacy protections for consumers last year... and punted broadband oversight responsibility to the U.S. Federal Trade Commission (FTC). This allowed corporate internet service providers (ISPs) to freely track and collect sensitive data about internet users without providing notice or opt-out mechanisms.

Uber also makes my list, given its massive data breach affecting 57 million persons. Earlier this month, the FTC announced a revised settlement agreement where Uber:

"... failed to disclose a significant breach of consumer data that occurred in 2016 -- in the midst of the FTC’s investigation that led to the August 2017 settlement announcement... the revised settlement could subject Uber to civil penalties if it fails to notify the FTC of certain future incidents involving unauthorized access of consumer information... In announcing the original proposed settlement with Uber in August 2017, the FTC charged that the company had failed to live up to its claims that it closely monitored employee access to rider and driver data and that it deployed reasonable measures to secure personal information stored on a third-party cloud provider’s servers.

In the revised complaint, the FTC alleges that Uber learned in November 2016 that intruders had again accessed consumer data the company stored on its third-party cloud provider’s servers by using an access key an Uber engineer had posted on a code-sharing website... the intruders used the access key to download from Uber’s cloud storage unencrypted files that contained more than 25 million names and email addresses, 22 million names and mobile phone numbers, and 600,000 names and driver’s license numbers of U.S. Uber drivers and riders... Uber paid the intruders $100,000 through its third-party “bug bounty” program and failed to disclose the breach to consumers or the Commission until November 2017... the new provisions in the revised proposed order include requirements for Uber to submit to the Commission all the reports from the required third-party audits of Uber’s privacy program rather than only the initial such report..."
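
Reports described that access key as having been posted publicly alongside source code. Assuming an AWS-style key for illustration (the widely documented pattern of "AKIA" followed by 16 uppercase characters), here is a minimal sketch of the kind of pre-publish check that can catch such a leak; it is an illustration, not a substitute for a real secret scanner:

    # Scan a source folder for strings shaped like AWS access key IDs
    # before publishing the code anywhere public.
    import re
    from pathlib import Path

    AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

    for path in Path(".").rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in AWS_KEY_ID.finditer(text):
            print(f"{path}: possible AWS access key ID {match.group()}")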

Yes, Wells Fargo bank makes my list, too. This blog post explains why. Who is on your list of the biggest privacy threats to consumers?