
New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."

Lovely. And, Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."
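The mechanics the EFF describes boil down to a redirect URL that carries the real destination, percent-encoded, in a query parameter. Here is a minimal sketch of unwrapping one (the `u` parameter name comes from the example URL above; this is illustrative, not Privacy Badger's actual code):

```python
from urllib.parse import parse_qs, urlparse

def unwrap_link_shim(shim_url: str) -> str:
    """Recover the real destination from a link shim URL.

    The wrapped link travels, percent-encoded, in the "u" query
    parameter (as in the l.facebook.com example above); parse_qs
    decodes the percent-encoding for us.
    """
    query = parse_qs(urlparse(shim_url).query)
    return query["u"][0]

shim = ("https://l.facebook.com/l.php"
        "?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8")
print(unwrap_link_shim(shim))  # prints https://eff.org/pb
```

The tracking request described above is separate: even after the visible link is swapped back to its clean form, a background request to l.facebook.com still records the click.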

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.


Equifax Operates A Secondary Credit Reporting Agency, And Its Website Appears Haphazard

More news about Equifax, the credit reporting agency with multiple data security failures resulting in a massive data breach affecting half of the United States population. It appears that Equifax also operates a secondary credit bureau: the National Consumer Telecommunications and Utilities Exchange (NCTUE). The Krebs On Security blog explained Equifax's role:

"The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE... there are four "exchanges" that feed into the NCTUE’s system: the NCTUE itself, something called "Centralized Credit Check Systems," the New York Data Exchange (NYDE), and the California Utility Exchange. According to a partner solutions page at Verizon, the NYDE is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts. The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax)... The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC)."

This surfaced after consumers with security freezes on their credit reports at the three major credit reporting agencies (Experian, Equifax, and TransUnion) found fraudulent mobile phone accounts opened in their names. This shouldn't have been possible, since security freezes prevent credit reporting agencies from selling consumers' credit reports to telecommunications companies, which typically perform credit checks before opening new accounts. So, the credit information must have come from somewhere else. It turns out, the source was the NCTUE.

Credit reporting agencies make money by selling consumers' credit reports to potential lenders. And credit reports from the NCTUE are easy for anyone to order:

"... the NCTUE makes it fairly easy to obtain any records they may have on Americans. Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address."

The Krebs on Security blog also explained that Equifax used an expired SSL certificate, which prevented the site from serving web pages securely. That was simply inexcusable, poor data security.

A quick check of the NCTUE page on the Better Business Bureau site found two negative reviews and 70 complaints -- mostly about negative credit inquiries and unresolved issues. A quick check of the NCTUE Terms of Use page found very thin usage and privacy policies lacking details, such as any mention of data sharing, cookies, or tracking. The lack of data-sharing language could indicate NCTUE will share or sell data to anyone: entities, companies, and government agencies. It also means there is no way to verify whether the NCTUE complies with its own policies. Not good.

The policy does contain plenty of language indicating that NCTUE is not liable for anything:

"... THE NCTUE IS NOT RESPONSIBLE FOR, AND EXPRESSLY DISCLAIM, ALL LIABILITY FOR, DAMAGES OF ANY KIND ARISING OUT OF USE, REFERENCE TO, OR RELIANCE ON ANY INFORMATION CONTAINED WITHIN THE SITE. All content located at or available from the NCTUE website is provided “as is,” and NCTUE makes no representations or warranties, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title or non-infringement of proprietary rights. Without limiting the foregoing, NCTUE makes no representation or warranty that content located on the NCTUE website is free from error or suitable for any purpose; nor that the use of such content will not infringe any third party copyrights, trademarks or other intellectual property rights.

Links to Third Party Websites: Although the NCTUE website may include links providing direct access to other Internet resources, including websites, NCTUE is not responsible for the accuracy or content of information contained in these sites."

Huh?! As is? The data NCTUE collected is being used for credit decisions. Reliability and accuracy matters. And, there are more concerns.

While at the NCTUE site, I briefly browsed the credit freeze information, which is hosted on an outsourced site, the Exchange Service Center (ESC). What's up with that? Why a separate site, and not a cohesive single site with a unified customer experience? This design gives the impression that the security freeze process was an afterthought.

Plus, the NCTUE and ESC sites present different policies (e.g., terms of use, privacy). Really? Why the complexity? Which policies rule? You'd think that the policies in both sites would be consistent and would mention each other, since consumers must use both sites to complete security freezes. That design seems haphazard. Not good.

There's more. Rather than standard web pages, the ESC site presents its policies in static Adobe PDF documents, which makes it difficult for users to follow links for more information. (Contrast those thin policies with the more comprehensive Privacy and Terms of Use policies by TransUnion.) Plus, one policy was old -- dated 2011. It seems the site hasn't been updated in seven years. What fresh hell is this? More haphazard design. Why the confusing user experience? Not good.

There's more. When placing a security freeze, the ESC site includes a drop-down menu asking consumers to pick an exchange (e.g., NCTUE, Centralized Credit Check System, California Utility Exchange, NYDE). Which menu option is the global security freeze? Is there a global option? The form page doesn't say, and it should. Why would a consumer select just one of the exchanges? Perhaps this is another slick attempt to limit the effectiveness of security freezes placed by consumers. Not good.

What can consumers make of this? First, the NCTUE site seems to be a slick way for Equifax to skirt the security freezes which consumers have placed upon their credit reports. Sounds like a definite end-run to me. Surprised? I'll bet. Angry? I'll bet, too. We consumers paid good money for security freezes on our credit reports.

Second, the combo NCTUE/ESC site seems like some legal, outsourcing ju-jitsu to avoid all liability, while still enjoying the revenues from credit-report sales. The site left me with the impression that its design, which hasn't kept pace with internet best practices over the years, was by a committee of attorneys focused upon serving their corporate clients' data collection and sharing needs while doing the absolute minimum required legally -- rather than a site focused upon the security needs of consumers. I can best describe the site using an old film-review phrase: a million monkeys with a million crayons would be hard pressed in a million years to create something this bad.

Third, credit reporting agencies get their data from a variety of sources. So, their business model is based upon data sharing. NCTUE seems designed to effectively do just that, regardless of consumers' security needs and wishes.

Fourth, this situation offers several reminders: a) just about anyone can set up and operate a credit reporting agency -- no special skills or expertise required; b) there are both national and regional credit reporting agencies; c) credit reports often contain errors; and d) credit reporting agencies historically have outsourced work, sometimes internationally -- for better or worse data security.

Fifth, now you know what criminals and fraudsters already know... how to skirt the security freezes on credit reports and gain access to consumers' sensitive information. The combo NCTUE/ESC site is definitely a high-value target for criminals.

My first impression of the NCTUE site: haphazard design making it difficult for consumers to use and to trust it. What do you think?


Oakland Law Mandates 'Technology Impact Reports' By Local Government Agencies Before Purchasing Surveillance Equipment

Popular tools used by law enforcement include stingrays (fake cellular phone towers) and automated license plate readers (ALPRs), both used to track the movements of persons. Historically, the technologies have often been deployed without notice to track both the bad guys (e.g., criminals and suspects) and innocent citizens.

To better balance the privacy needs of citizens versus the surveillance needs of law enforcement, some areas are implementing new laws. The East Bay Times reported about a new law in Oakland:

"... introduced at Tuesday’s city council meeting, creates a public approval process for surveillance technologies used by the city. The rules also lay a groundwork for the City Council to decide whether the benefits of using the technology outweigh the cost to people’s privacy. Berkeley and Davis have passed similar ordinances this year.

However, Oakland’s ordinance is unlike any other in the nation in that it requires any city department that wants to purchase or use the surveillance technology to submit a "technology impact report" to the city’s Privacy Advisory Commission, creating a “standardized public format” for technologies to be evaluated and approved... city departments must also submit a “surveillance use policy” to the Privacy Advisory Commission for consideration. The approved policy must be adopted by the City Council before the equipment is to be used..."

Reportedly, the city council will review the ordinance a second time before final passage.

The Northern California chapter of the American Civil Liberties Union (ACLU) discussed the problem, the need for transparency, and legislative actions:

"Public safety in the digital era must include transparency and accountability... the ACLU of California and a diverse coalition of civil rights and civil liberties groups support SB 1186, a bill that helps restore power at the local level and makes sure local voices are heard... the use of surveillance technology harms all Californians and disparately harms people of color, immigrants, and political activists... The Oakland Police Department concentrated their use of license plate readers in low income and minority neighborhoods... Across the state, residents are fighting to take back ownership of their neighborhoods... Earlier this year, Alameda, Culver City, and San Pablo rejected license plate reader proposals after hearing about the Immigration & Customs Enforcement (ICE) data [sharing] deal. Communities are enacting ordinances that require transparency, oversight, and accountability for all surveillance technologies. In 2016, Santa Clara County, California passed a groundbreaking ordinance that has been used to scrutinize multiple surveillance technologies in the past year... SB 1186 helps enhance public safety by safeguarding local power and ensuring transparency, accountability... SB 1186 covers the broad array of surveillance technologies used by police, including drones, social media surveillance software, and automated license plate readers. The bill also anticipates – and covers – AI-powered predictive policing systems on the rise today... Without oversight, the sensitive information collected by local governments about our private lives feeds databases that are ripe for abuse by the federal government. This is not a hypothetical threat – earlier this year, ICE announced it had obtained access to a nationwide database of location information collected using license plate readers – potentially sweeping in the 100+ California communities that use this technology. 
Many residents may not be aware their localities also share their information with fusion centers, federal-state intelligence warehouses that collect and disseminate surveillance data from all levels of government.

Statewide legislation can build on the nationwide Community Control Over Police Surveillance (CCOPS) movement, a reform effort spearheaded by 17 organizations, including the ACLU, that puts local residents and elected officials in charge of decisions about surveillance technology. If passed in its current form, SB 1186 would help protect Californians from intrusive, discriminatory, and unaccountable deployment of law enforcement surveillance technology."

Is there similar legislation in your state?


Twitter Advised Its Users To Change Their Passwords After Security Blunder

Yesterday, Twitter.com advised all of its users to change their passwords after a huge security blunder exposed users' passwords online in an unprotected format. The social networking service released a statement on May 3rd:

"We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password."

Security experts advise consumers not to use the same password at several sites or services. Repeated use of the same password makes it easy for criminals to hack into multiple sites or services.

The statement by Twitter.com also explained that it masks users' passwords:

"... through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
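Twitter's hashing description can be illustrated with a short sketch. Since bcrypt itself requires a third-party package, this sketch uses scrypt from Python's standard library, another deliberately slow, salted password hash; the function names here are illustrative, not Twitter's code.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # A slow, salted hash: like bcrypt, scrypt is designed to make
    # brute-forcing stolen digests expensive. Only this digest, never
    # the plaintext password, should reach storage or logs.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Validate credentials without the system ever revealing or
    # retaining the plaintext password.
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)
print(check_password("correct horse battery staple", salt, stored))  # True
print(check_password("wrong guess", salt, stored))                   # False
```

The blunder Twitter describes amounts to writing the plaintext password to a log before `hash_password` ever runs; the fix is ensuring only the digest is ever persisted.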

The good news: Twitter found the bug by itself. The not-so-good news: the statement was short on details. It did not disclose the fixes that will prevent this blunder from happening again. Nor did it say how many users were affected. Twitter has about 330 million users, so it seems that all users were affected.


How to Wrestle Your Data From Data Brokers, Silicon Valley — and Cambridge Analytica

[Editor's note: today's guest post, by reporters at ProPublica, discusses data brokers you may not know, the data collected and archived about consumers, and options for consumers to (re)gain as much privacy as possible. It is reprinted with permission.]

By Jeremy B. Merrill, ProPublica

Cambridge Analytica thinks that I’m a "Very Unlikely Republican." Another political data firm, ALC Digital, has concluded I’m a "Socially Conservative," Republican, "Boomer Voter." In fact, I’m a 27-year-old millennial with no set party allegiance.

For all the fanfare, the burgeoning field of mining our personal data remains an inexact art.

One thing is certain: My personal data, and likely yours, is in more hands than ever. Tech firms, data brokers and political consultants build profiles of what they know — or think they can reasonably guess — about your purchasing habits, personality, hobbies and even what political issues you care about.

You can find out what those companies know about you, but be prepared to be stubborn. Very stubborn. To demonstrate how this works, we’ve chosen a couple of representative companies from three major categories: data brokers, big tech firms and political data consultants.

Few of them make it easy. Some will show you on their websites, others will make you ask for your digital profile via the U.S. mail. And then there’s Cambridge Analytica, the controversial Trump campaign vendor that has come under intense fire in light of a report in the British newspaper The Observer and in The New York Times that the company used improperly obtained data from Facebook to help build voter profiles.

To find out what the chaps at the British data firm have on you, you’re going to need both stamps and a "cheque."

Once you see your data, you’ll have a much better understanding of how this shadowy corner of the new economy works. You’ll see what seemingly personal information they know about you … and you’ll probably have some hypotheses about where this data is coming from. You’ll also probably see some predictions about who you are that are hilariously wrong.

And if you do obtain your data from any of these companies, please let us know your thoughts at politicaldata@propublica.org. We won’t share or publish what you say (unless you tell us that it’s OK).

Cambridge Analytica and Other Political Consultants

Making statistically informed guesses about Americans’ political beliefs and pet issues is a common business these days, with dozens of firms selling data to candidates and issue groups about the purported leanings of individual American voters.

Few of these firms have to give you your data. But Cambridge Analytica is required to do so by an obscure European rule.

Cambridge Analytica:

Around the time of the 2016 election, Paul-Olivier Dehaye, a Belgian mathematician and founder of a website that helps people exercise their data protection rights called PersonalData.IO, approached me with an idea for a story. He flagged some of Cambridge Analytica’s claims about the power of its "psychographic" targeting capabilities and suggested that I demand my data from them.

So I sent off a request, following Dehaye’s coaching, and citing the UK Data Protection Act 1998, the British implementation of a little-known European Union data-protection law that grants individuals (even Americans) the right to see the data European companies compile about them.

It worked. I got back a spreadsheet of data about me. But it took months, cost ten pounds — and I had to give them a photo ID and two utility bills. Presumably they didn’t want my personal data falling into the wrong hands.

How You Can Request Your Data From Cambridge Analytica:

  1. Visit Cambridge Analytica’s website here and fill out this web form.
  2. After you submit the form, the page will immediately request that you email to data.compliance@cambridgeanalytica.org a photo ID and two copies of your utility bills or bank statements, to prove your identity. This page will also include the company’s bank account details.
  3. Find a way to send them 10 GBP. You can try wiring this from your bank, though it may cost you an additional $25 or so — or ask a friend in the UK to go to their bank and get a cashier’s check. Your American bank probably won’t let you write a GBP-denominated check. Two services I tried, Xoom and TransferWise, weren’t able to do it.
  4. Eventually, Cambridge Analytica will email you a small Excel spreadsheet of information and a letter. You might have to wait a few weeks. Celeste LeCompte, ProPublica’s vice president of business development, requested her data on March 27 and still hasn’t received it.

Because the company is based in the United Kingdom, it had no choice but to fulfill my request. In recent weeks, the firm has come under intense fire after The New York Times and the British paper The Observer disclosed that it had used improperly obtained data from Facebook to build profiles of American voters. Facebook told me that data about me was likely transmitted to Cambridge Analytica because a person with whom I am "friends" on the social network had taken the now-infamous "This Is Your Digital Life" quiz. For what it’s worth, my data shows no sign of anything derived from Facebook.

What You Might Get Back From Cambridge Analytica:

Cambridge Analytica had generated 13 data points about my views: 10 political issues, ranked by importance; two guesses at my partisan leanings (one blank); and a guess at whether I would turn out in the 2016 general election.

They told me that the lower the rank, the higher the predicted importance of the issue to me.

Alongside that data labeled "models" were two other types of data that are run-of-the-mill and widely used by political consultants. One sheet held "core data" — that is, personal info, sliced and diced a few different ways, perhaps to be used more easily as parameters for a statistical model. It included my address, my electoral district, the census tract I live in and my date of birth.

The spreadsheet included a few rows of "election returns" — previous elections in New York State in which I had voted. (Intriguingly, Cambridge Analytica missed that I had voted in 2015’s snoozefest of a vote-for-five-of-these-five judicial election. It also didn’t know about elections in which I had voted in North Carolina, where I lived before I lived in New York.)

ALC Digital

ALC Digital is another data broker, which says its "audiences are built from multi-sourced, verified information about an individual." Its data is distributed via Oracle Data Cloud, a service that lets advertisers target specific audiences of people — like, perhaps, people who are Boomer Voters and also Republicans.

The firm brags in an Oracle document posted online about how hard it is to avoid their data collection efforts, saying, "It has no cookies to erase and can’t be ‘cleared.’ ALC Real World Data is rooted in reality, and doesn’t rely on inferences or faulty models."

How You Can Request Your Data From ALC Digital:

Here’s how to find the predictions about your political beliefs in Oracle Data Cloud:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Click on the Partner Segments tab.
  3. Scroll on through until you find ALC Digital.

You may have to scroll for a while before you find it.

And not everyone appears to have data from ALC Digital, so don’t be shocked if you can’t find it. If you don’t, there may be other fascinating companies with data about who you are in your Oracle file.

What You Might Get Back From ALC Digital:

When I downloaded the data last year, it said I was "Socially Conservative," "Boomer Voter" — as well as a female voter and a tax reform supporter.

Recently, when I checked my data, those categories had disappeared entirely from my data. I had nothing from ALC Digital.

ALC Digital is not required to release this data. It is disclosed via the Oracle Data Cloud. Fran Green, the company’s president, said that Aristotle, a longtime political data company, “provides us with consumer data that populates these audiences.” She also said that “we do not claim to know people’s ‘beliefs.’”

Big Tech

Big tech firms like Google and Facebook tend to make their money by selling ads, so they build extensive profiles of their users’ interests and activities. They also depend on their users’ goodwill to keep us voluntarily giving them our locations, our browsing histories and plain ol’ lists of our friends and interests. (So far, these popular companies have not faced much regulation.) These firms make it easy to download the data that they keep on you.

Firms like Google and Facebook don’t sell your data — because it’s their competitive advantage. Google’s privacy page screams in 72-point type: "We do not sell your personal information to anyone." As websites that we visit frequently, they sell access to our attention, so companies that want to reach you in particular can do so with these companies’ sites or other sites that feature their ads.

Facebook

How You Can Request Your Data From Facebook:

You of course have to have a Facebook account and be logged in:

  1. Visit https://www.facebook.com/settings on your computer.
  2. Click the “Download a copy of your Facebook data” link.
  3. On the next page, click “Start My Archive.”
  4. Enter your password, then click “Start My Archive” again.
  5. You’ll get an email immediately, and another one saying “Your Facebook download is ready” when your data is ready to be downloaded. You’ll get a notification on Facebook, too. Mine took just a few minutes.
  6. Once you get that email, click the link, then click Download Archive. Then reenter your password, which will start a zip file downloading.
  7. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.” You’ll get a folder called something like “facebook-jeremybmerrill,” but, of course, with your username instead of mine.
  8. Open the folder and double-click “index.htm” to open it in your web browser.
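For readers comfortable with Python, steps 7 and 8 can be scripted. This is a hedged sketch: the archive name "facebook-example.zip" is a stand-in that the script creates for itself so it runs anywhere; substitute your real "facebook-(username).zip" download and delete the stand-in lines.

```python
import zipfile
from pathlib import Path

def extract_archive(zip_path):
    """Unzip a downloaded data archive and return its index page."""
    zip_path = Path(zip_path)
    dest = zip_path.with_suffix("")      # folder named after the zip
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)              # step 7: "unzipping"
    return dest / "index.htm"            # step 8: open this in a browser

# Stand-in archive so the example runs anywhere; replace these two
# lines with your real facebook-<username>.zip download.
with zipfile.ZipFile("facebook-example.zip", "w") as zf:
    zf.writestr("index.htm", "<html>your archive</html>")

index_page = extract_archive("facebook-example.zip")
print(index_page)  # pass this path to your web browser
```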

What You Might Get Back From Facebook

Facebook designed its archive to first show you your profile information. That’s all information you typed into Facebook and that you probably intended to be shared with your friends. It’s no surprise that Facebook knows what city I live in or what my AIM screen name was — I told Facebook those things so that my friends would know.

But it’s a bit of a surprise that they decided to feature a list of my ex-girlfriends — what they blandly termed "Previous Relationships" — so prominently.

As you dig deeper in your archive, you’ll find more information that you gave Facebook, but that you might not have expected the social network to keep hold of for years: if you’re me, that’s the Nickelback concert I apparently RSVPed to, posts about switching high schools and instant messages from my freshman year in college.

But finally, you’ll find the creepier information: what Facebook knows about you that you didn’t tell it, on the "Ads" page. You’ll find "Ads Topics" that Facebook decided you were interested in, like Housing, ESPN or the town of Ellijay, Georgia. And, you’ll find a list of advertisers who have obtained your contact information and uploaded it to Facebook, as part of a so-called Custom Audience of specific people to whom they want to show their ads.

You’ll find more of that creepy information on your Ads Preferences page. Despite Mark Zuckerberg telling Rep. Jerry McNerney, D-Calif., in a hearing earlier this month that “all of your information is included in your ‘download your information,’” my archive didn’t include that list of ad categories that can be used to target ads to me. (Some other types of information aren’t included in the download, like other people’s posts you’ve liked. Those are listed here, along with where to find them — which, for most, is in your Activity Log.)

This area may include Facebook’s guesses about who you are, boiled down from some of your activities. Most Americans will have a guess about their politics — Facebook says I’m a "moderate" about U.S. Politics — and some will have a guess about so-called "multicultural affinity," which Facebook insists is not a guess about your ethnicity, but rather what sorts of content "you are interested in or will respond well to." For instance, Facebook recently added that I have a "Multicultural Affinity: African American." (I’m white — though, because Facebook’s definition of "multicultural affinity" is so strange, it’s hard to tell if this is an error on Facebook’s part.)

Facebook also doesn’t include your browsing history — the subject of back-and-forths between Mark Zuckerberg and several members of Congress — it says it keeps that just long enough to boil it down into those “Ad Topics.”

For people without Facebook accounts, Facebook says to email datarequests@support.facebook.com or fill out an online form to download what Facebook knows about you. One puzzle here is how Facebook gathers data on people whose identities it may not know. It may know that a person using a phone from Atlanta, Georgia, has accessed a Facebook site and that the same person was last week in Austin, Texas, and before that Cincinnati, but it may not know that that person is me. It’s difficult, in principle, for the company to hand over the data it collects about logged-out users if it doesn’t know exactly who they are.
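To see why, here is an illustrative Python sketch (nothing here reflects Facebook's real systems) of how a tracker can cluster visits from a logged-out device using only an anonymous identifier such as a cookie, without ever learning a name:

```python
from collections import defaultdict

# Hypothetical events logged from logged-out visitors. The "device"
# value stands in for a cookie or advertising ID; no name is attached.
events = [
    {"device": "abc123", "city": "Cincinnati", "site": "news.example"},
    {"device": "abc123", "city": "Austin",     "site": "shop.example"},
    {"device": "abc123", "city": "Atlanta",    "site": "blog.example"},
    {"device": "xyz789", "city": "Boston",     "site": "news.example"},
]

# Group every event by its anonymous device identifier.
profiles = defaultdict(list)
for e in events:
    profiles[e["device"]].append((e["city"], e["site"]))

# The tracker now holds a travel-and-browsing trail for "abc123"
# without knowing that abc123 is you, which is exactly why handing
# the data back to the right person is hard.
print(len(profiles["abc123"]))  # three visits linked to one anonymous device
```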

Google

Like Facebook, Google will give you a zip archive of your data. Google’s can be much bigger, because you might have stored gigabytes of files in Google Drive or years of emails in Gmail.

But like Facebook, Google does not provide its guesses about your interests, which it uses to target ads. Those guesses are available elsewhere.

How You Can Request Your Data From Google:

  1. Visit https://takeout.google.com/settings/takeout/ to use Google’s cutely named Takeout service.
  2. You’ll have to pick which data you want to download and examine. You should definitely select My Activity, Location History and Searches. You may not want to download gigabytes of emails, if you use Gmail, since that uses a lot of space and may take a while. (That’s also information you shouldn’t be surprised that Google keeps — you left it with Gmail so that you could use Google’s search expertise to hold on to your emails.)
  3. Google will present you with a few options for how to get your archive. The defaults are fine.
  4. Within a few hours, you should get an email with the subject "Your Google data archive is ready." Click Download Archive and log in again. That should start the download of a file named something like "takeout-20180412T193535.zip."
  5. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.”
  6. You’ll get a folder called Takeout. Open the file inside it called "index.html" in your web browser to explore your archive.

What You Might Get Back From Google:

Once you open the index.html file, you’ll see icons for the data you chose in step 2. Try exploring "Ads" under "My Activity" — you’ll see a list of times you saw Google Ads, including on apps on your phone.

Google also includes your search history, under "Searches" — in my case, going back to 2013. Google knows what I had forgotten: I Googled a bunch of dinosaurs around Valentine’s Day that year… And it’s not just web searches: the Sound Search history reminded me that at some point, I used that service to identify Natalie Imbruglia’s song "Torn."
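If you choose JSON as the export format, that search history can be summarized with a few lines of Python. Treat this as an illustrative sketch: the "title" and "time" field names below match recent Takeout exports but are not guaranteed, and the sample data is made up.

```python
import json
from collections import Counter

# Made-up sample in the shape of a Takeout "My Activity" JSON export;
# a real export is a large file on disk rather than an inline string.
sample = json.loads("""[
  {"title": "Searched for velociraptor", "time": "2013-02-14T18:02:11Z"},
  {"title": "Searched for triceratops",  "time": "2013-02-14T18:05:40Z"},
  {"title": "Searched for tax forms",    "time": "2018-04-02T09:12:03Z"}
]""")

# Count searches per year: the first four characters of an ISO 8601
# timestamp are the year.
searches_per_year = Counter(item["time"][:4] for item in sample)
print(searches_per_year)  # Counter({'2013': 2, '2018': 1})
```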

Android phone users might want to check the "Android" folder: Google keeps a list of each app you’ve used on your phone.

Most of the data contained here are records of ways you’ve directly interacted with Google — and the company really does use those records to improve how its services work. I’m glad to see my searches auto-completed, for instance.

But the company also creates data about you: Visit the company’s Ads Settings page to see some of the “topics” Google guesses you’re interested in, and which it uses to personalize the ads you see. Those topics are fairly general — it knows I’m interested in “Politics” — but the company says it has more granular classifications that it doesn’t include on the list. Those more granular, hidden classifications cover various topics, from sports to vacations to politics, where Google does generate a guess about whether some people are politically “left-leaning” or “right-leaning.”

Data Brokers

Here’s who really does sell your data: data brokers like the credit reporting agency Experian and a firm named Epsilon.

These sometimes-shady firms are middlemen who buy your data from tracking firms, survey marketers and retailers, slice and dice the data into “segments,” then sell those on to advertisers.

Experian

Experian is best known as a credit reporting firm, but your credit cards aren’t all they keep track of. They told me that they “firmly believe people should be made aware of how their data is being used” — so if you print and mail them a form, they’ll tell you what data they have on you.

“Educated consumers,” they said, “are better equipped to be effective, successful participants in a world that increasingly relies on the exchange of information to efficiently deliver the products and services consumers demand.”

How You Can Request Your Data From Experian:

  1. Visit Experian’s Marketing Data Request site and print the Marketing Data Report Request form.
  2. Print a copy of your ID and proof of address.
  3. Mail it all to Experian Marketing Services, P.O. Box 40, Allen, TX 75013.
  4. Wait for them to mail you something back.

What You Might Get Back From Experian:

Expect to wait a while. I’ve been waiting almost a month.

They also come up with a guess about your political views that’s integrated with Facebook — our Facebook Political Ad Collector project has found that many political candidates use Experian’s data to target their Facebook ads to likely supporters.

You should expect to find a guess about your political views that’d be useful to those candidates — as well as categories derived from your purchasing data.

Experian told me they generate the data they have about you from a long list of sources, including public records and “historical catalog purchase information” — as well as calculating it from predictive models.

Epsilon

How You Can Request Your Data From Epsilon:

  1. Visit Epsilon’s Marketing Data Summary Request form.
  2. After you enter your name and address, Epsilon will ask some of those identity-verification questions that quiz you about your old addresses and cars. If your identity can’t be verified with those, Epsilon will ask you to mail in a form.
  3. Wait for Epsilon to mail you your data; it took about a week for me.

What You Might Get Back From Epsilon:

Epsilon has information on “demographics” and “lifestyle interests” — at the household level. It also includes a list of “household purchases.”

It also has data that political candidates use to target their Facebook ads, including Randy Bryce, a Wisconsin Democrat who’s seeking his party’s nomination to run for retiring Speaker Paul Ryan’s seat, and Rep. Tulsi Gabbard, D-Hawaii.

In my case, Epsilon knows I buy clothes, books and home office supplies, among other things — but isn’t any more specific. They didn’t tell me what political beliefs they believe I hold. The company didn’t respond to a request for comment.

Oracle

Oracle’s Data Cloud aggregates data about you from Oracle’s own services, along with so-called third-party data from other companies.

How You Can Request Your Data From Oracle:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Explore each tab, from “Basic Info” to “Hobbies & Interests” and “Partner Segments.”

It’s no fun scrolling through all those pages: I have 84 pages of four pieces of data each.

You can’t search. All the text is actually images of text. Oracle declined to say why it chose to make its site so hard to use.

What You Might Get Back From Oracle:

My Oracle profile includes nearly 1,500 data points, covering all aspects of my life, from my age to my car to how old my children are to whether I buy eggs. These profiles can even say if you’re likely to dress your pet in a costume for Halloween. But many of them are off-base or contradictory.

Many companies in Oracle’s data, besides ALC Digital, offer guesses about my political views: Data from one company uploaded by AcquireWeb says that my political affiliations are as a Democrat and an Independent … but also that I’m a “Mild Republican.” Another company, an Oracle subsidiary called AddThis, says that I’m a “Liberal.” Cuebiq, which calls itself a “location intelligence” company, says I’m in a subset of “Democrats” called “Liberal Professions.”

If an advertiser wants to show an ad to Spring Break Enthusiasts, Oracle can enable that. I’m apparently a Spring Break Enthusiast. Do I buy eggs? I sure do. Data on Oracle’s site associated with AcquireWeb says I’m a cat owner …

But it also “knows” I’m a dog owner, which I’m not.

Al Gadbut, the CEO of AcquireWeb, explained that the guesses associated with his company weren’t based on my personal data, but rather the tendencies of people in my geographical area — hence the seemingly contradictory political guesses. He said his firm doesn’t generate the data, but rather uploaded it on behalf of other companies. Cuebiq’s guess was a “probabilistic inference” they drew from location data submitted to them by some app on my phone. Valentina Marastoni-Bieser, Cuebiq’s senior vice president of marketing, wouldn’t tell me which app it was, though.

Data for sale here includes a long list of what TV shows I — supposedly — watch.

But it’s not all wrong. AddThis can tell that I’m “Young & Hip.”

Takeaways:

The above list is just a sampling of the firms that collect your data and try to draw conclusions about who you are — not just sites you visit like Facebook and controversial firms like Cambridge Analytica.

You can make some guesses as to where this data comes from — especially the more granular consumer data from Oracle. For each data point, it’s worth considering: Who’d be in a position to sell a list of what TV shows I watch, or, at least, a list of what TV shows people demographically like me watch? Who’d be in a position to sell a list of what groceries I, or people similar to me in my area, buy? Some of those companies — companies that you’re likely paying, and for whom the internet adage that “if you’re not paying, you’re the product” doesn’t hold — are likely selling data about you without your knowledge. Other data points, like the location data used by Cuebiq, can come from any number of apps or websites, so it may be difficult to figure out exactly which one has passed it on.

Companies like Google and Facebook often say that they’ll let you “correct” the data that they hold on you — tacitly acknowledging that they sometimes get it wrong. And if receiving relevant ads is not important to you, they’ll let you opt out entirely — or, presumably, “correct” your data to something false.

An upcoming European Union rule called the General Data Protection Regulation portends a dramatic change to how data is collected and used on the web — if only for Europeans. No such law seems likely to be passed in the U.S. in the near future.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Many People Are Concerned About Facebook. Do Any Other Tech Companies Pose Privacy Threats?

The massive data breach involving Facebook and Cambridge Analytica focused attention and privacy concerns on the social networking giant. Reports about extensive tracking of users and non-users, testimony by its CEO before the U.S. Congress, and online tools allegedly allowing advertisers to violate federal housing laws have also focused attention on Facebook.

Are there any other tech or advertising companies which consumers should have privacy concerns about? What other companies collect massive amounts of information about consumers? It seems wise to look beyond Facebook to avoid missing significant threats.

To answer these questions, the Wall Street Journal compared Facebook and Google:

"... Alphabet Inc.’s Google is a far bigger threat by many measures: the volume of information it gathers, the reach of its tracking and the time people spend on its sites and apps... It’s likely that Google has shadow profiles on at least as many people as Facebook does, says Chandler Givens, chief executive of TrackOff, which develops software to fight identity theft. Google allows everyone, whether they have a Google account or not, to opt out of its ad targeting. Yet, like Facebook, it continues to gather your data... Google Analytics is far and away the web’s most dominant analytics platform. Used on the sites of about half of the biggest companies in the U.S., it has a total reach of 30 million to 50 million sites. Google Analytics tracks you whether or not you are logged in... Google uses, among other things, our browsing and search history, apps we’ve installed, demographics such as age and gender and, from its own analytics and other sources, where we’ve shopped in the real world. Google says it doesn’t use information from “sensitive categories” such as race, religion, sexual orientation or health..."

There's plenty more, so read the entire WSJ article. A good review worthy of further discussion.

However, more companies pose privacy threats. Equifax, one of three major credit reporting agencies, easily makes my list. Its massive data breach affected half the population in the USA, plus persons worldwide. An investigation discovered several data security failures at Equifax.

Also on my list would be the U.S. Federal Communications Commission (FCC). Using some "light touch" legal ju-jitsu and vague promises of enabling infrastructure investments, the Republican-majority Commissioners and Trump appointee Ajit Pai at the FCC revoked broadband privacy protections for consumers last year... and punted broadband oversight responsibility to the U.S. Federal Trade Commission (FTC). This allowed corporate internet service providers (ISPs) to freely track and collect sensitive data about internet users without requiring notices or opt-out mechanisms.

Uber also makes my list, given its massive data breach affecting 57 million persons. Earlier this month, the FTC announced a revised settlement agreement where Uber:

"... failed to disclose a significant breach of consumer data that occurred in 2016 -- in the midst of the FTC’s investigation that led to the August 2017 settlement announcement... the revised settlement could subject Uber to civil penalties if it fails to notify the FTC of certain future incidents involving unauthorized access of consumer information... In announcing the original proposed settlement with Uber in August 2017, the FTC charged that the company had failed to live up to its claims that it closely monitored employee access to rider and driver data and that it deployed reasonable measures to secure personal information stored on a third-party cloud provider’s servers.

In the revised complaint, the FTC alleges that Uber learned in November 2016 that intruders had again accessed consumer data the company stored on its third-party cloud provider’s servers by using an access key an Uber engineer had posted on a code-sharing website... the intruders used the access key to download from Uber’s cloud storage unencrypted files that contained more than 25 million names and email addresses, 22 million names and mobile phone numbers, and 600,000 names and driver’s license numbers of U.S. Uber drivers and riders... Uber paid the intruders $100,000 through its third-party “bug bounty” program and failed to disclose the breach to consumers or the Commission until November 2017... the new provisions in the revised proposed order include requirements for Uber to submit to the Commission all the reports from the required third-party audits of Uber’s privacy program rather than only the initial such report..."

Yes, Wells Fargo bank makes my list, too. This blog post explains why. Who is on your list of the biggest privacy threats to consumers?


The Brave Web Browser: A New Tool For Consumers Wanting Online Privacy

After the U.S. Federal Communications Commission (FCC), led by Trump appointee Ajit Pai, repealed both broadband privacy and net neutrality protections last year, and after details emerged about the tracking of both users and non-users by Facebook, many consumers have sought tools to regain their online privacy. One popular approach has been installing ad-blocking software with existing web browsers to both suppress online ads and disable the tracking mechanisms embedded in online advertisements and web sites.

What if a web browser came with ad-blocking software already built in? If that's what you seek, then the new Brave web browser is worth consideration. According to its website:

"Brave blocks ads and trackers by default so you browse faster and safer. You can add ad blocking extensions to your existing browser, but it’s complicated and they often conflict with one another because browser companies don't test them. Worse, the leading ad blockers still allow some ads and all trackers."

Other benefits of this new, open-source browser:

"Brave loads major news sites 2 to 8 times faster than Chrome and Safari on mobile. And Brave is 2 times faster than Chrome on desktop."

You can read details about speed tests at the Brave site. Reportedly, this new browser already has about 2 million users. Brave was started by Brendan Eich, creator of the JavaScript programming language and former CEO of Mozilla. So, he knows what he is doing.

What also makes this new browser unique is its smart, innovative use of blockchain, the technology behind bitcoin. CNet explained that Brave soon will:

"... give cryptocurrency-like payment tokens to anyone using the ad-blocking web browser, a move that won't let you line your own pockets but that will make it easier to fund the websites you visit. Brave developed the Basic Attention Token (BAT) as an alternative to regular money for the payments that flow from advertiser to website publishers. Brave plans to use BAT more broadly, though, for example also sending a portion of advertising revenue to you if you're using Brave and letting you spend BAT for premium content like news articles that otherwise would be behind a subscription paywall.

Most of that is in the future, though. Today, Brave can send BAT to website publishers, YouTubers and Twitch videogame streamers, all of whom can convert that BAT into ordinary money once they're verified. You can buy BAT on your own, but Brave has given away millions of dollars' worth through a few promotions. The next phase of the plan, though, is just to automatically lavish BAT on anyone using Brave, so you won't have to fret that you missed a promotional giveaway... The BAT giveaway plan is an important new phase in Brave's effort to salvage what's good about advertising on the internet -- free access to useful or entertaining services like Facebook, Google search and YouTube -- without downsides like privacy invasion and the sorts of political manipulations that Facebook partner Cambridge Analytica tried to enable."

To summarize, Brave will use blockchain as a measurement tool, not as real money. Smart. Plus, Brave pursues a new business model where advertisers still get paid, browser users get paid, and, most importantly, consumers don't have to divulge massive amounts of sensitive, personal information in order to view content. (Facebook and Google executives: are you paying attention?) This seems like a far better balance of privacy versus tracking for advertising.
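The attention-based idea can be sketched with toy numbers. This is an illustration of the concept only, not Brave's actual accounting; the sites and figures below are invented:

```python
# Toy illustration of attention-based payouts: a user's monthly BAT
# contribution is split among the sites they visited, in proportion
# to the time ("attention") spent on each.
attention_seconds = {
    "washingtonpost.com": 1200,
    "theguardian.com":     600,
    "marketwatch.com":     200,
}
monthly_bat = 10.0

total = sum(attention_seconds.values())
payouts = {site: round(monthly_bat * secs / total, 2)
           for site, secs in attention_seconds.items()}
print(payouts)  # {'washingtonpost.com': 6.0, 'theguardian.com': 3.0, 'marketwatch.com': 1.0}
```

Note that no personal data needs to leave the browser for this arithmetic to work, which is the point of the model.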

Skeptical? CNet also reported that Brave started:

"... in 2017 with an initial coin offering (ICO). Enough people were convinced of BAT's value that they funded Brave by buying $36 million worth of BAT in about 30 seconds. About 300 million of the tokens are reserved for a "user growth pool" to attract people to Brave and its BAT-based payment system for online ads. That's the source of the supply Brave plans to release to Brave users.

Today, more than 12,000 publishers have verified themselves for BAT payments, the company said. That includes more than 3,300 websites, 8,800 YouTube creators and nearly 350 people streaming video games on Amazon's Twitch site. Notable verified media sites include The Washington Post, the Guardian, and Dow Jones Media Group, a Dow Jones subsidiary that operates Barron's and MarketWatch."

Last week, Brave announced a partnership with Dow Jones Media Group where it:

"... will provide access to premium content to a limited number of users who download the Brave browser on a first-come, first-serve basis. The available content set features full access to Barrons.com or a premium MarketWatch newsletter..."

Plus, Brave and DuckDuckGo have collaborated to enable private search within the private tabs of the Brave browser. So, consumers can add the Brave browser to their list of tools for online privacy.

What are your opinions? If you use the Brave browser, share your experiences below.


How Facebook Tracks Its Users, And Non-Users, Around the Internet

Many Facebook users wrongly believe that the social networking service doesn't track them around the internet when they aren't signed in. Also, many non-users of Facebook wrongly believe that they are not tracked.

Earlier this month, Consumer Reports explained the tracking:

"As you travel through the web, you’re likely to encounter Facebook Like or Share buttons, which the company calls Social Plugins, on all sorts of pages, from news outlets to shopping sites. Click on a Like button and you can see the number on the page’s counter increase by one; click on a Share button and a box opens up to let you post a link to your Facebook account.

But that’s just what’s happening on the surface. "If those buttons are on the page, regardless of whether you touch them or not, Facebook is collecting data," said Casey Oppenheim, co-founder of data security firm Disconnect."

This blog discussed social plugins back in 2010. However, the tracking includes more technologies:

"... every web page contains little bits of code that request the pictures, videos, and text that browsers need to display each item on the page. These requests typically go out to a wide swath of corporate servers—including Facebook—in addition to the website’s owner. And such requests can transmit data about the site you’re on, the browser you are using, and more. Useful data gets sent to Facebook whether you click on one of its buttons or not. If you click, Facebook finds out about that, too. And it learns a bit more about your interests.

In addition to the buttons, many websites also incorporate a Facebook Pixel, a tiny, transparent image file the size of just one of the millions of pixels on a typical computer screen. The web page makes a request for a Facebook Pixel, just as it would request a Like button. No user will ever notice the picture, but the request to get it is packaged with information... Facebook explains what data can be collected using a Pixel, such as products you’ve clicked on or added to a shopping cart, in its documentation for advertisers. Web developers can control what data is collected and when it is transmitted... Even if you’re not logged in, the company can still associate the data with your IP address and all the websites you’ve been to that contain Facebook code."
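A toy Python sketch (hypothetical names, not Facebook's actual code) makes the mechanics concrete: even a single request for an invisible one-pixel image hands the server several identifying details.

```python
# Illustrative only: what the server behind a tracking pixel learns
# from one image request. Real pixels also set and read cookies to
# give each browser a persistent ID.

def record_pixel_hit(client_ip, headers):
    """Return what one request for a 1x1 image reveals to a tracker."""
    return {
        "ip": client_ip,                       # ties logged-out visits together
        "page": headers.get("Referer"),        # the page that embedded the pixel
        "browser": headers.get("User-Agent"),  # device and browser details
    }

hit = record_pixel_hit(
    "203.0.113.7",
    {"Referer": "https://example-shop.com/cart",
     "User-Agent": "Mozilla/5.0"},
)
print(hit["page"])  # prints: https://example-shop.com/cart
```

The "Referer" header is what lets the tracker know which shopping cart you were looking at, even though you never clicked anything.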

The article also explains "re-targeting," and how consumers who don't purchase anything at an online retail site will later see advertisements — around the internet and not solely on the Facebook site — about the items they viewed but didn't purchase. Then, there is the database Facebook assembles:

"In materials written for its advertisers, Facebook explains that it sorts consumers into a wide variety of buckets based on factors such as age, gender, language, and geographic location. Facebook also sorts its users based on their online activities—from buying dog food, to reading recipes, to tagging images of kitchen remodeling projects, to using particular mobile devices. The company explains that it can even analyze its database to build “look-alike” audiences that are similar... Facebook can show ads to consumers on other websites and apps as well through the company’s Audience Network."

So, several technologies are used to track both Facebook users and non-users, and assemble a robust, descriptive database. And, some website operators collaborate to facilitate the tracking, which is invisible to most users. Neat, eh?

Like it or not, internet users are automatically included in the tracking and data collection. Can you opt out? Consumer Reports also warns:

"The biggest tech companies don’t give you strong tools for opting out of data collection, though. For instance, privacy settings may let you control whether you see targeted ads, but that doesn’t affect whether a company collects and stores information about you."

Given this, one can conclude that Facebook is really a massive advertising network masquerading as a social networking service.

To minimize the tracking, consumers can: disable the Facebook API platform on their Facebook accounts, use the new tools (e.g., see these step-by-step instructions) by Facebook to review and disable the apps with access to their data, use ad-blocking software (e.g., Adblock Plus, Ghostery), use the opt-out mechanisms offered by the major data brokers, use the OptOutPrescreen.com site to stop pre-approved credit offers, and use VPN software and services.

If you use the Firefox web browser, configure it for Private Browsing and install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? Several web browsers offer Incognito Mode. And, you might try the Privacy Badger add-on instead. I've used it happily for years.

To combat "canvas fingerprinting" (e.g., tracking users by identifying the unique attributes of their computer, browser, and software), security experts have advised consumers to use different web browsers. For example, you'd use one browser only for online banking, and a different web browser for surfing the internet. However, this security method may not work much longer given the rise of cross-browser fingerprinting.
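An illustrative Python sketch shows why cross-browser fingerprinting works: many of the attributes that feed a fingerprint (operating system, screen size, time zone, installed fonts) belong to the machine, not the browser, so hashing them yields the same identifier no matter which browser you use. Real fingerprinting runs as JavaScript in the page; this is only a conceptual model with made-up values.

```python
import hashlib

def fingerprint(attrs):
    """Hash a dict of machine/browser attributes into one short ID."""
    # Canonical ordering makes the hash stable regardless of dict order.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Machine-level attributes are identical no matter which browser
# requests the page, so the resulting ID matches across browsers.
machine = {
    "os": "macOS 10.13",
    "screen": "2560x1600",
    "timezone": "America/New_York",
    "fonts": "Arial,Helvetica,Times",
}
print(fingerprint(machine))  # same ID whether computed via Firefox or Chrome
```

Each attribute alone narrows you down only a little, but combined they are often enough to single out one machine among millions.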

It seems that an arms race is underway between software for users to maintain privacy online versus technologies by advertisers to defeat users' privacy. Would Facebook and its affiliates/partners use cross-browser fingerprinting? My guess: yes it would, just like any other advertising network.

What do you think?


Apple Computer And Facebook Executives Exchange Criticisms

Chief executives at Apple Computer and Facebook recently exchanged criticisms. During a lengthy interview by Recode's Kara Swisher and MSNBC’s Chris Hayes, Apple CEO Tim Cook responded to questions about Facebook's recent data security and privacy incident. The interview was conducted in Chicago, Illinois on Tuesday, March 27. It was broadcast on MSNBC on Friday, April 6, 2018. The relevant section of the interview:

"Hayes: We are back with Apple CEO Tim Cook. In the wake of the news about data scraping by Cambridge Analytica and Facebook, you had this to say recently, and I thought it was quite interesting. You said, "It’s clear to me that something, some large profound change, is needed. I’m personally not a big fan of regulation because sometimes regulation can have unexpected consequences to it. However, I think this certain situation is so dire, has become so large, that probably some well-crafted regulation is necessary." What’d you mean?

Cook: Yeah. Look, we’ve never believed that these detailed profiles of people — that has incredibly deep personal information that is patched together from several sources — should exist. That the connection of all of these dots, that you could use them in such devious ways if someone wanted to do that, that this was one of the things that were possible in life but shouldn’t exist.

Swisher: Right.

Cook: Shouldn’t be allowed to exist. And so I think the best regulation is no regulation, is self regulation. That is the best regulation, because regulation can have unexpected consequences, right? However, I think we’re beyond that here, and I do think that it’s time for a set of people to think deeply about what can be done here.

Hayes: Now, the cynic in me says, you’ve got other tech companies that are much more dependent on that kind of thing than Apple is. And so, yes, you want regulation here because that would essentially be a comparative advantage, that if regulation were to come in on this privacy question, the people it’s going to hit harder aren’t Apple. It’s places like Facebook and Google.

Cook: Well, the skeptic in you would be wrong. (laughter) The truth is we could make a ton of money if we monetized our customer. If our customer was our product, we could make a ton of money. We’ve elected not to do that. (applause) Because we don’t... our products are iPhones and iPads and Macs and HomePods and the Watch, etc., and if we can convince you to buy one, we’ll make a little bit of money, right? But you are not our product."

The comments about regulation are relevant since Mr. Zuckerberg will testify before Congress this week about Facebook's privacy and data security incident involving Cambridge Analytica. Mr. Cook's comments highlight the radically different business models.

Mr. Cook's comments didn't sit well with Mr. Zuckerberg. Vox's Ezra Klein interviewed Zuckerberg on Monday, April 2. The relevant portion of that interview:

"Ezra Klein: One of the things that has been coming up a lot in the conversation is whether the business model of monetizing user attention is what is letting in a lot of these problems. Tim Cook, the CEO of Apple, gave an interview the other day and he was asked what he would do if he was in your shoes. He said, “I wouldn’t be in this situation,” and argued that Apple sells products to users, it doesn’t sell users to advertisers, and so it’s a sounder business model that doesn’t open itself to these problems.

Do you think part of the problem here is the business model where attention ends up dominating above all else, and so anything that can engage has powerful value within the ecosystem?

Mark Zuckerberg: You know, I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.

That doesn’t mean that we’re not primarily focused on serving people. I think probably to the dissatisfaction of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.

But if you want to build a service which is not just serving rich people, then you need to have something that people can afford. I thought Jeff Bezos had an excellent saying on this in one of his Kindle launches a number of years back. He said, “There are companies that work hard to charge you more, and there are companies that work hard to charge you less.” And at Facebook, we are squarely in the camp of the companies that work hard to charge you less and provide a free service that everyone can use.

I don’t think at all that that means that we don’t care about people. To the contrary, I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me."

What to make of all this? While Mr. Zuckerberg is entitled to his opinions, an old saying seems to apply: people in glass houses shouldn't throw stones.

There seems to be no question that Facebook built a platform which collected users' intimate and sensitive information, tracked users around the internet, allowed "advertisers" to collect information about both users who interacted with a quiz app and those users' friends (without the friends' knowledge), allowed "advertisers" to target groups of users (regardless of the law and/or consequences), and made it easier for "advertisers" to combine the data collected with information from other sources. You may remember Facebook's "friction-less sharing" program in 2011, where apps automatically posted content in users' timelines without users' active involvement. And Facebook has a history of convoluted, often confusing interfaces for users to change their privacy settings.

You may remember that it was Apple which fought to protect its customers' sensitive information by resisting demands from federal law enforcement officials to build back-door hacks into its devices. I don't think Facebook can make a similar claim about protecting users' information. Actions speak louder than words.

Nobody forced Facebook to build the platform it built. Its executives made choices. And now, Mr. Zuckerberg is apologizing (again) for his company's behavior. You may remember Mr. Zuckerberg's admission of problems, and his promises to do better, back in January. Facebook COO Sheryl Sandberg also apologized last week for the executive failures in 2015. You might call it the #Facebookapologytour.

Mr. Zuckerberg's "an advertising-supported model is the only rational model" comment deserves attention. The only model? Mr. Zuckerberg and Facebook made the decision not to charge monthly fees. Would some users pay a monthly fee for guaranteed privacy? I imagine many users would gladly pay. I would. (An Apple co-founder is willing to pay, too.) It seems a more accurate statement would be: an advertising-supported model is the profit-maximizing model.

Also, Mr. Zuckerberg's "advertising-supported" description of his company's business model seems disingenuous. It gives the impression that traditional advertisers pay money to passively display ads, while the reality is much more involved. Far more than traditional advertisers used the social networking service's sophisticated software tools (e.g., Facebook's API platform) to target groups of users and then collect data about those users and their connected friends.

This makes one wonder how many other companies like Cambridge Analytica have harvested information -- either directly or indirectly via intermediaries. Facebook has suspended the account of Cubeyou, another alleged data harvester, while it investigates.

If there are more companies and Facebook executives know it, then they must admit it. Its March 21st press release, which promised to investigate all apps that had access to large amounts of information and to conduct full audits of any apps with suspicious activity, suggests that Facebook doesn't know. I'm not sure which is worse: knowing and not saying, or not knowing.

According to news reports, Cambridge Analytica paid sizable amounts - US $0.75 to $5.00 per voter - for profiles crafted from Facebook users' information. Do the math... that could be anywhere from $1.5 million to $10 million, allegedly based upon 2 million users from 11 states: Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North Carolina, Oregon, South Carolina, and West Virginia. Nobody pays that amount of money without expecting satisfactory results.
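The back-of-the-envelope math is straightforward. A minimal sketch, using only the reported figures above (the per-voter rates and the 2 million voter count come from those news reports, not from any confirmed accounting):

```python
# Rough range of what Cambridge Analytica reportedly paid for voter
# profiles: $0.75 to $5.00 per voter, across roughly 2 million voters.
voters = 2_000_000
low_rate, high_rate = 0.75, 5.00

low_total = voters * low_rate    # $1.5 million
high_total = voters * high_rate  # $10 million

print(f"${low_total:,.0f} to ${high_total:,.0f}")
# prints: $1,500,000 to $10,000,000
```

Even at the low end of the range, that is real money spent on harvested profiles.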

Later today, Facebook will inform users whose information may have been harvested by Cambridge Analytica. What are your opinions?


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.
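The per-violation, per-day structure of the 2011 settlement explains how estimates reach the billion-dollar scale Vladeck describes. A minimal illustrative sketch, assuming the $16,000 figure from the settlement; how the FTC would actually count "violations" is a contested legal question not modeled here:

```python
# How per-violation, per-day fines compound under the 2011 consent
# decree's $16,000 figure (illustrative; violation counts are assumed).
FINE_PER_VIOLATION_PER_DAY = 16_000

def total_fine(violations: int, days: int) -> int:
    """Fine accrued by `violations` counted violations persisting for `days` days."""
    return FINE_PER_VIOLATION_PER_DAY * violations * days

print(total_fine(1, 365))    # a single violation for one year: 5840000
print(total_fine(100, 625))  # 100 violations for ~21 months: 1000000000
```

A single violation left uncorrected for a year approaches $6 million; a modest count of violations persisting for many months reaches the $1 billion scale.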

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said that a similar enforcement arm that would work with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — or about 50 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)
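The amplification implied by those figures is worth spelling out. A quick sketch of the arithmetic, using only the numbers above (270,000 quiz takers, roughly 50 million people affected):

```python
# Friend-graph amplification of the quiz app's reach: each quiz taker
# exposed not just their own data but their friends' as well.
quiz_takers = 270_000
people_affected = 50_000_000

avg_friends_exposed = people_affected / quiz_takers
print(round(avg_friends_exposed))  # ~185 additional people per quiz taker
```

In other words, each consenting user exposed the data of roughly 185 people who never consented at all.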

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is that a company’s terms of service contain a blanket statement of consent that users never read or agree to.

James Grimmelman, a law professor and computer scientist, argued in a 2015 paper that the technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelman said in an interview, “we should ask for appropriate supervisions for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook.com has dominated the news during the past three weeks. The news media have reported about many issues, but there are more -- whether or not you use Facebook. Things began in mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."

So, data that should have been deleted wasn't. Then, Facebook relied upon certifications from entities that had lied previously. Not good. Then, Facebook posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether or not the data which the perpetrators said was destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder what other developers - corporate and academic users alike - have violated Facebook's rules by sharing sensitive members' data they shouldn't have.

Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little, too late. Facebook needs to do more... perhaps revise its Terms Of Use to include large fines for violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure members' sensitive information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics with political campaigns, it seems wise to assume that the practice will continue with improvements. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. I don't. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions; who gets what and where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect the users' data it collects. Some of its privacy snafus: the settlement of a lawsuit over alleged privacy abuses by its Beacon advertising program, changing members' ad settings without notice or consent, an advertising platform that allegedly facilitates discrimination against older workers, health and privacy concerns about a new service for children ages 6 to 13, transparency concerns about political ads, and new lawsuits over the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now we have yet another apology.

In a press release this afternoon, Facebook revised upward the number of persons affected by the Facebook/Cambridge Analytica breach from 50 million to 87 million. Most, about 70.6 million, are in the United States. The breakdown by country:

Number of affected persons by country in the Facebook-Cambridge Analytica breach.

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too, especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg, the CEO of Facebook, will testify before the U.S. Congress on April 11th. His upcoming testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to what's available in the United Kingdom. And there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation"? Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, it refuses to provide protections equal to those in the United Kingdom, or if it backtracks later on its promises. Thoughts? Comments?


The 'CLOUD Act' - What It Is And What You Need To Know

Chances are, you probably have not heard of the "CLOUD Act." I hadn't heard about it until recently. A draft of the legislation is available on the website of U.S. Senator Orrin Hatch (Republican - Utah).

Many people who already use cloud services to store and back up data might assume that if it has to do with the cloud, it must be good. Such an assumption would be foolish. The bill's full name: the "Clarifying Lawful Overseas Use of Data Act." What problem does this bill solve? Senator Hatch stated last month why he thinks this bill is needed:

"... the Supreme Court will hear arguments in a case... United States v. Microsoft Corp., colloquially known as the Microsoft Ireland case... The case began back in 2013, when the US Department of Justice asked Microsoft to turn over emails stored in a data center in Ireland. Microsoft refused on the ground that US warrants traditionally have stopped at the water’s edge. Over the last few years, the legal battle has worked its way through the court system up to the Supreme Court... The issues the Microsoft Ireland case raises are complex and have created significant difficulties for both law enforcement and technology companies... law enforcement officials increasingly need access to data stored in other countries for investigations, yet no clear enforcement framework exists for them to obtain overseas data. Meanwhile, technology companies, who have an obligation to keep their customers’ information private, are increasingly caught between conflicting laws that prohibit disclosure to foreign law enforcement. Equally important, the ability of one nation to access data stored in another country implicates national sovereignty... The CLOUD Act bridges the divide that sometimes exists between law enforcement and the tech sector by giving law enforcement the tools it needs to access data throughout the world while at the same time creating a commonsense framework to encourage international cooperation to resolve conflicts of law. To help law enforcement, the bill creates incentives for bilateral agreements—like the pending agreement between the US and the UK—to enable investigators to seek data stored in other countries..."

Senators Coons, Graham, and Whitehouse support the CLOUD Act, along with House Representatives Collins, Jeffries, and others. The American Civil Liberties Union (ACLU) opposes the bill and warned:

"Despite its fluffy sounding name, the recently introduced CLOUD Act is far from harmless. It threatens activists abroad, individuals here in the U.S., and would empower Attorney General Sessions in new disturbing ways... the CLOUD Act represents a dramatic change in our law, and its effects will be felt across the globe... The bill starts by giving the executive branch dramatically more power than it has today. It would allow Attorney General Sessions to enter into agreements with foreign governments that bypass current law, without any approval from Congress. Under these agreements, foreign governments would be able to get emails and other electronic information without any additional scrutiny by a U.S. judge or official. And, while the attorney general would need to consider a country’s human rights record, he is not prohibited from entering into an agreement with a country that has committed human rights abuses... the bill would for the first time allow these foreign governments to wiretap in the U.S. — even in cases where they do not meet Wiretap Act standards. Paradoxically, that would give foreign governments the power to engage in surveillance — which could sweep in the information of Americans communicating with foreigners — that the U.S. itself would not be able to engage in. The bill also provides broad discretion to funnel this information back to the U.S., circumventing the Fourth Amendment. This information could potentially be used by the U.S. to engage in a variety of law enforcement actions."

Given that warning, I read the draft legislation. One portion immediately struck me:

"A provider of electronic communication service or remote computing service shall comply with the obligations of this chapter to preserve, backup, or disclose the contents of a wire or electronic communication and any record or other information pertaining to a customer or subscriber within such provider’s possession, custody, or control, regardless of whether such communication, record, or other information is located within or outside of the United States."

While I am not an attorney, this bill sounds like an end-run around the Fourth Amendment. The review process is largely governed by the House of Representatives, a body not known for internet knowledge or savvy. The bill also smells like an attack on internet services consumers regularly use for privacy, such as search engines that don't collect or archive search data, and Virtual Private Networks (VPNs).

Today, for online privacy many consumers in the United States use VPN software and services provided by vendors located offshore. Why? Despite a national poll in 2017 which found the Republican rollback of FCC broadband privacy rules very unpopular among consumers, the Republican-led Congress proceeded with that rollback, and President Trump signed the privacy-rollback legislation on April 3, 2017. Hopefully, skilled and experienced privacy attorneys will continue to review and monitor the draft legislation.

The ACLU emphasized in its warning:

"Today, the information of global activists — such as those that fight for LGBTQ rights, defend religious freedom, or advocate for gender equality are protected from being disclosed by U.S. companies to governments who may seek to do them harm. The CLOUD Act eliminates many of these protections and replaces them with vague assurances, weak standards, and largely unenforceable restrictions... The CLOUD Act represents a major change in the law — and a major threat to our freedoms. Congress should not try to sneak it by the American people by hiding it inside of a giant spending bill. There has not been even one minute devoted to considering amendments to this proposal. Congress should robustly debate this bill and take steps to fix its many flaws, instead of trying to pull a fast one on the American people."

I agree. Seems like this bill creates far more problems than it solves. Plus, something this important should be openly and thoroughly discussed; not be buried in a spending bill. What do you think?


Cozy Relationship Between The FBI And A Computer Repair Service Spurs 4th Amendment Concerns

The Electronic Frontier Foundation (EFF) has learned more about the relationship between Geek Squad, a computer repair service, and the U.S. Federal Bureau of Investigation (FBI). In a March 6th announcement, the EFF said it filed a:

"... FOIA lawsuit last year to learn more about how the FBI uses Geek Squad employees to flag illegal material when people pay Best Buy to repair their computers. The relationship potentially circumvents computer owners’ Fourth Amendment rights."

Founded in 1966, the Best Buy retail chain operates more than 1,500 stores in North America and employs more than 125,000 people. The chain sells home appliances and electronics both online and at stores in the United States, Canada, and Mexico. Located in about 1,100 Best Buy stores, Geek Squad provides repair services via phone, in-store, or at home. This means that Geek Squad employees configure and fix popular smart devices many consumers have purchased for their homes: cameras and camcorders, cell phones, computers and tablets, home theater, car electronics, home security (e.g., smart doorbells, smart locks, smart thermostats, wireless cameras), smart appliances (e.g., refrigerators, ovens, washing machines, dryers, etc.), smart speakers, video game consoles, wearables (e.g., fitness bands, smart watches), and more.

The 4th Amendment of the U.S. Constitution states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

It is most puzzling how a broken computer translates into probable cause for a search. The FOIA request was prompted by the prosecution of a doctor in California, "who was charged with possession of child pornography after Best Buy sent his computer to the Kentucky Geek Squad repair facility."

The FOIA request yielded documents which showed:

"... that Best Buy officials have enjoyed a particularly close relationship with the agency for at least 10 years. For example, an FBI memo from September 2008 details how Best Buy hosted a meeting of the agency’s “Cyber Working Group” at the company’s Kentucky repair facility... Another document records a $500 payment from the FBI to a confidential Geek Squad informant... over the years of working with Geek Squad employees, FBI agents developed a process for investigating and prosecuting people who sent their devices to the Geek Squad for repairs..."

The EFF announcement described that process in detail:

"... a series of FBI investigations in which a Geek Squad employee would call the FBI’s Louisville field office after finding what they believed was child pornography. The FBI agent would show up, review the images or video and determine whether they believe they are illegal content. After that, they would seize the hard drive or computer and send it to another FBI field office near where the owner of the device lived. Agents at that local FBI office would then investigate further, and in some cases try to obtain a warrant to search the device... For example, documents reflect that Geek Squad employees only alert the FBI when they happen to find illegal materials during a manual search of images on a device and that the FBI does not direct those employees to actively find illegal content. But some evidence in the case appears to show Geek Squad employees did make an affirmative effort to identify illegal material... Other evidence showed that Geek Squad employees were financially rewarded for finding child pornography..."

Finding child pornography and prosecuting perpetrators is a worthy goal, but the FBI-Geek Squad program seems to blur the line between computer repair and law enforcement. The program and FOIA documents raise several questions:

  1. What are the program details (e.g., training, qualifications for informants, payments, conditions for payments, scope, etc.) for financially rewarding Geek Squad employees who find child pornography?
  2. What other computer/appliance repair vendors does the FBI operate similar programs with?
  3. What quality control measures does the program contain to prevent wrongful prosecutions?
  4. What penalties or consequences, if any, apply to Geek Squad employees who falsely report child pornography claims?
  5. Is this Geek Squad program nationwide, and if not, in which states does it operate?
  6. In cases of suspected child pornography, what other information on targets' devices is collected and archived by the FBI through this program?
  7. Were/are whole hard drives copied and archived?
  8. How long is information archived?
  9. Does the program between the FBI and Geek Squad target other types of crime and threats (e.g., terrorism)?
  10. What other law enforcement or security agencies does Geek Squad have cozy relationships with?

I'm sure there are more questions to be asked. What are your opinions?



Analysis: Closing The 'Regulatory Donut Hole' - The 9th Circuit Appeals Court, AT&T, The FCC And The FTC

The International Association of Privacy Professionals (IAPP) site has a good article explaining what a recent appeals court decision means for everyone who uses the internet:

"When the 9th U.S. Circuit Court of Appeals ruled, in September 2016, that the Federal Trade Commission did not have the authority to regulate AT&T because it was a “common carrier,” which only the Federal Communications Commission can regulate, the decision created what many in privacy foresaw as a “regulatory doughnut hole.” Indeed, when the FCC, in repealing its broadband privacy rules, decided to hand over all privacy regulation of internet service providers to the FTC, the predicted situation came about: The courts said “common carriers” could only be regulated by the FCC, but the FCC says only the FTC should be regulating privacy. So, was there no regulator to oversee a company like AT&T’s privacy practices?

Indeed, argued Gigi Sohn, formerly counsel to then-FCC Chair Tom Wheeler, “The new FCC/FTC relationship lets consumers know they’re getting screwed. But much beyond that, they don’t have any recourse.” Now, things have changed once again. With an en banc decision, the 9th Circuit has reversed itself... This reversal of its previous decision by the 9th Circuit now allows the FTC to go forward with its case against AT&T and what it says were deceptive throttling practices, but it also now allows the FTC to once again regulate internet service providers’ data-handling and cybersecurity practices if they come in the context of activities that are outside their activities as common carriers."

Somebody has to oversee Internet service providers (ISPs). Somebody has to do their job. It's an important job. The Republican-led FCC, chaired by Trump appointee Ajit Pai, has clearly stated it won't, given its "light touch" approach to broadband regulation and its repeal last year of both broadband privacy and net neutrality rules. Earlier this month, the National Rifle Association (NRA) honored FCC Chairman Pai for repealing net neutrality rules.

"No touch" is probably a more accurate description. A prior blog post listed many historical problems and abuses of consumers by some ISPs. Consumers should buckle up as ISPs slowly unveil their plans for a world without net neutrality protections. What might that look like? What has AT&T said about this?

Bob Quinn, the Vice President of External and Legislative Affairs for AT&T, claimed today in a blog post:

"Net neutrality has been an emotional issue for a lot of people over the past 10 years... For much of those 10 years, there has been relative agreement over what those rules should be: don’t block websites; censor online content; or throttle, degrade or discriminate in network performance based on content; and disclose to consumers how you manage your network to make that happen. AT&T has been publicly committed to those principles... But no discussion of net neutrality would be complete without also addressing the topic of paid prioritization. Let me start by saying that the issue of paid prioritization has always been hazy and theoretical. The business models for services that would require end-to-end management have only recently begun to come into focus... Let me clear about this – AT&T is not interested in creating fast lanes and slow lanes on anyone’s internet."

Really? The Ars Technica blog called out AT&T and Quinn on his claim:

"AT&T is talking up the benefits of paid prioritization schemes in preparation for the death of net neutrality rules while claiming that charging certain content providers for priority access won't create fast lanes and slow lanes... What Quinn did not mention is that the net neutrality rules have a specific carve-out that already allows such services to exist... without violating the paid prioritization ban. Telemedicine, automobile telematics, and school-related applications and content are among the services that can be given isolated capacity... The key is that the FCC maintained the right to stop ISPs from using this exception to violate the spirit of the net neutrality rules... In contrast, AT&T wants total control over which services are allowed to get priority."

Moreover, fast and slow lanes by AT&T already exist:

"... AT&T provides only DSL service in many rural areas, with speeds of just a few megabits per second or even less than a megabit. AT&T has a new fixed wireless service for some rural areas, but the 10Mbps download speeds fall well short of the federal broadband standard of 25Mbps. In areas where AT&T has brought fiber to each home, the company might be able to implement paid prioritization and manage its network in a way that prevents most customers from noticing any slowdown in other services..."

So, rural (e.g., DSL) consumers are more likely to suffer and notice service slowdowns. Once the final FCC rules are available without net neutrality protections for consumers and the lawsuits have been resolved, then AT&T probably won't have to worry about violating any prioritization bans.

The bottom line for consumers: expect ISPs to implement first the changes consumers won't see directly. Remember the old story about the frog in a pot of water? The way to kill it is to turn up the heat slowly. Expect ISPs to take the same approach in a post-net-neutrality world. (Yes, in this analogy we consumers are the frog, and the heat is higher internet prices.) Paid prioritization is one method consumers won't see directly: it forces content producers, not ISPs, to raise prices on consumers. Make no mistake about where the money will go.

Consumers will likely see ISPs introduce tiered broadband services, with lower-priced service options that exclude video streaming content... spun as greater choice for consumers. (Some hotels in the United States already sell to their guests WiFi services with tiered content.) Also, expect to see more "sponsored data programs," where video content owned by your ISP doesn't count against wireless data caps. Read more about other possible changes.

Seems to me the 9th Circuit Appeals Court made the best of a bad situation. I look forward to the FTC doing an important job which the FCC chose to run away from. What do you think?


DuckDuckGo Introduces New Privacy Browser

Readers of this blog are familiar with DuckDuckGo, the popular privacy search engine which doesn't track you nor maintain logs of your search queries. For even more online privacy, DuckDuckGo has introduced a web browser mobile app for your smartphone or tablet. Benefits of this new browser app:

"1. Escape Advertising Tracker Networks: Our Privacy Protection will block all the hidden trackers we can find, exposing the major advertising networks tracking you over time, so that you can track who's trying to track you.
2. Increase Encryption Protection: We force sites to use an encrypted connection where available, protecting your data from prying eyes, like internet service providers (ISPs).
3. Search Privately: You share your most personal information with your search engine, like your financial, medical, and political questions. What you search for is your own business, which is why DuckDuckGo search doesn't track you. Ever.
4. Decode Privacy Policies — We’ve partnered with Terms of Service Didn't Read to include their scores and labels of website terms of service and privacy policies, where available."

The new browser app is available in both the iTunes and Google Play stores. The iPhone and iPad versions require iOS version 9.0 or later. How it provides more privacy online:

"As you search and browse, the DuckDuckGo Privacy Browser shows you a Privacy Grade rating when you visit a website (A-F). This rating lets you see how protected you are at a glance, dig into the details to see who we caught trying to track you, and learn how we enhanced the underlying site's privacy measures. The Privacy Grade is scored automatically based on the prevalence of hidden tracker networks, encryption availability, and website privacy practices.

Our app provides standard browsing functionality including tabs, bookmarks, and autocomplete. In addition to strong Privacy Protection as described above, we also packed in some extra privacy features into the browser itself: a) Fire Button — Clear all your tabs and data with one tap; b) Application Lock: Secure the app with Touch ID or Face ID."

The Privacy Grade ratings remind me of the warnings provided by the Privacy Badger add-on, which alerts consumers to the tracking mechanisms used by sites and provides finer control over which mechanisms to enable or disable at each site.
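DuckDuckGo says the grade "is scored automatically based on the prevalence of hidden tracker networks, encryption availability, and website privacy practices." The company hasn't published the actual formula, but the general idea can be sketched in a few lines. Everything below — the factor names, weights, and letter cutoffs — is my own illustrative assumption, not DuckDuckGo's real scoring:

```python
# Toy sketch of a tracker/encryption-based privacy grade.
# The weights and thresholds are invented for illustration only;
# DuckDuckGo's actual (unpublished) algorithm will differ.

def privacy_grade(tracker_count: int, https: bool, bad_practices: int) -> str:
    """Map simple site observations to a letter grade, A through F."""
    score = 100
    score -= min(tracker_count, 10) * 8      # each hidden tracker hurts, capped
    if not https:
        score -= 25                          # no encrypted connection available
    score -= min(bad_practices, 3) * 10      # flagged terms-of-service practices
    for cutoff, grade in [(90, "A"), (75, "B"), (60, "C"), (45, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

A clean HTTPS site with no trackers would score an "A" under this toy scheme, while a tracker-heavy, unencrypted site with hostile terms falls to "F" — the same intuition the app's at-a-glance rating conveys.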


Fresenius Medical Care To Pay $3.5 Million For 5 Small Data Breaches During 2012

Fresenius Medical Care Holdings, Inc. has agreed to a $3.5 million settlement regarding five small data breaches the Massachusetts-based healthcare organization experienced during 2012. Fresenius Medical Care Holdings, Inc. does business under the name Fresenius Medical Care North America (FMCNA). This represents one of the largest HIPAA settlements ever by the U.S. Department of Health & Human Services (HHS).

The five small data breaches, at different locations across the United States, affected about 521 persons:

  1. Bio-Medical Applications of Florida, Inc. d/b/a Fresenius Medical Care Duval Facility: On February 23, 2012, two desktop computers were stolen during a break-in. One of the computers contained the electronic Protected Health Information (ePHI) of 200 persons, including patient name, admission date, date of first dialysis, days and times of treatments, date of birth, and Social Security number
  2. Bio-Medical Applications of Alabama, Inc. d/b/a Fresenius Medical Care Magnolia Grove: On April 3, 2012, an unencrypted USB drive was stolen from a worker's car while parked in the organization's parking lot. The USB device contained the ePHI of 245 persons, including patient name, address, date of birth, telephone number, insurance company, insurance account number (a potential social security number derivative for some patients) and the covered entity location where each patient was seen.
  3. Renal Dimensions, LLC d/b/a Fresenius Medical Care Ak-Chin: On June 18, 2012, an anonymous phone tip reported that a hard drive was missing from a desktop computer, which had been taken out of service. The hard drive contained the ePHI of 35 persons, including name, date of birth, Social Security number and Zip code. While the worker notified a manager about the missing hard drive, the manager failed to notify the FMCNA Corporate Risk Management Department.
  4. Fresenius Vascular Care Augusta, LLC: On June 16, 2012, a worker's unencrypted laptop was stolen from her car while parked overnight at home. The laptop bag also included a list of her passwords. The laptop contained the ePHI of 10 persons, including patient name, insurance account number (which could be a social security number derivative) and other insurance information.
  5. WSKC Dialysis Services, Inc. d/b/a Fresenius Medical Care Blue Island Dialysis: On or about June 17 - 18, 2012, three desktop computers and one encrypted laptop were stolen from the office. One of the desktop computers contained the ePHI of 31 persons, including patient name, dates of birth, address, telephone number, and either full or partial Social Security numbers.

Besides the hefty payment, terms of the settlement agreement (Adobe PDF) require FMCNA to implement and complete a Corrective Action Plan:

  • Conduct a risk analysis,
  • Develop and implement a risk management plan,
  • Implement a process for evaluating workplace operational changes,
  • Develop an Encryption Report,
  • Review and revise internal policies and procedures to control devices and storage media,
  • Review and revise policies to control access to facilities,
  • Develop a privacy and security awareness training program for workers, and
  • Submit progress reports at regular intervals to HHS.

The Encryption Report identifies and describes the devices and equipment (e.g., desktops, laptops, tablets, smartphones, etc.) that may be used to access, store, and transmit patients' ePHI; records the number of devices, including which contain encrypted information; and provides a detailed plan for implementing encryption on devices and media which should contain encrypted information but currently don't.

Some readers may wonder why a large fine for relatively small data breaches, since news reports often cite data breaches affecting thousands or millions of persons. HHS explained that the investigation by its Office For Civil Rights (OCR) unit:

"... revealed FMCNA covered entities failed to conduct an accurate and thorough risk analysis of potential risks and vulnerabilities to the confidentiality, integrity, and availability of all of its ePHI. The FMCNA covered entities impermissibly disclosed the ePHI of patients by providing unauthorized access for a purpose not permitted by the Privacy Rule... Five breaches add up to millions in settlement costs for entity that failed to heed HIPAA’s risk analysis and risk management rules.."

OCR Director Roger Severino added:

"The number of breaches, involving a variety of locations and vulnerabilities, highlights why there is no substitute for an enterprise-wide risk analysis for a covered entity... Covered entities must take a thorough look at their internal policies and procedures to ensure they are protecting their patients’ health information in accordance with the law."


Fitness Device Usage By U.S. Soldiers Reveals Sensitive Location And Movement Data

Useful technology can often have unintended consequences. The Washington Post reported about an interactive map:

"... posted on the Internet that shows the whereabouts of people who use fitness devices such as Fitbit also reveals highly sensitive information about the locations and activities of soldiers at U.S. military bases, in what appears to be a major security oversight. The Global Heat Map, published by the GPS tracking company Strava, uses satellite information to map the locations and movements of subscribers to the company’s fitness service over a two-year period, by illuminating areas of activity. Strava says it has 27 million users around the world, including people who own widely available fitness devices such as Fitbit and Jawbone, as well as people who directly subscribe to its mobile app. The map is not live — rather, it shows a pattern of accumulated activity between 2015 and September 2017... The U.S.-led coalition against the Islamic State said on Monday it is revising its guidelines on the use of all wireless and technological devices on military facilities as a result of the revelations. "
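The quoted report describes simple aggregation: each subscriber's GPS fixes are binned into map cells and the cells "illuminate" with activity. Even with no names attached, dense cells expose routine movement. A minimal sketch, with invented coordinates and an arbitrary 0.01-degree cell size (Strava's actual pipeline is far more sophisticated):

```python
from collections import Counter

# Sketch of heat-map aggregation: bin each (lat, lon) GPS fix into a
# grid cell and count activity per cell. All coordinates are invented.

def heat_map(fixes, cell=0.01):
    """Count GPS fixes per grid cell; dense cells expose routine activity."""
    counts = Counter()
    for lat, lon in fixes:
        key = (round(lat / cell), round(lon / cell))  # integer cell indices
        counts[key] += 1
    return counts

# Three fixes along one repeated route cluster into a single cell;
# a lone fix elsewhere barely registers.
fixes = [(34.5001, 69.2003), (34.5004, 69.2001), (34.5002, 69.2002),
         (51.5000, -0.1000)]
```

Here `heat_map(fixes)` shows one cell with three hits and another with one. Scale that to 27 million users over two years and a daily jogging loop inside an otherwise dark patch of desert stands out immediately, which is exactly the exposure the Strava map created around military bases.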

Takeaway #1: it's easier than you might think for the bad guys to track the locations and movements of high-value targets (e.g., soldiers, corporate executives, politicians, attorneys).

Takeaway #2: unintended consequences from mobile devices are not new, as CNN reported in 2015. Consumers love the convenience of their digital devices. It is wise to remember the warning from a famous economist: "There's no such thing as a free lunch."


Royal Caribbean Cruise Line And CPP-The Myers-Briggs Offer Travel Personality Quiz

Inc. Magazine warned in 2016, "ready or not, companies will soon be tracking your emotions." Most Facebook users already know this. Also in 2016, the social networking site added several reaction buttons beyond its (in)famous "Like" button to cover a range of emotions (e.g., "Love," "Haha," "Wow," "Sad," "Angry"):

Image of Facebook reaction buttons

Maybe you have used these reaction buttons. Companies do this because effective marketing appeals to emotions instead of reason.

Now, a popular cruise line has taken things a step further. Cruise Critic, a popular travel site, announced:

"... Royal Caribbean has teamed up with CPP-The Myers-Briggs Company to launch a quiz that offers cruise recommendations based on your personality type. The assessment tool, found on MyAdventurePersonality.com, asks users 13 questions as they pertain to personal behavior and preferences... Once the results are calculated, users will be designated a travel personality type, such as Expert Adventure Planner, Laidback Wanderer and Spontaneous Sightseer. They also will receive an itinerary recommendation best suited for their type, with planning tips."

What is the Myers-Briggs assessment tool? The Myers-Briggs Foundation site explains:

"The purpose of the Myers-Briggs Type Indicator® (MBTI®) personality inventory is to make the theory of psychological types described by C. G. Jung understandable and useful in people's lives. The essence of the theory is that much seemingly random variation in the behavior is actually quite orderly and consistent, being due to basic differences in the ways individuals prefer to use their perception and judgment... In developing the Myers-Briggs Type Indicator [instrument], the aim of Isabel Briggs Myers, and her mother, Katharine Briggs, was to make the insights of type theory accessible to individuals and groups... The identification of basic preferences of each of the four dichotomies specified or implicit in Jung's theory. The identification and description of the 16 distinctive personality types that result from the interactions among the preferences."
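The "16 distinctive personality types" in the quote above follow directly from the four binary dichotomies: one preference from each of four pairs yields 2^4 = 16 combinations. A quick, illustrative Python check:

```python
from itertools import product

# The four MBTI preference pairs: Extraversion/Introversion, Sensing/iNtuition,
# Thinking/Feeling, and Judging/Perceiving.
DICHOTOMIES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# Choosing one letter from each pair enumerates every possible type code.
types = ["".join(combo) for combo in product(*DICHOTOMIES)]
```

Familiar codes like "INFP" and "ESTJ" are simply entries in this 16-element list.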

Indeed, this assessment tool became very accessible. The Seattle Times reported in 2013:

"Chances are you’ve taken the Myers-Briggs Type Indicator (MBTI), or will. Roughly 2 million people a year do. It has become the gold standard of psychological assessments, used in businesses, government agencies and educational institutions... More than 10,000 companies, 2,500 colleges and universities and 200 government agencies in the United States use the test... It’s estimated that 50 million people have taken the Myers-Briggs personality test since the Educational Testing Service first added the research to its portfolio in 1962... Organizations administer the MBTI assessment to employees in one of two ways. They either pay for someone in their human-resources department to become certified, then pay the materials costs each time employees take the test. Or, they contract with certified, independent training consultants or leadership coaches."

[Image: selected questions from the MyAdventurePersonality site] The travel quiz uses fewer forced-choice questions than the MBTI (13 versus roughly 88), and it sorts consumers into only four travel personality types (versus the MBTI's 16). Also, the MBTI is administered by certified professionals following ethical guidelines. So, consumers shouldn't assume that the travel quiz is as rigorous as the MBTI. Admittedly, MyAdventurePersonality may add more questions and/or types in the future.

If you are considering the travel quiz, read the fine print first. The MyAdventurePersonality site uses the same legal and privacy policies as the main Royal Caribbean cruise line site. So, consumers should know that whatever they submit via the travel quiz will probably be shared freely with other entities, since the Royal Caribbean Privacy Policy does not state any limitations.

The MyAdventurePersonality site may be a marketing gimmick to attract new customers and/or better target e-mail marketing campaigns to current and prospective cruise travelers.

Me? After 28 cruise ship vacations (with many on Royal Caribbean ships) to many areas of the planet, I know my travel needs and preferences very well. So, I doubt the quiz will tell me something I don't already know.

What do you think? Should companies use these types of quizzes?


Report: Several Impacts From Technology Changes Within The Financial Services Industry

For better or worse, technology is changing the financial services you use in ways you may not expect. A report by London-based Privacy International highlighted the changes within the financial services industry:

"Financial services are changing, with technology being a key driver. It is affecting the nature of financial services from credit and lending through to insurance and even the future of money itself. The field known as “fintech” is where the attention and investment is flowing. Within it, new sources of data are being used by existing institutions and new entrants. They are using new forms of data analysis. These changes are significant to this sector and the lives of the people it serves. We are seeing dramatic changes in the ways that financial products make decisions. The nature of the decision-making is changing, transforming the products in the market and impacting on end results and bottom lines. However, this also means that treatment of individuals will change. This changing terrain of finance has implications for human rights, privacy and identity... Data that people would consider as having nothing to do with the financial sphere, such as their text-messages, is being used at an increasing rate by the financial sector...  Yet protections are weak or absent... It is essential that these innovations are subject to scrutiny... Fintech covers a broad array of sectors and technologies. A non-exhaustive list includes:

  • Alternative credit scoring (new data sources for credit scoring)
  • Payments (new ways of paying for goods and services that often have implications for the data generated)
  • Insurtech (the use of technology in the insurance sector)
  • Regtech (the use of technology to meet regulatory requirements)."

"Similarly, a breadth of technologies are used in the sector, including: Artificial Intelligence; Blockchain; the Internet of Things; Telematics and connected cars..."

While the study focused on India and Kenya, it has implications for consumers worldwide. More observations and concerns:

"Social media is another source of data for companies in the fintech space. However, decisions are made not just on the content of posts; rather, social media is being used in other ways: to authenticate customers via facial recognition, for instance... blockchain, or distributed ledger technology, is still best known for cryptocurrencies like BitCoin. However, the technology is being used more broadly, such as the World Bank-backed initiative in Kenya for blockchain-backed bonds [10]. Yet it is also used in other fields, like the push in digital identities [11]. A controversial example of this was a very small-scale scheme in the UK to pay benefits using blockchain technology, via an app developed by the fintech GovCoin [12] (since renamed DISC). The trial raised concerns, with the BBC reporting a former member of the Government Digital Service describing this as "a potentially efficient way for Department of Work and Pensions to restrict, audit and control exactly what each benefits payment is actually spent on, without the government being perceived as a big brother [13]..."

Many consumers know that you can buy a wide variety of internet-connected devices for your home. That includes both devices you'd expect (e.g., televisions, printers, smart speakers and assistants, security systems, door locks and cameras, utility meters, hot water heaters, thermostats, refrigerators, robotic vacuum cleaners, lawn mowers) and devices you might not expect (e.g., sex toys, smart watches for children, mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins). Add your car or truck to the list:

"With an increasing number of sensors being built into cars, they are increasingly “connected” and communicating with actors including manufacturers, insurers and other vehicles [15]. Insurers are making use of this data to make decisions about the pricing of insurance, looking for features like sharp acceleration and braking and time of day [16]. This raises privacy concerns: movements can be tracked, and much about the driver’s life derived from their car use patterns..."
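Telematics-based pricing of the kind described in the quote above can be sketched with a toy example. This is a hypothetical scoring function, not any insurer's actual model; the threshold and trip data are made up for illustration.

```python
def harsh_event_rate(trips, threshold=3.0):
    """Hypothetical telematics metric: the fraction of accelerometer readings
    (in m/s^2; negative values = braking) whose magnitude exceeds a threshold."""
    readings = [a for trip in trips for a in trip]
    if not readings:
        return 0.0
    harsh = sum(1 for a in readings if abs(a) > threshold)
    return harsh / len(readings)

smooth_driver = [[0.5, -0.8, 1.2], [0.3, -1.0]]       # gentle trips
aggressive_driver = [[4.5, -3.8, 1.2], [0.3, -5.0]]   # hard launches and hard braking
```

An insurer could map such a score onto premiums. The privacy concern is that the same raw data stream also reveals when and where a person drives, far more than the score alone requires.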

And, there are hidden prices for the convenience of making payments with your favorite smart device:

"The payments sector is a key area of growth in the fintech sector: in 2016, this sector received 40% of the total investment in fintech [22]. Transactions paid by most electronic means can be tracked, even those in physical shops. In the US, Google has access to 70% of credit and debit card transactions—through Google’s "third-party partnerships", the details of which have not been confirmed [23]. The growth of alternatives to cash can be seen all over the world... There is a concerted effort against cash from elements of the development community... A disturbing aspect of the cashless debate is the emphasis on the immorality of cash—and, by extension, the immorality of anonymity. A UK Treasury minister, in 2012, said that paying tradesmen by cash was "morally wrong" [26], as it facilitated tax avoidance... MasterCard states: "Contrary to transactions made with a MasterCard product, the anonymity of digital currency transactions enables any party to facilitate the purchase of illegal goods or services; to launder money or finance terrorism; and to pursue other activity that introduces consumer and social harm without detection by regulatory or police authority." [27]"

The report cited a loss of control by consumers over their personal information. Going forward, the report included general and actor-specific recommendations. General recommendations:

  • "Protecting the human right to privacy should be an essential element of fintech.
  • Current national and international privacy regulations should be applicable to fintech.
  • Customers should be at the centre of fintech, not their product.
  • Fintech is not a single technology or business model. Any attempt to implement or regulate fintech should take these differences into account, and be based on the type of activities they perform, rather than the type of institutions involved."

Want to learn more? Follow Privacy International on Facebook, on Twitter, or read about 10 ways of "Invisible Manipulation" of consumers.