422 posts categorized "Mobile" Feed

Mashable: 7 Privacy Settings iPhone Users Should Enable Today

Most people want to get the most from their smartphones. That includes using their devices wisely, with their privacy protected. Mashable recommended seven privacy settings for Apple iPhone users. I found the recommendations very helpful and thought that you would, too.

Three privacy settings stood out. First, many mobile apps have:

"... access to your camera. For some of these, the reasoning is a no-brainer. You want to be able to use Snapchat filters? Fine, the app needs access to your camera. That makes sense. Other apps' reasoning for having access to your camera might be less clear. Once again, head to Settings > Privacy > Camera and review what apps you've granted camera access. See anything in there that doesn't make sense? Go ahead and disable it."

A feature most consumers probably haven't considered:

"... which apps on your phone have requested microphone access. For example, do you want Drivetime to have access to your mic? No? Because if you've downloaded it, then it might. If an app doesn't have a clear reason for needing access to your microphone, don't give it that access."

And, perhaps most importantly:

"Did you forget about your voicemail? Hackers didn't. At the 2018 DEF CON, researchers demonstrated the ability to brute force voicemail accounts and use that access to reset victims' Google and PayPal accounts... Set a random 9-digit voicemail password. Go to Settings > Phone and scroll down to "Change Voicemail Password." Your iPhone should let you choose a 9-digit code..."
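The math behind that advice is simple: each added digit multiplies the search space by ten. A quick sketch of the arithmetic (the guess rate is an assumption for illustration, not a measured figure):

```python
# Search space for a numeric voicemail PIN of a given length.
def pin_combinations(digits: int) -> int:
    return 10 ** digits

four = pin_combinations(4)   # 10,000 possible 4-digit PINs
nine = pin_combinations(9)   # 1,000,000,000 possible 9-digit PINs

# At a hypothetical 10 guesses per second, worst-case time to exhaust:
rate = 10  # guesses per second (assumed, for illustration only)
hours_4 = four / rate / 3600
years_9 = nine / rate / (3600 * 24 * 365)
print(f"4-digit: {hours_4:.2f} hours; 9-digit: {years_9:.1f} years")
```

A 4-digit PIN falls in under half an hour at that rate; a 9-digit PIN takes years, which is the point of the recommendation.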

The full list is a reminder for consumers not to assume that the default settings on mobile apps you install are right for your privacy needs. Wise consumers check and make adjustments.


Privacy Study Finds Consumers Less Likely To Share Several Key Data Elements

Last month, the Advertising Research Foundation (ARF) announced the results of its 2019 Privacy Study, which was conducted in March. The survey included 1,100 consumers in the United States, weighted by age, gender, and region. Key findings covered device and internet usage:

"The key differences between 2018 and 2019 are: i) People are spending more time on their mobile devices and less time on their PCs; ii) People are spending more time checking email, banking, listening to music, buying things, playing games, and visiting social media via mobile apps; iii) In general, people are only slightly less likely to share their data than last year. iv) They are least likely to share their social security number; financial and medical information; and their home address and phone numbers; v) People seem to understand the benefits of personalized advertising, but do not value personalization highly and do not understand the technical approaches through which it is accomplished..."

Advertisers use these findings to adjust their advertising, offers, and pitches to maximize responses by consumers. More detail about the above privacy and data sharing findings:

"In general, people were slightly less likely to share their data in 2019 than they were in 2018. They were least likely to share their social security number; financial and medical information; their work address; and their home address and phone numbers in both years. They were most likely to share their gender, race, marital status, employment status, sexual orientation, religion, political affiliation, and citizenship... The biggest changes in respondents’ willingness to share their data from 2018 to 2019 were seen in their home address (-10 percentage points), spouse’s first and last name (-8 percentage points), personal email address (-7 percentage points), and first and last names (-6 percentage points)."

The researchers asked the data sharing question in two ways:

  1. "Which of the following types of information would you be willing to share with a website?"
  2. "Which of the following types of information would you be willing to share for a personalized experience?"

The survey included 20 information types for both questions. For the first question, survey respondents' willingness to share decreased for 15 of 20 information types, remained constant for two information types, and increased slightly for the remainder:

Which of the following types of information would you be willing to share with a website?

Information Type               2018: %   2019: %   2019 Higher/(Lower)
                               Resp.     Resp.     vs. 2018
Birth Date                       71        68        (3)
Citizenship Status               82        79        (3)
Employment Status                84        82        (2)
Financial Information            23        20        (3)
First & Last Name                69        63        (6)
Gender                           93        93        --
Home Address                     41        31        (10)
Home Landline Phone Number       33        30        (3)
Marital Status                   89        85        (4)
Medical Information              29        26        (3)
Personal Email Address           61        54        (7)
Personal Mobile Phone Number     34        32        (2)
Place Of Birth                   62        58        (4)
Political Affiliation            76        77        1
Race or Ethnicity                90        91        1
Religious Preference             78        79        1
Sexual Orientation               80        79        (1)
Social Security Number           10        10        --
Spouse's First & Last Name       41        33        (8)
Work Address                     33        31        (2)

The researchers asked about citizenship status due to controversy related to the upcoming 2020 Census. The researchers concluded:

"... The survey finding most relevant to these proposals is that the public does not see the value of sharing data to improve personalization of advertising messages..."

Overall, it appears that consumers are getting wiser about their privacy. Consumers' willingness to share decreased for more items than it increased for. View the detailed ARF 2019 Privacy Survey (Adobe PDF).


Google Claims Blocking Cookies Is Bad For Privacy. Researchers: Nope. That Is 'Privacy Gaslighting'

The announcement by Google last week included some dubious claims, which received a fair amount of attention among privacy experts. First, a Senior Product Manager of User Privacy and Trust wrote in a post:

"Ads play a major role in sustaining the free and open web. They underwrite the great content and services that people enjoy... But the ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. The mission is clear: we need to ensure that people all around the world can continue to access ad supported content on the web while also feeling confident that their privacy is protected. As we shared in May, we believe the path to making this happen is also clear: increase transparency into how digital advertising works, offer users additional controls, and ensure that people’s choices about the use of their data are respected."

Okay, that is a fair assessment of today's internet. And, more transparency is good. Google executives are entitled to their opinions. The post also stated:

"The web ecosystem is complex... We’ve seen that approaches that don’t account for the whole ecosystem—or that aren’t supported by the whole ecosystem—will not succeed. For example, efforts by individual browsers to block cookies used for ads personalization without suitable, broadly accepted alternatives have fallen down on two accounts. First, blocking cookies materially reduces publisher revenue... Second, broad cookie restrictions have led some industry participants to use workarounds like fingerprinting, an opaque tracking technique that bypasses user choice and doesn’t allow reasonable transparency or control. Adoption of such workarounds represents a step back for user privacy, not a step forward."

So, Google claims that blocking cookies is bad for privacy. With a statement like that, the "User Privacy and Trust" title seems like an oxymoron. Maybe, that's the best one can expect from a company that gets 87 percent of its revenues from advertising.

Also on August 22nd, the Director of Chrome Engineering repeated this claim and proposed new internet privacy standards (bold emphasis added):

"... we are announcing a new initiative to develop a set of open standards to fundamentally enhance privacy on the web. We’re calling this a Privacy Sandbox. Technology that publishers and advertisers use to make advertising even more relevant to people is now being used far beyond its original design intent... some other browsers have attempted to address this problem, but without an agreed upon set of standards, attempts to improve user privacy are having unintended consequences. First, large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected... Second, blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant web..."
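To see why fingerprinting bypasses user choice, here is a minimal sketch (the attribute names and values are hypothetical, not taken from any real tracker): hashing a handful of browser and device traits yields a stable identifier that survives even after every cookie is cleared.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from browser/device traits.
    Unlike a cookie, nothing is stored on the device, so clearing
    cookies does not change the result."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "America/New_York",
    "fonts": "Arial,Calibri,Georgia",  # installed-font list varies per user
}

# The same traits always produce the same ID, on any site that computes it.
print(fingerprint(visitor))
```

Any site that runs the same computation gets the same identifier for the same visitor, which is what makes fingerprinting a cross-site tracking technique the user cannot reset.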

Yes, fingerprinting is a nasty, privacy-busting technology. No argument with that. But, blocking cookies is bad for privacy? Really? Come on, let's be honest.

This dubious claim ignores corporate responsibility... that some advertisers and website operators made choices -- conscious decisions to use more invasive technologies like fingerprinting to do an end-run around users' needs, desires, and actions to regain online privacy. Sites and advertisers made those invasive-tech choices when other options were available, such as using subscription services to pay for their content.

Plus, Google's claim also ignores the push by corporate internet service providers (ISPs) which resulted in the repeal of online privacy protections for consumers thanks to a compliant, GOP-led Federal Communications Commission (FCC), which seems happy to tilt the playing field further towards corporations and against consumers. So, users are simply trying to regain online privacy.

During the past few years, both privacy-friendly web browsers (e.g., Brave, Firefox) and search engines (e.g., DuckDuckGo) have emerged to meet consumers' online privacy needs. (Well, it's not only consumers that need online privacy. Attorneys and businesses need it, too, to protect their intellectual property and proprietary business methods.) Online users demanded choice, something advertisers need to remember and value.

Privacy experts weighed in about Google's blocking-cookies-is-bad-for-privacy claim. Jonathan Mayer and Arvind Narayanan explained:

"That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight. Our high-level points are: 1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting; 2) There is little trustworthy evidence on the comparative value of tracking-based advertising; 3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical; and 4) Google is attempting a punt to the web standardization process, which will at best result in years of delay."

The researchers debunked Google's claim with more details:

"Google is trying to thread a needle here, implying that some level of tracking is consistent with both the original design intent for web technology and user privacy expectations. Neither is true. If the benchmark is original design intent, let’s be clear: cookies were not supposed to enable third-party tracking, and browsers were supposed to block third-party cookies. We know this because the authors of the original cookie technical specification said so (RFC 2109, Section 4.3.5). Similarly, if the benchmark is user privacy expectations, let’s be clear: study after study has demonstrated that users don’t understand and don’t want the pervasive web tracking that occurs today."
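The third-party tracking that cookies ended up enabling can be sketched in a few lines. In this toy model (not any real tracker's code), many sites embed content from the same tracker domain, so the browser presents that tracker's cookie on every visit, letting it join browsing history across sites:

```python
import itertools

class Tracker:
    """Toy third-party tracker: one cookie jar keyed to the tracker's
    domain, shared across every first-party site that embeds it."""
    def __init__(self):
        self._next_id = itertools.count(1)
        self.history = {}  # cookie value -> list of sites visited

    def embed(self, cookie, site):
        if cookie is None:                    # first sighting: set a cookie
            cookie = f"uid-{next(self._next_id)}"
            self.history[cookie] = []
        self.history[cookie].append(site)     # same cookie, every site
        return cookie

tracker = Tracker()
cookie = None
for site in ["news.example", "shop.example", "health.example"]:
    cookie = tracker.embed(cookie, site)

# One identifier now links the user's activity across all three sites.
print(tracker.history[cookie])  # ['news.example', 'shop.example', 'health.example']
```

Blocking the third-party cookie breaks exactly this join, which is what the browsers Google criticizes are doing.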

Moreover:

"... there are several things wrong with Google’s argument. First, while fingerprinting is indeed a privacy invasion, that’s an argument for taking additional steps to protect users from it, rather than throwing up our hands in the air. Indeed, Apple and Mozilla have already taken steps to mitigate fingerprinting, and they are continuing to develop anti-fingerprinting protections. Second, protecting consumer privacy is not like protecting security—just because a clever circumvention is technically possible does not mean it will be widely deployed. Firms face immense reputational and legal pressures against circumventing cookie blocking. Google’s own privacy fumble in 2012 offers a perfect illustration of our point: Google implemented a workaround for Safari’s cookie blocking; it was spotted (in part by one of us), and it had to settle enforcement actions with the Federal Trade Commission and state attorneys general."

Gaslighting, indeed. Online privacy is important. So, too, are consumers' choices and desires. Thanks to Mr. Mayer and Mr. Narayanan for the comprehensive response.

What are your opinions of cookie blocking? Of Google's claims?


ExpressVPN Survey Indicates Americans Care About Privacy. Some Have Already Taken Action

ExpressVPN published the results of its privacy survey. The survey, commissioned by ExpressVPN and conducted by Propeller Insights, included a representative sample of about 1,000 adults in the United States.

Overall, 29.3% of survey respondents said they had already used a virtual private network (VPN) or a proxy network. Survey respondents cited three broad reasons for using a VPN service: 1) to avoid surveillance, 2) to access content, and 3) to stay safe online. Detailed survey results about surveillance concerns:

"The most popular reasons to use a VPN are related to surveillance, with 41.7% of respondents aiming to protect against sites seeing their IP, 26.4% to prevent their internet service provider (ISP) from gathering information, and 16.6% to shield against their local government."

Who performs the surveillance matters to consumers. People are more concerned with surveillance by companies than by law enforcement agencies within the U.S. government:

"Among the respondents, 15.9% say they fear the FBI surveillance, and only 6.4% fear the NSA spying on them. People are by far most worried about information gathering by ISPs (23.2%) and Facebook (20.5%). Google spying is more of a worry for people (5.9%) than snooping by employers (2.6%) or family members (5.1%)."

Concerns with internet service providers (ISPs) are not surprising, since these telecommunications companies enjoy a unique position enabling them to track all online activities by consumers. Concerns about Facebook are not surprising, since it tracks both users and non-users, similar to advertising networks. The "protect against sites seeing their IP" finding suggests that consumers, or at least VPN users, want to protect themselves and their devices against advertisers, advertising networks, and privacy-busting mobile apps which track their geo-location.

Detailed survey results about content access concerns:

"... 26.7% use [a VPN service] to access their corporate or academic network, 19.9% to access content otherwise not available in their region, and 16.9% to circumvent censorship."

The survey also found that consumers generally trust their mobile devices:

"Only 30.5% of Android users are “not at all” or “not very” confident in their devices. iOS fares slightly better, with 27.4% of users expressing a lack of confidence."

The survey uncovered views about government intervention and policies:

"Net neutrality continues to be popular (70% more respondents support it rather then don’t), but 51.4% say they don’t know enough about it to form an opinion... 82.9% also believe Congress should enact laws to require tech companies to get permission before collecting personal data. Even more, 85.2% believe there should be fines for companies that lose users’ data, and 90.2% believe there should be further fines if the data is misused. Of the respondents, 47.4% believe Congress should go as far as breaking up Facebook and Google."

The survey also gauged views about smart devices (e.g., door bells, voice assistants, smart speakers) installed in many consumers' homes, since these devices are equipped with always-on cameras and/or microphones:

"... 85% of survey respondents say they are extremely (24.7%), very (23.4%), or somewhat (28.0%) concerned about smart devices monitoring their personal habits... Almost a quarter (24.8%) of survey respondents do not own any smart devices at all, while almost as many (24.4%) always turn off their devices’ microphones if they are not using them. However, one-fifth (21.2%) say they always leave the microphone on. The numbers are similar for camera use..."

There are more statistics and findings in the entire survey report by ExpressVPN. I encourage everyone to read it.


Researcher Uncovers Several Browser Extensions That Track Users' Online Activity And Share Data

Many consumers use web browsers since websites contain full content and functionality, versus the partial versions offered in mobile apps. A researcher has found that as many as four million consumers have been affected by browser extensions, the optional add-ons for web browsers, which collected sensitive personal and financial information.

Ars Technica reported about DataSpii, the name of the online privacy issue:

"The term DataSpii was coined by Sam Jadali, the researcher who discovered—or more accurately re-discovered—the browser extension privacy issue. Jadali intended for the DataSpii name to capture the unseen collection of both internal corporate data and personally identifiable information (PII).... DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics..."

At first glance, this may not sound important, but it is. Why? First, the data collected included the most sensitive and personal information:

"Home and business surveillance videos hosted on Nest and other security services; tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive, Intuit.com, and other online services; vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers; patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services; travel itineraries hosted on Priceline, Booking.com, and airline websites; Facebook Messenger attachments..."

I'll bet you thought your Facebook Messenger stuff was truly private. Second, because:

"... the published URLs wouldn’t open a page unless the person following them supplied an account password or had access to the private network that hosted the content. But even in these cases, the combination of the full URL and the corresponding page name sometimes divulged sensitive internal information. DataSpii is known to have affected 50 companies..."

Ars Technica also reported:

"Principals with both Nacho Analytics and the browser extensions say that any data collection is strictly "opt in." They also insist that links are anonymized and scrubbed of sensitive data before being published. Ars, however, saw numerous cases where names, locations, and other sensitive data appeared directly in URLs, in page titles, or by clicking on the links. The privacy policies for the browser extensions do give fair warning that some sort of data collection will occur..."
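Why "anonymized" URLs can still identify people: stripping the obvious query parameters does nothing when a name or record identifier sits in the URL path itself. A hedged sketch (the scrubbing rule and the URLs are invented for illustration; they are not Nacho Analytics' actual process):

```python
from urllib.parse import urlsplit, urlunsplit

def naive_scrub(url: str) -> str:
    """Drop the query string and fragment -- a common 'anonymization' step."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Query-based PII is removed...
print(naive_scrub("https://example-clinic.com/portal?patient=Jane+Doe"))
# -> https://example-clinic.com/portal

# ...but PII embedded in the path survives scrubbing untouched.
print(naive_scrub("https://example-clinic.com/patients/jane-doe/visit-2019-04"))
# -> https://example-clinic.com/patients/jane-doe/visit-2019-04
```

This is consistent with what Ars observed: names and locations appearing directly in published URLs despite claimed scrubbing.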

So, the data collection may be legal, but is it ethical -- especially if the anonymization is partial? After the researcher's report went public, many of the suspect browser extensions were deleted from online stores. However, extensions already installed locally on users' browsers can still collect data:

"Beginning on July 3—about 24 hours after Jadali reported the data collection to Google—Fairshare Unlock, SpeakIt!, Hover Zoom, PanelMeasurement, Branded Surveys, and Panel Community Surveys were no longer available in the Chrome Web Store... While the notices say the extensions violate the Chrome Web Store policy, they make no mention of data collection nor of the publishing of data by Nacho Analytics. The toggle button in the bottom-right of the notice allows users to "force enable" the extension. Doing so causes browsing data to be collected just as it was before... In response to follow-up questions from Ars, a Google representative didn't explain why these technical changes failed to detect or prevent the data collection they were designed to stop... But removing an extension from an online marketplace doesn't necessarily stop the problems. Even after the removals of Super Zoom in February or March, Jadali said, code already installed by the Chrome and Firefox versions of the extension continued to collect visited URL information..."

Since browser developers haven't remotely disabled leaky browser extensions, the burden is on consumers. The Ars Technica report lists the leaky browser extensions by name. Since online stores can't seem to consistently police browser extensions for privacy compliance, again the burden falls upon consumers.

The bottom line: browser extensions can easily compromise your online privacy and security. That means, as with any other software, wise consumers read independent online reviews first, read the developer's terms of use and privacy policy before installing the browser extension, and use a privacy-focused web browser.

Consumer Reports advises consumers to, a) install browser extensions only from companies you trust, and b) uninstall browser extensions you don't need nor use. For consumers that don't know how, the Consumer Reports article also lists step-by-step instructions to uninstall browser extensions in Google Chrome, Firefox, Safari, and Internet Explorer branded web browsers.


FBI Seeks To Monitor Twitter, Facebook, Instagram, And Other Social Media Accounts For Violent Threats

The U.S. Federal Bureau of Investigation (FBI) issued on July 8th a Request For Proposals (RFP) seeking quotes from technology companies to build a "Social Media Alerting" tool, which would enable the FBI to monitor, in real time, accounts on several social media services for violent threats. The RFP, which was amended on August 7th, stated:

"The purpose of this procurement is to acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources. A subscription to this service shall grant the Federal Bureau of Investigation (FBI) access to tools that will allow for the exploitation of lawfully collected/acquired data from social media platforms that will be stored, vetted and formatted by a vendor... This synopsis and solicitation is being issued as Request for Proposal (RFP) number DJF194750PR0000369 and... This announcement is supplemented by a detailed RFP Notice, an SF-33 document, an accompanying Statement of Objectives (SOO) and associated FBI documents..."

"Proactively identify" suggests the usage of software algorithms or artificial intelligence (AI). And, the vendor selected will archive the collected data for an undisclosed period of time. The RFP also stated:

"Background: The use of social media platforms, by terrorist groups, domestic threats, foreign intelligence services, and criminal organizations to further their illegal activity creates a demonstrated need for tools to properly identify the activity and react appropriately. With increased use of social media platforms by subjects of current FBI investigations and individuals that pose a threat to the United States, it is critical to obtain a service which will allow the FBI to identify relevant information from Twitter, Facebook, Instagram, and other Social media platforms in a timely fashion. Consequently, the FBI needs near real time access to a full range of social media exchanges..."

For context, in 2016 the FBI attempted to force Apple to build "backdoor software" to unlock an alleged terrorist's iPhone in California. The FBI later found an offshore technology company to build its backdoor.

The documents indicate that the FBI wants its staff to use the tool at both headquarters and field-office locations globally, and with mobile devices. The SOO document stated:

"FBI personnel are deployed internationally and sometimes in areas of press censorship. A social media exploitation tool with international reach and paired with a strong language translation capability, can become crucial to their operations and more importantly their safety. The functions of most value to these individuals is early notification, broad international reach, instant translation, and the mobility of the needed capability."

The SOO also explained the data elements to be collected:

"3.3.2.2.1 Obtain the full social media profile of persons-of-interest and their affiliation to any organization or groups through the corroboration of multiple social media sources... Items of interest in this context are social networks, user IDs, emails, IP addresses and telephone numbers, along with likely additional account with similar IDs or aliases... Any connectivity between aliases and their relationship must be identifiable through active link analysis mapping..."
"3.3.3.2.1 Online media is monitored based on location, determined by the users’ delineation or the import of overlays from existing maps (neighborhood, city, county, state or country). These must allow for customization as AOR sometimes cross state or county lines..."
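The "active link analysis mapping" the SOO describes amounts to grouping accounts that share identifiers. A minimal sketch using union-find over shared emails and IP addresses (all account names and identifiers below are hypothetical):

```python
def link_aliases(accounts):
    """Group account IDs that share any identifier (email, IP, phone).
    accounts: dict of account_id -> set of identifier strings."""
    parent = {a: a for a in accounts}

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    owner = {}  # identifier -> first account seen with it
    for acct, idents in accounts.items():
        for ident in idents:
            if ident in owner:
                parent[find(acct)] = find(owner[ident])  # union the groups
            else:
                owner[ident] = acct

    groups = {}
    for acct in accounts:
        groups.setdefault(find(acct), set()).add(acct)
    return sorted(map(sorted, groups.values()))

accounts = {
    "@alpha": {"mail1@example.com", "10.0.0.1"},
    "@beta":  {"mail2@example.com", "10.0.0.1"},  # shares an IP with @alpha
    "@gamma": {"mail3@example.com"},              # no overlap with anyone
}
print(link_aliases(accounts))  # [['@alpha', '@beta'], ['@gamma']]
```

Two aliases never seen together still end up in one group the moment they touch a common identifier, which is exactly the "connectivity between aliases" the document asks vendors to surface.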

While the document mentioned "user IDs" and didn't mention passwords, the implication seems clear that the FBI wants both in order to access and monitor in real-time social media accounts. And, the "other Social Media platforms" statement raises questions. What is the full list of specific services that refers to? Why list only the three largest platforms by name?

As this FBI project proceeds, let's hope that the full list of social sites includes 8Chan, Reddit, Stormfront, and similar others. Why? In a study released in November of 2018, the Center for Strategic and International Studies (CSIS) found:

"Right-wing extremism in the United States appears to be growing. The number of terrorist attacks by far-right perpetrators rose over the past decade, more than quadrupling between 2016 and 2017. The recent pipe bombs and the October 27, 2018, synagogue attack in Pittsburgh are symptomatic of this trend. U.S. federal and local agencies need to quickly double down to counter this threat. There has also been a rise in far-right attacks in Europe, jumping 43 percent between 2016 and 2017... Of particular concern are white supremacists and anti-government extremists, such as militia groups and so-called sovereign citizens interested in plotting attacks against government, racial, religious, and political targets in the United States... There also is a continuing threat from extremists inspired by the Islamic State and al-Qaeda. But the number of attacks from right-wing extremists since 2014 has been greater than attacks from Islamic extremists. With the rising trend in right-wing extremism, U.S. federal and local agencies need to shift some of their focus and intelligence resources to penetrating far-right networks and preventing future attacks. To be clear, the terms “right-wing extremists” and “left-wing extremists” do not correspond to political parties in the United States..."

The CSIS study also noted:

"... right-wing terrorism commonly refers to the use or threat of violence by sub-national or non-state entities whose goals may include racial, ethnic, or religious supremacy; opposition to government authority; and the end of practices like abortion... Left-wing terrorism, on the other hand, refers to the use or threat of violence by sub-national or non-state entities that oppose capitalism, imperialism, and colonialism; focus on environmental or animal rights issues; espouse pro-communist or pro-socialist beliefs; or support a decentralized sociopolitical system like anarchism."

Terrorism is terrorism. All of it needs to be prosecuted: left-wing, right-wing, domestic, and foreign. (This prosecutor is doing the right thing.) It seems wise to monitor the platforms where suspects congregate.

This project also raises questions about the effectiveness of monitoring social media. Will this really work? Digital Trends reported:

"Companies like Google, Facebook, Twitter, and Amazon already use algorithms to predict your interests, your behaviors, and crucially, what you like to buy. Sometimes, an algorithm can get your personality right – like when Spotify somehow manages to put together a playlist full of new music you love. In theory, companies could use the same technology to flag potential shooters... But preventing mass shootings before they happen raises thorny legal questions: how do you determine if someone is just angry online rather than someone who could actually carry out a shooting? Can you arrest someone if a computer thinks they’ll eventually become a shooter?"

Some social media users have already experienced inaccuracies (failures?) when sites present irrelevant advertisements and/or political party messaging based upon supposedly accurate software algorithms. The Digital Trends article also dug deeper:

"A Twitter spokesperson wouldn’t say much directly about Trump’s proposal, but did tell Digital Trends that the company suspended 166,513 accounts connected to the promotion of terrorism during the second half of 2018... Twitter also frequently works to help facilitate investigations when authorities request information – but the company largely avoids proactively flagging banned accounts (or the people behind them) to those same authorities. Even if they did, that would mean flagging 166,513 people to the FBI – far more people than the agency could ever investigate."

Then, there is the problem of the content by users in social media posts:

"Even if someone does post to social media immediately before they decide to unleash violence, it’s often not something that would trip up either Twitter or Facebook’s policies. The man who killed three people at the Gilroy Garlic Festival in Northern California posted to Instagram from the event itself – once calling the food served there “overprices” and a second that told people to read a 19th-century pro-fascist book that’s popular with white nationalists."

Also, Amazon got caught up in the hosting mess with 8Chan. So, there is more news to come.

Lastly, this blog post explored the problems with emotion recognition by facial-recognition software. Let's hope this FBI project is not a waste of taxpayers' hard-earned money.


Tech Expert Concluded Google Chrome Browser Operates A Lot Like Spy Software

Many consumers still use web browsers. Which are better for your online privacy? You may be interested in this analysis by a tech expert:

"... I've been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads... My tests of Chrome vs. Firefox [browsers] unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker "cookies" that Chrome would have ushered right onto my computer but were automatically blocked by Firefox... Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service's log-in pages."

"And that's not the half of it. Look in the upper right corner of your Chrome browser. See a picture or a name in the circle? If so, you're logged in to the browser, and Google might be tapping into your Web activity to target ads. Don't recall signing in? I didn't, either. Chrome recently started doing that automatically when you use Gmail... I felt hoodwinked when Google quietly began signing Gmail users into Chrome last fall. Google says the Chrome shift didn't cause anybody's browsing history to be "synced" unless they specifically opted in — but I found mine was being sent to Google and don't recall ever asking for extra surveillance..."

Also:

"Google's product managers told me in an interview that Chrome prioritizes privacy choices and controls, and they're working on new ones for cookies. But they also said they have to get the right balance with a "healthy Web ecosystem" (read: ad business). Firefox's product managers told me they don't see privacy as an "option" relegated to controls. They've launched a war on surveillance, starting last month with "enhanced tracking protection" that blocks nosy cookies by default on new Firefox installations..."

This tech expert concluded:

"It turns out, having the world's biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop. It made me decide to ditch Chrome for a new version of nonprofit Mozilla's Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine."
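A curious reader can approximate the columnist's tally at home. The sketch below (Python, standard library only) counts requests to hosts other than the site you visited, using a HAR file exported from the browser's developer tools; the file name `example.har` and the `aetna.com` example are illustrative assumptions, not part of the columnist's methodology:

```python
import json
from collections import Counter
from urllib.parse import urlsplit

def third_party_hosts(har_path, first_party):
    """Tally request hosts other than the first-party site,
    from a HAR file exported via the browser's DevTools."""
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]
    hosts = Counter()
    for entry in entries:
        host = urlsplit(entry["request"]["url"]).hostname or ""
        # Keep only hosts that are neither the site nor its subdomains.
        if host != first_party and not host.endswith("." + first_party):
            hosts[host] += 1
    return hosts

# Hypothetical usage: export a HAR while visiting a login page,
# then see which third parties were contacted.
# print(third_party_hosts("example.har", "aetna.com").most_common(10))
```

Run against a real HAR export, a tally like this makes the "11,189 requests" figure easy to reproduce for your own browsing.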

Regular readers of this blog are aware of how Google tracks consumers' online purchases, the worst mobile apps for privacy, and privacy alternatives such as the Brave web browser, the DuckDuckGo search engine, virtual private network (VPN) software, and more. Yes, you can use the Firefox browser on your Apple iPhone. I do.

Me? I've used the Firefox browser since about 2010 on my (Windows) laptop, and the DuckDuckGo search engine since 2013. I stopped using Bing, Yahoo, and Google search engines in 2013. While Firefox installs with Google as the default search engine, you can easily switch it to DuckDuckGo. I did. I am very happy with the results.

Which web browser and search engine do you use? What do you do to protect your online privacy?


The Worst Mobile Apps For Privacy

ExpressVPN compiled its list for 2019 of the four worst mobile apps for privacy. If you value your online privacy and want to protect yourself, the security firm advises consumers to, "Delete them now." The list of apps includes both predictable items and some surprises:

"1. Angry Birds: If you were an international spying organization, which app would you target to harvest smartphone user information? If you picked Angry Birds, congratulations! You’re thinking just like the NSA and GCHQ did... what it lacks in gameplay, it certainly makes up for in leaky data... A mobile ad platform placed a code snippet in Angry Birds that allowed the company to target advertisements to users based on previously collected information. Unfortunately, the ad’s library of data was visible, meaning it was leaking user information such as phone number, call logs, location, political affiliation, sexual orientation, and marital status..."

"2. The YouVersion Bible App: The YouVersion Bible App is on more than 300 million devices around the world. It claims to be the No. 1 Bible app and comes with over 1,400 Bibles in over 1,000 languages. It also harvests data... Notable permissions the app demands are full internet access, the ability to connect and disconnect to Wi-Fi, modify stored content on the phone, track the device’s location, and read all a user’s contacts..."

Read the full list of sketchy apps at the ExpressVPN site.


How To Do Online Banking Safely And Securely

Most people love the convenience of online banking via their smartphone or other mobile device. However, it is important to do it safely and securely. How? NordVPN listed four items:

"1. Don't lose your phone: The biggest security threat of your mobile phone is also its greatest asset – its size. Phones are small, handy, beautiful, and easy to lose..."

So, keep your phone in your hand. Never place it on a table out of sight. Of course, you should lock your phone with a strong password. NordVPN commented about other locking options:

"Facial recognition: convenient but not secure, since it can sometimes be bypassed with a photograph... Fingerprints: low false-acceptance rates, perfect if you don’t often wear gloves."

More advice:

"2. Use the official banking app, not the browser... If you aren’t careful, you could download a fake banking app created by scammers to break into your account. Make sure your bank created or approves of the app you are downloading. Get it from their website. Moreover, do not use mobile browsers to log in to your bank account – they are less secure than bank-sanctioned apps..."

Obviously, you should sign out of the mobile app when finished with your online banking session. Otherwise, a thief with your stolen phone has direct access to your money and accounts. NordVPN also advises consumers to do their homework first: read app ratings and reviews before downloading any mobile apps.

Readers of this blog are probably familiar with the next item:

"4. Don’t use mobile banking on public Wi-Fi: Anyone on a public Wi-Fi network is in danger of a security breach. Most of these networks lack basic security measures and have poor router configurations and weak passwords..."

Popular places with public Wi-Fi include coffee shops, fast food restaurants, supermarkets, airports, libraries, and hotels. If you must do online banking in a public place, NordVPN advised:

"... use your cellular network instead. It’s not perfect, but it’s better than public Wi-Fi. Better yet, turn on a virtual private network (VPN) and then use public Wi-Fi..."

There you have it. Read the entire online banking article by NordVPN. Ignore this advice at your own peril.


Brave Alerts FTC On Threats From Business Practices With Big Data

The U.S. Federal Trade Commission (FTC) held a "Privacy, Big Data, And Competition" hearing on November 6-8, 2018 as part of its "Competition And Consumer Protection in the 21st Century" series of discussions. During that session, the FTC asked for input on several topics:

  1. "What is “big data”? Is there an important technical or policy distinction to be drawn between data and big data?
  2. How have developments involving data – data resources, analytic tools, technology, and business models – changed the understanding and use of personal or commercial information or sensitive data?
  3. Does the importance of data – or large, complex data sets comprising personal or commercial information – in a firm’s ordinary course operations change how the FTC should analyze mergers or firm conduct? If so, how? Does data differ in importance from other assets in assessing firm or industry conduct?
  4. What structural, behavioral or conduct remedies should the FTC consider when remedying antitrust harm in a market or industry where data or personal or commercial information are a significant product or a key competitive input?
  5. Are there policy recommendations that would facilitate competition in markets involving data or personal or commercial information that the FTC should consider?
  6. Do the presence of personal information or privacy concerns inform or change competition analysis?
  7. How do state, federal, and international privacy laws and regulations, adopted to protect data and consumers, affect competition, innovation, and product offerings in the United States and abroad?"

Brave, the developer of a web browser, submitted comments to the FTC which highlighted two concerns:

"First, big tech companies “cross-use” user data from one part of their business to prop up others. This stifles competition, and hurts innovation and consumer choice. Brave suggests that FTC should investigate. Second, the GDPR is emerging as a de facto international standard. Whether this helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws."

A letter by Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, described in detail the company's concerns:

"The cross-use and offensive leveraging of personal information from one line of business to another is likely to have anti-competitive effects. Indeed anti-competitive practices may be inevitable when companies with Google’s degree of market dominance update their privacy policies to include the cross-use of personal information. The result is that a company can leverage all the personal information accumulated from its users in one line of business to dominate other lines of business too. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects... The result is that nascent and potential competitors will be stifled, and consumer choice will be limited... The cross-use of data between different lines of business is analogous to the tying of two products. Indeed, tying and cross-use of data can occur at the same time, as Google Chrome’s latest “auto sign in to everything” controversy illustrates..."

Historically, Google let Chrome web browser users decide whether or not to sign in for cross-device usage. The Chrome 69 update forced auto sign-in, but a Chrome 70 update restored users' choice after numerous complaints and criticism.

Regarding topic #7 by the FTC, Brave's response said:

"A de facto international standard appears to be emerging, based on the European Union’s General Data Protection Regulation (GDPR)... the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China bring more than half of global GDP under a common standard. Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies... there is an opportunity in this trend. The United States can assume the global lead by adopting the emerging GDPR standard, and by investing in world-leading regulation that pursues test cases, and defines practical standards..."

Currently, companies collect, archive, share, and sell consumers' personal information at will -- often without notice or consent. While all 50 states and several territories have breach notification laws, most states have not upgraded those laws to cover biometric and passport data. And while the Health Insurance Portability and Accountability Act (HIPAA) is the federal law governing healthcare data and related breaches, many consumers share health data with social media sites -- forfeiting their HIPAA protections.

Moreover, it's an unregulated free-for-all of data collection, archiving, and sharing by telecommunications companies after the revoking in 2017 of broadband privacy protections for consumers in the USA. Plus, laws have historically focused upon "declared data" (e.g., the data users upload or submit into websites or apps) while ignoring "inferred data" -- which is arguably just as sensitive and revealing.

Regarding future federal privacy legislation, Brave added:

"... The GDPR is compatible with a United States view of consumer protection and privacy principles. Indeed, the FTC has proposed important privacy protections to legislators in 2009, and again in 2012 and 2014, which ended up being incorporated in the GDPR. The high-level principles of the GDPR are closely aligned, and often identical to, the United States’ privacy principles... The GDPR also incorporates principles endorsed by the U.S. in the 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data; and the principles endorsed by the United States this year, in Article 19.8 (3) of the new United States-Mexico-Canada Agreement."

"The GDPR differs from established United States privacy principles in its explicit reference to “proportionality” as a precondition of data use, and in its more robust approach to data minimization and to purpose specification. In our view, a federal law should incorporate these elements too. We also recommend that federal law should adopt the GDPR definitions of concepts such as “personal data”, “legal basis” including opt-in “consent”, “processing”, “special category personal data”, ”profiling”, “data controller”, “automated decision making”, “purpose limitation”, and so forth, and tools such as data protection impact assessments, breach notification, and records of processing activities."

"In keeping with the fair information practice principles (FIPPs) of the 1974 US Privacy Act, Brave recommends that a federal law should require that the collection of personal information is subject to purpose specification. This means that personal information shall only be collected for specific and explicit purposes. Personal information should not be used beyond those purposes without consent, unless a further purpose poses no risk of harm and is compatible with the initial purpose, in which case the data subject should have the opportunity to opt out."

Submissions by Brave and others are available to the public at the FTC website in the "Public Comments" section.


Study: Privacy Concerns Have Caused Consumers To Change How They Use The Internet

Facebook commissioned a study by the Economist Intelligence Unit (EIU) to understand "internet inclusion" globally, or how people use the Internet, the benefits received, and the obstacles experienced. The latest survey included 5,069 respondents from 100 countries in Asia-Pacific, the Americas, Europe, the Middle East, North Africa and Sub-Saharan Africa.

Overall findings in the report cited:

"... cause for both optimism and concern. We are seeing steady progress in the number and percentage of households connected to the Internet, narrowing the gender gap and improving accessibility for people with disabilities. The Internet also has become a crucial tool for employment and obtaining job-related skills. On the other hand, growth in Internet connections is slowing, especially among the lowest income countries, and efforts to close the digital divide are stalling..."

The EIU describes itself as "the world leader in global business intelligence," helping companies, governments, and banks understand how the world is changing, seize opportunities created by those changes, and manage associated risks. So, any provider of social media services globally would greatly value the EIU's services.

The chart below highlights some of the benefits mentioned by survey respondents:

[Chart: Benefits of internet usage cited by survey respondents. Source: EIU Inclusive Internet Index, 2019]

Respondents cited other benefits, too: almost three-quarters (74.4%) said the Internet is more effective than other methods for finding jobs, and 70.5% said their job prospects have improved because of the Internet. So, job seekers and employers both benefit.

Key findings regarding online privacy (emphasis added):

"... More than half (52.2%) of [survey] respondents say they are not confident about their online privacy, hardly changed from 51.5% in the 2018 survey... Most respondents are changing the way they use the Internet because they believe some information may not remain private. For example, 55.8% of respondents say they limit how much financial information they share online because of privacy concerns. This is relatively consistent across different age groups and household income levels... 42.6% say they limit how much personal health and medical information they share. Only 7.5% of respondents say privacy concerns have not changed the way they use the Internet."

So, the lack of online privacy affects how people use the internet -- for business and pleasure. The chart below highlights the types of online changes:

[Chart: How privacy concerns have changed respondents' internet usage. Source: EIU Inclusive Internet Index, 2019]

Findings regarding privacy and online shopping:

"Despite lingering privacy concerns, people are increasingly shopping online. Whether this continues in the future may hinge on attitudes toward online safety and security... A majority of respondents say that making online purchases is safe and secure, but, at 58.8% it was slightly lower than the 62.1% recorded in the 2018 survey."

So, the percentage of respondents who consider online purchases safe and secure went in the wrong direction -- down. Not good. There were regional differences, too, about online privacy:

"In Europe, the share of respondents confident about their online privacy increased by 8 percentage points from the 2018 survey, probably because of the General Data Protection Regulation (GDPR), the EU’s comprehensive data privacy rules that came into force in May 2018. However, the Middle East and North Africa region saw a decline of 9 percentage points compared with the 2018 survey."

So, sensible legislation to protect consumers' online privacy can have positive impacts. There were other regional differences:

"Trust in online sources of information remained relatively stable, except in the West. Political turbulence in the US and UK may have played a role in causing the share of respondents in North America and Europe who say they trust information on government websites and apps to retreat by 10 percentage points and 6 percentage points, respectively, compared with the 2018 survey."

So, stability is important. The report's authors concluded:

"The survey also reflects anxiety about online privacy and a decline in trust in some sources of information. Indeed, trust in government information has fallen since last year in Europe and North America. The growth and importance of the digital economy will mean that alleviating these anxieties should be a priority of companies, governments, regulators and developers."

Addressing those anxieties is critical, if governments in the West are serious about facilitating business growth via consumer confidence and internet usage. Download the Inclusive Internet Index 2019 Executive Summary (Adobe PDF) report.


'Software Pirates' Stole Apple Tech To Distribute Hacked Mobile Apps To Consumers

Prior news reports highlighted the abuse of Apple's corporate digital certificates. Now, we learn that this abuse is more widespread than first thought. CNet reported:

"Pirates used Apple's enterprise developer certificates to put out hacked versions of some major apps... The altered versions of Spotify, Angry Birds, Pokemon Go and Minecraft make paid features available for free and remove in-app ads... The pirates appear to have figured out how to use digital certs to get around Apple's carefully policed App Store by saying the apps will be used only by their employees, when they're actually being distributed to everyone."

So, bad actors abuse technology intended for a company's employees to distribute apps directly to consumers. Software pirates, indeed.

To avoid hacked apps, consumers need to shop wisely and download only from trusted sources. A fix is underway. According to CNet:

"Apple will reportedly take steps to fight back by requiring all app makers to use its two-factor authentication protocol from the end of February, so logging into an Apple ID will require a password and code sent to a trusted Apple device."

Let's hope that fix is sufficient.
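Apple's fix delivers codes via push to a trusted device, but the underlying idea of short-lived one-time codes can be illustrated with the standard TOTP algorithm (RFC 6238). This is a minimal sketch in Python using only the standard library; the secret is the well-known RFC test value, not anything Apple-specific:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(for_time) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both ends derive the same code from a shared secret plus the clock,
# so a stolen password alone is not enough to log in.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

Because the code changes every 30 seconds, intercepting one code is far less useful to an attacker than intercepting a password.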


Popular iOS Apps Record All In-App Activity Causing Privacy, Data Security, And Other Issues

As the internet has evolved, user testing and market research practices have also evolved. This may surprise consumers. TechCrunch reported that many popular Apple mobile apps record everything customers do within the apps:

"Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play them back to see how its users interacted with the app to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers."

So, customers' entire app sessions and activities have been recorded. Of course, marketers need to understand their customers' needs, and how users interact with their mobile apps, to build better products, services, and apps. However, in doing so some apps have security vulnerabilities:

"The App Analyst... recently found Air Canada’s iPhone app wasn’t properly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles."

Not good, for a couple of reasons. First, sensitive data like payment information (e.g., credit/debit card numbers, passport numbers, bank account numbers) should be masked. Second, when sensitive information isn't masked, more data security problems arise. How long is this app usage data archived? Which employees, contractors, and business partners have access to the archive? What security methods protect the archive from abuse?

In short, unauthorized persons may have access to the archives and the sensitive information they contain. For example, market researchers probably have little or no need for specific customers' payment information. Sensitive information in these archives should be encrypted to provide the best protection from abuse and from data breaches.
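Masking can happen on the device, before a replay frame ever leaves the app. Here is a minimal sketch in Python; the patterns are illustrative assumptions, since a production masker would be driven by the app's own field metadata rather than pattern matching alone:

```python
import re

# Illustrative patterns only; real apps should tag sensitive input
# fields explicitly instead of relying on regexes alone.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,16}\b"),          # digit runs shaped like card numbers
    re.compile(r"\b[A-Z]{1,2}\d{6,9}\b"),  # passport-style identifiers
]

def mask(text):
    """Replace sensitive substrings with asterisks of equal length."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub(lambda m: "*" * len(m.group()), text)
    return text

print(mask("Card 4111111111111111, passport K1234567"))
```

Had Air Canada's replays passed through a step like this before transmission, the exposed passport and card numbers would have arrived as asterisks.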

Sadly, there is more bad news:

"Apps that are submitted to Apple’s App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user’s screen... Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either."

So, the app session recordings were done covertly... without explicit language to provide meaningful and clear notice to consumers. I encourage everyone to read the entire TechCrunch article, which also includes responses by some of the companies mentioned. In my opinion, most of the responses fell far short with lame, boilerplate statements.

All of this is very troubling. And, there is more.

The TechCrunch article didn't discuss it, but historically companies hired testing firms to recruit user test participants -- usually current and prospective customers. Test participants were paid for their time. (I know because as a former user experience professional I conducted such in-person test sessions where clients paid test participants.) Things have changed. Not only has user testing and research migrated online, but companies use automated tools to perform perpetual, unannounced user testing -- all without compensating test participants.

While change is inevitable, not all change is good. Plus, things can be done in better ways. If the test information is that valuable, then pay test participants. Otherwise, this seems like another example of corporate greed at consumers' expense. And, it's especially egregious if data transmissions of the recorded app sessions to developers' servers use up cellular data plan capacity consumers paid for. Some consumers (e.g., elders, children, the poor) cannot afford the costs of unlimited cellular data plans.

After this TechCrunch report, Apple notified developers to either stop or disclose screen recording:

"Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity... We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary..."

Good. That's a start. Still, user testing and market research is not a free pass for developers to ignore or skip data security best practices. Given these covert recorded app sessions, mobile apps must be continually tested. Otherwise, some ethically-challenged companies may re-introduce covert screen recording features. What are your opinions?


Survey: People In Relationships Spy On Cheating Partners. FTC: Singles Looking For Love Are The Biggest Target Of Scammers

Happy Valentine's Day! First, BestVPN announced the results of a survey of 1,000 adults globally about relationships and trust in today's digital age where social media usage is very popular. Key findings:

"... nearly 30% of respondents admitted to using tracking apps to catch a partner [suspected of cheating]. After all, over a quarter of those caught cheating were busted by modern technology... 85% of those caught out in the past now take additional steps to protect their privacy, including deleting their browsing data or using a private browsing mode."

Below is an infographic with more findings from the survey.

[Infographic: Survey findings on relationships, trust, and technology. Source: BestVPN, February 2019]

Second, the U.S. Federal Trade Commission (FTC) issued a warning earlier this week about fraud affecting single persons:

"... romance scams generated more reported losses than any other consumer fraud type reported to the agency... The number of romance scams reported to the FTC has grown from 8,500 in 2015 to more than 21,000 in 2018, while reported losses to these scams more than quadrupled in recent years—from $33 million in 2015 to $143 million last year. For those who said they lost money to a romance scam, the median reported loss was $2,600, with those 70 and over reporting the biggest median losses at $10,000."

"Romance scammers often find their victims online through a dating site or app or via social media. These scammers create phony profiles that often involve the use of a stranger’s photo they have found online. The goals of these scams are often the same: to gain the victim’s trust and love in order to get them to send money through a wire transfer, gift card, or other means."

So, be careful out there. Don't cheat, and beware of scammers and dating imposters. You have been warned.


Senators Demand Answers From Facebook And Google About Project Atlas And Screenwise Meter Programs

After news reports surfaced about Facebook's Project Atlas, a secret program in which Facebook paid teenagers (and other users) to install a research app on their phones that tracked and collected information about their mobile usage, several United States Senators demanded explanations. Three Senators sent a joint letter on February 7, 2019 to Mark Zuckerberg, Facebook's chief executive officer.

The joint letter to Facebook (Adobe PDF format) stated, in part:

"We write concerned about reports that Facebook is collecting highly-sensitive data on teenagers, including their web browsing, phone use, communications, and locations -- all to profile their behavior without adequate disclosure, consent, or oversight. These reports fit with longstanding concerns that Facebook has used its products to deeply intrude into personal privacy... According to a journalist who attempted to register as a teen, the linked registration page failed to impose meaningful checks on parental consent. Facebook has more rigorous mechanisms to obtain and verify parental consent, such as when it is required to sign up for Messenger Kids... Facebook's monitoring under Project Atlas is particularly concerning because the data collection performed by the research app was deeply invasive. Facebook's registration process encouraged participants to "set it and forget it," warning that if a participant disconnected from the monitoring for more than ten minutes for a few days, they could be disqualified. Behind the scenes, the app watched everything on the phone."

The letter included another example highlighting the alleged lack of meaningful disclosures:

"... the app added a VPN connection that would automatically route all of a participant's traffic through Facebook's servers. The app installed a SSL root certificate on the participant's phone, which would allow Facebook to intercept or modify data sent to encrypted websites. As a result, Facebook would have limitless access to monitor normally secure web traffic, even allowing Facebook to watch an individual log into their bank account or exchange pictures with their family. None of the disclosures provided at registration offer a meaningful explanation about how the sensitive data is used, how long it is kept, or who within Facebook has access to it..."

The letter was signed by Senators Richard Blumenthal (Democrat, Connecticut), Edward J. Markey (Democrat, Massachusetts), and Josh Hawley (Republican, Missouri). Based upon news reports that Facebook's Research app operated with functionality similar to the Onavo VPN app, which Apple banned last year, the Senators concluded:

"Faced with that ban, Facebook appears to have circumvented Apple's attempts to protect consumers."

The joint letter also listed twelve questions the Senators want detailed answers about. Below are selected questions from that list:

"1. When did Project Atlas begin and how many individuals participated? How many participants were under age 18?"

"3. Why did Facebook use a less strict mechanism for verifying parental consent than is required for Messenger Kids or General Data Protection Regulation (GDPR) compliance?"

"4. What specific types of data were collected (e.g., device identifiers, usage of specific applications, content of messages, friends lists, locations, et al.)?"

"5. Did Facebook use the root certificate installed on a participant's device by the Project Atlas app to decrypt and inspect encrypted web traffic? Did this monitoring include analysis or retention of application-layer content?"

"7. Were app usage data or communications content collected by Project Atlas ever reviewed by or available to Facebook personnel or employees of Facebook partners?"

"8. Given that Project Atlas acknowledged the collection of "data about [users'] activities and content within those apps," did Facebook ever collect or retain the private messages, photos, or other communications sent or received over non-Facebook products?"

"11. Why did Facebook bypass Apple's app review? Has Facebook bypassed the App Store approval process using enterprise certificates for any other app that was used for non-internal purposes? If so, please list and describe those apps."

Read the entire letter to Facebook (Adobe PDF format). Also on February 7th, the Senators sent a similar letter to Google (Adobe PDF format), addressed to Hiroshi Lockheimer, the Senior Vice President of Platforms & Ecosystems. It stated in part:

"TechCrunch has subsequently reported that Google maintained its own measurement program called "Screenwise Meter," which raises similar concerns as Project Atlas. The Screenwise Meter app also bypassed the App Store using an enterprise certificate and installed a VPN service in order to monitor phones... While Google has since removed the app, questions remain about why it had gone outside Apple's review process to run the monitoring program. Platforms must maintain and consistently enforce clear policies on the monitoring of teens and what constitutes meaningful parental consent..."

The letter to Google includes a similar list of eight questions the Senators seek detailed answers about. Some notable questions:

"5. Why did Google bypass App Store approval for the Screenwise Meter app using enterprise certificates? Has Google bypassed the App Store approval process using enterprise certificates for any other non-internal app? If so, please list and describe those apps."

"6. What measures did Google have in place to ensure that teenage participants in Screenwise Meter had authentic parental consent?"

"7. Given that Apple removed Onavoo protect from the App Store for violating its terms of service regarding privacy, why has Google continued to allow the Onavo Protect app to be available on the Play Store?"

The lawmakers have asked for responses by March 1st. Thanks to all three Senators for protecting consumers' -- and children's -- privacy... and for enforcing transparency and accountability.


Facebook Paid Teens To Install Unauthorized Spyware On Their Phones. Plenty Of Questions Remain

While today is Facebook's 15th anniversary, more important news dominates. Last week featured plenty of news about Facebook. TechCrunch reported on Tuesday:

"Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe... Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits."

So, teenagers installed surveillance software on their phones and tablets, to spy for Facebook on themselves, Facebook's competitors, and others. This is huge news for several reasons. First, the "Facebook Research" app is VPN (Virtual Private Network) software which:

"... lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy..."

Reportedly, the Research app collected massive amounts of information: private messages in social media apps, chats from instant messaging apps, photos/videos sent to others, emails, web searches, web browsing activity, and geo-location data. So, a very intrusive app. And, after being forced to remove one intrusive app from Apple's store, Facebook continued anyway -- with another app that performed the same function. Not good.

Second, there is the moral issue of using the youngest users as spies... persons who arguably have the least experience and skill at reading complex documents: corporate terms-of-use and privacy policies. I wonder how many teenagers notified their friends of the spying and data collection. How many teenagers fully understood what they were doing? How many parents were aware of the activity and payments? How many parents notified the parents of their children's friends? How many teens installed the spyware on both their iPhones and iPads? Lots of unanswered questions.

Third, Apple responded quickly. TechCrunch reported Wednesday morning:

"... Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down... Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store."

Facebook's usage of the Enterprise Certificate is significant. TechCrunch also published a statement by Apple:

"We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization... Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked..."

So, the Research app violated Apple's policy. Not good. The app also performs similar functions as the banned Onavo VPN app. Worse. This sounds like an end-run to me. So, as punishment for its end-run actions, Apple temporarily disabled Facebook's certificates for internal corporate apps.

Axios described Facebook's behavior very well:

"Facebook took a program designed to let businesses internally test their own app and used it to monitor most, if not everything, a user did on their phone — a degree of surveillance barred in the official App Store."

And the animated Facebook image in the Axios article sure looks like a liar-liar-logo-on-fire image. LOL! Pure gold! Seriously, Facebook's behavior indicates questionable ethics, and/or an expectation of not getting caught. Reportedly, the internal apps which were shut down included shuttle schedules, campus maps, and company calendars. After that, some Facebook employees discussed quitting.

And, it raises more questions. Which Facebook executives approved Project Atlas? What advice did Facebook's legal staff provide prior to approval? Was that advice followed or ignored?

Fourth, TechCrunch also reported:

"Facebook’s Research program will continue to run on Android."

What? So, Google devices were involved, too. Is this spy program okay with Google executives? A follow-up report on Wednesday by TechCrunch:

"Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple... Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app..."

Oy! So, Google operates like Facebook. Also reported by TechCrunch:

"The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices..."

So, Google will terminate its spy program on Apple devices but, like Facebook, continue it on Android devices. Hmmmmm. Well, that answers some questions. I guess Google executives are okay with this spy program. More questions remain.

Fifth, Facebook tried to defend the Research app and its actions in an internal memo to employees. On Thursday, TechCrunch tore apart the claims in an internal Facebook memo from vice president Pedro Canahuati. Chiefly:

"Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause and CentreCode to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly... Facebook claims it wasn’t “spying,” yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not specify data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity.” The parental consent form from Facebook and Applause lists none of the specific types of data collected...

So, Research app participants (e.g., teenagers, parents) couldn't discuss nor warn their friends (and their friends' parents) about the data collection. I strongly encourage everyone to read the entire TechCrunch analysis. It is eye-opening.

Sixth, a reader shared concerns about whether Facebook's actions violated federal laws. Did Project Atlas violate the Digital Millennium Copyright Act (DMCA); specifically the "anti-circumvention" provision, which prohibits avoiding the security protections in software? Did it violate the Computer Fraud and Abuse Act? What about breach-of-contract and fraud laws? What about states' laws? So, one could ask similar questions about Google's actions, too.

I am not an attorney. Hopefully, some attorneys will weigh in on these questions. Probably, some skilled attorneys will investigate various legal options.

All of this is very disturbing. Is this what consumers can expect of Silicon Valley firms? Is this the best tech firms can do? Is this the low level the United States has sunk to? Kudos to the TechCrunch staff for some excellent reporting.

What are your opinions of Project Atlas? Of Facebook's behavior? Of Google's?


Companies Want Your Location Data. Recent Examples: The Weather Channel And Burger King

It is easy to find examples where companies use mobile apps to collect consumers' real-time GPS location data, so they can archive and resell that information later for additional profits. First, ExpressVPN reported:

"The city of Los Angeles is suing the Weather Company, a subsidiary of IBM, for secretly mining and selling user location data with the extremely popular Weather Channel App. Stating that the app unfairly manipulates users into enabling their location settings for more accurate weather reports, the lawsuit affirms that the app collects and then sells this data to third-party companies... Citing a recent investigation by The New York Times that revealed more than 75 companies silently collecting location data (if you haven’t seen it yet, it’s worth a read), the lawsuit is basing its case on California’s Unfair Competition Law... the California Consumer Privacy Act, which is set to go into effect in 2020, would make it harder for companies to blindly profit off customer data... This lawsuit hopes to fine the Weather Company up to $2,500 for each violation of the Unfair Competition Law. With more than 200 million downloads and a reported 45+ million users..."

Long-term readers remember that a data breach at IBM in 2007 prompted this blog. It's not only internet service providers which collect consumers' location data. Advertisers, retailers, and data brokers want it, too.

Second, last month Burger King ran a national "Whopper Detour" promotion which offered customers a one-cent Whopper burger if they went near a competitor's store. News 5, the ABC News affiliate in Cleveland, reported:

"If you download the Burger King mobile app and drive to a McDonald’s store, you can get the penny burger until December 12, 2018, according to the fast-food chain. You must be within 600 feet of a McDonald's to claim your discount, and no, McDonald's will not serve you a Whopper — you'll have to order the sandwich in the Burger King app, then head to the nearest participating Burger King location to pick it up. More information about the deal can be found on the app on Apple and Android devices."

Next, the relevant portions from Burger King's privacy policy for its mobile apps (emphasis added):

"We collect information you give us when you use the Services. For example, when you visit one of our restaurants, visit one of our websites or use one of our Services, create an account with us, buy a stored-value card in-restaurant or online, participate in a survey or promotion, or take advantage of our in-restaurant Wi-Fi service, we may ask for information such as your name, e-mail address, year of birth, gender, street address, or mobile phone number so that we can provide Services to you. We may collect payment information, such as your credit card number, security code and expiration date... We also may collect information about the products you buy, including where and how frequently you buy them... we may collect information about your use of the Services. For example, we may collect: 1) Device information - such as your hardware model, IP address, other unique device identifiers, operating system version, and settings of the device you use to access the Services; 2) Usage information - such as information about the Services you use, the time and duration of your use of the Services and other information about your interaction with content offered through a Service, and any information stored in cookies and similar technologies that we have set on your device; and 3) Location information - such as your computer’s IP address, your mobile device’s GPS signal or information about nearby WiFi access points and cell towers that may be transmitted to us..."

So, for the low, low price of one hamburger, participants in this promotion gave RBI, the parent company which owns Burger King, perpetual access to their real-time location data. And, since RBI knows when, where, and how long its customers visit competitors' fast-food stores, it also knows similar details about everywhere else they go -- including school, work, doctors, hospitals, and more. Sweet deal for RBI. A poor deal for consumers.

Expect to see more corporate promotions like this, which privacy advocates call "surveillance capitalism."

Consumers' real-time location data is very valuable. Don't give it away for free. If you decide to share it, demand a fair, ongoing payment in exchange. Read privacy and terms-of-use policies before downloading mobile apps, so you don't get abused or taken. Opinions? Thoughts?


After Promises To Stop, Mobile Providers Continued Sales Of Location Data About Consumers. What You Can Do To Protect Your Privacy

Sadly, history repeats itself. First, the history: after getting caught selling consumers' real-time GPS location data without notice nor consumers' consent, in 2018 mobile providers promised to stop the practice. The Ars Technica blog reported in June, 2018:

"Verizon and AT&T have promised to stop selling their mobile customers' location information to third-party data brokers following a security problem that leaked the real-time location of US cell phone users. Senator Ron Wyden (D-Ore.) recently urged all four major carriers to stop the practice, and today he published responses he received from Verizon, AT&T, T-Mobile USA, and SprintWyden's statement praised Verizon for "taking quick action to protect its customers' privacy and security," but he criticized the other carriers for not making the same promise... AT&T changed its stance shortly after Wyden's statement... Senator Wyden recognized AT&T's change on Twitter and called on T-Mobile and Sprint to follow suit."

Kudos to Senator Wyden. The other mobile providers soon complied... sort of.

Second, some background: real-time location data is very valuable stuff. It indicates where you are as you (with your phone or other mobile devices) move about the physical world in your daily routine. No delays. No lag. Yes, there are appropriate uses for real-time GPS location data -- such as by law enforcement to quickly find a kidnapped person or child before further harm happens. But, do any and all advertisers need real-time location data about consumers? Data brokers? Others?

I think not. Domestic violence and stalking victims probably would not want their, nor their children's, real-time location data resold publicly. Most parents would not want their children's location data resold publicly. Most patients probably would not want their location data broadcast every time they visit their physician, specialist, rehab, or a hospital. Corporate executives, government officials, and attorneys conducting sensitive negotiations probably wouldn't want their location data collected and resold, either.

So, most consumers probably don't want their real-time location data resold publicly. Well, some of you make location-specific announcements via posts on social media. That's your choice, but I conclude that most people don't. Consumers want control over their location information so they can decide if, when, and with whom to share it. The mass collection and sales of consumers' real-time location data by mobile providers prevents choice -- and it violates persons' privacy.

Third, fast forward seven months from 2018. TechCrunch reported on January 9th:

"... new reporting by Motherboard shows that while [reseller] LocationSmart faced the brunt of the criticism [in 2018], few focused on the other big player in the location-tracking business, Zumigo. A payment of $300 and a phone number was enough for a bounty hunter to track down the participating reporter by obtaining his location using Zumigo’s location data, which was continuing to pay for access from most of the carriers. Worse, Zumigo sold that data on — like LocationSmart did with Securus — to other companies, like Microbilt, a Georgia-based credit reporting company, which in turn sells that data on to other firms that want that data. In this case, it was a bail bond company, whose bounty hunter was paid by Motherboard to track down the reporter — with his permission."

"Everyone seemed to drop the ball. Microbilt said the bounty hunter shouldn’t have used the location data to track the Motherboard reporter. Zumigo said it didn’t mind location data ending up in the hands of the bounty hunter, but still cut Microbilt’s access. But nobody quite dropped the ball like the carriers, which said they would not to share location data again."

The TechCrunch article rightly held offending mobile providers accountable. Example: T-Mobile's chief executive tweeted last year:

Then, Legere tweeted last week:

The right way? In my view, real-time location data never should have been collected and resold. Almost a year after reports first surfaced, T-Mobile is finally getting around to stopping the practice and terminating its relationships with location data resellers -- two months from now. Why not announce this slow wind-down last year when the issue first surfaced? "Emergency assistance" is the reason we are supposed to believe. Yeah, right.

The TechCrunch article rightly took AT&T and Verizon to task, too. Good. I strongly encourage everyone to read the entire TechCrunch article.

What can consumers make of this? There seem to be several takeaways:

  1. Transparency is needed, since corporate privacy policies don't list all (or often any) business partners. This lack of transparency provides an easy way for mobile providers to resume location data sales without notice to anyone and without consumers' consent,
  2. Corporate executives will say anything in tweets/social media. A healthy dose of skepticism by consumers and regulators is wise,
  3. Consumers can't trust mobile providers. They are happy to make money selling consumers' real-time location data, regardless of consumers' desire not to have their data collected and sold,
  4. Data brokers and credit reporting agencies want consumers' location data,
  5. To ensure privacy, consumers also must take action: adjust the privacy settings on your phones to limit or deny mobile apps access to your location data. I did. It's not hard. Do it today, and
  6. Oversight is needed, since a) mobile providers have, at best, sloppy to minimal oversight and internal processes to prevent location data sales; and b) data brokers and others are readily available to enable and facilitate location data transactions.

I cannot over-emphasize #5 above. What issues or takeaways do you see? What are your opinions about real-time location data?


Samsung Phone Owners Unable To Delete Facebook And Other Apps. Anger And Privacy Concerns Result

Some consumers have learned that they can't delete Facebook and other mobile apps from their Samsung smartphones. Bloomberg described one consumer's experiences:

"Winke bought his Samsung Galaxy S8, an Android-based device that comes with Facebook’s social network already installed, when it was introduced in 2017. He has used the Facebook app to connect with old friends and to share pictures of natural landscapes and his Siamese cat -- but he didn’t want to be stuck with it. He tried to remove the program from his phone, but the chatter proved true -- it was undeletable. He found only an option to "disable," and he wasn’t sure what that meant."

Samsung phones operate using Google's Android operating system (OS). The "chatter" refers to online complaints by Samsung phone owners. There were plenty of complaints, ranging from snarky:

To informative:

And:

Some persons shared their (understandable) anger:

One person reminded consumers of bigger issues with Android OS phones:

And, that privacy concern still exists. Sophos Labs reported:

"Advocacy group Privacy International announced the findings in a presentation at the 35th Chaos Computer Congress late last month. The organization tested 34 apps and documented the results, as part of a downloadable report... 61% of the apps tested automatically tell Facebook that a user has opened them. This accompanies other basic event data such as an app being closed, along with information about their device and suspected location based on language and time settings. Apps have been doing this even when users don’t have a Facebook account, the report said. Some apps went far beyond basic event information, sending highly detailed data. For example, the travel app Kayak routinely sends search information including departure and arrival dates and cities, and numbers of tickets (including tickets for children)."

After multiple data breaches and privacy snafus, some Facebook users have decided to either quit the Facebook mobile app or quit the service entirely. Now, some Samsung phone users have learned that quitting can be more difficult, and they don't have as much control over their devices as they thought.

How did this happen? Bloomberg explained:

"Samsung, the world’s largest smartphone maker, said it provides a pre-installed Facebook app on selected models with options to disable it, and once it’s disabled, the app is no longer running. Facebook declined to provide a list of the partners with which it has deals for permanent apps, saying that those agreements vary by region and type... consumers may not know if Facebook is pre-loaded unless they specifically ask a customer service representative when they purchase a phone."

Not good. So, now we know that there are two classes of mobile apps: 1) pre-installed and 2) permanent. Pre-installed apps come on new devices. Some pre-installed apps can be deleted by users. Permanent mobile apps are pre-installed apps which cannot be removed/deleted by users. Users can only disable permanent apps.

Sadly, there's more and it's not only Facebook. Bloomberg cited other agreements:

"A T-Mobile US Inc. list of apps built into its version of the Samsung Galaxy S9, for example, includes the social network as well as Amazon.com Inc. The phone also comes loaded with many Google apps such as YouTube, Google Play Music and Gmail... Other phone makers and service providers, including LG Electronics Inc., Sony Corp., Verizon Communications Inc. and AT&T Inc., have made similar deals with app makers..."

This is disturbing. There seem to be several issues:

  1. Notice: consumers should be informed before purchase of any and all phone apps which can't be removed. The presence of permanent mobile apps suggests either a lack of notice, notice buried within legal language of phone manufacturers' user agreements, or both.
  2. Privacy: just because a mobile app appears disabled doesn't mean it isn't operating. Stealth apps can still collect GPS location and device information while running in the background, and then transmit it to manufacturers. Hopefully, some enterprising technicians or testing labs will verify independently whether "disabled" permanent mobile apps have truly stopped working.
  3. Transparency: phone manufacturers should explain and publish their lists of partners with both pre-installed and permanent app agreements -- for each device model. Otherwise, consumers cannot make informed purchase decisions about phones.
  4. Scope: the Samsung-Facebook pre-installed app arrangement raises questions about other devices with permanent apps: phones, tablets, laptops, smart televisions, and automotive vehicles. Perhaps some independent testing by Consumer Reports can determine a full list of devices with permanent apps.
  5. Nothing is free. Pre-installed app agreements indicate another method which device manufacturers use to make money, by collecting and sharing consumers' data with other tech companies.

The bottom line is trust. Consumers have more valid reasons to distrust some device manufacturers and OS developers. What issues do you see? What are your thoughts about permanent mobile apps?


A Series Of Recent Events And Privacy Snafus At Facebook Cause Multiple Concerns. Does Facebook Deserve Users' Data?

So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th, that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix accessed Facebook users' private messages, nor asked for that access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management with its partners and developers? That's an understatement. During March and April of 2018 we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly resell later) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened so it doesn't happen again? This post-breach response seems sloppy. And, the "bug" description seems disingenuous. Anytime persons -- in this case developers -- have access to data they shouldn't have, it is a data breach.

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage nor control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. There were reports of many users experiencing repeated bogus Friend Requests due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook had approached several major banks, asking them to share detailed financial information about their customers in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. 9to5Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content, nor stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users (emphasis added):

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to operate several types of attacks (e.g., spam, phishing, etc.) against Facebook users. The stolen data allows foreign spy agencies to collect useful information to target persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its own platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggests not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question (emphasis added):

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and WhatsApp accounts, since Facebook operates those services, too. If you want to quit the service but suffer from FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, or use its voting and ballot-preview content. Long ago, I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post links to this blog's articles on Facebook, since the information is helpful for many Facebook users. In what ways have you reduced your usage of Facebook?