'Software Pirates' Stole Apple Tech To Distribute Hacked Mobile Apps To Consumers

Prior news reports highlighted the abuse of Apple's corporate digital certificates. Now, we learn that this abuse is more widespread than first thought. CNet reported:

"Pirates used Apple's enterprise developer certificates to put out hacked versions of some major apps... The altered versions of Spotify, Angry Birds, Pokemon Go and Minecraft make paid features available for free and remove in-app ads... The pirates appear to have figured out how to use digital certs to get around Apple's carefully policed App Store by saying the apps will be used only by their employees, when they're actually being distributed to everyone."

So, bad actors abuse technology intended for a company's employees to distribute apps directly to consumers. Software pirates, indeed.

To avoid hacked apps, consumers should download software only from trusted sources. A fix is underway. According to CNet:

"Apple will reportedly take steps to fight back by requiring all app makers to use its two-factor authentication protocol from the end of February, so logging into an Apple ID will require a password and code sent to a trusted Apple device."

Let's hope that fix is sufficient.
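The mechanics of that fix are simple enough to sketch. Below is a minimal, hypothetical illustration in Python of the code-on-a-trusted-device step (the function names and five-minute expiry window are assumptions for illustration, not Apple's actual implementation):

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed expiry window for the one-time code

def issue_code():
    """Generate a random 6-digit code to send to a trusted device."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, time.time()

def verify_code(submitted, issued_code, issued_at, now=None):
    """Accept the login only if the code matches and hasn't expired.

    The comparison is constant-time, so attackers can't learn digits
    from response timing.
    """
    now = time.time() if now is None else now
    if now - issued_at > CODE_TTL_SECONDS:
        return False
    return secrets.compare_digest(submitted, issued_code)
```

The password check would happen first; only then is the one-time code issued, so an attacker needs both the password and physical access to a trusted device.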


Ex-IBM Executive Says She Was Told Not to Disclose Names of Employees Over Age 50 Who’d Been Laid Off

[Editor's note: today's guest blog post, by reporters at ProPublica, explores employment and hiring practices within the workplace. Part of a series, it is reprinted with permission.]

By Peter Gosselin, ProPublica

In sworn testimony filed recently as part of a class-action lawsuit against IBM, a former executive says she was ordered not to comply with a federal agency’s request that the company disclose the names of employees over 50 who’d been laid off from her business unit.

Catherine A. Rodgers, a vice president who was then IBM’s senior executive in Nevada, cited the order among several practices she said prompted her to warn IBM superiors the company was leaving itself open to allegations of age discrimination. She claims she was fired in 2017 because of her warnings.

Company spokesman Edward Barbini labeled Rodgers’ claims related to potential age discrimination “false,” adding that the reasons for her firing were “wholly unrelated to her allegations.”

Rodgers’ affidavit was filed Jan. 17 as part of a lawsuit in federal district court in New York. The suit cites a March 2018 ProPublica story reporting that IBM engaged in a strategy designed to, in the words of one internal company document, “correct seniority mix” by flouting or outflanking U.S. anti-age-discrimination laws to force out tens of thousands of older workers in the five years through 2017 alone.

Rodgers said in an interview Sunday that IBM “appears to be engaged in a concerted and disproportionate targeting of older workers.” She said that if the company releases the ages of those laid off, something required by federal law and that IBM did until 2014, “the facts will speak for themselves.”

“IBM is a data company. Release the data,” she said.

Rodgers is not a plaintiff in the New York case but intends to become one, said Shannon Liss-Riordan, the attorney for the employees.

IBM has not yet responded to Rodgers’ affidavit in the class-action suit. But in a filing in a separate age-bias lawsuit in federal district court in Austin, Texas, where a laid-off IBM sales executive introduced the document to bolster his case, lawyers for the company termed the order for Rodgers not to disclose the layoffs of older workers from her business unit “unremarkable.”

They said that the U.S. Department of Labor sought the names of the workers so it could determine whether they qualified for federal Trade Adjustment Assistance, or TAA, which provides jobless benefits and re-training to those who lose their jobs because of foreign competition. They said that company executives concluded that only one of about 10 workers whose names Rodgers had sought to provide qualified.

In its reporting, ProPublica found that IBM has gone to considerable lengths to avoid reporting its layoff numbers by, among other things, limiting its involvement in government programs that might require disclosure. Although the company has laid off tens of thousands of U.S. workers in recent years and shipped many jobs overseas, it sought and won TAA aid for just three during the past decade, government records show.

Company lawyers in the Texas case said that Rodgers, 62 at the time of her firing and a 39-year veteran of IBM, was let go in July 2017 because of "gross misconduct."

Rodgers said that she received “excellent” job performance reviews for decades before questioning IBM’s practices toward older workers. She rejected the misconduct charge as unfounded.

Legal action against IBM over its treatment of older workers appears to be growing. In addition to the suits in New York and Texas, cases are also underway in California, New Jersey and North Carolina.

Liss-Riordan, who has represented workers against a series of tech giants including Amazon, Google and Uber, has added 41 plaintiffs to the original three in the New York case and is asking the judge to require that IBM notify all U.S. workers whom it has laid off since July 2017 of the suit and of their option to challenge the company.

One complicating factor is that IBM requires departing employees who want to receive severance pay to sign a document waiving their right to take the company to court and limiting them to private, individual arbitration. Studies show this process rarely results in decisions that favor workers. To date, neither plaintiffs’ lawyers nor the government has challenged the legality of IBM’s waiver document.

Many ex-employees also don’t act within the 300-day federal statute of limitations for bringing a case. Of the roughly 500 ex-employees whom Liss-Riordan said have contacted her since she filed the New York case last September, only 100 had timely claims and, of these, only about 40 had not signed the waivers and so were eligible to join the lawsuit. She said she’s filed arbitration cases for the other 60.

At key points, Rodgers’ account of IBM’s practices is similar to those reported by ProPublica. Among the parallels:

  • Rodgers said that all layoffs in her business unit were of older workers and that younger workers were unaffected. (ProPublica estimated that about 60 percent of the company’s U.S. layoffs from 2014 through 2017 were workers age 40 and above.)
  • She said that she and other managers were told to encourage workers flagged for layoff to use IBM’s internal hiring system to find other jobs in the company even as upper management erected insurmountable barriers to their being hired for these jobs.
  • Rodgers said the company reversed a decades-long practice of encouraging employees to work from home and ordered many to begin reporting to a few “hub” offices around the country, a change she said appeared designed to prompt people to quit. She said that in one case an employee agreed to relocate to Connecticut, only to be told to relocate again to North Carolina.

Barbini, the IBM spokesman, didn’t comment on individual elements of Rodgers’ allegations. Last year, he did not address a 10-page summary of ProPublica’s findings, but issued a statement that read in part, “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law.”


ProPublica is a Pulitzer Prize-winning investigative newsroom.


Popular iOS Apps Record All In-App Activity Causing Privacy, Data Security, And Other Issues

As the internet has evolved, user testing and market research practices have evolved with it. This may surprise consumers. TechCrunch reported that many popular Apple mobile apps record everything customers do within them:

"Apps like Abercrombie & Fitch, Hotels.com and Singapore Airlines also use Glassbox, a customer experience analytics firm, one of a handful of companies that allows developers to embed “session replay” technology into their apps. These session replays let app developers record the screen and play them back to see how its users interacted with the app to figure out if something didn’t work or if there was an error. Every tap, button push and keyboard entry is recorded — effectively screenshotted — and sent back to the app developers."

So, customers' entire app sessions and activities have been recorded. Of course, marketers need to understand their customers' needs, and how users interact with their mobile apps, to build better products, services, and apps. In doing so, however, some apps have introduced security vulnerabilities:

"The App Analyst... recently found Air Canada’s iPhone app wasn’t properly masking the session replays when they were sent, exposing passport numbers and credit card data in each replay session. Just weeks earlier, Air Canada said its app had a data breach, exposing 20,000 profiles."

Not good, for a couple of reasons. First, sensitive data like payment information (e.g., credit/debit card numbers, passport numbers, bank account numbers) should be masked. Second, when sensitive information isn't masked, more data security problems arise. How long is this app usage data archived? Which employees, contractors, and business partners have access to the archive? What security methods protect the archive from abuse?

In short, unauthorized persons may have access to the archives and the sensitive information they contain. For example, market researchers probably have little or no need for specific customers' payment information. Sensitive information in these archives should be encrypted to provide the best protection from abuse and from data breaches.
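To illustrate the point, here is a minimal sketch in Python of client-side masking (the patterns and names are hypothetical; real session-replay SDKs should mask at the UI-field level, e.g. by flagging password and payment fields, rather than relying on pattern matching):

```python
import re

# Illustrative patterns for values that should never leave the device
# inside a session-replay payload.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")   # 13-16 digit card numbers
PASSPORT_RE = re.compile(r"\b[A-Z]{1,2}\d{6,9}\b")  # common passport formats

def mask_sensitive(text: str) -> str:
    """Redact card-like and passport-like strings before captured
    screen text is sent to the analytics server."""
    text = CARD_RE.sub("[MASKED-CARD]", text)
    text = PASSPORT_RE.sub("[MASKED-ID]", text)
    return text
```

For example, `mask_sensitive("paid with 4111 1111 1111 1111")` returns `"paid with [MASKED-CARD]"`. Masking before transmission means the archive never holds the sensitive value at all, which is stronger than encrypting it after the fact.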

Sadly, there is more bad news:

"Apps that are submitted to Apple’s App Store must have a privacy policy, but none of the apps we reviewed make it clear in their policies that they record a user’s screen... Expedia’s policy makes no mention of recording your screen, nor does Hotels.com’s policy. And in Air Canada’s case, we couldn’t spot a single line in its iOS terms and conditions or privacy policy that suggests the iPhone app sends screen data back to the airline. And in Singapore Airlines’ privacy policy, there’s no mention, either."

So, the app session recordings were done covertly... without explicit language to provide meaningful and clear notice to consumers. I encourage everyone to read the entire TechCrunch article, which also includes responses by some of the companies mentioned. In my opinion, most of the responses fell far short with lame, boilerplate statements.

All of this is very troubling. And, there is more.

The TechCrunch article didn't discuss it, but historically companies hired testing firms to recruit user test participants -- usually current and prospective customers. Test participants were paid for their time. (I know because as a former user experience professional I conducted such in-person test sessions where clients paid test participants.) Things have changed. Not only has user testing and research migrated online, but companies use automated tools to perform perpetual, unannounced user testing -- all without compensating test participants.

While change is inevitable, not all change is good. Plus, things can be done in better ways. If the test information is that valuable, then pay test participants. Otherwise, this seems like another example of corporate greed at consumers' expense. And, it's especially egregious if data transmissions of the recorded app sessions to developers' servers use up cellular data plan capacity consumers paid for. Some consumers (e.g., elders, children, the poor) cannot afford the costs of unlimited cellular data plans.

After this TechCrunch report, Apple notified developers to either stop or disclose screen recording:

"Protecting user privacy is paramount in the Apple ecosystem. Our App Store Review Guidelines require that apps request explicit user consent and provide a clear visual indication when recording, logging, or otherwise making a record of user activity... We have notified the developers that are in violation of these strict privacy terms and guidelines, and will take immediate action if necessary..."

Good. That's a start. Still, user testing and market research are not a free pass for developers to ignore or skip data security best practices. Given these covert app-session recordings, mobile apps must be tested continually. Otherwise, some ethically challenged companies may re-introduce covert screen-recording features. What are your opinions?


Survey: People In Relationships Spy On Cheating Partners. FTC: Singles Looking For Love Are The Biggest Target Of Scammers

Happy Valentine's Day! First, BestVPN announced the results of a survey of 1,000 adults globally about relationships and trust in today's digital age where social media usage is very popular. Key findings:

"... nearly 30% of respondents admitted to using tracking apps to catch a partner [suspected of cheating]. After all, over a quarter of those caught cheating were busted by modern technology... 85% of those caught out in the past now take additional steps to protect their privacy, including deleting their browsing data or using a private browsing mode."

Below is an infographic with more findings from the survey.

[Infographic: BestVPN Valentine's Day survey findings, February 2019]

Second, the U.S. Federal Trade Commission (FTC) issued a warning earlier this week about fraud affecting single persons:

"... romance scams generated more reported losses than any other consumer fraud type reported to the agency... The number of romance scams reported to the FTC has grown from 8,500 in 2015 to more than 21,000 in 2018, while reported losses to these scams more than quadrupled in recent years—from $33 million in 2015 to $143 million last year. For those who said they lost money to a romance scam, the median reported loss was $2,600, with those 70 and over reporting the biggest median losses at $10,000."

"Romance scammers often find their victims online through a dating site or app or via social media. These scammers create phony profiles that often involve the use of a stranger’s photo they have found online. The goals of these scams are often the same: to gain the victim’s trust and love in order to get them to send money through a wire transfer, gift card, or other means."

So, be careful out there. Don't cheat, and beware of scammers and dating imposters. You have been warned.


Walgreens To Pay About $2 Million To Massachusetts To Settle Multiple Price Abuse Allegations. Other Settlement Payments Exceed $200 Million

The Office of the Attorney General of the Commonwealth of Massachusetts announced two settlement agreements with Walgreens, a national pharmacy chain. Walgreens has agreed to pay about $2 million to settle multiple allegations of pricing abuses. According to the announcement:

"Under the first settlement, Walgreens will pay $774,486 to resolve allegations that it submitted claims to MassHealth in which it reported prices for certain prescription drugs at levels that were higher than what Walgreens actually charged, resulting in fraudulent overpayments."

"Under the second settlement, Walgreens will pay $1,437,366 to resolve allegations that from January 2006 through December 2017, rather than dispensing the quantity of insulin called for by a patient’s prescription, Walgreens exceeded the prescription amount and falsified information on claims submitted for reimbursement to MassHealth, including the quantity of insulin and/or days’ supply dispensed."

Both settlements arose from whistle-blower activity. MassHealth is the state's healthcare program based upon a state law passed in 2006 to provide health insurance to all Commonwealth residents. The law was amended in 2008 and 2010 to make it consistent with the federal Affordable Care Act.

Massachusetts Attorney General (AG) Maura Healey said:

"Walgreens repeatedly failed to provide MassHealth with accurate information regarding its dispensing and billing practices, resulting in overpayment to the company at taxpayers’ expense... We will continue to investigate cases of fraud and take action to protect the integrity of MassHealth."

In a separate case, Walgreens will pay $1 million to the state of Arkansas to settle allegations of Medicaid fraud. Last month, the New York State Attorney General announced that New York State, other states, and the federal government reached:

"... an agreement in principle with Walgreens to settle allegations that Walgreens violated the False Claims Act by billing Medicaid at rates higher than its usual and customary (U&C) rates for certain prescription drugs... Walgreens will pay the states and federal government $60 million, all of which is attributable to the states’ Medicaid programs... The national federal and state civil settlement will resolve allegations relating to Walgreens’ discount drug program, known as the Prescription Savings Club (PSC). The investigation revealed that Walgreens submitted claims to the states’ Medicaid programs in which it identified U&C prices for certain prescription drugs sold through the PSC program that were higher than what Walgreens actually charged for those drugs... This is the second false claims act settlement reached with Walgreens today. On January 22, 2019, AG James announced that Walgreens is to pay New York over $6.5 million as part of a $209.2 million settlement with the federal government and other states, resolving allegations that Walgreens knowingly engaged in fraudulent conduct when it dispensed insulin pens..."

States involved in the settlement include New York, California, Illinois, Indiana, Michigan and Ohio. Kudos to all Attorneys General and their staffs for protecting patients against corporate greed.


Senators Demand Answers From Facebook And Google About Project Atlas And Screenwise Meter Programs

After news reports surfaced about Facebook's Project Atlas, a secret program where Facebook paid teenagers (and other users) to install a research app on their phones to track and collect information about their mobile usage, several United States Senators demanded explanations. Three Senators sent a joint letter on February 7, 2019 to Mark Zuckerberg, Facebook's chief executive officer.

The joint letter to Facebook (Adobe PDF format) stated, in part:

"We write concerned about reports that Facebook is collecting highly-sensitive data on teenagers, including their web browsing, phone use, communications, and locations -- all to profile their behavior without adequate disclosure, consent, or oversight. These reports fit with longstanding concerns that Facebook has used its products to deeply intrude into personal privacy... According to a journalist who attempted to register as a teen, the linked registration page failed to impose meaningful checks on parental consent. Facebook has more rigorous mechanisms to obtain and verify parental consent, such as when it is required to sign up for Messenger Kids... Facebook's monitoring under Project Atlas is particularly concerning because the data collection performed by the research app was deeply invasive. Facebook's registration process encouraged participants to "set it and forget it," warning that if a participant disconnected from the monitoring for more than ten minutes for a few days, that they could be disqualified. Behind the scenes, the app watched everything on the phone."

The letter included another example highlighting the alleged lack of meaningful disclosures:

"... the app added a VPN connection that would automatically route all of a participant's traffic through Facebook's servers. The app installed a SSL root certificate on the participant's phone, which would allow Facebook to intercept or modify data sent to encrypted websites. As a result, Facebook would have limitless access to monitor normally secure web traffic, even allowing Facebook to watch an individual log into their bank account or exchange pictures with their family. None of the disclosures provided at registration offer a meaningful explanation about how the sensitive data is used, how long it is kept, or who within Facebook has access to it..."
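To see why an installed root certificate is so powerful, consider this deliberately simplified model in Python (all names are hypothetical, and real TLS validation checks cryptographic signatures along a chain, not just issuer names):

```python
# A client accepts a server certificate when it chains to a root CA in
# the device's trusted-root store. Installing an extra root therefore
# lets that root's owner mint acceptable certificates for ANY domain.
SYSTEM_ROOTS = {"PublicRootCA"}

def is_accepted(cert_issuer: str, trusted_roots: set) -> bool:
    """Simplified check: accept any certificate signed by a trusted root."""
    return cert_issuer in trusted_roots

# Normal device: a certificate forged by a research proxy is rejected,
# so encrypted traffic to a bank cannot be read in transit.
assert not is_accepted("ResearchAppRoot", SYSTEM_ROOTS)

# After the app installs its own root, the proxy's forged certificate
# for that same bank is accepted, letting the proxy decrypt, inspect,
# and re-encrypt the traffic without the user noticing.
assert is_accepted("ResearchAppRoot", SYSTEM_ROOTS | {"ResearchAppRoot"})
```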

The letter was signed by Senators Richard Blumenthal (Democrat, Connecticut), Edward J. Markey (Democrat, Massachusetts), and Josh Hawley (Republican, Missouri). Based upon news reports that Facebook's Research app operated with functionality similar to that of the Onavo VPN app, which Apple banned last year, the Senators concluded:

"Faced with that ban, Facebook appears to have circumvented Apple's attempts to protect consumers."

The joint letter also listed twelve questions the Senators want detailed answers about. Below are selected questions from that list:

"1. When did Project Atlas begin and how many individuals participated? How many participants were under age 18?"

"3. Why did Facebook use a less strict mechanism for verifying parental consent than is required for Messenger Kids or General Data Protection Regulation (GDPR) compliance?"

"4. What specific types of data were collected (e.g., device identifiers, usage of specific applications, content of messages, friends lists, locations, et al.)?"

"5. Did Facebook use the root certificate installed on a participant's device by the Project Atlas app to decrypt and inspect encrypted web traffic? Did this monitoring include analysis or retention of application-layer content?"

"7. Were app usage data or communications content collected by Project Atlas ever reviewed by or available to Facebook personnel or employees of Facebook partners?"

"8. Given that Project Atlas acknowledged the collection of "data about [users'] activities and content within those apps," did Facebook ever collect or retain the private messages, photos, or other communications sent or received over non-Facebook products?"

"11. Why did Facebook bypass Apple's app review? Has Facebook bypassed the App Store approval process using enterprise certificates for any other app that was used for non-internal purposes? If so, please list and describe those apps."

Read the entire letter to Facebook (Adobe PDF format). Also on February 7th, the Senators sent a similar letter to Google (Adobe PDF format), addressed to Hiroshi Lockheimer, the Senior Vice President of Platforms & Ecosystems. It stated in part:

"TechCrunch has subsequently reported that Google maintained its own measurement program called "Screenwise Meter," which raises similar concerns as Project Atlas. The Screenwise Meter app also bypassed the App Store using an enterprise certificate and installed a VPN service in order to monitor phones... While Google has since removed the app, questions remain about why it had gone outside Apple's review process to run the monitoring program. Platforms must maintain and consistently enforce clear policies on the monitoring of teens and what constitutes meaningful parental consent..."

The letter to Google includes a similar list of eight questions the Senators seek detailed answers about. Some notable questions:

"5. Why did Google bypass App Store approval for the Screenwise Meter app using enterprise certificates? Has Google bypassed the App Store approval process using enterprise certificates for any other non-internal app? If so, please list and describe those apps."

"6. What measures did Google have in place to ensure that teenage participants in Screenwise Meter had authentic parental consent?"

"7. Given that Apple removed Onavo Protect from the App Store for violating its terms of service regarding privacy, why has Google continued to allow the Onavo Protect app to be available on the Play Store?"

The lawmakers have asked for responses by March 1st. Thanks to all three Senators for protecting consumers' -- and children's -- privacy... and for enforcing transparency and accountability.


Technology And Human Rights Organizations Sent Joint Letter Urging House Representatives Not To Fund 'Invasive Surveillance' Tech Instead of A Border Wall

More than two dozen technology and human rights organizations sent a joint letter Tuesday to members of the House of Representatives, urging them not to fund "invasive surveillance technologies" as a replacement for a physical wall or barrier along the southern border of the United States. The joint letter cited five concerns:

"1. Risk-based targeting: The proposal calls for “an expansion of risk-based targeting of passengers and cargo entering the United States.” We are concerned that this includes the expansion of programs — proven to be ineffective and to exacerbate racial profiling — that use mathematical analytics to make targeting determinations. All too often, these systems replicate the biases of their programmers, burden vulnerable communities, lack democratic transparency, and encourage the collection and analysis of ever-increasing amounts of data... 3. Biometrics: The proposal calls for “new cutting edge technology” at the border. If that includes new face surveillance like that deployed at international airline departures, it should not. Senator Jeff Merkley and the Congressional Black Caucus have expressed serious concern that facial recognition technology would place “disproportionate burdens on communities of color and could stifle Americans’ willingness to exercise their first amendment rights in public.” In addition, use of other biometrics, including iris scans and voice recognition, also raise significant privacy concerns... 5. Biometric and DNA data: We oppose biometric screening at the border and the collection of immigrants’ DNA, and fear this may be another form of “new cutting edge technology” under consideration. We are concerned about the threat that any collected biometric data will be stolen or misused, as well as the potential for such programs to be expanded far beyond their original scope..."

The letter was sent to Speaker Nancy Pelosi, Minority Leader Kevin McCarthy, Majority Leader Steny Hoyer, Minority Whip Steve Scalise, House Appropriations Committee Chair Nita Lowey, and Ranking Member Kay Granger.

Twenty-seven organizations signed the joint letter, including Fight for the Future, the Electronic Frontier Foundation, the American Civil Liberties Union (ACLU), the American-Arab Anti-Discrimination Committee, the Center for Media Justice, the Project On Government Oversight, and others. Read the entire letter.

Earlier this month, a structural and civil engineer cited several reasons why a physical wall won't work and would be vastly more expensive than the $5.7 billion requested.

Clearly, there are distinct advantages and disadvantages to each of the border-protection solutions the House and the President are considering. It is a complex problem. The advantages and disadvantages of every proposal should be made clear, transparent, and understood by taxpayers before any final decisions are made.


Survey: Users Don't Understand Facebook's Advertising System. Some Disagree With Its Classifications

Most people know that many companies collect data about their online activities. Based upon the data collected, companies classify users for a variety of reasons and purposes. Do users agree with these classifications? Do the classifications accurately describe users' habits, interests, and activities?

To answer these questions, the Pew Research Center surveyed users of Facebook. Why Facebook? Besides being the most popular social media platform in the United States, it collects:

"... a wide variety of data about their users’ behaviors. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to allow advertisers to target ads... But how well do Americans understand these algorithm-driven classification systems, and how much do they think their lives line up with what gets reported about them?"

The findings are significant. First:

"Facebook makes it relatively easy for users to find out how the site’s algorithm has categorized their interests via a “Your ad preferences” page. Overall, however, 74% of Facebook users say they did not know that this list of their traits and interests existed until they were directed to their page as part of this study."

So, almost three-quarters of the Facebook users surveyed don't know what data Facebook has collected about them, how to view it, how to edit it, or how to opt out of the ad-targeting classifications. According to Wired magazine, Facebook's "Your Ad Preferences" page:

"... can be hard to understand if you haven’t looked at the page before. At the top, Facebook displays “Your interests.” These groupings are assigned based on your behavior on the platform and can be used by marketers to target you with ads. They can include fairly straightforward subjects, like “Netflix,” “Graduate school,” and “Entrepreneurship,” but also more bizarre ones, like “Everything” and “Authority.” Facebook has generated an enormous number of these categories for its users. ProPublica alone has collected over 50,000, including those only marketers can see..."

Now, back to the Pew survey. After survey participants viewed their Ad Preferences page:

"A majority of users (59%) say these categories reflect their real-life interests, while 27% say they are not very or not at all accurate in describing them. And once shown how the platform classifies their interests, roughly half of Facebook users (51%) say they are not comfortable that the company created such a list."

So, about half of persons surveyed use a site whose data collection they are uncomfortable with. Not good. Second, substantial groups said the classifications by Facebook were not accurate:

"... about half of Facebook users (51%) are assigned a political “affinity” by the site. Among those who are assigned a political category by the site, 73% say the platform’s categorization of their politics is very or somewhat accurate, while 27% say it describes them not very or not at all accurately. Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately..."

So, significant numbers of users disagree with the political classifications Facebook assigned to their profiles. Third, it's not only politics:

"... Facebook also lists a category called “multicultural affinity”... this listing is meant to designate a user’s “affinity” with various racial and ethnic groups, rather than assign them to groups reflecting their actual race or ethnic background. Only about a fifth of Facebook users (21%) say they are listed as having a “multicultural affinity.” Overall, 60% of users who are assigned a multicultural affinity category say they do in fact have a very or somewhat strong affinity for the group to which they are assigned, while 37% say their affinity for that group is not particularly strong. Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them."

The survey included a nationally representative sample of 963 Facebook users ages 18 and older from the United States. The survey was conducted September 4 to October 1, 2018. Read the entire survey at the Pew Research Center site.

What can consumers conclude from this survey? Social media users should understand that all social sites, and especially mobile apps, collect data about you and then make judgments... classifications... about you. (Remember, some Samsung phone owners were unable to delete Facebook and other preinstalled mobile apps. And everyone wants your geolocation data.) Use any tools the sites provide to adjust your ad preferences to match your interests, and adjust the privacy settings on your profile to limit data sharing as much as possible.

Last, an important reminder: while Facebook users can edit their ad preferences and can opt out of the ad-targeting classifications, they cannot completely avoid ads. Facebook will still display less-targeted ads. That is simply Facebook being Facebook: making money. The same probably applies to other social sites, too.

What are your opinions of the survey's findings?


Facebook Paid Teens To Install Unauthorized Spyware On Their Phones. Plenty Of Questions Remain

While today is the 15th anniversary of Facebook, more important news dominates. Last week featured plenty of news about Facebook. TechCrunch reported on Tuesday:

"Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe... Facebook admitted to TechCrunch it was running the Research program to gather data on usage habits."

So, teenagers installed surveillance software on their phones and tablets to spy for Facebook on themselves, Facebook's competitors, and others. This is huge news for several reasons. First, the "Facebook Research" app is VPN (Virtual Private Network) software which:

"... lets the company suck in all of a user’s phone and web activity, similar to Facebook’s Onavo Protect app that Apple banned in June and that was removed in August. Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy..."

Reportedly, the Research app collected massive amounts of information: private messages in social media apps, chats from instant messaging apps, photos and videos sent to others, emails, web searches, web browsing activity, and geolocation data. So, a very intrusive app. And, after being forced to remove one intrusive app from Apple's store, Facebook continued anyway -- with another app that performed the same function. Not good.

Second, there is the moral issue of using the youngest users as spies... persons who arguably have the least experience and skill at reading complex documents: corporate terms-of-use and privacy policies. I wonder how many teenagers notified their friends of the spying and data collection. How many teenagers fully understood what they were doing? How many parents were aware of the activity and payments? How many parents notified the parents of their children's friends? How many teens installed the spyware on both their iPhones and iPads? Lots of unanswered questions.

Third, Apple responded quickly. TechCrunch reported Wednesday morning:

"... Apple blocked Facebook’s Research VPN app before the social network could voluntarily shut it down... Apple tells TechCrunch that yesterday evening it revoked the Enterprise Certificate that allows Facebook to distribute the Research app without going through the App Store."

Facebook's usage of the Enterprise Certificate is significant. TechCrunch also published a statement by Apple:

"We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization... Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked..."

So, the Research app violated Apple's policy. Not good. The app also performed the same functions as the banned Onavo VPN app. Worse. This sounds like an end-run to me. As punishment for that end-run, Apple temporarily disabled the certificates for Facebook's internal corporate apps.

Axios described very well Facebook's behavior:

"Facebook took a program designed to let businesses internally test their own app and used it to monitor most, if not everything, a user did on their phone — a degree of surveillance barred in the official App Store."

And the animated Facebook image in the Axios article sure looks like a liar-liar-logo-on-fire image. LOL! Pure gold! Seriously, Facebook's behavior indicates questionable ethics, and/or an expectation of not getting caught. Reportedly, the internal apps which were shut down included shuttle schedules, campus maps, and company calendars. After that, some Facebook employees discussed quitting.

And, it raises more questions. Which Facebook executives approved Project Atlas? What advice did Facebook's legal staff provide prior to approval? Was that advice followed or ignored?

Fourth, TechCrunch also reported:

"Facebook’s Research program will continue to run on Android."

What? So, Google devices were involved, too. Is this spy program okay with Google executives? A follow-up report on Wednesday by TechCrunch:

"Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple... Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app..."

Oy! So, Google operates like Facebook. Also reported by TechCrunch:

"The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices..."

So, Google will terminate its spy program on Apple devices, but, like Facebook, continue its own program on Android. Hmmmmm. Well, that answers some questions. I guess Google executives are okay with this spy program. More questions remain.

Fifth, Facebook tried to defend the Research app and its actions in an internal memo to employees. On Thursday, TechCrunch tore apart the claims in that memo, written by vice president Pedro Canahuati. Chiefly:

"Facebook claims it didn’t hide the program, but it was never formally announced like every other Facebook product. There were no Facebook Help pages, blog posts, or support info from the company. It used intermediaries Applause and CentreCode to run the program under names like Project Atlas and Project Kodiak. Users only found out Facebook was involved once they started the sign-up process and signed a non-disclosure agreement prohibiting them from discussing it publicly... Facebook claims it wasn’t “spying,” yet it never fully laid out the specific kinds of information it would collect. In some cases, descriptions of the app’s data collection power were included in merely a footnote. The program did not specify data types gathered, only saying it would scoop up “which apps are on your phone, how and when you use them” and “information about your internet browsing activity.” The parental consent form from Facebook and Applause lists none of the specific types of data collected..."

So, Research app participants (e.g., teenagers, parents) couldn't discuss nor warn their friends (and their friends' parents) about the data collection. I strongly encourage everyone to read the entire TechCrunch analysis. It is eye-opening.

Sixth, a reader shared concerns about whether Facebook's actions violated federal laws. Did Project Atlas violate the Digital Millennium Copyright Act (DMCA); specifically the "anti-circumvention" provision, which prohibits avoiding the security protections in software? Did it violate the Computer Fraud and Abuse Act? What about breach-of-contract and fraud laws? What about states' laws? One could ask similar questions about Google's actions, too.

I am not an attorney. Hopefully, some attorneys will weigh in on these questions. Probably, some skilled attorneys will investigate various legal options.

All of this is very disturbing. Is this what consumers can expect of Silicon Valley firms? Is this the best tech firms can do? Is this the low level the United States has sunk to? Kudos to the TechCrunch staff for some excellent reporting.

What are your opinions of Project Atlas? Of Facebook's behavior? Of Google's?


Google Fined 50 Million Euros For Violations Of New European Privacy Law

Google has been fined 50 million euros (about U.S. $57 million) under the new European privacy law for failing to properly disclose to users how their data is collected and used for targeted advertising. The European Union's General Data Protection Regulation (GDPR), which went into effect in May 2018, gives EU residents more control over their information and how companies use it.

After receiving two complaints last year from privacy-rights groups, France's National Data Protection Commission (CNIL) announced earlier this month:

"... CNIL carried out online inspections in September 2018. The aim was to verify the compliance of the processing operations implemented by GOOGLE with the French Data Protection Act and the GDPR by analysing the browsing pattern of a user and the documents he or she can have access, when creating a GOOGLE account during the configuration of a mobile equipment using Android. On the basis of the inspections carried out, the CNIL’s restricted committee responsible for examining breaches of the Data Protection Act observed two types of breaches of the GDPR."

The first violation involved transparency failures:

"... information provided by GOOGLE is not easily accessible for users. Indeed, the general structure of the information chosen by the company does not enable to comply with the Regulation. Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information. The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions... some information is not always clear nor comprehensive. Users are not able to fully understand the extent of the processing operations carried out by GOOGLE. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner..."

So, important information is buried and scattered across several documents making it difficult for users to access and to understand. The second violation involved the legal basis for personalized ads processing:

"... GOOGLE states that it obtains the user’s consent to process data for ads personalization purposes. However, the restricted committee considers that the consent is not validly obtained for two reasons. First, the restricted committee observes that the users’ consent is not sufficiently informed. The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent. For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Playstore, Google pictures, etc.) and therefore of the amount of data processed and combined."

"[Second], the restricted committee observes that the collected consent is neither “specific” nor “unambiguous.” When an account is created, the user can admittedly modify some options associated to the account by clicking on the button « More options », accessible above the button « Create Account ». It is notably possible to configure the display of personalized ads. That does not mean that the GDPR is respected. Indeed, the user not only has to click on the button “More options” to access the configuration, but the display of the ads personalization is moreover pre-ticked. However, as provided by the GDPR, consent is “unambiguous” only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance). Finally, before creating an account, the user is asked to tick the boxes « I agree to Google’s Terms of Service» and « I agree to the processing of my information as described above and further explained in the Privacy Policy» in order to create the account. Therefore, the user gives his or her consent in full, for all the processing operations purposes carried out by GOOGLE based on this consent (ads personalization, speech recognition, etc.). However, the GDPR provides that the consent is “specific” only if it is given distinctly for each purpose."

So, not only is important information buried and scattered across multiple documents (again), but also critical boxes for users to give consent are pre-checked when they shouldn't be.
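CNIL's two consent findings reduce to simple rules: every processing purpose defaults to "off" (no pre-ticked boxes), and each purpose needs its own distinct opt-in rather than one bundled "I agree." As a minimal sketch of that logic -- the purpose names and data structure here are hypothetical illustrations, not Google's actual implementation:

```python
# Hypothetical per-purpose consent record illustrating GDPR-style
# "specific" and "unambiguous" consent: every purpose starts False
# (no pre-ticked boxes), and each opt-in covers exactly one purpose.
PURPOSES = ("ads_personalization", "speech_recognition", "location_history")

def new_consent_record():
    # Nothing is inferred from account creation; all purposes start un-ticked.
    return {purpose: False for purpose in PURPOSES}

def record_opt_in(consents, purpose):
    # One affirmative user action grants exactly one purpose -- never a bundle.
    if purpose not in consents:
        raise ValueError(f"unknown purpose: {purpose}")
    consents[purpose] = True

def may_process(consents, purpose):
    # Processing is allowed only where an explicit opt-in was recorded.
    return consents.get(purpose, False)
```

Under this model, a single "I agree to the Terms of Service" click would leave every purpose False, which is the crux of CNIL's objection to bundled consent.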

CNIL explained its reasons for the massive fine:

"The amount decided, and the publicity of the fine, are justified by the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent. Despite the measures implemented by GOOGLE (documentation and configuration tools), the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations... Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement..."

This is the largest fine, so far, under the GDPR. Reportedly, Google will appeal the fine:

"We've worked hard to create a GDPR consent process for personalised ads that is as transparent and straightforward as possible, based on regulatory guidance and user experience testing... We're also concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond... For all these reasons, we've now decided to appeal."

This is not the first EU fine for Google. CNet reported:

"Google is no stranger to fines under EU laws. It's currently awaiting the outcome of yet another antitrust investigation -- after already being slapped with a $5 billion fine last year for anticompetitive Android practices and a $2.7 billion fine in 2017 over Google Shopping."


The Federal Reserve Introduced A New Publication For And About Consumers

The Federal Reserve Board (FRB) has introduced a new publication titled, "Consumer & Community Context." According to the FRB announcement, the new publication will feature:

"... original analyses about the financial conditions and experiences of consumers and communities, including traditionally under-served and economically vulnerable households and neighborhoods. The goal of the series is to increase public understanding of the financial conditions and concerns of consumers and communities... The inaugural issue covers the theme of student loans, and includes articles on the effect that rising student loan debt levels may have on home ownership rates among young adults; and the relationship between the amount of student loan debt and individuals' decisions to live in rural or urban areas."

Authors are employees of the FRB or the Federal Reserve System (FRS). As the central bank of the United States, the FRS performs five general functions to "promote the effective operation of the U.S. economy and, more generally, the public interest:" i) conducts the nation’s monetary policy to promote maximum employment, stable prices, and moderate long-term interest rates; ii) promotes the stability of the financial system and seeks to minimize and contain systemic risks; iii) promotes the safety and soundness of individual financial institutions; iv) fosters payment and settlement system safety and efficiency through services to the banking industry; and v) promotes consumer protection and community development through consumer-focused supervision, examination, and monitoring of the financial system. Learn more about the Federal Reserve.

The first issue of Consumer & Community Context is available, in Adobe PDF format, at the FRB site. Economists, bank executives, consumer advocates, researchers, teachers, and policy makers may be particularly interested. To better understand the publication's content, below is an excerpt.

In their analysis of student loan debt and home ownership among young adults, the researchers found:

"... home ownership rate in the United States fell approximately 4 percentage points in the wake of the financial crisis, from a peak of 69 percent in 2005 to 65 percent in 2014. The decline in home ownership was even more pronounced among young adults. Whereas 45 percent of household heads ages 24 to 32 in 2005 owned their own home, just 36 percent did in 2014 — a marked 9 percentage point drop... We found that a $1,000 increase in student loan debt (accumulated during the prime college-going years and measured in 2014 dollars) causes a 1 to 2 percentage point drop in the home ownership rate for student loan borrowers during their late 20s and early 30s... higher student loan debt early in life leads to a lower credit score later in life, all else equal. We also find that, all else equal, increased student loan debt causes borrowers to be more likely to default on their student loan debt, which has a major adverse effect on their credit scores, thereby impacting their ability to qualify for a mortgage..."

The FRB announcement described the publication schedule as "periodically." Perhaps this is due to the partial government shutdown. Hopefully, in the near future the FRB will commit to a more regular publication schedule.


Companies Want Your Location Data. Recent Examples: The Weather Channel And Burger King

It is easy to find examples where companies use mobile apps to collect consumers' real-time GPS location data, so they can archive and resell that information later for additional profits. First, ExpressVPN reported:

"The city of Los Angeles is suing the Weather Company, a subsidiary of IBM, for secretly mining and selling user location data with the extremely popular Weather Channel App. Stating that the app unfairly manipulates users into enabling their location settings for more accurate weather reports, the lawsuit affirms that the app collects and then sells this data to third-party companies... Citing a recent investigation by The New York Times that revealed more than 75 companies silently collecting location data (if you haven’t seen it yet, it’s worth a read), the lawsuit is basing its case on California’s Unfair Competition Law... the California Consumer Privacy Act, which is set to go into effect in 2020, would make it harder for companies to blindly profit off customer data... This lawsuit hopes to fine the Weather Company up to $2,500 for each violation of the Unfair Competition Law. With more than 200 million downloads and a reported 45+ million users..."

Long-term readers remember that a data breach in 2007 at IBM prompted this blog. It's not only internet service providers which collect consumers' location data. Advertisers, retailers, and data brokers want it, too.

Second, Burger King last month ran a national "Whopper Detour" promotion which offered customers a one-cent Whopper burger if they went near a competitor's store. News 5, the ABC News affiliate in Cleveland, reported:

"If you download the Burger King mobile app and drive to a McDonald’s store, you can get the penny burger until December 12, 2018, according to the fast-food chain. You must be within 600 feet of a McDonald's to claim your discount, and no, McDonald's will not serve you a Whopper — you'll have to order the sandwich in the Burger King app, then head to the nearest participating Burger King location to pick it up. More information about the deal can be found on the app on Apple and Android devices."
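The 600-foot rule is a geofence: the app compares the phone's reported GPS coordinates against a list of competitor store locations and unlocks the offer only within that radius. A minimal sketch of such a distance check, assuming the app has store coordinates on hand (all coordinates here are hypothetical):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS points.
    r = 6371000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

GEOFENCE_METERS = 182.88  # roughly 600 feet

def inside_geofence(user, store):
    # user and store are (latitude, longitude) tuples.
    return haversine_m(*user, *store) <= GEOFENCE_METERS
```

The privacy cost is that making this check work requires the app to read the device's location continuously, not just at the moment of redemption -- which is exactly the data the policy below reserves the right to collect.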

Next, the relevant portions from Burger King's privacy policy for its mobile apps (emphasis added):

"We collect information you give us when you use the Services. For example, when you visit one of our restaurants, visit one of our websites or use one of our Services, create an account with us, buy a stored-value card in-restaurant or online, participate in a survey or promotion, or take advantage of our in-restaurant Wi-Fi service, we may ask for information such as your name, e-mail address, year of birth, gender, street address, or mobile phone number so that we can provide Services to you. We may collect payment information, such as your credit card number, security code and expiration date... We also may collect information about the products you buy, including where and how frequently you buy them... we may collect information about your use of the Services. For example, we may collect: 1) Device information - such as your hardware model, IP address, other unique device identifiers, operating system version, and settings of the device you use to access the Services; 2) Usage information - such as information about the Services you use, the time and duration of your use of the Services and other information about your interaction with content offered through a Service, and any information stored in cookies and similar technologies that we have set on your device; and 3) Location information - such as your computer’s IP address, your mobile device’s GPS signal or information about nearby WiFi access points and cell towers that may be transmitted to us..."

So, for the low, low price of one hamburger, participants in this promotion gave RBI, the parent company which owns Burger King, perpetual access to their real-time location data. And, since RBI knows when, where, and how long its customers visit competitors' fast-food stores, it also knows similar details about everywhere else they go -- including school, work, doctors, hospitals, and more. Sweet deal for RBI. A poor deal for consumers.

Expect to see more corporate promotions like this, which privacy advocates call "surveillance capitalism."

Consumers' real-time location data is very valuable. Don't give it away for free. If you decide to share it, demand a fair, ongoing payment in exchange. Read privacy and terms-of-use policies before downloading mobile apps, so you don't get abused or taken. Opinions? Thoughts?


The Privacy And Data Security Issues With Medical Marijuana

In the United States, some states have enacted legislation making medical marijuana legal -- despite it being illegal at a federal level. This situation presents privacy issues for both retailers and patients.

In her "Data Security And Privacy" podcast series, privacy consultant Rebecca Herold (@PrivacyProf) interviewed a patient cannabis advocate about privacy and data security issues:

"Most people assume that their data is safe in cannabis stores & medical cannabis dispensaries. Or they believe if they pay in cash there will be no record of their cannabis purchase. Those are incorrect beliefs. How do dispensaries secure & share data? Who WANTS that data? What security is needed? Some in government, law enforcement & employers want data about state legal marijuana and medical cannabis purchases. Michelle Dumay, Cannabis Patient Advocate, helps cannabis dispensaries & stores to secure their customers’ & patients’ data & privacy. Michelle learned through experience getting treatment for her daughter that most medical cannabis dispensaries are not compliant with laws governing the security and privacy of patient data... In this episode, we discuss information security & privacy practices of cannabis shops, risks & what needs to be done when it comes to securing data and understanding privacy laws."

Many consumers know that the Health Insurance Portability and Accountability Act (HIPAA) governs how patients' privacy is protected, and which businesses must comply with that law.

Poor data security (e.g., data breaches, unauthorized recording of patients inside or outside of dispensaries) can result in the misuse of patients' personal and medical information by bad actors and others. Downstream consequences can be negative, such as employers using the data to decline job applications.

After listening to the episode, it seems reasonable for consumers to assume that traditional information industry players (e.g., credit reporting agencies, advertisers, data brokers, law enforcement, government intelligence agencies, etc.) all want marijuana purchase data. Note the use of "consumers," and not only "patients," since about 10 states have legalized recreational marijuana.

Listen to an encore presentation of the "Medical Cannabis Patient Privacy And Data Security" episode.


Google To EU Regulators: No One Country Should Censor The Web Globally. Poll Finds Canadians Support 'Right To Be Forgotten'

For those watching privacy legislation in Europe, MediaPost reported:

"... Maciej Szpunar, an advisor to the highest court in the EU, sided with Google in the fight, arguing that the right to be forgotten should only be enforceable in Europe -- not the entire world. The opinion is non-binding, but seen as likely to be followed."

For those unfamiliar, in the European Union (EU) the right to be forgotten:

"... was created in 2014, when EU judges ruled that Google (and other search engines) must remove links to embarrassing information about Europeans at their request... The right to be forgotten doesn't exist in the United States... Google interpreted the EU's ruling as requiring removal of links to material in search engines designed for European countries but not from its worldwide search results... In 2015, French regulators rejected Google's position and ordered the company to remove material from all of its results pages. Google then asked Europe's highest court to reject that view. The company argues that no one country should be able to censor the web internationally."

No one corporation should be able to censor the web globally, either. Meanwhile, Radio Canada International reported:

"A new poll shows a slim majority of Canadians agree with the concept known as the “right to be forgotten online.” This means the right to have outdated, inaccurate, or no longer relevant information about yourself removed from search engine results. The poll by the Angus Reid Institute found 51 percent of Canadians agree that people should have the right to be forgotten..."

Consumers should have control over their information. If that control is limited to only the country of their residence, then the global nature of the internet means that control is very limited -- and probably irrelevant. What are your opinions?


Report: Navient Tops List Of Student Loan Complaints

The Consumer Financial Protection Bureau (CFPB), a federal government agency in the United States, collects complaints about banks and other financial institutions. That includes lenders of student loans.

The CFPB and private-sector firms analyze these complaints, looking for patterns. Forbes magazine reported:

"The team at Make Lemonade analyzed these complaints [submitted during 2018], and found that there were 8,752 related to student loans. About 64% were related to federal student loans and 36% were related to private student loans. Nearly 67% of complaints were related to an issue with a student loan lender or student loan servicer."

"Navient, one of the nation's largest student loan servicers, ranked highest in terms of student loan complaints. In 2018, student loan borrowers submitted 4,032 complaints about Navient to the CFPB, which represents 46% of all student loan complaints. AES/PHEAA and Nelnet, two other major student loan servicers, received approximately 20% and 7%, respectively."

When looking for a student loan, wise consumers shop around and do their research. Some lenders are better than others. The Forbes article is very helpful, as it contains links to additional resources and information for consumers.

Learn more about the CFPB and its complaints database, designed to help consumers and regulators.


After Promises To Stop, Mobile Providers Continued Sales Of Location Data About Consumers. What You Can Do To Protect Your Privacy

Sadly, history repeats itself. First, the history: after getting caught selling consumers' real-time GPS location data without notice or consent, in 2018 mobile providers promised to stop the practice. The Ars Technica blog reported in June 2018:

"Verizon and AT&T have promised to stop selling their mobile customers' location information to third-party data brokers following a security problem that leaked the real-time location of US cell phone users. Senator Ron Wyden (D-Ore.) recently urged all four major carriers to stop the practice, and today he published responses he received from Verizon, AT&T, T-Mobile USA, and Sprint... Wyden's statement praised Verizon for "taking quick action to protect its customers' privacy and security," but he criticized the other carriers for not making the same promise... AT&T changed its stance shortly after Wyden's statement... Senator Wyden recognized AT&T's change on Twitter and called on T-Mobile and Sprint to follow suit."

Kudos to Senator Wyden. The other mobile providers soon complied... sort of.

Second, some background: real-time location data is very valuable stuff. It indicates where you are as you (with your phone or other mobile devices) move about the physical world in your daily routine. No delays. No lag. Yes, there are appropriate uses for real-time GPS location data -- such as by law enforcement to quickly find a kidnapped person or child before further harm happens. But, do any and all advertisers need real-time location data about consumers? Data brokers? Others?

I think not. Domestic violence and stalking victims probably would not want their, nor their children's, real-time location data resold publicly. Most parents would not want their children's location data resold publicly. Most patients probably would not want their location data broadcast every time they visit their physician, specialist, rehab, or a hospital. Corporate executives, government officials, and attorneys conducting sensitive negotiations probably wouldn't want their location data collected and resold, either.

So, most consumers probably don't want their real-time location data resold publicly. Well, some of you make location-specific announcements via posts on social media. That's your choice, but I conclude that most people don't. Consumers want control over their location information so they can decide if, when, and with whom to share it. The mass collection and sales of consumers' real-time location data by mobile providers prevents choice -- and it violates persons' privacy.

Third, fast forward seven months from 2018. TechCrunch reported on January 9th:

"... new reporting by Motherboard shows that while [reseller] LocationSmart faced the brunt of the criticism [in 2018], few focused on the other big player in the location-tracking business, Zumigo. A payment of $300 and a phone number was enough for a bounty hunter to track down the participating reporter by obtaining his location using Zumigo’s location data, which was continuing to pay for access from most of the carriers. Worse, Zumigo sold that data on — like LocationSmart did with Securus — to other companies, like Microbilt, a Georgia-based credit reporting company, which in turn sells that data on to other firms that want that data. In this case, it was a bail bond company, whose bounty hunter was paid by Motherboard to track down the reporter — with his permission."

"Everyone seemed to drop the ball. Microbilt said the bounty hunter shouldn’t have used the location data to track the Motherboard reporter. Zumigo said it didn’t mind location data ending up in the hands of the bounty hunter, but still cut Microbilt’s access. But nobody quite dropped the ball like the carriers, which said they would not to share location data again."

The TechCrunch article rightly held offending mobile providers accountable. For example, T-Mobile chief executive John Legere tweeted last year that T-Mobile would stop selling customers' location data to shady middlemen. Then, last week, Legere tweeted that T-Mobile was ending the practice completely, and doing it "the right way."

The right way? In my view, consumers' real-time location data never should have been collected and resold in the first place. Almost a year after reports first surfaced, T-Mobile is finally getting around to stopping the practice and terminating its relationships with location data resellers -- two months from now. Why not announce this slow wind-down last year, when the issue first surfaced? "Emergency assistance" is the reason we are supposed to believe. Yeah, right.

The TechCrunch article rightly took AT&T and Verizon to task, too. Good. I strongly encourage everyone to read the entire TechCrunch article.

What can consumers make of this? There seem to be several takeaways:

  1. Transparency is needed, since corporate privacy policies don't list all (or often any) business partners. This lack of transparency makes it easy for mobile providers to resume location data sales without notice to anyone and without consumers' consent.
  2. Corporate executives will say anything in tweets and social media posts. A healthy dose of skepticism by consumers and regulators is wise.
  3. Consumers can't trust mobile providers. They are happy to make money selling consumers' real-time location data, regardless of consumers' desire that their data not be collected and sold.
  4. Data brokers and credit reporting agencies want consumers' location data.
  5. To ensure privacy, consumers also must take action: adjust the privacy settings on your phone to limit or deny mobile apps' access to your location data. I did. It's not hard. Do it today.
  6. Oversight is needed, since a) mobile providers have, at best, sloppy or minimal oversight and internal processes to prevent location data sales; and b) data brokers and others stand ready to enable and facilitate location data transactions.

I cannot over-emphasize #5 above. What issues or takeaways do you see? What are your opinions about real-time location data?


Federal Regulators Encourage Banks To Work With Borrowers Affected By Partial Government Shutdown

Six financial regulatory agencies issued a joint statement advising banks and financial institutions to be flexible with borrowers during the partial government shutdown in the United States. The January 11, 2019 statement said:

"While the effects of the federal government shutdown on individuals should be temporary, affected borrowers may face a temporary hardship in making payments on debts such as mortgages, student loans, car loans, business loans, or credit cards. As they have in prior shutdowns, the agencies encourage financial institutions to consider prudent efforts to modify terms on existing loans or extend new credit to help affected borrowers."

"Prudent workout arrangements that are consistent with safe-and-sound lending practices are generally in the long-term best interest of the financial institution, the borrower, and the economy. Such efforts should not be subject to examiner criticism. Consumers affected by the government shutdown are encouraged to contact their lenders immediately should they encounter financial strain."

The six agencies that signed the joint statement are the:

  • Board of Governors of the Federal Reserve System
  • Conference of State Bank Supervisors
  • Consumer Financial Protection Bureau
  • Federal Deposit Insurance Corporation
  • National Credit Union Administration
  • Office of the Comptroller of the Currency

Today is day 25 of the shutdown. Yesterday, President Trump rejected calls by Republicans to temporarily reopen several agencies to encourage negotiations with Democrats in the House of Representatives.

Reportedly, OceanFirst Bank has suspended fees for borrowers unable to make monthly payments on mortgage loans, home equity loans, and lines of credit. Provident Bank said it would offer a limited number of refunds of late-payment fees on mortgages, home equity loans, and credit cards, as well as checking account overdraft fees.

Has your bank shown flexibility? Or has it refused your requests? Share your experiences and opinions below.


Marriott Lowered The Number Of Guests Affected By Its Data Breach. Class Action Lawsuits Filed

Important updates about the gigantic Marriott-Starwood data breach. The incident received more attention after security experts said that China's intelligence agencies may have been behind the cyberattack, which also targeted healthcare insurance companies.

Earlier this month, Marriott announced a lower number of guests affected:

"Working closely with its internal and external forensics and analytics investigation team, Marriott determined that the total number of guest records involved in this incident is less than the initial disclosure... Marriott now believes that the number of potentially involved guests is lower than the 500 million the company had originally estimated [in November, 2018]. Marriott has identified approximately 383 million records as the upper limit for the total number of guest records that were involved...

The announcement also said that fewer than 383 million different persons were affected, because the database contained multiple records for the same guests. It also stated that about:

"... 5.25 million unencrypted passport numbers were included in the information accessed by an unauthorized third party. The information accessed also includes approximately 20.3 million encrypted passport numbers... Marriott now believes that approximately 8.6 million encrypted payment cards were involved in the incident. Of that number, approximately 354,000 payment cards were unexpired as of September 2018..."

This is mixed news. Fewer breach victims is good news. The bad news: multiple database records for the same guests, and unencrypted passport numbers. Better, stronger data security always includes encrypting sensitive information. The announcement did not explain why some data was encrypted and some wasn't.

The hotel chain said that it will terminate its Starwood reservations database at the end of the year, and continue its post-breach investigation:

"While the payment card field in the data involved was encrypted, Marriott is undertaking additional analysis to see if payment card data was inadvertently entered into other fields and was therefore not encrypted. Marriott believes that there may be a small number (fewer than 2,000) of 15-digit and 16-digit numbers in other fields in the data involved that might be unencrypted payment card numbers. The company is continuing to analyze these numbers to better understand if they are payment card numbers and, if they are payment card numbers, the process it will put in place to assist guests."

Also, the hotel chain admitted during its January 4th announcement that it still wasn't fully ready to help affected guests:

"Marriott is putting in place a mechanism to enable its designated call center representatives to refer guests to the appropriate resources to enable a look up of individual passport numbers to see if they were included in this set of unencrypted passport numbers. Marriott will update its designated website for this incident (https://info.starwoodhotels.com) when it has this capability in place."

In related news, about 150 former guests have sued Marriott. Vox reported that a class-action lawsuit:

"... was filed Maryland federal district court on January 9, claims that Marriott did not adequately protect guest information before the breach and, once the breach had been discovered, “failed to provide timely, accurate, and adequate notice” to guests whose information may have been obtained by hackers... According to the suit, Marriott’s purchase of the Starwood properties is part of the problem. “This breach had been going on since 2014. In conducting due diligence to acquire Starwood, Marriott should have gone through and done an accounting of the cybersecurity of Starwood,” Amy Keller, an attorney at DiCello Levitt & Casey who is representing the Marriott guests, told Vox... According to a December report by the Wall Street Journal, Marriott could have caught the breach years earlier."

At least one other class-action lawsuit has been filed by breach victims.


Samsung Phone Owners Unable To Delete Facebook And Other Apps. Anger And Privacy Concerns Result

Some consumers have learned that they can't delete Facebook and other mobile apps from their Samsung smartphones. Bloomberg described one consumer's experiences:

"Winke bought his Samsung Galaxy S8, an Android-based device that comes with Facebook’s social network already installed, when it was introduced in 2017. He has used the Facebook app to connect with old friends and to share pictures of natural landscapes and his Siamese cat -- but he didn’t want to be stuck with it. He tried to remove the program from his phone, but the chatter proved true -- it was undeletable. He found only an option to "disable," and he wasn’t sure what that meant."

Samsung phones operate using Google's Android operating system (OS). The "chatter" refers to online complaints posted by Samsung phone owners. There were plenty of complaints: some snarky, some informative, and some understandably angry. One person reminded consumers of a bigger issue with Android OS phones: many mobile apps share users' data with Facebook, whether or not those users even have Facebook accounts.

And that privacy concern still exists. Sophos Labs reported:

"Advocacy group Privacy International announced the findings in a presentation at the 35th Chaos Computer Congress late last month. The organization tested 34 apps and documented the results, as part of a downloadable report... 61% of the apps tested automatically tell Facebook that a user has opened them. This accompanies other basic event data such as an app being closed, along with information about their device and suspected location based on language and time settings. Apps have been doing this even when users don’t have a Facebook account, the report said. Some apps went far beyond basic event information, sending highly detailed data. For example, the travel app Kayak routinely sends search information including departure and arrival dates and cities, and numbers of tickets (including tickets for children)."

After multiple data breaches and privacy snafus, some Facebook users have decided to either quit the Facebook mobile app or quit the service entirely. Now, some Samsung phone users have learned that quitting can be more difficult, and they don't have as much control over their devices as they thought.

How did this happen? Bloomberg explained:

"Samsung, the world’s largest smartphone maker, said it provides a pre-installed Facebook app on selected models with options to disable it, and once it’s disabled, the app is no longer running. Facebook declined to provide a list of the partners with which it has deals for permanent apps, saying that those agreements vary by region and type... consumers may not know if Facebook is pre-loaded unless they specifically ask a customer service representative when they purchase a phone."

Not good. So, now we know that there are two classes of mobile apps: 1) pre-installed and 2) permanent. Pre-installed apps come on new devices. Some pre-installed apps can be deleted by users. Permanent mobile apps are pre-installed apps which cannot be removed/deleted by users. Users can only disable permanent apps.

Sadly, there's more and it's not only Facebook. Bloomberg cited other agreements:

"A T-Mobile US Inc. list of apps built into its version of the Samsung Galaxy S9, for example, includes the social network as well as Amazon.com Inc. The phone also comes loaded with many Google apps such as YouTube, Google Play Music and Gmail... Other phone makers and service providers, including LG Electronics Inc., Sony Corp., Verizon Communications Inc. and AT&T Inc., have made similar deals with app makers..."

This is disturbing. There seem to be several issues:

  1. Notice: consumers should be informed before purchase of any and all phone apps which can't be removed. The presence of permanent mobile apps suggests either a lack of notice, notice buried within legal language of phone manufacturers' user agreements, or both.
  2. Privacy: just because a mobile app has been "disabled" doesn't mean it has stopped operating. A permanent app could still collect GPS location and device information while running in the background, and then transmit it to its developer. Hopefully, some enterprising technicians or testing labs will independently verify whether "disabled" permanent mobile apps have truly stopped working.
  3. Transparency: phone manufacturers should explain and publish their lists of partners with both pre-installed and permanent app agreements -- for each device model. Otherwise, consumers cannot make informed purchase decisions about phones.
  4. Scope: the Samsung-Facebook arrangement raises questions about other devices with permanent apps: phones, tablets, laptops, smart televisions, and automotive vehicles. Perhaps some independent testing by Consumer Reports can determine a full list of devices with permanent apps.
  5. Nothing is free. Pre-installed app agreements indicate another method which device manufacturers use to make money, by collecting and sharing consumers' data with other tech companies.

The bottom line is trust. Consumers have more valid reasons to distrust some device manufacturers and OS developers. What issues do you see? What are your thoughts about permanent mobile apps?