1,107 posts categorized "Corporate Responsibility"

New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that this approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of the Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators, allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."
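To make the hover-swap trick concrete, here is a minimal sketch in TypeScript. This is illustrative only, not Facebook's production code: the CSS selector, attribute names, and URLs are assumptions, but the sequence — load the shim, swap in the clean URL on hover, ping the tracker on click — is the behavior the EFF describes.

```typescript
// Hedged sketch of a link shim's hover-swap. Selector, attributes, and
// URLs are hypothetical examples.
const link = document.querySelector<HTMLAnchorElement>("a.shimmed")!;
const cleanUrl = "https://eff.org/pb"; // where the user actually wants to go
const shimUrl =
  "https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb"; // the tracker

link.href = shimUrl; // the page initially loads with the shim in place

link.addEventListener("mouseover", () => {
  // On hover, stash the shim in a data attribute and show the clean URL,
  // so the link looks innocuous in the browser's status bar.
  link.dataset.shim = shimUrl;
  link.href = cleanUrl;
});

link.addEventListener("click", () => {
  // On click, fire the tracking request in the background while the
  // browser navigates to the clean destination.
  navigator.sendBeacon(link.dataset.shim!);
});
```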

Lovely. And, Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."
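For the curious, stripping the Referer header is something any browser extension can do through the standard WebExtensions webRequest API. Below is a sketch of that general mechanism — not Privacy Badger's actual source; the URL filter is an illustrative assumption, and the listener requires the "webRequest" and "webRequestBlocking" permissions in the extension manifest:

```typescript
// Hedged sketch: drop the Referer header from outgoing requests.
chrome.webRequest.onBeforeSendHeaders.addListener(
  (details) => ({
    // Return the header list with any Referer entry filtered out.
    requestHeaders: (details.requestHeaders ?? []).filter(
      (header) => header.name.toLowerCase() !== "referer"
    ),
  }),
  { urls: ["*://*.facebook.com/*"] }, // scope is illustrative
  ["blocking", "requestHeaders"]
);
```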

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns with the artificial intelligence (AI) help the company provides to the U.S. Department of Defense for Project Maven, a drone surveillance program that identifies people. Reportedly, these are the first known mass resignations at Google.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor wanted -- that their personal e-mail, photos, calendars, videos, social media activity, map usage, and more would be used for automated military applications. What are your opinions?


Equifax Operates A Secondary Credit Reporting Agency, And Its Website Appears Haphazard

More news about Equifax, the credit reporting agency whose multiple data security failures resulted in a massive data breach affecting half of the United States population. It appears that Equifax also operates a secondary credit bureau: the National Consumer Telecommunications and Utilities Exchange (NCTUE). The Krebs On Security blog explained Equifax's role:

"The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE... there are four "exchanges" that feed into the NCTUE’s system: the NCTUE itself, something called "Centralized Credit Check Systems," the New York Data Exchange (NYDE), and the California Utility Exchange. According to a partner solutions page at Verizon, the NYDE is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts. The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax)... The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC)."

This surfaced after consumers with security freezes on their credit reports at the three major credit reporting agencies (i.e., Equifax, Experian, TransUnion) found fraudulent mobile phone accounts opened in their names. This shouldn't have been possible, since security freezes prevent credit reporting agencies from selling consumers' credit reports to telecommunications companies, who typically perform credit checks before opening new accounts. So, the credit information must have come from somewhere else. It turns out, the source was the NCTUE.

Credit reporting agencies make money by selling consumers' credit reports to potential lenders. And credit reports from the NCTUE are easy for anyone to order:

"... the NCTUE makes it fairly easy to obtain any records they may have on Americans. Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address."

The Krebs on Security blog also explained the expired SSL certificate on the Equifax-operated site, which prevented web pages from being served securely. That was simply inexcusable, poor data security.

A quick check of the NCTUE page on the Better Business Bureau site found 2 negative reviews and 70 complaints -- mostly about negative credit inquiries and unresolved issues. A quick check of the NCTUE Terms Of Use page found very thin usage and privacy policies lacking details, such as any mention of data sharing, cookies, tracking, and more. The lack of data-sharing language could mean NCTUE will share or sell data to anyone: entities, companies, and government agencies. It also means there is no way to verify whether the NCTUE complies with its own policies. Not good.

The policy contains plenty of language indicating that NCTUE is not liable for anything:

"... THE NCTUE IS NOT RESPONSIBLE FOR, AND EXPRESSLY DISCLAIM, ALL LIABILITY FOR, DAMAGES OF ANY KIND ARISING OUT OF USE, REFERENCE TO, OR RELIANCE ON ANY INFORMATION CONTAINED WITHIN THE SITE. All content located at or available from the NCTUE website is provided “as is,” and NCTUE makes no representations or warranties, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title or non-infringement of proprietary rights. Without limiting the foregoing, NCTUE makes no representation or warranty that content located on the NCTUE website is free from error or suitable for any purpose; nor that the use of such content will not infringe any third party copyrights, trademarks or other intellectual property rights.

Links to Third Party Websites: Although the NCTUE website may include links providing direct access to other Internet resources, including websites, NCTUE is not responsible for the accuracy or content of information contained in these sites..."

Huh?! As is? The data NCTUE collected is being used for credit decisions. Reliability and accuracy matters. And, there are more concerns.

While at the NCTUE site, I briefly browsed the credit freeze information, which is hosted on an outsourced site, the Exchange Service Center (ESC). What's up with that? Why a separate site, and not a cohesive single site with a unified customer experience? This design gives the impression that the security freeze process was an afterthought.

Plus, the NCTUE and ESC sites present different policies (e.g., terms of use, privacy). Really? Why the complexity? Which policies rule? You'd think that the policies on both sites would be consistent and would mention each other, since consumers must use the two sites to complete security freezes. That design seems haphazard. Not good.

There's more. Rather than use standard web pages, the ESC site presents its policies in static Adobe PDF documents, making it difficult for users to follow links for more information. (Contrast those thin policies with the more comprehensive Privacy and Terms of Use policies by TransUnion.) Plus, one policy was old -- dated 2011. It seems the site hasn't been updated in seven years. What fresh hell is this? More haphazard design. Why the confusing user experience? Not good.

Image of confusing drop-down menu for exchanges within the security freeze process.

There's more. When placing a security freeze, the ESC site includes a drop-down menu asking consumers to pick an exchange (e.g., NCTUE, Centralized Credit Check System, California Utility Exchange, NYDE). The confusing drop-down menu appears in the image above. Which menu option is the global security freeze? Is there a global option? The form page doesn't say, and it should. Why would a consumer select just one of the exchanges? Perhaps this is another slick attempt to limit the effectiveness of security freezes placed by consumers. Not good.

What can consumers make of this? First, the NCTUE site seems to be a slick way for Equifax to skirt the security freezes which consumers have placed upon their credit reports. Sounds like a definite end-run to me. Surprised? I'll bet. Angry? I'll bet, too. We consumers paid good money for security freezes on our credit reports.

Second, the combo NCTUE/ESC site seems like some legal, outsourcing ju-jitsu to avoid all liability while still enjoying the revenues from credit-report sales. The site left me with the impression that its design, which hasn't kept pace with internet best practices over the years, was by a committee of attorneys focused upon serving their corporate clients' data collection and sharing needs while doing the absolute minimum required legally -- rather than a site focused upon the security needs of consumers. I can best describe the site using an old film-review phrase: a million monkeys with a million crayons would be hard pressed in a million years to create something this bad.

Third, credit reporting agencies get their data from a variety of sources. So, their business model is based upon data sharing. NCTUE seems designed to effectively do just that, regardless of consumers' security needs and wishes.

Fourth, this situation offers several reminders: a) just about anyone can set up and operate a credit reporting agency. No special skills nor expertise required; b) there are both national and regional credit reporting agencies; c) credit reports often contain errors; and d) credit reporting agencies historically have outsourced work, sometimes internationally -- for better or worse data security.

Fifth, now you know what criminals and fraudsters already know... how to skirt the security freezes on credit reports and gain access to consumers' sensitive information. The combo NCTUE/ESC site is definitely a high-value target for criminals.

My first impression of the NCTUE site: haphazard design making it difficult for consumers to use and to trust it. What do you think?


Report: Software Failure In Fatal Accident With Self-Driving Uber Car

TechCrunch reported:

"The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death."

The TechCrunch story provides details about which software subsystem the report said failed.

Not good.

So, autonomous or self-driving cars are only as good as the software they're programmed with (including its maintenance). Anyone who has used computers during the last couple of decades has probably experienced software glitches, bugs, and failures. It happens.

This latest incident suggests self-driving cars aren't yet ready. What do you think?


Connecticut And Federal Regulators Announce $1.3 Million Settlement With Substance Abuse Healthcare Provider

Connecticut and federal regulators recently announced a settlement agreement to resolve allegations that New Era Rehabilitation Center (New Era), operating in New Haven and Bridgeport, submitted false claims to both state and federal healthcare programs. The office of George Jepsen, Connecticut Attorney General, announced that New Era:

"... and its co-founders and owners – Dr. Ebenezer Kolade and Dr. Christina Kolade – are enrolled as providers in the Connecticut Medical Assistance Program (CMAP), which includes the state's Medicaid program. As part of their practice, they provide methadone treatment services for patients dealing with opioid addiction. Most of their patients are CMAP beneficiaries.

During the relevant time period, CMAP reimbursed methadone clinics by paying a weekly bundled rate that included all of the services associated with methadone maintenance, including the patient's doses of methadone; the initial intake evaluation; a physical examination; periodic drug testing; and individual, group and family drug counseling... The state and federal governments alleged that, from October 2009 to November 2013, New Era and the Kolades engaged in a pattern and practice of billing CMAP weekly for the methadone bundled service rate and then also submitting a separate claim to the CMAP for virtually every drug counseling session provided to clients by using a billing code for outpatient psychotherapy. The state and federal governments further alleged that those psychotherapy sessions were actually the drug counseling sessions already included and reimbursed through the bundled rate."

These actions were part of the State of Connecticut's Inter-agency Fraud Task Force, created in 2013 to investigate and prosecute healthcare fraud. The joint investigation included the Connecticut AG's office, the office of Connecticut U.S. Attorney John H. Durham, and the U.S. Department of Health and Human Services, Office of Inspector General – Office of Investigations.

Terms of the settlement agreement require New Era to pay $1,378,533 in settlement funds. Of that amount, $881,945 will be returned to CMAP.

Connecticut residents suspecting healthcare fraud or abuse should contact the Attorney General’s Antitrust and Government Program Fraud Department (phone: 860-808-5040; email: ag.fraud@ct.gov), or the Department of Social Services fraud reporting line (hotline: 1-800-842-2155; online: www.ct.gov/dss/reportingfraud; email: providerfraud.dss@ct.gov). Residents of other states can contact their state attorney general's office.


Twitter Advised Its Users To Change Their Passwords After Security Blunder

Yesterday, Twitter.com advised all of its users to change their passwords after a huge security blunder exposed users' passwords online in an unprotected format. The social networking service released a statement on May 3rd:

"We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password."

Security experts advise consumers not to use the same password at several sites or services. Repeated use of the same password makes it easy for criminals to hack into multiple sites or services.

The statement by Twitter.com also explained that it masks users' passwords:

"... through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
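For readers who want a picture of what hashing with bcrypt looks like in practice, here is a minimal TypeScript sketch assuming the widely used "bcrypt" npm package. This is a generic illustration, not Twitter's code, with a comment marking the class of bug Twitter described:

```typescript
import * as bcrypt from "bcrypt";

// Store-side: hash the password; only the hash should ever be persisted.
async function hashForStorage(password: string): Promise<string> {
  // The bug class Twitter described amounts to logging the plaintext
  // before this step, e.g. logger.info(`password: ${password}`) -- never do this.
  return bcrypt.hash(password, 12); // cost factor 12; the salt is generated internally
}

// Login-side: bcrypt re-hashes the submitted password with the stored salt
// and compares, so the plaintext never needs to be stored or revealed.
async function verifyPassword(password: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(password, storedHash);
}
```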

The good news: Twitter found the bug by itself. The not-so-good news: the statement was short on details. It did not disclose the fixes that will prevent this blunder from happening again. Nor did the statement say how many users were affected. Twitter has about 330 million users, so it seems safest to assume that all users were affected.


News Media Alliance Challenges Tech Companies To 'Accept Accountability' And Responsibility For Filtering News In Their Platforms

Last week, David Chavern, the President and CEO of News Media Alliance (NMA), testified before the House Judiciary Committee. The NMA is a nonprofit trade association representing over 2,000 news organizations across the United States. Mr. Chavern's testimony focused upon the problem of fake news, often aided by social networking platforms.

His comments first described current conditions:

"... Quality journalism is essential to a healthy and functioning democracy -- and my members are united in their desire to fight for its future.

Too often in today’s information-driven environment, news is included in the broad term "digital content." It’s actually much more important than that. While some low-quality entertainment or posts by friends can be disappointing, inaccurate information about world events can be immediately destructive. Civil society depends upon the availability of real, accurate news.

The internet represents an extraordinary opportunity for broader understanding and education. We have never been more interconnected or had easier and quicker means of communication. However, as currently structured, the digital ecosystem gives tremendous viewpoint control and economic power to a very small number of companies – the tech platforms that distribute online content. That control and power must come with new responsibilities... Historically, newspapers controlled the distribution of their product; the news. They invested in the journalism required to deliver it, and then printed it in a form that could be handed directly to readers. No other party decided who got access to the information, or on what terms. The distribution of online news is now dominated by the major technology platforms. They decide what news is delivered and to whom – and they control the economics of digital news..."

Last month, a survey found that roughly two-thirds of U.S. adults (68%) use Facebook.com, and about three-quarters of those use the social networking site daily. In 2016, a survey found that 62 percent of adults in the United States get their news from social networking sites. The corresponding statistic in 2012 was 49 percent. That 2016 survey also found that fewer social media users get their news from other platforms: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites/apps (28 percent), radio (25 percent), and print newspapers (20 percent).

Mr. Chavern then described the problems with two specific tech companies:

"The First Amendment prohibits the government from regulating the press. But it doesn’t prevent Facebook and Google from acting as de facto regulators of the news business.

Neither Google nor Facebook are – or have ever been – "neutral pipes." To the contrary, their businesses depend upon their ability to make nuanced decisions through sophisticated algorithms about how and when content is delivered to users. The term “algorithm” makes these decisions seem scientific and neutral. The fact is that, while their decision processes may be highly-automated, both companies make extensive editorial judgments about accuracy, relevance, newsworthiness and many other criteria.

The business models of Facebook and Google are complex and varied. However, we do know that they are both immense advertising platforms that sell people’s time and attention. Their "secret algorithms" are used to cultivate that time and attention. We have seen many examples of the types of content favored by these systems – namely, click-bait and anything that can generate outrage, disgust and passion. Their systems also favor giving users information like that which they previously consumed, thereby generating intense filter bubbles and undermining common understandings of issues and challenges.

All of these things are antithetical to a healthy news business – and a healthy democracy..."

Earlier this month, Apple and Facebook executives exchanged criticisms about each other's business models and privacy practices. Mr. Chavern's testimony before Congress also described more problems and threats:

"Good journalism is factual, verified and takes into account multiple points of view. It can take a lot of time and investment. Most particularly, it requires someone to take responsibility for what is published. Whether or not one agrees with a particular piece of journalism, my members put their names on their product and stand behind it. Readers know where to send complaints. The same cannot be said of the sea of bad information that is delivered by the platforms in paid priority over my members’ quality information. The major platforms’ control over distribution also threatens the quality of news for another reason: it results in the “commoditization” of news. Many news publishers have spent decades – often more than a century – establishing their brands. Readers know the brands that they can trust — publishers whose reporting demonstrates the principles of verification, accuracy and fidelity to facts. The major platforms, however, work hard to erase these distinctions. Publishers are forced to squeeze their content into uniform, homogeneous formats. The result is that every digital publication starts to look the same. This is reinforced by things like the Google News Carousel, which encourages users to flick back and forth through articles on the same topic without ever noticing the publisher. This erosion of news publishers’ brands has played no small part in the rise of "fake news." When hard news sources and tabloids all look the same, how is a customer supposed to tell the difference? The bottom line is that while Facebook and Google claim that they do not want to be "arbiters of truth," they are continually making huge decisions on how and to whom news content is delivered. These decisions too often favor free and commoditized junk over quality journalism. The platforms created by both companies could be wonderful means for distributing important and high-quality information about the world. But, for that to happen, they must accept accountability for the power they have and the ultimate impacts their decisions have on our economic, social and political systems..."

Download Mr. Chavern's complete testimony. Industry watchers argue that recent changes by Facebook have hurt local news organizations. MediaPost reported:

"When Facebook changed its algorithm earlier this year to focus on “meaningful” interactions, publishers across the board were hit hard. However, local news seemed particularly vulnerable to the alterations. To assuage this issue, the company announced that it would prioritize news related to local towns and metro areas where a user resided... To determine how positively that tweak affected local news outlets, the Tow Center measured interactions for posts from publications coming from 13 metro areas... The survey found that 11 out of those 13 have consistently seen a drop in traffic between January 1 and April 1 of 2018, allowing the results to show how outlets are faring nine weeks after the algorithm change. According to the Tow Center study, three outlets saw interactions on their pages decrease by a dramatic 50%. These include The Dallas Morning News, The Denver Post, and The San Francisco Chronicle. The Atlanta Journal-Constitution saw interactions drop by 46%."

So, huge problems persist.

Early in my business career, I had the opportunity to develop and market an online service using content from Dow Jones News/Retrieval. That experience taught me that the news - hard news - included who, where, when, and what happened. Everything else is either opinion, commentary, analysis, an advertisement, or fiction. And, it is critical to know the differences and/or learn to spot each type. Otherwise, you are likely to be misled, misinformed, or fooled.


Federal Regulators Assess $1 Billion Fine Against Wells Fargo Bank

On Friday, several federal regulators announced the assessment of a $1 billion fine against Wells Fargo Bank for violations of the "Consumer Financial Protection Act (CFPA) in the way it administered a mandatory insurance program related to its auto loans..."

The Consumer Financial Protection Bureau (CFPB) announced the fine and settlement with Wells Fargo Bank, N.A., and its coordinated action with the Office of the Comptroller of the Currency (OCC). The announcement stated that the CFPB:

"... also found that Wells Fargo violated the CFPA in how it charged certain borrowers for mortgage interest rate-lock extensions. Under the terms of the consent orders, Wells Fargo will remediate harmed consumers and undertake certain activities related to its risk management and compliance management. The CFPB assessed a $1 billion penalty against the bank and credited the $500 million penalty collected by the OCC toward the satisfaction of its fine."

This is not the first fine against Wells Fargo. The bank paid a $185 million fine in 2016 to settle charges of alleged unlawful sales practices during the prior five years. To game an internal sales system, employees allegedly created about 1.5 million bogus accounts, and both issued and activated debit cards associated with the secret accounts. Then, employees also created PINs for the accounts, all without customers' knowledge or consent. An investigation in 2017 found 1.4 million more bogus accounts than originally reported in 2016. Also in 2017, irregularities were reported in how the bank handled mortgages.

The OCC explained that it took action:

"... given the severity of the deficiencies and violations of law, the financial harm to consumers, and the bank’s failure to correct the deficiencies and violations in a timely manner. The OCC found deficiencies in the bank’s enterprise-wide compliance risk management program that constituted reckless, unsafe, or unsound practices and resulted in violations of the unfair practices prong of Section 5 of the Federal Trade Commission (FTC) Act. In addition, the agency found the bank violated the FTC Act and engaged in unsafe and unsound practices relating to improper placement and maintenance of collateral protection insurance policies on auto loan accounts and improper fees associated with interest rate lock extensions. These practices resulted in consumer harm which the OCC has directed the bank to remediate.

The $500 million civil money penalty reflects a number of factors, including the bank’s failure to develop and implement an effective enterprise risk management program to detect and prevent the unsafe or unsound practices, and the scope and duration of the practices..."

MarketWatch explained the bank's unfair and unsound practices:

"When consumers buy a vehicle through a lender, the lender often requires the consumer to also purchase “collateral protection insurance.” That means the vehicle itself is collateral — or essentially, could be repossessed — if the loan is not paid... Sometimes, the fine print of the contracts say that if borrowers do not buy their own insurance (enough to satisfy the terms of the loan), the lender will go out and purchase that insurance on their behalf, and charge them for it... That is a legal practice. But in the case of Wells Fargo, borrowers said they actually did buy that insurance, and Wells Fargo still bought more insurance on their behalf and charged them for it."

So, the bank forced consumers to buy unwanted and unnecessary auto insurance. The lesson for consumers: don't accept the first auto loan offered, and closely read the fine print of contracts from lenders. Wells Fargo said in a news release that it:

"... will adjust its first quarter 2018 preliminary financial results by an additional accrual of $800 million, which is not tax deductible. The accrual reduces reported first quarter 2018 net income by $800 million, or $0.16 cents per diluted common share, to $4.7 billion, or 96 cents per diluted common share. Under the consent orders, Wells Fargo will also be required to submit, for review by its board, plans detailing its ongoing efforts to strengthen its compliance and risk management, and its approach to customer remediation efforts."

Kudos to the OCC and CFPB for taking this action against a bank with a spotty history. Will executives at Wells Fargo learn their lessons from the massive fine? The Washington Post reported that the bank will:

"... benefit from a massive corporate tax cut passed by Congress last year. he bank’s effective tax rate this year will fall from about 33 percent to 22 percent, according to a Goldman Sachs analysis released in December. The change could boost its profits by 18 percent, according to the analysis. Just in the first quarter, Wells Fargo’s effective tax rate fell from about 28 percent to 18 percent, saving it more than $600 million. For the entire year, the tax cut is expected to boost the company’s profits by $3.7 billion..."

So, don't worry about the bank. Its tax savings will easily offset the fine. This makes one doubt that the fine is a sufficient deterrent. And, I found the OCC's announcement forceful and appropriate, while the CFPB's announcement seemed to soft-pedal things by saying the absolute minimum.

What do you think? Will the fine curb executive wrongdoing?


How Facebook Tracks Its Users, And Non-Users, Around the Internet

Many Facebook users wrongly believe that the social networking service doesn't track them around the internet when they aren't signed in. Also, many non-users of Facebook wrongly believe that they are not tracked.

Earlier this month, Consumer Reports explained the tracking:

"As you travel through the web, you’re likely to encounter Facebook Like or Share buttons, which the company calls Social Plugins, on all sorts of pages, from news outlets to shopping sites. Click on a Like button and you can see the number on the page’s counter increase by one; click on a Share button and a box opens up to let you post a link to your Facebook account.

But that’s just what’s happening on the surface. "If those buttons are on the page, regardless of whether you touch them or not, Facebook is collecting data," said Casey Oppenheim, co-founder of data security firm Disconnect."

This blog discussed social plugins back in 2010. However, the tracking includes more technologies:

"... every web page contains little bits of code that request the pictures, videos, and text that browsers need to display each item on the page. These requests typically go out to a wide swath of corporate servers—including Facebook—in addition to the website’s owner. And such requests can transmit data about the site you’re on, the browser you are using, and more. Useful data gets sent to Facebook whether you click on one of its buttons or not. If you click, Facebook finds out about that, too. And it learns a bit more about your interests.

In addition to the buttons, many websites also incorporate a Facebook Pixel, a tiny, transparent image file the size of just one of the millions of pixels on a typical computer screen. The web page makes a request for a Facebook Pixel, just as it would request a Like button. No user will ever notice the picture, but the request to get it is packaged with information... Facebook explains what data can be collected using a Pixel, such as products you’ve clicked on or added to a shopping cart, in its documentation for advertisers. Web developers can control what data is collected and when it is transmitted... Even if you’re not logged in, the company can still associate the data with your IP address and all the websites you’ve been to that contain Facebook code."
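The mechanism is simple enough to sketch. The TypeScript below is a generic illustration — the endpoint and parameter names are assumptions, not Facebook's actual pixel API — showing how requesting a one-pixel image lets a page ship tracking data to a third-party server:

```typescript
// Hedged sketch of a generic tracking pixel. The browser "loads an image,"
// but the real payload is the query string; the tracker's server logs it
// (along with your IP address and any cookies) and returns a 1x1 transparent GIF.
const params = new URLSearchParams({
  id: "ADVERTISER_ID",          // which advertiser's pixel fired (hypothetical)
  ev: "ViewContent",            // the event being reported, e.g. an item viewed
  dl: document.location.href,   // the page you are on
  rl: document.referrer,        // the page you came from
});

const pixel = new Image(1, 1);  // one pixel: effectively invisible
pixel.src = `https://tracker.example.com/tr?${params.toString()}`; // hypothetical endpoint
document.body.appendChild(pixel);
```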

The article also explains "re-targeting," and how consumers who don't purchase anything at an online retail site will later see advertisements -- around the internet, not solely on the Facebook site -- for the items they viewed but did not purchase. Then, there is the database it assembles:

"In materials written for its advertisers, Facebook explains that it sorts consumers into a wide variety of buckets based on factors such as age, gender, language, and geographic location. Facebook also sorts its users based on their online activities—from buying dog food, to reading recipes, to tagging images of kitchen remodeling projects, to using particular mobile devices. The company explains that it can even analyze its database to build “look-alike” audiences that are similar... Facebook can show ads to consumers on other websites and apps as well through the company’s Audience Network."

So, several technologies are used to track both Facebook users and non-users, and assemble a robust, descriptive database. And, some website operators collaborate to facilitate the tracking, which is invisible to most users. Neat, eh?

Like it or not, internet users are automatically included in the tracking and data collection. Can you opt out? Consumer Reports also warns:

"The biggest tech companies don’t give you strong tools for opting out of data collection, though. For instance, privacy settings may let you control whether you see targeted ads, but that doesn’t affect whether a company collects and stores information about you."

Given this, one can conclude that Facebook is really a massive advertising network masquerading as a social networking service.

To minimize the tracking, consumers can: disable the Facebook API platform on their Facebook accounts, use the new tools (e.g., see these step-by-step instructions) by Facebook to review and disable the apps with access to their data, use ad-blocking software (e.g., Adblock Plus, Ghostery), use the opt-out mechanisms offered by the major data brokers, use the OptOutPrescreen.com site to stop pre-approved credit offers, and use VPN software and services.

If you use the Firefox web browser, configure it for Private Browsing and install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? Several web browsers offer Incognito Mode. And, you might try the Privacy Badger add-on instead. I've used it happily for years.

To combat "canvas fingerprinting" (e.g., tracking users by identifying the unique attributes of your computer, browser, and software), security experts have advised consumers to use different web browsers. For example, you'd use one browser only for online banking, and a different web browser for surfing the internet. However,  this security method may not work much longer given the rise of cross-browser fingerprinting.

It seems that an arms race is underway between software that helps users maintain privacy online and technologies advertisers use to defeat that privacy. Would Facebook and its affiliates/partners use cross-browser fingerprinting? My guess: yes it would, just like any other advertising network.

What do you think?


Backpage Executive Pled Guilty In Three States. Several Other Executives Indicted

Late last week, the Washington Post reported:

"Carl Ferrer, the chief executive of Backpage.com whose name was conspicuously absent from an indictment of seven other Backpage officials unsealed Monday, has pleaded guilty in state courts in California and Texas and federal court in Arizona to charges of money laundering and conspiracy to facilitate prostitution. In addition, he agreed to testify against the men who co-founded Backpage with him, Michael Lacey and James Larkin, who remained in jail Thursday in Arizona on facilitating prostitution charges. Backpage, in addition to hosting thinly veiled ads for prostitution since 2004, was accused of hosting child sex trafficking ads on its site... Court records show that Ferrer pleaded guilty to conspiracy to facilitate prostitution and money laundering in federal court in Phoenix on April 5, with the hearing and documents sealed. Backpage.com also pleaded guilty, by Ferrer as the CEO, to a money laundering conspiracy in Phoenix, where Backpage was created. Ferrer then on Monday appeared in state court in Corpus Christi, Texas, where he personally pleaded guilty to money laundering..."


How To View The List Of Advertisers Tracking You On Facebook. Any Surprises On Your List?

The massive privacy and data security breach at Facebook.com involving Cambridge Analytica has heightened many users' sensitivity to the advertising practices by the social networking service. Many Facebook users want to know the exact list of advertisers tracking them.

How To View The List Of Advertisers Tracking You

Facebook Ad Preferences page.

How to view this list? It's easy. Sign into Facebook.com and navigate to Settings > Ads > Advertisers You've Interacted With. (When using a web browser, you'll have to click on the tiny arrow in the upper right portion of the page to access the drop-down menu.) Within the Ad Preferences page, click on the "Advertisers You've Interacted With" headline to open that module. When opened, it displays several lists of advertisers:

  1. Who've added their contact list to Facebook
  2. Whose website or app you've used,
  3. Whom you've visited, and
  4. More

The default view of list #1 displays 12 advertisers tracking you. There probably are many more in your list. Select "Show More" to view more advertisers. Facebook doesn't make it easy. The module lacks a "Show All" button, which forces users to repeatedly select "Show More." Not good. Come on Facebook! You can do better.

List #1 includes important explanatory text:

"These advertisers are running ads using a contact list they uploaded that includes your contact info. This info was collected by the advertiser, typically after you shared your email address with them or another business they've partnered with."

The key phrase to remember: or another business they've partnered with. So, list #1 includes not only advertisers but also affiliates or business partners. Not good. More Facebook being Facebook.

I selected "Show More" about two dozen times to view my complete list: 235 advertisers tracking me, and collecting data about me. 235 advertisers even though I never used the Facebook mobile app, and had already disabled the Facebook API platform on my account years ago! Not good.

Your mileage will vary. There may be fewer or more advertisers on your list.

My list #1 included both advertisers I expected and many I didn't. The advertisers I expected to see included brands I currently do business with (e.g., Marriott Rewards, ACLU), brands I no longer do business with (e.g., Bank of America, AT&T), and brands whose Facebook pages I "Liked" or left comments on. The advertisers I didn't expect to see included politicians in states I've neither visited nor lived in, brands I've never purchased from nor interacted with in any manner, brands I have never "Liked," and more.

Who's on your list? A friend shared:

"I looked at my list and it's crazy. Will follow the opt-out links tomorrow and clear them out. Cardi B was in my list of FB advertisers."

A rapper? That's too funny. I guess that's to be expected if you stream and share music online via Facebook. Me? I don't stream music online because that is another way to be tracked. Instead, I enjoy listening to CDs privately in my home. I prefer to keep my home a truly private place.

What's really going on here? Why the crazy long list? Popular Science explained:

"You, can thank the "data providers" for this mess. Mark Zuckerberg spent roughly 11 hours testifying in front of Congressional committees... One thing that got very little attention was the concept of “data brokers,” middleman businesses that collect consumer information and sell it to companies. Facebook stopped using them just last month. However, that long string of companies, personalities, and alternative rock bands is a result of Facebook’s old program... after the Cambridge Analytica scandal broke, but before Mark Zuckerberg’s marathon testimony in front of Congress, Facebook announced that it was ending a program called Partner Categories, canceling a long-standing relationship between the social network and data brokers. The change was announced in a short statement, but it has big implications for your personal information and the agencies that collect and sell it."

"The ability to target advertising is what makes Facebook its money—roughly $40 billion last year... while you provide lots of user information to Facebook, advertisers typically want even more... and that’s where data brokers come in. Facebook calls on brokers like Acxiom, Epsilon, and TransUnion to act as a conduit between Facebook and individual advertisers looking to reach targeted audiences..."

Readers of this blog may recognize TransUnion, one of the three major credit reporting agencies. So, the "advertisers" on Facebook tracking you (and data harvesting) include a variety of entities: traditional advertisers, business partners, affiliates, data brokers, and their intermediaries.

It's called "surveillance capitalism" for good reasons. Many companies besides Facebook do it.

What To Do Next

It's not easy to opt out or delete items from your advertising list. For those brands and entities you have "Liked," you can visit their Facebook page and "Unlike" them. However, that won't stop them or other "advertisers" from re-targeting (and tracking) you in the future. The "Ad Preferences" page for your profile also includes the "Your Information" module where you can toggle on or off advertising based upon certain profile elements:

The "Your Information" module within Facebook's Ad Preferences page.

The above image is from 2017. Back then, I disabled all of the active toggles you see. Deactivating these toggles might minimize the number of ads displayed, but it won't stop the tracking and data collection. The Popular Science article includes links to several opt-out mechanisms for major data brokers. You could (and should) use those. However, two key problems remain.

First, these opt-out links should be easily accessible within Facebook. They aren't. This forces consumers to waste time hunting for the opt-out mechanisms, when Facebook has the expertise to provide them. Facebook probably knows that many consumers will give up and quit, rather than hunt for opt-out links. It's great that Popular Science did a lot of the work for consumers.

Second, the opt-out mechanisms offered by some data brokers are unnecessarily complex. Example: see the opt-out mechanisms offered by Experian, another credit reporting agency:

Experian opt-out site pages.

Didn't know that Experian plays in both ponds: credit reporting and data brokerage? Most people probably don't. Experian's site lacks a unified, single opt-out mechanism, forcing consumers to wade through seven different mechanisms and methods, some of which are paper-based and lack an online option. Not good!

TransUnion's opt-out mechanism isn't much better. And, it raises more questions than it answers. It links to the OptOutPrescreen.com site, which I completed way back in 2007. Did my Facebook membership undo that? Or is there some other data sharing at work, which OptOutPrescreen doesn't cover? TransUnion's page doesn't explain, and neither does Facebook's. Not good.

Some people choose to use ad-blocking software (e.g., Adblock Plus, Ghostery) to suppress the display of online ads, but that probably won't stop the tracking and data collection internal to Facebook. There's no substitute for Facebook giving its users internal tools to completely disable and opt out of the tracking and data collection.

That highlights another problem: users are automatically included, so the burden is upon users to (continually) opt out. This is Facebook's business model. The reverse should be the default. Users should not be tracked nor data harvested unless they register and opt into the program. Given the social media site's business model, even if you opt out today, there's nothing stopping Facebook from re-subscribing you in the future with any updates to its system or terms of use.

How many advertisers are on your list? 200 or more? 300? 400? Any surprises on your list?


How To Check If Your Information Was Collected By Cambridge Analytica In The Facebook Breach

You've probably heard about the massive privacy and data security breach at Facebook.com, where users' information, plus their friends' information, was captured and shared with Cambridge Analytica by an app created by an academic professor. Now, you want to know if your information was harvested.

How To Check

It's easy to check. Visit this Facebook Help Center page. If you are not signed into your Facebook account, then the page displays as:

Default version of the Facebook Help page for users to determine if their information was collected by Cambridge Analytica.

If you have already signed into your Facebook account and your information was not harvested, then the main column of the page displays:

Signed-in version of the Facebook Help page, indicating that your information was not shared with Cambridge Analytica.

If your information was harvested, then the content under "Was My Information Shared?" will be different. It may display this:

"Based upon our investigation, you don't appear to have logged into "This Is Your Digital Life" with Facebook before we removed it from our platform in 2015. However, a friend of yours did log in. As a result, the following information was likely shared with "This Is Your Digital Life": Your public profile, page likes, date of birth, and current city"

Of course, if you logged into the "This Is Your Digital Life" app yourself, then the page content will say so, and list the data elements harvested. Reportedly, about 270,000 Facebook users logged into the app/quiz, which then collected information on an estimated 87 million people, mostly those users' Facebook friends.

What To Do Next

There's not a lot you can do immediately. CNN Tech advised:

"Even if you delete your Facebook account, or remove third-party apps connected to your profile, the third-party apps will still have access to data they previously collected. Users have to contact the app individually to have the data be removed... According to a notice on affected accounts, the "small number of people" who accessed the app also shared their News Feed, timeline, posts and messages. A Facebook spokesperson confirmed that 1,500 users who logged into the app granted explicit access to their private message inbox... For now, the platform is directing people to their Settings page to see which apps are connected to their accounts, such as Uber and Netflix. Users can also disconnect those apps... Walt Mossberg, a veteran tech reporter and cofounder of tech website Recode, urged Facebook to let users know which friends accessed the app and when..."

Yeah, that! Facebook should inform affected users which of their friends contributed to the data leakage.

Of course, Facebook wants its users to keep using the service. Facebook announced on March 21st that it will, 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. What can affected users do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Whatever you do, remember that lots of advertising networks and tech companies besides Facebook want to track your movements around the web. Some of those companies include internet service providers (ISPs), since the U.S. Federal Communications Commission (FCC) killed both broadband privacy and net neutrality in 2017.

It's a windfall for broadband providers, and terrible for consumers. You might contact your elected officials and demand that the FCC put broadband privacy and net neutrality protections back into place.


Security Experts: Breach At Panera Bread Affected Millions. Questions Linger About Vulnerability Fix

Apparently, Panera Bread experienced a massive data breach, which the restaurant chain's management allegedly ignored for months. CSO Online reported:

"Panera Bread’s website leaked millions of customer records in plain text for at least eight months, which is how long the company blew off the issues reported by security researcher Dylan Houlihan... Houlihan shared copies of email exchanges with Panera Bread CIO John Meister – who at first accused Houlihan of trying to run a scam when he first reported the security vulnerability back in August 2017... Exactly eight months after reporting the issue to Panera Bread, Houlihan turned to KrebsOnSecurity. Krebs spoke to Meister, and the website was briefly taken offline. Less than two hours later, Panera said it had fixed the problem."

Reportedly, the sensitive customer information leaked included usernames, first and last names, email addresses, phone numbers, home addresses, birthdays, the last four digits of saved credit card numbers, dietary restrictions, food preferences, and "social account integration information."

Security experts disagree about two key issues: a) whether or not the vulnerability was fixed, and b) the number of affected consumers. Panera Bread claimed about 10,000 customers were affected. Then, that number went up:

"After some more poking, Hold Security reported to Krebs that Panera didn’t just leak plain text records of 7 million customers; “the vulnerabilities also appear to have extended to Panera’s commercial division, which serves countless catering companies. At last count, the number of customer records exposed in this breach appears to exceed 37 million.”

A check earlier today of the public-facing pages at Panera's website failed to find a breach notice, which companies usually provide after a data breach. Not good. Shoppers need to know. Many states have breach notification laws.

Panera's behavior doesn't inspire much confidence. Its internal breach-detection mechanisms seem to have failed, and its post-breach response seemed unprepared, unfocused, and uninterested. What do you think?


Apple And Facebook Executives Exchange Criticisms

Chief executives at Apple and Facebook recently exchanged criticisms. During a lengthy interview by Recode's Kara Swisher and MSNBC's Chris Hayes, Apple CEO Tim Cook responded to questions about Facebook's recent data security and privacy incident. The interview was conducted in Chicago, Illinois on Tuesday, March 27, and broadcast on MSNBC on Friday, April 6, 2018. The relevant section of the interview:

"Hayes: We are back with Apple CEO Tim Cook. In the wake of the news about data scraping by Cambridge Analytica and Facebook, you had this to say recently, and I thought it was quite interesting. You said, "It’s clear to me that something, some large profound change, is needed. I’m personally not a big fan of regulation because sometimes regulation can have unexpected consequences to it. However, I think this certain situation is so dire, has become so large, that probably some well-crafted regulation is necessary." What’d you mean?

Cook: Yeah. Look, we’ve never believed that these detailed profiles of people — that has incredibly deep personal information that is patched together from several sources — should exist. That the connection of all of these dots, that you could use them in such devious ways if someone wanted to do that, that this was one of the things that were possible in life but shouldn’t exist.

Swisher: Right.

Cook: Shouldn’t be allowed to exist. And so I think the best regulation is no regulation, is self regulation. That is the best regulation, because regulation can have unexpected consequences, right? However, I think we’re beyond that here, and I do think that it’s time for a set of people to think deeply about what can be done here.

Hayes: Now, the cynic in me says, you’ve got other tech companies that are much more dependent on that kind of thing than Apple is. And so, yes, you want regulation here because that would essentially be a comparative advantage, that if regulation were to come in on this privacy question, the people it’s going to hit harder aren’t Apple. It’s places like Facebook and Google.

Cook: Well, the skeptic in you would be wrong. (laughter) The truth is we could make a ton of money if we monetized our customer. If our customer was our product, we could make a ton of money. We’ve elected not to do that. (applause) Because we don’t... our products are iPhones and iPads and Macs and HomePods and the Watch, etc., and if we can convince you to buy one, we’ll make a little bit of money, right? But you are not our product."

The comments about regulation are relevant since Mr. Zuckerberg will testify before Congress this week about Facebook's privacy and data security incident involving Cambridge Analytica. Mr. Cook's comments highlight the radically different business models.

Mr. Cook's comments didn't sit well with Mr. Zuckerberg. Vox's Ezra Klein interviewed Zuckerberg on Monday, April 2. The relevant portion of that interview:

"Ezra Klein: One of the things that has been coming up a lot in the conversation is whether the business model of monetizing user attention is what is letting in a lot of these problems. Tim Cook, the CEO of Apple, gave an interview the other day and he was asked what he would do if he was in your shoes. He said, “I wouldn’t be in this situation,” and argued that Apple sells products to users, it doesn’t sell users to advertisers, and so it’s a sounder business model that doesn’t open itself to these problems.

Do you think part of the problem here is the business model where attention ends up dominating above all else, and so anything that can engage has powerful value within the ecosystem?

Mark Zuckerberg: You know, I find that argument, that if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth. The reality here is that if you want to build a service that helps connect everyone in the world, then there are a lot of people who can’t afford to pay. And therefore, as with a lot of media, having an advertising-supported model is the only rational model that can support building this service to reach people.

That doesn’t mean that we’re not primarily focused on serving people. I think probably to the dissatisfaction of our sales team here, I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.

But if you want to build a service which is not just serving rich people, then you need to have something that people can afford. I thought Jeff Bezos had an excellent saying on this in one of his Kindle launches a number of years back. He said, “There are companies that work hard to charge you more, and there are companies that work hard to charge you less.” And at Facebook, we are squarely in the camp of the companies that work hard to charge you less and provide a free service that everyone can use.

I don’t think at all that that means that we don’t care about people. To the contrary, I think it’s important that we don’t all get Stockholm syndrome and let the companies that work hard to charge you more convince you that they actually care more about you. Because that sounds ridiculous to me."

What to make of this? While Mr. Zuckerberg is entitled to his opinions, an old saying seems to apply: people in glass houses shouldn't throw stones.

There seems no question that Facebook built a platform which collected users' intimate and sensitive information, tracked users around the internet, allowed "advertisers" to collect information about both users who interacted with a quiz app and those users' friends (without the friends' knowledge), allowed "advertisers" to target groups of users (regardless of the law and/or consequences), and made it easier for "advertisers" to combine the data collected with information from other sources. You may remember Facebook's "frictionless sharing" program in 2011, where apps automatically posted content in users' timelines without users' active involvement. And Facebook has a history of convoluted, often confusing interfaces for users to change their privacy settings.

You may remember, it was Apple that fought to protect its customers' sensitive information by resisting demands from federal law enforcement officials to build back-door hacks into its devices. I don't think Facebook can make a similar claim about protecting users' information. Actions speak louder than words.

Nobody forced Facebook to build the platform it built. Its executives made choices. And now, Mr. Zuckerberg is apologizing (again) for his company's behavior. You may remember his admission of problems, and promises to do better, in January. Facebook COO Sheryl Sandberg also apologized last week about the executive failures in 2015. You might call it the #Facebookapologytour.

Mr. Zuckerberg's "an advertising-supported model is the only rational model" comment deserves attention. The only model? Mr. Zuckerberg and Facebook made the decision not to charge monthly fees. Would some users pay a monthly fee for guaranteed privacy? I imagine many users would gladly pay. I would. (An Apple co-founder is willing to pay, too.) A more accurate statement would be: an advertising-supported model is the profit-maximizing model.

Also, Mr. Zuckerberg's "advertising-supported" description of his company's business model seems disingenuous. It gives the impression that traditional advertisers pay money to passively display ads, while the reality is much more extensive. More types of companies than traditional advertisers used the social networking service's sophisticated software tools (e.g., Facebook's API platform) to target groups of Facebook users and then collect data about those users and their connected friends.

This makes one wonder how many other companies like Cambridge Analytica have harvested information -- either directly or indirectly via intermediaries. Facebook has suspended the account of Cubeyou, another alleged data harvester, while it investigates.

If there are more companies like this and Facebook executives know it, then they must admit it. Facebook's March 21st press release, promising to investigate all apps that had access to large amounts of information and to conduct full audits of any apps with suspicious activity, suggests that Facebook doesn't know. I'm not sure which is worse: knowing and not saying, or not knowing.

According to news reports, Cambridge Analytica paid sizeable amounts - US $0.75 to $5.00 per voter - for profiles crafted from Facebook users' information. Do that math... that could be amounts ranging from $1.5 million to $10 million, allegedly based upon 2 million users from 11 states: Arkansas, Colorado, Florida, Iowa, Louisiana, Nevada, New Hampshire, North Carolina, Oregon, South Carolina, and West Virginia. Nobody pays that amount of money without expecting satisfactory results.
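
For readers who want the math spelled out, here is that arithmetic using the figures reported above:

```python
# Estimated range of Cambridge Analytica's spending, using the
# reported per-voter rates and the reported 2 million users.
users = 2_000_000
low_rate, high_rate = 0.75, 5.00  # US dollars per voter profile

print(f"low estimate:  ${users * low_rate:,.0f}")   # $1,500,000
print(f"high estimate: ${users * high_rate:,.0f}")  # $10,000,000
```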

Later today, Facebook will inform users whose information may have been harvested by Cambridge Analytica. What are your opinions?


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”
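
To see how quickly a per-violation, per-day formula compounds, consider a rough sketch. The violation count and duration below are hypothetical inputs, since the FTC decides what counts as a violation:

```python
# How the 2011 consent decree's fine formula compounds.
fine_per_violation_per_day = 16_000  # from the 2011 settlement
violations = 1_000                   # hypothetical count
days = 90                            # hypothetical duration

exposure = fine_per_violation_per_day * violations * days
print(f"potential exposure: ${exposure:,}")  # potential exposure: $1,440,000,000
```

Even modest inputs reach ten figures, which is why an estimate of $1 billion or more is plausible.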

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.
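
A toy version of the current rule shows why such ads escape it. The dates and the candidate-mention check below are simplified placeholders, not the FEC's actual test:

```python
from datetime import date, timedelta

def is_election_ad(ad_date: date, election_day: date, is_primary: bool,
                   mentions_federal_candidate: bool) -> bool:
    """Simplified stand-in for the FEC's current, narrow definition."""
    window = timedelta(days=30 if is_primary else 60)
    return (mentions_federal_candidate
            and timedelta(0) <= election_day - ad_date <= window)

general_election = date(2018, 11, 6)
# An ad run five months early escapes the definition entirely:
print(is_election_ad(date(2018, 6, 1), general_election, False, True))   # False
# The same ad inside the 60-day window qualifies:
print(is_election_ad(date(2018, 10, 1), general_election, False, True))  # True
```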

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said that a similar enforcement arm that would work with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — or about 50 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)
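
The amplification came from that friends-data permission: each quiz-taker exposed not just themselves but their whole friend list. A quick sketch of the arithmetic implied by the figures above:

```python
# Friend-graph amplification implied by the reported figures.
quiz_takers = 270_000
people_exposed = 50_000_000  # Facebook later revised this to 87 million

print(f"~{people_exposed / quiz_takers:.0f} people exposed per quiz-taker")
# -> ~185 people exposed per quiz-taker
```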

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is that a company’s terms of service contain a blanket statement of consent that users never read or agree to.

James Grimmelman, a law professor and computer scientist, argued in a 2015 paper that the technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelman said in an interview, “we should ask for appropriate supervisions for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook.com has dominated the news during the past three weeks. The news media have reported about many issues, but there are more -- whether or not you use Facebook. Things began about mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."

So, data that should have been deleted wasn't. Then, Facebook relied upon certifications from entities that had lied previously. Not good. Then, Facebook posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether data which the perpetrators said was destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder what other developers and corporate or academic users have violated Facebook's rules by sharing sensitive member data they shouldn't have.

Facebook announced on March 21st that it will, 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. Facebook needs to do more... perhaps revise its Terms of Use to include large fines for violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure its members' sensitive information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics with political campaigns, it seems wise to assume that the practice will continue with improvements. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.
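
For readers curious how "likes" can predict traits at all, here is a minimal, self-contained sketch of the idea using synthetic data. The 2013 study used millions of real like records and more careful methods, so this illustrates the mechanics only:

```python
# A minimal illustration of likes-based trait prediction, with synthetic
# data standing in for real Facebook like records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 500

trait = rng.integers(0, 2, size=n_users)     # hidden binary trait, e.g., party
page_bias = rng.normal(0, 1, size=n_pages)   # how strongly each page skews
noise = rng.normal(0, 2, size=(n_users, n_pages))
likes = ((trait[:, None] * page_bias + noise) > 1.0).astype(int)  # 0/1 matrix

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
```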

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. I don't. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions, nor who gets which protections where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect the user data it collects. Some of its privacy snafus: the settlement of a lawsuit resulting from alleged privacy abuses by its Beacon advertising program, changing members' ad settings without notice or consent, an advertising platform which allegedly facilitates abuses of older workers, health and privacy concerns about a new service for children ages 6 to 13, transparency concerns about political ads, and new lawsuits about the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now, we have yet another apology.

In a press release this afternoon, Facebook revised upward the number of persons affected by the Facebook/CA breach, from 50 million to 87 million. Most, about 70.6 million, are in the United States. The breakdown by country:

Number of affected persons by country in the Facebook - Cambridge Analytica breach.

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too; especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg will testify before the U.S. Congress on April 11th. His upcoming testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to those available in the European Union. And, there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation?" Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And, that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, if it refuses to provide protections equal to those in the European Union, or if it backtracks later on its promises. Thoughts? Comments?


Fair Housing Groups Sue Facebook for Allowing Discrimination in Housing Ads

[Editor's Note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking services. It is reprinted with permission.]

By Julia Angwin and Ariana Tobin, ProPublica

In February 2017, in response to a ProPublica investigation, Facebook pledged to crack down on efforts by advertisers of rental housing to discriminate against tenants based on race, disability, gender and other characteristics.

But a new lawsuit, filed Tuesday by the National Fair Housing Alliance in U.S. District Court in the Southern District of New York, alleges that the world’s largest social network still allows advertisers to discriminate against legally protected groups, including mothers, the disabled and Spanish-language speakers.

Since 2018 marks the 50th anniversary of the Fair Housing Act, "it is all the more egregious and shocking" that "Facebook continues to enable landlords and real estate brokers to bar families with children, women and others from receiving rental and sales ads or housing," the lawsuit states. It asks the court, among other things, to declare that Facebook’s policies violate fair housing laws, to bar the company from publishing discriminatory ads, and to require it to develop and make public a written fair housing policy for advertising.

Diane Houk, lead counsel for the alliance, said this type of discrimination is especially difficult to uncover and combat. "The person who is being discriminated against has no way to know" it, because the technology "keeps the discrimination hidden in hopes that it will not be caught," she said.

Facebook disputes the housing groups’ allegations. "There is absolutely no place for discrimination on Facebook. We believe this lawsuit is without merit, and we will defend ourselves vigorously," said Facebook spokesman Joe Osborne.

The lawsuit adds to Facebook’s woes, which are mounting on multiple fronts. The company’s stock plunged last week on the news that it had allowed a voter-profiling outfit, Cambridge Analytica, to obtain data on 50 million of its users without their knowledge or consent. The news came after a troubling year in which, among other things, Facebook admitted that it unwittingly allowed a Russian disinformation operation on its platform and had been promoting fake news in its News Feed algorithm. As a result, lawmakers and regulators around the world have launched investigations into Facebook.

Discrimination in housing advertising has been a persistent problem for Facebook. In October 2016, we described how Facebook let advertisers exclude specific groups with what it called "ethnic affinities," including blacks and Hispanics, from seeing ads. Although Facebook responded by announcing it had built a system to flag and reject these ads, we bought dozens of rental housing ads in November 2017 that we specified would not be shown to blacks, Jews, people interested in wheelchair ramps and other groups.

It wasn’t until ProPublica brought the issue of advertising discrimination on Facebook to light, Houk said, that fair housing advocates learned of it. Emulating ProPublica’s technique, the Washington, D.C.-based national fair housing group, along with member groups in New York, San Antonio and Miami, created fake housing companies and placed discriminatory ads on Facebook. The ads were approved by Facebook over a period of a few months, with the most recent buys occurring on Feb. 23.

Using Facebook’s dropdown "exclusion" menu, they were able to buy housing ads that blocked groups such as "trendy moms," "soccer moms," "parents with teenagers," people interested in a disabled parking permit and people interested in Telemundo, the Spanish-language television network.

The Fair Housing Act makes it illegal to publish any advertisement "with respect to the sale or rental of a dwelling that indicates any preference, limitation or discrimination based on race, color, religion, sex, handicap, familial status or national origin." Violators may face tens of thousands of dollars in fines.

After ProPublica’s investigation, Facebook added a self-certification option, which asks housing advertisers to certify that their advertisement is not discriminatory. In some cases, Houk said, the housing groups encountered the self-certification option, and did not submit the ads to Facebook for approval and publication. But that only happened in some of the ad buys, she said.

Since advertisers can falsely attest to fairness, the self-certification screens don’t "seem like a whole-hearted commitment to trying to change the advertising platform to comply with the Fair Housing Act and local fair housing laws," Houk said.

A couple of weeks after the groups bought housing ads, so did ProPublica (independently) — and we excluded some of the same categories, such as "soccer moms." In most of those tests, we encountered self-certification screens. However, when we bought another housing ad this week, we were able to exclude people interested in Telemundo.

Houk said there were so many possible explanations for the difference in results — such as the number of categories excluded or the types of exclusions sought — that it was impossible to speculate about what caused many of her clients’ ad purchases to be approved but not ProPublica’s.

Both the fair housing groups and ProPublica found that Facebook has blocked the use of race as an exclusion category — as it promised to do in November. Facebook rejected a ProPublica housing ad that was specifically aimed at African Americans. It also denied our attempts to buy employment ads targeted by race, and removed a job listing with a question designed to filter by race. However, the housing groups’ and ProPublica’s ability to exclude people interested in Telemundo suggests that advertisers could still discriminate by using proxies for race or ethnicity.

In a separate federal case in California, challenging Facebook’s use of racial exclusions in ad targeting, Facebook has argued that it has immunity against liability for such discrimination. It cited Section 230 of the 1996 federal Communications Decency Act, which protects internet companies from liability for third-party content.

"Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any," Facebook contended.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


How the Crowd Led ProPublica to Investigate IBM

[Editor's note: today's guest post, by the reporters at ProPublica, discusses employment practices at a major corporation in the United States. The investigation is as interesting as the "Cutting 'Old Heads' At IBM" report. This also caught my attention because a data breach at IBM in 2007 led to the creation of this blog. Today's article is reprinted with permission.]

By Ariana Tobin and Peter Gosselin, ProPublica

On March 22, we reported that over the past five years IBM has been removing older U.S. employees from their jobs, replacing some with younger, less experienced, lower-paid American workers and moving many other jobs overseas.

We’ve got documentation and details — most of which are the direct result of a questionnaire filled out by over 1,100 former IBMers.

We’ve gone to the company with our findings. IBM did not answer the specific questions we sent. Spokesman Edward Barbini said: “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years.”

We don’t know the exact size of the problem. Our questionnaire isn’t a scientific sample, nor did all the participants tell us they experienced age discrimination. But the hundreds of similar stories show a pattern of older employees being pushed out even when the company itself says they were doing a good job.

This project wasn’t inspired by a high-level leak or an errant line in secret documents. It came to us through reader engagement. Our investigation took us beyond some of our usual reporting techniques. We’d like to elaborate on this because:

  • We know readers will wonder how we sourced some pretty serious claims.
  • Many ex-employees trusted us with their stories and spent many hours in conversation with us. We think it’s good practice to let them know how we’ve used their information.
  • This is probably the first time we’ve been pointed to a big project by a community of people we found through digital outreach. We hope that by sharing our experiences, we can help others build on our work.

IBMers found us

This project started as a conversation between the two of us, both reporters at ProPublica. Peter had taken on the age discrimination beat for reasons both personal and professional. Ariana was newly minted into a job called “engagement reporter.”

Ariana suggested that Peter write up a short essay on his own experiences of being laid off at 63 and searching for a job in the aftermath. We attached a short questionnaire to the bottom and headlined it: “Over 50 and looking for a job? We’d like to hear from you.”

Dozens of people responded within the first couple of weeks. As we looked through this first round of questionnaires, we noticed a theme: a whole lot of information technology workers told us they were struggling to stay employed. And those who had lost their jobs? They were having a really hard time finding new work.

Of those IT workers, several mentioned IBM right off the bat. One woman wrote that she and her coworkers were working together to find new jobs in order to “ward off the dreaded old person layoff from IBM.”

Another wrote: “I can probably help you get a lot more stories, contact me if you want to discuss this possibility.”

Another wrote: “Part of the separation agreement was that I not seek collective action against IBM for age discrimination. I was not going to sign as a law firm was planning to file a grievance. However they needed 10 people to agree and they could not get the numbers.”

… and then they connected us with more IBMers

We started making some calls. One of the first people we talked to was Brian Paulson, a 57-year-old senior manager with 18 years at IBM, who was fired for “performance reasons” that the company refused to explain. He was still job-hunting two years later.

Another ex-IBM employee told us that she had seen examples of older workers laid off from many parts of the company on a public Facebook page called WatchingIBM. Ariana spent a day looking through the posts, which were, as promised, crawling with stories, questions, and calls for support from workers of all kinds.

We decided to reach out to the page’s administrator, who was a longtime IBM workplace activist, Lee Conrad. He shared our age discrimination questionnaire in the group and more responses poured in.

With dozens of interviews already on the books, we decided to launch a second, more specific questionnaire — this time about IBM

We realized that we had been pointed toward an angry, sad and motivated group. The older ex-IBM workers we called were trying to figure out whether their own layoffs were unique or part of a larger trend. And if they were part of a larger trend... how many people were affected?

A major frustration we saw in comment after comment: These workers couldn’t get information on how many others had been forced out with them.

This was an information gap that immediately struck Peter, because that information is exactly what the law requires employers to disclose at the time of a layoff.

On top of that, many of these sources mentioned having been forced to sign agreements that kept them from going to court or even talking about what had happened to them. They were scared to do anything in violation of those agreements, a fear that kept them from finding out the answers to some big open questions: Why would IBM have stopped releasing the ages and positions of those let go, as they had done before 2014 to comply with federal law? How many workers out there believed they had been “retired” against their will? What did managers really tell their subordinates when the time came to let them go? Who was left to do all of their work?

So we wrote up another questionnaire asking those specific questions.

We learned from the responses, and also the response rate

We contacted people on listservs, found them on open petitions, joined closed LinkedIn networks, and followed each posting on ex-IBM groups. We tweeted the questionnaire out on days that IBM reported its earnings, including the company’s ticker symbol. We talked to trade magazines and IBM historians and organizers who still work at IBM. We bought ads on Facebook and aimed them toward cities and towns where we knew IBM had been cutting its workforce.

As the responses came in, we tried to figure out where most of them were coming from. To identify any meaningful trends, we needed to know who was answering, what was working, and why. We also realized that we needed to introduce ourselves in order to persuade anyone it was worth participating.

When something worked, we’d double down.

We know what worked best: when people filled out the questionnaire, they also shared their contact information with us. So we asked them to forward the questionnaire within their own networks.

And we got more leads

We read through all of the responses and identified themes: 183 respondents said the company recorded them as having retired by choice even though they had no desire to retire or flat-out objected to the idea. Forty-five people were told they’d have to uproot their lives and move, sometimes thousands of miles from the communities where they had worked for years, or else resign. Fifty-three said their jobs had been moved overseas. Some were happy they’d left. Some were company luminaries, given top ratings throughout their careers. Some were still fighting over benefits and health care. Some were worried about ever finding work again.

Inevitably, this categorization process led us to identify new patterns as we went along, and as new responses accumulated. For each new pattern, we would go back and see how many people fit.
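As a rough illustration of this kind of categorization (not the actual process we used), recurring themes can be tallied across free-text responses with a few keyword lists. The sample answers and keywords below are invented for the example.

```python
# Illustrative tally of recurring themes in free-text questionnaire responses.
# The sample answers and keyword lists are invented for this example.
from collections import Counter

responses = [
    "IBM recorded me as retired even though I objected",
    "I was told to relocate across the country or resign",
    "my job was moved overseas",
]

themes = {
    "forced_retirement": ("retired", "retirement"),
    "relocate_or_resign": ("relocate", "resign"),
    "offshored": ("overseas", "offshore"),
}

tally = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(keyword in lowered for keyword in keywords):
            tally[theme] += 1

print(tally)
# Counter({'forced_retirement': 1, 'relocate_or_resign': 1, 'offshored': 1})
```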

One of the first and most interesting of these categories was people who had received emails congratulating them on their retirement at the same time they were informed of their layoff. We realized there would be power in numbers, so we set up a SecureDrop for people who were willing to send us their paperwork.

Eventually, we also created a category called “legal action.” We’d stumbled upon support groups of ex-IBM employees who had filed formal complaints with the Equal Employment Opportunity Commission. Some sent us the company’s responses to their individual complaints, giving us insight into how the company answered allegations of discrimination. These documents were, of course, very useful.

In other words, we sent some rather complicated mass emails and were surprised over and over again by the specificity of the responses.

IBM undoubtedly has information that would shed light on the documents, its layoff practices, and the overall extent and nature of its job cuts. The company chose not to respond to our questions about those issues.

So we tried to answer ex-IBMers’ questions ourselves, including one of the most basic: How many employees ages 40 and over were let go or left in recent years?

IBM won’t say. In fact, over the years, the company has stopped releasing almost all information about its U.S. workforce. In 2009, it stopped publishing its American employment total. In 2014, it stopped disclosing the numbers and ages of older employees it was laying off, a requirement of the nation’s basic anti-age bias law, the Age Discrimination in Employment Act (ADEA).

So we’ve sought to estimate the number, relying on one of the few remaining bits of company-provided information — a technique developed by a veteran financial analyst who follows IBM for investors — as well as patterns we spotted in internal company documents.

We began with a line in the company’s quarterly and annual filings with the U.S. Securities and Exchange Commission for “workforce rebalancing,” a company term for layoffs, firings and other non-retirement departures. It’s a gauge of what IBM spends to let people go. In the past five years, workforce rebalancing charges have totaled $4.3 billion.

The technique was developed by veteran IBM analyst Toni Sacconaghi of Bernstein Research. Sacconaghi is a respected Wall Street analyst who has been named to Institutional Investor’s All-America Research Team every year since 2001. His technique and layoff estimates have been widely cited by news organizations including The Wall Street Journal and Fortune.

Some years ago, Sacconaghi estimated that IBM’s average per-employee cost for laying off a worker was $70,000.

Dividing $4.3 billion by $70,000 suggests that during the past five years IBM’s worldwide job cuts totaled about 62,000. If anything, that number is low, given IBM executives’ comments at a recent investor conference. Internal company documents we reviewed suggest that 50 to 60 percent of cuts were made in the U.S., with older workers representing roughly 60 percent of those. That translates to about 20,000 older American workers let go.
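For readers who want to check the arithmetic, here is a minimal sketch of that calculation, using only the figures cited above; the midpoint chosen for the U.S. share is ours.

```python
# Back-of-the-envelope layoff estimate built from the figures cited above.
rebalancing_charges = 4.3e9  # five years of "workforce rebalancing" charges
cost_per_employee = 70_000   # Sacconaghi's estimated per-employee layoff cost

worldwide_cuts = rebalancing_charges / cost_per_employee
print(f"Worldwide cuts: ~{worldwide_cuts:,.0f}")  # ~61,429

us_share = 0.55      # midpoint of the 50-60 percent U.S. share in internal documents
older_share = 0.60   # older workers were roughly 60 percent of U.S. cuts

older_us_cuts = worldwide_cuts * us_share * older_share
print(f"Older U.S. workers let go: ~{older_us_cuts:,.0f}")  # ~20,271
```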

Our analysis suggests the total of U.S. layoffs is almost certainly higher.

First, as Sacconaghi said in a recent interview, IBM’s per-employee rebalancing costs are likely much lower now because, starting in 2016, the company reduced severance payments to departing employees from six months to just 30 days. That means IBM can lay off or fire more people for the same or lower overall costs.

Second, as those ex-IBMers told us, the company often converts layoffs into retirements, so the workforce rebalancing numbers don’t tell the whole story.

Right below the line for “workforce rebalancing” in its SEC filings, IBM adds another line for “retirement-related costs,” which reflects how much the company spends each year retiring people out. Some of that spending, perhaps a substantial portion, went to retirements that were less than fully voluntary. This could add up to thousands more people.
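A quick sensitivity check shows how much the first point alone could move the estimate. The $70,000 figure is the only reported estimate; the lower per-employee costs below are hypothetical, chosen just to show the direction of the effect.

```python
# Hypothetical sensitivity check: the same $4.3 billion spend implies more
# departures as the per-employee cost falls. Only $70,000 is a reported
# estimate; the lower figures are illustrative.
rebalancing_charges = 4.3e9

for cost_per_employee in (70_000, 50_000, 35_000):
    implied_cuts = rebalancing_charges / cost_per_employee
    print(f"${cost_per_employee:,} per person -> ~{implied_cuts:,.0f} cuts")
```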

By coming up with answers and investigating in the open, we’ve gotten more sources

Many of the conversations we’ve had during our reporting didn’t make it into the final story. People allowed us to review internal company documents. They let us see long email exchanges with their managers. They dug back through closets and garages to find memos they had saved out of frustration or fatigue or just plain anger.

We can’t go into detail about all of the ways the community helped us report out this story, because we also promised many of our sources that we would protect their confidentiality. The beauty is that they talked to us anyway. They knew where to find us, because our contact information had been spread far and wide.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Airlines Want To Extend 'Dynamic Pricing' Capabilities To Set Ticket Prices By Each Person

In the near future, what you post on social media sites (e.g., Facebook, Instagram, Pinterest, etc.) could affect the price you pay for airline tickets. How's that?

First, airlines already use what the travel industry calls "dynamic pricing" to vary prices by date, time of day, and season. We've all seen higher ticket prices during the holidays and peak travel times. The Telegraph UK reported that airlines want to extend dynamic pricing to set fares by person:

"... the advent of setting fares by the person, rather than the flight, are fast approaching. According to John McBride, director of product management for PROS, a software provider that works with airlines including Lufthansa, Emirates and Southwest, a number of operators have already introduced dynamic pricing on some ticket searches. "2018 will be a very phenomenal year in terms of traction," he told Travel Weekly..."

And, there was a preliminary industry study about how to do it:

" "The introduction of a Dynamic Pricing Engine will allow an airline to take a base published fare that has already been calculated based on journey characteristics and broad segmentation, and further adjust the fare after evaluating details about the travelers and current market conditions," explains a white paper on pricing written by the Airline Tariff Publishing Company (ATPCO), which counts British Airways, Delta and KLM among its 430 airline customers... An ATPCO working group met [in late February] to discuss dynamic pricing, but it is likely that any roll out to its customers would be incremental."

What does "incremental" mean? Experts say the first step would be to vary ticket prices in search results at the airline's site, or at an intermediary's site. There's virtually no way for a traveler to know whether the personalized price they see is higher (or lower) than the prices presented to others.

With dynamic pricing per person, business travelers would pay more. And an airline could automatically bundle several fees (e.g., priority boarding, luggage, meals) into the ticket prices of its loyalty program members, reducing transparency and undermining fairness. Of course, airlines would pitch this as convenience, but alert consumers know that any convenience has its price.
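To make the mechanics concrete, here is a toy sketch of how a per-person pricing engine might adjust a base published fare. Every field, factor, and number is invented for illustration; none of it comes from ATPCO's white paper or any airline's actual system.

```python
# Toy per-person dynamic pricing: start from a base published fare and
# adjust it using traveler attributes. All factors here are invented.

def personalized_fare(base_fare: float, traveler: dict) -> float:
    fare = base_fare
    if traveler.get("is_business_traveler"):
        fare *= 1.25  # less price-sensitive segment sees a higher fare
    if traveler.get("loyalty_member"):
        fare += 40.0  # bundled "perks" (bags, boarding) folded into the price
    if traveler.get("searched_route_recently"):
        fare *= 1.10  # repeated searches read as intent to buy
    return round(fare, 2)

# Two travelers searching the same flight see different prices.
print(personalized_fare(300.0, {"is_business_traveler": True}))  # 375.0
print(personalized_fare(300.0, {"loyalty_member": True,
                                "searched_route_recently": True}))  # 374.0
```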

Thankfully, some politicians in the United States are paying attention. The Shear Social Media Law & Technology blog summarized the situation very well:

"[Dynamic pricing by person] demonstrates why technology companies and the data collection industry needs greater regulation to protect the personal privacy and free speech rights of Americans. Until Silicon Valley and data brokers are properly regulated Americans will continue to be discriminated against based upon the information that technology companies are collecting about us."

Just because something can be done with technology, doesn't mean it should be done. What do you think?