
A Free Press Works For All of Us

[Editor's note: after repeated claims since 2017 by President Trump accusing journalists of being "the enemy of the people," more than 300 local and national newspapers responded during August. Today's guest post includes ProPublica's response. It is reprinted with permission.]

By Stephen Engelberg, Editor-in-Chief, ProPublica

ProPublica does not have an editorial page, and we have never advocated for a particular policy to address the wrongs our journalism exposes. But from the very beginning of our work more than a decade ago, we have benefited enormously from the traditions and laws that protect free speech. And so today, as the nation’s news organizations remind readers of the value of robust journalism, it seems fitting to add our voice.

ProPublica specializes in investigative reporting — telling stories with “moral force” that hold government, businesses and revered institutions to account. There are few forms of journalism more vulnerable to pressure from the powerful. What we publish can change the outcome of elections, reverse policies, embarrass police or prosecutors and cost companies boatloads of money. The main subjects of our work, in most cases, would much prefer that our reporting never appear or be substantially watered down.

The framers of our Constitution fully understood the importance of protecting a robust, sometimes raucous press. It is no coincidence that the very first amendment begins, “Congress shall make no law ... abridging the freedom of speech, or of the press.” They had lived under a system in which a powerful monarch could use the law of seditious libel to accomplish the 18th-century version of “lock her up.” They wanted no part of it.

In the 21st century, journalism — at least as practiced on cable television — is becoming a craft in which partisans put forth or omit facts to advance their preferred political perspective. Those who bring to light uncomfortable truths are dismissed as “fake news” or, in our case, the work of the “Soros-funded” ProPublica, the all-purpose, vaguely anti-Semitic epithet meant to connote left-wing bias. (For the record, George Soros’s Open Society Foundations fund less than 2 percent of our operations.)

We have covered Presidents George W. Bush, Barack Obama and Donald Trump. We’re proud to say that we’ve annoyed them all with journalism that revealed serious shortcomings. We revealed that Bush had granted pardons to nearly four times as many white applicants as blacks; we ceaselessly hammered Obama for his failure to provide mortgage relief he’d promised ordinary homeowners; and we’ve vigorously covered Trump’s crackdown on immigrants, notably disclosing an audio recording of wailing children in a shelter. Democrats and Republicans have come under our scrutiny. We disclosed how California’s Democrats had manipulated the state’s redistricting process; however, we also reported that Republicans had used dark money and redistricting in other states to win the House in 2012, even though GOP congressional candidates won far fewer votes in aggregate than Democrats.

Journalists inevitably make mistakes along the way, and we’ve had our share at ProPublica. But the argument advanced by Trump and his allies — that journalists are the “enemy of the people” who sit around making up fake stories to undermine his administration — is palpably false. In fact, to use a word we have shied away from in our coverage, it’s a lie. And the president knows it.

For our part, we’re both proud and pleased to live in a country where one can still say that.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


No, a Teen Did Not Hack a State Election

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about the integrity and security of voting systems in the United States. It is reprinted with permission.]

By Lilia Chang, ProPublica

Headlines from Def Con, a hacking conference held this month in Las Vegas, might have left some thinking that infiltrating state election websites and affecting the 2018 midterm results would be child’s play.

Articles reported that teenage hackers at the event were able to “crash the upcoming midterm elections” and that it had taken “an 11-year-old hacker just 10 minutes to change election results.” A first-person account by a 17-year-old in Politico Magazine described how he shut down a website that would tally votes in November, “bringing the election to a screeching halt.”

But now, elections experts are raising concerns that misunderstandings about the event — many of them stoked by its organizers — have left people with a distorted sense of its implications.

On a website published before r00tz Asylum, the youth section of Def Con, organizers indicated that students would attempt to hack exact duplicates of state election websites, referring to them as “replicas” or “exact clones.” (The language was scaled back after the conference to simply say “clones.”)

Instead, students were working with look-alikes created for the event that had vulnerabilities they were coached to find. Organizers provided them with cheat sheets, and adults walked the students through the challenges they would encounter.

Josh Franklin, an elections expert formerly at the National Institute of Standards and Technology and a speaker at Def Con, called the websites “fake.”

“When I learned that they were not using exact copies and pains hadn’t been taken to more properly replicate the underlying infrastructure, I was definitely saddened,” Franklin said.

Franklin and David Becker, the executive director of the Center for Election Innovation & Research, also pointed out that while state election websites report voting results, they do not actually tabulate votes. This information is kept separately and would not be affected if hackers got into sites that display vote totals.

“It would be lunacy to directly connect the election management system, of which the tabulation system is a part of, to the internet,” Franklin said.

Jake Braun, the co-organizer of the event, defended the attention-grabbing way it was framed, saying the security issues of election websites haven’t gotten enough attention. Those questioning the technical details of the mock sites and whether their vulnerabilities were realistic are missing the point, he insisted.

“We want elections officials to start putting together communications redundancy plans so they have protocol in place to communicate with voters and the media and so on if this happens on election day,” he said.

Braun provided ProPublica with a report that r00tz plans to circulate more widely that explains the technical underpinnings of the mock websites. They were designed to be vulnerable to a SQL injection attack, a common hack, the report says.
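For readers unfamiliar with the technique: a SQL injection works by sneaking SQL syntax into a form field, so that user input rewrites the meaning of the database query. Here is a minimal, hypothetical Python sketch (mine, not drawn from the r00tz report) showing a vulnerable query and the standard defense, a parameterized query:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE results (candidate TEXT, votes INTEGER)")
    conn.execute("INSERT INTO results VALUES ('Alice', 100)")

    def lookup_unsafe(name):
        # VULNERABLE: user input is pasted directly into the SQL string,
        # so input like "x' OR '1'='1" changes the query's logic.
        query = "SELECT * FROM results WHERE candidate = '%s'" % name
        return conn.execute(query).fetchall()

    def lookup_safe(name):
        # SAFE: a parameterized query treats the input as data, not SQL.
        return conn.execute(
            "SELECT * FROM results WHERE candidate = ?", (name,)
        ).fetchall()

    print(lookup_unsafe("x' OR '1'='1"))  # returns every row
    print(lookup_safe("x' OR '1'='1"))    # returns nothing

The defense is decades old and well understood, which is part of why, as the experts below note, states were already working to close this hole.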

Franklin acknowledged that some state election reporting sites do indeed have this vulnerability, but he said that states have been aware of it for months and are in the process of protecting against it.

Becker said the details spelled out in the r00tz report would have been helpful to have from the start.

“We have to be really careful about adding to the hysteria about our election system not working or being too vulnerable because that’s exactly what someone like President Putin wants,” Becker said. Instead, Becker said that “we should find real vulnerabilities and address them as elections officials are working really hard to do.”


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Verizon Throttled Mobile Services Of First Responders Fighting California Wildfires

Fighting fires is difficult, dangerous work. Recently, that was made worse by an internet service provider (ISP). Ars Technica reported:

"Verizon Wireless' throttling of a fire department that uses its data services has been submitted as evidence in a lawsuit that seeks to reinstate federal net neutrality rules. "County Fire has experienced throttling by its ISP, Verizon," Santa Clara County Fire Chief Anthony Bowden wrote in a declaration. "This throttling has had a significant impact on our ability to provide emergency services. Verizon imposed these limitations despite being informed that throttling was actively impeding County Fire's ability to provide crisis-response and essential emergency services." Bowden's declaration was submitted in an addendum to a brief filed by 22 state attorneys general, the District of Columbia, Santa Clara County, Santa Clara County Central Fire Protection District, and the California Public Utilities Commission. The government agencies are seeking to overturn the recent repeal of net neutrality rules in a lawsuit they filed against the Federal Communications Commission in the US Court of Appeals for the District of Columbia Circuit."

Reportedly, Verizon replied with a statement that the throttling "was a customer service error." Huh? This is how Verizon treats first-responders? This is how an ISP treats first-responders during a major emergency and natural disaster? The wildfires have claimed 12 deaths, destroyed at least 1,200 homes, and wiped out the state's emergency fund. Smoke from the massive wildfires has caused extensive pollution and health warnings in Northwest areas including Portland, Oregon and Seattle, Washington. The thick smoke could be seen from space.

Ars Technica reported in an August 21 update:

"Santa Clara County disputed Verizon's characterization of the problem in a press release last night. "Verizon's throttling has everything to do with net neutrality—it shows that the ISPs will act in their economic interests, even at the expense of public safety," County Counsel James Williams said on behalf of the county and fire department. "That is exactly what the Trump Administration's repeal of net neutrality allows and encourages." "

In 2017, President Trump appointed Ajit Pai, a former Verizon attorney, as Chairman of the U.S. Federal Communications Commission. Under Pai's leadership, the FCC revoked both online privacy and net neutrality protections for consumers. This gave ISPs the freedom to do as they want online while consumers lost two key freedoms: a) the freedom to control the data describing their activities online (which are collected and shared with others by ISPs), and b) freedom to use the internet bandwidth purchased as they choose.

If an ISP will throttle and abuse first-responders, think of what it will do to regular consumers. What are your opinions?


T-Mobile Confirmed Data Breach Affecting Millions Of Customers

T-Mobile confirmed a data breach which impacted its customers. Last week, the mobile service provider said in a statement:

"On August 20, our cyber-security team discovered and shut down an unauthorized access to certain information, including yours, and we promptly reported it to authorities. None of your financial data (including credit card information) or social security numbers were involved, and no passwords were compromised. However, you should know that some of your personal information may have been exposed, which may have included one or more of the following: name, billing zip code, phone number, email address, account number and account type (prepaid or postpaid)."

Affected customers are being notified. The statement did not disclose the number of affected customers, exactly how criminals breached its systems, or the specific actions T-Mobile is taking to prevent this type of breach from happening again. The lack of detail is discouraging and does not promote trust.

CBS News reported:

"... the breach affected about 3 percent of T-Mobile's 77 million customers, or 2 million people... In May, researchers detected a bug in the company's website that allowed anyone to access the personal data of customers with just a phone number. The company is waiting for regulatory approval of a proposed $26.5 billion takeover of Sprint, the fourth-largest carrier in the United States."

So, criminals have stolen enough information to do damage: send spam via e-mail or text, and conduct pretexting (e.g., impersonate others to take over online accounts by resetting passwords, and/or gain access to payment data).

If you received a breach notice from T-Mobile, how satisfied are you with the company's response?


Besieged Facebook Says New Ad Limits Aren’t Response to Lawsuits

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series monitoring Facebook's attempts to clean up its advertising systems and tools. It is reprinted with permission.]

By Ariana Tobin and Jeremy B. Merrill, ProPublica

Facebook’s move to eliminate 5,000 options that enable advertisers on its platform to limit their audiences is unrelated to lawsuits accusing it of fostering housing and employment discrimination, the company said Wednesday.

“We’ve been building these tools for a long time and collecting input from different outside groups,” Facebook spokesman Joe Osborne told ProPublica.

Tuesday’s blog post announcing the elimination of categories that the company has described as “sensitive personal attributes” came four days after the Department of Justice joined a lawsuit brought by fair housing groups against Facebook in federal court in New York City. The suit contends that advertisers could use Facebook’s options to prevent racial and religious minorities and other protected groups from seeing housing ads.

Raising the prospect of tighter regulation, the Justice Department said that the Communications Decency Act of 1996, which gives immunity to internet companies from liability for content on their platforms, did not apply to Facebook’s advertising portal. Facebook has repeatedly cited the act in legal proceedings in claiming immunity from anti-discrimination law. Congress restricted the law’s scope in March by making internet companies more liable for ads and posts related to child sex-trafficking.

Around the same time the Justice Department intervened in the lawsuit, the Department of Housing and Urban Development (HUD) filed a formal complaint against Facebook, signaling that it had found enough evidence during an initial investigation to raise the possibility of legal action against the social media giant for housing discrimination. Facebook has said that its policies strictly prohibit discrimination, that over the past year it has strengthened its systems to protect against misuse, and that it will work with HUD to address the concerns.

“The Fair Housing Act prohibits housing discrimination including those who might limit or deny housing options with a click of a mouse,” Anna María Farías, HUD’s assistant secretary for fair housing and equal opportunity, said in a statement accompanying the complaint. “When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it’s the same as slamming the door in someone’s face.”

Regulators in at least one state are also scrutinizing Facebook. Last month, the state of Washington imposed legally binding compliance requirements on the company, barring it from offering advertisers the option of excluding protected groups from seeing ads about housing, credit, employment, insurance or “public accommodations of any kind.”

Advertising is the primary source of revenue for the social media giant, which is under siege on several fronts. A recent study and media coverage have highlighted how hate speech and false rumors on Facebook have spurred anti-refugee discrimination in Germany and violence against minority ethnic groups such as the Rohingya in Myanmar. This week, Facebook said it had found evidence of Russian and Iranian efforts to influence elections in the U.S. and around the world through fake accounts and targeted advertising. It also said it had suspended more than 400 apps “due to concerns around the developers who built them or how the information people chose to share with the app may have been used.”

Facebook declined to identify most of the 5,000 options being removed, saying that the information might help bad actors game the system. It did say that the categories could enable advertisers to exclude racial and religious minorities, and it provided four examples that it deleted: “Native American culture,” “Passover,” “Evangelicalism” and “Buddhism.” It said the changes will be completed next month.

According to Facebook, these categories have not been widely used by advertisers to discriminate, and their removal is intended to be proactive. In some cases, advertisers legitimately use these categories to reach key audiences. According to targeting data from ads submitted to ProPublica’s Political Ad Collector project, Jewish groups used the “Passover” category to promote Jewish cultural events, and the Michael J. Fox Foundation used it to find people of Ashkenazi Jewish ancestry for medical research on Parkinson’s disease.

Facebook is not limiting advertisers’ options for narrowing audiences by age or sex. The company has defended age-based targeting in employment ads as beneficial for employers and job seekers. Advertisers may also still target or exclude by ZIP code — which critics have described as “digital red-lining” but Facebook says is standard industry practice.

A pending suit in federal court in San Francisco alleges that, by allowing employers to target audiences by age, Facebook is enabling employment discrimination against older job applicants. Peter Romer-Friedman, a lawyer representing the plaintiffs in that case, said that Facebook’s removal of the 5,000 options “is a modest step in the right direction.” But allowing employers to sift job seekers by age, he added, “shows what Facebook cares about: its bottom line. There is real money in age-restricted discrimination.”

Senators Bob Casey of Pennsylvania and Susan Collins of Maine have asked Facebook for more information on what steps it is taking to prevent age discrimination on the site.

The issue of discriminatory advertising on Facebook arose in October 2016 when ProPublica revealed that advertisers on the platform could narrow their audiences by excluding so-called “ethnic affinity” categories such as African-Americans and Spanish-speaking Hispanics. At the time, Facebook promised to build a system to flag and reject such ads. However, a year later, we bought dozens of rental housing ads that excluded protected categories. They were approved within seconds. So were ads that excluded older job seekers, as well as ads aimed at anti-Semitic categories such as “Jew hater.”

The removal of the 5,000 options isn’t Facebook’s first change to its advertising portal in response to such criticism. Last November, it added a self-certification option, which asks housing advertisers to check a box agreeing that their advertisement is not discriminatory. The company also plans to require advertisers to read educational material on the site about ethical practices.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Facebook To Remove Onavo VPN App From Apple App Store

Not all Virtual Private Network (VPN) software is created equal. Some do a better job at protecting your privacy than others. Mashable reported that Facebook:

"... plans to remove its Onavo VPN app from the App Store after Apple warned the company that the app was in violation of its policies governing data gathering... For those blissfully unaware, Onavo sold itself as a virtual private network that people could run "to take the worry out of using smartphones and tablets." In reality, Facebook used data about users' internet activity collected by the app to inform acquisitions and product decisions. Essentially, Onavo allowed Facebook to run market research on you and your phone, 24/7. It was spyware, dressed up and neatly packaged with a Facebook-blue bow. Data gleaned from the app, notes the Wall Street Journal, reportedly played into the social media giant's decision to start building a rival to the Houseparty app. Oh, and its decision to buy WhatsApp."

Thanks Apple! We've all heard of the #FakeNews hashtag on social media. Yes, there is a #FakeVPN hashtag, too. So, buyer beware... online user beware.


Study: Performance Issues Impede IoT Device Trust And Usage Worldwide By Consumers

A recent global survey uncovered interesting findings about consumers' usage of and satisfaction with IoT (internet of things) devices: 52 percent of consumers surveyed already use IoT devices, and 64 percent of users have already encountered performance issues with their devices.

Dynatrace, a software intelligence company, commissioned Opinium Research to conduct a global survey of 10,002 participants, with 2,000 in the United States, 2,000 in the United Kingdom, and 1,000 respondents each in France, Germany, Australia, Brazil, Singapore, and China. Dynatrace announced several findings, chiefly:

"On average, consumers experience 1.5 digital performance problems every day, and 62% of people fear the number of problems they encounter, and the frequency, will increase due to the rise of IoT."

That seems like plenty of poor performance. Some findings were specific to travel, healthcare, and in-home retail sectors. Regarding travel:

"The digital performance failures consumers are already experiencing with everyday technology is potentially making them wary of other uses of IoT. 85% of respondents said they are concerned that self-driving cars will malfunction... 72% feel it is likely software glitches in self-driving cars will cause serious injuries and fatalities... 84% of consumers said they wouldn’t use self-driving cars due to a fear of software glitches..."

Regarding healthcare:

"... 62% of consumers stated they would not trust IoT devices to administer medication; this sentiment is strongest in the 55+ age range, with 74% expressing distrust. There were also specific concerns about the use of IoT devices to monitor vital signs, such as heart rate and blood pressure. 85% of consumers expressed concern that performance problems with these types of IoT devices could compromise clinical data..."

Regarding in-home retail devices:

"... 83% of consumers are concerned about losing control of their smart home due to digital performance problems... 73% of consumers fear being locked in or out of the smart home due to bugs in smart home technology... 68% of consumers are worried they won’t be able to control the temperature in the smart home due to malfunctions in smart home technology... 81% of consumers are concerned that technology or software problems with smart meters will lead to them being overcharged for gas, electricity, and water."

The findings are a clear call to IoT makers to improve the performance, security, and reliability of their internet-connected devices. To learn more, download the full Dynatrace report titled, "IoT Consumer Confidence Report: Challenges for Enterprise Cloud Monitoring on the Horizon."


Whirlpool's Online Product Registration: Confidentiality and Privacy Concerns

Earlier this month, my wife and I relocated to a different city within the same state to live closer to our new, 14-month-old grandson. During the move, we bought new home appliances -- a clothes washer and dryer, both made by Whirlpool -- which prompted today's blog post.

The packaging and operation instructions included two registration postcards with the model and serial numbers printed on the form. Nothing controversial about that. The registration cards listed "Other Easy Ways To Register," including registration websites for the United States and Canada. I tried the online registration to see what improvements or benefits Whirlpool's United States registration site might offer over the old-school snail-mail method besides speed.

The landing page includes a form for the customer's contact information, product purchased information, and future purchase plans. Pretty standard stuff. Nothing alarming there. Near the bottom of the form and just above the "Complete Registration" button are links to Whirlpool's Terms & Conditions and Privacy policies. I read both and found some surprises.

First, the site uses inconsistent nomenclature: two different policy titles. The link says "Terms & Conditions" while the title of the actual policy page states, "Terms Of Use." Which is it? Inconsistent nomenclature can confuse users. Not good. Come on, Whirlpool! This is not hard. Good website usability includes the consistent use of the same page title, so users know where they are going when they select a link, and that they've arrived at the expected destination.

Second, the Terms Of Use policy page (well, I had to pick a title so it would be clear for you) lacks a date. This is confusing, making it difficult, if not impossible, for consumers to know and reference the exact document they read, plus determine what, if any, changes were posted since the prior version. Not good. Come on Whirlpool! Add a publication date. It's not hard.

Third, the Terms Of Use policy contained this clause:

"Whirlpool Corporation welcomes your submissions; however, any information submitted, other than your personal information (for example, your name and e-mail address), to Whirlpool Corporation through this site is the exclusive property of Whirlpool Corporation and is considered NOT to be confidential. Whirlpool Corporation does not receive the submission in confidence or under any confidential or fiduciary relationship. Whirlpool Corporation may use the submission for any purpose without restriction or compensation."

So, the Terms of Use policy is both vague and clear at the same time. It is vague because it doesn't list the exact data elements considered "personal information." Not good. This leaves consumers to guess. The policy lists only two data elements. What about the rest? Are all confidential, or only some? And if some, which ones? Here's the list I consider confidential: name, street address, country, phone number, e-mail address, IP address, device type, device model, device operating system, payment card information, billing address, and online credentials (should I create a profile at the Whirlpool site). Come on Whirlpool! Get it together and provide the complete list of data elements you consider "personal information." It's not hard.

Fourth, the Terms Of Use policy is also clear because the sentences quoted above make Whirlpool's intentions plain: submissions to the site other than "personal information" are not confidential, and Whirlpool can do with them whatever it wants. Since the policy doesn't fully list which data elements are personal, consumers can't know which of their submissions Whirlpool claims as its exclusive property. Not good.

Next, I read Whirlpool's Privacy policy, and hoped that it would clarify things. Thankfully, a little good news. First, the Privacy policy listed a date: May 31, 2018. Second, more inconsistent site nomenclature: the page-bottom links across the site say "Privacy Policy" while the policy page title says "Privacy Statement." I selected the "Expand All" button to view the entire policy. Third, Whirlpool's Privacy Statement listed the items considered personal information:

"- Your contact information, such as your name, email address, mailing address, and phone number
- Your billing information, such as your credit card number and billing address
- Your Whirlpool account information, including your user name, account number, and a password
- Your product and ownership information
- Your preferences, such as product wish lists, order history, and marketing preferences"

This list is a good start. A simple link to this section from the Terms Of Use policy would do wonders to clarify things. However, Whirlpool also collects some key data which it treats more freely than "personal information." The Privacy Statement contains this clause:

"Whirlpool and its business partners and service providers may use a variety of technologies that automatically or passively collect information about how you interact with our Websites ("Usage Information"). Usage Information may include: (i) your IP address, which is a unique set of numbers assigned to your computer by your Internet Service Provider (ISP) (which, depending on your ISP, may be a different number every time you connect to the Internet); (ii) the type of browser and operating system you use; and (iii) other information about your online session, such as the URL you came from to get to our Websites and the date and time you visited our Websites."

And, the Privacy Statement mentions the use of several online tracking technologies:

"We use Local Shared Objects (LSOs) such as HTML5 or Flash on our Websites to store content information and preferences. Third parties with whom we partner to provide certain features on our Websites or to display advertising based upon your web browsing activity use LSOs such as HTML5 or Flash to collect and store information... Web beacons are tiny electronic image files that can be embedded within a web page or included in an e-mail message, and are usually invisible to the human eye. When we use web beacons within our web pages, the web beacons (also known as “clear GIFs” or “tracking pixels”) may tell us such things as: how many people are coming to our Websites, whether they are one-time or repeat visitors, which pages they viewed and for how long, how well certain online advertising campaigns are converting, and other similar Website usage data. When used in our e-mail communications, web beacons can tell us the time an e-mail was opened, if and how many times it was forwarded, and what links users click on from within the e- mail message."

While the "EU-US Privacy Shield" section of the privacy policy lists Whirlpool's European subsidiaries, and contains a Privacy Shield link to an external site listing the companies that are probably some of Whirlpool's service and advertising partners, the privacy policy really does not disclose all of the "third parties," "business partners," "service vendors," advertising partners, and affiliates Whirlpool shares data with. Consumers are left in the dark.

Last, the "Your Rights: Choice & Access" section of the privacy policy mentions the opt-out mechanism for consumers. While consumers can opt-out or cancel receiving marketing (e.g., promotional) messaging from Whirlpool, you can't opt-out of the data collection and archival. So, choice is limited.

Given this and the above concerns, I abandoned the product registration form. Yep. Didn't complete it. Maybe I will in the future after Whirlpool fixes things. Perhaps most importantly, today's blog post is a reminder for all consumers: always read companies' privacy and terms-of-use policies. Always. You never know what you'll find that is irksome. And, if you don't know how to read online policies, this blog has some tips and suggestions.


Federal Reserve Board Fined Citigroup For Mishandling Residential Mortgages

The Federal Reserve Board (FRB) announced on Friday that it had fined Citigroup $8.6 million for the "improper execution of residential mortgage-related documents" in a subsidiary. The announcement explained:

"The $8.6 million penalty addresses the deficient execution and notarization of certain mortgage-related affidavits prepared by a subsidiary, CitiFinancial. The improper practices occurred in 2015 and were corrected. CitiFinancial exited the mortgage servicing business in 2017.

"Also on Friday, the Board announced the termination of an enforcement action from 2011 against Citigroup and CitiFinancial related to residential mortgage loan servicing. The termination of this action was based on evidence of sustainable improvements."

In 2014, Citigroup paid $7 billion to settle allegations by the Department of Justice (DOJ) and several states attorneys general (AGs) that the bank misled investors about toxic mortgage-backed securities. So, sloppy or shoddy handling of mortgage paperwork will get a bank fined. Good. There must be consequences when consumers are abused.

Earlier this month, Wells Fargo admitted to software bugs in its systems which led to the bank accidentally foreclosing on residential homeowners it shouldn't have. 400 homeowners lost their homes. Untold numbers of consumers' credit ratings were wrecked. That sounds like shabby mortgage paperwork handling, too -- definitely worth a larger fine. What do you think?


Wells Fargo Accidentally Foreclosed on Homeowners. 400 Customers Lost Their Homes

Earlier this week, Wells Fargo Bank admitted that it accidentally foreclosed on nearly 400 homeowners it shouldn't have due to a "software glitch." The San Francisco Business Times reported:

"Nearly 400 Wells Fargo customers lost their homes when they were accidentally foreclosed on after a software glitch denied them the ability to modify their mortgages as they sought federal aid, the bank disclosed in a regulatory filing... The bank apologized and has set aside $8 million to compensate those affected by the glitch, which occurred from 2010 to 2015... the software mistake miscalculated customers' eligibility for mortgage modifications. The error caused about 625 customers to be denied loan modifications they sought from a federal program to help homeowners avoid foreclosures."

The $8 million set aside is one small step towards rebuilding consumers' trust. It seems that the bank and its executives have a nasty habit of alleged wrongdoing that often results in fines and settlement agreements. Earlier this month, the U.S. Department of Justice announced a $2 billion settlement agreement where:

"... Wells Fargo Bank, N.A. and several of its affiliates (Wells Fargo) will pay a civil penalty of $2.09 billion under the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) based on the bank’s alleged origination and sale of residential mortgage loans that it knew contained misstated income information and did not meet the quality that Wells Fargo represented. Investors, including federally insured financial institutions, suffered billions of dollars in losses from investing in residential mortgage-backed securities (RMBS) containing loans originated by Wells Fargo... The United States alleged that, in 2005, Wells Fargo began an initiative to double its production of subprime and Alt-A loans. As part of that initative, Wells Fargo loosened its requirements for originating stated income loans – loans where a borrower simply states his or her income without providing any supporting income documentation... despite its knowledge that a substantial portion of its stated income loans contained misstated income, Wells Fargo failed to disclose this information, and instead reported to investors false debt-to-income ratios in connection with the loans it sold. Wells Fargo also allegedly heralded its fraud controls while failing to disclose the income discrepancies its controls had identified."

Sadly, there's plenty more. In April, federal regulators at the Consumer Financial Protection Bureau (CFPB) and the Office of the Comptroller of the Currency (OCC) assessed a $1 billion fine against the bank for violations of the, "Consumer Financial Protection Act (CFPA) in the way it administered a mandatory insurance program related to its auto loans..."

In 2016, the bank paid a $185 million fine for alleged unlawful sales practices where its employees created phony accounts to game an internal sales compensation system. While the bank's CEO was let go and 5,300 workers were fired due to that scandal, bad behavior and poor executive decisions seem to continue.

In August of 2017, an internal investigation of auto insurance policies sold from 2012 to 2016 found that thousands of the bank's customers were forced to buy unneeded and unwanted auto insurance.

The latest incident raises more questions:

  • How does a "software glitch" go undetected and unfixed for five years -- or longer?
  • Where were the quality assurance and software testing processes?
  • Why did post-implementation audits fail to detect the errors?
  • Were any employees reprimanded, demoted, or fired? And if none, why?
  • What specific changes are being implemented to prevent future software glitches?
  • How will the damaged credit histories of foreclosed homeowners be repaired?

Often, all or a portion of the settlement agreements are tax deductible. This both lessens the fines' impacts and shifts the burden to taxpayers. I hope that as regulators pursue solutions, tax-deductible settlements are not repeated. What are your opinions?


Keep An Eye On Facebook's Moves To Expand Its Collection Of Financial Data About Its Users

On Monday, the Wall Street Journal reported that the social media giant had approached several major banks about sharing their detailed financial information about consumers in order "to boost user engagement." Reportedly, Facebook approached JPMorgan Chase, Wells Fargo, Citigroup, and U.S. Bancorp. And, the detailed financial information sought included debit/credit/prepaid card transactions and checking account balances.

The Reuters news service also reported about the talks. The Reuters story mentioned the above banks, plus PayPal and American Express. Then, in a reply, Facebook said that the Wall Street Journal news report was wrong. TechCrunch reported:

"Facebook spokesperson Elisabeth Diana tells TechCrunch it’s not asking for credit card transaction data from banks and it’s not interested in building a dedicated banking feature where you could interact with your accounts. It also says its work with banks isn’t to gather data to power ad targeting, or even personalize content... Facebook already lets Citibank customers in Singapore connect their accounts so they can ping their bank’s Messenger chatbot to check their balance, report fraud or get customer service’s help if they’re locked out of their account... That chatbot integration, which has no humans on the other end to limit privacy risks, was announced last year and launched this March. Facebook works with PayPal in more than 40 countries to let users get receipts via Messenger for their purchases. Expansions of these partnerships to more financial services providers could boost usage of Messenger by increasing its convenience — and make it more of a centralized utility akin to China’s WeChat."

There's plenty in the TechCrunch story. Reportedly, Diana's statement said that banks approached Facebook, and that it already partners:

"... with banks and credit card companies to offer services like customer chat or account management. Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates... The idea is that messaging with a bank can be better than waiting on hold over the phone – and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences – not for advertising or anything else. A critical part of these partnerships is keeping people’s information safe and secure."

What to make of this? First, it really doesn't matter who approached whom. There's plenty of history. Way back in 2012, a German credit reporting agency approached Facebook. So, the financial sector is fully aware of the valuable data collected by Facebook.

Second, users doing business on the platform have already given Facebook permission to collect transaction data. Third, while Facebook's reply was about its users generally, its statement said "no" but sounded more like a "yes." Why? Basically, "account linking" or the convenience of purchase notifications is the hook or way into collecting users' financial transaction data. Existing practices, such as fitness apps and music sharing, show how "account linking" is already used for data collection. Whatever users share on the platform allows Facebook to collect that information.

Fourth, the push to collect more banking data appears at best poorly timed, and at worst -- arrogant. Facebook is still trying to recover and regain users' trust after 87 million persons were affected by the massive data breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement on tech companies, like Google and Facebook. Facebook has stumbled as its screening to identify political ads has incorrectly flagged news sites. Facebook CEO Mark Zuckerberg didn't help matters with his bumbling comments while failing to explain his company's stumbles to identify and prevent fake news.

Gary Cohn, President Donald Trump's former chief economic adviser, sharply criticized social media companies, including Facebook, for allowing fake news:

"In 2008 Facebook was one of those companies that was a big platform to criticize banks, they were very out front of criticizing banks for not being responsible citizens. I think banks were more responsible citizens in 2008 than some of the social media companies are today."

So, it seems wise to keep an eye on Facebook as it attempts to expand its data collection of consumers' financial information. Fifth, banks and banking executives bear some responsibility, too. A guest post on Forbes explained:

"Whether this [banking] partnership pans or not, the Facebook plans are a reminder that banks sit on mountains of wealth much more valuable than money. Because of the speed at which tech giants move, banks must now make sure their clients agree on who owns their data, consent to the use of them, and understand with who they are shared. For that, it is now or never... In the financial industry, trust between a client and his provider is of primary importance. You can’t sell a customer’s banking data in the same way you sell his or her internet surfing behavior. Finance executives understand this: they even see the appropriate use of customer data as critical to financial stability. It is now or never to define these principles on the use of customer data... It’s why we believe new binding guidelines such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act are welcome, even if they have room for improvement... A report by the US Treasury published earlier this week called on Congress to enact a federal data security and breach notification law to protect consumer financial data. The principles outlined above can serve as guidance to lawmakers drafting legislation, and bank executives considering how to respond to advances by Facebook and other big techs..."

Consumers should control their data -- especially financial data. If those rules are not put in place, then consumers have truly lost control of the sensitive personal and financial information that describes them. What are your opinions?


New York State Tells Charter To Leave Due To 'Persistent Non-Compliance And Failure To Live Up To Promises'

The New York State Public Service Commission (NYPSC) announced on Friday that it has revoked its approval of the 2016 merger agreement between Charter Communications, Inc. and Time Warner Cable, Inc. because:

"... Charter, doing business as Spectrum has — through word and deed — made clear that it has no intention of providing the public benefits upon which the Commission's earlier [merger] approval was conditioned. In addition, the Commission directed Commission counsel to bring an enforcement action in State Supreme Court to seek additional penalties for Charter's past failures and ongoing non-compliance..."

Charter, the largest cable provider in the State, provides digital cable television, broadband internet and VoIP telephone services to more than two million subscribers in more than 1,150 communities. It provides services to consumers in Buffalo, Rochester, Syracuse, Albany and four boroughs in New York City: Manhattan, Staten Island, Queens and Brooklyn. The planned expansion could have increased that to five million subscribers in the state.

Charter provides services in 41 states: Alabama, Arizona, California, Colorado, Connecticut, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Massachusetts, Michigan, Minnesota, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, Wisconsin, and Wyoming.

The NYPSC, a unit of the Department of Public Service, describes its mission as ensuring "affordable, safe, secure, and reliable access to electric, gas, steam, telecommunications, and water services for New York State’s residential and business consumers, while protecting the natural environment." Its announcement listed Spectrum's failures and non-compliance:

"1. The company’s repeated failures to meet deadlines;
2. Charter’s attempts to skirt obligations to serve rural communities;
3. Unsafe practices in the field;
4. Its failure to fully commit to its obligations under the 2016 merger agreement; and
5. The company’s purposeful obfuscation of its performance and compliance obligations to the Commission and its customers."

The announcement provided details:

"On Jan. 8, 2016, the Commission approved Charter’s acquisition of Time Warner. To obtain approval, Charter agreed to a number of conditions required by the Commission to advance the public interest, including delivering broadband speed upgrades to 100 Mbps statewide by the end of 2018, and 300 Mbps by the end of 2019, and building out its network to pass an additional 145,000 un-served or under-served homes and businesses in the State's less densely populated areas within four years... Despite missing every network expansion target since the merger was approved in 2016, Charter has falsely claimed in advertisements it is exceeding its commitments to the State and is on track to deliver its network expansion. This led to the NYPSC’s general counsel referring a false advertising claim to the Attorney General’s office for enforcement... By its own admission, Charter has failed to meet its commitment to expand its service network... Its failure to meet its June 18, 2018 target by more than 40 percent is only the most recent example. Rather than accept responsibility Charter has tried to pass the blame for its failure on other companies, such as utility pole owners..."

The NYPSC has already levied $3 million in fines against Charter. The latest action basically boots Charter out of the State:

"Charter is ordered to file within 60 days a plan with the Commission to ensure an orderly transition to a successor provider(s). During the transition process, Charter must continue to comply with all local franchises it holds in New York State and all obligations under the Public Service Law and the NYPSC regulations. Charter must ensure no interruption in service is experienced by customers, and, in the event that Charter does not do so, the NYPSC will take further steps..."

Of course, executives at Charter have a different view of the situation. NBC New York reported:

"In the weeks leading up to an election, rhetoric often becomes politically charged. But the fact is that Spectrum has extended the reach of our advanced broadband network to more than 86,000 New York homes and businesses since our merger agreement with the PSC. Our 11,000 diverse and locally based workers, who serve millions of customers in the state every day, remain focused on delivering faster and better broadband to more New Yorkers, as we promised..."


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, the facial recognition software by Amazon, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes aside about politicians, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

[Image: list of the 28 congressional legislators misidentified by Amazon Rekognition in the ACLU study.] With 535 members of Congress, the implied error rate was 5.23 percent (28 divided by 535). On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer at Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.1 These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon performed on Rekognition, with details by race, gender, and age,
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition,
  • The list of all law enforcement agencies which have used or currently use Rekognition,
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices,
  • The protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons,
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA),
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses,
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. " Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troubling. Amazon may recommend a 95 percent similarity threshold, but the public does not know whether police departments actually use the higher setting, or use it consistently across all types of criminal investigations. Plus, the CPT report cast doubt on the human "backstop" intervention on which Amazon's reply seems to rely heavily.
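For context, the threshold in question is a single parameter in the Rekognition API; a caller who omits it gets the 80 percent default that the A.C.L.U. used. Below is a minimal sketch using the AWS boto3 client (the image file names and region are hypothetical):

    import boto3

    # Standard AWS SDK client; the region is illustrative.
    client = boto3.client("rekognition", region_name="us-east-1")

    with open("probe.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            # Omit this parameter and Rekognition defaults to 80 percent,
            # the setting the A.C.L.U. used. Amazon recommends 95 for police.
            SimilarityThreshold=95.0,
        )

    for match in response["FaceMatches"]:
        print(f"Similarity: {match['Similarity']:.1f}%")

Nothing in the API enforces the stricter setting; it is purely the caller's choice, which is exactly why "the public does not know" matters.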

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

[Editor's note: today's guest post, by reporters at ProPublica, explores privacy and data collection issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

To an outsider, the fancy booths at last month’s health insurance industry gathering in San Diego aren’t very compelling. A handful of companies pitching “lifestyle” data and salespeople touting jargony phrases like “social determinants of health.”

But dig deeper and the implications of what they’re selling might give many patients pause: A future in which everything you do — the things you buy, the food you eat, the time you spend watching TV — may help determine how much you pay for health insurance.

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

Patient advocates warn that using unverified, error-prone “lifestyle” data to make medical assumptions could lead insurers to improperly price plans — for instance raising rates based on false information — or discriminate against anyone tagged as high cost. And, they say, the use of the data raises thorny questions that should be debated publicly, such as: Should a person’s rates be raised because algorithms say they are more likely to run up medical bills? Such questions would be moot in Europe, where a strict law took effect in May that bans trading in personal data.

This year, ProPublica and NPR are investigating the various tactics the health insurance industry uses to maximize its profits. Understanding these strategies is important because patients — through taxes, cash payments and insurance premiums — are the ones funding the entire health care system. Yet the industry’s bewildering web of strategies and inside deals often has little to do with patients’ needs. As the series’ first story showed, contrary to popular belief, lower bills aren’t health insurers’ top priority.

Inside the San Diego Convention Center last month, there were few qualms about the way insurance companies were mining Americans’ lives for information — or what they planned to do with the data.

The sprawling convention center was a balmy draw for one of America’s Health Insurance Plans’ marquee gatherings. Insurance executives and managers wandered through the exhibit hall, sampling chocolate-covered strawberries, champagne and other delectables designed to encourage deal-making.

Up front, the prime real estate belonged to the big guns in health data: The booths of Optum, IBM Watson Health and LexisNexis stretched toward the ceiling, with flat screen monitors and some comfy seating. (NPR collaborates with IBM Watson Health on national polls about consumer health topics.)

To understand the scope of what they were offering, consider Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. (UnitedHealth Group provides financial support to NPR.) The company says it uses the information to link patients’ medical outcomes and costs to details like their level of education, net worth, family structure and race. An Optum spokesman said the socioeconomic data is de-identified and is not used for pricing health plans.

Optum’s marketing materials also boast that it now has access to even more. In 2016, the company filed a patent application to gather what people share on platforms like Facebook and Twitter, and link this material to the person’s clinical and payment information. A company spokesman said in an email that the patent application never went anywhere. But the company’s current marketing materials say it combines claims and clinical information with social media interactions.

I had a lot of questions about this and first reached out to Optum in May, but the company didn’t connect me with any of its experts as promised. At the conference, Optum salespeople said they weren’t allowed to talk to me about how the company uses this information.

It isn’t hard to understand the appeal of all this data to insurers. Merging information from data brokers with people’s clinical and payment records is a no-brainer if you overlook potential patient concerns. Electronic medical records now make it easy for insurers to analyze massive amounts of information and combine it with the personal details scooped up by data brokers.

It also makes sense given the shifts in how providers are getting paid. Doctors and hospitals have typically been paid based on the quantity of care they provide. But the industry is moving toward paying them in lump sums for caring for a patient, or for an event, like a knee surgery. In those cases, the medical providers can profit more when patients stay healthy. More money at stake means more interest in the social factors that might affect a patient’s health.

Some insurance companies are already using socioeconomic data to help patients get appropriate care, such as programs to help patients with chronic diseases stay healthy. Studies show social and economic aspects of people’s lives play an important role in their health. Knowing these personal details can help insurers identify those who may need help paying for medication or getting to the doctor.

But patient advocates are skeptical health insurers have altruistic designs on people’s personal information.

The industry has a history of boosting profits by signing up healthy people and finding ways to avoid sick people — called “cherry-picking” and “lemon-dropping,” experts say. Among the classic examples: A company was accused of putting its enrollment office on the third floor of a building without an elevator, so only healthy patients could make the trek to sign up. Another tried to appeal to spry seniors by holding square dances.

The Affordable Care Act prohibits insurers from denying people coverage based on pre-existing health conditions or charging sick people more for individual or small group plans. But experts said patients’ personal information could still be used for marketing, and to assess risks and determine the prices of certain plans. And the Trump administration is promoting short-term health plans, which do allow insurers to deny coverage to sick patients.

Robert Greenwald, faculty director of Harvard Law School’s Center for Health Law and Policy Innovation, said insurance companies still cherry-pick, but now they’re subtler. The center analyzes health insurance plans to see if they discriminate. He said insurers will do things like failing to include enough information about which drugs a plan covers — which pushes sick people who need specific medications elsewhere. Or they may change the things a plan covers, or how much a patient has to pay for a type of care, after a patient has enrolled. Or, Greenwald added, they might exclude or limit certain types of providers from their networks — like those who have skill caring for patients with HIV or hepatitis C.

If there were concerns that personal data might be used to cherry-pick or lemon-drop, they weren’t raised at the conference.

At the IBM Watson Health booth, Kevin Ruane, a senior consulting scientist, told me that the company surveys 80,000 Americans a year to assess lifestyle, attitudes and behaviors that could relate to health care. Participants are asked whether they trust their doctor, have financial problems, go online, or own a Fitbit, among similar questions. The responses of hundreds of adjacent households are analyzed together to identify social and economic factors for an area.

Ruane said he has used IBM Watson Health’s socioeconomic analysis to help insurance companies assess a potential market. The ACA increased the value of such assessments, experts say, because companies often don’t know the medical history of people seeking coverage. A region with too many sick people, or with patients who don’t take care of themselves, might not be worth the risk.

Ruane acknowledged that the information his company gathers may not be accurate for every person. “We talk to our clients and tell them to be careful about this,” he said. “Use it as a data insight. But it’s not necessarily a fact.”

In a separate conversation, a salesman from a different company joked about the potential for error. “God forbid you live on the wrong street these days,” he said. “You’re going to get lumped in with a lot of bad things.”

The LexisNexis booth was emblazoned with the slogan “Data. Insight. Action.” The company said it uses 442 non-medical personal attributes to predict a person’s medical costs. Its cache includes more than 78 billion records from more than 10,000 public and proprietary sources, including people’s cellphone numbers, criminal records, bankruptcies, property records, neighborhood safety and more. The information is used to predict patients’ health risks and costs in eight areas, including how often they are likely to visit emergency rooms, their total cost, their pharmacy costs, their motivation to stay healthy and their stress levels.

People who downsize their homes tend to have higher health care costs, the company says. As do those whose parents didn’t finish high school. Patients who own more valuable homes are less likely to land back in the hospital within 30 days of their discharge. The company says it has validated its scores against insurance claims and clinical data. But it won’t share its methods and hasn’t published the work in peer-reviewed journals.

McCulley, LexisNexis’ director of strategic solutions, said predictions made by the algorithms about patients are based on the combination of the personal attributes. He gave a hypothetical example: A high school dropout who had a recent income loss and doesn’t have a relative nearby might have higher than expected health costs.

But couldn’t that same type of person be healthy? I asked.

“Sure,” McCulley said, with no apparent dismay at the possibility that the predictions could be wrong.
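McCulley’s hypothetical hints at the mechanics: each attribute carries a weight, and the combination drives the prediction. A toy sketch of that kind of scoring follows; the attributes and weights are invented for illustration and are not LexisNexis’s actual model:

    # Toy illustration of attribute-combination risk scoring. The attributes
    # and weights are invented; LexisNexis does not publish its model.
    WEIGHTS = {
        "no_high_school_diploma": 1.5,
        "recent_income_loss": 2.0,
        "no_relative_nearby": 1.0,
        "recent_home_downsize": 0.8,
    }

    def risk_score(person):
        """Sum the weights of whichever attributes apply to this person."""
        return sum(weight for attr, weight in WEIGHTS.items() if person.get(attr))

    # McCulley's hypothetical: a dropout with a recent income loss and no
    # relative nearby scores 4.5 and would be flagged as likely high cost.
    print(risk_score({
        "no_high_school_diploma": True,
        "recent_income_loss": True,
        "no_relative_nearby": True,
    }))

The score encodes a correlation across a population, not a fact about any individual; that gap is precisely what McCulley’s “Sure” concedes.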

McCulley and others at LexisNexis insist the scores are only used to help patients get the care they need and not to determine how much someone would pay for their health insurance. The company cited three different federal laws that restricted them and their clients from using the scores in that way. But privacy experts said none of the laws cited by the company bar the practice. The company backed off the assertions when I pointed out that the laws did not seem to apply.

LexisNexis officials also said the company’s contracts expressly prohibit using the analysis to help price insurance plans. They would not provide a contract. But I knew that in at least one instance a company was already testing whether the scores could be used as a pricing tool.

Before the conference, I’d seen a press release announcing that the largest health actuarial firm in the world, Milliman, was now using the LexisNexis scores. I tracked down Marcos Dachary, who works in business development for Milliman. Actuaries calculate health care risks and help set the price of premiums for insurers. I asked Dachary if Milliman was using the LexisNexis scores to price health plans and he said: “There could be an opportunity.”

The scores could allow an insurance company to assess the risks posed by individual patients and make adjustments to protect themselves from losses, he said. For example, he said, the company could raise premiums, or revise contracts with providers.

It’s too early to tell whether the LexisNexis scores will actually be useful for pricing, he said. But he was excited about the possibilities. “One thing about social determinants data — it piques your mind,” he said.

Dachary acknowledged the scores could also be used to discriminate. Others, he said, have raised that concern. As much as there could be positive potential, he said, “there could also be negative potential.”

It’s that negative potential that still bothers data analyst Erin Kaufman, who left the health insurance industry in January. The 35-year-old from Atlanta had earned her doctorate in public health because she wanted to help people, but one day at Aetna, her boss told her to work with a new data set.

To her surprise, the company had obtained personal information from a data broker on millions of Americans. The data contained each person’s habits and hobbies, like whether they owned a gun, and if so, what type, she said. It included whether they had magazine subscriptions, liked to ride bikes or run marathons. It had hundreds of personal details about each person.

The Aetna data team merged the data with the information it had on patients it insured. The goal was to see how people’s personal interests and hobbies might relate to their health care costs. But Kaufman said it felt wrong: The information about the people who knitted or crocheted made her think of her grandmother. And the details about individuals who liked camping made her think of herself. What business did the insurance company have looking at this information? “It was a dataset that really dug into our clients’ lives,” she said. “No one gave anyone permission to do this.”

In a statement, Aetna said it uses consumer marketing information to supplement its claims and clinical information. The combined data helps predict the risk of repeat emergency room visits or hospital admissions. The information is used to reach out to members and help them and plays no role in pricing plans or underwriting, the statement said.

Kaufman said she had concerns about the accuracy of drawing inferences about an individual’s health from an analysis of a group of people with similar traits. Health scores generated from arrest records, home ownership and similar material may be wrong, she said.

Pam Dixon, executive director of the World Privacy Forum, a nonprofit that advocates for privacy in the digital age, shares Kaufman’s concerns. She points to a study by the analytics company SAS, which worked in 2012 with an unnamed major health insurance company to predict a person’s health care costs using 1,500 data elements, including the investments and types of cars people owned.

The SAS study said higher health care costs could be predicted by looking at things like ethnicity, watching TV and mail order purchases.

“I find that enormously offensive as a list,” Dixon said. “This is not health data. This is inferred data.”

Data scientist Cathy O’Neil said drawing conclusions about health risks from such data could lead to a bias against some poor people. It would be easy to infer they are prone to costly illnesses based on their backgrounds and living conditions, said O’Neil, author of the book “Weapons of Math Destruction,” which looked at how algorithms can increase inequality. That could lead to poor people being charged more, making it harder for them to get the care they need, she said. Employers, she said, could even decide not to hire people with data points that could indicate high medical costs in the future.

O’Neil said the companies should also measure how the scores might discriminate against the poor, sick or minorities.

American policymakers could do more to protect people’s information, experts said. In the United States, companies can harvest personal data unless a specific law bans it, although California just passed legislation that could create restrictions, said William McGeveran, a professor at the University of Minnesota Law School. Europe, in contrast, passed a strict law called the General Data Protection Regulation, which went into effect in May.

“In Europe, data protection is a constitutional right,” McGeveran said.

Pasquale, the University of Maryland law professor, said health scores should be treated like credit scores. Federal law gives people the right to know their credit scores and how they’re calculated. If people are going to be rated by whether they listen to sad songs on Spotify or look up information about AIDS online, they should know, Pasquale said. “The risk of improper use is extremely high. And data scores are not properly vetted and validated and available for scrutiny.”

As I reported this story I wondered how the data vendors might be using my personal information to score my potential health costs. So, I filled out a request on the LexisNexis website for the company to send me some of the personal information it has on me. A week later a somewhat creepy, 182-page walk down memory lane arrived in the mail. Federal law only requires the company to provide a subset of the information it collected about me. So that’s all I got.

LexisNexis had captured details about my life going back 25 years, many that I’d forgotten. It had my phone numbers going back decades and my home addresses going back to my childhood in Golden, Colorado. Each location had a field to show whether the address was “high risk.” Mine were all blank. The company also collects records of any liens and criminal activity, which, thankfully, I didn’t have.

My report was boring, which isn’t a surprise. I’ve lived a middle-class life and grown up in good neighborhoods. But it made me wonder: What if I had lived in “high risk” neighborhoods? Could that ever be used by insurers to jack up my rates — or to avoid me altogether?

I wanted to see more. If LexisNexis had health risk scores on me, I wanted to see how they were calculated and, more importantly, whether they were accurate. But the company told me that if it had calculated my scores it would have done so on behalf of their client, my insurance company. So, I couldn’t have them.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


European Regulators Fine Google $5 Billion For 'Breaching EU Antitrust Rules'

On Wednesday, European anti-trust regulators fined Google 4.34 billion euros (U.S. $5 billion) and ordered the tech company to stop using its Android operating system software to block competition. ComputerWorld reported:

"The European Commission found that Google has abused its dominant market position in three ways: tying access to the Play store to installation of Google Search and Google Chrome; paying phone makers and network operators to exclusively install Google Search, and preventing manufacturers from making devices running forks of Android... Google won't let smartphone manufacturers install Play on their phones unless they also make its search engine and Chrome browser the defaults on their phones. In addition, they must only use a Google-approved version of Android. This has prevented companies like Amazon.com, which developed a fork of Android it calls FireOS, from persuading big-name manufacturers to produce phones running its OS or connecting to its app store..."

Reportedly, less than 10% of Android phone users download a different browser than the pre-installed default. Less than 1% use a different search app. View the archive of European Commission Android OS documents.

Yesterday, the European Commission announced on social media:

European Commission tweet: Android OS restrictions graphic.

European Commission tweet: comments from Commissioner Margrethe Vestager.

And, The Guardian newspaper reported:

"Soon after Brussels handed down its verdict, Google announced it would appeal. "Android has created more choice for everyone, not less," a Google spokesperson said... Google has 90 days to end its "illegal conduct" or its parent company Alphabet could be hit with fines amounting to 5% of its daily [revenues] for each day it fails to comply. Wednesday’s verdict ends a 39-month investigation by the European commission’s competition authorities into Google’s Android operating system but it is only one part of an eight-year battle between Brussels and the tech giant."

According to the Reuters news service, a third EU case against Google, involving accusations that the tech company's AdSense advertising service blocks users from displaying search ads from competitors, is still ongoing.


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the use of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about retail stores tracking shoppers via WiFi connections. By 2018, smart mannequins had appeared in the healthcare industry, too.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows the users it identifies to accept or decline a name label for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so that it can perform the matching while suppressing name labels for users who have declined or disabled them. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
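The "codes" described above are numeric templates; matching means comparing a new face's template against every stored, named template. A generic sketch of the technique (not Facebook's actual system; the vectors and threshold here are made up):

    import numpy as np

    def cosine_similarity(a, b):
        """Similarity of two face templates; 1.0 means identical direction."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical database of named templates. In a real system, each vector
    # is produced by a neural network from a user's tagged photos.
    named_templates = {
        "alice": np.array([0.11, 0.93, 0.42]),
        "bob": np.array([0.85, 0.10, 0.37]),
    }

    def identify(unknown, threshold=0.95):
        """Return the best-matching name, or None if nothing clears the threshold."""
        best_name, best_score = max(
            ((name, cosine_similarity(unknown, vec))
             for name, vec in named_templates.items()),
            key=lambda pair: pair[1],
        )
        return best_name if best_score >= threshold else None

Note what the structure implies: the named database must exist, and must be searched, before the system can decide whether a given user opted out of name labels. That is the privacy concern in a nutshell.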

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in 2017 Apple launched its iPhone X with the Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent for shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


New Jersey to Suspend Prominent Psychologist for Failing to Protect Patient Privacy

[Editor's note: today's guest blog post, by reporters at ProPublica, explores privacy issues within the healthcare industry. The post is reprinted with permission.]

By Charles Ornstein, ProPublica

A prominent New Jersey psychologist is facing the suspension of his license after state officials concluded that he failed to keep details of mental health diagnoses and treatments confidential when he sued his patients over unpaid bills.

The state Board of Psychological Examiners last month upheld a decision by an administrative law judge that the psychologist, Barry Helfmann, “did not take reasonable measures to protect the confidentiality of his patients’ protected health information,” Lisa Coryell, a spokeswoman for the state attorney general’s office, said in an e-mail.

The administrative law judge recommended that Helfmann pay a fine and a share of the investigative costs. The board went further, ordering that Helfmann’s license be suspended for two years, Coryell wrote. During the first year, he will not be able to practice; during the second, he can practice, but only under supervision. Helfmann also will have to pay a $10,000 civil penalty, take an ethics course and reimburse the state for some of its investigative costs. The suspension is scheduled to begin in September.

New Jersey began to investigate Helfmann after a ProPublica article, published in The New York Times in December 2015, described the lawsuits and the information they contained. The allegations involved Helfmann’s patients as well as those of his colleagues at Short Hills Associates in Clinical Psychology, a New Jersey practice where he has been the managing partner.

Helfmann is a leader in his field, serving as president of the American Group Psychotherapy Association, and as a past president of the New Jersey Psychological Association.

ProPublica identified 24 court cases filed by Short Hills Associates from 2010 to 2014 over unpaid bills in which patients’ names, diagnoses and treatments were listed in documents. The defendants included lawyers, business people and a manager at a nonprofit. In cases involving patients who were minors, the lawsuits included children’s names and diagnoses.

The information was subsequently redacted from court records after a patient counter-sued Helfmann and his partners, the psychology group and the practice’s debt collection lawyers. The patient’s lawsuit was settled.

Helfmann has denied wrongdoing, saying his former debt collection lawyers were responsible for attaching patients’ information to the lawsuits. His current lawyer, Scott Piekarsky, said he intends to file an immediate appeal before the discipline takes effect.

"The discipline imposed is ‘so disproportionate as to be shocking to one’s sense of fairness’ under New Jersey case law," Piekarsky said in a statement.

Piekarsky also noted that the administrative law judge who heard the case found no need for any license suspension and raised questions about the credibility of the patient who sued Helfmann. "We feel this is a political decision due to Dr. Helfmann’s aggressive stance" in litigation, he said.

Helfmann sued the state of New Jersey and Joan Gelber, a senior deputy attorney general, claiming that he was not provided due process and equal protection under the law. He and Short Hills Associates sued his prior debt collection firm for legal malpractice. Those cases have been dismissed, though Helfmann has appealed.

Helfmann and Short Hills Associates also are suing the patient who sued him, as well as the man’s lawyer, claiming the patient and lawyer violated a confidential settlement agreement by talking to a ProPublica reporter and sharing information with a lawyer for the New Jersey attorney general’s office without providing advance notice. In court pleadings, the patient and his lawyer maintain that they did not breach the agreement. Helfmann brought all three of these lawsuits in state court in Union County.

Throughout his career, Helfmann has been an advocate for patient privacy, helping to push a state law limiting the information an insurance company can seek from a psychologist to determine the medical necessity of treatment. He also was a plaintiff in a lawsuit against two insurance companies and a New Jersey state commission, accusing them of requiring psychologists to turn over their treatment notes in order to get paid.

"It is apparent that upholding the ethical standards of his profession was very important to him," Carol Cohen, the administrative law judge, wrote. "Having said that, it appears that in the case of the information released to his attorney and eventually put into court papers, the respondent did not use due diligence in being sure that confidential information was not released and his patients were protected."


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Federal Investigation Into Facebook Widens. Company Stock Price Drops

The Boston Globe reported on Tuesday (links added):

"A federal investigation into Facebook’s sharing of data with political consultancy Cambridge Analytica has broadened to focus on the actions and statements of the tech giant and now involves three agencies, including the Securities and Exchange Commission, according to people familiar with the official inquiries.

Representatives for the FBI, the SEC, and the Federal Trade Commission have joined the Justice Department in its inquiries about the two companies and the sharing of personal information of 71 million Americans... The Justice Department and the other federal agencies declined to comment. The FTC in March disclosed that it was investigating Facebook over possible privacy violations..."

About 87 million persons worldwide were affected by the Facebook breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement against tech companies, like Google and Facebook.

After news broke about the wider probe, shares of Facebook stock lost as much as 18 percent of their value before recovering to a net drop of about 2 percent. With Facebook's market capitalization near $600 billion, that 2 percent drop represents roughly $12 billion in valuation. Clearly, there will be more news (and stock price fluctuations) to come.

During the last few months, there has been plenty of news about Facebook.


Adidas Announced A 'Potential' Data Breach Affecting Online Shoppers in the United States

Adidas announced on June 28 a "potential" data breach affecting an undisclosed number of:

"... consumers who purchased on adidas.com/US... On June 26, Adidas became aware that an unauthorized party claims to have acquired limited data associated with certain Adidas consumers. Adidas is committed to the privacy and security of its consumers' personal data. Adidas immediately began taking steps to determine the scope of the issue and to alert relevant consumers. adidas is working with leading data security firms and law enforcement authorities to investigate the issue..."

The preliminary breach investigation found that contact information, usernames, and encrypted passwords were exposed or stolen. So far, no credit card or fitness information of consumers was "impacted." The company said it is continuing a forensic review and alerting affected customers.

While the company's breach announcement did not disclose the number of affected customers, CBS News reported that hackers may have stolen data about millions of customers. Fox Business reported that the Adidas:

"... hack was reported weeks after Under Armour’s health and fitness app suffered a security breach, which exposed the personal data of roughly 150 million users. The revealed information included the usernames, hashed passwords and email addresses of MyFitnessPal users."

It is critical to remember that this June 28th announcement was based upon a preliminary investigation. A completed breach investigation will hopefully determine and disclose any additional data elements exposed (or stolen), how the hackers penetrated the company's computer systems, which systems were penetrated, whether any internal databases were damaged/corrupted/altered, the total number of customers affected, specific fixes implemented so this type of breach doesn't happen again, and descriptive information about the cyber criminals.

This incident is also a reminder to consumers to never reuse the same password at several online sites. Cyber criminals are persistent, and will try a stolen password at several sites to see where else they can get in. It is no relief that the stolen passwords were encrypted, because we don't yet know whether the encryption keys were also stolen (which would make it easy for the hackers to decrypt the passwords). Not good.
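Adidas has not said what "encrypted" means here. The standard defense is not reversible encryption at all, but a salted, deliberately slow hash, so that a stolen database cannot simply be unlocked with a stolen key. A minimal sketch of that approach:

    import hashlib
    import hmac
    import secrets

    def hash_password(password):
        """Return (salt, digest). scrypt is deliberately slow to brute-force."""
        salt = secrets.token_bytes(16)  # unique per user, stored with the hash
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password, salt, digest):
        """Recompute the hash and compare in constant time."""
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

If passwords were stored this way, thieves would have to guess each one individually; if they were reversibly encrypted and the key leaked too, every password would fall at once.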

We also don't yet know what "contact information" means. That could be first name, last name, street address, e-mail address, phone numbers (landline or mobile), or some combination. If e-mail addresses were stolen, then breach victims could also experience phishing attacks where fraudsters try to trick victims into revealing bank account numbers, sign-in credentials, and other sensitive information.

If you received a breach notice from Adidas, please share it below while removing any sensitive, identifying information.


Facebook’s Screening for Political Ads Nabs News Sites Instead of Politicians

[Editor's note: today's post, by reporters at ProPublica, discusses new advertising rules at the Facebook.com social networking service. It is reprinted with permission.]

By Jeremy B. Merrill and Ariana Tobin, ProPublica

One ad couldn’t have been more obviously political. Targeted to people aged 18 and older, it urged them to “vote YES” on June 5 on a ballot proposition to issue bonds for schools in a district near San Francisco. Yet it showed up in users’ news feeds without the “paid for by” disclaimer required for political ads under Facebook’s new policy designed to prevent a repeat of Russian meddling in the 2016 presidential election. Nor does it appear, as it should, in Facebook’s new archive of political ads.

The other ad was from The Hechinger Report, a nonprofit news outlet, promoting one of its articles about financial aid for college students. Yet Facebook’s screening system flagged it as political. For the ad to run, The Hechinger Report would have to undergo the multi-step authorization and authentication process of submitting Social Security numbers and identification that Facebook now requires for anyone running “electoral ads” or “issue ads.”

When The Hechinger Report appealed, Facebook acknowledged that its system should have allowed the ad to run. But Facebook then blocked another ad from The Hechinger Report, about an article headlined, “DACA students persevere, enrolling at, remaining in, and graduating from college.” This time, Facebook rejected The Hechinger Report’s appeal, maintaining that the text or imagery was political.

As these examples suggest, Facebook’s new screening policies to deter manipulation of political ads are creating their own problems. The company’s human reviewers and software algorithms are catching paid posts from legitimate news organizations that mention issues or candidates, while overlooking straightforwardly political posts from candidates and advocacy groups. Participants in ProPublica’s Facebook Political Ad Collector project have submitted 40 ads that should have carried disclaimers under the social network’s policy, but didn’t. Facebook may have underestimated the difficulty of distinguishing between political messages and political news coverage — and the consternation that failing to do so would stir among news organizations.

The rules require anyone running ads that mention candidates for public office, are about elections, or that discuss any of 20 “national issues of public importance” to verify their personal Facebook accounts and add a "paid for by" disclosure to their ads, which are to be preserved in a public archive for seven years. Advertisers who don’t comply will have their ads taken down until they undergo an "authorization" process, submitting a Social Security number, driver’s license photo, and home address, to which Facebook sends a letter with a code to confirm that anyone running ads about American political issues has an American home address. The complication is that the 20 hot-button issues — environment, guns, immigration, values, foreign policy, civil rights and the like — are likely to pop up in posts from news organizations as well.

"This could be really confusing to consumers because it’s labeling news content as political ad content," said Stefanie Murray, director of the Center for Cooperative Media at Montclair State University.

The Hechinger Report joined trade organizations representing thousands of publishers earlier this month in protesting this policy, arguing that the filter lumps their stories in with the very organizations and issues they are covering, thus confusing readers already wary of "fake news." Some publishers — including larger outlets like New York Media, which owns New York Magazine — have stopped buying ads on political content they expect would be subject to Facebook’s ad archive disclosure requirement.

"When it comes to news, Facebook still doesn’t get it. In its efforts to clear up one bad mess, it seems set on joining those who want blur the line between reality-based journalism and propaganda," Mark Thompson, chief executive officer of The New York Times, said in prepared remarks at the Open Markets Institute on Tuesday, June 12th.

In a statement Wednesday June 13th, Campbell Brown, Facebook’s head of global news partnerships, said the company recognized "that news content was different from political and issue advertising," and promised to create a "differentiated space within our archive to separate news content from political and issue ads." But Brown rejected the publishers’ request for a "whitelist" of legitimate news organizations whose ads would not be considered political.

"Removing an entire group of advertisers, in this case publishers, would go against our transparency efforts and the work we’re doing to shore up election integrity on Facebook," she wrote."“We don’t want to be in a position where a bad actor obfuscates its identity by claiming to be a news publisher." Many of the foreign agents that bought ads to sway the 2016 presidential election, the company has said, posed as journalistic outlets.

Her response didn’t satisfy news organizations. Facebook "continues to characterize professional news and opinion as ‘advertising’ — which is both misguided and dangerous," said David Chavern, chief executive of the News Media Alliance — a trade association representing 2,000 news organizations in the U.S. and Canada — and co-author of an open letter to Facebook on June 11.

ProPublica asked Facebook to explain its decision to block 14 advertisements shared with us by news outlets. Of those, 12 were ultimately rejected as political content, one was overturned on appeal, and one Facebook could not locate in its records. Most of these publications, including The Hechinger Report, are affiliated with the Institute for Nonprofit News, a consortium of mostly small nonprofit newsrooms that produce primarily investigative journalism (ProPublica is a member).

Here are a few examples of news organization ads that were rejected as political:

  • Voice of Monterey Bay tried to boost an interview with labor leader Dolores Huerta headlined "She Still Can." After the ad ran for about a day, Facebook sent an alert that the ad had been turned off. The outlet is refusing to seek approval for political ads, “since we are a news organization,” said Julie Martinez, co-founder of the nonprofit news site.
  • Ensia tried to advertise an article headlined: "Opinion: We need to talk about how logging in the Southern U.S. is harming local residents." It was rejected as political. Ensia will not appeal or buy new ads until Facebook addresses the issue, said senior editor David Doody.
  • inewsource tried to promote a post about a local candidate, headlined: "Scott Peters’ Plea to Get San Diego Unified Homeless Funding Rejected." The ad was rejected as political. inewsource appealed successfully, but then Facebook changed its mind and rejected it again, a spokeswoman for the social network said.
  • BirminghamWatch tried to boost a post about a story headlined, "‘That is Crazy:’ 17 Steps to Cutting Checks for Birmingham Neighborhood Projects." The ad was rejected as political and rejected again on appeal. A little while later, BirminghamWatch’s advertiser on the account received a message from Facebook: "Finish boosting your post for $15, up to 15,000 people will see it in NewsFeed and it can get more likes, comments, and shares." The nonprofit news site appealed again, and the ad was rejected again.

For most of its history, Facebook treated political ads like any other ads. Last October, a month after disclosing that "inauthentic accounts… operated out of Russia" had spent $100,000 on 3,000 ads that "appeared to focus on amplifying divisive social and political messages," the company announced it would implement new rules for election ads. Then in April, it said the rules would also apply to issue-related ads.

The policy took effect last month, at a time when Facebook’s relationship with the news industry was already rocky. A recent algorithm change reduced the number of posts from news organizations that users see in their news feed, thus decreasing the amount of traffic many media outlets can bring in without paying for wider exposure, and frustrating publishers who had come to rely on Facebook as a way to reach a broader audience.

Facebook has pledged to assign 3,000-4,000 "content moderators" to monitor political ads, but hasn’t reached that staffing level yet. The company told ProPublica that it is committed to meeting the goal by the U.S. midterm elections this fall.

To ward off "bad actors who try to game our enforcement system," Facebook has kept secret its specific parameters and keywords for determining if an ad is political. It has published only the list of 20 national issues, which it says is based in part on a data-coding system developed by a network of political scientists called the Comparative Agendas Project. A director on that project, Frank Baumgartner, said the lack of transparency is problematic.

"I think [filtering for political speech] is a puzzle that can be solved by algorithms and big data, but it has to be done right and the code needs to be transparent and publicly available. You can’t have proprietary algorithms determining what we see," Baumgartner said.

However Facebook’s algorithms work, they are missing overtly political ads. Incumbent members of Congress, national advocacy groups and advocates of local ballot initiatives have all run ads on Facebook without the social network’s promised transparency measures, after they were supposed to be implemented.

Ads from Senator Jeff Merkley, Democrat-Oregon, Representative Don Norcross, Democrat-New Jersey, and Representative Pramila Jayapal, Democrat-Washington, all ran without disclaimers as recently as this past Monday. So did an ad from Alliance Defending Freedom, a right-wing group that represented a Christian baker whose refusal for religious reasons to make a wedding cake for a gay couple was upheld by the Supreme Court this month. And ads from NORML, the marijuana legalization advocacy group, and MoveOn, the liberal organization, ran for weeks before being taken down.

ProPublica asked Facebook why these ads weren’t considered political. The company said it is reviewing them. "Enforcement is never perfect at launch," it said.

Clarification, June 15, 2018: This article has been updated to include more specific information about the kinds of advertising New York Media has stopped buying on Facebook’s platform.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.