
European Regulators Fine Google $5 Billion For 'Breaching EU Antitrust Rules'

On Wednesday, European antitrust regulators fined Google €4.34 billion (about $5 billion U.S.) and ordered the tech company to stop using its Android operating system software to block competition. Computerworld reported:

"The European Commission found that Google has abused its dominant market position in three ways: tying access to the Play store to installation of Google Search and Google Chrome; paying phone makers and network operators to exclusively install Google Search, and preventing manufacturers from making devices running forks of Android... Google won't let smartphone manufacturers install Play on their phones unless they also make its search engine and Chrome browser the defaults on their phones. In addition, they must only use a Google-approved version of Android. This has prevented companies like Amazon.com, which developed a fork of Android it calls FireOS, from persuading big-name manufacturers to produce phones running its OS or connecting to its app store..."

Reportedly, less than 10% of Android phone users download a different browser than the pre-installed default. Less than 1% use a different search app. View the archive of European Commission Android OS documents.

Yesterday, the European Commission announced on social media:

European Commission tweet with a graphic of the Google Android OS restrictions.

European Commission tweet with comments from Commissioner Vestager.

And, The Guardian newspaper reported:

"Soon after Brussels handed down its verdict, Google announced it would appeal. "Android has created more choice for everyone, not less," a Google spokesperson said... Google has 90 days to end its "illegal conduct" or its parent company Alphabet could be hit with fines amounting to 5% of its daily [revenues] for each day it fails to comply. Wednesday’s verdict ends a 39-month investigation by the European commission’s competition authorities into Google’s Android operating system but it is only one part of an eight-year battle between Brussels and the tech giant."

According to the Reuters news service, a third EU case against Google, involving accusations that the tech company's AdSense advertising service blocks users from displaying search ads from competitors, is still ongoing.


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the use of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about the tracking of shoppers by retail stores using WiFi connections. And by 2018, some smart mannequins were being used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows identified users to accept or decline a name label for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching and suppress name labels for users who declined or disabled them. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
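The matching process described above can be sketched in a few lines of code. This is a purely hypothetical illustration, not Facebook's actual system: it assumes faces have already been reduced to numeric "codes" (embeddings), compares an unknown face against the codes of opted-in users, and discards the data when no match clears a similarity threshold.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two facial-pattern codes (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical database of users who enabled facial recognition,
# mapping names to pre-computed facial-pattern codes ("embeddings").
named_embeddings = {
    "alice": [0.9, 0.1, 0.2],
    "bob": [0.1, 0.8, 0.3],
}

def match_face(unknown_embedding, threshold=0.95):
    """Return the best-matching name, or None if no code clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in named_embeddings.items():
        score = cosine_similarity(unknown_embedding, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    # None means: no match found, so the facial data should be deleted,
    # per the process the Facebook spokeswoman described.
    return best_name
```

The key point the sketch makes concrete: matching unknown faces requires keeping a database of named codes to match against, which is exactly why critics argue the scanning happens regardless of an individual user's setting.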

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in 2017, Apple launched its iPhone X with the Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent for shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


New Jersey to Suspend Prominent Psychologist for Failing to Protect Patient Privacy

[Editor's note: today's guest blog post, by reporters at ProPublica, explores privacy issues within the healthcare industry. The post is reprinted with permission.]

By Charles Ornstein, ProPublica

A prominent New Jersey psychologist is facing the suspension of his license after state officials concluded that he failed to keep details of mental health diagnoses and treatments confidential when he sued his patients over unpaid bills.

The state Board of Psychological Examiners last month upheld a decision by an administrative law judge that the psychologist, Barry Helfmann, “did not take reasonable measures to protect the confidentiality of his patients’ protected health information,” Lisa Coryell, a spokeswoman for the state attorney general’s office, said in an e-mail.

The administrative law judge recommended that Helfmann pay a fine and a share of the investigative costs. The board went further, ordering that Helfmann’s license be suspended for two years, Coryell wrote. During the first year, he will not be able to practice; during the second, he can practice, but only under supervision. Helfmann also will have to pay a $10,000 civil penalty, take an ethics course and reimburse the state for some of its investigative costs. The suspension is scheduled to begin in September.

New Jersey began to investigate Helfmann after a ProPublica article published in The New York Times in December 2015 that described the lawsuits and the information they contained. The allegations involved Helfmann’s patients as well as those of his colleagues at Short Hills Associates in Clinical Psychology, a New Jersey practice where he has been the managing partner.

Helfmann is a leader in his field, serving as president of the American Group Psychotherapy Association, and as a past president of the New Jersey Psychological Association.

ProPublica identified 24 court cases filed by Short Hills Associates from 2010 to 2014 over unpaid bills in which patients’ names, diagnoses and treatments were listed in documents. The defendants included lawyers, business people and a manager at a nonprofit. In cases involving patients who were minors, the lawsuits included children’s names and diagnoses.

The information was subsequently redacted from court records after a patient counter-sued Helfmann and his partners, the psychology group and the practice’s debt collection lawyers. The patient’s lawsuit was settled.

Helfmann has denied wrongdoing, saying his former debt collection lawyers were responsible for attaching patients’ information to the lawsuits. His current lawyer, Scott Piekarsky, said he intends to file an immediate appeal before the discipline takes effect.

"The discipline imposed is ‘so disproportionate as to be shocking to one’s sense of fairness’ under New Jersey case law," Piekarsky said in a statement.

Piekarsky also noted that the administrative law judge who heard the case found no need for any license suspension and raised questions about the credibility of the patient who sued Helfmann. "We feel this is a political decision due to Dr. Helfmann’s aggressive stance" in litigation, he said.

Helfmann sued the state of New Jersey and Joan Gelber, a senior deputy attorney general, claiming that he was not provided due process and equal protection under the law. He and Short Hills Associates sued his prior debt collection firm for legal malpractice. Those cases have been dismissed, though Helfmann has appealed.

Helfmann and Short Hills Associates also are suing the patient who sued him, as well as the man’s lawyer, claiming the patient and lawyer violated a confidential settlement agreement by talking to a ProPublica reporter and sharing information with a lawyer for the New Jersey attorney general’s office without providing advance notice. In court pleadings, the patient and his lawyer maintain that they did not breach the agreement. Helfmann brought all three of these lawsuits in state court in Union County.

Throughout his career, Helfmann has been an advocate for patient privacy, helping to push a state law limiting the information an insurance company can seek from a psychologist to determine the medical necessity of treatment. He also was a plaintiff in a lawsuit against two insurance companies and a New Jersey state commission, accusing them of requiring psychologists to turn over their treatment notes in order to get paid.

"It is apparent that upholding the ethical standards of his profession was very important to him," Carol Cohen, the administrative law judge, wrote. "Having said that, it appears that in the case of the information released to his attorney and eventually put into court papers, the respondent did not use due diligence in being sure that confidential information was not released and his patients were protected."


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Federal Investigation Into Facebook Widens. Company Stock Price Drops

The Boston Globe reported on Tuesday (links added):

"A federal investigation into Facebook’s sharing of data with political consultancy Cambridge Analytica has broadened to focus on the actions and statements of the tech giant and now involves three agencies, including the Securities and Exchange Commission, according to people familiar with the official inquiries.

Representatives for the FBI, the SEC, and the Federal Trade Commission have joined the Justice Department in its inquiries about the two companies and the sharing of personal information of 71 million Americans... The Justice Department and the other federal agencies declined to comment. The FTC in March disclosed that it was investigating Facebook over possible privacy violations..."

About 87 million persons were affected by the Facebook breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement on tech companies, like Google and Facebook.

After news broke about the wider probe, shares of Facebook fell about 18 percent before recovering somewhat, for a net drop of about 2 percent. That 2 percent drop represents about $12 billion in valuation. Clearly, there will be more news (and stock price fluctuations) to come.

During the last few months, there has been plenty of news about Facebook.


Adidas Announced A 'Potential' Data Breach Affecting Online Shoppers in the United States

Adidas announced on June 28 a "potential" data breach affecting an undisclosed number of:

"... consumers who purchased on adidas.com/US... On June 26, Adidas became aware that an unauthorized party claims to have acquired limited data associated with certain Adidas consumers. Adidas is committed to the privacy and security of its consumers' personal data. Adidas immediately began taking steps to determine the scope of the issue and to alert relevant consumers. adidas is working with leading data security firms and law enforcement authorities to investigate the issue..."

The preliminary breach investigation found that contact information, usernames, and encrypted passwords were exposed or stolen. So far, no credit card or fitness information of consumers was "impacted." The company said it is continuing a forensic review and alerting affected customers.

While the company's breach announcement did not disclose the number of affected customers, CBS News reported that hackers may have stolen data about millions of customers. Fox Business reported that the Adidas:

"... hack was reported weeks after Under Armour’s health and fitness app suffered a security breach, which exposed the personal data of roughly 150 million users. The revealed information included the usernames, hashed passwords and email addresses of MyFitnessPal users."

It is critical to remember that this June 28th announcement was based upon a preliminary investigation. A completed breach investigation will hopefully determine and disclose any additional data elements exposed (or stolen), how the hackers penetrated the company's computer systems, which systems were penetrated, whether any internal databases were damaged/corrupted/altered, the total number of customers affected, specific fixes implemented so this type of breach doesn't happen again, and descriptive information about the cyber criminals.

This incident is also a reminder to consumers never to reuse the same password at several online sites. Cyber criminals are persistent, and will try a stolen password at several sites to see where else they can get in. It is no relief that only encrypted passwords were stolen, because we don't yet know whether the encryption keys were also stolen (which would make it easy for the hackers to decrypt the passwords). Not good.
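Here is a brief sketch of why stolen password data endangers reused passwords. This is a generic illustration, not Adidas's actual storage scheme: once attackers hold the stored hashes (or can decrypt stored ciphertext), they can test common passwords offline at their leisure, and any cracked password then works on every other site where it was reused. Real systems should use a slow, salted hash such as bcrypt or Argon2; plain SHA-256 appears here only to keep the example self-contained.

```python
import hashlib

def hash_password(password, salt):
    """Store only this digest, never the plaintext. (SHA-256 is too fast
    for real password storage; prefer bcrypt/scrypt/Argon2.)"""
    return hashlib.sha256((salt + password).encode()).hexdigest()

# A hypothetical leaked database record: salt plus hash, no plaintext.
salt = "x9f2"
leaked_hash = hash_password("summer2018", salt)

def crack(leaked_hash, salt, guesses):
    """Offline attack: hash common passwords until one matches the leak."""
    for guess in guesses:
        if hash_password(guess, salt) == leaked_hash:
            return guess  # cracked -- now reusable on every other site
    return None

common_guesses = ["123456", "password", "summer2018"]
```

Because the attack runs entirely offline against the stolen data, no rate limiting on the website can stop it; only strong, unique passwords (and slow hashing) do.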

We also don't yet know what "contact information" means. That could be a first name, last name, phone number, street address, e-mail address, mobile phone number, or some combination. If e-mail addresses were stolen, then breach victims could also experience phishing attacks, where fraudsters try to trick victims into revealing bank account numbers, sign-in credentials, and other sensitive information.

If you received a breach notice from Adidas, please share it below while removing any sensitive, identifying information.


Facebook’s Screening for Political Ads Nabs News Sites Instead of Politicians

[Editor's note: today's post, by reporters at ProPublica, discusses new advertising rules at the Facebook.com social networking service. It is reprinted with permission.]

By Jeremy B. Merrill and Ariana Tobin, ProPublica

One ad couldn’t have been more obviously political. Targeted to people aged 18 and older, it urged them to “vote YES” on June 5 on a ballot proposition to issue bonds for schools in a district near San Francisco. Yet it showed up in users’ news feeds without the “paid for by” disclaimer required for political ads under Facebook’s new policy designed to prevent a repeat of Russian meddling in the 2016 presidential election. Nor does it appear, as it should, in Facebook’s new archive of political ads.

The other ad was from The Hechinger Report, a nonprofit news outlet, promoting one of its articles about financial aid for college students. Yet Facebook’s screening system flagged it as political. For the ad to run, The Hechinger Report would have to undergo the multi-step authorization and authentication process of submitting Social Security numbers and identification that Facebook now requires for anyone running “electoral ads” or “issue ads.”

When The Hechinger Report appealed, Facebook acknowledged that its system should have allowed the ad to run. But Facebook then blocked another ad from The Hechinger Report, about an article headlined, “DACA students persevere, enrolling at, remaining in, and graduating from college.” This time, Facebook rejected The Hechinger Report’s appeal, maintaining that the text or imagery was political.

As these examples suggest, Facebook’s new screening policies to deter manipulation of political ads are creating their own problems. The company’s human reviewers and software algorithms are catching paid posts from legitimate news organizations that mention issues or candidates, while overlooking straightforwardly political posts from candidates and advocacy groups. Participants in ProPublica’s Facebook Political Ad Collector project have submitted 40 ads that should have carried disclaimers under the social network’s policy, but didn’t. Facebook may have underestimated the difficulty of distinguishing between political messages and political news coverage — and the consternation that failing to do so would stir among news organizations.

The rules require anyone running ads that mention candidates for public office, are about elections, or discuss any of 20 “national issues of public importance” to verify their personal Facebook accounts and add a "paid for by" disclosure to their ads, which are to be preserved in a public archive for seven years. Advertisers who don’t comply will have their ads taken down until they undergo an "authorization" process: submitting a Social Security number, driver’s license photo, and home address, to which Facebook sends a letter with a code to confirm that anyone running ads about American political issues has an American home address. The complication is that the 20 hot-button issues — environment, guns, immigration, values, foreign policy, civil rights and the like — are likely to pop up in posts from news organizations as well.

"This could be really confusing to consumers because it’s labeling news content as political ad content," said Stefanie Murray, director of the Center for Cooperative Media at Montclair State University.

The Hechinger Report joined trade organizations representing thousands of publishers earlier this month in protesting this policy, arguing that the filter lumps their stories in with the very organizations and issues they are covering, thus confusing readers already wary of "fake news." Some publishers — including larger outlets like New York Media, which owns New York Magazine — have stopped buying ads on political content they expect would be subject to Facebook’s ad archive disclosure requirement.

"When it comes to news, Facebook still doesn’t get it. In its efforts to clear up one bad mess, it seems set on joining those who want to blur the line between reality-based journalism and propaganda," Mark Thompson, chief executive officer of The New York Times, said in prepared remarks at the Open Markets Institute on Tuesday, June 12th.

In a statement Wednesday June 13th, Campbell Brown, Facebook’s head of global news partnerships, said the company recognized "that news content was different from political and issue advertising," and promised to create a "differentiated space within our archive to separate news content from political and issue ads." But Brown rejected the publishers’ request for a "whitelist" of legitimate news organizations whose ads would not be considered political.

"Removing an entire group of advertisers, in this case publishers, would go against our transparency efforts and the work we’re doing to shore up election integrity on Facebook," she wrote. "We don’t want to be in a position where a bad actor obfuscates its identity by claiming to be a news publisher." Many of the foreign agents that bought ads to sway the 2016 presidential election, the company has said, posed as journalistic outlets.

Her response didn’t satisfy news organizations. Facebook "continues to characterize professional news and opinion as ‘advertising’ — which is both misguided and dangerous," said David Chavern, chief executive of the News Media Alliance — a trade association representing 2,000 news organizations in the U.S. and Canada — and co-author of an open letter to Facebook on June 11.

ProPublica asked Facebook to explain its decision to block 14 advertisements shared with us by news outlets. Of those, 12 were ultimately rejected as political content, one was overturned on appeal, and one Facebook could not locate in its records. Most of these publications, including The Hechinger Report, are affiliated with the Institute for Nonprofit News, a consortium of mostly small nonprofit newsrooms that produce primarily investigative journalism (ProPublica is a member).

Here are a few examples of news organization ads that were rejected as political:

  • Voice of Monterey Bay tried to boost an interview with labor leader Dolores Huerta headlined "She Still Can." After the ad ran for about a day, Facebook sent an alert that the ad had been turned off. The outlet is refusing to seek approval for political ads, “since we are a news organization,” said Julie Martinez, co-founder of the nonprofit news site.
  • Ensia tried to advertise an article headlined: "Opinion: We need to talk about how logging in the Southern U.S. is harming local residents." It was rejected as political. Ensia will not appeal or buy new ads until Facebook addresses the issue, said senior editor David Doody.
  • inewsource tried to promote a post about a local candidate, headlined: "Scott Peters’ Plea to Get San Diego Unified Homeless Funding Rejected." The ad was rejected as political. inewsource appealed successfully, but then Facebook changed its mind and rejected it again, a spokeswoman for the social network said.
  • BirminghamWatch tried to boost a post about a story headlined, "‘That is Crazy:’ 17 Steps to Cutting Checks for Birmingham Neighborhood Projects." The ad was rejected as political, and rejected again on appeal. A little while later, the person managing BirminghamWatch’s ad account received a message from Facebook: "Finish boosting your post for $15, up to 15,000 people will see it in NewsFeed and it can get more likes, comments, and shares." The nonprofit news site appealed again, and the ad was rejected again.

For most of its history, Facebook treated political ads like any other ads. Last October, a month after disclosing that "inauthentic accounts… operated out of Russia" had spent $100,000 on 3,000 ads that "appeared to focus on amplifying divisive social and political messages," the company announced it would implement new rules for election ads. Then in April, it said the rules would also apply to issue-related ads.

The policy took effect last month, at a time when Facebook’s relationship with the news industry was already rocky. A recent algorithm change reduced the number of posts from news organizations that users see in their news feed, thus decreasing the amount of traffic many media outlets can bring in without paying for wider exposure, and frustrating publishers who had come to rely on Facebook as a way to reach a broader audience.

Facebook has pledged to assign 3,000-4,000 "content moderators" to monitor political ads, but hasn’t reached that staffing level yet. The company told ProPublica that it is committed to meeting the goal by the U.S. midterm elections this fall.

To ward off "bad actors who try to game our enforcement system," Facebook has kept secret its specific parameters and keywords for determining if an ad is political. It has published only the list of 20 national issues, which it says is based in part on a data-coding system developed by a network of political scientists called the Comparative Agendas Project. A director on that project, Frank Baumgartner, said the lack of transparency is problematic.

"I think [filtering for political speech] is a puzzle that can be solved by algorithms and big data, but it has to be done right and the code needs to be transparent and publicly available. You can’t have proprietary algorithms determining what we see," Baumgartner said.
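Facebook's actual parameters and keywords are secret, but a naive keyword filter — sketched below as a purely hypothetical example, not Facebook's implementation — shows why such screening flags news coverage of an issue just as readily as advocacy about it:

```python
# Hypothetical keyword-based screener for "political" ad content.
# A real system would be far more elaborate, but the failure mode is
# the same: keywords cannot distinguish advocacy from news coverage.

POLITICAL_KEYWORDS = {"election", "vote", "immigration", "guns",
                      "environment", "daca", "civil", "rights"}

def looks_political(ad_text):
    """Flag ad text whose words overlap the hot-button keyword list."""
    words = {w.strip(".,:;'\"!?$").lower() for w in ad_text.split()}
    return bool(words & POLITICAL_KEYWORDS)

# An actual political ad and a news article promotion are both flagged:
advocacy = "Vote YES on the school bond election June 5"
news = "DACA students persevere, enrolling at and graduating from college"
```

Both strings trip the filter, which is precisely the complaint from publishers: a story *about* DACA is classified the same way as a campaign *for* a ballot measure. Meanwhile, a political ad that happens to avoid the keyword list sails through unflagged.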

However Facebook’s algorithms work, they are missing overtly political ads. Incumbent members of Congress, national advocacy groups and advocates of local ballot initiatives have all run ads on Facebook without the social network’s promised transparency measures, after they were supposed to be implemented.

Ads from Senator Jeff Merkley, Democrat-Oregon, Representative Don Norcross, Democrat-New Jersey, and Representative Pramila Jayapal, Democrat-Washington, all ran without disclaimers as recently as this past Monday. So did an ad from Alliance Defending Freedom, a right-wing group that represented a Christian baker whose refusal for religious reasons to make a wedding cake for a gay couple was upheld by the Supreme Court this month. And ads from NORML, the marijuana legalization advocacy group and MoveOn, the liberal organization, ran for weeks before being taken down.

ProPublica asked Facebook why these ads weren’t considered political. The company said it is reviewing them. "Enforcement is never perfect at launch," it said.

Clarification, June 15, 2018: This article has been updated to include more specific information about the kinds of advertising New York Media has stopped buying on Facebook’s platform.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


The Wireless Carrier With At Least 8 'Hidden Spy Hubs' Helping The NSA

During the late 1970s and 1980s, AT&T ran an iconic “reach out and touch someone” advertising campaign to encourage consumers to call their friends, family, and classmates. Back then, it was old school -- landlines. The campaign ranked #80 on Ad Age's list of the 100 top ad campaigns from the last century.

Now, we learn a little more about how extensive the surveillance activities at AT&T facilities are in helping law enforcement reach out and touch persons. Yesterday, The Intercept reported:

"The NSA considers AT&T to be one of its most trusted partners and has lauded the company’s “extreme willingness to help.” It is a collaboration that dates back decades. Little known, however, is that its scope is not restricted to AT&T’s customers. According to the NSA’s documents, it values AT&T not only because it "has access to information that transits the nation," but also because it maintains unique relationships with other phone and internet providers. The NSA exploits these relationships for surveillance purposes, commandeering AT&T’s massive infrastructure and using it as a platform to covertly tap into communications processed by other companies.”

The new report describes in detail the activities at eight AT&T facilities in major cities across the United States. Consumers who use other branded wireless service providers are also affected:

"Because of AT&T’s position as one of the U.S.’s leading telecommunications companies, it has a large network that is frequently used by other providers to transport their customers’ data. Companies that “peer” with AT&T include the American telecommunications giants Sprint, Cogent Communications, and Level 3, as well as foreign companies such as Sweden’s Telia, India’s Tata Communications, Italy’s Telecom Italia, and Germany’s Deutsche Telekom."

It was five years ago this month that the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, The Guardian newspaper reported on a court order allowing the NSA to spy on U.S. citizens. The revelations continued, and by 2016 we'd learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences of broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, how the data collection programs included ordinary (i.e., innocent) citizens besides legal targets, and how most hi-tech and telecommunications companies assisted the government with its spy programs. We knew before that AT&T was probably the best collaborator, and now we know more about why.

Content vacuumed up during the surveillance includes consumers' phone calls, text messages, e-mail messages, and internet activity. The latest report by The Intercept also described:

"The messages that the NSA had unlawfully collected were swept up using a method of surveillance known as “upstream,” which the agency still deploys for other surveillance programs authorized under both Section 702 of FISA and Executive Order 12333. The upstream method involves tapping into communications as they are passing across internet networks – precisely the kind of electronic eavesdropping that appears to have taken place at the eight locations identified by The Intercept."

Former NSA contractor Edward Snowden also commented on the report on Twitter.


Apple To Close Security Hole Law Enforcement Frequently Used To Access iPhones

You may remember: in 2016, the U.S. Department of Justice attempted to force Apple to build a back door into its devices so law enforcement could access suspects' iPhones. After Apple refused, the government found a vendor to do the hacking for it. In 2017, multiple espionage campaigns targeted Apple devices with new malware.

Now, we learn a future Apple operating system (iOS) software update will close a security hole frequently used by law enforcement. Reuters reported that the future iOS update will include default settings to terminate communications through the USB port when the device hasn't been unlocked within the past hour. Reportedly, that change may reduce access by 90 percent.

Kudos to the executives at Apple for keeping customers' privacy foremost.


When "Unlimited" Mobile Plans Are Anything But

My apologies to readers for the 10-day gap in blog posts. I took a few days off to attend a high school reunion in another state. Time passes more quickly than you think. It was good to renew connections with classmates.

Speaking of connections, several telecommunications companies appear either to ignore or not to know the meaning of "unlimited" for mobile internet access. 9to5Mac reported:

"Not content with offering one ‘unlimited’ plan which isn’t, and a second ‘beyond unlimited’ plan which also isn’t, Verizon has now decided the solution to this is a third plan. The latest addition is called ‘above unlimited’ and, you guessed it, it’s not... The carrier has the usual get-out clause, claiming that all three plans really are unlimited, it’s just that they reserve the right to throttle your connection speed once you hit the stated, ah, limits."

Some of these mobile plans limit video to low-resolution formats. Do you want to watch video in 2018 at resolutions that were standard in 2008 (or earlier)? I think not. Do you want your connection throttled after you reach a data download threshold? I think not.

I look forward to action by the U.S. Federal Trade Commission (FTC) to enforce the definition of "unlimited," since the "light-touch" regulatory approach by the Federal Communications Commission (FCC) means that the FCC has abandoned its duties regarding oversight of internet service providers.

Caveat emptor, or buyer beware, definitely applies. Wise consumers read the fine print before purchasing any online service.


Google To Exit Weaponized Drone Contract And Pursue Other Defense Projects

Last month, protests by current and former Google employees, plus academic researchers, cited ethical and transparency concerns with the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a drone surveillance program that uses AI to identify people and objects in video footage. Gizmodo reported that Google plans not to renew its contract for Project Maven:

"Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract... The company plans to unveil new ethical principles about its use of AI this week... Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft... Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion."


Why Your Health Insurer Doesn’t Care About Your Big Bills

[Editor's note: today's guest post, by the reporters at ProPublica, discusses pricing and insurance problems within the healthcare industry, and a resource most consumers probably are unaware of. It is reprinted with permission.]

By Marshall Allen, ProPublica

Michael Frank ran his finger down his medical bill, studying the charges and pausing in disbelief. The numbers didn’t make sense.

His recovery from a partial hip replacement had been difficult. He’d iced and elevated his leg for weeks. He’d pushed his 49-year-old body, limping and wincing, through more than a dozen physical therapy sessions.

The last thing he needed was a botched bill.

His December 2015 surgery to replace the ball in his left hip joint at NYU Langone Medical Center in New York City had been routine. One night in the hospital and no complications.

He was even supposed to get a deal on the cost. His insurance company, Aetna, had negotiated an in-network “member rate” for him. That’s the discounted price insured patients get in return for paying their premiums every month.

But Frank was startled to see that Aetna had agreed to pay NYU Langone $70,000. That’s more than three times the Medicare rate for the surgery and more than double the estimate of what other insurance companies would pay for such a procedure, according to a nonprofit that tracks prices.

Fuming, Frank reached for the phone. He couldn’t see how NYU Langone could justify these fees. And what was Aetna doing? As his insurer, wasn’t its duty to represent him, its “member”? So why had it agreed to pay a grossly inflated rate, one that stuck him with a $7,088 bill for his portion?

Frank wouldn’t be the first to wonder. The United States spends more per person on health care than any other country. A lot more. As a country, by many measures, we are not getting our money’s worth. Tens of millions remain uninsured. And millions are in financial peril: About 1 in 5 is currently being pursued by a collection agency over medical debt. Health care costs repeatedly top the list of consumers’ financial concerns.

Experts frequently blame this on the high prices charged by doctors and hospitals. But less scrutinized is the role insurance companies — the middlemen between patients and those providers — play in boosting our health care tab. Widely perceived as fierce guardians of health care dollars, insurers, in many cases, aren’t. In fact, they often agree to pay high prices, then, one way or another, pass those high prices on to patients — all while raking in healthy profits.

ProPublica and NPR are examining the bewildering, sometimes enraging ways the health insurance industry works, by taking an inside look at the games, deals and incentives that often result in higher costs, delays in care or denials of treatment. The misunderstood relationship between insurers and hospitals is a good place to start.

Today, about half of Americans get their health care benefits through their employers, who rely on insurance companies to manage the plans, restrain costs and get them fair deals.

But as Frank eventually discovered, once he’d signed on for surgery, a secretive system of pre-cut deals came into play that had little to do with charging him a reasonable fee.

After Aetna approved the in-network payment of $70,882 (not including the fees of the surgeon and anesthesiologist), Frank’s coinsurance required him to pay the hospital 10 percent of the total.
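The coinsurance arithmetic behind Frank's bill is easy to verify. A quick sketch in Python, using the figures reported in the story:

```python
negotiated_rate = 70_882   # Aetna's approved in-network payment
coinsurance_rate = 0.10    # Frank's 10 percent share under his plan

# The patient's portion is simply the coinsurance rate applied
# to the negotiated total, rounded to whole dollars.
patient_share = round(negotiated_rate * coinsurance_rate)
print(patient_share)  # 7088 -- the $7,088 portion billed to Frank
```

Note that the coinsurance applies to the negotiated rate, not the $117,000 sticker price, so a higher negotiated rate translates directly into a larger patient bill.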

When Frank called NYU Langone to question the charges, the hospital punted him to Aetna, which told him it paid the bill according to its negotiated rates. Neither Aetna nor the hospital would answer his questions about the charges.

Frank found himself in a standoff familiar to many patients. The hospital and insurance company had agreed on a price and he was required to help pay it. It’s a three-party transaction in which only two of the parties know how the totals are tallied.

Frank could have paid the bill and gotten on with his life. But he was outraged by what his insurance company agreed to pay. “As bad as NYU is,” Frank said, “Aetna is equally culpable because Aetna’s job was to be the checks and balances and to be my advocate.”

And he also knew that Aetna and NYU Langone hadn’t double-teamed an ordinary patient. In fact, if you imagined the perfect person to take on insurance companies and hospitals, it might be Frank.

For three decades, Frank has worked for insurance companies like Aetna, helping to assess how much people should pay in monthly premiums. He is a former president of the Actuarial Society of Greater New York and has taught actuarial science at Columbia University. He teaches courses for insurance regulators and has even served as an expert witness for insurance companies.

The hospital and insurance company may have expected him to shut up and pay. But Frank wasn’t going away.

Patients fund the entire health care industry through taxes, insurance premiums and cash payments. Even the portion paid by employers comes out of an employee’s compensation. Yet when the health care industry refers to “payers,” it means insurance companies or government programs like Medicare.

Patients who want to know what they’ll be paying — let alone shop around for the best deal — usually don’t have a chance. Before Frank’s hip operation he asked NYU Langone for an estimate. It told him to call Aetna, which referred him back to the hospital. He never did get a price.

Imagine if other industries treated customers this way. The price of a flight from New York to Los Angeles would be a mystery until after the trip. Or, while digesting a burger, you’d learn it cost 50 bucks.

A decade ago, the opacity of prices was perhaps less pressing because medical expenses were more manageable. But now patients pay more and more for monthly premiums, and then, when they use services, they pay higher co-pays, deductibles and coinsurance rates.

Employers are equally captive to the rising prices. They fund benefits for more than 150 million Americans and see health care expenses eating up more and more of their budgets.

Richard Master, the founder and CEO of MCS Industries Inc. in Easton, Pennsylvania, offered to share his numbers. By most measures MCS is doing well. Its picture frames and decorative mirrors are sold at Walmart, Target and other stores and, Master said, the company brings in more than $200 million a year.

But the cost of health care is a growing burden for MCS and its 170 employees. A decade ago, Master said, an MCS family policy cost $1,000 a month with no deductible. Now it’s more than $2,000 a month with a $6,000 deductible. MCS covers 75 percent of the premium and the entire deductible. Those rising costs eat into every employee’s take-home pay.

Economist Priyanka Anand of George Mason University said employers nationwide are passing rising health care costs on to their workers by asking them to absorb a larger share of higher premiums. Anand studied Bureau of Labor Statistics data and found that every time health care costs rose by a dollar, an employee’s overall compensation got cut by 52 cents.

Master said his company hops between insurance providers every few years to find the best benefits at the lowest cost. But he still can’t get a breakdown to understand what he’s actually paying for.

“You pay for everything, but you can’t see what you pay for,” he said.

Master is a CEO. If he can’t get answers from the insurance industry, what chance did Frank have?

Frank’s hospital bill and Aetna’s “explanation of benefits” arrived at his home in Port Chester, New York, about a month after his operation. Loaded with an off-putting array of jargon and numbers, the documents were a natural playing field for an actuary like Frank.

Under the words, “DETAIL BILL,” Frank saw that NYU Langone’s total charges were more than $117,000, but that was the sticker price, and those are notoriously inflated. Insurance companies negotiate an in-network rate for their members. But in Frank’s case at least, the “deal” still cost $70,882.

With a practiced eye, Frank scanned the billing codes hospitals use to get paid and immediately saw red flags: There were charges for physical therapy sessions that never took place, and drugs he never received. One line stood out — the cost of the implant and related supplies. Aetna said NYU Langone paid a “member rate” of $26,068 for “supply/implants.” But Frank didn’t see how that could be accurate. He called and emailed Smith & Nephew, the maker of his implant, until a representative told him the hospital would have paid about $1,500. His NYU Langone surgeon confirmed the amount, Frank said. The device company and surgeon did not respond to ProPublica’s requests for comment.

Frank then called and wrote Aetna multiple times, sure it would want to know about the problems. “I believe that I am a victim of excessive billing,” he wrote. He asked Aetna for copies of what NYU Langone submitted so he could review it for accuracy, stressing he wanted “to understand all costs.”

Aetna reviewed the charges and payments twice — both times standing by its decision to pay the bills. The payment was appropriate based on the details of the insurance plan, Aetna wrote.

Frank also repeatedly called and wrote NYU Langone to contest the bill. In its written reply, the hospital didn’t explain the charges. It simply noted that they “are consistent with the hospital’s pricing methodology.”

Increasingly frustrated, Frank drew on his decades of experience to essentially serve as an expert witness on his own case. He gathered every piece of relevant information to understand what happened, documenting what Medicare, the government’s insurance program for the disabled and people over age 65, would have paid for a partial hip replacement at NYU Langone — about $20,491 — and what FAIR Health, a New York nonprofit that publishes pricing benchmarks, estimated as the in-network price of the entire surgery, including the surgeon fees — $29,162.

He guesses he spent about 300 hours meticulously detailing his battle plan in two thick binders stuffed with bills, medical records and correspondence.

ProPublica sent the Medicare and FAIR Health estimates to Aetna and asked why they had paid so much more. The insurance company declined an interview and said in an emailed statement that it works with hospitals, including NYU Langone, to negotiate the “best rates” for members. The charges for Frank's procedure were correct given his coverage, the billed services and the Aetna contract with NYU Langone, the insurer wrote.

NYU Langone also declined ProPublica’s interview request. The hospital said in an emailed statement it billed Frank according to the contract Aetna had negotiated on his behalf. Aetna, it wrote, confirmed the bills were correct.

After seven months, NYU Langone turned Frank’s $7,088 bill over to a debt collector, putting his credit rating at risk. “They upped the ante,” he said.

Frank sent a new flurry of letters to Aetna and to the debt collector and complained to the New York State Department of Financial Services, the insurance regulator, and to the New York State Office of the Attorney General. He even posted his story on LinkedIn.

But no one came to the rescue. A year after he got the first bills, NYU Langone sued him for the unpaid sum. He would have to argue his case before a judge.

You’d think that health insurers would make money, in part, by reducing how much they spend.

Turns out, insurers don’t have to decrease spending to make money. They just have to accurately predict how much the people they insure will cost. That way they can set premiums to cover those costs — adding about 20 percent for their administration and profit. If they’re right, they make money. If they’re wrong, they lose money. But, they aren’t too worried if they guess wrong. They can usually cover losses by raising rates the following year.

Frank suspects he got dinged for costing Aetna too much with his surgery. The company raised the rates on his small group policy — the plan just includes him and his partner — by 18.75 percent the following year.

The Affordable Care Act kept profit margins in check by requiring companies to use at least 80 percent of premiums for medical care. That’s good in theory, but it can actually contribute to rising health care costs. If the insurance company accurately builds high costs into the premium, it can make more money. Here’s how: Let’s say administrative expenses eat up about 17 percent of each premium dollar and around 3 percent is profit. Three percent of a bigger premium is more profit, so the company comes out ahead when total spending rises.

It’s like if a mom told her son he could have 3 percent of a bowl of ice cream. A clever child would say, “Make it a bigger bowl.”

Wonks call this a “perverse incentive.”
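That perverse incentive is simple arithmetic, and worth seeing in numbers. A minimal sketch using the illustrative 80/17/3 split described above (the dollar figures are made up for illustration):

```python
def insurer_profit(total_premiums: float, profit_rate: float = 0.03) -> float:
    """Profit earned as a fixed percentage of premiums collected."""
    return total_premiums * profit_rate

# The ACA rule caps administration + profit at 20 percent of premiums,
# but it does not cap the size of the premiums themselves.
cheap_care_year = insurer_profit(100_000_000)      # $100M premium pool
expensive_care_year = insurer_profit(150_000_000)  # costlier care, bigger pool

print(f"${cheap_care_year:,.0f}")      # $3,000,000
print(f"${expensive_care_year:,.0f}")  # $4,500,000
```

At a fixed percentage, the insurer's dollar profit rises with the total premium pool: the bigger-bowl effect from the ice cream analogy.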

“These insurers and providers have a symbiotic relationship,” said Wendell Potter, who left a career as a public relations executive in the insurance industry to become an author and patient advocate. “There’s not a great deal of incentive on the part of any players to bring the costs down.”

Insurance companies may also accept high prices because they often aren’t the ones footing the bill. Nowadays about 60 percent of employer benefit plans are “self-funded.” That means the employer pays the bills. The insurers simply manage the benefits, processing claims and giving employers access to their provider networks. These management deals are often a large, and lucrative, part of a company’s business. Aetna, for example, insured 8 million people in 2017, but provided administrative services only to considerably more — 14 million.

To woo the self-funded plans, insurers need a strong network of medical providers. A brand-name system like NYU Langone can demand — and get — the highest payments, said Manuel Jimenez, a longtime negotiator for insurers including Aetna. “They tend to be very aggressive in their negotiations.”

On the flip side, insurers can dictate the terms to the smaller hospitals, Jimenez said. The little guys, “get the short end of the stick,” he said. That’s why they often merge with the bigger hospital chains, he said, so they can also increase their rates.

Other types of horse-trading can also come into play, experts say. Insurance companies may agree to pay higher prices for some services in exchange for lower rates on others.

Patients, of course, don’t know how the behind-the-scenes haggling affects what they pay. By keeping costs and deals secret, hospitals and insurers dodge questions about their profits, said Dr. John Freedman, a Massachusetts health care consultant. Cases like Frank’s “happen every day in every town across America. Only a few of them come up for scrutiny.”

In response, a Tennessee company is trying to expose the prices and steer patients to the best deals. Healthcare Bluebook aims to save money for both employers who self-pay, and their workers. Bluebook used payment information from self-funded employers to build a searchable online pricing database that shows the low-, medium- and high-priced facilities for certain common procedures, like MRIs. The company, which launched in 2008, now has more than 4,500 companies paying for its services. Patients can get a $50 bonus for choosing the best deal.

Bluebook doesn’t have price information for Frank’s operation — a partial hip replacement. But its price range in the New York City area for a full hip replacement is from $28,000 to $77,000, including doctor fees. Its “fair price” for these services tops out at about two-thirds of what Aetna agreed to pay on Frank’s behalf.

Frank, who worked with mainstream insurers, didn’t know about Bluebook. If he had used its data, he would have seen that there were facilities that were both high quality and offered a fair price near his home, including Holy Name Medical Center in Teaneck, New Jersey, and Greenwich Hospital in Connecticut. NYU Langone is one of Bluebook’s highest-priced, high-quality hospitals in the area for hip replacements. Others on Bluebook’s pricey list include Montefiore New Rochelle Hospital in New Rochelle, New York, and Hospital for Special Surgery in Manhattan.

ProPublica contacted Hospital for Special Surgery to see if it would provide a price for a partial hip replacement for a patient with an Aetna small-group plan like Frank’s. The hospital declined, citing its confidentiality agreements with insurance companies.

Frank arrived at the Manhattan courthouse on April 2 wearing a suit and fidgeted in his seat while he waited for his hearing to begin. He had never been sued for anything, he said. He and his attorney, Gabriel Nugent, made quiet conversation while they waited for the judge.

In the back of the courtroom, NYU Langone’s attorney, Anton Mikofsky, agreed to talk about the lawsuit. The case is simple, he said. “The guy doesn’t understand how to read a bill.”

The high price of the operation made sense because NYU Langone has to pay its staff, Mikofsky said. It also must battle with insurance companies who are trying to keep costs down, he said. “Hospitals all over the country are struggling,” he said.

“Aetna reviewed it twice,” Mikofsky added. “Didn’t the operation go well? He should feel blessed.”

When the hearing started, the judge gave each side about a minute to make its case, then pushed them to settle.

Mikofsky told the judge Aetna found nothing wrong with the billing and had already taken care of most of the charges. The hospital’s position was clear. Frank owed $7,088.

Nugent argued that the charges had not been justified and Frank felt he owed about $1,500.

The lawyers eventually agreed that Frank would pay $4,000 to settle the case.

Frank said later that he felt compelled to settle because going to trial and losing carried too many risks. He could have been hit with legal fees and interest. It would have also hurt his credit at a time he needs to take out college loans for his kids.

After the hearing, Nugent said a technicality might have doomed their case. New York defendants routinely lose in court if they have not contested a bill in writing within 30 days, he said. Frank had contested the bill over the phone with NYU Langone, and in writing within 30 days with Aetna. But he did not dispute it in writing to the hospital within 30 days.

Frank paid the $4,000, but held on to his outrage. “The system,” he said, “is stacked against the consumer.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


What Facebook’s New Political Ad System Misses

[Editor's Note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

By Jeremy B. Merrill, Ariana Tobin, and Madeleine Varner, ProPublica

Facebook’s long-awaited change in how it handles political advertisements is only a first step toward addressing a problem intrinsic to a social network built on the viral sharing of user posts.

The company’s approach, a searchable database of political ads and their sponsors, depends on the company’s ability to sort through huge quantities of ads and identify which ones are political. Facebook is betting that a combination of voluntary disclosure and review by both people and automated systems will close a vulnerability that was famously exploited by Russian meddlers in the 2016 election.

The company is doubling down on tactics that so far have not prevented the proliferation of hate-filled posts or of ads that use Facebook’s capability to target particular groups.

If the policy works as Facebook hopes, users will learn who has paid for the ads they see. But the company is not revealing details about a significant aspect of how political advertisers use its platform: the specific attributes the ad buyers used to target a particular person for an ad.

Facebook’s new system is the company’s most ambitious response thus far to the now-documented efforts by Russian agents to circulate items that would boost Donald Trump’s chances or suppress Democratic turnout. The new policies announced Thursday will make it harder for somebody trying to exploit the precise vulnerabilities in Facebook’s system exploited by the Russians in 2016 in several ways:

First, political ads that you see on Facebook will now include the name of the organization or person who paid for them, reminiscent of disclaimers required on political mailers and TV ads. (The ads Facebook identified as placed by Russians carried no such tags.)

The Federal Election Commission requires political ads to carry such clear disclosures but as we have reported, many candidates and groups on Facebook haven’t been following that rule.

Second, all political ads will be published in a searchable database.

Finally, the company will now require that anyone buying a political ad in their system confirm that they’re a U.S. resident. Facebook will even mail advertisers a postcard to make certain they’re in the U.S. Facebook says ads by advertisers whose identities aren’t verified under this process will be taken down starting in about a week, and they will be blocked from buying new ads until they have verified themselves.

While the new system can still be gamed, the specific tactics used by the Russian Internet Research Agency, such as an overseas purchase of ads promoting a Black Lives Matter rally under the name “Blacktivist,” will become harder — or at least harder to do without getting caught.

The company has also pledged to devote more employees to the issue, including 3,000-4,000 more content moderators. But Facebook says these will not be additional hires — they will be included in the 20,000 already promised to tackle various moderation issues in the coming months.

What Is Facebook Missing?

The most obvious flaw in Facebook’s new system is that it misses ads it should catch. Right now, it’s easy to find political ads that are missing from its archive. Take this one, from the Washington State Democratic Party. Just minutes after Facebook finished announcing the launch of the tool, a participant in ProPublica’s Facebook Political Ad Collector project saw this ad, criticizing Republican congresswoman Cathy McMorris Rodgers… but it wasn’t in the database.

And there are others.

The company acknowledged that the process is still a work in progress, reiterating its request that users pitch in by reporting the political ads that lack disclosures.

Even as Facebook’s system gets better at identifying political ads, the company is withholding a critical piece of information in the ads it’s publishing. While we’ll see some demographic information about who saw a given ad, Facebook is not indicating which audiences the advertiser intended to target — categories that often include racial or political characteristics and which have been controversial in the past.

This information is critical to researchers and journalists trying to make sense of political advertising on Facebook. Take, for instance, this ad promoting the environmental benefits of nuclear power, from a group called Nuclear Matters: the group chose specifically to show it to people interested in veganism — a fact we wouldn’t know from looking at the demographics of the users who saw the ad.

Facebook said it considers the information about who saw an ad — age, gender and location — sufficient. Rob Leathern, Facebook’s Director of Product Management, said that the limited demographics-only breakdown “offers more transparency than the intent, in terms of showing the targeting.”

The company is also promising to launch an API, a technical interface that will allow outsiders to write software to look for patterns in the new ad database. The company says it will launch the API “later this summer” but hasn’t said what data it will contain or who will have access to it.

ProPublica’s own Facebook Ad Collector tool, which also collects political ads spotted on Facebook, has an API that can be accessed by anyone. It also includes the targeting information — which users can also see on each ad that they view.

Facebook said it would not release data about ads flagged by users as political and then rejected by the system. We’re curious about those, and we know firsthand that their software can be imperfect. We’ve attempted to buy ads specifically about our journalism that were flagged as problematic — because the ads “contained profanity,” or were misclassified as discriminatory ads for “employment, credit or housing opportunities” by mistake.

Facebook’s track record on initiatives aimed at improving the transparency of its massively profitable advertising system is spotty. The company has said it’s going to rely in part on artificial intelligence to review ads — the same sort of technology that the company said in the past it would use to block discriminatory ads for housing, employment and credit opportunities.

When we tested the system almost a year after a ProPublica story showed Facebook was allowing advertisers to target housing ads in a way that violated Fair Housing Act protections, we found that the company was still approving housing ads that excluded African-Americans and other “multicultural affinities” from seeing them. The company was pressured to implement several changes to its ad portal and a Fair Housing group filed a lawsuit against the company.

Facebook also plans to rely in part on users to find and report political ads that get through the system without the required disclosures.

But its track record of moderating user-flagged content — when it comes to both hate speech and advertising — has been uneven. Last December, ProPublica brought 49 cases of user-flagged offensive speech to Facebook, and the company acknowledged that its moderators had made the wrong call in 22 of them.

The company admits it's playing a “cat and mouse game” with people trying to pass political ads through their system unnoticed. Just last month, Ohio Democratic gubernatorial candidate Richard Cordray’s campaign ran Facebook ads criticizing his opponent — but from a page called “Ohio Primary Info.”

The need for ad transparency goes way beyond Russian bad actors. Our tool has already caught scams and malware disguised as politics, which users raised as a problem years before Facebook made any meaningful change.

If you flag an ad to Facebook, please report it to us as well by sending an email to political.ads@propublica.org. We will be watching to see how well Facebook responds when users flag ads.

How Will They Enforce the New Rules?

It’s one thing to create a set of rules, and another to enforce them consistently and on a large scale.

Facebook, which kept its content moderation and hate speech policies secret until they were revealed by ProPublica, won’t share the specific rules governing political ad content or details about the instructions moderators receive.

Leathern said the company is keeping the rules secret to frustrate the efforts of “bad actors who try to game our enforcement systems.”

Facebook has said it’s looking to flag both electoral ads and those that take a position on its list of twenty “national legislative issues of public importance”. These range from the concrete, like “abortion” and “taxes,” to broad topics like “health” and “values.”

Facebook acknowledges its system will make mistakes and says it will improve over time. Ads for specific candidates are relatively easy to detect. “We’ll likely miss ads when they aim to persuade,” said Katie Harbath, Facebook’s Global Politics and Government Outreach Director.

We plan to keep an eye out for ads that don’t make it into the archive. We’ll be looking for ads that our Political Ad Collector tool finds that aren’t in Facebook’s database.

Want to Help?

We need your help building out our independent database of political ads! If you’re still reading this article, we’re giving you permission to stop and install the Political Ad Collector extension. Here’s what you need to know about how it works.

You can also help us find other people who can install the tool. We are especially in need of people who aren’t ProPublica readers already. We need people from a diverse set of backgrounds, and with different perspectives and political beliefs. Please encourage your friends and relatives — especially the ones you avoid talking politics with — to install it.

Do You Work at a News Outlet and Want to Partner With Us on This?

Awesome. We’re already working with quite a few newsrooms all over the world, including the CBC in Canada, Bridge Magazine in Michigan, The Guardian in Australia and more.

In the U.S., we’re trying to get eyes and ears on the ground in as many local elections as possible. If your readers would be interested in joining our transparency effort, please reach out. We’re happy to send more information about this and our larger Electionland project.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Federal Watchdog Launches Investigation of Age Bias at IBM

[Editor's note: today's guest post, by reporters at ProPublica, updates a prior post about employment practices. It is reprinted with permission. A data breach at IBM in 2007 led to the creation of this blog.]

By Peter Gosselin, ProPublica

The U.S. Equal Employment Opportunity Commission has launched a nationwide probe of age bias at IBM in the wake of a ProPublica investigation showing the company has flouted or outflanked laws intended to protect older workers from discrimination.

More than five years after IBM stopped providing legally required disclosures to older workers being laid off, the EEOC’s New York district office has begun consolidating individuals’ complaints from across the country and asking the company to explain practices recounted in the ProPublica story, according to ex-employees who’ve spoken with investigators and people familiar with the agency’s actions.

"Whenever you see the EEOC pulling cases and sending them to investigations, you know they’re taking things seriously," said the agency’s former general counsel, David Lopez. "I suspect IBM’s treatment of its later-career workers and older applicants is going to get a thorough vetting."

EEOC officials refused to comment on the agency’s investigation, but a dozen ex-IBM employees from California, Colorado, Texas, New Jersey and elsewhere allowed ProPublica to view the status screens for their cases on the agency’s website. The screens show the cases being transferred to EEOC’s New York district office shortly after the March 22 publication of ProPublica’s original story, and then being shifted to the office’s investigations division, in most instances, between April 5 and April 10.

The agency’s acting chair, Victoria Lipnic, a Republican, has made age discrimination a priority. The EEOC’s New York office won a settlement last year from Kentucky-based national restaurant chain Texas Roadhouse in the largest age-related case, as measured by the number of workers covered, to go to trial in more than three decades.

IBM did not respond to questions about the EEOC investigation. In response to detailed questions for our earlier story, the company issued a brief statement, saying in part, "We are proud of our company and its employees’ ability to reinvent themselves era after era while always complying with the law."

Just prior to publication of the story, IBM issued a video recounting its long history of support for equal employment and diversity. In it, CEO Virginia "Ginni" Rometty said, "Every generation of IBMers has asked ‘How can we in our own time expand our understanding of inclusion?’ "

ProPublica reported in March that the tech giant, which has an annual revenue of about $80 billion, has ousted an estimated 20,000 U.S. employees ages 40 and over since 2014, about 60 percent of its American job cuts during those years. In some instances, it earmarked money saved by the departures to hire young replacements in order to, in the words of one internal company document, "correct seniority mix."

ProPublica reported that IBM regularly denied older workers information the law says they’re entitled to in order to decide whether they’ve been victims of age bias, and used point systems and other methods to pick older workers for removal, even when the company rated them high performers.

In some cases, IBM treated job cuts as voluntary retirements, even over employees’ objections. This reduced the number of departures counted as layoffs, which can trigger public reporting requirements in high enough numbers, and prevented employees from seeking jobless benefits for which voluntary retirees can’t file.

In addition to the complaints covered in the EEOC probe, a number of current and former employees say they have recently filed new complaints with the agency about age bias and are contemplating legal action against the company.

Edvin Rusis of Laguna Niguel, a suburb south of Los Angeles, said IBM has told him he’ll be laid off June 27 from his job of 15 years as a technical specialist. Rusis refused to sign a severance agreement and hired a class-action lawyer. They have filed an EEOC complaint claiming Rusis was one of "thousands" discriminated against by IBM.

If the agency issues a right-to-sue letter indicating Rusis has exhausted administrative remedies for his claim, they can take IBM to court. "I don’t see a clear reason for why they’re laying me off," the 59-year-old Rusis said in an interview. "I can only assume it’s age, and I don’t want to go silently."

Coretta Roddey of suburban Atlanta, 49, an African-American Army veteran and former IBM employee, said she’s applied more than 50 times to return to the company, but has been turned down or received no response. She’s hired a lawyer and filed an age discrimination complaint with EEOC.

"It’s frustrating," she said of the multiple rejections. "It makes you feel you don’t have the qualifications (for the job) when you really do."

Filed under:

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."

Lovely. And, Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."
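The unwrapping that Privacy Badger performs can be illustrated with a few lines of Python. This is a minimal sketch, not EFF's actual code: it assumes the wrapped URL carries its true destination, percent-encoded, in a `u` query parameter, as in the l.facebook.com example quoted above (the shim below is shortened for readability):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_link_shim(shim_url):
    """Extract the real destination from a Facebook-style link shim.

    The wrapped URL carries the true destination, percent-encoded,
    in its "u" query parameter (as in the l.facebook.com/l.php
    example quoted above).
    """
    query = parse_qs(urlparse(shim_url).query)
    # parse_qs already percent-decodes the parameter values.
    return query["u"][0]

# A shortened example of the "monstrosity" quoted above.
shim = "https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8"
print(unwrap_link_shim(shim))  # https://eff.org/pb
```

Privacy Badger does the equivalent in the browser, replacing each shim with its unwrapped destination before the tracking request to l.facebook.com can fire.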

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Google logo Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns about the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people. Reportedly, these are the first known mass resignations.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor wanted -- that their personal e-mail, photos, calendars, videos, social media activity, and map usage would be used for automated military applications. What are your opinions?


Equifax Operates A Secondary Credit Reporting Agency, And Its Website Appears Haphazard

Equifax logo More news about Equifax, the credit reporting agency with multiple data security failures resulting in a massive data breach affecting half of the United States population. It appears that Equifax also operates a secondary credit bureau: the National Consumer Telecommunications and Utilities Exchange (NCTUE). The Krebs On Security blog explained Equifax's role:

"The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE... there are four "exchanges" that feed into the NCTUE’s system: the NCTUE itself, something called "Centralized Credit Check Systems," the New York Data Exchange (NYDE), and the California Utility Exchange. According to a partner solutions page at Verizon, the NYDE is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts. The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax)... The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC)."

This surfaced after consumers with security freezes on their credit reports at the three major credit reporting agencies (Experian, Equifax, and TransUnion) found fraudulent mobile phone accounts opened in their names. This shouldn't have been possible, since security freezes prevent credit reporting agencies from selling consumers' credit reports to telecommunications companies, which typically perform credit checks before opening new accounts. So, the credit information must have come from somewhere else. It turns out, the source was the NCTUE.

NCTUE logo Credit reporting agencies make money by selling consumers' credit reports to potential lenders. And credit reports from the NCTUE are easy for anyone to order:

"... the NCTUE makes it fairly easy to obtain any records they may have on Americans. Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address."

The Krebs on Security blog also explained the expired SSL certificate used by Equifax, which prevented the site from serving web pages securely. That was simply inexcusable, poor data security.
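An expired certificate is trivial to detect. As an illustration, Python's standard library can parse the `notAfter` text format that `getpeercert()` returns and compare it with the current time. This is a hypothetical sketch -- the date shown is illustrative, not Equifax's actual certificate date:

```python
import ssl
import time

def cert_is_expired(not_after, now=None):
    """Return True if a certificate's notAfter timestamp has passed.

    `not_after` uses the OpenSSL text format returned by
    ssl.SSLSocket.getpeercert()["notAfter"],
    e.g. "May 9 12:00:00 2018 GMT".
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    return (now if now is not None else time.time()) > expiry

# A hypothetical expiry date already in the past:
print(cert_is_expired("May 9 12:00:00 2018 GMT"))  # True
```

Operators can run a check like this on a schedule and renew certificates long before visitors ever see a browser warning.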

A quick check of the NCTUE page on the Better Business Bureau site found 2 negative reviews and 70 complaints -- mostly about negative credit inquiries and unresolved issues. A quick check of the NCTUE Terms Of Use page found very thin usage and privacy policies lacking details, such as any mention of data sharing, cookies, tracking, and more. The lack of data-sharing mentions could indicate that NCTUE will share or sell data with anyone: entities, companies, and government agencies. It also means there is no way to verify whether the NCTUE complies with its own policies. Not good.

The policy contains enough language indicating that NCTUE accepts no liability for anything:

"... THE NCTUE IS NOT RESPONSIBLE FOR, AND EXPRESSLY DISCLAIM, ALL LIABILITY FOR, DAMAGES OF ANY KIND ARISING OUT OF USE, REFERENCE TO, OR RELIANCE ON ANY INFORMATION CONTAINED WITHIN THE SITE. All content located at or available from the NCTUE website is provided “as is,” and NCTUE makes no representations or warranties, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title or non-infringement of proprietary rights. Without limiting the foregoing, NCTUE makes no representation or warranty that content located on the NCTUE website is free from error or suitable for any purpose; nor that the use of such content will not infringe any third party copyrights, trademarks or other intellectual property rights.

Links to Third Party Websites: Although the NCTUE website may include links providing direct access to other Internet resources, including websites, NCTUE is not responsible for the accuracy or content of information contained in these sites.."

Huh?! As is? The data NCTUE collected is being used for credit decisions. Reliability and accuracy matters. And, there are more concerns.

While at the NCTUE site, I briefly browsed the credit freeze information, which is hosted on an outsourced site, the Exchange Service Center (ESC). What's up with that? Why a separate site, and not a cohesive single site with a unified customer experience? This design gives the impression that the security freeze process was an afterthought.

Plus, the NCTUE and ESC sites present different policies (e.g., terms of use, privacy). Really? Why the complexity? Which policies rule? You'd think that the policies on both sites would be consistent and would mention each other, since consumers must use both sites to complete security freezes. That design seems haphazard. Not good.

There's more. Rather than standard web pages, the ESC site presents its policies as static Adobe PDF documents, making it difficult for users to follow links for more information. (Contrast those thin policies with the more comprehensive Privacy and Terms of Use policies by TransUnion.) Plus, one policy was old -- dated 2011. It seems the site hasn't been updated in seven years. What fresh hell is this? More haphazard design. Why the confusing user experience? Not good.

Image of confusing drop-down menu for exchanges within the security freeze process. Click to view larger version There's more. When placing a security freeze, the ESC site includes a drop-down menu asking consumers to pick an exchange (e.g., NCTUE, Centralized Credit Check System, California Utility Exchange, NYDE). The confusing drop-down menu appears in the image on the right. Which menu option is the global security freeze? Is there a global option? The form page doesn't say, and it should. Why would a consumer select only one of the exchanges? Perhaps this is another slick attempt to limit the effectiveness of security freezes placed by consumers? Not good.

What can consumers make of this? First, the NCTUE site seems to be a slick way for Equifax to skirt the security freezes which consumers have placed upon their credit reports. Sounds like a definite end-run to me. Surprised? I'll bet. Angry? I'll bet, too. We consumers paid good money for security freezes on our credit reports.

Second, the combo NCTUE/ESC site seems like some legal, outsourcing ju-jitsu to avoid all liability, while still enjoying the revenues from credit-report sales. The site left me with the impression that its design, which hasn't kept pace with internet best practices over the years, was by a committee of attorneys focused upon serving their corporate clients' data collection and sharing needs while doing the absolute minimum required legally -- rather than a site focused upon the security needs of consumers. I can best describe the site using an old film-review phrase: a million monkeys with a million crayons would be hard pressed in a million years to create something this bad.

Third, credit reporting agencies get their data from a variety of sources. So, their business model is based upon data sharing. NCTUE seems designed to effectively do just that, regardless of consumers' security needs and wishes.

Fourth, this situation offers several reminders: a) just about anyone can set up and operate a credit reporting agency -- no special skills or expertise required; b) there are both national and regional credit reporting agencies; c) credit reports often contain errors; and d) credit reporting agencies historically have outsourced work, sometimes internationally -- for better or worse data security.

Fifth, now you know what criminals and fraudsters already know... how to skirt the security freezes on credit reports and gain access to consumers' sensitive information. The combo NCTUE/ESC site is definitely a high-value target for criminals.

My first impression of the NCTUE site: haphazard design making it difficult for consumers to use and to trust it. What do you think?


Report: Software Failure In Fatal Accident With Self-Driving Uber Car

TechCrunch reported:

"The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death."

The TechCrunch story provides details about which software subsystem the report said failed.

Not good.

So, autonomous or self-driving cars are only as good as the software they're programmed with (including its maintenance). Anyone who has used computers during the last couple of decades probably has experienced software glitches, bugs, and failures. It happens.

This latest incident suggests self-driving cars aren't yet ready. What do you think?


Connecticut And Federal Regulators Announce $1.3 Million Settlement With Substance Abuse Healthcare Provider

Connecticut and federal regulators recently announced a settlement agreement to resolve allegations that New Era Rehabilitation Center (New Era), operating in New Haven and Bridgeport, submitted false claims to both state and federal healthcare programs. The office of George Jepsen, Connecticut Attorney General, announced that New Era:

"... and its co-founders and owners – Dr. Ebenezer Kolade and Dr. Christina Kolade – are enrolled as providers in the Connecticut Medical Assistance Program (CMAP), which includes the state's Medicaid program. As part of their practice, they provide methadone treatment services for patients dealing with opioid addiction. Most of their patients are CMAP beneficiaries.

During the relevant time period, CMAP reimbursed methadone clinics by paying a weekly bundled rate that included all of the services associated with methadone maintenance, including the patient's doses of methadone; the initial intake evaluation; a physical examination; periodic drug testing; and individual, group and family drug counseling... The state and federal governments alleged that, from October 2009 to November 2013, New Era and the Kolades engaged in a pattern and practice of billing CMAP weekly for the methadone bundled service rate and then also submitting a separate claim to the CMAP for virtually every drug counseling session provided to clients by using a billing code for outpatient psychotherapy. The state and federal governments further alleged that those psychotherapy sessions were actually the drug counseling sessions already included and reimbursed through the bundled rate."

These actions were part of the State of Connecticut's Inter-agency Fraud Task Force created in 2013 to investigate and prosecute healthcare fraud. The joint investigation included the Connecticut AG's office, the office of Connecticut U.S. Attorney John H. Durham, and the U.S. Department of Health and Human Services, Office of Inspector General – Office of Investigations.

Connecticut Fight Fraud logo Terms of the settlement agreement require New Era to pay $1,378,533 in settlement funds. Of that amount, $881,945 will be returned to CMAP.

Connecticut residents suspecting healthcare fraud or abuse should contact the Attorney General’s Antitrust and Government Program Fraud Department (phone at 860-808-5040, or email at ag.fraud@ct.gov), or the Department of Social Services fraud (hotline at 1-800-842-2155, online at www.ct.gov/dss/reportingfraud, or email at providerfraud.dss@ct.gov). Residents in other states can contact their state's attorney general's office.


Twitter Advised Its Users To Change Their Passwords After Security Blunder

Yesterday, Twitter.com advised all of its users to change their passwords after a huge security blunder exposed users' passwords online in an unprotected format. The social networking service released a statement on May 3rd:

"We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password."

Security experts advise consumers not to use the same password at several sites or services. Repeated use of the same password makes it easy for criminals to hack into multiple sites or services.

The statement by Twitter.com also explained that it masks users' passwords:

"... through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
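The distinction matters: a salted hash lets a server validate a password without ever storing the plaintext, and Twitter's bug amounted to logging the password before the hashing step ran. Here is a minimal sketch of the idea. Twitter says it uses bcrypt, which for Python is a third-party package, so this example substitutes the standard library's PBKDF2 purely to illustrate the same salted-hash approach:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a random salt; store only (salt, digest).

    Twitter describes bcrypt; this sketch uses the standard
    library's PBKDF2 to show the same principle. Twitter's bug
    was, in effect, writing `password` to a log before this
    function ran.
    """
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Validate credentials without revealing the stored password."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because only the salt and digest are stored, even someone who reads the database (or, as in this incident, an internal log that was supposed to hold hashes) cannot directly recover users' passwords.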

The good news: Twitter found the bug by itself. The not-so-good news: the statement was short on details. It did not disclose what fixes will prevent this blunder from happening again, nor did it say how many users were affected. Twitter has about 330 million users, so it seems that all users were affected.