
16 posts from May 2018

Why Your Health Insurer Doesn’t Care About Your Big Bills

[Editor's note: today's guest post, by the reporters at ProPublica, discusses pricing and insurance problems within the healthcare industry, and a resource most consumers probably are unaware of. It is reprinted with permission.]

By Marshall Allen, ProPublica

Michael Frank ran his finger down his medical bill, studying the charges and pausing in disbelief. The numbers didn’t make sense.

His recovery from a partial hip replacement had been difficult. He’d iced and elevated his leg for weeks. He’d pushed his 49-year-old body, limping and wincing, through more than a dozen physical therapy sessions.

The last thing he needed was a botched bill.

His December 2015 surgery to replace the ball in his left hip joint at NYU Langone Medical Center in New York City had been routine. One night in the hospital and no complications.

He was even supposed to get a deal on the cost. His insurance company, Aetna, had negotiated an in-network “member rate” for him. That’s the discounted price insured patients get in return for paying their premiums every month.

But Frank was startled to see that Aetna had agreed to pay NYU Langone $70,000. That’s more than three times the Medicare rate for the surgery and more than double the estimate of what other insurance companies would pay for such a procedure, according to a nonprofit that tracks prices.

Fuming, Frank reached for the phone. He couldn’t see how NYU Langone could justify these fees. And what was Aetna doing? As his insurer, wasn’t it Aetna’s duty to represent him, its “member”? So why had it agreed to pay a grossly inflated rate, one that stuck him with a $7,088 bill for his portion?

Frank wouldn’t be the first to wonder. The United States spends more per person on health care than any other country. A lot more. As a country, by many measures, we are not getting our money’s worth. Tens of millions remain uninsured. And millions are in financial peril: about 1 in 5 Americans is currently being pursued by a collection agency over medical debt. Health care costs repeatedly top the list of consumers’ financial concerns.

Experts frequently blame this on the high prices charged by doctors and hospitals. But less scrutinized is the role insurance companies — the middlemen between patients and those providers — play in boosting our health care tab. Widely perceived as fierce guardians of health care dollars, insurers, in many cases, aren’t. In fact, they often agree to pay high prices, then, one way or another, pass those high prices on to patients — all while raking in healthy profits.

ProPublica and NPR are examining the bewildering, sometimes enraging ways the health insurance industry works, by taking an inside look at the games, deals and incentives that often result in higher costs, delays in care or denials of treatment. The misunderstood relationship between insurers and hospitals is a good place to start.

Today, about half of Americans get their health care benefits through their employers, who rely on insurance companies to manage the plans, restrain costs and get them fair deals.

But as Frank eventually discovered, once he’d signed on for surgery, a secretive system of pre-cut deals came into play that had little to do with charging him a reasonable fee.

After Aetna approved the in-network payment of $70,882 (not including the fees of the surgeon and anesthesiologist), Frank’s coinsurance required him to pay the hospital 10 percent of the total.
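A quick check of the arithmetic, using only the figures reported in this story, shows how the negotiated total translates into Frank’s share (a minimal sketch, not anything from his actual paperwork):

    # Coinsurance arithmetic from the figures above: Frank owed 10 percent of the negotiated total.
    negotiated_total = 70_882   # Aetna-approved in-network payment (hospital charges only)
    coinsurance_rate = 0.10     # Frank's share under his plan
    print(round(negotiated_total * coinsurance_rate))  # -> 7088, matching the $7,088 bill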

When Frank called NYU Langone to question the charges, the hospital punted him to Aetna, which told him it paid the bill according to its negotiated rates. Neither Aetna nor the hospital would answer his questions about the charges.

Frank found himself in a standoff familiar to many patients. The hospital and insurance company had agreed on a price and he was required to help pay it. It’s a three-party transaction in which only two of the parties know how the totals are tallied.

Frank could have paid the bill and gotten on with his life. But he was outraged by what his insurance company agreed to pay. “As bad as NYU is,” Frank said, “Aetna is equally culpable because Aetna’s job was to be the checks and balances and to be my advocate.”

And he also knew that Aetna and NYU Langone hadn’t double-teamed an ordinary patient. In fact, if you imagined the perfect person to take on insurance companies and hospitals, it might be Frank.

For three decades, Frank has worked for insurance companies like Aetna, helping to assess how much people should pay in monthly premiums. He is a former president of the Actuarial Society of Greater New York and has taught actuarial science at Columbia University. He teaches courses for insurance regulators and has even served as an expert witness for insurance companies.

The hospital and insurance company may have expected him to shut up and pay. But Frank wasn’t going away.

Patients fund the entire health care industry through taxes, insurance premiums and cash payments. Even the portion paid by employers comes out of an employee’s compensation. Yet when the health care industry refers to “payers,” it means insurance companies or government programs like Medicare.

Patients who want to know what they’ll be paying — let alone shop around for the best deal — usually don’t have a chance. Before Frank’s hip operation he asked NYU Langone for an estimate. It told him to call Aetna, which referred him back to the hospital. He never did get a price.

Imagine if other industries treated customers this way. The price of a flight from New York to Los Angeles would be a mystery until after the trip. Or, while digesting a burger, you’d learn it cost 50 bucks.

A decade ago, the opacity of prices was perhaps less pressing because medical expenses were more manageable. But now patients pay more and more for monthly premiums, and then, when they use services, they pay higher co-pays, deductibles and coinsurance rates.

Employers are equally captive to the rising prices. They fund benefits for more than 150 million Americans and see health care expenses eating up more and more of their budgets.

Richard Master, the founder and CEO of MCS Industries Inc. in Easton, Pennsylvania, offered to share his numbers. By most measures MCS is doing well. Its picture frames and decorative mirrors are sold at Walmart, Target and other stores and, Master said, the company brings in more than $200 million a year.

But the cost of health care is a growing burden for MCS and its 170 employees. A decade ago, Master said, an MCS family policy cost $1,000 a month with no deductible. Now it’s more than $2,000 a month with a $6,000 deductible. MCS covers 75 percent of the premium and the entire deductible. Those rising costs eat into every employee’s take-home pay.

Economist Priyanka Anand of George Mason University said employers nationwide are passing rising health care costs on to their workers by asking them to absorb a larger share of higher premiums. Anand studied Bureau of Labor Statistics data and found that every time health care costs rose by a dollar, an employee’s overall compensation got cut by 52 cents.

Master said his company hops between insurance providers every few years to find the best benefits at the lowest cost. But he still can’t get a breakdown to understand what he’s actually paying for.

“You pay for everything, but you can’t see what you pay for,” he said.

Master is a CEO. If he can’t get answers from the insurance industry, what chance did Frank have?

Frank’s hospital bill and Aetna’s “explanation of benefits” arrived at his home in Port Chester, New York, about a month after his operation. Loaded with an off-putting array of jargon and numbers, the documents were a natural playing field for an actuary like Frank.

Under the words “DETAIL BILL,” Frank saw that NYU Langone’s total charges were more than $117,000, but that was the sticker price, and sticker prices are notoriously inflated. Insurance companies negotiate an in-network rate for their members. But in Frank’s case at least, the “deal” still cost $70,882.

With a practiced eye, Frank scanned the billing codes hospitals use to get paid and immediately saw red flags: There were charges for physical therapy sessions that never took place, and drugs he never received. One line stood out — the cost of the implant and related supplies. Aetna said NYU Langone paid a “member rate” of $26,068 for “supply/implants.” But Frank didn’t see how that could be accurate. He called and emailed Smith & Nephew, the maker of his implant, until a representative told him the hospital would have paid about $1,500. His NYU Langone surgeon confirmed the amount, Frank said. The device company and surgeon did not respond to ProPublica’s requests for comment.

Frank then called and wrote Aetna multiple times, sure it would want to know about the problems. “I believe that I am a victim of excessive billing,” he wrote. He asked Aetna for copies of what NYU Langone submitted so he could review it for accuracy, stressing he wanted “to understand all costs.”

Aetna reviewed the charges and payments twice — both times standing by its decision to pay the bills. The payment was appropriate based on the details of the insurance plan, Aetna wrote.

Frank also repeatedly called and wrote NYU Langone to contest the bill. In its written reply, the hospital didn’t explain the charges. It simply noted that they “are consistent with the hospital’s pricing methodology.”

Increasingly frustrated, Frank drew on his decades of experience to essentially serve as an expert witness on his own case. He gathered every piece of relevant information to understand what happened, documenting what Medicare, the government’s insurance program for the disabled and people over age 65, would have paid for a partial hip replacement at NYU Langone — about $20,491 — and what FAIR Health, a New York nonprofit that publishes pricing benchmarks, estimated as the in-network price of the entire surgery, including the surgeon fees — $29,162.

He guesses he spent about 300 hours meticulously detailing his battle plan in two-inch-thick binders with bills, medical records and correspondence.

ProPublica sent the Medicare and FAIR Health estimates to Aetna and asked why it had paid so much more. The insurance company declined an interview and said in an emailed statement that it works with hospitals, including NYU Langone, to negotiate the “best rates” for members. The charges for Frank's procedure were correct given his coverage, the billed services and the Aetna contract with NYU Langone, the insurer wrote.

NYU Langone also declined ProPublica’s interview request. The hospital said in an emailed statement it billed Frank according to the contract Aetna had negotiated on his behalf. Aetna, it wrote, confirmed the bills were correct.

After seven months, NYU Langone turned Frank’s $7,088 bill over to a debt collector, putting his credit rating at risk. “They upped the ante,” he said.

Frank sent a new flurry of letters to Aetna and to the debt collector and complained to the New York State Department of Financial Services, the insurance regulator, and to the New York State Office of the Attorney General. He even posted his story on LinkedIn.

But no one came to the rescue. A year after he got the first bills, NYU Langone sued him for the unpaid sum. He would have to argue his case before a judge.

You’d think that health insurers would make money, in part, by reducing how much they spend.

Turns out, insurers don’t have to decrease spending to make money. They just have to accurately predict how much the people they insure will cost. That way they can set premiums to cover those costs — adding about 20 percent for their administration and profit. If they’re right, they make money. If they’re wrong, they lose money. But they aren’t too worried if they guess wrong. They can usually cover losses by raising rates the following year.

Frank suspects he got dinged for costing Aetna too much with his surgery. The company raised the rates on his small group policy — the plan just includes him and his partner — by 18.75 percent the following year.

The Affordable Care Act kept profit margins in check by requiring companies to use at least 80 percent of premiums for medical care. That’s good in theory, but it can actually contribute to rising health care costs: if the insurance company accurately builds high costs into the premium, it can make more money. Here’s how: Let’s say administrative expenses eat up about 17 percent of each premium dollar and around 3 percent is profit. Three percent of a bigger premium is more profit, so the company comes out ahead when the underlying spending is higher.

It’s like if a mom told her son he could have 3 percent of a bowl of ice cream. A clever child would say, “Make it a bigger bowl.”

Wonks call this a “perverse incentive.”
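To make the incentive concrete, here is a back-of-the-envelope sketch using the rough 80/17/3 split described above; the per-member dollar amounts are hypothetical, chosen only for illustration:

    # Hypothetical illustration of the "bigger bowl" incentive under the 80/17/3 split described above.
    def insurer_profit(medical_costs_per_member: float,
                       medical_share: float = 0.80,   # at least 80% of the premium must go to care
                       profit_share: float = 0.03) -> float:
        """Premium is set so medical costs are 80 percent of it; profit is 3 percent of that premium."""
        premium = medical_costs_per_member / medical_share
        return premium * profit_share

    print(round(insurer_profit(8_000)))   # lower negotiated prices  -> about $300 profit per member
    print(round(insurer_profit(12_000)))  # higher negotiated prices -> about $450 profit per member

Either way the insurer keeps its 3 percent; 3 percent of a larger premium is simply more money.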

“These insurers and providers have a symbiotic relationship,” said Wendell Potter, who left a career as a public relations executive in the insurance industry to become an author and patient advocate. “There’s not a great deal of incentive on the part of any players to bring the costs down.”

Insurance companies may also accept high prices because often they aren’t the ones footing the bill. Nowadays about 60 percent of employer benefit plans are “self-funded.” That means the employer pays the bills. The insurers simply manage the benefits, processing claims and giving employers access to their provider networks. These management deals are often a large, and lucrative, part of a company’s business. Aetna, for example, insured 8 million people in 2017, but provided administrative services alone, without taking on the insurance risk, to considerably more — 14 million.

To woo the self-funded plans, insurers need a strong network of medical providers. A brand-name system like NYU Langone can demand — and get — the highest payments, said Manuel Jimenez, a longtime negotiator for insurers including Aetna. “They tend to be very aggressive in their negotiations.”

On the flip side, insurers can dictate the terms to the smaller hospitals, Jimenez said. The little guys, “get the short end of the stick,” he said. That’s why they often merge with the bigger hospital chains, he said, so they can also increase their rates.

Other types of horse-trading can also come into play, experts say. Insurance companies may agree to pay higher prices for some services in exchange for lower rates on others.

Patients, of course, don’t know how the behind-the-scenes haggling affects what they pay. By keeping costs and deals secret, hospitals and insurers dodge questions about their profits, said Dr. John Freedman, a Massachusetts health care consultant. Cases like Frank’s “happen every day in every town across America. Only a few of them come up for scrutiny.”

In response, a Tennessee company is trying to expose the prices and steer patients to the best deals. Healthcare Bluebook aims to save money both for employers who self-pay and for their workers. Bluebook used payment information from self-funded employers to build a searchable online pricing database that shows the low-, medium- and high-priced facilities for certain common procedures, like MRIs. The company, which launched in 2008, now has more than 4,500 companies paying for its services. Patients can get a $50 bonus for choosing the best deal.

Bluebook doesn’t have price information for Frank’s operation — a partial hip replacement. But its price range in the New York City area for a full hip replacement is from $28,000 to $77,000, including doctor fees. Its “fair price” for these services tops out at about two-thirds of what Aetna agreed to pay on Frank’s behalf.

Frank, who worked with mainstream insurers, didn’t know about Bluebook. If he had used its data, he would have seen that several facilities near his home offered both high quality and a fair price, including Holy Name Medical Center in Teaneck, New Jersey, and Greenwich Hospital in Connecticut. NYU Langone is one of Bluebook’s highest-priced, high-quality hospitals in the area for hip replacements. Others on Bluebook’s pricey list include Montefiore New Rochelle Hospital in New Rochelle, New York, and Hospital for Special Surgery in Manhattan.

ProPublica contacted Hospital for Special Surgery to see if it would provide a price for a partial hip replacement for a patient with an Aetna small-group plan like Frank’s. The hospital declined, citing its confidentiality agreements with insurance companies.

Frank arrived at the Manhattan courthouse on April 2 wearing a suit and fidgeted in his seat while he waited for his hearing to begin. He had never been sued for anything, he said. He and his attorney, Gabriel Nugent, made quiet conversation while they waited for the judge.

In the back of the courtroom, NYU Langone’s attorney, Anton Mikofsky, agreed to talk about the lawsuit. The case is simple, he said. “The guy doesn’t understand how to read a bill.”

The high price of the operation made sense because NYU Langone has to pay its staff, Mikofsky said. It also must battle with insurance companies who are trying to keep costs down, he said. “Hospitals all over the country are struggling,” he said.

“Aetna reviewed it twice,” Mikofsky added. “Didn’t the operation go well? He should feel blessed.”

When the hearing started, the judge gave each side about a minute to make its case, then pushed them to settle.

Mikofsky told the judge Aetna found nothing wrong with the billing and had already taken care of most of the charges. The hospital’s position was clear. Frank owed $7,088.

Nugent argued that the charges had not been justified and Frank felt he owed about $1,500.

The lawyers eventually agreed that Frank would pay $4,000 to settle the case.

Frank said later that he felt compelled to settle because going to trial and losing carried too many risks. He could have been hit with legal fees and interest. It would have also hurt his credit at a time he needs to take out college loans for his kids.

After the hearing, Nugent said a technicality might have doomed their case. New York defendants routinely lose in court if they have not contested a bill in writing within 30 days, he said. Frank had contested the bill over the phone with NYU Langone, and in writing within 30 days with Aetna. But he did not dispute it in writing to the hospital within 30 days.

Frank paid the $4,000, but held on to his outrage. “The system,” he said, “is stacked against the consumer.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


What Facebook’s New Political Ad System Misses

[Editor's Note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

By Jeremy B. Merrill, Ariana Tobin, and Madeleine Varner, ProPublica

Facebook’s long-awaited change in how it handles political advertisements is only a first step toward addressing a problem intrinsic to a social network built on the viral sharing of user posts.

The company’s approach, a searchable database of political ads and their sponsors, depends on the company’s ability to sort through huge quantities of ads and identify which ones are political. Facebook is betting that a combination of voluntary disclosure and review by both people and automated systems will close a vulnerability that was famously exploited by Russian meddlers in the 2016 election.

The company is doubling down on tactics that so far have not prevented the proliferation of hate-filled posts or of ads that use Facebook’s capability to target ads to particular groups.

If the policy works as Facebook hopes, users will learn who has paid for the ads they see. But the company is not revealing details about a significant aspect of how political advertisers use its platform — the specific attributes the ad buyers used to target a particular person for an ad.

Facebook’s new system is the company’s most ambitious response thus far to the now-documented efforts by Russian agents to circulate items that would boost Donald Trump’s chances or suppress Democratic turnout. The new policies announced Thursday will, in several ways, make it harder to exploit the precise vulnerabilities in Facebook’s system that the Russians took advantage of in 2016:

First, political ads that you see on Facebook will now include the name of the organization or person who paid for them, reminiscent of disclaimers required on political mailers and TV ads. (The ads Facebook identified as placed by Russians carried no such tags.)

The Federal Election Commission requires political ads to carry such clear disclosures, but as we have reported, many candidates and groups on Facebook haven’t been following that rule.

Second, all political ads will be published in a searchable database.

Finally, the company will now require that anyone buying a political ad in its system confirm that they’re a U.S. resident. Facebook will even mail advertisers a postcard to make certain they’re in the U.S. Facebook says ads by advertisers whose identities aren’t verified under this process will be taken down starting in about a week, and those advertisers will be blocked from buying new ads until they have verified themselves.

While the new system can still be gamed, the specific tactics used by the Russian Internet Research Agency, such as an overseas purchase of ads promoting a Black Lives Matter rally under the name “Blacktivist,” will become harder — or at least harder to do without getting caught.

The company has also pledged to devote more employees to the issue, including 3,000-4,000 more content moderators. But Facebook says these will not be additional hires — they will be included in the 20,000 already promised to tackle various moderation issues in the coming months.

What Is Facebook Missing?

The most obvious flaw in Facebook’s new system is that it misses ads it should catch. Right now, it’s easy to find political ads that are missing from its archive. Take this one, from the Washington State Democratic Party. Just minutes after Facebook finished announcing its launch of the tool, a participant in ProPublica’s Facebook Political Ad Collector project saw this ad, criticizing Republican congresswoman Cathy McMorris Rodgers… but it wasn’t in the database.

And there are others.

The company acknowledged that the process is still a work in progress, reiterating its request that users pitch in by reporting the political ads that lack disclosures.

Even as Facebook’s system gets better at identifying political ads, the company is withholding a critical piece of information in the ads it’s publishing. While we’ll see some demographic information about who saw a given ad, Facebook is not indicating which audiences the advertiser intended to target — categories that often include racial or political characteristics and which have been controversial in the past.

This information is critical to researchers and journalists trying to make sense of political advertising on Facebook. Take, for instance, this ad promoting the environmental benefits of nuclear power, from a group called Nuclear Matters: the group chose specifically to show it to people interested in veganism — a fact we wouldn’t know from looking at the demographics of the users who saw the ad.

Facebook said it considers the information about who saw an ad — age, gender and location — sufficient. Rob Leathern, Facebook’s Director of Product Management, said that the limited demographics-only breakdown “offers more transparency than the intent, in terms of showing the targeting.”

The company is also promising to launch an API, a technical tool which will allow outsiders to write software that would look for patterns in the new ad database. The company says it will launch an API “later this summer” but hasn’t said what data it will contain or who will have access to it.

ProPublica’s own Facebook Ad Collector tool, which also collects political ads spotted on Facebook, has an API that can be accessed by anyone. It also includes the targeting information — which users can also see on each ad that they view.

Facebook said it would not release data about ads flagged by users as political and then rejected by the system. We’re curious about those, and we know firsthand that their software can be imperfect. We’ve attempted to buy ads specifically about our journalism that were flagged as problematic — because the ads “contained profanity,” or were misclassified as discriminatory ads for “employment, credit or housing opportunities” by mistake.

Facebook’s track record on initiatives aimed at improving the transparency of its massively profitable advertising system is spotty. The company has said it’s going to rely in part on artificial intelligence to review ads — the same sort of technology that the company said in the past it would use to block discriminatory ads for housing, employment and credit opportunities.

When we tested the system almost a year after a ProPublica story showed Facebook was allowing advertisers to target housing ads in a way that violated Fair Housing Act protections, we found that the company was still approving housing ads that excluded African-Americans and other “multicultural affinities” from seeing them. The company was pressured to implement several changes to its ad portal and a Fair Housing group filed a lawsuit against the company.

Facebook also plans to rely in part on users to find and report political ads that get through the system without the required disclosures.

But its track record of moderating user-flagged content — when it comes to both hate speech and advertising — has been uneven. Last December, ProPublica brought 49 cases of user-flagged offensive speech to Facebook, and the company acknowledged that its moderators had made the wrong call in 22 of them.

The company admits it's playing a “cat and mouse game” with people trying to pass political ads through its system unnoticed. Just last month, Ohio Democratic gubernatorial candidate Richard Cordray’s campaign ran Facebook ads criticizing his opponent — but from a page called “Ohio Primary Info.”

The need for ad transparency goes way beyond Russian bad actors. Our tool has already caught scams and malware disguised as politics, which users raised as a problem years before Facebook made any meaningful change.

If you flag an ad to Facebook, please report it to us as well by sending an email to [email protected]. We will be watching to see how well Facebook responds when users flag an ad.

How Will They Enforce the New Rules?

It’s one thing to create a set of rules, and another to enforce them consistently and on a large scale.

Facebook, which kept its content moderation and hate speech policies secret until they were revealed by ProPublica, won’t share the specific rules governing political ad content or details about the instructions moderators receive.

Leathern said the company is keeping the rules secret to frustrate the efforts of “bad actors who try to game our enforcement systems.”

Facebook has said it’s looking to flag both electoral ads and those that take a position on its list of twenty “national legislative issues of public importance”. These range from the concrete, like “abortion” and “taxes,” to broad topics like “health” and “values.”

Facebook acknowledges its system will make mistakes and says it will improve over time. Ads for specific candidates are relatively easy to detect. “We’ll likely miss ads when they aim to persuade,” said Katie Harbath, Facebook’s Global Politics and Government Outreach Director.

We plan to keep an eye out for ads that don’t make it into the archive. We’ll be looking for ads that our Political Ad Collector tool finds that aren’t in Facebook’s database.

Want to Help?

We need your help building out our independent database of political ads! If you’re still reading this article, we’re giving you permission to stop and install the Political Ad Collector extension. Here’s what you need to know about how it works.

You can also help us find other people who can install the tool. We are especially in need of people who aren’t ProPublica readers already. We need people from a diverse set of backgrounds, and with different perspectives and political beliefs. Please encourage your friends and relatives — especially the ones you avoid talking politics with — to install it.

Do You Work at a News Outlet and Want to Partner With Us on This?

Awesome. We’re already working with quite a few newsrooms all over the world, including the CBC in Canada, Bridge Magazine in Michigan, The Guardian in Australia and more.

In the U.S., we’re trying to get eyes and ears on the ground in as many local elections as possible. If your readers would be interested in joining our transparency effort, please reach out. We’re happy to send more information about this and our larger Electionland project.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


FBI Warns Sophisticated Malware Targets Wireless Routers In Homes And Small Businesses

The U.S. Federal Bureau of Investigation (FBI) issued a Public Service Announcement (PSA) warning consumers and small businesses that "foreign cyber actors" have targeted their wireless routers. The May 25th PSA explained the threat:

"The actors used VPNFilter malware to target small office and home office routers. The malware is able to perform multiple functions, including possible information collection, device exploitation, and blocking network traffic... The malware targets routers produced by several manufacturers and network-attached storage devices by at least one manufacturer... VPNFilter is able to render small office and home office routers inoperable. The malware can potentially also collect information passing through the router. Detection and analysis of the malware’s network activity is complicated by its use of encryption and misattributable networks."

The "VPN" acronym usually refers to a Virtual Private Network. Why use the VPNFilter name for a sophisticated piece of malware? Wired magazine explained:

"... the versatile code is designed to serve as a multipurpose spy tool, and also creates a network of hijacked routers that serve as unwitting VPNs, potentially hiding the attackers' origin as they carry out other malicious activities."

The FBI's PSA advised users to: a) reboot (i.e., turn off and then back on) their routers; b) disable remote management features, which attackers could exploit to gain access; and c) update their routers with the latest software and security patches. For routers purchased independently, security experts advise consumers to contact the router manufacturer's tech support or customer service site.

For routers leased or purchased from an internet service provider (ISP), consumers should contact their ISP's customer service or technical support department for software updates and security patches. Example: the Verizon FiOS forums list the brands and models affected by the VPNFilter malware, since several manufacturers produce routers for the Verizon FiOS service.

It is critical for consumers to heed this PSA. The New York Times reported:

"An analysis by Talos, the threat intelligence division for the tech giant Cisco, estimated that at least 500,000 routers in at least 54 countries had been infected by the [VPNfilter] malware... A global network of hundreds of thousands of routers is already under the control of the Sofacy Group, the Justice Department said last week. That group, which is also known as A.P.T. 28 and Fancy Bear and believed to be directed by Russia’s military intelligence agency... To disrupt the Sofacy network, the Justice Department sought and received permission to seize the web domain toknowall.com, which it said was a critical part of the malware’s “command-and-control infrastructure.” Now that the domain is under F.B.I. control, any attempts by the malware to reinfect a compromised router will be bounced to an F.B.I. server that can record the I.P. address of the affected device..."

Readers wanting technical details about VPNFilter should read the Talos Intelligence blog post.

When consumers contact their ISP about router software updates, it is wise to also inquire about security patches for the KRACK Wi-Fi vulnerability (a key reinstallation attack against WPA2), which bad actors have exploited recently. Example: the Verizon site also provides information about KRACK.

The latest threat provides several strong reminders:

  1. The conveniences of wireless internet connectivity that consumers demand and enjoy also benefit the bad guys,
  2. The bad guys are persistent and will continue to target internet-connected devices with weak or no protection, including devices consumers fail to protect,
  3. Wireless benefits come with a responsibility for consumers to shop wisely for internet-connected devices featuring easy, continual software updates and security patches. Otherwise, that shiny new device you recently purchased is nothing more than an expensive "brick," and
  4. Manufacturers have a responsibility to provide consumers with easy, continual software updates and security patches for the internet-connected devices they sell.

What are your opinions of the VPNFilter malware? What has been your experience with securing your wireless home router?


Federal Watchdog Launches Investigation of Age Bias at IBM

[Editor's note: today's guest post, by reporters at ProPublica, updates a prior post about employment practices. It is reprinted with permission. A data breach at IBM in 2007 led to the creation of this blog.]

By Peter Gosselin, ProPublica

The U.S. Equal Employment Opportunity Commission has launched a nationwide probe of age bias at IBM in the wake of a ProPublica investigation showing the company has flouted or outflanked laws intended to protect older workers from discrimination.

More than five years after IBM stopped providing legally required disclosures to older workers being laid off, the EEOC’s New York district office has begun consolidating individuals’ complaints from across the country and asking the company to explain practices recounted in the ProPublica story, according to ex-employees who’ve spoken with investigators and people familiar with the agency’s actions.

"Whenever you see the EEOC pulling cases and sending them to investigations, you know they’re taking things seriously," said the agency’s former general counsel, David Lopez. "I suspect IBM’s treatment of its later-career workers and older applicants is going to get a thorough vetting."

EEOC officials refused to comment on the agency’s investigation, but a dozen ex-IBM employees from California, Colorado, Texas, New Jersey and elsewhere allowed ProPublica to view the status screens for their cases on the agency’s website. The screens show the cases being transferred to EEOC’s New York district office shortly after the March 22 publication of ProPublica’s original story, and then being shifted to the office’s investigations division, in most instances, between April 5 and April 10.

The agency’s acting chair, Victoria Lipnic, a Republican, has made age discrimination a priority. Last year the EEOC’s New York office won a settlement from the Kentucky-based national restaurant chain Texas Roadhouse in the largest age-discrimination case, measured by the number of workers covered, to go to trial in more than three decades.

IBM did not respond to questions about the EEOC investigation. In response to detailed questions for our earlier story, the company issued a brief statement, saying in part, "We are proud of our company and its employees’ ability to reinvent themselves era after era while always complying with the law."

Just prior to publication of the story, IBM issued a video recounting its long history of support for equal employment and diversity. In it, CEO Virginia "Ginni" Rometty said, "Every generation of IBMers has asked ‘How can we in our own time expand our understanding of inclusion?’ "

ProPublica reported in March that the tech giant, which has an annual revenue of about $80 billion, has ousted an estimated 20,000 U.S. employees ages 40 and over since 2014, about 60 percent of its American job cuts during those years. In some instances, it earmarked money saved by the departures to hire young replacements in order to, in the words of one internal company document, "correct seniority mix."

ProPublica reported that IBM regularly denied older workers information the law says they’re entitled to in order to decide whether they’ve been victims of age bias, and used point systems and other methods to pick older workers for removal, even when the company rated them high performers.

In some cases, IBM treated job cuts as voluntary retirements, even over employees’ objections. This reduced the number of departures counted as layoffs, which in sufficient numbers can trigger public reporting requirements, and it prevented employees from seeking jobless benefits, which voluntary retirees can’t claim.

In addition to the complaints covered in the EEOC probe, a number of current and former employees say they have recently filed new complaints with the agency about age bias and are contemplating legal action against the company.

Edvin Rusis of Laguna Niguel, a suburb south of Los Angeles, said IBM has told him he’ll be laid off June 27 from his job of 15 years as a technical specialist. Rusis refused to sign a severance agreement and hired a class-action lawyer. They have filed an EEOC complaint claiming Rusis was one of "thousands" discriminated against by IBM.

If the agency issues a right-to-sue letter indicating Rusis has exhausted administrative remedies for his claim, he can take IBM to court. "I don’t see a clear reason for why they’re laying me off," the 59-year-old Rusis said in an interview. "I can only assume it’s age, and I don’t want to go silently."

Coretta Roddey of suburban Atlanta, 49, an African-American Army veteran and former IBM employee, said she’s applied more than 50 times to return to the company, but has been turned down or received no response. She’s hired a lawyer and filed an age discrimination complaint with EEOC.

"It’s frustrating," she said of the multiple rejections. "It makes you feel you don’t have the qualifications (for the job) when you really do."


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators, allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, an infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "Link Tracking," which Facebook and many other social networking sites use to track users both on and off their social platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."

Lovely. And Facebook fails to deliver on privacy in other ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."
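To make the mechanics concrete, here is a minimal sketch of what "unwrapping" such a link shim looks like, based on the URL format quoted above from the EFF. The function name is our own, and real shims may vary; this is an illustration, not Privacy Badger's actual code.

    # Minimal sketch: recover the real destination from an l.facebook.com link shim.
    # The shim stores the wrapped URL in its "u" query parameter, as described in the EFF excerpt.
    from urllib.parse import urlparse, parse_qs

    def unwrap_link_shim(shim_url: str) -> str:
        """Return the URL held in the shim's 'u' parameter, or the input URL if none is present."""
        params = parse_qs(urlparse(shim_url).query)
        return params.get("u", [shim_url])[0]  # parse_qs already percent-decodes the value

    shim = ("https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb"
            "&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU")
    print(unwrap_link_shim(shim))  # -> https://eff.org/pb

The EFF's update goes further, blocking the hover-time swap-back and stripping the Referer header, but the sketch shows where the real destination hides inside the wrapped URL.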

Thanks to the EFF for focusing on online privacy and delivering effective solutions.


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns about the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a drone surveillance program that uses AI to identify people. Reportedly, these are the first known mass resignations.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers who study digital technologies signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor want -- that their personal e-mail, photos, calendars, videos, social media activity, map usage, and more would be used for automated military applications. What are your opinions?


U.S. Senate Vote Approves Resolution To Reinstate Net Neutrality Rules. FCC Chairman Pai Repeats Claims While Ignoring Consumers

Yesterday, the United States Senate approved a bipartisan resolution to preserve net neutrality rules, the set of internet protections established in 2015 which require wireless carriers and internet service providers (ISPs) to give customers equal access to all websites. That meant no throttling, blocking, or slowing of selected sites, and no prioritizing of internet traffic into "fast" or "slow" lanes.

Earlier this month, the Federal Communications Commission (FCC) said that current net neutrality rules would expire on June 11, 2018. Politicians promised that tax cuts would create new jobs, and that repeal of net neutrality rules would encourage investments by ISPs. FCC Chairman Ajit Pai, appointed by President Trump, released a statement on May 10, 2018:

"Now, on June 11, these unnecessary and harmful Internet regulations will be repealed and the bipartisan, light-touch approach that served the online world well for nearly 20 years will be restored. The Federal Trade Commission will once again be empowered to target any unfair or deceptive business practices of Internet service providers and to protect American’s broadband privacy. Armed with our strengthened transparency rule, we look forward to working closely with the FTC to safeguard a free and open Internet. On June 11, we will have a framework in place that encourages innovation and investment in our nation’s networks so that all Americans, no matter where they live, can have access to better, cheaper, and faster Internet access and the jobs, opportunities, and platform for free expression that it provides. And we will embrace a modern, forward-looking approach that will help the United States lead the world in 5G..."

Chairman Pai's claims sound hollow, since reality says otherwise. Telecommunications companies have fired workers and reduced staff despite getting tax cuts, broadband privacy repeal, and net neutrality repeal. In December, more than 1,000 startups and investors signed an open letter to Pai opposing the elimination of net neutrality. Entrepreneurs and executives are concerned that the loss of net neutrality will harm or hinder start-up businesses.

CNet provided a good overview of events surrounding the Senate's resolution:

"Democrats are using the Congressional Review Act to try to halt the FCC's December repeal of net neutrality. The law gives Congress 60 legislative days to undo regulations imposed by a federal agency. What's needed to roll back the FCC action are simple majorities in both the House and Senate, as well as the president's signature. Senator Ed Markey (Democrat, Massachusetts), who's leading the fight in the Senate to preserve the rules, last week filed a so-called discharge petition, a key step in this legislative effort... Meanwhile, Republican lawmakers and broadband lobbyists argue the existing rules hurt investment and will stifle innovation. They say efforts by Democrats to stop the FCC's repeal of the rules do nothing to protect consumers. All 49 Democrats in the Senate support the effort to undo the FCC's vote. One Republican, Senator Susan Collins of Maine, also supports the measure. One more Republican is needed to cross party lines to pass it."

"No touch" is probably a more accurate description of the internet under Chairman Pai's leadership, given many historical problems and abuses of consumers by some ISPs. The loss of net neutrality protections will likely result in huge price increases for internet access for consumers, which will also hurt public libraries, the poor, and disabled users. The loss of net neutrality will allow ISPs the freedom to carve up, throttle, block, and slow down the internet traffic they choose, while consumers will lose the freedom to use as they choose the broadband service they've paid for. And, don't forget the startup concerns above.

After the Senate's vote, FCC Chairman Pai released this statement:

“The Internet was free and open before 2015, when the prior FCC buckled to political pressure from the White House and imposed utility-style regulation on the Internet. And it will continue to be free and open once the Restoring Internet Freedom Order takes effect on June 11... our light-touch approach will deliver better, faster, and cheaper Internet access and more broadband competition to the American people—something that millions of consumers desperately want and something that should be a top priority. The prior Administration’s regulatory overreach took us in the opposite direction, reducing investment in broadband networks and particularly harming small Internet service providers in rural and lower-income areas..."

The internet was free and open before 2015? Mr. Pai is guilty of revisionist history. The lack of ISP competition in key markets meant that consumers in the United States paid more for broadband and got slower speeds than in other countries. There were numerous complaints by consumers about usage-based Internet pricing. There were privacy abuses and settlement agreements by ISPs involving technologies such as deep-packet inspection and 'Supercookies' to track customers online, despite consumers' wishes not to be tracked. Many consumers didn't get the broadband speeds their ISPs promised. Some consumers sued their ISPs, and the New York State Attorney General had residents check their broadband speed with this tool.

Tim Berners-Lee, the inventor of the World Wide Web, cited three reasons why the web is in trouble. His number one reason: consumers have lost control of their personal information.

There's more. Some consumers found that their ISP hijacked their online search results without notice or consent. An ISP in Kansas admitted in 2008 to secret snooping after pressure from Congress. Given all this, something had to be done. The FCC stepped up to the plate and acted when it was legally able to, reclassifying broadband after open hearings. Proposed rules were circulated prior to adoption. It was done in the open.

Yet Chairman Pai would have us now believe the internet was free and open before 2015, and that regulation was unnecessary. I say BS.

FCC Commissioner Jessica Rosenworcel released a statement yesterday:

"Today the United States Senate took a big step to fix the serious mess the FCC made when it rolled back net neutrality late last year. The FCC's net neutrality repeal gave broadband providers extraordinary new powers to block websites, throttle services and play favorites when it comes to online content. This put the FCC on the wrong side of history, the wrong side of the law, and the wrong side of the American people. Today’s vote is a sign that the fight for internet freedom is far from over. I’ll keep raising a ruckus to support net neutrality and I hope others will too."

A mess, indeed, created by Chairman Pai. A December 2017 study of 1,077 voters found that most want net neutrality protections:

Do you favor or oppose the proposal to give ISPs the freedom to: a) provide websites the option to give their visitors the ability to download material at a higher speed, for a fee, while providing a slower speed for other websites; b) block access to certain websites; and c) charge their customers an extra fee to gain access to certain websites?
Group          Favor     Opposed    Refused/Don't Know
National       15.5%     82.9%      1.6%
Republicans    21.0%     75.4%      3.6%
Democrats      11.0%     88.5%      0.5%
Independents   14.0%     85.9%      0.1%

Why did the FCC, President Trump, and most GOP politicians pursue the elimination of net neutrality protections despite consumers' wishes otherwise? For the same reasons they repealed broadband privacy protections despite most consumers wanting broadband privacy. (Remember, President Trump signed the privacy-rollback legislation in April 2017.) They are doing the bidding of the corporate ISPs at the expense of consumers. Profits before people. Whenever Mr. Pai mentions a "free and open internet," he's referring to corporate ISPs, not consumers. What do you think?


Equifax Operates A Secondary Credit Reporting Agency, And Its Website Appears Haphazard

Equifax logo More news about Equifax, the credit reporting agency whose multiple data security failures resulted in a massive data breach affecting nearly half of the United States population. It appears that Equifax also operates a secondary credit bureau: the National Consumer Telecommunications and Utilities Exchange (NCTUE). The Krebs On Security blog explained Equifax's role:

"The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE... there are four "exchanges" that feed into the NCTUE’s system: the NCTUE itself, something called "Centralized Credit Check Systems," the New York Data Exchange (NYDE), and the California Utility Exchange. According to a partner solutions page at Verizon, the NYDE is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts. The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax)... The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC)."

This surfaced after consumers with security freezes on their credit reports at the three major credit reporting agencies (Experian, Equifax, and TransUnion) found fraudulent mobile phone accounts opened in their names. This shouldn't have been possible, since security freezes prevent credit reporting agencies from selling consumers' credit reports to telecommunications companies, which typically perform credit checks before opening new accounts. So, the credit information must have come from somewhere else. It turns out, the source was the NCTUE.

NCTUE logo Credit reporting agencies make money by selling consumers' credit reports to potential lenders. And credit reports from the NCTUE are easy for anyone to order:

"... the NCTUE makes it fairly easy to obtain any records they may have on Americans. Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address."

The Krebs on Security blog also explained the expired SSL certificate used by Equifax, which prevented web pages from being served securely. That was simply inexcusable, poor data security.

A quick check of the NCTUE page on the Better Business Bureau site found 2 negative reviews and 70 complaints -- mostly about negative credit inquiries and unresolved issues. A quick check of the NCTUE Terms Of Use page found very thin usage and privacy policies lacking details about data sharing, cookies, tracking, and more. The absence of any data-sharing language could mean NCTUE will share or sell data to anyone: entities, companies, and government agencies. It also means there is no way to verify whether the NCTUE complies with its own policies. Not good.

The policy contains language indicating that NCTUE is not liable for anything:

"... THE NCTUE IS NOT RESPONSIBLE FOR, AND EXPRESSLY DISCLAIM, ALL LIABILITY FOR, DAMAGES OF ANY KIND ARISING OUT OF USE, REFERENCE TO, OR RELIANCE ON ANY INFORMATION CONTAINED WITHIN THE SITE. All content located at or available from the NCTUE website is provided “as is,” and NCTUE makes no representations or warranties, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title or non-infringement of proprietary rights. Without limiting the foregoing, NCTUE makes no representation or warranty that content located on the NCTUE website is free from error or suitable for any purpose; nor that the use of such content will not infringe any third party copyrights, trademarks or other intellectual property rights.

Links to Third Party Websites: Although the NCTUE website may include links providing direct access to other Internet resources, including websites, NCTUE is not responsible for the accuracy or content of information contained in these sites.."

Huh?! As is? The data NCTUE collects is being used for credit decisions. Reliability and accuracy matter. And, there are more concerns.

While at the NCTUE site, I briefly browsed the credit freeze information, which is hosted on an outsourced site, the Exchange Service Center (ESC). What's up with that? Why a separate site, and not a cohesive single site with a unified customer experience? This design gives the impression that the security freeze process was an afterthought.

Plus, the NCTUE and ESC sites present different policies (e.g., terms of use, privacy). Really? Why the complexity? Which policies rule? You'd think that the policies on both sites would be consistent and would mention each other, since consumers must use both sites to complete security freezes. That design seems haphazard. Not good.

There's more. Rather than use standard web pages, the ESC site presents its policies as static Adobe PDF documents, making it difficult for users to follow links for more information. (Contrast those thin policies with the more comprehensive Privacy and Terms of Use policies by TransUnion.) Plus, one policy was old -- dated 2011. It seems the site hasn't been updated in seven years. What fresh hell is this? More haphazard design. Why the confusing user experience? Not good.

Image of confusing drop-down menu for exchanges within the security freeze process. There's more. When placing a security freeze, the ESC site includes a drop-down menu asking consumers to pick an exchange (e.g., NCTUE, Centralized Credit Check System, California Utility Exchange, NYDE). The confusing drop-down menu appears in the image on the right. Which menu option is the global security freeze? Is there a global option? The form page doesn't say, and it should. Why would a consumer select only one of the exchanges? Perhaps this is another slick attempt to limit the effectiveness of security freezes placed by consumers. Not good.

What can consumers make of this? First, the NCTUE site seems to be a slick way for Equifax to skirt the security freezes which consumers have placed upon their credit reports. Sounds like a definite end-run to me. Surprised? I'll bet. Angry? I'll bet, too. We consumers paid good money for security freezes on our credit reports.

Second, the combo NCTUE/ESC site seems like some legal, outsourcing ju-jitsu to avoid all liability while still enjoying the revenues from credit-report sales. The site left me with the impression that its design, which hasn't kept pace with internet best practices over the years, was driven by a committee of attorneys focused upon serving their corporate clients' data collection and sharing needs while doing the absolute minimum required legally -- rather than a site focused upon the security needs of consumers. I can best describe the site using an old film-review phrase: a million monkeys with a million crayons would be hard pressed in a million years to create something this bad.

Third, credit reporting agencies get their data from a variety of sources. So, their business model is based upon data sharing. NCTUE seems designed to effectively do just that, regardless of consumers' security needs and wishes.

Fourth, this situation offers several reminders: a) just about anyone can set up and operate a credit reporting agency, with no special skills or expertise required; b) there are both national and regional credit reporting agencies; c) credit reports often contain errors; and d) credit reporting agencies historically have outsourced work, sometimes internationally -- for better or worse data security.

Fifth, now you know what criminals and fraudsters already know... how to skirt the security freezes on credit reports and gain access to consumers' sensitive information. The combo NCTUE/ESC site is definitely a high-value target for criminals.

My first impression of the NCTUE site: a haphazard design that makes it difficult for consumers to use and to trust. What do you think?


San Diego Police Widely Share Data From License Plate Database

Images of ALPR device mounted on a patrol car. Many police departments use automated license plate reader (ALPR or LPR) technology to monitor the movements of drivers and their vehicles. The surveillance has several implications beyond the extensive data collection.

The Voice of San Diego reported that the San Diego Police Department shares its database of ALPR data with many other agencies:

"SDPD shares that database with the San Diego sector of Border Patrol – and with another 600 agencies across the country, including other agencies within the Department of Homeland Security. The nationwide database is enabled by Vigilant Solutions, a private company that provides data management and software services to agencies across the country for ALPR systems... A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data..."

San Diego's ALPR database is fed by a network of cameras which record images plus the date, time and GPS location of the cars that pass by them. So, the associated metadata for each database record probably includes the license plate number, license plate state, vehicle owner, GPS location, travel direction, date and time, road/street/highway name or number, and the LPR device ID number.
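
To make that concrete, here is a minimal, hypothetical Python sketch of what a single ALPR capture record might look like. The field names and example values are my own illustrative assumptions; Vigilant Solutions' actual schema has not been published.

from dataclasses import dataclass
from datetime import datetime

# Hypothetical ALPR capture record. Field names are illustrative assumptions,
# not Vigilant Solutions' actual schema.
@dataclass
class AlprCapture:
    plate_number: str      # e.g., "7ABC123" as read by the camera's OCR
    plate_state: str       # e.g., "CA"
    captured_at: datetime  # date and time of the scan
    latitude: float        # GPS location of the capture
    longitude: float
    heading: str           # travel direction, e.g., "NB" for northbound
    road_name: str         # street/highway name or number
    device_id: str         # ID of the LPR camera that made the capture
    image_ref: str         # pointer to the stored plate/vehicle image

# A fictional example record.
capture = AlprCapture(
    plate_number="7ABC123",
    plate_state="CA",
    captured_at=datetime(2018, 5, 1, 14, 32, 10),
    latitude=32.7157,
    longitude=-117.1611,
    heading="NB",
    road_name="I-5",
    device_id="SDPD-UNIT-042",
    image_ref="captures/2018/05/01/7abc123.jpg",
)
print(capture)

Even one record like this reveals where a specific vehicle was at a specific moment; collected continuously and pooled across agencies, such records add up to a detailed travel history.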

Information about San Diego's ALPR activities became public after a data request from the Electronic Frontier Foundation (EFF), a digital privacy organization. ALPRs are a popular tool, and were used in about 38 states in 2014. Typically, the surveillance collects data about both criminals and innocent drivers.

Images of ALPR devices mounted on unmarked patrol cars. There are several valid applications: find stolen vehicles, find stolen license plates, find wanted vehicles (e.g., abductions), execute search warrants, and find parolees and other wanted persons. Some ALPR devices are stationary (e.g., mounted on street lights), while others are mounted on (marked and unmarked) patrol cars. Both deployments scan moving vehicles, while the latter also facilitates the scanning of parked vehicles.

Earlier this year, the EFF issued hundreds of similar requests across the country to learn how law enforcement currently uses ALPR technology. The ALPR training manual for the Elk Grove, Illinois PD listed the data archival policies for several states: New Jersey - 5 years, Vermont - 18 months, Utah - 9 months,  Minnesota - 48 hours, Arkansas - 150 days, New Hampshire - not allowed, and California - no set time. The document also stated that more than "50 million captures" are added each month to the Vigilant database. And, the Elk Grove PD seems to broadly share its ALPR data with other police departments and agencies.

The SDPD website includes a "License Plate Recognition: Procedures" document (Adobe PDF), dated May 2015, which describes its ALPR usage and policies:

"The legitimate law enforcement purposes of LPR systems include the following: 1) Locating stolen, wanted, or subject of investigation vehicles; 2) Locating witnesses and victims of a violent crime; 3) Locating missing or abducted children and at risk individuals.

LPR Strategies: 1) LPR equipped vehicles should be deployed as frequently as possible to maximize the utilization of the system; 2) Regular operation of LPR should be considered as a force multiplying extension of an officer’s regular patrol efforts to observe and detect vehicles of interest and specific wanted vehicles; 3) LPR may be legitimately used to collect data that is within public view, but should not be used to gather intelligence of First Amendment activities; 4) Reasonable suspicion or probable cause is not required for the operation of LPR equipment; 5) Use of LPR equipped cars to conduct license plate canvasses and grid searches is encouraged, particularly for major crimes or incidents as well as areas that are experiencing any type of crime series... LPR data will be retained for a period of one year from the time the LPR record was captured by the LPR device..."

The document does not describe its data security methods to protect this sensitive information from breaches, hacks, and unauthorized access. Perhaps most importantly, the 2015 SDPD document describes the data sharing policy:

"Law enforcement officers shall not share LPR data with commercial or private entities or individuals. However, law enforcement officers may disseminate LPR data to government entities with an authorized law enforcement or public safety purpose for access to such data."

However, the Voice of San Diego reported:

"A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data... SDPD’s sharing doesn’t stop at Border Patrol. The list of agencies with near immediate access to the travel habits of San Diegans includes law enforcement partners you might expect, like the Carlsbad Police Department – with which SDPD has for years shared license plate reader data, through a countywide arrangement overseen by SANDAG – but also obscure agencies like the police department in Meigs, Georgia, population 1,038, and a private group that is not itself a police department, the Missouri Police Chiefs Association..."

So, the accuracy of the 2015 document is questionable, if it isn't already obsolete. Moreover, what's really critical are the data retention and sharing policies of Vigilant and the other agencies.


Medicare Scams Still Operate. How To Avoid Getting Your Identity Information Stolen

To minimize fraud, the new Medicare cards display a unique 11-character identifier instead of patients' Social Security numbers. However, scammers have devised new tactics to trick patients into revealing their sensitive Medicare information. The Oregon Department of Justice warned:

"If someone calls and asks you for your personal information, money to activate the new card, or threatens to cancel your Medicare benefits if you don’t share your personal information, just hang up! It is a scam," said Attorney General Ellen Rosenblum.

Remember: Medicare will not call you or ask for your Social Security number or bank information. That's good advice for patients nationwide. Experts estimate that Medicare loses about $60 billion yearly to con artists running a variety of scams.

Oregon residents suspecting healthcare fraud or wanting to report scammers should contact Oregon's Department of Justice’s Consumer Protection (hotline: 1-877-877-9392 or www.oregonconsumer.gov). Consumers in other states should contact their state's attorney general, and/or report suspected fraud directly to Medicare.

The video below from 2017 includes advice about how patients should protect their Medicare cards.


Report: Software Failure In Fatal Accident With Self-Driving Uber Car

TechCrunch reported:

"The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death."

The TechCrunch story provides details about which software subsystem the report said failed.
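
Neither Uber nor The Information has published the code in question, so any example can only be illustrative. To show the general class of failure being described (a perception function that decides which detected objects to ignore as false positives), here is a deliberately simplified, hypothetical Python sketch; every name, threshold, and value below is invented.

# Deliberately simplified, hypothetical sketch of an object-filtering step in a
# perception pipeline. This is NOT Uber's code; names and thresholds are invented.

FALSE_POSITIVE_THRESHOLD = 0.7  # detections below this confidence are treated as noise

def objects_to_attend_to(detections):
    """Return the subset of detected objects the planner should react to.

    Each detection is a dict like:
        {"kind": "pedestrian", "confidence": 0.64, "distance_m": 22.0}
    """
    return [d for d in detections if d["confidence"] >= FALSE_POSITIVE_THRESHOLD]

detections = [
    {"kind": "pedestrian", "confidence": 0.64, "distance_m": 22.0},
    {"kind": "sign", "confidence": 0.95, "distance_m": 40.0},
]

# The low-confidence pedestrian is silently dropped, so nothing downstream
# ever triggers braking or an alert for it.
print(objects_to_attend_to(detections))

Tune a threshold like that too aggressively in order to avoid braking for harmless objects, and a real obstacle classified with low confidence gets discarded, which appears to be the kind of failure the report describes.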

Not good.

So, autonomous (self-driving) cars are only as good as the software they run and how well that software is maintained. Anyone who has used computers during the last couple of decades has probably experienced software glitches, bugs, and failures. It happens.

This latest incident suggests self-driving cars aren't ready yet. What do you think?


Connecticut And Federal Regulators Announce $1.3 Million Settlement With Substance Abuse Healthcare Provider

Connecticut and federal regulators recently announced a settlement agreement to resolve allegations that New Era Rehabilitation Center (New Era), operating in New Haven and Bridgeport, submitted false claims to both state and federal healthcare programs. The office of George Jepsen, Connecticut Attorney General, announced that New Era:

"... and its co-founders and owners – Dr. Ebenezer Kolade and Dr. Christina Kolade – are enrolled as providers in the Connecticut Medical Assistance Program (CMAP), which includes the state's Medicaid program. As part of their practice, they provide methadone treatment services for patients dealing with opioid addiction. Most of their patients are CMAP beneficiaries.

During the relevant time period, CMAP reimbursed methadone clinics by paying a weekly bundled rate that included all of the services associated with methadone maintenance, including the patient's doses of methadone; the initial intake evaluation; a physical examination; periodic drug testing; and individual, group and family drug counseling... The state and federal governments alleged that, from October 2009 to November 2013, New Era and the Kolades engaged in a pattern and practice of billing CMAP weekly for the methadone bundled service rate and then also submitting a separate claim to the CMAP for virtually every drug counseling session provided to clients by using a billing code for outpatient psychotherapy. The state and federal governments further alleged that those psychotherapy sessions were actually the drug counseling sessions already included and reimbursed through the bundled rate."

These actions were part of the State of Connecticut's Inter-agency Fraud Task Force, created in 2013 to investigate and prosecute healthcare fraud. The joint investigation included the Connecticut AG's office, the office of Connecticut U.S. Attorney John H. Durham, and the U.S. Department of Health and Human Services, Office of Inspector General – Office of Investigations.

Connecticut Fight Fraud logo Terms of the settlement agreement require New Era to pay $1,378,533 in settlement funds. Of that amount, $881,945 will be returned to CMAP.

Connecticut residents suspecting healthcare fraud or abuse should contact the Attorney General’s Antitrust and Government Program Fraud Department (phone: 860-808-5040, or email: [email protected]) or the Department of Social Services fraud reporting line (hotline: 1-800-842-2155, online at www.ct.gov/dss/reportingfraud, or email: [email protected]). Residents in other states can contact their state attorney general's office.


Oakland Law Mandates 'Technology Impact Reports' By Local Government Agencies Before Purchasing Surveillance Equipment

Popular tools used by law enforcement to track the movements of persons include stingrays (fake cellular phone towers) and automated license plate readers (ALPRs). Historically, these technologies have often been deployed without notice to track both the bad guys (e.g., criminals and suspects) and innocent citizens.

To better balance the privacy needs of citizens versus the surveillance needs of law enforcement, some areas are implementing new laws. The East Bay Times reported about a new law in Oakland:

"... introduced at Tuesday’s city council meeting, creates a public approval process for surveillance technologies used by the city. The rules also lay a groundwork for the City Council to decide whether the benefits of using the technology outweigh the cost to people’s privacy. Berkeley and Davis have passed similar ordinances this year.

However, Oakland’s ordinance is unlike any other in the nation in that it requires any city department that wants to purchase or use the surveillance technology to submit a "technology impact report" to the city’s Privacy Advisory Commission, creating a “standardized public format” for technologies to be evaluated and approved... city departments must also submit a “surveillance use policy” to the Privacy Advisory Commission for consideration. The approved policy must be adopted by the City Council before the equipment is to be used..."

Reportedly, the city council will review the ordinance a second time before final passage.

The Northern California chapter of the American Civil Liberties Union (ACLU) discussed the problem, the need for transparency, and legislative actions:

"Public safety in the digital era must include transparency and accountability... the ACLU of California and a diverse coalition of civil rights and civil liberties groups support SB 1186, a bill that helps restores power at the local level and makes sure local voices are heard... the use of surveillance technology harms all Californians and disparately harms people of color, immigrants, and political activists... The Oakland Police Department concentrated their use of license plate readers in low income and minority neighborhoods... Across the state, residents are fighting to take back ownership of their neighborhoods... Earlier this year, Alameda, Culver City, and San Pablo rejected license plate reader proposals after hearing about the Immigration & Customs Enforcement (ICE) data [sharing] deal. Communities are enacting ordinances that require transparency, oversight, and accountability for all surveillance technologies. In 2016, Santa Clara County, California passed a groundbreaking ordinance that has been used to scrutinize multiple surveillance technologies in the past year... SB 1186 helps enhance public safety by safeguarding local power and ensuring transparency, accountability... SB 1186 covers the broad array of surveillance technologies used by police, including drones, social media surveillance software, and automated license plate readers. The bill also anticipates – and covers – AI-powered predictive policing systems on the rise today... Without oversight, the sensitive information collected by local governments about our private lives feeds databases that are ripe for abuse by the federal government. This is not a hypothetical threat – earlier this year, ICE announced it had obtained access to a nationwide database of location information collected using license plate readers – potentially sweeping in the 100+ California communities that use this technology. Many residents may not be aware their localities also share their information with fusion centers, federal-state intelligence warehouses that collect and disseminate surveillance data from all levels of government.

Statewide legislation can build on the nationwide Community Control Over Police Surveillance (CCOPS) movement, a reform effort spearheaded by 17 organizations, including the ACLU, that puts local residents and elected officials in charge of decisions about surveillance technology. If passed in its current form, SB 1186 would help protect Californians from intrusive, discriminatory, and unaccountable deployment of law enforcement surveillance technology."

Is there similar legislation in your state?


Twitter Advised Its Users To Change Their Passwords After Security Blunder

Yesterday, Twitter.com advised all of its users to change their passwords after a huge security blunder stored users' passwords internally in an unprotected format. The social networking service released a statement on May 3rd:

"We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone. Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password."

Security experts advise consumers not to use the same password at several sites or services. Reusing a password means that a single breach can give criminals access to every site or service where that password was used.

The statement by Twitter.com also explained that it masks users' passwords:

"... through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard.

Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."

The good news: Twitter found the bug by itself. The not-so-good news: the statement was short on details. It did not disclose what was fixed to prevent this blunder from happening again, nor did it say how many users were affected. Twitter has about 330 million users, so it seems prudent to assume that all users were affected.
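
For readers curious what "hashing using a function known as bcrypt" actually looks like, here is a minimal Python sketch using the open-source bcrypt library. This is illustrative only, and an assumption on my part; Twitter has not said which tools it uses internally. The key point is that only the hash should ever be written to storage, and a submitted password can be checked against that hash without the plaintext ever being recoverable from it.

import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# Hash the password with a per-password random salt. Only this hash should ever
# be stored. Twitter's bug was that the plaintext was written to an internal
# log before this step ran.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())
print(hashed)  # e.g. b'$2b$12$...' -- looks like a random string of letters and numbers

# At login time, the submitted password is checked against the stored hash.
print(bcrypt.checkpw(password, hashed))        # True
print(bcrypt.checkpw(b"wrong guess", hashed))  # False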


How to Wrestle Your Data From Data Brokers, Silicon Valley — and Cambridge Analytica

[Editor's note: today's guest post, by reporters at ProPublica, discusses data brokers you may not know, the data collected and archived about consumers, and options for consumers to (re)gain as much privacy as possible. It is reprinted with permission.]

By Jeremy B. Merrill, ProPublica

Cambridge Analytica thinks that I’m a "Very Unlikely Republican." Another political data firm, ALC Digital, has concluded I’m a "Socially Conservative," Republican, "Boomer Voter." In fact, I’m a 27-year-old millennial with no set party allegiance.

For all the fanfare, the burgeoning field of mining our personal data remains an inexact art.

One thing is certain: My personal data, and likely yours, is in more hands than ever. Tech firms, data brokers and political consultants build profiles of what they know — or think they can reasonably guess — about your purchasing habits, personality, hobbies and even what political issues you care about.

You can find out what those companies know about you but be prepared to be stubborn. Very stubborn. To demonstrate how this works, we’ve chosen a couple of representative companies from three major categories: data brokers, big tech firms and political data consultants.

Few of them make it easy. Some will show you on their websites, others will make you ask for your digital profile via the U.S. mail. And then there’s Cambridge Analytica, the controversial Trump campaign vendor that has come under intense fire in light of a report in the British newspaper The Observer and in The New York Times that the company used improperly obtained data from Facebook to help build voter profiles.

To find out what the chaps at the British data firm have on you, you’re going to need both stamps and a "cheque."

Once you see your data, you’ll have a much better understanding of how this shadowy corner of the new economy works. You’ll see what seemingly personal information they know about you … and you’ll probably have some hypotheses about where this data is coming from. You’ll also probably see some predictions about who you are that are hilariously wrong.

And if you do obtain your data from any of these companies, please let us know your thoughts at [email protected]. We won’t share or publish what you say (unless you tell us that it’s OK).

Cambridge Analytica and Other Political Consultants

Making statistically informed guesses about Americans’ political beliefs and pet issues is a common business these days, with dozens of firms selling data to candidates and issue groups about the purported leanings of individual American voters.

Few of these firms have to give you your data. But Cambridge Analytica is required to do so by an obscure European rule.

Cambridge Analytica:

Around the time of the 2016 election, Paul-Olivier Dehaye, a Belgian mathematician and founder of a website that helps people exercise their data protection rights called PersonalData.IO, approached me with an idea for a story. He flagged some of Cambridge Analytica’s claims about the power of its "psychographic" targeting capabilities and suggested that I demand my data from them.

So I sent off a request, following Dehaye’s coaching, and citing the UK Data Protection Act 1998, the British implementation of a little-known European Union data-protection law that grants individuals (even Americans) the right to see the data European companies compile about individuals.

It worked. I got back a spreadsheet of data about me. But it took months, cost ten pounds — and I had to give them a photo ID and two utility bills. Presumably they didn’t want my personal data falling into the wrong hands.

How You Can Request Your Data From Cambridge Analytica:

  1. Visit Cambridge Analytica’s website here and fill out this web form.
  2. After you submit the form, the page will immediately request that you email to [email protected] a photo ID and two copies of your utility bills or bank statements, to prove your identity. This page will also include the company’s bank account details.
  3. Find a way to send them 10 GBP. You can try wiring this from your bank, though it may cost you an additional $25 or so — or ask a friend in the UK to go to their bank and get a cashier’s check. Your American bank probably won’t let you write a GBP-denominated check. Two services I tried, Xoom and TransferWise, weren’t able to do it.
  4. Eventually, Cambridge Analytica will email you a small Excel spreadsheet of information and a letter. You might have to wait a few weeks. Celeste LeCompte, ProPublica’s vice president of business development, requested her data on March 27 and still hasn’t received it.

Because the company is based in the United Kingdom, it had no choice but to fulfill my request. In recent weeks, the firm has come under intense fire after The New York Times and the British paper The Observer disclosed that it had used improperly obtained data from Facebook to build profiles of American voters. Facebook told me that data about me was likely transmitted to Cambridge Analytica because a person with whom I am "friends" on the social network had taken the now-infamous "This Is Your Digital Life" quiz. For what it’s worth, my data shows no sign of anything derived from Facebook.

What You Might Get Back From Cambridge Analytica:

Cambridge Analytica had generated 13 data points about my views: 10 political issues, ranked by importance; two guesses at my partisan leanings (one blank); and a guess at whether I would turn out in the 2016 general election.

They told me that the lower the rank, the higher the predicted importance of the issue to me.

Alongside that data labeled "models" were two other types of data that are run-of-the-mill and widely used by political consultants. One sheet of "core data" — that is, personal info, sliced and diced a few different ways, perhaps to be used more easily as parameters for a statistical model. It included my address, my electoral district, the census tract I live in and my date of birth.

The spreadsheet included a few rows of "election returns" — previous elections in New York State in which I had voted. (Intriguingly, Cambridge Analytica missed that I had voted in 2015’s snoozefest of a vote-for-five-of-these-five judicial election. It also didn’t know about elections in which I had voted in North Carolina, where I lived before I lived in New York.)

ALC Digital

ALC Digital is another data broker, which says its "audiences are built from multi-sourced, verified information about an individual." Their data is distributed via Oracle Data Cloud, a service that lets advertisers target specific audiences of people — like, perhaps, people who are Boomer Voters and also Republicans.

The firm brags in an Oracle document posted online about how hard it is to avoid their data collection efforts, saying, "It has no cookies to erase and can’t be ‘cleared.’ ALC Real World Data is rooted in reality, and doesn’t rely on inferences or faulty models."

How You Can Request Your Data From ALC Digital:

Here’s how to find the predictions about your political beliefs data in Oracle Data Cloud:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Click on the Partner Segments tab.
  3. Scroll on through until you find ALC Digital.

You may have to scroll for a while before you find it.

And not everyone appears to have data from ALC Digital, so don’t be shocked if you can’t find it. If you don’t, there may be other fascinating companies with data about who you are in your Oracle file.

What You Might Get Back From ALC Digital:

When I downloaded the data last year, it said I was "Socially Conservative," "Boomer Voter" — as well as a female voter and a tax reform supporter.

Recently, when I checked again, those categories had disappeared entirely. I had nothing from ALC Digital.

ALC Digital is not required to release this data. It is disclosed via the Oracle Data Cloud. Fran Green, the company’s president, said that Aristotle, a longtime political data company, “provides us with consumer data that populates these audiences.” She also said that “we do not claim to know people’s ‘beliefs.’”

Big Tech

Big tech firms like Google and Facebook tend to make their money by selling ads, so they build extensive profiles of their users’ interests and activities. They also depend on their users’ goodwill to keep us voluntarily giving them our locations, our browsing histories and plain ol’ lists of our friends and interests. (So far, these popular companies have not faced much regulation.) These companies make it easy to download the data that they keep on you.

Firms like Google and Facebook don’t sell your data — because it’s their competitive advantage. Google’s privacy page screams in 72 point type: "We do not sell your personal information to anyone." As websites that we visit frequently, they sell access to our attention, so companies that want to reach you in particular can do so on these companies’ sites or on other sites that feature their ads.

Facebook

How You Can Request Your Data From Facebook:

You of course have to have a Facebook account and be logged in:

  1. Visit https://www.facebook.com/settings on your computer.
  2. Click the “Download a copy of your Facebook data” link.
  3. On the next page, click “Start My Archive.”
  4. Enter your password, then click “Start My Archive” again.
  5. You’ll get an email immediately, and another one saying “Your Facebook download is ready” when your data is ready to be downloaded. You’ll get a notification on Facebook, too. Mine took just a few minutes.
  6. Once you get that email, click the link, then click Download Archive. Then reenter your password, which will start a zip file downloading.
  7. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.” You’ll get a folder called something like “facebook-jeremybmerrill,” but, of course, with your username instead of mine.
  8. Open the folder and double-click “index.htm” to open it in your web browser.
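
If you are comfortable with a little Python, steps 6 through 8 can also be done from the command line. A minimal sketch, assuming the downloaded file sits in your current folder and is named like the example above:

import webbrowser
import zipfile
from pathlib import Path

# Adjust this to whatever Facebook named your download.
archive = Path("facebook-jeremybmerrill.zip")
dest = Path("facebook-archive")

# Step 7: unzip the archive into its own folder.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)

# Step 8: find index.htm and open it in your default web browser.
index = next(dest.rglob("index.htm"))
webbrowser.open(index.resolve().as_uri())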

What You Might Get Back From Facebook

Facebook designed its archive to first show you your profile information. That’s all information you typed into Facebook and that you probably intended to be shared with your friends. It’s no surprise that Facebook knows what city I live in or what my AIM screen name was — I told Facebook those things so that my friends would know.

But it’s a bit of a surprise that they decided to feature a list of my ex-girlfriends — what they blandly termed "Previous Relationships" — so prominently.

As you dig deeper in your archive, you’ll find more information that you gave Facebook, but that you might not have expected the social network to keep hold of for years: if you’re me, that’s the Nickelback concert I apparently RSVPed to, posts about switching high schools and instant messages from my freshman year in college.

But finally, you’ll find the creepier information: what Facebook knows about you that you didn’t tell it, on the "Ads" page. You’ll find "Ads Topics" that Facebook decided you were interested in, like Housing, ESPN or the town of Ellijay, Georgia. And, you’ll find a list of advertisers who have obtained your contact information and uploaded it to Facebook, as part of a so-called Custom Audience of specific people to whom they want to show their ads.

You’ll find more of that creepy information on your Ads Preferences page. Despite Mark Zuckerberg telling Rep. Jerry McNerney, D-Calif., in a hearing earlier this month that “all of your information is included in your ‘download your information,’” my archive didn’t include that list of ad categories that can be used to target ads to me. (Some other types of information aren’t included in the download, like other people’s posts you’ve liked. Those are listed here, along with where to find them — which, for most, is in your Activity Log.)

This area may include Facebook’s guesses about who you are, boiled down from some of your activities. Most Americans will have a guess about their politics — Facebook says I’m a "moderate" about U.S. Politics — and some will have a guess about so-called "multicultural affinity," which Facebook insists is not a guess about your ethnicity, but rather what sorts of content "you are interested in or will respond well to." For instance, Facebook recently added that I have a "Multicultural Affinity: African American." (I’m white — though, because Facebook’s definition of "multicultural affinity" is so strange, it’s hard to tell if this is an error on Facebook’s part.)

Facebook also doesn’t include your browsing history — the subject of back-and-forths between Mark Zuckerberg and several members of Congress — it says it keeps that just long enough to boil it down into those “Ad Topics.”

For people without Facebook accounts, Facebook says to email [email protected] or fill out an online form to download what Facebook knows about you. One puzzle here is how Facebook gathers data on people whose identities it may not know. It may know that a person using a phone from Atlanta, Georgia, has accessed a Facebook site and that the same person was last week in Austin, Texas, and before that Cincinnati, but it may not know that that person is me. It’s in principle difficult for the company to give the data it collects about logged-out users if it doesn’t know exactly who they are.

Google

Like Facebook, Google will give you a zip archive of your data. Google’s can be much bigger, because you might have stored gigabytes of files in Google Drive or years of emails in Gmail.

But like Facebook, Google does not provide its guesses about your interests, which it uses to target ads. Those guesses are available elsewhere.

How You Can Request Your Data From Google:

  1. Visit https://takeout.google.com/settings/takeout/ to use Google’s cutely named Takeout service.
  2. You’ll have to pick which data you want to download and examine. You should definitely select My Activity, Location History and Searches. You may not want to download gigabytes of emails, if you use Gmail, since that uses a lot of space and may take a while. (That’s also information you shouldn’t be surprised that Google keeps — you left it with Gmail so that you could use Google’s search expertise to hold on to your emails. )
  3. Google will present you with a few options for how to get your archive. The defaults are fine.
  4. Within a few hours, you should get an email with the subject "Your Google data archive is ready." Click Download Archive and log in again. That should start the download of a file named something like "takeout-20180412T193535.zip."
  5. Unzip the folder; depending on your computer’s operating system, this might be called uncompressing or “expanding.”
  6. You’ll get a folder called Takeout. Open the file inside it called "index.html" in your web browser to explore your archive.
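
As with the Facebook archive, the unzip-and-explore steps can be scripted if you prefer. Here is a minimal Python sketch that extracts the Takeout file and tallies how much data each product folder (Searches, Location History, and so on) contains; the archive name is just the example from step 4.

import zipfile
from collections import defaultdict
from pathlib import Path

archive = Path("takeout-20180412T193535.zip")  # the example name from step 4
dest = Path("google-takeout")

# Step 5: unzip the archive.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)

# Tally file counts and total sizes per top-level product folder.
totals = defaultdict(lambda: [0, 0])
for path in (dest / "Takeout").rglob("*"):
    if path.is_file():
        product = path.relative_to(dest / "Takeout").parts[0]
        totals[product][0] += 1
        totals[product][1] += path.stat().st_size

for product, (count, size) in sorted(totals.items()):
    print(f"{product}: {count} files, {size / 1_000_000:.1f} MB")

Then open the index.html file inside the Takeout folder in your browser, as step 6 says, to explore the contents.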

What You Might Get Back From Google:

Once you open the index.html file, you’ll see icons for the data you chose in step 2. Try exploring "Ads" under "My Activity" — you’ll see a list of times you saw Google Ads, including on apps on your phone.

Google also includes your search history, under "Searches" — in my case, going back to 2013. Google knows what I had forgotten: I Googled a bunch of dinosaurs around Valentine’s Day that year… And it’s not just web searches: the Sound Search history reminded me that at some point, I used that service to identify Natalie Imbruglia’s song "Torn."

Android phone users might want to check the "Android" folder: Google keeps a list of each app you’ve used on your phone.

Most of the data contained here are records of ways you’ve directly interacted with Google — and the company really does use those to improve how their services work for me. I’m glad to see my searches auto-completed, for instance.

But the company also creates data about you: Visit the company’s Ads Settings page to see some of the “topics” Google guesses you’re interested in, and which it uses to personalize the ads you see. Those topics are fairly general — it knows I’m interested in “Politics” — but the company says it has more granular classifications that it doesn’t include on the list. Those more granular, hidden classifications are on various topics, from sports to vacations to politics, where Google does generate a guess whether some people are politically “left-leaning” or “right-leaning.”

Data Brokers

Here’s who really does sell your data: data brokers like the credit reporting agency Experian and a firm named Epsilon.

These sometimes-shady firms are middlemen who buy your data from tracking firms, survey marketers and retailers, slice and dice the data into “segments,” then sell those on to advertisers.

Experian

Experian is best known as a credit reporting firm, but your credit cards aren’t all they keep track of. They told me that they “firmly believe people should be made aware of how their data is being used” — so if you print and mail them a form, they’ll tell you what data they have on you.

“Educated consumers,” they said, “are better equipped to be effective, successful participants in a world that increasingly relies on the exchange of information to efficiently deliver the products and services consumers demand.”

How You Can Request Your Data From Experian:

  1. Visit Experian’s Marketing Data Request site and print the Marketing Data Report Request form.
  2. Print a copy of your ID and proof of address.
  3. Mail it all to Experian at: Experian Marketing Services, P.O. Box 40, Allen, TX 75013
  4. Wait for them to mail you something back.

What You Might Get Back From Experian:

Expect to wait a while. I’ve been waiting almost a month.

They also come up with a guess about your political views that’s integrated with Facebook — our Facebook Political Ad Collector project has found that many political candidates use Experian’s data to target their Facebook ads to likely supporters.

You should hope to find a guess about your political views that’d be useful to those candidates — as well as categories derived from your purchasing data.

Experian told me they generate the data they have about you from a long list of sources, including public records and “historical catalog purchase information” — as well as calculating it from predictive models.

Epsilon

How You Can Request Your Data From Epsilon:

  1. Visit Epsilon’s Marketing Data Summary Request form.
  2. After entering your name and address, Epsilon will ask you some of those identity-verification questions that quiz you about your old addresses and cars. If your identity can’t be verified with those, Epsilon will ask you to mail in a form.
  3. Wait for Epsilon to mail you your data; it took about a week for me.

What You Might Get Back From Epsilon:

Epsilon has information on “demographics” and “lifestyle interests” — at the household level. It also includes a list of “household purchases.”

It also has data that political candidates use to target their Facebook ads, including Randy Bryce, a Wisconsin Democrat who’s seeking his party’s nomination to run for retiring Speaker Paul Ryan’s seat, and Rep. Tulsi Gabbard, D-Hawaii.

In my case, Epsilon knows I buy clothes, books and home office supplies, among other things — but isn’t any more specific. They didn’t tell me what political beliefs they believe I hold. The company didn’t respond to a request for comment.

Oracle

Oracle’s Data Cloud aggregates data about you from Oracle, but also so-called third party data from other companies.

How You Can Request Your Data From Oracle:

  1. Visit http://www.bluekai.com/registry/. If you use an ad blocker, there may not be much to see here.
  2. Explore each tab, from “Basic Info” to “Hobbies & Interests” and “Partner Segments.”

Scrolling through all those pages is not fun: I have 84 pages, with four pieces of data on each.

You can’t search, and all the text is actually images of text. Oracle declined to say why it chose to make its site so hard to use.
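
One clumsy but workable workaround: save screenshots of the registry pages as you scroll, then run them through optical character recognition (OCR) so the text becomes searchable. This is my own workaround, not anything Oracle offers. The Python sketch below assumes the open-source Tesseract OCR engine plus the pytesseract and Pillow packages are installed.

from pathlib import Path

import pytesseract     # pip install pytesseract (requires the Tesseract engine)
from PIL import Image  # pip install pillow

# Screenshots of the BlueKai registry pages, saved manually while scrolling.
screenshots = sorted(Path("bluekai-screenshots").glob("*.png"))

with open("bluekai-data.txt", "w", encoding="utf-8") as out:
    for shot in screenshots:
        text = pytesseract.image_to_string(Image.open(shot))
        out.write("--- " + shot.name + " ---\n" + text + "\n")

# Now the registry's images of text are plain text you can actually search.

OCR is imperfect, so expect some garbled entries, but at least the result is text you can search.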

What You Might Get Back From Oracle:

My Oracle profile includes nearly 1500 data points, covering all aspects of my life, from my age to my car to how old my children are to whether I buy eggs. These profiles can even say if you’re likely to dress your pet in a costume for Halloween. But many of them are off-base or contradictory.

Many companies in Oracle’s data, besides ALC Digital, offer guesses about my political views: Data from one company uploaded by AcquireWeb says that my political affiliations are as a Democrat and an Independent … but also that I’m a “Mild Republican.” Another company, an Oracle subsidiary called AddThis, says that I’m a “Liberal.” Cuebiq, which calls itself a “location intelligence” company, says I’m in a subset of “Democrats” called “Liberal Professions.”

If an advertiser wants to show an ad to Spring Break Enthusiasts, Oracle can enable that. I’m apparently a Spring Break Enthusiast. Do I buy eggs? I sure do. Data on Oracle’s site associated with AcquireWeb says I’m a cat owner …

But it also “knows” I’m a dog owner, which I’m not.

Al Gadbut, the CEO of AcquireWeb, explained that the guesses associated with his company weren’t based on my personal data, but rather the tendencies of people in my geographical area — hence the seemingly contradictory political guesses. He said his firm doesn’t generate the data, but rather uploaded it on behalf of other companies. Cuebiq’s guess was a “probabilistic inference” they drew from location data submitted to them by some app on my phone. Valentina Marastoni-Bieser, Cuebiq’s senior vice president of marketing, wouldn’t tell me which app it was, though.

Data for sale here includes a long list of the TV shows I — supposedly — watch.

But it’s not all wrong. AddThis can tell that I’m “Young & Hip.”

Takeaways:

The above list is just a sampling of the firms that collect your data and try to draw conclusions about who you are — not just sites you visit like Facebook and controversial firms like Cambridge Analytica.

You can make some guesses as to where this data comes from — especially the more granular consumer data from Oracle. For each data point, it’s worth considering: Who’d be in a position to sell a list of what TV shows I watch, or, at least, a list of what TV shows people demographically like me watch? Who’d be in a position to sell a list of what groceries I, or people similar to me in my area, buy? Some of those companies — companies who you’re likely paying, and for whom the internet adage that “if you’re not paying, you’re the product” doesn’t hold — are likely selling data about you without your knowledge. Other data points, like the location data used by Cuebiq, can come from any number of apps or websites, so it may be difficult to figure out exactly which one has passed it on.

Companies like Google and Facebook often say that they’ll let you “correct” the data that they hold on you — tacitly acknowledging that they sometimes get it wrong. But if receiving relevant ads is not important to you, they’ll let you opt out entirely — or, presumably, “correct” your data to something false.

An upcoming European Union rule called the General Data Protection Regulation portends a dramatic change to how data is collected and used on the web — if only for Europeans. No such law seems likely to be passed in the U.S. in the near future.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.