
14 posts from July 2018

New York State Tells Charter To Leave Due To 'Persistent Non-Compliance And Failure To Live Up To Promises'

The New York State Public Service Commission (NYPSC) announced on Friday that it has revoked its approval of the 2016 merger agreement between Charter Communications, Inc. and Time Warner Cable, Inc. because:

"... Charter, doing business as Spectrum has — through word and deed — made clear that it has no intention of providing the public benefits upon which the Commission's earlier [merger] approval was conditioned. In addition, the Commission directed Commission counsel to bring an enforcement action in State Supreme Court to seek additional penalties for Charter's past failures and ongoing non-compliance..."

Charter, the largest cable provider in the State, provides digital cable television, broadband internet, and VoIP telephone services to more than two million subscribers in more than 1,150 communities. It serves consumers in Buffalo, Rochester, Syracuse, Albany, and four boroughs of New York City: Manhattan, Staten Island, Queens, and Brooklyn. The planned expansion could have grown its subscriber base to five million in the state.

Charter provides services in 41 states: Alabama, Arizona, California, Colorado, Connecticut, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Massachusetts, Michigan, Minnesota, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, Wisconsin, and Wyoming.

The NYPSC, a unit of the Department of Public Service, describes its mission on its site as working "to ensure affordable, safe, secure, and reliable access to electric, gas, steam, telecommunications, and water services for New York State’s residential and business consumers, while protecting the natural environment." Its announcement listed Spectrum's failures and non-compliance:

"1. The company’s repeated failures to meet deadlines;
2. Charter’s attempts to skirt obligations to serve rural communities;
3. Unsafe practices in the field;
4. Its failure to fully commit to its obligations under the 2016 merger agreement; and
5. The company’s purposeful obfuscation of its performance and compliance obligations to the Commission and its customers."

The announcement provided details:

"On Jan. 8, 2016, the Commission approved Charter’s acquisition of Time Warner. To obtain approval, Charter agreed to a number of conditions required by the Commission to advance the public interest, including delivering broadband speed upgrades to 100 Mbps statewide by the end of 2018, and 300 Mbps by the end of 2019, and building out its network to pass an additional 145,000 un-served or under-served homes and businesses in the State's less densely populated areas within four years... Despite missing every network expansion target since the merger was approved in 2016, Charter has falsely claimed in advertisements it is exceeding its commitments to the State and is on track to deliver its network expansion. This led to the NYPSC’s general counsel referring a false advertising claim to the Attorney General’s office for enforcement... By its own admission, Charter has failed to meet its commitment to expand its service network... Its failure to meet its June 18, 2018 target by more than 40 percent is only the most recent example. Rather than accept responsibility Charter has tried to pass the blame for its failure on other companies, such as utility pole owners..."

The NYPSC has already levied $3 million in fines against Charter. The latest action basically boots Charter out of the State:

"Charter is ordered to file within 60 days a plan with the Commission to ensure an orderly transition to a successor provider(s). During the transition process, Charter must continue to comply with all local franchises it holds in New York State and all obligations under the Public Service Law and the NYPSC regulations. Charter must ensure no interruption in service is experienced by customers, and, in the event that Charter does not do so, the NYPSC will take further steps..."

Of course, executives at Charter have a different view of the situation. NBC New York reported:

"In the weeks leading up to an election, rhetoric often becomes politically charged. But the fact is that Spectrum has extended the reach of our advanced broadband network to more than 86,000 New York homes and businesses since our merger agreement with the PSC. Our 11,000 diverse and locally based workers, who serve millions of customers in the state every day, remain focused on delivering faster and better broadband to more New Yorkers, as we promised..."


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, Amazon's facial recognition software, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes about politicians aside, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

[Image: list of the 28 Congressional legislators misidentified by Amazon Rekognition in the ACLU study.] With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer at Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color. These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?
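The implied error rate cited above is simple arithmetic; a minimal sketch using the figures reported in the ACLU test:

```python
# Implied error rate from the ACLU test: 28 false matches among 535 members.
false_matches = 28
members_of_congress = 535
error_rate = false_matches / members_of_congress
print(f"{error_rate:.2%}")  # → 5.23%
```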

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon has performed on Rekognition, with details by race, gender, and age;
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition;
  • The list of all law enforcement agencies which have used or currently use Rekognition;
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices;
  • The protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons;
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA);
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses;
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. "Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend 95 percent similarity scores, but the public does not know if police departments actually use the higher setting, or consistently do so across all types of criminal investigations. Plus, the CPT report cast doubt on human "backstop" intervention, which Amazon's reply seems to heavily rely upon.
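The confidence-threshold mechanics described in the NYT quote can be sketched in a few lines. This is plain Python, not the Rekognition API, and the photo names and similarity scores below are invented for illustration:

```python
# Hypothetical similarity scores (0-100) for one probe photo compared
# against a mugshot database. Illustrative values only.
scores = {"photo_a": 97.1, "photo_b": 88.4, "photo_c": 81.0, "photo_d": 62.5}

def matches(scores, threshold):
    """Keep only candidates whose similarity meets the confidence threshold."""
    return [name for name, s in scores.items() if s >= threshold]

print(matches(scores, 80))  # 80% default-style threshold: three "matches"
print(matches(scores, 95))  # stricter 95% threshold: only one
```

The point of the sketch: lowering the threshold never removes matches, it only adds weaker ones, which is why the choice of setting directly drives the false-match rate.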

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


How the Case for Voter Fraud Was Tested — and Utterly Failed

[Editor's note: today's blog post, by reporters at ProPublica, explores the results of a trial in Kansas about the state's voter-ID laws and claims of voter fraud. It is reprinted with permission.]

By Jessica Huseman, ProPublica

In the end, the decision seemed inevitable. After a seven-day trial in Kansas City federal court in March, in which Kansas Secretary of State Kris Kobach needed to be tutored on basic trial procedure by the judge and was found in contempt for his “willful failure” to obey a ruling, even he knew his chances were slim. Kobach told The Kansas City Star at the time that he expected the judge would rule against him (though he expressed optimism in his chances on appeal).

Sure enough, federal Judge Julie Robinson overturned the law that Kobach was defending as lead counsel for the state, dealing him an unalloyed defeat. The statute, championed by Kobach and signed into law in 2013, required Kansans to present proof of citizenship in order to register to vote. The American Civil Liberties Union sued, contending that the law violated the National Voter Registration Act (AKA the “motor voter” law), which was designed to make it easy to register.

The trial had a significance that extends far beyond the Jayhawk state. One of the fundamental questions in the debate over alleged voter fraud — whether a substantial number of non-citizens are in fact registering to vote — was one of two issues to be determined in the Kansas proceedings. (The second was whether there was a less burdensome solution than what Kansas had adopted.) That made the trial a telling opportunity to remove the voter fraud claims from the charged, and largely proof-free, realms of political campaigns and cable news shoutfests and examine them under the exacting strictures of the rules of evidence.

That’s precisely what occurred and according to Robinson, an appointee of George W. Bush, the proof that voter fraud is widespread was utterly lacking. As the judge put it, “the court finds no credible evidence that a substantial number of non-citizens registered to vote” even under the previous law, which Kobach had claimed was weak.

For Kobach, the trial should’ve been a moment of glory. He’s been arguing for a decade that voter fraud is a national calamity. Much of his career has been built on this issue, along with his fervent opposition to illegal immigration. (His claim is that unlawful immigrants are precisely the ones voting illegally.) Kobach, who also co-chaired the Trump administration’s short-lived commission on voter fraud, is perhaps the individual most identified with the cause of sniffing out and eradicating phony voter registration. He’s got a gilded resume, with degrees from Harvard University, Yale Law School and the University of Oxford, and is seen as both the intellect behind the cause and its prime advocate. Kobach has written voter laws in other jurisdictions and defended them in court. If anybody ever had time to marshal facts and arguments before a trial, it was Kobach.

But things didn’t go well for him in the Kansas City courtroom, as Robinson’s opinion made clear. Kobach’s strongest evidence of non-citizen registration was anemic at best: Over a 20-year period, fewer than 40 non-citizens had attempted to register in one Kansas county that had 130,000 voters. Most of those 40 improper registrations were the result of mistakes or confusion rather than intentional attempts to mislead, and only five of the 40 managed to cast a vote.

One of Kobach’s own experts even rebutted arguments made by both Kobach and President Donald Trump. The expert testified that a handful of improper registrations could not be extrapolated to conclude that 2.8 million fraudulent votes — roughly, the gap between Hillary Clinton and Trump in the popular vote tally — had been cast in the 2016 presidential election. Testimony from a second key expert for Kobach also fizzled.

As the judge’s opinion noted, Kobach insisted the meager instances of cheating revealed at trial are just “the tip of the iceberg.” As she explained, “This trial was his opportunity to produce credible evidence of that iceberg, but he failed to do so.” Dismissing the testimony by Kobach’s witnesses as unpersuasive, Robinson drew what she called “the more obvious conclusion that there is no iceberg; only an icicle largely created by confusion and administrative error.”

By the time the trial was over, Kobach, a charismatic 52-year-old whose broad shoulders and imposing height make him resemble an aging quarterback, seemed to have shrunk inside his chair at the defense table.

But despite his defeat, Kobach’s causes — restricting immigration and tightening voting requirements — seem to be enjoying favorable tides elsewhere. Recent press accounts noted Kobach’s role in restoring a question about citizenship, abandoned since 1950, to U.S. Census forms for 2020. And the Supreme Court ruled on June 11 that the state of Ohio can purge voters from its rolls when they fail to vote even a single time and don’t return a mailing verifying their address, a provision that means more voters will need to re-register and prove their eligibility again.

For his own part, Kobach is now a candidate for governor of Kansas, running neck and neck with the incumbent in polls for the Republican primary on Aug. 7. It’s not clear whether the verdict will affect his chances — or whether it will lead him and others to quietly retreat from claims of voter fraud. But the judge’s opinion and expert interviews reveal that Kobach effectively put the concept of mass voter fraud to the test — and the evidence crumbled.

Perhaps it was an omen. Before Kobach could enter the courtroom inside the Robert J. Dole U.S. Courthouse each day, he had to pass through a hallway whose walls featured a celebratory display entitled “Americans by Choice: The Story of Immigration and Citizenship in Kansas.” Photographs of people who’d been sworn in as citizens in that very courthouse were superimposed on the translucent window shades.

Public interest in the trial was high. The seating area quickly filled to capacity on the first day of trial on the frigid morning of March 6. The jury box was opened to spectators; it wouldn’t be needed, as this was a bench trial. Those who couldn’t squeeze in were sent to a lower floor, where a live feed had been prepared in a spillover room.

From the moment the trial opened, Kobach and his co-counsels in the Kansas secretary of state’s office, Sue Becker and Garrett Roe, stumbled over the most basic trial procedures. Their mistakes antagonized the judge. “Evidence 101,” Robinson snapped, only minutes into the day, after Kobach’s team attempted to improperly introduce evidence. “I’m not going to do it.”

Matters didn’t improve for Kobach from there.

Throughout the trial, his team’s repeated mishaps and botched cross examinations cost hours of the court’s time. Robinson was repeatedly forced to step into the role of law professor, guiding Kobach, Becker and Roe through courtroom procedure. “Do you know how to do the next step, if that’s what you’re going to do?” the judge asked Becker at one point, as she helped her through the steps of impeaching a witness. “We’re going to follow the rules of evidence here.”

Becker often seemed nervous. She took her bright red glasses off and on. At times she burst into nervous chuckles after a misstep. She laughed at witnesses, skirmished with the judge and even taunted the lawyers for the ACLU. “I can’t wait to ask my questions on Monday!” she shouted at the end of the first week, jabbing a finger in the direction of Dale Ho, the lead attorney for the plaintiffs. Ho rolled his eyes.

Roe was gentler — deferential, even. He often admitted he didn’t know what step came next, asking the judge for help. “I don’t — I don’t know if this one is objectionable. I hope it’s not,” he offered at one point, as he prepared to ask a question following a torrent of sustained objections. “I’ll let you know,” an attorney for the plaintiffs responded, to a wave of giggles in the courtroom. On the final day of trial, as Becker engaged in yet another dispute with the judge, Roe slapped a binder to his forehead and audibly whispered, “Stop talking. Stop talking.”

Kobach’s cross examinations were smoother and better organized, but he regularly attempted to introduce exhibits — for example, updated state statistics that he had failed to provide the ACLU in advance to vet — that Robinson ruled were inadmissible. As the trial wore on, she became increasingly irritated. She implored Kobach to “please read” the rules on which she based her rulings, saying his team had repeated these errors “ad nauseum.”

Kobach seemed unruffled. Instead of heeding her advice, he’d proffer the evidence for the record, a practice that allows the evidence to be preserved for appeal even if the trial judge refuses to admit it. Over the course of the trial, Kobach and his team would do this nearly a dozen times.

Eventually, Robinson got fed up. She asked Kobach to justify his use of proffers. Kobach, seemingly alarmed, grabbed a copy of the Federal Rules of Civil Procedure — to which he had attached a growing number of Post-it notes — and quickly flipped through it, trying to find the relevant rule.

The judge tried to help. “It’s Rule 26, of course, that’s been the basis for my rulings,” she told Kobach. “I think it would be helpful if you would just articulate under what provision of Rule 26 you think this is permissible.” Kobach seemed to play for time, asking clarifying questions rather than articulating a rationale. Finally, the judge offered mercy: a 15-minute break. Kobach’s team rushed from the courtroom.

It wasn’t enough to save him. In her opinion, Robinson described “a pattern and practice by Defendant [Kobach] of flaunting disclosure and discovery rules.” As she put it, “it is not clear to the Court whether Defendant repeatedly failed to meet his disclosure obligations intentionally or due to his unfamiliarity with the federal rules.” She ordered Kobach to attend the equivalent of after-school tutoring: six hours of extra legal education on the rules of civil procedure or the rules of evidence (and to present the court with a certificate of completion).

It’s always a bad idea for a lawyer to try the patience of a judge — and that’s doubly true during a bench trial, when the judge will decide not only the law, but also the facts. Kobach repeatedly annoyed Robinson with his procedural mistakes. But that was nothing next to what the judge viewed as Kobach’s intentional bad faith.

This view emerged in writing right after the trial — that’s when Robinson issued her ruling finding Kobach in contempt — but before the verdict. And the conduct that inspired the contempt finding had persisted over several years. Robinson concluded that Kobach had intentionally failed to follow a ruling she issued in 2016 that ordered him to restore the privileges of 17,000 suspended Kansas voters.

In her contempt ruling, the judge cited Kobach’s “history of noncompliance” with the order and characterized his explanations for not abiding by it as “nonsensical” and “disingenuous.” She wrote that she was “troubled” by Kobach’s “failure to take responsibility for violating this Court’s orders, and for failing to ensure compliance over an issue that he explicitly represented to the Court had been accomplished.” Robinson ordered Kobach to pay the ACLU’s legal fees for the contempt proceeding.

That contempt ruling was actually the second time Kobach was singled out for punishment in the case. Before the trial, a federal magistrate judge deputized to oversee the discovery portion of the suit fined him $1,000 for making “patently misleading representations” about a voting fraud document Kobach had prepared for Trump. Kobach paid the fine with a state credit card.

More than any procedural bumbling, the collapse of Kobach’s case traced back to the disintegration of a single witness.

The witness was Jesse Richman, a political scientist from Old Dominion University, who has written studies on voter fraud. For this trial, Richman was paid $5,000 by the taxpayers of Kansas to measure non-citizen registration in the state. Richman was the man who had to deliver the goods for Kobach.

With his gray-flecked beard and mustache, Richman looked the part of an academic, albeit one who seemed a bit too tall for his suit and who showed his discomfort in a series of awkward, sudden movements on the witness stand. At moments, Richman’s testimony turned combative, devolving into something resembling an episode of The Jerry Springer Show. By the time he left the stand, Richman had testified for more than five punishing hours. He’d bickered with the ACLU’s lawyer, raised his voice as he defended his studies and repeatedly sparred with the judge.

“Wait, wait, wait!” shouted Robinson at one point, silencing a verbal free-for-all that had erupted among Richman, the ACLU’s Ho, and Kobach, who were all speaking at the same time. “Especially you,” she said, turning her stare to Richman. “You are not here to be an advocate. You are not here to trash the plaintiff. And you are not here to argue with me.”

Richman had played a small but significant part in the 2016 presidential campaign. Trump and others had cited his work to claim that illegal votes had robbed Trump of the popular vote. At an October 2016 rally in Wisconsin, the candidate cited Richman’s work to bolster his predictions that the election would be rigged. “You don’t read about this, right?” Trump told the crowd, before reading from an op-ed Richman had written for The Washington Post: “‘We find that this participation was large enough to plausibly account for Democratic victories in various close elections.’ Okay? All right?”

Richman’s 2014 study of non-citizen registration used data from the Cooperative Congressional Election Study — an online survey of more than 32,000 people. Of those, fewer than 40 individuals indicated they were non-citizens registered to vote. Based on that sample, Richman concluded that up to 2.8 million illegal votes had been cast in 2008 by non-citizens. In fact, he put the illegal votes at somewhere between 38,000 and 2.8 million — a preposterously large range — and then Trump and others simply used the highest figure.

Academics pilloried Richman’s conclusions. Two hundred political scientists signed an open letter criticizing the study, saying it should “not be cited or used in any debate over fraudulent voting.” Harvard’s Stephen Ansolabehere, who administered the CCES, published his own peer-reviewed paper lambasting Richman’s work. Indeed, by the time Trump read Richman’s article onstage in 2016, The Washington Post had already appended a note to the op-ed linking to three rebuttals and a peer-reviewed study debunking the research.

None of that discouraged Kobach or Trump from repeating Richman’s conclusions. They then went a few steps further. They took the top end of the range for the 2008 election, assumed that it applied to the 2016 election, too, and further assumed that all of the fraudulent ballots had been cast for Clinton.

Some of those statements found their way into the courtroom, when Ho pressed play on a video shot by The Kansas City Star on Nov. 30, 2016. Kobach had met with Trump 10 days earlier and had brought with him a paper decrying non-citizen registration and voter fraud. Two days later, Trump tweeted that he would have won the popular vote if not for “millions of people who voted illegally.”

On the courtroom’s televisions, Kobach appeared, saying Trump’s tweet was “absolutely correct.” Without naming Richman, Kobach referred to his study: The number of non-citizens who said they’d voted in 2008 was far larger than the popular vote margin, Kobach said on the video. The same number likely voted again in 2016.

In the courtroom, Ho asked Richman if he believed his research supported such a claim. Richman stammered. He repeatedly looked at Kobach, seemingly searching for a way out. Ho persisted and finally, Richman gave his answer: “I do not believe my study provides strong support for that notion.”

To estimate the number of non-citizens voting in Kansas, Richman had used the same methodology he employed in his much-criticized 2014 study. Using samples as small as a single voter, he’d produced surveys with wildly different estimates of non-citizen registration in the state. The multiple iterations confused everyone in the courtroom.

“For the record, how many different data sources have you provided?” Robinson interjected in the middle of one Richman answer. “You provide a range of, like, zero to 18,000 or more.”

“I sense the frustration,” Richman responded, before offering a winding explanation of the multiple data sources and surveys he’d used to arrive at a half-dozen different estimates. Robinson cut him off. “Maybe we need to stop here,” she said.

“Your honor, let me finish answering your question,” he said.

“No, no. I’m done,” she responded, as he continued to protest. “No. Dr. Richman, I’m done.”

To refute Richman’s numbers, the ACLU called on Harvard’s Ansolabehere, whose data Richman had relied on in the past. Ansolabehere testified that Richman’s sample sizes were so small that it was just as possible that there were no non-citizens registered to vote in Kansas as 18,000. “There’s just a great deal of uncertainty with these estimates,” he said.

Ho asked if it would be accurate to say that Richman’s data “shows a rate of non-citizen registration in Kansas that is not statistically distinct from zero?”

“Correct.”

The judge was harsher than Ansolabehere in her description of Richman’s testimony. In her opinion, Robinson unloaded a fusillade of dismissive adjectives, calling Richman’s conclusions “confusing, inconsistent and methodologically flawed,” and adding that they were “credibly dismantled” by Ansolabehere. She labeled elements of Richman’s testimony “disingenuous” and “misleading,” and stated that she gave his research “no weight” in her decision.

One of the paradoxes of Kobach is that he has become a star in circles that focus on illegal immigration and voting fraud despite poor results in the courtroom. By ProPublica’s count, Kobach chalked up a 2–6 won-lost record in federal cases in which he played a major role and which reached a final disposition before the Kansas case.

Those results date to Kobach’s time as an attorney for the legal arm of the Federation for American Immigration Reform, from 2004 to 2011, when he became secretary of state in Kansas. In his FAIR role (in which he continued to moonlight until about 2014), Kobach traveled to places like Fremont, Nebraska; Hazleton, Pennsylvania; Farmers Branch, Texas; and Valley Park, Missouri, to help local governments write laws aimed at hampering illegal immigration, and then to defend those laws in court. Kobach won in Nebraska but lost in Texas and Pennsylvania, and only a watered-down version of the law remains in Missouri.

The best-known law that Kobach helped shape before joining the Kansas government in 2011 was Arizona’s “show me your papers” law. That statute allowed police to demand citizenship documents for any reason from anyone they thought might be in the country illegally. After it passed, the state paid Kobach $300 an hour to train law enforcement on how to legally arrest suspected illegal immigrants. The Supreme Court gutted key provisions of the law in 2012.

Kobach also struggled in two forays into political campaigning. In 2004, he lost a race for Congress. He also drew criticism for his stint as an informal adviser to Mitt Romney’s 2012 presidential campaign. Kobach was the man responsible for Romney’s much-maligned proposal that illegal immigrants “self-deport,” one reason Romney attracted little support among Latinos. Romney disavowed Kobach even before the campaign was over, telling media outlets that he was a “supporter,” not an adviser.

Trump’s election meant Kobach’s positions on immigration would be welcome in the White House. Kobach lobbied for, but didn’t receive, an appointment as Secretary of Homeland Security. He was, however, placed in charge of the voter fraud commission, a pet project of Trump’s. Facing a raft of lawsuits and bad publicity, the commission was disbanded little more than six months after it formally launched.

Back at home, Kobach expanded his power as secretary of state. Boasting of his experience as a law professor and scholar, Kobach convinced the state legislature to give him the authority to prosecute election crimes himself, a power wielded by no other secretary of state. In that role, he has obtained nine guilty pleas against individuals for election-related misdemeanors. Only one of those who pleaded guilty, as it happens, was a non-citizen.

He also persuaded Kansas’ attorney general to let him represent the state in the trial of Kansas’ voting law. Kobach argued it was a bargain. As he told The Wichita Eagle at the time, “The advantage is the state gets an experienced appellate litigator who is a specialist in this field and in constitutional law for the cost the state is already paying, which is my salary.”

Kobach fared no better in the second main area of the Kansas City trial than he had in the first. This part explored whether there was a less burdensome way of identifying non-citizens than forcing everyone to show proof of citizenship upon registration. Judge Robinson would conclude that there were many less intrusive alternatives.

In his opening, Ho of the ACLU spotlighted a potentially less intrusive approach. Why not use the Department of Homeland Security’s Systematic Alien Verification for Entitlements System list, and compare the names on it to the Kansas voter rolls? That, Ho argued, could efficiently suss out illegal registrations.

Kobach told the judge that simply wasn’t feasible. The list, he explained, doesn’t contain all non-citizens in the country illegally — it contains only non-citizens legally present and those here illegally who register in some way with the federal government. Plus, he told Robinson, in order to really match the SAVE list against a voter roll, both datasets would have to contain alien registration numbers, the identifier given to non-citizens living in the U.S. “Those are things that a voter registration system doesn’t have,” he said. “So, the SAVE system does not work.”
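Kobach’s objection is, at bottom, a claim about joining two datasets without a shared unique key. A toy sketch makes the point; every record below is invented, and the lesson is only that matching on names (or even names plus birth dates) is ambiguous, while a unique identifier like an alien registration number, which voter rolls do not carry, would be needed for a reliable match:

```python
# Hypothetical records, invented for illustration only.
save_list = [
    {"name": "Maria Garcia", "dob": "1980-03-15", "a_number": "A012345678"},
    {"name": "John Smith",   "dob": "1975-07-02", "a_number": "A087654321"},
]
voter_roll = [
    {"name": "Maria Garcia", "dob": "1980-03-15"},  # same person, or a namesake?
    {"name": "John Smith",   "dob": "1962-11-30"},  # different birth date
]

# Name-only matching flags both voters, including the John Smith
# whose birth date doesn't even agree.
name_matches = [v for v in voter_roll
                if any(v["name"] == s["name"] for s in save_list)]

# Adding date of birth narrows the field, but common names still
# collide in real rolls; without an A-number in both datasets, a
# "match" is never more than a guess.
strict_matches = [v for v in voter_roll
                  if any(v["name"] == s["name"] and v["dob"] == s["dob"]
                         for s in save_list)]

print(len(name_matches), len(strict_matches))  # prints: 2 1
```

The sketch shows why both sides could cite the same database: it looks usable when you match loosely, and unusable once you demand the shared identifier that would make matches trustworthy.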

But Kobach had made the opposite argument when he headed the voter fraud commission. There, he’d repeatedly advocated the use of the SAVE database. Appearing on Fox News in May 2017, shortly after the commission was established, Kobach said, “The Department of Homeland Security knows of the millions of aliens who are in the United States legally and that data that’s never been bounced against the state’s voter rolls to see whether these people are registered.” He said the federal databases “can be very valuable.”

A month later, as chief of the voting fraud commission, Kobach took steps to compare state information to the SAVE database. He sent a letter to all 50 secretaries of state requesting their voter rolls. Bipartisan outrage ensued. Democrats feared he would use the rolls to encourage states to purge legitimately registered voters. Republicans labeled the request federal overreach.

At trial, Kobach’s main expert on this point was Hans von Spakovsky, another member of the voter fraud commission. He, too, had been eager in commission meetings to match state voter rolls to the SAVE database.

But like Kobach, von Spakovsky took a different tack at trial. He testified that this database was unusable by elections offices. “In your experience and expertise as an election administrator and one who studies elections,” Kobach asked, “is [the alien registration number] a practical or even possible thing for a state to do in its voter registration database?” Von Spakovsky answered, “No, it is not.”

Von Spakovsky and Kobach have been friends for more than a decade. They worked together at the Department of Justice under George W. Bush. Kobach focused on immigration issues — helping create a database to register visitors to the U.S. from countries associated with terrorism — while von Spakovsky specialized in voting issues; he had opposed the renewal of the Voting Rights Act.

Von Spakovsky’s history as a local elections administrator in Fairfax County, Va., qualified him as an expert on voting fraud. Between 2010 and 2012, while serving as vice chairman of the county’s three-member electoral board, he’d examined the voter rolls and found what he said were 300 registered non-citizens. He’d pressed for action against them, but none came. Von Spakovsky later joined the Heritage Foundation, where he remains today, generating research that underpins the arguments of those who claim mass voter fraud.

Like Richman, von Spakovsky seemed nervous on the stand, albeit not combative. He wore wire-rimmed glasses and a severe, immovable expression. Immigration is a not-so-distant feature of his family history: His parents — Russian and German immigrants — met in a refugee camp in American-occupied Germany after World War II before moving to the U.S.

Von Spakovsky had the task of testifying about what was intended to be a key piece of evidence for Kobach’s case: a spreadsheet of 38 non-citizens who had registered to vote, or attempted to register, in a 20-year period in Sedgwick County, Kansas.

But the 38 non-citizens turned out to be something less than an electoral crime wave. For starters, some of the 38 had informed Sedgwick County that they were non-citizens. One woman had sent her registration postcard back to the county with an explanation that it was a “mistake” and that she was not a citizen. Another listed an alien registration number — which tellingly begins with an “A” — instead of a Social Security number on the voter registration form. The county registered her anyway.

When von Spakovsky took the stand, he had to contend with questions that suggested he had cherry-picked his data. (The judge would find he had.) In his expert report, von Spakovsky had referenced a 2005 report by the Government Accountability Office that polled federal courts to see how many non-citizens had been excused from jury duty for being non-citizens — a sign of fraud, because jurors are selected from voter rolls. The GAO report mentioned eight courts. Only one said it had a meaningful number of jury candidates who claimed to be non-citizens: “between 1 and 3 percent” had been dismissed on these grounds. This was the only court von Spakovsky mentioned in his expert report.

His report also cited a 2012 TV news segment from an NBC station in Fort Myers, Fla. Reporters claimed to have discovered more than 100 non-citizens on the local voter roll.

“Now, you know, Mr. von Spakovsky, don’t you, that after this NBC report there was a follow-up by the same NBC station that determined that at least 35 of those 100 individuals had documentation to prove they were, in fact, United States citizens. Correct?” Ho asked. “I am aware of that now, yes,” von Spakovsky replied.

That correction had been online since 2012, and Ho had asked von Spakovsky the same question nearly two years earlier, in a pretrial deposition. But von Spakovsky never corrected his expert report.

Under Ho’s questioning, von Spakovsky also acknowledged a false assertion he made in 2011. In a nationally syndicated column for McClatchy, von Spakovsky claimed a tight race in Missouri had been decided by the illegal votes of 50 Somali nationals. A month before the column was published, a Missouri state judge ruled that no such thing had happened.

On the stand, von Spakovsky claimed he had no knowledge of the ruling when he published the piece. He conceded that he never retracted the assertion.

Kobach, who watched the exchange without objection, had repeatedly made the same claim — even after the judge ruled it was false. In 2011, Kobach wrote a series of columns using the example as proof of the need for voter ID, publishing them in outlets ranging from the Topeka Capital-Journal to the Wall Street Journal and the Washington Post. In 2012, he made the claim in an article published in the Syracuse Law Review. In 2013, he wrote an op-ed for the Kansas City Star with the same example: “The election was stolen when Rizzo received about 50 votes illegally cast by citizens of Somalia.” None of those articles have ever been corrected.

Ultimately, Robinson would lacerate von Spakovsky’s testimony, much as she had Richman’s. Von Spakovsky’s statements, the judge wrote, were “premised on several misleading and unsupported examples” and included “false assertions.” As she put it, “His generalized opinions about the rates of noncitizen registration were likewise based on misleading evidence, and largely based on his preconceived beliefs about this issue, which has led to his aggressive public advocacy of stricter proof of citizenship laws.”

There was one other wobbly leg holding up the argument that voter fraud is rampant: the very meaning of the word “fraud.”

Kobach’s case, and the broader claim, rely on an extremely generous definition. Legal definitions of fraud require a person to knowingly be deceptive. But both Kobach and von Spakovsky characterized illegal ballots as “fraud” regardless of the intention of the voter.

Indeed, the nine convictions Kobach has obtained in Kansas are almost entirely made up of individuals who didn’t realize they were doing something wrong. For example, there were older voters who didn’t understand the restrictions and voted in multiple places where they owned property. There was also a college student who’d forgotten she’d filled out an absentee ballot in her home state before voting months later in Kansas. (She voted for Trump both times.)

Late in the trial, the ACLU presented Lorraine Minnite, a professor at Rutgers who has written extensively about voter fraud, as a rebuttal witness. Her book, “The Myth of Voter Fraud,” concluded that almost all instances of illegal votes can be chalked up to misunderstandings and administrative error.

Kobach sent his co-counsel, Garrett Roe, to cross-examine her. “It’s your view that what matters is the voter’s knowledge that his or her action is unlawful?” Roe asked. “In a definition of fraud, yes,” said Minnite. Roe pressed her about this for several questions, seemingly surprised that she wouldn’t refer to all illegal voting as fraud.

Minnite stopped him. “The word ‘fraud’ has meaning, and that meaning is that there’s intent behind it. And that’s actually what Kansas laws are with respect to illegal voting,” she said. “You keep saying my definition,” she said, putting finger quotes around “my.” “But, you know, it’s not like it’s a freak definition.”

Kobach had explored a similar line of inquiry with von Spakovsky, asking him if the list of 38 non-citizens he’d reviewed could be absolved of “fraud” because they may have lacked intent.

“No,” von Spakovsky replied, “I think any time a non-citizen registers, any time a non-citizen votes, they are — whether intentionally or by accident, I mean — they are defrauding legitimate citizens from a fair election.”

After Kobach concluded his questions, the judge began her own examination of von Spakovsky.

“I think it’s fair to say there’s a pretty good distinction in terms of how the two of you define fraud,” the judge said, explaining that Minnite focused on intent, while she understood von Spakovsky’s definition to include any time someone who wasn’t supposed to vote did so, regardless of reason. “Would that be a fair characterization?” she asked.

“Yes ma’am,” von Spakovsky replied.

The judge asked whether a greater number of legitimate voters would be barred from casting ballots under the law than fraudulent votes prevented. In that scenario, she asked, “Would that not also be defrauding the electoral process?” Von Spakovsky danced around the answer, asserting that one would need to answer that question in the context of the registration requirements, which he deemed reasonable.

The judge cut him off. “Well that doesn’t really answer my question,” she said, saying that she found it contradictory that he wanted to consider context when examining the burden of registration requirements, but not when examining the circumstances in which fraud was committed.

“When you’re talking about … non-citizen voting, you don’t want to consider that in context of whether that person made a mistake, whether a DMV person convinced them they should vote,” she said. Von Spakovsky allowed that not every improper voter should be prosecuted, but insisted that “each ballot they cast takes away the vote of and dilutes the vote of actual citizens who are voting. And that’s —”

The judge interrupted again. “So, the thousands of actual citizens that should be able to vote but who are not because of the system, because of this law, that’s not diluting the vote and that’s not impairing the integrity of the electoral process, I take it?” she said.

Von Spakovsky didn’t engage with the hypothetical. He simply didn’t believe it was happening. “I don’t believe that this requirement prevents individuals who are eligible to register and vote from doing so.” Later, on the stand, he’d tell Ho he couldn’t think of a single law in the country that he felt negatively impacted anyone’s ability to register or vote.

Robinson, in the end, strongly disagreed. As she wrote in her opinion, “the Court finds that the burden imposed on Kansans by this law outweighs the state’s interest in preventing noncitizen voter fraud, keeping accurate voter rolls, and maintaining confidence in elections. The burden is not just on a ‘few voters,’ but on tens of thousands of voters, many of whom were disenfranchised” by Kobach’s law. The law, she concluded, was a bigger problem than the one it set out to solve, acting as a “deterrent to registration and voting for substantially more eligible Kansans than it has prevented ineligible voters from registering to vote.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships, trucks, and preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important take-away. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's in a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

[Editor's note: today's guest post, by reporters at ProPublica, explores privacy and data collection issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

To an outsider, the fancy booths at last month’s health insurance industry gathering in San Diego aren’t very compelling. A handful of companies pitching “lifestyle” data and salespeople touting jargony phrases like “social determinants of health.”

But dig deeper and the implications of what they’re selling might give many patients pause: A future in which everything you do — the things you buy, the food you eat, the time you spend watching TV — may help determine how much you pay for health insurance.

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

Patient advocates warn that using unverified, error-prone “lifestyle” data to make medical assumptions could lead insurers to improperly price plans — for instance raising rates based on false information — or discriminate against anyone tagged as high cost. And, they say, the use of the data raises thorny questions that should be debated publicly, such as: Should a person’s rates be raised because algorithms say they are more likely to run up medical bills? Such questions would be moot in Europe, where a strict law took effect in May that bans trading in personal data.

This year, ProPublica and NPR are investigating the various tactics the health insurance industry uses to maximize its profits. Understanding these strategies is important because patients — through taxes, cash payments and insurance premiums — are the ones funding the entire health care system. Yet the industry’s bewildering web of strategies and inside deals often have little to do with patients’ needs. As the series’ first story showed, contrary to popular belief, lower bills aren’t health insurers’ top priority.

Inside the San Diego Convention Center last month, there were few qualms about the way insurance companies were mining Americans’ lives for information — or what they planned to do with the data.

The sprawling convention center was a balmy draw for one of America’s Health Insurance Plans’ marquee gatherings. Insurance executives and managers wandered through the exhibit hall, sampling chocolate-covered strawberries, champagne and other delectables designed to encourage deal-making.

Up front, the prime real estate belonged to the big guns in health data: The booths of Optum, IBM Watson Health and LexisNexis stretched toward the ceiling, with flat screen monitors and some comfy seating. (NPR collaborates with IBM Watson Health on national polls about consumer health topics.)

To understand the scope of what they were offering, consider Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. (UnitedHealth Group provides financial support to NPR.) The company says it uses the information to link patients’ medical outcomes and costs to details like their level of education, net worth, family structure and race. An Optum spokesman said the socioeconomic data is de-identified and is not used for pricing health plans.

Optum’s marketing materials also boast that it now has access to even more. In 2016, the company filed a patent application to gather what people share on platforms like Facebook and Twitter, and link this material to the person’s clinical and payment information. A company spokesman said in an email that the patent application never went anywhere. But the company’s current marketing materials say it combines claims and clinical information with social media interactions.

I had a lot of questions about this and first reached out to Optum in May, but the company didn’t connect me with any of its experts as promised. At the conference, Optum salespeople said they weren’t allowed to talk to me about how the company uses this information.

It isn’t hard to understand the appeal of all this data to insurers. Merging information from data brokers with people’s clinical and payment records is a no-brainer if you overlook potential patient concerns. Electronic medical records now make it easy for insurers to analyze massive amounts of information and combine it with the personal details scooped up by data brokers.

It also makes sense given the shifts in how providers are getting paid. Doctors and hospitals have typically been paid based on the quantity of care they provide. But the industry is moving toward paying them in lump sums for caring for a patient, or for an event, like a knee surgery. In those cases, the medical providers can profit more when patients stay healthy. More money at stake means more interest in the social factors that might affect a patient’s health.

Some insurance companies are already using socioeconomic data to help patients get appropriate care, such as programs to help patients with chronic diseases stay healthy. Studies show social and economic aspects of people’s lives play an important role in their health. Knowing these personal details can help them identify those who may need help paying for medication or help getting to the doctor.

But patient advocates are skeptical health insurers have altruistic designs on people’s personal information.

The industry has a history of boosting profits by signing up healthy people and finding ways to avoid sick people — called “cherry-picking” and “lemon-dropping,” experts say. Among the classic examples: A company was accused of putting its enrollment office on the third floor of a building without an elevator, so only healthy patients could make the trek to sign up. Another tried to appeal to spry seniors by holding square dances.

The Affordable Care Act prohibits insurers from denying people coverage based on pre-existing health conditions or charging sick people more for individual or small group plans. But experts said patients’ personal information could still be used for marketing, and to assess risks and determine the prices of certain plans. And the Trump administration is promoting short-term health plans, which do allow insurers to deny coverage to sick patients.

Robert Greenwald, faculty director of Harvard Law School’s Center for Health Law and Policy Innovation, said insurance companies still cherry-pick, but now they’re subtler. The center analyzes health insurance plans to see if they discriminate. He said insurers will do things like failing to include enough information about which drugs a plan covers — which pushes sick people who need specific medications elsewhere. Or they may change the things a plan covers, or how much a patient has to pay for a type of care, after a patient has enrolled. Or, Greenwald added, they might exclude or limit certain types of providers from their networks — like those who have skill caring for patients with HIV or hepatitis C.

If there were concerns that personal data might be used to cherry-pick or lemon-drop, they weren’t raised at the conference.

At the IBM Watson Health booth, Kevin Ruane, a senior consulting scientist, told me that the company surveys 80,000 Americans a year to assess lifestyle, attitudes and behaviors that could relate to health care. Participants are asked whether they trust their doctor, have financial problems, go online, or own a Fitbit and similar questions. The responses of hundreds of adjacent households are analyzed together to identify social and economic factors for an area.

Ruane said he has used IBM Watson Health’s socioeconomic analysis to help insurance companies assess a potential market. The ACA increased the value of such assessments, experts say, because companies often don’t know the medical history of people seeking coverage. A region with too many sick people, or with patients who don’t take care of themselves, might not be worth the risk.

Ruane acknowledged that the information his company gathers may not be accurate for every person. “We talk to our clients and tell them to be careful about this,” he said. “Use it as a data insight. But it’s not necessarily a fact.”

In a separate conversation, a salesman from a different company joked about the potential for error. “God forbid you live on the wrong street these days,” he said. “You’re going to get lumped in with a lot of bad things.”

The LexisNexis booth was emblazoned with the slogan “Data. Insight. Action.” The company said it uses 442 non-medical personal attributes to predict a person’s medical costs. Its cache includes more than 78 billion records from more than 10,000 public and proprietary sources, including people’s cellphone numbers, criminal records, bankruptcies, property records, neighborhood safety and more. The information is used to predict patients’ health risks and costs in eight areas, including how often they are likely to visit emergency rooms, their total cost, their pharmacy costs, their motivation to stay healthy and their stress levels.

People who downsize their homes tend to have higher health care costs, the company says. As do those whose parents didn’t finish high school. Patients who own more valuable homes are less likely to land back in the hospital within 30 days of their discharge. The company says it has validated its scores against insurance claims and clinical data. But it won’t share its methods and hasn’t published the work in peer-reviewed journals.

McCulley, LexisNexis’ director of strategic solutions, said predictions made by the algorithms about patients are based on the combination of the personal attributes. He gave a hypothetical example: A high school dropout who had a recent income loss and doesn’t have a relative nearby might have higher than expected health costs.

But couldn’t that same type of person be healthy? I asked.

“Sure,” McCulley said, with no apparent dismay at the possibility that the predictions could be wrong.
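McCulley's hypothetical can be caricatured as a simple additive score. The attribute names and weights below are invented for illustration; LexisNexis has not published its actual methods:

```python
# Invented weights for illustration only; LexisNexis's real model,
# reportedly built from 442 non-medical attributes, is unpublished.
ATTRIBUTE_WEIGHTS = {
    "no_high_school_diploma": 0.8,
    "recent_income_loss": 0.6,
    "no_relative_nearby": 0.5,
    "owns_home": -0.4,   # some attributes could lower a predicted cost
}

def predicted_cost_score(attributes):
    """Sum the weights of whichever attributes apply to a person.
    A higher score stands in for 'higher than expected health costs'."""
    return sum(weight for name, weight in ATTRIBUTE_WEIGHTS.items()
               if attributes.get(name, False))

# McCulley's hypothetical person: a high-school dropout with a recent
# income loss and no relative nearby scores high, even though the model
# knows nothing about that person's actual health.
dropout = {"no_high_school_diploma": True, "recent_income_loss": True,
           "no_relative_nearby": True}
print(predicted_cost_score(dropout) > predicted_cost_score({"owns_home": True}))
```

The sketch makes the objection in the exchange above concrete: the score moves with the attributes, not with the person's health, so a perfectly healthy dropout still scores as expensive.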

McCulley and others at LexisNexis insist the scores are used only to help patients get the care they need, not to determine how much someone would pay for health insurance. The company cited three different federal laws that, it said, restrict it and its clients from using the scores in that way. But privacy experts said none of the laws cited by the company bar the practice. The company backed off the assertions when I pointed out that the laws did not seem to apply.

LexisNexis officials also said the company’s contracts expressly prohibit using the analysis to help price insurance plans. They would not provide a contract. But I knew that in at least one instance a company was already testing whether the scores could be used as a pricing tool.

Before the conference, I’d seen a press release announcing that the largest health actuarial firm in the world, Milliman, was now using the LexisNexis scores. I tracked down Marcos Dachary, who works in business development for Milliman. Actuaries calculate health care risks and help set the price of premiums for insurers. I asked Dachary if Milliman was using the LexisNexis scores to price health plans and he said: “There could be an opportunity.”

The scores could allow an insurance company to assess the risks posed by individual patients and make adjustments to protect themselves from losses, he said. For example, he said, the company could raise premiums, or revise contracts with providers.

It’s too early to tell whether the LexisNexis scores will actually be useful for pricing, he said. But he was excited about the possibilities. “One thing about social determinants data — it piques your mind,” he said.

Dachary acknowledged the scores could also be used to discriminate. Others, he said, have raised that concern. As much as there could be positive potential, he said, “there could also be negative potential.”

It’s that negative potential that still bothers data analyst Erin Kaufman, who left the health insurance industry in January. The 35-year-old from Atlanta had earned her doctorate in public health because she wanted to help people, but one day at Aetna, her boss told her to work with a new data set.

To her surprise, the company had obtained personal information from a data broker on millions of Americans. The data contained each person’s habits and hobbies, like whether they owned a gun, and if so, what type, she said. It included whether they had magazine subscriptions, liked to ride bikes or run marathons. It had hundreds of personal details about each person.

The Aetna data team merged the data with the information it had on patients it insured. The goal was to see how people’s personal interests and hobbies might relate to their health care costs. But Kaufman said it felt wrong: The information about the people who knitted or crocheted made her think of her grandmother. And the details about individuals who liked camping made her think of herself. What business did the insurance company have looking at this information? “It was a dataset that really dug into our clients’ lives,” she said. “No one gave anyone permission to do this.”

In a statement, Aetna said it uses consumer marketing information to supplement its claims and clinical information. The combined data helps predict the risk of repeat emergency room visits or hospital admissions. The information is used to reach out to members and help them and plays no role in pricing plans or underwriting, the statement said.

Kaufman said she had concerns about the accuracy of drawing inferences about an individual’s health from an analysis of a group of people with similar traits. Health scores generated from arrest records, home ownership and similar material may be wrong, she said.

Pam Dixon, executive director of the World Privacy Forum, a nonprofit that advocates for privacy in the digital age, shares Kaufman’s concerns. She points to a study by the analytics company SAS, which worked in 2012 with an unnamed major health insurance company to predict a person’s health care costs using 1,500 data elements, including the investments and types of cars people owned.

The SAS study said higher health care costs could be predicted by looking at things like ethnicity, watching TV and mail order purchases.

“I find that enormously offensive as a list,” Dixon said. “This is not health data. This is inferred data.”

Data scientist Cathy O’Neil said drawing conclusions about health risks on such data could lead to a bias against some poor people. It would be easy to infer they are prone to costly illnesses based on their backgrounds and living conditions, said O’Neil, author of the book “Weapons of Math Destruction,” which looked at how algorithms can increase inequality. That could lead to poor people being charged more, making it harder for them to get the care they need, she said. Employers, she said, could even decide not to hire people with data points that could indicate high medical costs in the future.

O’Neil said the companies should also measure how the scores might discriminate against the poor, sick or minorities.

American policymakers could do more to protect people’s information, experts said. In the United States, companies can harvest personal data unless a specific law bans it, although California just passed legislation that could create restrictions, said William McGeveran, a professor at the University of Minnesota Law School. Europe, in contrast, passed a strict law called the General Data Protection Regulation, which went into effect in May.

“In Europe, data protection is a constitutional right,” McGeveran said.

Pasquale, the University of Maryland law professor, said health scores should be treated like credit scores. Federal law gives people the right to know their credit scores and how they’re calculated. If people are going to be rated by whether they listen to sad songs on Spotify or look up information about AIDS online, they should know, Pasquale said. “The risk of improper use is extremely high. And data scores are not properly vetted and validated and available for scrutiny.”

As I reported this story I wondered how the data vendors might be using my personal information to score my potential health costs. So, I filled out a request on the LexisNexis website for the company to send me some of the personal information it has on me. A week later a somewhat creepy, 182-page walk down memory lane arrived in the mail. Federal law only requires the company to provide a subset of the information it collected about me. So that’s all I got.

LexisNexis had captured details about my life going back 25 years, many that I’d forgotten. It had my phone numbers going back decades and my home addresses going back to my childhood in Golden, Colorado. Each location had a field to show whether the address was “high risk.” Mine were all blank. The company also collects records of any liens and criminal activity, which, thankfully, I didn’t have.

My report was boring, which isn’t a surprise. I’ve lived a middle-class life and grown up in good neighborhoods. But it made me wonder: What if I had lived in “high risk” neighborhoods? Could that ever be used by insurers to jack up my rates — or to avoid me altogether?

I wanted to see more. If LexisNexis had health risk scores on me, I wanted to see how they were calculated and, more importantly, whether they were accurate. But the company told me that if it had calculated my scores, it would have done so on behalf of its client, my insurance company. So, I couldn’t have them.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


European Regulators Fine Google $5 Billion For 'Breaching EU Antitrust Rules'

On Wednesday, European anti-trust regulators fined Google €4.34 billion (about U.S. $5 billion) and ordered the tech company to stop using its Android operating system software to block competition. ComputerWorld reported:

"The European Commission found that Google has abused its dominant market position in three ways: tying access to the Play store to installation of Google Search and Google Chrome; paying phone makers and network operators to exclusively install Google Search, and preventing manufacturers from making devices running forks of Android... Google won't let smartphone manufacturers install Play on their phones unless they also make its search engine and Chrome browser the defaults on their phones. In addition, they must only use a Google-approved version of Android. This has prevented companies like Amazon.com, which developed a fork of Android it calls FireOS, from persuading big-name manufacturers to produce phones running its OS or connecting to its app store..."

Reportedly, less than 10% of Android phone users download a different browser than the pre-installed default. Less than 1% use a different search app. View the archive of European Commission Android OS documents.

Yesterday, the European Commission announced on social media:

European Commission tweet: Google Android OS restrictions graphic.

European Commission tweet: Vestager comments.

And, The Guardian newspaper reported:

"Soon after Brussels handed down its verdict, Google announced it would appeal. "Android has created more choice for everyone, not less," a Google spokesperson said... Google has 90 days to end its "illegal conduct" or its parent company Alphabet could be hit with fines amounting to 5% of its daily [revenues] for each day it fails to comply. Wednesday’s verdict ends a 39-month investigation by the European commission’s competition authorities into Google’s Android operating system but it is only one part of an eight-year battle between Brussels and the tech giant."

According to the Reuters news service, a third EU case against Google, involving accusations that the tech company's AdSense advertising service blocks users from displaying search ads from competitors, is still ongoing.


The DIY Revolution: Consumers Alter Or Build Items Previously Not Possible. Is It A Good Thing?

Recent advances in technology allow consumers to alter, customize, or locally build items that previously weren't possible. These items are often referred to as Do-It-Yourself (DIY) products. You've probably heard "DIY" used to describe home repair and renovation projects on television. DIY now happens in some unexpected areas. Today's blog post highlights two of them.

DIY Glucose Monitors

Earlier this year, CNet described the bag that an eight-year-old patient carries with her every day:

"... It houses a Dexcom glucose monitor and a pack of glucose tablets, which work in conjunction with the sensor attached to her arm and the insulin pump plugged into her stomach. The final item in her bag was an iPhone 5S. It's unusual for such a young child to have a smartphone. But Ruby's iPhone, which connects via Bluetooth to her Dexcom monitor, allowing [her mother] to read it remotely, illustrates the way technology has transformed the management of diabetes from an entirely manual process -- pricking fingers to measure blood sugar, writing down numbers in a notebook, calculating insulin doses and injecting it -- to a semi-automatic one..."

Some people have access to these new technologies, but many don't. Others want more connectivity and better capabilities. So, some creative "hacking" has resulted:

"There are people who are unwilling to wait, and who embrace unorthodox methods. (You can find them on Twitter via the hashtag #WeAreNotWaiting.) The Nightscout Foundation, an online diabetes community, figured out a workaround for the Pebble Watch. Groups such as Nightscout, Tidepool and OpenAPS are developing open-source fixes for diabetes that give major medical tech companies a run for their money... One major gripe of many tech-enabled diabetes patients is that the two devices they wear at all times -- the monitor and the pump -- don't talk to each other... diabetes will never be a hands-off disease to manage, but an artificial pancreas is basically as close as it gets. The FDA approved the first artificial pancreas -- the Medtronic 670G -- in October 2017. But thanks to a little DIY spirit, people have had them for years."

CNet shared the experience of another tech-enabled patient:

"Take Dana Lewis, founder of the open-source artificial pancreas system, or OpenAPS. Lewis started hacking her glucose monitor to increase the volume of the alarm so that it would wake her in the night. From there, Lewis tinkered with her equipment until she created a closed-loop system, which she's refined over time in terms of both hardware and algorithms that enable faster distribution of insulin. It has massively reduced the "cognitive burden" on her everyday life... JDRF, one of the biggest global diabetes research charities, said in October that it was backing the open-source community by launching an initiative to encourage rival manufacturers like Dexcom and Medtronic to open their protocols and make their devices interoperable."
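In control terms, a "closed loop" just means the sensor reading feeds back into the dosing decision without a human in the middle. The toy sketch below illustrates that idea only; all numbers are invented, OpenAPS's real algorithms and safety checks are far more sophisticated, and nothing here is dosing guidance:

```python
# Toy illustration of a closed-loop controller, not OpenAPS's actual
# algorithm. All numbers are invented; this is not dosing guidance.
def correction_dose(glucose_mg_dl, target=110.0, sensitivity=50.0,
                    max_dose=2.0):
    """Proportional correction: units of insulin equal the excess glucose
    divided by a sensitivity factor (mg/dL lowered per unit of insulin),
    capped at a safety maximum and never negative."""
    excess = glucose_mg_dl - target
    if excess <= 0:
        return 0.0          # at or below target: request no insulin
    return min(excess / sensitivity, max_dose)

# One pass of the loop: a sensor reading comes in, a pump command goes out.
for reading in (95, 180, 400):
    print(reading, "mg/dL ->", correction_dose(reading), "units")
```

The appeal of the DIY systems is that this loop runs continuously, replacing the manual cycle of pricking, recording, calculating, and injecting described above.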

Convenience and affordability are huge drivers. As you might have guessed, there are risks:

"Hacking a glucose monitor is not without risk -- inaccurate readings, failed alarms or the wrong dose of insulin distributed by the pump could have fatal consequences... Lewis and the OpenAPS community encourage people to embrace the build-your-own-pancreas method rather than waiting for the tech to become available and affordable."

Are DIY glucose monitors a good thing? Some patients think so as a way to achieve convenient and affordable healthcare solutions. That might lead you to conclude anything DIY is an improvement. Right? Keep reading.

DIY Guns

Got a 3-D printer? If so, then you can print your own DIY gun. How did this happen? How did the USA get here? Wired explained:

"Five years ago, 25-year-old radical libertarian Cody Wilson stood on a remote central Texas gun range and pulled the trigger on the world’s first fully 3-D-printed gun... he drove back to Austin and uploaded the blueprints for the pistol to his website, Defcad.com... In the days after that first test-firing, his gun was downloaded more than 100,000 times. Wilson made the decision to go all in on the project, dropping out of law school at the University of Texas, as if to confirm his belief that technology supersedes law..."

The law intervened. Wilson stopped, took down his site, and then pursued a legal remedy:

"Two months ago, the Department of Justice quietly offered Wilson a settlement to end a lawsuit he and a group of co-plaintiffs have pursued since 2015 against the United States government. Wilson and his team of lawyers focused their legal argument on a free speech claim: They pointed out that by forbidding Wilson from posting his 3-D-printable data, the State Department was not only violating his right to bear arms but his right to freely share information. By blurring the line between a gun and a digital file, Wilson had also successfully blurred the lines between the Second Amendment and the First."

So, now you... anybody with an internet connection and a 3-D printer (and a computer-controlled milling machine for some advanced parts)... can produce their own DIY gun. No registration required. No licenses or permits. No training required. And, that's anyone anywhere in the world.

Oh, there's more:

"The Department of Justice's surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet. In the meantime, it gives Wilson a unique license to publish data about those weapons anywhere he chooses."

As you might have guessed, Wilson is re-launching his website, but this time with blueprints for more DIY weapons besides pistols: AR-15 rifles and other semi-automatic firearms. So, it will be easier for people to skirt federal and state gun laws. Is that a good thing?

You probably have some thoughts and concerns. I do. There are plenty of issues and questions. Are DIY products a good thing? Who is liable? How should laws be upgraded? How can society facilitate one set of DIY products and not the other? What related issues do you see? Any other notable DIY products?


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows identified users to accept or decline name labels for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos and videos with names attached, so it can perform the matching while suppressing name labels for users who declined or disabled them. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
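For readers curious about the mechanics, the matching step described above can be sketched generically (this is not Facebook's system): each face is encoded as a numeric vector, an unknown face is matched to the closest enrolled vector only if the similarity clears a confidence threshold, and otherwise the data is discarded. The names, vectors, and threshold below are invented:

```python
# Generic sketch of facial-template matching, not Facebook's actual
# system. Faces are encoded as numeric vectors; an unknown face matches
# an enrolled user only if similarity clears a confidence threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_face(unknown, enrolled, threshold=0.9):
    """enrolled: dict of name -> template vector for opted-in users.
    Returns the best-matching name, or None (i.e., delete the data)."""
    best_name, best_score = None, threshold
    for name, template in enrolled.items():
        score = cosine_similarity(unknown, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

enrolled = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.8, 0.6]}
print(match_face([0.88, 0.12, 0.01], enrolled))  # very close to alice's template
print(match_face([0.5, 0.5, 0.5], enrolled))     # no confident match -> None
```

Note the privacy crux: the comparison step requires holding templates for the enrolled faces, which is exactly the perpetual database critics object to.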

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in late 2017, Apple launched its iPhone X with the Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a brick-and-mortar store -- just as websites present privacy and terms-of-use policies. Otherwise, there is no notice or informed consent for shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


New Jersey to Suspend Prominent Psychologist for Failing to Protect Patient Privacy

[Editor's note: today's guest blog post, by reporters at ProPublica, explores privacy issues within the healthcare industry. The post is reprinted with permission.]

By Charles Ornstein, ProPublica

A prominent New Jersey psychologist is facing the suspension of his license after state officials concluded that he failed to keep details of mental health diagnoses and treatments confidential when he sued his patients over unpaid bills.

The state Board of Psychological Examiners last month upheld a decision by an administrative law judge that the psychologist, Barry Helfmann, “did not take reasonable measures to protect the confidentiality of his patients’ protected health information,” Lisa Coryell, a spokeswoman for the state attorney general’s office, said in an e-mail.

The administrative law judge recommended that Helfmann pay a fine and a share of the investigative costs. The board went further, ordering that Helfmann’s license be suspended for two years, Coryell wrote. During the first year, he will not be able to practice; during the second, he can practice, but only under supervision. Helfmann also will have to pay a $10,000 civil penalty, take an ethics course and reimburse the state for some of its investigative costs. The suspension is scheduled to begin in September.

New Jersey began to investigate Helfmann after a ProPublica article, published in The New York Times in December 2015, described the lawsuits and the information they contained. The allegations involved Helfmann’s patients as well as those of his colleagues at Short Hills Associates in Clinical Psychology, a New Jersey practice where he has been the managing partner.

Helfmann is a leader in his field, serving as president of the American Group Psychotherapy Association, and as a past president of the New Jersey Psychological Association.

ProPublica identified 24 court cases filed by Short Hills Associates from 2010 to 2014 over unpaid bills in which patients’ names, diagnoses and treatments were listed in documents. The defendants included lawyers, business people and a manager at a nonprofit. In cases involving patients who were minors, the lawsuits included children’s names and diagnoses.

The information was subsequently redacted from court records after a patient counter-sued Helfmann and his partners, the psychology group and the practice’s debt collection lawyers. The patient’s lawsuit was settled.

Helfmann has denied wrongdoing, saying his former debt collection lawyers were responsible for attaching patients’ information to the lawsuits. His current lawyer, Scott Piekarsky, said he intends to file an immediate appeal before the discipline takes effect.

"The discipline imposed is ‘so disproportionate as to be shocking to one’s sense of fairness’ under New Jersey case law," Piekarsky said in a statement.

Piekarsky also noted that the administrative law judge who heard the case found no need for any license suspension and raised questions about the credibility of the patient who sued Helfmann. "We feel this is a political decision due to Dr. Helfmann’s aggressive stance" in litigation, he said.

Helfmann sued the state of New Jersey and Joan Gelber, a senior deputy attorney general, claiming that he was not provided due process and equal protection under the law. He and Short Hills Associates sued his prior debt collection firm for legal malpractice. Those cases have been dismissed, though Helfmann has appealed.

Helfmann and Short Hills Associates also are suing the patient who sued him, as well as the man’s lawyer, claiming the patient and lawyer violated a confidential settlement agreement by talking to a ProPublica reporter and sharing information with a lawyer for the New Jersey attorney general’s office without providing advance notice. In court pleadings, the patient and his lawyer maintain that they did not breach the agreement. Helfmann brought all three of these lawsuits in state court in Union County.

Throughout his career, Helfmann has been an advocate for patient privacy, helping to push a state law limiting the information an insurance company can seek from a psychologist to determine the medical necessity of treatment. He also was a plaintiff in a lawsuit against two insurance companies and a New Jersey state commission, accusing them of requiring psychologists to turn over their treatment notes in order to get paid.

"It is apparent that upholding the ethical standards of his profession was very important to him," Carol Cohen, the administrative law judge, wrote. "Having said that, it appears that in the case of the information released to his attorney and eventually put into court papers, the respondent did not use due diligence in being sure that confidential information was not released and his patients were protected."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Researchers Find Mobile Apps Can Easily Record Screenshots And Videos of Users' Activities

New academic research highlights how easy it is for mobile apps to spy on consumers and violate their privacy. During a recent study to determine whether or not smartphones record users' conversations, researchers at Northeastern University (NU) found:

"... that some companies were sending screenshots and videos of user phone activities to third parties. Although these privacy breaches appeared to be benign, they emphasized how easily a phone’s privacy window could be exploited for profit."

The NU researchers tested 17,260 of the most popular mobile apps running on smartphones using the Android operating system. About 9,000 of the 17,260 apps had the ability to take screenshots. The vulnerability: screenshot and video captures could easily be used to record users' keystrokes, passwords, and related sensitive information:

"This opening will almost certainly be used for malicious purposes," said Christo Wilson, another computer science professor on the research team. "It’s simple to install and collect this information. And what’s most disturbing is that this occurs with no notification to or permission by users."

The NU researchers found one app already recording video of users' screen activity (links added):

"That app was GoPuff, a fast-food delivery service, which sent the screenshots to Appsee, a data analytics firm for mobile devices. All this was done without the awareness of app users. [The researchers] emphasized that neither company appeared to have any nefarious intent. They said that web developers commonly use this type of information to debug their apps... GoPuff has changed its terms of service agreement to alert users that the company may take screenshots of their use patterns. Google issued a statement emphasizing that its policy requires developers to disclose to users how their information will be collected."

May? A brief review of the Appsee site seems to confirm that video recordings of the screens on app users' mobile devices are integral to the service:

"RECORDING: Watch every user action and understand exactly how they use your app, which problems they're experiencing, and how to fix them. See the app through your users' eyes to pinpoint usability, UX and performance issues... TOUCH HEAT MAPS: View aggregated touch heatmaps of all the gestures performed in each screen in your app. Discover user navigation and interaction preferences... REALTIME ANALYTICS & ALERTS: Get insightful analytics on user behavior without pre-defining any events. Obtain single-user and aggregate insights in real-time..."

Sounds like a version of "surveillance capitalism" to me. According to the Appsee site, a variety of companies use the service, including eBay, Samsung, Virgin Airlines, The Weather Network, and several advertising networks. Plus, the Appsee Privacy Policy dated May 23, 2018 stated:

"The Appsee SDK allows Subscribers to record session replays of their end-users' use of Subscribers' mobile applications ("End User Data") and to upload such End User Data to Appsee’s secured cloud servers."

In this scenario, GoPuff is a subscriber and consumers using the GoPuff mobile app are end users. The Appsee SDK is software code embedded within the GoPuff mobile app. The researchers said that this vulnerability, "will not be closed until the phone companies redesign their operating systems..."

Data-analytics services like Appsee raise several issues. First, there seems to be little need for digital agencies to conduct traditional eye-tracking and usability test sessions, since companies can now record, upload, and archive what, when, where, and how often users swipe and select in-app content. Previously, users were invited to participate in user-testing sessions and were paid for their time.

Second, this in-app tracking and data collection amounts to perpetual, unannounced user testing. Previously, companies have gotten into plenty of trouble with their customers by performing secret user testing; especially when the service varies from the standard, expected configuration and the policies (e.g., privacy, terms of service) don't disclose it. Nobody wants to be a lab rat or crash-test dummy.

Third, surveillance agencies within several governments must be thrilled to learn of these new in-app tracking and spy tools, if they aren't already using them. A reasonable assumption is that Appsee also provides data to law enforcement upon demand.

Fourth, two of the researchers at Northeastern University are undergraduate students. Another startling disclosure:

"Coming into this project, I didn’t think much about phone privacy and neither did my friends," said Elleen Pan, who is the first author on the paper. "This has definitely sparked my interest in research, and I will consider going back to graduate school."

Given the tsunami of data breaches, privacy legislation in Europe, and demands by law enforcement for tech firms to build "back door" hacks into their mobile devices and smartphones, it is alarming that some college students "don't think much about phone privacy." This means that Pan and her classmates probably haven't read privacy and terms-of-service policies for the apps and sites they've used. Maybe they will now.

Let's hope so.

Consumers interested in GoPuff should closely read the service's privacy and Terms of Service policies, since the latter includes dispute resolution via binding arbitration and prevents class-action lawsuits.

Hopefully, future studies about privacy and mobile apps will explore further the findings by Pan and her co-researchers. Download the study titled, "Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications" (Adobe PDF) by Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes.


FTC Requests Input From The Public And Will Hold Hearings About 'Competition And Consumer Protection'

During the coming months, the U.S. Federal Trade Commission (FTC) will hold a series of meetings and seek input from the public about "Competition And Consumer Protection" and:

"... whether broad-based changes in the economy, evolving business practices, new technologies, or international developments might require adjustments to competition and consumer protection enforcement law, enforcement priorities, and policy."

The FTC expects to conduct 15 to 20 hearings starting in September 2018 and ending in January 2019. Before each topical hearing, the agency will seek input from the public. The list of topics the FTC seeks input about:

  1. "The state of antitrust and consumer protection law and enforcement, and their development, since the Pitofsky hearings;
  2. Competition and consumer protection issues in communication, information, and media technology networks;
  3. The identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring “platform” businesses;
  4. The intersection between privacy, big data, and competition;
  5. The Commission’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters;
  6. Evaluating the competitive effects of corporate acquisitions and mergers;
  7. Evidence and analysis of monopsony power, including but not limited to, in labor markets;
  8. The role of intellectual property and competition policy in promoting innovation; 
  9. The consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics;
  10. The interpretation and harmonization of state and federal statutes and regulations that prohibit unfair and deceptive acts and practices; and
  11. The agency’s investigation, enforcement, and remedial processes."

The public can submit written comments now through August 20, 2018. For more information, see the FTC site about each topic. Additional instructions for comment submissions:

"Each topic description includes issues of particular interest to the Commission, but comments need not be restricted to these subjects... the FTC will invite comments on the topic of each hearing session... The FTC will also invite public comment upon completion of the entire series of hearings. Public comments may address one or more of the above topics generally, or may address them with respect to a specific industry, such as the health care, high-tech, or energy industries... "

Comments must be submitted in writing. The public can submit comments online to the FTC, or via postal mail. Comments submitted via postal mail must include "Competition and Consumer Protection in the 21st Century Hearing, Project Number P181201" on both the comment and the envelope. Mail comments to:

Federal Trade Commission
Office of the Secretary
600 Pennsylvania Avenue NW., Suite CC–5610 (Annex C)
Washington, DC 20580

See the FTC website for instructions for courier deliveries.

The "light touch" enforcement approach by the Federal Communications Commission (FCC) with oversight of the internet, the repeal of broadband privacy, and the repeal of net neutrality repeal, has highlighted the importance of oversight and enforcement by the FTC for consumer protection.

Given the broad range of topical hearings and input it could receive, the FTC may consider and/or pursue major changes to its operations. What do you think?


Federal Investigation Into Facebook Widens. Company Stock Price Drops

The Boston Globe reported on Tuesday (links added):

"A federal investigation into Facebook’s sharing of data with political consultancy Cambridge Analytica has broadened to focus on the actions and statements of the tech giant and now involves three agencies, including the Securities and Exchange Commission, according to people familiar with the official inquiries.

Representatives for the FBI, the SEC, and the Federal Trade Commission have joined the Justice Department in its inquiries about the two companies and the sharing of personal information of 71 million Americans... The Justice Department and the other federal agencies declined to comment. The FTC in March disclosed that it was investigating Facebook over possible privacy violations..."

About 87 million persons were affected by the Facebook breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement against tech companies, like Google and Facebook.

After news broke about the wider probe, shares of Facebook stock lost about 18 percent of their value and then recovered somewhat, for a net drop of 2 percent. That 2 percent drop represents about $12 billion in valuation. Clearly, there will be more news (and stock price fluctuations) to come.

During the last few months, there has been plenty of news about Facebook.


Adidas Announced A 'Potential' Data Breach Affecting Online Shoppers in the United States

Adidas announced on June 28 a "potential" data breach affecting an undisclosed number of:

"... consumers who purchased on adidas.com/US... On June 26, Adidas became aware that an unauthorized party claims to have acquired limited data associated with certain Adidas consumers. Adidas is committed to the privacy and security of its consumers' personal data. Adidas immediately began taking steps to determine the scope of the issue and to alert relevant consumers. adidas is working with leading data security firms and law enforcement authorities to investigate the issue..."

The preliminary breach investigation found that contact information, usernames, and encrypted passwords were exposed or stolen. So far, no credit card or fitness information of consumers was "impacted." The company said it is continuing a forensic review and alerting affected customers.

While the company's breach announcement did not disclose the number of affected customers, CBS News reported that hackers may have stolen data about millions of customers. Fox Business reported that the Adidas:

"... hack was reported weeks after Under Armour’s health and fitness app suffered a security breach, which exposed the personal data of roughly 150 million users. The revealed information included the usernames, hashed passwords and email addresses of MyFitnessPal users."

It is critical to remember that this June 28th announcement was based upon a preliminary investigation. A completed breach investigation will hopefully determine and disclose any additional data elements exposed (or stolen), how the hackers penetrated the company's computer systems, which systems were penetrated, whether any internal databases were damaged/corrupted/altered, the total number of customers affected, specific fixes implemented so this type of breach doesn't happen again, and descriptive information about the cyber criminals.

This incident is also a reminder to consumers to never reuse the same password at several online sites. Cyber criminals are persistent, and will try a stolen password at several sites to see where else they can get in. It is no relief that encrypted passwords were stolen, because we don't yet know if the encryption keys were also stolen (making it easy for the hackers to decrypt the passwords). Not good.
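The distinction matters: if passwords are merely encrypted, one stolen key unlocks them all at once, whereas salted, slow hashing forces attackers to guess each password individually. Adidas has not disclosed how it protects passwords, so the following is only an illustrative sketch of the salted-hashing approach, using Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a slow, salted hash. The salt and iteration count are
    stored alongside the digest; they are not secrets."""
    salt = salt or os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Re-derive the hash from a login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, iters, digest))  # True
print(verify_password("wrong guess", salt, iters, digest))  # False
```

Because each password gets its own random salt and 200,000 hashing rounds, a thief holding the database must attack every entry separately, and there is no single key whose theft reverses them all.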

We also don't yet know what "contact information" means. That could be first name, last name, phone number, street address, e-mail address, mobile phone number, or some combination. If e-mail addresses were stolen, then breach victims could also experience phishing attacks where fraudsters try to trick victims into revealing bank account numbers, sign-in credentials, and other sensitive information.

If you received a breach notice from Adidas, please share it below while removing any sensitive, identifying information.


Money Transfer Scams Target Both Businesses And Consumers

Money transfer scams, also called wire transfer scams, target both businesses and consumers. The affected firms include both small and large businesses.

Businesses

The Federal Bureau of Investigation (FBI) calls these scams "Business E-mail Compromise" (BEC), since the fraudsters often target executives within a company with phishing e-mails designed to trick victims into revealing sensitive bank account and sign-in credentials (e.g., usernames, passwords):

"At its heart, BEC relies on the oldest trick in the con artist’s handbook: deception. But the level of sophistication in this multifaceted global fraud is unprecedented... Carried out by transnational criminal organizations that employ lawyers, linguists, hackers, and social engineers, BEC can take a variety of forms. But in just about every case, the scammers target employees with access to company finances and trick them into making wire transfers to bank accounts thought to belong to trusted partners—except the money ends up in accounts controlled by the criminals."

From January 2015 to February 2017, there was a 1,300 percent increase in financial losses due to these scams, totaling $3 billion. To trick victims, criminals use a variety of online methods including spear-phishing, social engineering, identity theft, e-mail spoofing, and malware. (If these terms are unfamiliar, then you probably don't know enough to protect yourself.) Malware, such as computer viruses, is often embedded in documents attached to e-mail messages -- another reason not to open e-mail attachments from strangers.

Forbes Magazine reported in April:

"Fraudsters target the CEO's and CFO's at various companies and hack their computers. They collect enough information to learn the types of billing the company pays, who the payee's are and the average balances paid. They then spoof a customer or, in other words, take their identity, and bill the company with wire transfer instructions to a scam bank account."

Some criminals are particularly crafty: they pretend to be a valid customer, client, or vendor, and use a slightly altered sender's e-mail address, hoping the victim won't notice. This technique is successful more often than you might think. The scammer's address may differ from the valid one by a single character -- say, a letter swapped for a look-alike digit, or two letters transposed. If you don't spot the alteration, you've just wired money directly to the criminal's offshore account instead of to a valid customer, client, or vendor.
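One simple, automatable defense against this trick is to compare a sender's domain against a list of known-good vendor domains and flag near misses that are not exact matches. A minimal sketch in Python (the domain names and similarity threshold here are hypothetical, chosen for illustration):

```python
import difflib
from typing import Optional

# Hypothetical list of trusted vendor/customer domains
KNOWN_GOOD = {"acmesupply.com", "bigcorp.com"}

def flag_lookalike(sender: str, threshold: float = 0.85) -> Optional[str]:
    """Return a warning if the sender's domain nearly -- but not exactly --
    matches a trusted domain; return None for exact matches or clear strangers."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in KNOWN_GOOD:
        return None  # exact match: trusted
    for good in KNOWN_GOOD:
        similarity = difflib.SequenceMatcher(None, domain, good).ratio()
        if similarity >= threshold:
            return f"'{domain}' looks suspiciously like trusted domain '{good}'"
    return None  # not similar to anything trusted

# 'acmesupp1y.com' (digit 1 for letter l) is flagged; the real domain is not
print(flag_lookalike("billing@acmesupp1y.com"))
print(flag_lookalike("billing@acmesupply.com"))  # None
```

A fuzzy string match like this catches single-character swaps and transpositions, though real mail-filtering products combine it with other signals (sender reputation, DMARC alignment, and so on).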

Scammers can obtain executives' e-mail addresses and information from unprotected pages on social networking sites and/or from data breaches. So, the data breaches at Under Armour, Equifax, Fresenius, Uber, the Chicago Board of Elections, Yahoo, Nationwide, Verizon, and others could easily have provided criminals with plenty of stolen personal data to do damage: impersonating executives, business associates, and/or coworkers. Much of the stolen information is resold by criminals to other criminals. Trading stolen data is what many cyber criminals do.

There are several things executives can do to protect themselves and their business's money. Learn to recognize money transfer scams and phishing e-mails. Often, bogus e-mails or text messages contain spelling errors (e.g., in the message body) and/or a request to immediately wire an unusually large amount of money. Most importantly, the FBI recommends:

"The best way to avoid being exploited is to verify the authenticity of requests to send money by walking into the CEO’s office or speaking to him or her directly on the phone. Don’t rely on e-mail alone."

That means don't rely upon text messages either.

Consumers

Wiring money is like sending cash. To avoid losing money, it is important for consumers to learn to recognize money transfer scams, too. There are several versions, according to the U.S. Federal Trade Commission (FTC):

"1. You just won a prize but you have to pay fees to get the prize
2. You need to pay for something you just bought online before they send it
3. A friend is in trouble and needs your help
4. You got a check for too much money and you need to send back the extra"

Regular readers of this blog are already familiar with #4 -- also called "check scams." Instead of paper checks, scammers have upgraded to prepaid cards and/or wire transfers. The FTC also advises consumers to pause before doing anything, and then:

  • "If the person claims (via e-mail) to need money for an emergency, call them first. Call another family member. Verify first if something truly happened.
  • If the check received is too much money, call your bank before you deposit the check.  Ask your bank what they think about wiring money back to someone.
  • If the e-mail or phone caller says you received an inheritance or prize, "you do not have to pay for a prize. Ever.  Did they say you have an inheritance? Talk to someone you trust. What does that person think?"

If you have already sent money to a scammer, it's gone and you probably won't get it back. So, file a complaint with the FTC. Chances are the scammer will contact you again, since they (or their associates) were successful already. Don't give them any more money.