Federal Reserve Board Fined Citigroup For Mishandling Residential Mortgages

The Federal Reserve Board (FRB) announced on Friday that it had fined Citigroup $8.6 million for the "improper execution of residential mortgage-related documents" in a subsidiary. The announcement explained:

"The $8.6 million penalty addresses the deficient execution and notarization of certain mortgage-related affidavits prepared by a subsidiary, CitiFinancial. The improper practices occurred in 2015 and were corrected. CitiFinancial exited the mortgage servicing business in 2017.

Also on Friday, the Board announced the termination of an enforcement action from 2011 against Citigroup and CitiFinancial related to residential mortgage loan servicing. The termination of this action was based on evidence of sustainable improvements."

In 2014, Citigroup paid $7 billion to settle allegations by the Department of Justice (DOJ) and several state attorneys general (AGs) that the bank misled investors about toxic mortgage-backed securities. So, sloppy or shoddy handling of mortgage paperwork will get a bank fined. Good. There must be consequences when consumers are abused.

Earlier this month, Wells Fargo admitted to software bugs in its systems which led the bank to accidentally foreclose on residential homeowners it shouldn't have. About 400 homeowners lost their homes. Countless consumers' credit ratings were wrecked. That sounds like shabby mortgage paperwork handling, too -- definitely worth a larger fine. What do you think?


Wells Fargo Accidentally Foreclosed on Homeowners. 400 Customers Lost Their Homes

Earlier this week, Wells Fargo Bank admitted that it accidentally foreclosed on nearly 400 homeowners it shouldn't have due to a "software glitch." The San Francisco Business Times reported:

"Nearly 400 Wells Fargo customers lost their homes when they were accidentally foreclosed on after a software glitch denied them the ability to modify their mortgages as they sought federal aid, the bank disclosed in a regulatory filing... The bank apologized and has set aside $8 million to compensate those affected by the glitch, which occurred from 2010 to 2015... the software mistake miscalculated customers' eligibility for mortgage modifications. The error caused about 625 customers to be denied loan modifications they sought from a federal program to help homeowners avoid foreclosures."

The $8 million set aside is one small step towards rebuilding consumers' trust. It seems that the bank and its executives have a nasty habit of alleged wrongdoing that often results in fines and settlement agreements. Earlier this month, the U.S. Department of Justice announced a $2 billion settlement agreement where:

"... Wells Fargo Bank, N.A. and several of its affiliates (Wells Fargo) will pay a civil penalty of $2.09 billion under the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) based on the bank’s alleged origination and sale of residential mortgage loans that it knew contained misstated income information and did not meet the quality that Wells Fargo represented. Investors, including federally insured financial institutions, suffered billions of dollars in losses from investing in residential mortgage-backed securities (RMBS) containing loans originated by Wells Fargo... The United States alleged that, in 2005, Wells Fargo began an initiative to double its production of subprime and Alt-A loans. As part of that initiative, Wells Fargo loosened its requirements for originating stated income loans – loans where a borrower simply states his or her income without providing any supporting income documentation... despite its knowledge that a substantial portion of its stated income loans contained misstated income, Wells Fargo failed to disclose this information, and instead reported to investors false debt-to-income ratios in connection with the loans it sold. Wells Fargo also allegedly heralded its fraud controls while failing to disclose the income discrepancies its controls had identified."

Sadly, there's plenty more. In April, federal regulators at the Consumer Financial Protection Bureau (CFPB) and the Office of the Comptroller of the Currency (OCC) assessed a $1 billion fine against the bank for violations of the "Consumer Financial Protection Act (CFPA) in the way it administered a mandatory insurance program related to its auto loans..."

In 2016, the bank paid a $185 million fine for alleged unlawful sales practices in which its employees created phony accounts to game an internal sales compensation system. While the bank's CEO was let go and 5,300 workers were fired due to that scandal, bad behavior and poor executive decisions seem to continue.

In August of 2017, the results of an internal investigation of auto insurance policies sold from 2012 to 2016 found that thousands of the bank's customers were forced to buy unneeded and unwanted auto insurance.

The latest incident raises more questions:

  • How does a "software glitch" go undetected and unfixed for five years -- or longer?
  • Where were the quality assurance and software testing processes?
  • Why did post-implementation audits fail to detect the errors?
  • Were any employees reprimanded, demoted, or fired? And if none, why?
  • What specific changes are being implemented to prevent future software glitches?
  • How will the damaged credit histories of foreclosed homeowners be repaired?

Often, all or a portion of the settlement agreements are tax deductible. This both lessens the fines' impacts and shifts the burden to taxpayers. I hope that as regulators pursue solutions, tax-deductible settlements are not repeated. What are your opinions?


Keep An Eye On Facebook's Moves To Expand Its Collection Of Financial Data About Its Users

On Monday, the Wall Street Journal reported that the social media giant had approached several major banks to share their detailed financial information about consumers in order "to boost user engagement." Reportedly, Facebook approached JPMorgan Chase, Wells Fargo, Citigroup, and U.S. Bancorp. And, the detailed financial information sought included debit/credit/prepaid card transactions and checking account balances.

The Reuters news service also reported on the talks. The Reuters story mentioned the above banks, plus PayPal and American Express. Then, in a reply, Facebook said that the Wall Street Journal news report was wrong. TechCrunch reported:

"Facebook spokesperson Elisabeth Diana tells TechCrunch it’s not asking for credit card transaction data from banks and it’s not interested in building a dedicated banking feature where you could interact with your accounts. It also says its work with banks isn’t to gather data to power ad targeting, or even personalize content... Facebook already lets Citibank customers in Singapore connect their accounts so they can ping their bank’s Messenger chatbot to check their balance, report fraud or get customer service’s help if they’re locked out of their account... That chatbot integration, which has no humans on the other end to limit privacy risks, was announced last year and launched this March. Facebook works with PayPal in more than 40 countries to let users get receipts via Messenger for their purchases. Expansions of these partnerships to more financial services providers could boost usage of Messenger by increasing its convenience — and make it more of a centralized utility akin to China’s WeChat."

There's plenty in the TechCrunch story. Reportedly, Diana's statement said that banks approached Facebook, and that it already partners:

"... with banks and credit card companies to offer services like customer chat or account management. Account linking enables people to receive real-time updates in Facebook Messenger where people can keep track of their transaction data like account balances, receipts, and shipping updates... The idea is that messaging with a bank can be better than waiting on hold over the phone – and it’s completely opt-in. We’re not using this information beyond enabling these types of experiences – not for advertising or anything else. A critical part of these partnerships is keeping people’s information safe and secure."

What to make of this? First, it really doesn't matter who approached whom. There's plenty of history. Way back in 2012, a German credit reporting agency approached Facebook. So, the financial sector is fully aware of the valuable data collected by Facebook.

Second, users doing business on the platform have already given Facebook permission to collect transaction data. Third, while Facebook's reply was about its users generally, its statement said "no" but sounded more like a "yes." Why? Basically, "account linking" or the convenience of purchase notifications is the hook for collecting users' financial transaction data. Existing practices, such as fitness apps and music sharing, show how "account linking" is already used for data collection. Whatever users share on the platform allows Facebook to collect that information.

Fourth, the push to collect more banking data appears at best poorly timed, and at worst -- arrogant. Facebook is still trying to recover and regain users' trust after 87 million persons were affected by the massive data breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement on tech companies, like Google and Facebook. Facebook has stumbled as its screening to identify political ads by politicians has incorrectly flagged news sites. Facebook CEO Mark Zuckerberg didn't help matters with his bumbling comments while failing to explain his company's stumbles to identify and prevent fake news.

Gary Cohn, President Donald Trump's former chief economic adviser, sharply criticized social media companies, including Facebook, for allowing fake news:

"In 2008 Facebook was one of those companies that was a big platform to criticize banks, they were very out front of criticizing banks for not being responsible citizens. I think banks were more responsible citizens in 2008 than some of the social media companies are today."

So, it seems wise to keep an eye on Facebook as it attempts to expand its data collection of consumers' financial information. Fifth, banks and banking executives bear some responsibility, too. A guest post on Forbes explained:

"Whether this [banking] partnership pans or not, the Facebook plans are a reminder that banks sit on mountains of wealth much more valuable than money. Because of the speed at which tech giants move, banks must now make sure their clients agree on who owns their data, consent to the use of them, and understand with who they are shared. For that, it is now or never... In the financial industry, trust between a client and his provider is of primary importance. You can’t sell a customer’s banking data in the same way you sell his or her internet surfing behavior. Finance executives understand this: they even see the appropriate use of customer data as critical to financial stability. It is now or never to define these principles on the use of customer data... It’s why we believe new binding guidelines such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act are welcome, even if they have room for improvement... A report by the US Treasury published earlier this week called on Congress to enact a federal data security and breach notification law to protect consumer financial data. The principles outlined above can serve as guidance to lawmakers drafting legislation, and bank executives considering how to respond to advances by Facebook and other big techs..."

Consumers should control their data -- especially financial data. If those rules are not put in place, then consumers have truly lost control of the sensitive personal and financial information that describes them. What are your opinions?


How Well Do Americans Distinguish Facts From Opinions? People With These 3 Skills Do The Best

The current fast-paced news environment, multitude of online sources, and the rise of "fake news" all place a premium upon being able to distinguish facts from opinions. And some opinions are also rumors or lies. Nobody wants to be duped, as the shooter in the 2016 Washington pizzeria attack was. Nobody wants to waste their votes based upon misinformation.

How well do people in the United States distinguish facts from opinions? Earlier this year, the Pew Research Center conducted a survey to determine:

"... whether members of the public can recognize news as factual – something that’s capable of being proved or disproved by objective evidence – or as an opinion that reflects the beliefs and values of whoever expressed it."

Overall findings were not encouraging:

"The main portion of the study, which measured the public’s ability to distinguish between five factual statements and five opinion statements, found that a majority of Americans correctly identified at least three of the five statements in each set. But this result is only a little better than random guesses. Far fewer Americans got all five correct, and roughly a quarter got most or all wrong."

The survey of 5,035 U.S. adults was conducted between February 22 and March 8, 2018. Another key finding: people with certain skills outperformed others who lacked those skills:

"Those with high political awareness, those who are very digitally savvy and those who place high levels of trust in the news media are better able than others to accurately identify news-related statements as factual or opinion... 36% of Americans with high levels of political awareness (those who are knowledgeable about politics and regularly get political news) correctly identified all five factual news statements, compared with about half as many (17%) of those with low political awareness. Similarly, 44% of the very digitally savvy (those who are highly confident in using digital devices and regularly use the internet) identified all five opinion statements correctly versus 21% of those who are not as technologically savvy... Trust in those who do the reporting also matters in how that statement is interpreted. Almost four-in-ten Americans who have a lot of trust in the information from national news organizations (39%) correctly identified all five factual statements, compared with 18% of those who have not much or no trust."

The politically aware, the digitally savvy, and those more trusting of the news media fare better at distinguishing facts from opinions. See the table in the Pew Research survey findings for details, which also apply across political parties:

"Both Republicans and Democrats show a propensity to be influenced by which side of the aisle a statement appeals to most. For example, members of each political party were more likely to label both factual and opinion statements as factual when they appealed more to their political side."

The study also investigated whether the news source brand affected people's ability to distinguish facts from opinions:

"Overall, attributing the statements to news outlets had a limited impact on statement classification... Members of the two parties were as likely as each other to correctly classify the factual statements when no source was attributed or when USA Today or The New York Times was attributed. Labeling statements with a news outlet had no impact on how Republicans or Democrats classified the opinion statements."

When the source was attributed to Fox News, "Republicans were modestly more likely than Democrats to accurately classify the three factual statements... correspondingly, Democrats were modestly less likely than Republicans to do so."

Another finding:

"When Americans see a news statement as factual, they overwhelmingly also believe it to be accurate. This is true for both statements they correctly and incorrectly identified as factual, though small portions of the public did call statements both factual and inaccurate."

Many people I know strongly believe that persons in the other political party are misinformed and/or misled by their reliance upon opinions, rumors, and inaccurate information; while persons in their political party are uniquely informed without reliance upon opinions, rumors, and inaccurate information. We now know that belief isn't accurate.


New York State Tells Charter To Leave Due To 'Persistent Non-Compliance And Failure To Live Up To Promises'

The New York State Public Service Commission (NYPSC) announced on Friday that it has revoked its approval of the 2016 merger agreement between Charter Communications, Inc. and Time Warner Cable, Inc. because:

"... Charter, doing business as Spectrum has — through word and deed — made clear that it has no intention of providing the public benefits upon which the Commission's earlier [merger] approval was conditioned. In addition, the Commission directed Commission counsel to bring an enforcement action in State Supreme Court to seek additional penalties for Charter's past failures and ongoing non-compliance..."

Charter, the largest cable provider in the State, provides digital cable television, broadband internet and VoIP telephone services to more than two million subscribers in more than 1,150 communities. It provides services to consumers in Buffalo, Rochester, Syracuse, Albany and four boroughs in New York City: Manhattan, Staten Island, Queens and Brooklyn. The planned expansion could have increased its subscriber base to five million in the state.

Charter provides services in 41 states: Alabama, Arizona, California, Colorado, Connecticut, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Massachusetts, Michigan, Minnesota, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, Wisconsin, and Wyoming.

A unit of the Department of Public Service, the NYPSC describes its mission as "to ensure affordable, safe, secure, and reliable access to electric, gas, steam, telecommunications, and water services for New York State’s residential and business consumers, while protecting the natural environment." Its announcement listed Spectrum's failures and non-compliance:

"1. The company’s repeated failures to meet deadlines;
2. Charter’s attempts to skirt obligations to serve rural communities;
3. Unsafe practices in the field;
4. Its failure to fully commit to its obligations under the 2016 merger agreement; and
5. The company’s purposeful obfuscation of its performance and compliance obligations to the Commission and its customers."

The announcement provided details:

"On Jan. 8, 2016, the Commission approved Charter’s acquisition of Time Warner. To obtain approval, Charter agreed to a number of conditions required by the Commission to advance the public interest, including delivering broadband speed upgrades to 100 Mbps statewide by the end of 2018, and 300 Mbps by the end of 2019, and building out its network to pass an additional 145,000 un-served or under-served homes and businesses in the State's less densely populated areas within four years... Despite missing every network expansion target since the merger was approved in 2016, Charter has falsely claimed in advertisements it is exceeding its commitments to the State and is on track to deliver its network expansion. This led to the NYPSC’s general counsel referring a false advertising claim to the Attorney General’s office for enforcement... By its own admission, Charter has failed to meet its commitment to expand its service network... Its failure to meet its June 18, 2018 target by more than 40 percent is only the most recent example. Rather than accept responsibility Charter has tried to pass the blame for its failure on other companies, such as utility pole owners..."

The NYPSC has already levied $3 million in fines against Charter. The latest action basically boots Charter out of the State:

"Charter is ordered to file within 60 days a plan with the Commission to ensure an orderly transition to a successor provider(s). During the transition process, Charter must continue to comply with all local franchises it holds in New York State and all obligations under the Public Service Law and the NYPSC regulations. Charter must ensure no interruption in service is experienced by customers, and, in the event that Charter does not do so, the NYPSC will take further steps..."

Of course, executives at Charter have a different view of the situation. NBC New York reported:

"In the weeks leading up to an election, rhetoric often becomes politically charged. But the fact is that Spectrum has extended the reach of our advanced broadband network to more than 86,000 New York homes and businesses since our merger agreement with the PSC. Our 11,000 diverse and locally based workers, who serve millions of customers in the state every day, remain focused on delivering faster and better broadband to more New Yorkers, as we promised..."


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, the facial recognition software by Amazon, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes aside about politicians, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

A list of the 28 Congressional legislators misidentified by Amazon Rekognition appears in the ACLU study. With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer of Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color. These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?
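As a quick aside, the implied error rate from the ACLU test is simple to check (a back-of-the-envelope calculation, nothing more):

```python
# Quick check of the error rate implied by the ACLU test:
# 28 false matches out of 535 members of Congress.
false_matches = 28
members_of_congress = 535

error_rate = false_matches / members_of_congress * 100
print(f"Implied error rate: {error_rate:.2f}%")  # Implied error rate: 5.23%
```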

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon performed on Rekognition, with details by race, gender, and age;
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition;
  • The list of all law enforcement agencies which have used or currently use Rekognition;
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices;
  • The protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons;
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA);
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal uses, and what actions Amazon has taken to correct any abuses;
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. "Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U. had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend 95 percent similarity scores, but the public does not know whether police departments actually use the higher setting, or use it consistently across all types of criminal investigations. Plus, the CPT report cast doubt on the human "backstop" intervention on which Amazon's reply seems to rely heavily.
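The threshold mechanics Amazon describes are easy to see in a toy sketch. The names and similarity scores below are invented for illustration (real systems such as Amazon Rekognition return a similarity score for each candidate match); the point is simply that lowering the cutoff from 95 to 80 percent admits weaker matches:

```python
# Toy illustration of how a similarity ("confidence") threshold filters
# face-match candidates. All names and scores here are hypothetical.

def matches_above(candidates, threshold):
    """Return the candidates whose similarity score meets the threshold."""
    return [name for name, score in candidates if score >= threshold]

# Hypothetical candidate matches with similarity scores (percent).
candidates = [
    ("person_a", 99.1),
    ("person_b", 96.4),
    ("person_c", 88.0),
    ("person_d", 81.5),
    ("person_e", 79.9),
]

# At the 80 percent default, four candidates count as "matches"...
print(matches_above(candidates, 80))  # ['person_a', 'person_b', 'person_c', 'person_d']

# ...but at the recommended 95 percent, only two survive.
print(matches_above(candidates, 95))  # ['person_a', 'person_b']
```

The gap between those two lists is exactly the set of weaker matches that the looser default setting admits, which is why the choice of threshold matters so much in a law-enforcement context.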

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


How the Case for Voter Fraud Was Tested — and Utterly Failed

[Editor's note: today's blog post, by reporters at ProPublica, explores the results of a trial in Kansas about the state's voter-ID laws and claims of voter fraud. It is reprinted with permission.]

By Jessica Huseman, ProPublica

In the end, the decision seemed inevitable. After a seven-day trial in Kansas City federal court in March, in which Kansas Secretary of State Kris Kobach needed to be tutored on basic trial procedure by the judge and was found in contempt for his “willful failure” to obey a ruling, even he knew his chances were slim. Kobach told The Kansas City Star at the time that he expected the judge would rule against him (though he expressed optimism in his chances on appeal).

Sure enough, federal Judge Julie Robinson overturned the law that Kobach was defending as lead counsel for the state, dealing him an unalloyed defeat. The statute, championed by Kobach and signed into law in 2013, required Kansans to present proof of citizenship in order to register to vote. The American Civil Liberties Union sued, contending that the law violated the National Voter Registration Act (AKA the “motor voter” law), which was designed to make it easy to register.

The trial had a significance that extends far beyond the Jayhawk state. One of the fundamental questions in the debate over alleged voter fraud — whether a substantial number of non-citizens are in fact registering to vote — was one of two issues to be determined in the Kansas proceedings. (The second was whether there was a less burdensome solution than what Kansas had adopted.) That made the trial a telling opportunity to remove the voter fraud claims from the charged, and largely proof-free, realms of political campaigns and cable news shoutfests and examine them under the exacting strictures of the rules of evidence.

That’s precisely what occurred, and according to Robinson, an appointee of George W. Bush, the proof that voter fraud is widespread was utterly lacking. As the judge put it, “the court finds no credible evidence that a substantial number of non-citizens registered to vote” even under the previous law, which Kobach had claimed was weak.

For Kobach, the trial should’ve been a moment of glory. He’s been arguing for a decade that voter fraud is a national calamity. Much of his career has been built on this issue, along with his fervent opposition to illegal immigration. (His claim is that unlawful immigrants are precisely the ones voting illegally.) Kobach, who also co-chaired the Trump administration’s short-lived commission on voter fraud, is perhaps the individual most identified with the cause of sniffing out and eradicating phony voter registration. He’s got a gilded resume, with degrees from Harvard University, Yale Law School and the University of Oxford, and is seen as both the intellect behind the cause and its prime advocate. Kobach has written voter laws in other jurisdictions and defended them in court. If anybody ever had time to marshal facts and arguments before a trial, it was Kobach.

But things didn’t go well for him in the Kansas City courtroom, as Robinson’s opinion made clear. Kobach’s strongest evidence of non-citizen registration was anemic at best: Over a 20-year period, fewer than 40 non-citizens had attempted to register in one Kansas county that had 130,000 voters. Most of those 40 improper registrations were the result of mistakes or confusion rather than intentional attempts to mislead, and only five of the 40 managed to cast a vote.

One of Kobach’s own experts even rebutted arguments made by both Kobach and President Donald Trump. The expert testified that a handful of improper registrations could not be extrapolated to conclude that 2.8 million fraudulent votes — roughly, the gap between Hillary Clinton and Trump in the popular vote tally — had been cast in the 2016 presidential election. Testimony from a second key expert for Kobach also fizzled.

As the judge’s opinion noted, Kobach insisted the meager instances of cheating revealed at trial are just “the tip of the iceberg.” As she explained, “This trial was his opportunity to produce credible evidence of that iceberg, but he failed to do so.” Dismissing the testimony by Kobach’s witnesses as unpersuasive, Robinson drew what she called “the more obvious conclusion that there is no iceberg; only an icicle largely created by confusion and administrative error.”

By the time the trial was over, Kobach, a charismatic 52-year-old whose broad shoulders and imposing height make him resemble an aging quarterback, seemed to have shrunk inside his chair at the defense table.

But despite his defeat, Kobach’s causes — restricting immigration and tightening voting requirements — seem to be enjoying favorable tides elsewhere. Recent press accounts noted Kobach’s role in restoring a question about citizenship, abandoned since 1950, to U.S. Census forms for 2020. And the Supreme Court ruled on June 11 that the state of Ohio can purge voters from its rolls when they fail to vote even a single time and don’t return a mailing verifying their address, a provision that means more voters will need to re-register and prove their eligibility again.

For his own part, Kobach is now a candidate for governor of Kansas, running neck and neck with the incumbent in polls for the Republican primary on Aug. 7. It’s not clear whether the verdict will affect his chances — or whether it will lead him and others to quietly retreat from claims of voter fraud. But the judge’s opinion and expert interviews reveal that Kobach effectively put the concept of mass voter fraud to the test — and the evidence crumbled.

Perhaps it was an omen. Before Kobach could enter the courtroom inside the Robert J. Dole U.S. Courthouse each day, he had to pass through a hallway whose walls featured a celebratory display entitled “Americans by Choice: The Story of Immigration and Citizenship in Kansas.” Photographs of people who’d been sworn in as citizens in that very courthouse were superimposed on the translucent window shades.

Public interest in the trial was high. The seating area quickly filled to capacity on the first day of trial on the frigid morning of March 6. The jury box was opened to spectators; it wouldn’t be needed, as this was a bench trial. Those who couldn’t squeeze in were sent to a lower floor, where a live feed had been prepared in a spillover room.

From the moment the trial opened, Kobach and his co-counsels in the Kansas secretary of state’s office, Sue Becker and Garrett Roe, stumbled over the most basic trial procedures. Their mistakes antagonized the judge. “Evidence 101,” Robinson snapped, only minutes into the day, after Kobach’s team attempted to improperly introduce evidence. “I’m not going to do it.”

Matters didn’t improve for Kobach from there.

Throughout the trial, his team’s repeated mishaps and botched cross examinations cost hours of the court’s time. Robinson was repeatedly forced to step into the role of law professor, guiding Kobach, Becker and Roe through courtroom procedure. “Do you know how to do the next step, if that’s what you’re going to do?” the judge asked Becker at one point, as she helped her through the steps of impeaching a witness. “We’re going to follow the rules of evidence here.”

Becker often seemed nervous. She took her bright red glasses off and on. At times she burst into nervous chuckles after a misstep. She laughed at witnesses, skirmished with the judge and even taunted the lawyers for the ACLU. “I can’t wait to ask my questions on Monday!” she shouted at the end of the first week, jabbing a finger in the direction of Dale Ho, the lead attorney for the plaintiffs. Ho rolled his eyes.

Roe was gentler — deferential, even. He often admitted he didn’t know what step came next, asking the judge for help. “I don’t — I don’t know if this one is objectionable. I hope it’s not,” he offered at one point, as he prepared to ask a question following a torrent of sustained objections. “I’ll let you know,” an attorney for the plaintiffs responded, to a wave of giggles in the courtroom. On the final day of trial, as Becker engaged in yet another dispute with the judge, Roe slapped a binder to his forehead and audibly whispered, “Stop talking. Stop talking.”

Kobach’s cross examinations were smoother and better organized, but he regularly attempted to introduce exhibits — for example, updated state statistics that he had failed to provide the ACLU in advance to vet — that Robinson ruled were inadmissible. As the trial wore on, she became increasingly irritated. She implored Kobach to “please read” the rules on which she based her rulings, saying his team had repeated these errors “ad nauseum.”

Kobach seemed unruffled. Instead of heeding her advice, he’d proffer the evidence for the record, a practice that allows the evidence to be preserved for appeal even if the trial judge refuses to admit it. Over the course of the trial, Kobach and his team would do this nearly a dozen times.

Eventually, Robinson got fed up. She asked Kobach to justify his use of proffers. Kobach, seemingly alarmed, grabbed a copy of the Federal Rules of Civil Procedure — to which he had attached a growing number of Post-it notes — and quickly flipped through it, trying to find the relevant rule.

The judge tried to help. “It’s Rule 26, of course, that’s been the basis for my rulings,” she told Kobach. “I think it would be helpful if you would just articulate under what provision of Rule 26 you think this is permissible.” Kobach seemed to play for time, asking clarifying questions rather than articulating a rationale. Finally, the judge offered mercy: a 15-minute break. Kobach’s team rushed from the courtroom.

It wasn’t enough to save him. In her opinion, Robinson described “a pattern and practice by Defendant [Kobach] of flaunting disclosure and discovery rules.” As she put it, “it is not clear to the Court whether Defendant repeatedly failed to meet his disclosure obligations intentionally or due to his unfamiliarity with the federal rules.” She ordered Kobach to attend the equivalent of after-school tutoring: six hours of extra legal education on the rules of civil procedure or the rules of evidence (and to present the court with a certificate of completion).

It’s always a bad idea for a lawyer to try the patience of a judge — and that’s doubly true during a bench trial, when the judge will decide not only the law, but also the facts. Kobach repeatedly annoyed Robinson with his procedural mistakes. But that was nothing next to what the judge viewed as Kobach’s intentional bad faith.

This view emerged in writing right after the trial — that’s when Robinson issued her ruling finding Kobach in contempt — but before the verdict. And the conduct that inspired the contempt finding had persisted over several years. Robinson concluded that Kobach had intentionally failed to follow a ruling she issued in 2016 that ordered him to restore the privileges of 17,000 suspended Kansas voters.

In her contempt ruling, the judge cited Kobach’s “history of noncompliance” with the order and characterized his explanations for not abiding by it as “nonsensical” and “disingenuous.” She wrote that she was “troubled” by Kobach’s “failure to take responsibility for violating this Court’s orders, and for failing to ensure compliance over an issue that he explicitly represented to the Court had been accomplished.” Robinson ordered Kobach to pay the ACLU’s legal fees for the contempt proceeding.

That contempt ruling was actually the second time Kobach was singled out for punishment in the case. Before the trial, a federal magistrate judge deputized to oversee the discovery portion of the suit fined him $1,000 for making “patently misleading representations” about a voting fraud document Kobach had prepared for Trump. Kobach paid the fine with a state credit card.

More than any procedural bumbling, the collapse of Kobach’s case traced back to the disintegration of a single witness.

The witness was Jesse Richman, a political scientist from Old Dominion University, who has written studies on voter fraud. For this trial, Richman was paid $5,000 by the taxpayers of Kansas to measure non-citizen registration in the state. Richman was the man who had to deliver the goods for Kobach.

With his gray-flecked beard and mustache, Richman looked the part of an academic, albeit one who seemed a bit too tall for his suit and who showed his discomfort in a series of awkward, sudden movements on the witness stand. At moments, Richman’s testimony turned combative, devolving into something resembling an episode of The Jerry Springer Show. By the time he left the stand, Richman had testified for more than five punishing hours. He’d bickered with the ACLU’s lawyer, raised his voice as he defended his studies and repeatedly sparred with the judge.

“Wait, wait, wait!” shouted Robinson at one point, silencing a verbal free-for-all that had erupted among Richman, the ACLU’s Ho, and Kobach, who were all speaking at the same time. “Especially you,” she said, turning her stare to Richman. “You are not here to be an advocate. You are not here to trash the plaintiff. And you are not here to argue with me.”

Richman had played a small but significant part in the 2016 presidential campaign. Trump and others had cited his work to claim that illegal votes had robbed Trump of the popular vote. At an October 2016 rally in Wisconsin, the candidate cited Richman’s work to bolster his predictions that the election would be rigged. “You don’t read about this, right?” Trump told the crowd, before reading from an op-ed Richman had written for The Washington Post: “‘We find that this participation was large enough to plausibly account for Democratic victories in various close elections.’ Okay? All right?”

Richman’s 2014 study of non-citizen registration used data from the Cooperative Congressional Election Study — an online survey of more than 32,000 people. Of those, fewer than 40 individuals indicated they were non-citizens registered to vote. Based on that sample, Richman concluded that up to 2.8 million illegal votes had been cast in 2008 by non-citizens. In fact, he put the illegal votes at somewhere between 38,000 and 2.8 million — a preposterously large range — and then Trump and others simply used the highest figure.

Academics pilloried Richman’s conclusions. Two hundred political scientists signed an open letter criticizing the study, saying it should “not be cited or used in any debate over fraudulent voting.” Harvard’s Stephen Ansolabehere, who administered the CCES, published his own peer-reviewed paper lambasting Richman’s work. Indeed, by the time Trump read Richman’s article onstage in 2016, The Washington Post had already appended a note to the op-ed linking to three rebuttals and a peer-reviewed study debunking the research.

None of that discouraged Kobach or Trump from repeating Richman’s conclusions. They then went a few steps further. They took the top end of the range for the 2008 election, assumed that it applied to the 2016 election, too, and further assumed that all of the fraudulent ballots had been cast for Clinton.

Some of those statements found their way into the courtroom, when Ho pressed play on a video shot by The Kansas City Star on Nov. 30, 2016. Kobach had met with Trump 10 days earlier and had brought with him a paper decrying non-citizen registration and voter fraud. Two days later, Trump tweeted that he would have won the popular vote if not for “millions of people who voted illegally.”

On the courtroom’s televisions, Kobach appeared, saying Trump’s tweet was “absolutely correct.” Without naming Richman, Kobach referred to his study: The number of non-citizens who said they’d voted in 2008 was far larger than the popular vote margin, Kobach said on the video. The same number likely voted again in 2016.

In the courtroom, Ho asked Richman if he believed his research supported such a claim. Richman stammered. He repeatedly looked at Kobach, seemingly searching for a way out. Ho persisted and finally, Richman gave his answer: “I do not believe my study provides strong support for that notion.”

To estimate the number of non-citizens voting in Kansas, Richman had used the same methodology he employed in his much-criticized 2014 study. Using samples as small as a single voter, he’d produced surveys with wildly different estimates of non-citizen registration in the state. The multiple iterations confused everyone in the courtroom.

“For the record, how many different data sources have you provided?” Robinson interjected in the middle of one Richman answer. “You provide a range of, like, zero to 18,000 or more.”

“I sense the frustration,” Richman responded, before offering a winding explanation of the multiple data sources and surveys he’d used to arrive at a half-dozen different estimates. Robinson cut him off. “Maybe we need to stop here,” she said.

“Your honor, let me finish answering your question,” he said.

“No, no. I’m done,” she responded, as he continued to protest. “No. Dr. Richman, I’m done.”

To refute Richman’s numbers, the ACLU called on Harvard’s Ansolabehere, whose data Richman had relied on in the past. Ansolabehere testified that Richman’s sample sizes were so small that it was just as possible that there were zero non-citizens registered to vote in Kansas as that there were 18,000. “There’s just a great deal of uncertainty with these estimates,” he said.

Ho asked if it would be accurate to say that Richman’s data “shows a rate of non-citizen registration in Kansas that is not statistically distinct from zero?”

“Correct.”
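Ansolabehere's point about uncertainty is a standard statistical one: with a tiny sample, a confidence interval for a proportion can stretch from zero up to a figure that sounds enormous once scaled to a full voter roll. The numbers below are hypothetical, not Richman's actual data; the sketch just shows the arithmetic, using the standard Wilson score interval:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return max(0.0, center - half), center + half

# Hypothetical tiny sample: 0 self-reported non-citizen registrants out of
# 14 respondents (the trial record says Richman used samples as small as a
# single voter).
low, high = wilson_interval(0, 14)

# Scaled to a voter roll of 1.8 million (roughly Kansas-sized), the interval
# runs from zero up to several hundred thousand people.
print(round(low * 1_800_000))   # lower bound: 0
print(round(high * 1_800_000))  # upper bound: in the hundreds of thousands
```

On these made-up numbers, observing zero non-citizens in the sample still cannot rule out hundreds of thousands on the rolls, which is the sense in which an estimate can be "not statistically distinct from zero" and yet be quoted as a large number.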

The judge was harsher than Ansolabehere in her description of Richman’s testimony. In her opinion, Robinson unloaded a fusillade of dismissive adjectives, calling Richman’s conclusions “confusing, inconsistent and methodologically flawed,” and adding that they were “credibly dismantled” by Ansolabehere. She labeled elements of Richman’s testimony “disingenuous” and “misleading,” and stated that she gave his research “no weight” in her decision.

One of the paradoxes of Kobach is that he has become a star in circles that focus on illegal immigration and voting fraud despite poor results in the courtroom. By ProPublica’s count, Kobach chalked up a 2–6 won-lost record in federal cases in which he played a major role, and which reached a final disposition before the Kansas case.

Those results occurred when Kobach was an attorney for the legal arm of the Federation for American Immigration Reform from 2004 to 2011, when he became secretary of state in Kansas. In his FAIR role (in which he continued to moonlight till about 2014), Kobach traveled to places like Fremont, Nebraska, Hazleton, Pennsylvania, Farmers Branch, Texas, and Valley Park, Missouri, to help local governments write laws that attempted to hamper illegal immigration, and then defend them in court. Kobach won in Nebraska, but lost in Texas and Pennsylvania, and only a watered-down version of the law remains in Missouri.

The best-known law that Kobach helped shape before joining the Kansas government in 2011 was Arizona’s “show me your papers” law. That statute allowed police to demand citizenship documents for any reason from anyone they thought might be in the country illegally. After it passed, the state paid Kobach $300 an hour to train law enforcement on how to legally arrest suspected illegal immigrants. The Supreme Court gutted key provisions of the law in 2012.

Kobach also struggled in two forays into political campaigning. In 2004, he lost a race for Congress. He also drew criticism for his stint as an informal adviser to Mitt Romney’s 2012 presidential campaign. Kobach was the man responsible for Romney’s much-maligned proposal that illegal immigrants “self-deport,” one reason Romney attracted little support among Latinos. Romney disavowed Kobach even before the campaign was over, telling media outlets that he was a “supporter,” not an adviser.

Trump’s election meant Kobach’s positions on immigration would be welcome in the White House. Kobach lobbied for, but didn’t receive, an appointment as Secretary of Homeland Security. He was, however, placed in charge of the voter fraud commission, a pet project of Trump’s. Facing a raft of lawsuits and bad publicity, the commission was disbanded little more than six months after it formally launched.

Back at home, Kobach expanded his power as secretary of state. Boasting of his experience as a law professor and scholar, Kobach convinced the state legislature to give him the authority to prosecute election crimes himself, a power wielded by no other secretary of state. In that role, he has obtained nine guilty pleas against individuals for election-related misdemeanors. Only one of those who pleaded guilty, as it happens, was a non-citizen.

He also persuaded Kansas’ attorney general to allow Kobach to represent the state in the trial of Kansas’ voting law. Kobach argued it was a bargain. As he told The Wichita Eagle at the time, “The advantage is the state gets an experienced appellate litigator who is a specialist in this field and in constitutional law for the cost the state is already paying, which is my salary.”

Kobach fared no better in the second main area of the Kansas City trial than he had in the first. This part explored whether there is a less burdensome way of identifying non-citizens than forcing everyone to show proof of citizenship upon registration. Judge Robinson would conclude that there were many alternatives that were less intrusive.

In his opening, Ho of the ACLU spotlighted a potentially less intrusive approach. Why not use the Department of Homeland Security’s Systematic Alien Verification for Entitlements (SAVE) list, and compare the names on it to the Kansas voter rolls? That, Ho argued, could efficiently suss out illegal registrations.

Kobach told the judge that simply wasn’t feasible. The list, he explained, doesn’t contain all non-citizens in the country illegally — it contains only non-citizens legally present and those here illegally who register in some way with the federal government. Plus, he told Robinson, in order to really match the SAVE list against a voter roll, both datasets would have to contain alien registration numbers, the identifier given to non-citizens living in the U.S. “Those are things that a voter registration system doesn’t have,” he said. “So, the SAVE system does not work.”

But Kobach had made the opposite argument when he headed the voter fraud commission. There, he’d repeatedly advocated the use of the SAVE database. Appearing on Fox News in May 2017, shortly after the commission was established, Kobach said, “The Department of Homeland Security knows of the millions of aliens who are in the United States legally and that data that’s never been bounced against the state’s voter rolls to see whether these people are registered.” He said the federal databases “can be very valuable.”

A month later, as chief of the voting fraud commission, Kobach took steps to compare state information to the SAVE database. He sent a letter to all 50 secretaries of state requesting their voter rolls. Bipartisan outrage ensued. Democrats feared he would use the rolls to encourage states to purge legitimately registered voters. Republicans labeled the request federal overreach.

At trial, Kobach’s main expert on this point was Hans von Spakovsky, another member of the voter fraud commission. He, too, had been eager in commission meetings to match state voter rolls to the SAVE database.

But like Kobach, von Spakovsky took a different tack at trial. He testified that this database was unusable by elections offices. “In your experience and expertise as an election administrator and one who studies elections,” Kobach asked, “is [the alien registration number] a practical or even possible thing for a state to do in its voter registration database?” Von Spakovsky answered, “No, it is not.”

Von Spakovsky and Kobach have been friends for more than a decade. They worked together at the Department of Justice under George W. Bush. Kobach focused on immigration issues — helping create a database to register visitors to the U.S. from countries associated with terrorism — while von Spakovsky specialized in voting issues; he had opposed the renewal of the Voting Rights Act.

Von Spakovsky’s history as a local elections administrator in Fairfax County, Va., qualified him as an expert on voting fraud. Between 2010 and 2012, while serving as vice chairman of the county’s three-member electoral board, he’d examined the voter rolls and found what he said were 300 registered non-citizens. He’d pressed for action against them, but none came. Von Spakovsky later joined the Heritage Foundation, where he remains today, generating research that underpins the arguments of those who claim mass voter fraud.

Like Richman, von Spakovsky seemed nervous on the stand, albeit not combative. He wore wire-rimmed glasses and a severe, immovable expression. Immigration is a not-so-distant feature of his family history: His parents — Russian and German immigrants — met in a refugee camp in American-occupied Germany after World War II before moving to the U.S.

Von Spakovsky had the task of testifying about what was intended to be a key piece of evidence for Kobach’s case: a spreadsheet of 38 non-citizens who had registered to vote, or attempted to register, in a 20-year period in Sedgwick County, Kansas.

But the 38 non-citizens turned out to be something less than an electoral crime wave. For starters, some of the 38 had informed Sedgwick County that they were non-citizens. One woman had sent her registration postcard back to the county with an explanation that it was a “mistake” and that she was not a citizen. Another listed an alien registration number — which tellingly begins with an “A” — instead of a Social Security number on the voter registration form. The county registered her anyway.

When von Spakovsky took the stand, he had to contend with questions that suggested he had cherry-picked his data. (The judge would find he had.) In his expert report, von Spakovsky had referenced a 2005 report by the Government Accountability Office that polled federal courts to see how many non-citizens had been excused from jury duty for being non-citizens — a sign of fraud, because jurors are selected from voter rolls. The GAO report mentioned eight courts. Only one said it had a meaningful number of jury candidates who claimed to be non-citizens: “between 1 and 3 percent” had been dismissed on these grounds. This was the only court von Spakovsky mentioned in his expert report.

His report also cited a 2012 TV news segment from an NBC station in Fort Myers, Fla. Reporters claimed to have discovered more than 100 non-citizens on the local voter roll.

“Now, you know, Mr. von Spakovsky, don’t you, that after this NBC report there was a follow-up by the same NBC station that determined that at least 35 of those 100 individuals had documentation to prove they were, in fact, United States citizens. Correct?” Ho asked. “I am aware of that now, yes,” von Spakovsky replied.

That correction had been online since 2012 and Ho had asked von Spakovsky the same question almost two years before in a deposition before the trial. But von Spakovsky never corrected his expert report.

Under Ho’s questioning, von Spakovsky also acknowledged a false assertion he made in 2011. In a nationally syndicated column for McClatchy, von Spakovsky claimed a tight race in Missouri had been decided by the illegal votes of 50 Somali nationals. A month before the column was published, a Missouri state judge ruled that no such thing had happened.

On the stand, von Spakovsky claimed he had no knowledge of the ruling when he published the piece. He conceded that he never retracted the assertion.

Kobach, who watched the exchange without objection, had repeatedly made the same claim — even after the judge ruled it was false. In 2011, Kobach wrote a series of columns using the example as proof of the need for voter ID, publishing them in outlets ranging from the Topeka Capital-Journal to the Wall Street Journal and the Washington Post. In 2012, he made the claim in an article published in the Syracuse Law Review. In 2013, he wrote an op-ed for the Kansas City Star with the same example: “The election was stolen when Rizzo received about 50 votes illegally cast by citizens of Somalia.” None of those articles have ever been corrected.

Ultimately, Robinson would lacerate von Spakovsky’s testimony, much as she had Richman’s. Von Spakovsky’s statements, the judge wrote, were “premised on several misleading and unsupported examples” and included “false assertions.” As she put it, “His generalized opinions about the rates of noncitizen registration were likewise based on misleading evidence, and largely based on his preconceived beliefs about this issue, which has led to his aggressive public advocacy of stricter proof of citizenship laws.”

There was one other wobbly leg holding up the argument that voter fraud is rampant: the very meaning of the word “fraud.”

Kobach’s case, and the broader claim, rely on an extremely generous definition. Legal definitions of fraud require a person to knowingly be deceptive. But both Kobach and von Spakovsky characterized illegal ballots as “fraud” regardless of the intention of the voter.

Indeed, the nine convictions Kobach has obtained in Kansas are almost entirely made up of individuals who didn’t realize they were doing something wrong. For example, there were older voters who didn’t understand the restrictions and voted in multiple places where they owned property. There was also a college student who’d forgotten she’d filled out an absentee ballot in her home state before voting months later in Kansas. (She voted for Trump both times.)

Late in the trial, the ACLU presented Lorraine Minnite, a professor at Rutgers who has written extensively about voter fraud, as a rebuttal witness. Her book, “The Myth of Voter Fraud,” concluded that almost all instances of illegal votes can be chalked up to misunderstandings and administrative error.

Kobach sent his co-counsel, Garrett Roe, to cross-examine her. “It’s your view that what matters is the voter’s knowledge that his or her action is unlawful?” Roe asked. “In a definition of fraud, yes,” said Minnite. Roe pressed her about this for several questions, seemingly surprised that she wouldn’t refer to all illegal voting as fraud.

Minnite stopped him. “The word ‘fraud’ has meaning, and that meaning is that there’s intent behind it. And that’s actually what Kansas laws are with respect to illegal voting,” she said. “You keep saying ‘my’ definition,” she said, putting finger quotes around “my.” “But, you know, it’s not like it’s a freak definition.”

Kobach had explored a similar line of inquiry with von Spakovsky, asking him if the list of 38 non-citizens he’d reviewed could be absolved of “fraud” because they may have lacked intent.

“No,” von Spakovsky replied, “I think any time a non-citizen registers, any time a non-citizen votes, they are — whether intentionally or by accident, I mean — they are defrauding legitimate citizens from a fair election.”

After Kobach concluded his questions, the judge began her own examination of von Spakovsky.

“I think it’s fair to say there’s a pretty good distinction in terms of how the two of you define fraud,” the judge said, explaining that Minnite focused on intent, while she understood von Spakovsky’s definition to include any time someone who wasn’t supposed to vote did so, regardless of reason. “Would that be a fair characterization?” she asked.

“Yes ma’am,” von Spakovsky replied.

The judge asked whether a greater number of legitimate voters would be barred from casting ballots under the law than fraudulent votes prevented. In that scenario, she asked, “Would that not also be defrauding the electoral process?” Von Spakovsky danced around the answer, asserting that one would need to answer that question in the context of the registration requirements, which he deemed reasonable.

The judge cut him off. “Well that doesn’t really answer my question,” she said, saying that she found it contradictory that he wanted to consider context when examining the burden of registration requirements, but not when examining the circumstances in which fraud was committed.

“When you’re talking about … non-citizen voting, you don’t want to consider that in context of whether that person made a mistake, whether a DMV person convinced them they should vote,” she said. Von Spakovsky allowed that not every improper voter should be prosecuted, but insisted that “each ballot they cast takes away the vote of and dilutes the vote of actual citizens who are voting. And that’s —”

The judge interrupted again. “So, the thousands of actual citizens that should be able to vote but who are not because of the system, because of this law, that’s not diluting the vote and that’s not impairing the integrity of the electoral process, I take it?” she said.

Von Spakovsky didn’t engage with the hypothetical. He simply didn’t believe it was happening. “I don’t believe that this requirement prevents individuals who are eligible to register and vote from doing so.” Later, on the stand, he’d tell Ho he couldn’t think of a single law in the country that he felt negatively impacted anyone’s ability to register or vote.

Robinson, in the end, strongly disagreed. As she wrote in her opinion, “the Court finds that the burden imposed on Kansans by this law outweighs the state’s interest in preventing noncitizen voter fraud, keeping accurate voter rolls, and maintaining confidence in elections. The burden is not just on a ‘few voters,’ but on tens of thousands of voters, many of whom were disenfranchised” by Kobach’s law. The law, she concluded, was a bigger problem than the one it set out to solve, acting as a “deterrent to registration and voting for substantially more eligible Kansans than it has prevented ineligible voters from registering to vote.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei-Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships and trucks, or preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.
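Removing bias starts with measuring it. Below is a minimal, hypothetical sketch of one common check: comparing a classifier's false-positive rate across demographic groups. The data, group labels, and predictions are invented for illustration; this is not the methodology of any study mentioned above.

```python
# Minimal bias audit: compare false-positive rates across groups.
# All data here is synthetic and purely illustrative.

def false_positive_rate(predictions, labels):
    """Share of true negatives (label 0) that were wrongly flagged positive."""
    negatives = [p for p, y in zip(predictions, labels) if y == 0]
    if not negatives:
        return 0.0
    return sum(negatives) / len(negatives)

def audit_by_group(records):
    """records: list of (group, prediction, true_label) tuples.
    Returns each group's false-positive rate."""
    groups = {}
    for group, pred, label in records:
        preds, labels = groups.setdefault(group, ([], []))
        preds.append(pred)
        labels.append(label)
    return {g: false_positive_rate(p, l) for g, (p, l) in groups.items()}

# Synthetic example: group B is flagged more often despite identical labels.
records = [
    ("A", 1, 0), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]
rates = audit_by_group(records)
print(rates)  # group B's false-positive rate is double group A's
```

A disparity like this one is exactly the kind of signal the studies above surfaced in facial recognition and sentencing software.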

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important take-away. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's in a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

[Editor's note: today's guest post, by reporters at ProPublica, explores privacy and data collection issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

To an outsider, the fancy booths at last month’s health insurance industry gathering in San Diego aren’t very compelling. A handful of companies pitching “lifestyle” data and salespeople touting jargony phrases like “social determinants of health.”

But dig deeper and the implications of what they’re selling might give many patients pause: A future in which everything you do — the things you buy, the food you eat, the time you spend watching TV — may help determine how much you pay for health insurance.

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

Patient advocates warn that using unverified, error-prone “lifestyle” data to make medical assumptions could lead insurers to improperly price plans — for instance raising rates based on false information — or discriminate against anyone tagged as high cost. And, they say, the use of the data raises thorny questions that should be debated publicly, such as: Should a person’s rates be raised because algorithms say they are more likely to run up medical bills? Such questions would be moot in Europe, where a strict law took effect in May that bans trading in personal data.

This year, ProPublica and NPR are investigating the various tactics the health insurance industry uses to maximize its profits. Understanding these strategies is important because patients — through taxes, cash payments and insurance premiums — are the ones funding the entire health care system. Yet the industry’s bewildering web of strategies and inside deals often has little to do with patients’ needs. As the series’ first story showed, contrary to popular belief, lower bills aren’t health insurers’ top priority.

Inside the San Diego Convention Center last month, there were few qualms about the way insurance companies were mining Americans’ lives for information — or what they planned to do with the data.

The sprawling convention center was a balmy draw for one of America’s Health Insurance Plans’ marquee gatherings. Insurance executives and managers wandered through the exhibit hall, sampling chocolate-covered strawberries, champagne and other delectables designed to encourage deal-making.

Up front, the prime real estate belonged to the big guns in health data: The booths of Optum, IBM Watson Health and LexisNexis stretched toward the ceiling, with flat screen monitors and some comfy seating. (NPR collaborates with IBM Watson Health on national polls about consumer health topics.)

To understand the scope of what they were offering, consider Optum. The company, owned by the massive UnitedHealth Group, has collected the medical diagnoses, tests, prescriptions, costs and socioeconomic data of 150 million Americans going back to 1993, according to its marketing materials. (UnitedHealth Group provides financial support to NPR.) The company says it uses the information to link patients’ medical outcomes and costs to details like their level of education, net worth, family structure and race. An Optum spokesman said the socioeconomic data is de-identified and is not used for pricing health plans.

Optum’s marketing materials also boast that it now has access to even more. In 2016, the company filed a patent application to gather what people share on platforms like Facebook and Twitter, and link this material to the person’s clinical and payment information. A company spokesman said in an email that the patent application never went anywhere. But the company’s current marketing materials say it combines claims and clinical information with social media interactions.

I had a lot of questions about this and first reached out to Optum in May, but the company didn’t connect me with any of its experts as promised. At the conference, Optum salespeople said they weren’t allowed to talk to me about how the company uses this information.

It isn’t hard to understand the appeal of all this data to insurers. Merging information from data brokers with people’s clinical and payment records is a no-brainer if you overlook potential patient concerns. Electronic medical records now make it easy for insurers to analyze massive amounts of information and combine it with the personal details scooped up by data brokers.

It also makes sense given the shifts in how providers are getting paid. Doctors and hospitals have typically been paid based on the quantity of care they provide. But the industry is moving toward paying them in lump sums for caring for a patient, or for an event, like a knee surgery. In those cases, the medical providers can profit more when patients stay healthy. More money at stake means more interest in the social factors that might affect a patient’s health.

Some insurance companies are already using socioeconomic data to help patients get appropriate care, such as programs to help patients with chronic diseases stay healthy. Studies show social and economic aspects of people’s lives play an important role in their health. Knowing these personal details can help insurers identify those who may need help paying for medication or getting to the doctor.

But patient advocates are skeptical health insurers have altruistic designs on people’s personal information.

The industry has a history of boosting profits by signing up healthy people and finding ways to avoid sick people — called “cherry-picking” and “lemon-dropping,” experts say. Among the classic examples: A company was accused of putting its enrollment office on the third floor of a building without an elevator, so only healthy patients could make the trek to sign up. Another tried to appeal to spry seniors by holding square dances.

The Affordable Care Act prohibits insurers from denying people coverage based on pre-existing health conditions or charging sick people more for individual or small group plans. But experts said patients’ personal information could still be used for marketing, and to assess risks and determine the prices of certain plans. And the Trump administration is promoting short-term health plans, which do allow insurers to deny coverage to sick patients.

Robert Greenwald, faculty director of Harvard Law School’s Center for Health Law and Policy Innovation, said insurance companies still cherry-pick, but now they’re subtler. The center analyzes health insurance plans to see if they discriminate. He said insurers will do things like failing to include enough information about which drugs a plan covers — which pushes sick people who need specific medications elsewhere. Or they may change the things a plan covers, or how much a patient has to pay for a type of care, after a patient has enrolled. Or, Greenwald added, they might exclude or limit certain types of providers from their networks — like those who have skill caring for patients with HIV or hepatitis C.

If there were concerns that personal data might be used to cherry-pick or lemon-drop, they weren’t raised at the conference.

At the IBM Watson Health booth, Kevin Ruane, a senior consulting scientist, told me that the company surveys 80,000 Americans a year to assess lifestyle, attitudes and behaviors that could relate to health care. Participants are asked whether they trust their doctor, have financial problems, go online, or own a Fitbit and similar questions. The responses of hundreds of adjacent households are analyzed together to identify social and economic factors for an area.
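The pooling step Ruane describes — analyzing hundreds of adjacent households together rather than any one respondent — can be sketched as a simple area-level aggregation. The field names, area codes, and suppression threshold below are invented for illustration; this is not IBM Watson Health's actual method.

```python
# Sketch of area-level aggregation of household survey responses.
# Field names, areas, and the suppression threshold are invented.

from collections import defaultdict

def area_profile(responses, min_households=3):
    """responses: list of (area, {question: bool}) pairs.
    Returns each area's share of 'yes' answers, suppressing
    areas with too few households to aggregate meaningfully."""
    by_area = defaultdict(list)
    for area, answers in responses:
        by_area[area].append(answers)
    profiles = {}
    for area, rows in by_area.items():
        if len(rows) < min_households:   # too few households: skip
            continue
        questions = {q for row in rows for q in row}
        profiles[area] = {
            q: sum(row.get(q, False) for row in rows) / len(rows)
            for q in questions
        }
    return profiles

responses = [
    ("94103", {"owns_fitness_tracker": True,  "financial_stress": False}),
    ("94103", {"owns_fitness_tracker": False, "financial_stress": True}),
    ("94103", {"owns_fitness_tracker": True,  "financial_stress": True}),
    ("10001", {"owns_fitness_tracker": True,  "financial_stress": False}),
]
profiles = area_profile(responses)
# Area "94103" gets a profile; "10001" is dropped (only one household).
```

The output is a per-area rate for each question, which is the "social and economic factors for an area" shape of result the survey analysis produces.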

Ruane said he has used IBM Watson Health’s socioeconomic analysis to help insurance companies assess a potential market. The ACA increased the value of such assessments, experts say, because companies often don’t know the medical history of people seeking coverage. A region with too many sick people, or with patients who don’t take care of themselves, might not be worth the risk.

Ruane acknowledged that the information his company gathers may not be accurate for every person. “We talk to our clients and tell them to be careful about this,” he said. “Use it as a data insight. But it’s not necessarily a fact.”

In a separate conversation, a salesman from a different company joked about the potential for error. “God forbid you live on the wrong street these days,” he said. “You’re going to get lumped in with a lot of bad things.”

The LexisNexis booth was emblazoned with the slogan “Data. Insight. Action.” The company said it uses 442 non-medical personal attributes to predict a person’s medical costs. Its cache includes more than 78 billion records from more than 10,000 public and proprietary sources, including people’s cellphone numbers, criminal records, bankruptcies, property records, neighborhood safety and more. The information is used to predict patients’ health risks and costs in eight areas, including how often they are likely to visit emergency rooms, their total cost, their pharmacy costs, their motivation to stay healthy and their stress levels.

People who downsize their homes tend to have higher health care costs, the company says. As do those whose parents didn’t finish high school. Patients who own more valuable homes are less likely to land back in the hospital within 30 days of their discharge. The company says it has validated its scores against insurance claims and clinical data. But it won’t share its methods and hasn’t published the work in peer-reviewed journals.

McCulley, LexisNexis’ director of strategic solutions, said predictions made by the algorithms about patients are based on the combination of the personal attributes. He gave a hypothetical example: A high school dropout who had a recent income loss and doesn’t have a relative nearby might have higher than expected health costs.

But couldn’t that same type of person be healthy? I asked.

“Sure,” McCulley said, with no apparent dismay at the possibility that the predictions could be wrong.
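To make the mechanics of a prediction like McCulley's hypothetical concrete: a score built from "the combination of the personal attributes" is often, at its simplest, a weighted sum of attribute flags. The attributes and weights below are invented for illustration; this is a toy sketch, not LexisNexis's actual 442-attribute model.

```python
# Toy additive risk score over non-medical attributes.
# Attribute names and weights are invented for illustration only.

RISK_WEIGHTS = {
    "no_hs_diploma": 12,
    "recent_income_loss": 9,
    "no_relative_nearby": 7,
    "recent_downsizing": 5,
}

def risk_score(person):
    """person: dict mapping attribute name -> bool.
    Returns a unitless score; higher = predicted higher health costs."""
    return sum(w for attr, w in RISK_WEIGHTS.items() if person.get(attr))

# McCulley's hypothetical: a high school dropout with a recent
# income loss and no relative nearby.
example = {
    "no_hs_diploma": True,
    "recent_income_loss": True,
    "no_relative_nearby": True,
}
print(risk_score(example))  # 28
```

The sketch also makes the critics' point visible: nothing in the scoring arithmetic checks whether any individual actually matches the pattern — the flags alone drive the number.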

McCulley and others at LexisNexis insist the scores are only used to help patients get the care they need and not to determine how much someone would pay for their health insurance. The company cited three different federal laws that restricted them and their clients from using the scores in that way. But privacy experts said none of the laws cited by the company bar the practice. The company backed off the assertions when I pointed out that the laws did not seem to apply.

LexisNexis officials also said the company’s contracts expressly prohibit using the analysis to help price insurance plans. They would not provide a contract. But I knew that in at least one instance a company was already testing whether the scores could be used as a pricing tool.

Before the conference, I’d seen a press release announcing that the largest health actuarial firm in the world, Milliman, was now using the LexisNexis scores. I tracked down Marcos Dachary, who works in business development for Milliman. Actuaries calculate health care risks and help set the price of premiums for insurers. I asked Dachary if Milliman was using the LexisNexis scores to price health plans and he said: “There could be an opportunity.”

The scores could allow an insurance company to assess the risks posed by individual patients and make adjustments to protect themselves from losses, he said. For example, he said, the company could raise premiums, or revise contracts with providers.

It’s too early to tell whether the LexisNexis scores will actually be useful for pricing, he said. But he was excited about the possibilities. “One thing about social determinants data — it piques your mind,” he said.

Dachary acknowledged the scores could also be used to discriminate. Others, he said, have raised that concern. As much as there could be positive potential, he said, “there could also be negative potential.”

It’s that negative potential that still bothers data analyst Erin Kaufman, who left the health insurance industry in January. The 35-year-old from Atlanta had earned her doctorate in public health because she wanted to help people, but one day at Aetna, her boss told her to work with a new data set.

To her surprise, the company had obtained personal information from a data broker on millions of Americans. The data contained each person’s habits and hobbies, like whether they owned a gun, and if so, what type, she said. It included whether they had magazine subscriptions, liked to ride bikes or run marathons. It had hundreds of personal details about each person.

The Aetna data team merged the data with the information it had on patients it insured. The goal was to see how people’s personal interests and hobbies might relate to their health care costs. But Kaufman said it felt wrong: The information about the people who knitted or crocheted made her think of her grandmother. And the details about individuals who liked camping made her think of herself. What business did the insurance company have looking at this information? “It was a dataset that really dug into our clients’ lives,” she said. “No one gave anyone permission to do this.”

In a statement, Aetna said it uses consumer marketing information to supplement its claims and clinical information. The combined data helps predict the risk of repeat emergency room visits or hospital admissions. The information is used to reach out to members and help them and plays no role in pricing plans or underwriting, the statement said.

Kaufman said she had concerns about the accuracy of drawing inferences about an individual’s health from an analysis of a group of people with similar traits. Health scores generated from arrest records, home ownership and similar material may be wrong, she said.

Pam Dixon, executive director of the World Privacy Forum, a nonprofit that advocates for privacy in the digital age, shares Kaufman’s concerns. She points to a study by the analytics company SAS, which worked in 2012 with an unnamed major health insurance company to predict a person’s health care costs using 1,500 data elements, including the investments and types of cars people owned.

The SAS study said higher health care costs could be predicted by looking at things like ethnicity, watching TV and mail order purchases.

“I find that enormously offensive as a list,” Dixon said. “This is not health data. This is inferred data.”

Data scientist Cathy O’Neil said drawing conclusions about health risks on such data could lead to a bias against some poor people. It would be easy to infer they are prone to costly illnesses based on their backgrounds and living conditions, said O’Neil, author of the book “Weapons of Math Destruction,” which looked at how algorithms can increase inequality. That could lead to poor people being charged more, making it harder for them to get the care they need, she said. Employers, she said, could even decide not to hire people with data points that could indicate high medical costs in the future.

O’Neil said the companies should also measure how the scores might discriminate against the poor, sick or minorities.

American policymakers could do more to protect people’s information, experts said. In the United States, companies can harvest personal data unless a specific law bans it, although California just passed legislation that could create restrictions, said William McGeveran, a professor at the University of Minnesota Law School. Europe, in contrast, passed a strict law called the General Data Protection Regulation, which went into effect in May.

“In Europe, data protection is a constitutional right,” McGeveran said.

Pasquale, the University of Maryland law professor, said health scores should be treated like credit scores. Federal law gives people the right to know their credit scores and how they’re calculated. If people are going to be rated by whether they listen to sad songs on Spotify or look up information about AIDS online, they should know, Pasquale said. “The risk of improper use is extremely high. And data scores are not properly vetted and validated and available for scrutiny.”

As I reported this story I wondered how the data vendors might be using my personal information to score my potential health costs. So, I filled out a request on the LexisNexis website for the company to send me some of the personal information it has on me. A week later a somewhat creepy, 182-page walk down memory lane arrived in the mail. Federal law only requires the company to provide a subset of the information it collected about me. So that’s all I got.

LexisNexis had captured details about my life going back 25 years, many that I’d forgotten. It had my phone numbers going back decades and my home addresses going back to my childhood in Golden, Colorado. Each location had a field to show whether the address was “high risk.” Mine were all blank. The company also collects records of any liens and criminal activity, which, thankfully, I didn’t have.

My report was boring, which isn’t a surprise. I’ve lived a middle-class life and grown up in good neighborhoods. But it made me wonder: What if I had lived in “high risk” neighborhoods? Could that ever be used by insurers to jack up my rates — or to avoid me altogether?

I wanted to see more. If LexisNexis had health risk scores on me, I wanted to see how they were calculated and, more importantly, whether they were accurate. But the company told me that if it had calculated my scores it would have done so on behalf of their client, my insurance company. So, I couldn’t have them.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


European Regulators Fine Google $5 Billion For 'Breaching EU Antitrust Rules'

On Wednesday, European antitrust regulators fined Google 4.34 billion euros (about U.S. $5 billion) and ordered the tech company to stop using its Android operating system software to block competition. Computerworld reported:

"The European Commission found that Google has abused its dominant market position in three ways: tying access to the Play store to installation of Google Search and Google Chrome; paying phone makers and network operators to exclusively install Google Search, and preventing manufacturers from making devices running forks of Android... Google won't let smartphone manufacturers install Play on their phones unless they also make its search engine and Chrome browser the defaults on their phones. In addition, they must only use a Google-approved version of Android. This has prevented companies like Amazon.com, which developed a fork of Android it calls FireOS, from persuading big-name manufacturers to produce phones running its OS or connecting to its app store..."

Reportedly, less than 10% of Android phone users download a different browser than the pre-installed default. Less than 1% use a different search app. View the archive of European Commission Android OS documents.

Yesterday, the European Commission announced on social media:

European Commission tweet: Google Android OS restrictions graphic.

European Commission tweet: Vestager comments.

And, The Guardian newspaper reported:

"Soon after Brussels handed down its verdict, Google announced it would appeal. "Android has created more choice for everyone, not less," a Google spokesperson said... Google has 90 days to end its "illegal conduct" or its parent company Alphabet could be hit with fines amounting to 5% of its daily [revenues] for each day it fails to comply. Wednesday’s verdict ends a 39-month investigation by the European commission’s competition authorities into Google’s Android operating system but it is only one part of an eight-year battle between Brussels and the tech giant."

According to the Reuters news service, a third EU case against Google, involving accusations that the tech company's AdSense advertising service blocks users from displaying search ads from competitors, is still ongoing.


The DIY Revolution: Consumers Alter Or Build Items Previously Not Possible. Is It A Good Thing?

Recent advances in technology allow consumers to alter, customize, or build locally items that previously weren't possible. These items are often referred to as Do-It-Yourself (DIY) products. You've probably heard the term DIY used in home repair and renovation projects on television. DIY now happens in some unexpected areas. Today's blog post highlights two.

DIY Glucose Monitors

Earlier this year, CNET described the bag an eight-year-old patient carries with her every day:

"... It houses a Dexcom glucose monitor and a pack of glucose tablets, which work in conjunction with the sensor attached to her arm and the insulin pump plugged into her stomach. The final item in her bag was an iPhone 5S. It's unusual for such a young child to have a smartphone. But Ruby's iPhone, which connects via Bluetooth to her Dexcom monitor, allowing [her mother] to read it remotely, illustrates the way technology has transformed the management of diabetes from an entirely manual process -- pricking fingers to measure blood sugar, writing down numbers in a notebook, calculating insulin doses and injecting it -- to a semi-automatic one..."

Some people have access to these new technologies, but many don't. Others want more connectivity and better capabilities. So, some creative "hacking" has resulted:

"There are people who are unwilling to wait, and who embrace unorthodox methods. (You can find them on Twitter via the hashtag #WeAreNotWaiting.) The Nightscout Foundation, an online diabetes community, figured out a workaround for the Pebble Watch. Groups such as Nightscout, Tidepool and OpenAPS are developing open-source fixes for diabetes that give major medical tech companies a run for their money... One major gripe of many tech-enabled diabetes patients is that the two devices they wear at all times -- the monitor and the pump -- don't talk to each other... diabetes will never be a hands-off disease to manage, but an artificial pancreas is basically as close as it gets. The FDA approved the first artificial pancreas -- the Medtronic 670G -- in October 2017. But thanks to a little DIY spirit, people have had them for years."

CNET shared the experience of another tech-enabled patient:

"Take Dana Lewis, founder of the open-source artificial pancreas system, or OpenAPS. Lewis started hacking her glucose monitor to increase the volume of the alarm so that it would wake her in the night. From there, Lewis tinkered with her equipment until she created a closed-loop system, which she's refined over time in terms of both hardware and algorithms that enable faster distribution of insulin. It has massively reduced the "cognitive burden" on her everyday life... JDRF, one of the biggest global diabetes research charities, said in October that it was backing the open-source community by launching an initiative to encourage rival manufacturers like Dexcom and Medtronic to open their protocols and make their devices interoperable."
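The "closed loop" Lewis built is, at its core, a feedback controller: read glucose, compare it to a target, adjust insulin delivery, repeat. Below is a deliberately oversimplified sketch of that idea. All constants are invented for illustration, and this is emphatically not a dosing algorithm — real systems like OpenAPS use far more sophisticated predictive models and safety limits.

```python
# Simplified closed-loop sketch: nudge basal insulin toward a glucose
# target. Constants are invented; this is NOT a real dosing algorithm.

TARGET_MG_DL = 100      # desired blood glucose, mg/dL
BASAL_RATE = 1.0        # baseline insulin, units/hour
SENSITIVITY = 0.01      # extra units/hour per mg/dL above target

def next_basal_rate(glucose_mg_dl):
    """Proportional controller: more insulin above target, less below."""
    error = glucose_mg_dl - TARGET_MG_DL
    rate = BASAL_RATE + SENSITIVITY * error
    return max(0.0, rate)   # a pump can't deliver a negative rate

print(next_basal_rate(180))  # above target -> rate rises above baseline
print(next_basal_rate(100))  # at target -> baseline rate
print(next_basal_rate(60))   # below target -> rate cut below baseline
```

Running that loop every few minutes against a continuous glucose monitor is what turns a monitor and a pump into the "artificial pancreas" the article describes — and why it matters that the two devices can talk to each other.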

Convenience and affordability are huge drivers. As you might have guessed, there are risks:

"Hacking a glucose monitor is not without risk -- inaccurate readings, failed alarms or the wrong dose of insulin distributed by the pump could have fatal consequences... Lewis and the OpenAPS community encourage people to embrace the build-your-own-pancreas method rather than waiting for the tech to become available and affordable."

Are DIY glucose monitors a good thing? Some patients think so, as a way to get convenient, affordable healthcare. That might lead you to conclude that anything DIY is an improvement. Right? Keep reading.


DIY Guns

Got a 3-D printer? If so, then you can print your own DIY gun. How did this happen? How did the USA get here? Wired explained:

"Five years ago, 25-year-old radical libertarian Cody Wilson stood on a remote central Texas gun range and pulled the trigger on the world’s first fully 3-D-printed gun... he drove back to Austin and uploaded the blueprints for the pistol to his website, Defcad.com... In the days after that first test-firing, his gun was downloaded more than 100,000 times. Wilson made the decision to go all in on the project, dropping out of law school at the University of Texas, as if to confirm his belief that technology supersedes law..."

The law intervened. Wilson stopped, took down his site, and then pursued a legal remedy:

"Two months ago, the Department of Justice quietly offered Wilson a settlement to end a lawsuit he and a group of co-plaintiffs have pursued since 2015 against the United States government. Wilson and his team of lawyers focused their legal argument on a free speech claim: They pointed out that by forbidding Wilson from posting his 3-D-printable data, the State Department was not only violating his right to bear arms but his right to freely share information. By blurring the line between a gun and a digital file, Wilson had also successfully blurred the lines between the Second Amendment and the First."

So, now you... anybody with an internet connection and a 3-D printer (and a computer-controlled milling machine for some advanced parts)... can produce a DIY gun. No registration required. No licenses or permits. No training required. And that's anyone, anywhere in the world.

Oh, there's more:

"The Department of Justice's surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet. In the meantime, it gives Wilson a unique license to publish data about those weapons anywhere he chooses."

As you might have guessed, Wilson is re-launching his website, but this time with blueprints for more DIY weapons besides pistols: AR-15 rifles and other semi-automatic firearms. So, it will be easier for people to skirt federal and state gun laws. Is that a good thing?

You probably have some thoughts and concerns. I do. There are plenty of issues and questions. Are DIY products a good thing? Who is liable? How should laws be upgraded? How can society facilitate one set of DIY products and not the other? What related issues do you see? Any other notable DIY products?


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Facebook logo Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the use of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about the tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows identified users to accept or decline name labels for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching while suppressing name labels for users who have declined or disabled them. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
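The matching flow described above can be sketched in a few lines: reduce each face to a numeric "template," compare a new face against the templates of users who opted in, and discard the data when no match clears a threshold. Everything here (the template length, the cosine-similarity rule, the 0.9 threshold, the sample names) is an illustrative assumption, not Facebook's actual system.

```python
# Hedged sketch of embedding-based face matching: compare an unknown
# face template against an opted-in database; a None result means the
# caller should delete the facial data, per the flow described above.
import math

def cosine_similarity(a, b):
    # Similarity of two equal-length numeric templates, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_face(unknown, opted_in_db, threshold=0.9):
    """Return the best-matching user's name above threshold, else None."""
    best_name, best_score = None, threshold
    for name, template in opted_in_db.items():
        score = cosine_similarity(unknown, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical two-user database of stored templates.
db = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
print(match_face([0.88, 0.12, 0.21], db))  # near alice's template
print(match_face([0.5, 0.5, 0.5], db))     # no strong match
```

Note what the sketch makes concrete: even a user who opts out must still be scanned and compared before the system can decide there is "no match" and delete the data, which is precisely the critics' point that the scanning itself cannot be switched off.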

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in late 2017 Apple launched its iPhone X with its Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a brick-and-mortar store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent by shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?