
Study: Most Consumers Fear Companies Will 'Go Too Far' With Artificial Intelligence Technologies

New research has found that consumers are conflicted about artificial intelligence (AI) technologies. A national study of 697 adults, conducted by Elicit Insights during the spring of 2018, found:

"Most consumers are conflicted about AI. They know there are benefits, but recognize the risks, too"

Several specific findings:

  • 73 percent of survey participants agreed (Strongly Agree or Agree) that "some companies will go too far with AI"
  • 64 percent agreed (Strongly Agree or Agree) with the statement: "I'm concerned about how companies will use artificial intelligence and the information they have about me to engage with me"
  • "Six out of 10 Americans agree or strongly agree that AI will never be as good as human interaction. Human interaction remains sacred and there is concern with at least a third of consumers that AI won’t stay focused on mundane tasks and leave the real thinking to humans."

Many of the concerns center on control. As AI applications become smarter and more powerful, they can operate independently, without their users' authorization. When participants were presented with several smart-refrigerator scenarios, the less control users had over purchases, the fewer participants viewed the AI as a benefit:

[Image: Smart refrigerator and food purchase scenarios, from the Elicit Insights AI study]

AI technologies can also be used to find and present possible matches for online dating services. Again, survey participants expressed similar control concerns:

[Image: Dating service scenarios, from the Elicit Insights AI study]

Download Elicit Insights' complete Artificial Intelligence survey (Adobe PDF). What are your opinions? Do you prefer AI applications that operate independently, or ones that require your authorization?


Study: Performance Issues Impede IoT Device Trust And Usage Worldwide By Consumers

A recent global survey uncovered interesting findings about consumers' usage of, and satisfaction with, IoT (Internet of Things) devices. The survey of consumers in several countries found that 52 percent already use IoT devices, and that 64 percent of users have already encountered performance issues with their devices.

Dynatrace, a software intelligence company, commissioned Opinium Research to conduct a global survey of 10,002 participants, with 2,000 in the United States, 2,000 in the United Kingdom, and 1,000 respondents each in France, Germany, Australia, Brazil, Singapore, and China. Dynatrace announced several findings, chiefly:

"On average, consumers experience 1.5 digital performance problems every day, and 62% of people fear the number of problems they encounter, and the frequency, will increase due to the rise of IoT."

That seems like plenty of poor performance. Some findings were specific to the travel, healthcare, and in-home retail sectors. Regarding travel:

"The digital performance failures consumers are already experiencing with everyday technology is potentially making them wary of other uses of IoT. 85% of respondents said they are concerned that self-driving cars will malfunction... 72% feel it is likely software glitches in self-driving cars will cause serious injuries and fatalities... 84% of consumers said they wouldn’t use self-driving cars due to a fear of software glitches..."

Regarding healthcare:

"... 62% of consumers stated they would not trust IoT devices to administer medication; this sentiment is strongest in the 55+ age range, with 74% expressing distrust. There were also specific concerns about the use of IoT devices to monitor vital signs, such as heart rate and blood pressure. 85% of consumers expressed concern that performance problems with these types of IoT devices could compromise clinical data..."

Regarding in-home retail devices:

"... 83% of consumers are concerned about losing control of their smart home due to digital performance problems... 73% of consumers fear being locked in or out of the smart home due to bugs in smart home technology... 68% of consumers are worried they won’t be able to control the temperature in the smart home due to malfunctions in smart home technology... 81% of consumers are concerned that technology or software problems with smart meters will lead to them being overcharged for gas, electricity, and water."

The findings are a clear call to IoT makers to improve the performance, security, and reliability of their internet-connected devices. To learn more, download the full Dynatrace report titled, "IoT Consumer Confidence Report: Challenges for Enterprise Cloud Monitoring on the Horizon."


Survey: Complexities And Consumer Fears With Checking Credit Reports For Errors

Many consumers know that they should check their credit reports yearly for errors, but most don't. A recent survey found much complexity and fear surrounding credit reports. WalletHub surveyed 500 adults in the United States during July 2018, and found:

  • 84 percent of survey respondents know that they should check their credit reports at least once each year
  • Only 41 percent of respondents said they check their credit reports
  • 27 percent said they don't have the time to check their credit reports
  • 14 percent said they are afraid to see the contents of their credit reports

WalletHub found that women were twice as likely as men to have the above fear. Millennials were five times as likely as Baby Boomers to have this fear. More findings are listed below.

It is important for consumers to understand the industry. An inaccurate credit report can lower your credit score, the overall number used to indicate your creditworthiness. A low credit score can cost you money: denied credit applications, or loans approved only at higher interest rates. Errors in credit reports can include another person's data co-mingled with yours (obviously, that should never happen), a dead person's data co-mingled with yours, or a report that doesn't accurately reflect a loan you truly paid off on time and in full.

A 2013 study by the U.S. Federal Trade Commission (FTC) found problems with credit report accuracy. First, 26 percent of participants identified errors in their credit reports, so one in four consumers was affected. Plus, of the 572 credit reports where errors were identified, 399 reports (70%) were modified by a credit reporting agency, and 211 (36%) resulted in a credit score change. So, finding and reporting errors is beneficial for consumers. Also in 2013, a report by the 60 Minutes television news magazine documented problems with the dispute process: failures by the three largest credit reporting agencies to correct errors consumers reported on their credit reports.

There are national and regional credit reporting agencies. The three national credit reporting agencies are Experian, Equifax, and TransUnion. Equifax also operates a secondary consumer reporting agency focused solely upon the telecommunications industry and broadband internet services.

Credit reporting agencies get their data from a variety of sources, including data brokers, so their business model is based upon data sharing. Just about anyone can set up and operate a credit reporting agency; no special skills or expertise are required. Credit reporting agencies make money by selling credit reports to lenders, and those reports often contain errors. For better or worse regarding security, credit reporting agencies have historically outsourced work, sometimes internationally.

The industry and its executives have arguably lackadaisical approaches to data security. A massive data breach at Equifax affected about 143 million persons in 2017. An independent investigation of that breach found a lengthy list of data security flaws and failures at Equifax. To compound matters, the Internal Revenue Service gave Equifax a no-bid contract in 2017.

The industry has a spotty history. In 2007, Equifax paid a $2.7 million fine for violating federal credit laws. In 2009, it paid a $65,000 fine to the state of Indiana for violating the state's security freeze law. In 2012, Equifax and some of its customers paid $1.6 million to settle allegations of improper list sales. A data breach at Experian in 2015 affected 15 million wireless carrier customers. In 2017, Equifax and TransUnion paid $23.1 million to settle allegations of deceptive advertising about credit scores.

See the graphic below for more findings from the WalletHub survey.

[Image: 2018 Credit Report Complexity Survey, by WalletHub]


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, the facial recognition software by Amazon, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes aside about politicians, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

[Image: List of the 28 Congressional legislators misidentified by Amazon Rekognition in the ACLU study]

With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeffrey Bezos, the Chief Executive Officer of Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.1 These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon has performed on Rekognition, with details by race, gender, and age
  • A list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition
  • A list of all law enforcement agencies which have used or currently use Rekognition
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices
  • A description of the protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA)
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks"

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. " Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U. had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend 95 percent similarity scores, but the public does not know if police departments actually use the higher setting, or consistently do so across all types of criminal investigations. Plus, the CPT report cast doubt on human "backstop" intervention, which Amazon's reply seems to heavily rely upon.
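
To make the "confidence threshold" concrete, below is a minimal sketch of how that setting is exposed by the public Rekognition API, using the AWS boto3 library for Python. The image filenames are hypothetical; this illustrates the parameter being debated, not the ACLU's or Amazon's actual test setup.

    import boto3

    client = boto3.client("rekognition")

    # Compare one face photo against another, as in the ACLU's test.
    with open("photo_a.jpg", "rb") as a, open("photo_b.jpg", "rb") as b:
        response = client.compare_faces(
            SourceImage={"Bytes": a.read()},
            TargetImage={"Bytes": b.read()},
            # The service default is 80; Amazon says law enforcement should
            # use 95. Raising it discards weaker, riskier candidate matches.
            SimilarityThreshold=80,
        )

    for match in response["FaceMatches"]:
        print(f"Candidate match, similarity {match['Similarity']:.1f}%")

A single-number change, from 80 to 95, determines how many marginal matches the system reports, which is why the choice of threshold matters so much for accuracy claims.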

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships and trucks, or preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important take-away. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's in a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


Researchers Find Mobile Apps Can Easily Record Screenshots And Videos of Users' Activities

New academic research highlights how easy it is for mobile apps to both spy upon consumers and violate our privacy. During a recent study to determine whether or not smartphones record users' conversations, researchers at Northeastern University (NU) found:

"... that some companies were sending screenshots and videos of user phone activities to third parties. Although these privacy breaches appeared to be benign, they emphasized how easily a phone’s privacy window could be exploited for profit."

The NU researchers tested 17,260 of the most popular mobile apps running on smartphones using the Android operating system. About 9,000 of the 17,260 apps had the ability to take screenshots. The vulnerability: screenshot and video captures could easily be used to record users' keystrokes, passwords, and related sensitive information:

"This opening will almost certainly be used for malicious purposes," said Christo Wilson, another computer science professor on the research team. "It’s simple to install and collect this information. And what’s most disturbing is that this occurs with no notification to or permission by users."

The NU researchers found one app already recording video of users' screen activity (links added):

"That app was GoPuff, a fast-food delivery service, which sent the screenshots to Appsee, a data analytics firm for mobile devices. All this was done without the awareness of app users. [The researchers] emphasized that neither company appeared to have any nefarious intent. They said that web developers commonly use this type of information to debug their apps... GoPuff has changed its terms of service agreement to alert users that the company may take screenshots of their use patterns. Google issued a statement emphasizing that its policy requires developers to disclose to users how their information will be collected."

May? A brief review of the Appsee site seems to confirm that video recordings of the screens on app users' mobile devices are integral to the service:

"RECORDING: Watch every user action and understand exactly how they use your app, which problems they're experiencing, and how to fix them.​ See the app through your users' eyes to pinpoint usability, UX and performance issues... TOUCH HEAT MAPS: View aggregated touch heatmaps of all the gestures performed in each​ ​screen in your app.​ Discover user navigation and interaction preferences... REALTIME ANALYTICS & ALERTS:Get insightful analytics on user behavior without pre-defining any events. Obtain single-user and aggregate insights in real-time..."

Sounds like a version of "surveillance capitalism" to me. According to the Appsee site, a variety of companies use the service, including eBay, Samsung, Virgin airlines, The Weather Network, and several advertising networks. Plus, the Appsee Privacy Policy dated May 23, 2018 stated:

"The Appsee SDK allows Subscribers to record session replays of their end-users' use of Subscribers' mobile applications ("End User Data") and to upload such End User Data to Appsee’s secured cloud servers."

In this scenario, GoPuff is a subscriber and consumers using the GoPuff mobile app are end users. The Appsee SDK is software code embedded within the GoPuff mobile app. The researchers said that this vulnerability, "will not be closed until the phone companies redesign their operating systems..."

Data-analytics services like Appsee raise several issues. First, there seems to be little need for digital agencies to conduct traditional eye-tracking and usability-testing sessions, since companies can now record, upload, and archive what, when, where, and how often users swipe and select in-app content. Previously, users were invited to user-testing sessions and paid for their participation.

Second, this in-app tracking and data collection amounts to perpetual, unannounced user testing. Companies have previously gotten into plenty of trouble with their customers by performing secret user testing, especially when the service varies from the standard, expected configuration and the policies (e.g., privacy, terms of service) don't disclose it. Nobody wants to be a lab rat or crash-test dummy.

Third, surveillance agencies within several governments must be thrilled to learn of these new in-app tracking and spy tools, if they aren't already using them. A reasonable assumption is that Appsee also provides data to law enforcement upon demand.

Fourth, two of the researchers at NU are undergraduate students. Another startling disclosure:

"Coming into this project, I didn’t think much about phone privacy and neither did my friends," said Elleen Pan, who is the first author on the paper. "This has definitely sparked my interest in research, and I will consider going back to graduate school."

Given the tsunami of data breaches, privacy legislation in Europe, and demands by law enforcement for tech firms to build "back door" hacks into their mobile devices and smartphones, it is alarming that some college students "don't think much about phone privacy." This means that Pan and her classmates probably haven't read the privacy and terms-of-service policies for the apps and sites they've used. Maybe they will now.

Let's hope so.

Consumers interested in GoPuff should closely read the service's privacy and Terms of Service policies, since the latter includes dispute resolution via binding arbitration and prevents class-action lawsuits.

Hopefully, future studies about privacy and mobile apps will explore further the findings by Pan and her co-researchers. Download the study titled, "Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications" (Adobe PDF) by Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes.


Report: Software Failure In Fatal Accident With Self-Driving Uber Car

TechCrunch reported:

"The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death."

The TechCrunch story provides details about which software subsystem the report said failed.

Not good.

So, autonomous or self-driving cars are only as good as the software they're programmed with (including its maintenance). Anyone who has used computers during the last couple of decades has probably experienced software glitches, bugs, and failures. It happens.

This latest incident suggests self-driving cars aren't yet ready. What do you think?


How the Crowd Led ProPublica to Investigate IBM

[Editor's note: today's guest post, by the reporters at ProPublica, discusses employment practices at a major corporation in the United States. The investigation is as interesting as the "Cutting 'Old Heads' At IBM" report. This also caught my attention because a data breach at IBM in 2007 led to the creation of this blog. Today's article is reprinted with permission.]

By Ariana Tobin and Peter Gosselin, ProPublica

On March 22, we reported that over the past five years IBM has been removing older U.S. employees from their jobs, replacing some with younger, less experienced, lower-paid American workers and moving many other jobs overseas.

We’ve got documentation and details — most of which are the direct result of a questionnaire filled out by over 1,100 former IBMers.

We’ve gone to the company with our findings. IBM did not answer the specific questions we sent. Spokesman Edward Barbini said: “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years.”

We don’t know the exact size of the problem. Our questionnaire isn’t a scientific sample, nor did all the participants tell us they experienced age discrimination. But the hundreds of similar stories show a pattern of older employees being pushed out even when the company itself says they were doing a good job.

This project wasn’t inspired by a high-level leak or an errant line in secret documents. It came to us through reader engagement. Our investigation took us beyond some of our usual reporting techniques. We’d like to elaborate on this because:

  • We know readers will wonder how we sourced some pretty serious claims.
  • Many ex-employees trusted us with their stories and spent many hours in conversation with us. We think it’s good practice to let them know how we’ve used their information.
  • This is probably the first time we’ve been pointed to a big project by a community of people we found through digital outreach. We hope that by sharing our experiences, we can help others build on our work.

IBMers found us

This project started as a conversation between the two of us, both reporters at ProPublica. Peter had taken on the age discrimination beat for reasons both personal and professional. Ariana was newly minted into a job called “engagement reporter.”

Ariana suggested that Peter write up a short essay on his own experiences of being laid off at 63 and searching for a job in the aftermath. We attached a short questionnaire to the bottom and headlined it: “Over 50 and looking for a job? We’d like to hear from you.”

Dozens of people responded within the first couple of weeks. As we looked through this first round of questionnaires, we noticed a theme: a whole lot of information technology workers told us they were struggling to stay employed. And those who had lost their jobs? They were having a really hard time finding new work.

Of those IT workers, several mentioned IBM right off the bat. One woman wrote that she and her coworkers were working together to find new jobs in order to “ward off the dreaded old person layoff from IBM.”

Another wrote: “I can probably help you get a lot more stories, contact me if you want to discuss this possibility.”

Another wrote: “Part of the separation agreement was that I not seek collective action against IBM for age discrimination. I was not going to sign as a law firm was planning to file a grievance. However they needed 10 people to agree and they could not get the numbers.”

… and then they connected us with more IBMers

We started making some calls. One of the first people we talked to was Brian Paulson, a 57-year-old senior manager with 18 years at IBM, who was fired for “performance reasons” that the company refused to explain. He was still job-hunting two years later.

Another ex-IBM employee told us that she had seen examples of older workers laid off from many parts of the company on a public Facebook page called WatchingIBM. Ariana spent a day looking through the posts, which were, as promised, crawling with stories, questions, and calls for support from workers of all kinds, as shown in the accompanying screenshot.

We decided to reach out to the page’s administrator, who was a longtime IBM workplace activist, Lee Conrad. He shared our age discrimination questionnaire in the group and more responses poured in.

With dozens of interviews already on the books, we decided to launch a second, more specific questionnaire — this time about IBM.

We realized that we had been pointed toward an angry, sad and motivated group. The older ex-IBM workers we called were trying to figure out whether their own layoffs were unique or part of a larger trend. And if they were part of a larger trend... how many people were affected?

A major frustration we saw in comment after comment: These workers couldn’t get information on how many others had been forced out with them.

This was an information gap that immediately struck Peter, because that information is exactly what the law requires employers to disclose at the time of a layoff.

On top of that, many of these sources mentioned having been forced to sign agreements that kept them from going to court or even talking about what had happened to them. They were scared to do anything in violation of those agreements, a fear that kept them from finding out the answers to some big open questions: Why would IBM have stopped releasing the ages and positions of those let go, as they had done before 2014 to comply with federal law? How many workers out there believed they had been “retired” against their will? What did managers really tell their subordinates when the time came to let them go? Who was left to do all of their work?

So we wrote up another questionnaire asking those specific questions.

We learned from the responses, and also the response rate

We contacted people on listservs, found them on open petitions, joined closed LinkedIn networks, and followed each posting on ex-IBM groups. We tweeted the questionnaire out on days that IBM reported its earnings, including the company’s ticker symbol. We talked to trade magazines and IBM historians and organizers who still work at IBM. We bought ads on Facebook and aimed them toward cities and towns where we knew IBM had been cutting its workforce.

As the responses came in, we tried to figure out where most of them were coming from. To identify any meaningful trends, we needed to know who was answering, what was working, and why. We also realized that we needed to introduce ourselves in order to persuade anyone it was worth participating.

When something worked, we’d double down.

We know what worked best: when people filled out the questionnaire, they’d also share their contact information with us. So we asked them to forward the questionnaire around within their own networks.

And we got more leads

We read through all of the responses and identified themes: 183 respondents said the company recorded them as having retired by choice even though they had no desire to retire or flat-out objected to the idea. Forty-five people were told they’d have to uproot their lives and move sometimes thousands of miles from the communities where they had worked for years, or else resign. Fifty-three said their jobs had been moved overseas. Some were happy they’d left. Some were company luminaries, given top ratings throughout their career. Some were still fighting over benefits and health care. Some were worried about finding work ever again.

Inevitably, this categorization process led us to identify new patterns as we went along, and as new responses accumulated. For each new pattern, we would go back and see how many people fit.

One of the first and most interesting such categories was the people who had received emails congratulating them on their retirement at the same time as they were informed of their layoff. We realized there would be power in numbers there, so we set up a SecureDrop for people who were willing to send us their paperwork.

Eventually, we also created a category called “legal action.” We’d stumbled upon support groups of ex-IBM employees who had filed formal complaints with the Equal Employment Opportunity Commission. Some sent us the company’s responses to their individual complaints, giving us insight into the way the company responded to allegations of discrimination. These seemed, of course, very useful.

In other words: we sent some rather complicated mass emails and were surprised over and over again by the specificity of the responses.

IBM undoubtedly has information that would shed light on the documents, its layoff practices or the overall extent and nature of its job cuts. The company chose not to respond to our questions about those issues.

So we tried to answer ex-IBMers’ questions ourselves, including one of the most basic: How many employees ages 40 and over were let go or left in recent years?

IBM won’t say. In fact, over the years, the company has stopped releasing almost all information about its U.S. workforce. In 2009, it stopped publishing its American employment total. In 2014, it stopped disclosing the numbers and ages of older employees it was laying off, a requirement of the nation’s basic anti-age bias law, the Age Discrimination in Employment Act (ADEA).

So we’ve sought to estimate the number, relying on one of the few remaining bits of company-provided information — a technique developed by a veteran financial analyst who follows IBM for investors — as well as patterns we spotted in internal company documents.

We began with a line in the company’s quarterly and annual filings with the U.S. Securities and Exchange Commission for “workforce rebalancing,” a company term for layoffs, firings and other non-retirement departures. It’s a gauge of what IBM spends to let people go. In the past five years, workforce rebalancing charges have totaled $4.3 billion.

The technique was used by veteran IBM analyst Toni Sacconaghi of Bernstein Research. Sacconaghi is a respected Wall Street analyst who has been named to Institutional Investor’s All-America Research Team every year since 2001. His technique and layoff estimates have been widely cited by news organizations including The Wall Street Journal and Fortune.

Some years ago, Sacconaghi estimated that IBM’s average per-employee cost for laying off a worker was $70,000.

Dividing $4.3 billion by $70,000 suggests that during the past five years IBM’s worldwide job cuts totaled about 62,000. If anything, that number is low, given IBM executives’ comments at a recent investor conference. Internal company documents we reviewed suggest that 50 to 60 percent of cuts were made in the U.S., with older workers representing roughly 60 percent of those. That translates to about 20,000 older American workers let go.
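
For readers who want to check the arithmetic, here is the estimate as a short Python sketch. The 55 percent U.S. share is our midpoint of the 50-to-60 percent range from the internal documents mentioned above.

    # Reconstructing the layoff estimate from the figures cited above.
    rebalancing_charges = 4.3e9   # five years of "workforce rebalancing" spend
    cost_per_employee = 70_000    # Sacconaghi's per-employee layoff cost

    worldwide_cuts = rebalancing_charges / cost_per_employee
    print(f"Estimated worldwide cuts: {worldwide_cuts:,.0f}")
    # prints 61,429; the story rounds this to about 62,000

    us_share = 0.55     # midpoint of the 50-60% U.S. share (an assumption)
    older_share = 0.60  # older workers' rough share of the U.S. cuts
    older_us = worldwide_cuts * us_share * older_share
    print(f"Estimated older U.S. workers let go: {older_us:,.0f}")
    # prints 20,271, i.e. about 20,000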

Our analysis suggests the total of U.S. layoffs is almost certainly higher.

First, as Sacconaghi said in a recent interview, IBM’s per-employee rebalancing costs are likely much lower now because, starting in 2016, the company reduced severance payments to departing employees from six months to just 30 days. That means IBM can lay off or fire more people for the same or lower overall costs.

Second, because, as those ex-IBMers told us, the company often converts their layoffs into retirements, the workplace rebalancing numbers don’t tell the whole story.

Right below the line for “workforce rebalancing” in its SEC filings, IBM adds another line for “retirement-related costs,” which reflects how much the company spends each year retiring people out. Some — perhaps a substantial amount of that — went to retirements that were less than fully voluntary. This could add up to thousands more people.

By coming up with answers and investigating in the open, we’ve gotten more sources

Many of the conversations we’ve had during our reporting didn’t make it into the final story. People allowed us to review internal company documents. They let us see long email exchanges with their managers. They dug back through closets and garages to find memos they had saved out of frustration or fatigue or just plain anger.

We can’t go into detail about all of the ways the community helped us report out this story, because we also promised many of our sources that we would protect their confidentiality. The beauty is that they talked to us anyway. They knew where to find us, because our contact information had been spread far and wide.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Airlines Want To Extend 'Dynamic Pricing' Capabilities To Set Ticket Prices By Each Person

In the near future, what you post on social media sites (e.g., Facebook, Instagram, Pinterest, etc.) could affect the price you pay for airline tickets. How's that?

First, airlines already use what the travel industry calls "dynamic pricing" to vary prices by date, time of day, and season. We've all seen higher ticket prices during the holidays and peak travel times. The Telegraph UK reported that airlines want to extend dynamic pricing to set fares by person:

"... the advent of setting fares by the person, rather than the flight, are fast approaching. According to John McBride, director of product management for PROS, a software provider that works with airlines including Lufthansa, Emirates and Southwest, a number of operators have already introduced dynamic pricing on some ticket searches. "2018 will be a very phenomenal year in terms of traction," he told Travel Weekly..."

And, there was a preliminary industry study about how to do it:

" "The introduction of a Dynamic Pricing Engine will allow an airline to take a base published fare that has already been calculated based on journey characteristics and broad segmentation, and further adjust the fare after evaluating details about the travelers and current market conditions," explains a white paper on pricing written by the Airline Tariff Publishing Company (ATPCO), which counts British Airways, Delta and KLM among its 430 airline customers... An ATPCO working group met [in late February] to discuss dynamic pricing, but it is likely that any roll out to its customers would be incremental."

What's "incremental" mean? Experts say first step would be to vary ticket prices in search results at the airline's site, or at an intermediary's site. There's virtually no way for each traveler to know they'd see a personal price that's higher (or lower) from prices presented to others.

With dynamic pricing per person, business travelers would pay more. And, an airline could automatically bundle several fees (e.g., priority boarding, luggage, meals, etc.) for its loyalty program members into each person's ticket price, undermining both transparency and fairness. Of course, airlines would pitch this as convenience, but alert consumers know that any convenience always has its price.
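
As a purely hypothetical sketch of the approach the ATPCO white paper describes, taking a base published fare and adjusting it using details about the traveler, consider the Python fragment below. Every attribute and multiplier is invented for illustration; no airline's actual rules are public.

    def personalized_fare(base_fare, traveler):
        """Adjust a base published fare using (invented) traveler attributes."""
        fare = base_fare
        if traveler.get("business_traveler"):
            fare *= 1.20  # less price-sensitive segment sees a higher fare
        if traveler.get("loyalty_member"):
            fare *= 1.05  # bundled perks folded silently into the price
        if traveler.get("bargain_hunter"):
            fare *= 0.95  # discount to close a hesitant shopper
        return round(fare, 2)

    print(personalized_fare(400.00, {"business_traveler": True}))  # 480.0
    print(personalized_fare(400.00, {"bargain_hunter": True}))     # 380.0

Two travelers searching the same flight would see different prices, with no visible indication that either quote deviates from the published fare.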

Thankfully, some politicians in the United States are paying attention. The Shear Social Media Law & Technology blog summarized the situation very well:

"[Dynamic pricing by person] demonstrates why technology companies and the data collection industry needs greater regulation to protect the personal privacy and free speech rights of Americans. Until Silicon Valley and data brokers are properly regulated Americans will continue to be discriminated against based upon the information that technology companies are collecting about us."

Just because something can be done with technology, doesn't mean it should be done. What do you think?


Report: Little Progress Since 2016 To Replace Old, Vulnerable Voting Machines In United States

We've known for some time that a sizeable portion of voting machines in the United States are vulnerable to hacking and errors. Too many states, cities, and towns use antiquated equipment, or equipment without paper backups. The latter makes meaningful recounts impossible.

Has any progress been made to fix the vulnerabilities? The Brennan Center For Justice (BCJ) reported:

"... despite manifold warnings about election hacking for the past two years, the country has made remarkably little progress since the 2016 election in replacing antiquated, vulnerable voting machines — and has done even less to ensure that our country can recover from a successful cyberattack against those machines."

It is important to remember this warning in January 2017 from the Director of National Intelligence (DNI):

"Russian effortsto influence the 2016 US presidential election represent the most recent expression of Moscow’s longstanding desire to undermine the US-led liberal democratic order, but these activities demonstrated a significant escalation in directness, level of activity, and scope of effort compared to previous operations. We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process... Russian intelligence accessed elements of multiple state or local electoral boards. Since early 2014, Russian intelligence has researched US electoral processes and related technology and equipment. DHS assesses that the types of systems we observed Russian actors targeting or compromising are not involved in vote tallying... We assess Moscow will apply lessons learned from its Putin-ordered campaign aimed at the US presidential election to future influence efforts worldwide, including against US allies and their election processes... "

Detailed findings in the BCJ report about the lack of progress:

  1. "This year, most states will use computerized voting machines that are at least 10 years old, and which election officials say must be replaced before 2020.
    While the lifespan of any electronic voting machine varies, systems over a decade old are far more likely to need to be replaced, for both security and reliability reasons... older machines are more likely to use outdated software like Windows 2000. Using obsolete software poses serious security risks: vendors may no longer write security patches for it; jurisdictions cannot replace critical hardware that is failing because it is incompatible with their new, more secure hardware... In 2016, jurisdictions in 44 states used voting machines that were at least a decade old. Election officials in 31 of those states said they needed to replace that equipment by 2020... This year, 41 states will be using systems that are at least a decade old, and officials in 33 say they must replace their machines by 2020. In most cases, elections officials do not yet have adequate funds to do so..."
  2. "Since 2016, only one state has replaced its paperless electronic voting machines statewide.
    Security experts have long warned about the dangers of continuing to use paperless electronic voting machines. These machines do not produce a paper record that can be reviewed by the voter, and they do not allow election officials and the public to confirm electronic vote totals. Therefore, votes cast on them could be lost or changed without notice... In 2016, 14 states (Arkansas, Delaware, Georgia, Indiana, Kansas, Kentucky, Louisiana, Mississippi, New Jersey, Pennsylvania, South Carolina, Tennessee, Texas, and Virginia) used paperless electronic machines as the primary polling place equipment in at least some counties and towns. Five of these states used paperless machines statewide. By 2018 these numbers have barely changed: 13 states will still use paperless voting machines, and 5 will continue to use such systems statewide. Only Virginia decertified and replaced all of its paperless systems..."
  3. "Only three states mandate post-election audits to provide a high-level of confidence in the accuracy of the final vote tally.
    Paper records of votes have limited value against a cyberattack if they are not used to check the accuracy of the software-generated total to confirm that the veracity of election results. In the last few years, statisticians, cybersecurity professionals, and election experts have made substantial advances in developing techniques to use post-election audits of voter verified paper records to identify a computer error or fraud that could change the outcome of a contest... Specifically, “risk limiting audits” — a process that employs statistical models to consistently provide a high level of confidence in the accuracy of the final vote tally – are now considered the “gold standard” of post-election audits by experts... Despite this fact, risk limiting audits are required in only three states: Colorado, New Mexico, and Rhode Island. While 13 state legislatures are currently considering new post-election audit bills, since the 2016 election, only one — Rhode Island — has enacted a new risk limiting audit requirement."
  4. "43 states are using machines that are no longer manufactured.
    The problem of maintaining secure and reliable voting machines is particularly challenging in the many jurisdictions that use machine models that are no longer produced. In 2015... the Brennan Center estimated that 43 states and the District of Columbia were using machines that are no longer manufactured. In 2018, that number has not changed. A primary challenge of using machines no longer manufactured is finding replacement parts and the technicians who can repair them. These difficulties make systems less reliable and secure... In a recent interview with the Brennan Center, Neal Kelley, registrar of voters for Orange County, California, explained that after years of cannibalizing old machines and hoarding spare parts, he is now forced to take systems out of service when they fail..."
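
To see what a risk-limiting audit actually does, here is a toy simulation of a ballot-polling audit (the "BRAVO" method, a sequential probability ratio test). It assumes a two-candidate race, a reported winner share of 55 percent, and a 5 percent risk limit; all numbers are invented for illustration.

    import random

    def ballot_polling_audit(sampled_ballots, reported_share, risk_limit=0.05):
        """Return the number of ballots examined before the reported outcome
        is confirmed at the given risk limit, or None if the sample runs out."""
        t = 1.0  # sequential likelihood ratio vs. a tied-race hypothesis
        for count, vote_for_winner in enumerate(sampled_ballots, start=1):
            share = reported_share if vote_for_winner else 1 - reported_share
            t *= share / 0.5
            if t >= 1 / risk_limit:
                return count  # strong evidence the reported outcome is right
        return None  # inconclusive: escalate toward a full hand count

    random.seed(42)
    reported = 0.55  # reported winner share of a two-candidate race
    paper_ballots = (random.random() < reported for _ in range(10_000))
    n = ballot_polling_audit(paper_ballots, reported)
    print(f"Outcome confirmed after {n} ballots" if n else "Full hand count")

The appeal of this design is that a comfortable margin can typically be confirmed by hand-examining a few hundred randomly drawn paper ballots, while a wrong or razor-thin result forces escalation toward a full hand count.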

That is embarrassing for a country that prides itself on having an effective democracy. According to BCJ, the solution would be for Congress to fund, via grants, the replacement of paperless and antiquated equipment, plus fund post-election audits.

Rather than protect the integrity of our democracy, the government passed a massive tax cut which will increase federal deficits during the coming years while pursuing both a costly military parade and an unfunded border wall. Seems like questionable priorities to me. What do you think?


2017 FTC Complaints Report: Debt Collection Tops The List. Older Consumers Better At Spotting Scams

Earlier this month, the U.S. Federal Trade Commission (FTC) released its annual report of complaints submitted by consumers in the United States. The report is helpful for understanding the most frequent types of scams and complaints consumers reported.

The latest report, titled 2017 Consumer Sentinel Network Data Book, includes complaints from 2.68 million consumers, a decrease from 2.98 million in 2016. However, consumers reported losing a total of $905 million to fraud in 2017, which is $63 million more than in 2016. The most frequent complaints were about debt collection (23 percent), identity theft (14 percent), and imposter scams (13 percent). The top 20 complaint categories:

Rank  Category                                                # Of Reports   % Of Reports
  1   Debt Collection                                              608,535         22.74%
  2   Identity Theft                                               371,061         13.87%
  3   Imposter Scams                                               347,829         13.00%
  4   Telephone & Mobile Services                                  149,578          5.59%
  5   Banks & Lenders                                              149,316          5.58%
  6   Prizes, Sweepstakes & Lotteries                              142,870          5.34%
  7   Shop-at-Home & Catalog Sales                                 126,387          4.72%
  8   Credit Bureaus, Information Furnishers & Report Users        107,473          4.02%
  9   Auto Related                                                  86,289          3.23%
 10   Television and Electronic Media                               47,456          1.77%
 11   Credit Cards                                                  45,428          1.70%
 12   Internet Services                                             45,093          1.69%
 13   Foreign Money Offers & Counterfeit Check Scams                31,980          1.20%
 14   Health Care                                                   27,660          1.03%
 15   Travel, Vacations & Timeshare Plans                           22,264          0.83%
 16   Business & Job Opportunities                                  19,082          0.71%
 17   Advance Payments for Credit Services                          17,762          0.66%
 18   Investment Related                                            15,079          0.56%
 19   Computer Equipment & Software                                  9,762          0.36%
 20   Mortgage Foreclosure Relief & Debt Management                  8,973          0.34%

While the median loss for all fraud reports in 2017 was $429, consumers reported larger losses in certain types of scams: travel, vacations and timeshare plans ($1,710); mortgage foreclosure relief and debt management ($1,200); and business/job opportunities ($1,063).

The telephone was the most frequently-reported method (70 percent) scammers used to contact consumers, and wire transfers were the most frequently-reported payment method for fraud ($333 million in losses reported). Also:

"The states with the highest per capita rates of fraud reports in 2017 were Florida, Georgia, Nevada, Delaware, and Michigan. For identity theft, the top states in 2017 were Michigan, Florida, California, Maryland, and Nevada."

What's new in this report is that it details financial losses by age group. The FTC report concluded:

"Consumers in their twenties reported losing money to fraud more often than those over age 70. For example, among people aged 20-29 who reported fraud, 40 percent indicated they lost money. In comparison, just 18 percent of those 70 and older who reported fraud indicated they lost any money. However, when these older adults did report losing money to a scammer, the median amount lost was greater. The median reported loss for people age 80 and older was $1,092 compared to $400 for those aged 20-29."

Detailed information supporting this conclusion:

[Image: 2017 FTC Consumer Sentinel complaints report: reports and losses by age group]

[Image: 2017 FTC Consumer Sentinel complaints report: median losses by age group]

The second chart is key. More than twice as many younger consumers (40 percent of those ages 20-29) reported losing money to fraud, compared to 18 percent of consumers ages 70 and older. At the same time, the older consumers who did lose money lost more of it. So, older consumers were more skilled at spotting scams, and fewer fell victim. It seems both groups could learn from each other.

CBS News interviewed a millennial who fell victim to a mystery-shopper scam, which seemed to be a slick version of the old fake-check scam. It seems wise for all consumers, regardless of age, to stay aware of the current types of scams. Pick a news source or blog you trust. Hopefully, this blog.

Below is a graphic summarizing the 2017 FTC report:

2017 FTC Consumer Sentinel complaints report summary infographic. Click to view larger version


Security Experts: Artificial Intelligence Is Ripe For Misuse By Bad Actors

Over the years, bad actors (e.g., criminals, terrorists, rogue states, ethically-challenged business executives) have used a variety of online technologies to remotely hack computers, track users online without notice or consent, and circumvent the privacy settings consumers apply to their internet-connected devices. During the past year or two, reports surfaced about bad actors using advertising and social networking technologies to sway public opinion.

Security researchers and experts have warned in a new report that two of the newest technologies can also be used maliciously:

"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis... Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."

Companies currently use or test artificial intelligence (A.I.) to automate mundane tasks, upgrade and improve existing automated processes, and/or personalize employee (and customer) experiences in a variety of applications and business functions, including sales, customer service, and human resources. "Machine learning" refers to the development of digital systems that improve their performance on a task through experience. Both are part of a business trend often referred to as "digital transformation" or the "intelligent workplace." The CXO Talk site, featuring interviews with business leaders and innovators, is a good resource to learn more about A.I. and digital transformation.
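For readers who want a feel for what "improving with experience" means, here is a minimal sketch. It fits a straight line to observed data points; as more observations arrive, its predictions improve. This illustrates the concept only, not how production A.I. systems are built.

```python
# Minimal "learning from experience": fit y = a*x + b to observed
# points by least squares; predictions improve as data accumulates.
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

experience = [(1, 2.1), (2, 3.9), (3, 6.2)]   # past observations
a, b = fit_line(experience)
print(f"prediction for x=4: {a * 4 + b:.1f}")  # refines as data grows
```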

A survey last year of employees in the USA, France, Germany, and the United Kingdom found that employees "see A.I. as the technology that will cause the most disruption to the workplace." The survey also found: 70 percent of employees surveyed expect A.I. to impact their jobs during the next ten years, half expect impacts within the next three years, and about a third see A.I. as a job creator.

This new report was authored by 26 security experts from a variety of educational institutions including American University, Stanford University, Yale University, the University of Cambridge, the University of Oxford, and others. The report cited three general ways bad actors could misuse A.I.:

"1. Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.

2. Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.

3. Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems."

So, A.I. could make it easier for the bad guys to automate labor-intensive cyber-attacks, such as spear-phishing. The bad guys could also create new cyber-attacks by combining A.I. with speech synthesis. The authors of the report cited examples of more threats:

"The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones)... The use of AI to automate tasks involved in surveillance (e.g. analyzing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation..."

BBC News reported even more possible threats:

"Technologies such as AlphaGo - an AI developed by Google's DeepMind and able to outwit human Go players - could be used by hackers to find patterns in data and new exploits in code. A malicious individual could buy a drone and train it with facial recognition software to target a certain individual. Bots could be automated or "fake" lifelike videos for political manipulation. Hackers could use speech synthesis to impersonate targets."

From all of this, one can conclude that the 2016 election interference cited by intelligence officials is probably mild compared to what will come: more serious, sophisticated, and numerous attacks. The report included four high-level recommendations:

"1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges."

Download the 101-page report titled, "The Malicious Use Of Artificial Intelligence: Forecasting, Prevention, And Mitigation." A copy of the report is also available here (Adobe PDF; 1,400 k bytes).

To prepare, both corporate and government executives would be wise to both harden their computer networks and (re)train their employees to recognize and guard against cyber attacks. What do you think?


I Approved This Facebook Message — But You Don’t Know That

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking sites. It is reprinted with permission.]

Facebook logo By Jennifer Valentino-DeVries, ProPublica

Hundreds of federal political ads — including those from major players such as the Democratic National Committee and the Donald Trump 2020 campaign — are running on Facebook without adequate disclaimer language, likely violating Federal Election Commission (FEC) rules, a review by ProPublica has found.

An FEC opinion in December clarified that the requirement for political ads to say who paid for and approved them, which has long applied to print and broadcast outlets, extends to ads on Facebook. So we checked more than 300 ads that had run on the world’s largest social network since the opinion, and that election-law experts told us met the criteria for a disclaimer. Fewer than 40 had disclosures that appeared to satisfy FEC rules.

“I’m totally shocked,” said David Keating, president of the nonprofit Institute for Free Speech in Alexandria, Virginia, which usually opposes restrictions on political advertising. “There’s no excuse,” he said, looking through our database of ads.

The FEC can investigate possible violations of the law and fine people up to thousands of dollars for breaking it — fines double if the violation was “knowing and willful,” according to the regulations. Under the law, it’s up to advertisers, not Facebook, to ensure they have the right disclaimers. The FEC has not imposed penalties on any Facebook advertiser for failing to disclose.

An FEC spokeswoman declined to say whether the commission has any recent complaints about lack of disclosure on Facebook ads. Enforcement matters are confidential until they are resolved, she said.

None of the individuals or groups we contacted whose ads appeared to have inadequate disclaimers, including the Democratic National Committee and the Trump campaign, responded to requests for comment. Facebook declined to comment on ProPublica’s findings or the December opinion. In public documents, the company has urged the FEC to be “flexible” in what it allows online, and to develop a policy for all digital advertising rather than focusing on Facebook.

Insufficient disclaimers can be minor technicalities, not necessarily evidence of intent to deceive. But the pervasiveness of the lapses ProPublica found suggests a larger problem that may raise concerns about the upcoming midterm elections — that political advertising on the world’s largest social network isn’t playing by rules intended to protect the public.

Unease about political ads on Facebook and other social networking sites has intensified since internet companies acknowledged that organizations associated with the Russian government bought ads to influence U.S. voters during the 2016 election. Foreign contributions to campaigns for U.S. federal office are illegal. Online, advertisers can target ads to relatively small groups of people. Once the marketing campaign is over, the ads disappear. This makes it difficult for the public to scrutinize them.

The FEC opinion is part of a push toward more transparency in online political advertising that has come in response to these concerns. In addition to handing down the opinion in a specific case, the FEC is preparing new rules to address ads on social media more broadly. Three senators are sponsoring a bill called the Honest Ads Act, which would require internet companies to provide more information on who is buying political ads. And earlier this month, the election authority in Seattle said Facebook was violating a city law on election-ad disclosures, marking a milestone in municipal attempts to enforce such transparency.

Facebook itself has promised more transparency about political ads in the coming months, including “paid for by” disclosures. Since late October it has been conducting tests in Canada that publish ads on an advertiser’s Facebook page, where people can see them even without being part of the advertiser’s target audience. Those ads are only up while the ad campaign is running, but Facebook says it will create a searchable archive for federal election advertising in the U.S. starting this summer.

ProPublica found the ads using a tool called the Political Ad Collector, which allows Facebook users to automatically send us the political ads that were displayed on their news feeds. Because they reflect what users of the tool are seeing, the ads in our database aren’t a representative sample.
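ProPublica has published the actual browser extension, so what follows is only a sketch of the general pattern such a pipeline uses: a small web service that accepts ad reports from volunteers' browsers and stores them for later review. The route and field names here are hypothetical, not ProPublica's code.

```python
# Hypothetical sketch of a crowd-sourced ad-collection endpoint.
# Not ProPublica's code; the route and field names are invented.
import datetime
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "ads.db"

def init_db():
    with sqlite3.connect(DB) as conn:
        conn.execute("""CREATE TABLE IF NOT EXISTS ads (
            id INTEGER PRIMARY KEY,
            ad_text TEXT, advertiser TEXT, seen_at TEXT)""")

@app.route("/ads", methods=["POST"])
def collect_ad():
    report = request.get_json(force=True)  # sent by the browser tool
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "INSERT INTO ads (ad_text, advertiser, seen_at) VALUES (?, ?, ?)",
            (report.get("text", ""), report.get("advertiser", ""),
             datetime.datetime.utcnow().isoformat()))
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    init_db()
    app.run()
```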

The disclaimers required by the FEC are familiar to anyone who has seen a print or television political ad — think of a candidate saying, “I’m ____, and I approved this message,” at the end of a TV commercial, or a “paid for by” box at the bottom of a newspaper advertisement. They’re intended to make sure the public knows who is paying to support a candidate, and to prevent people from falsely claiming to speak on a candidate’s behalf.

The system does have limitations, reflecting concerns that overuse of disclaimers could inhibit free speech. For starters, the rules apply only to certain types of political ads. Political committees and candidates have to include disclaimers, as do people seeking donations or conducting “express advocacy.” To count as express advocacy, an ad typically must mention a candidate and use certain words clearly campaigning for or against a candidate — such as “vote for,” “reject” or “re-elect.” And the regulations only apply to federal elections, not state and local ones.

The rules also don’t address so-called “issue” ads that advocate a policy stance. These ads may include a candidate’s name without a disclaimer, as long as they aren’t funded by a political committee or candidate and don’t use express-advocacy language. Many of the political ads purchased by Russian groups in 2016 attempted to influence public opinion without mentioning candidates at all — and would not require disclosure even today.

Enforcement of the law often relies on political opponents or a member of the public complaining to the FEC. If only supporters see an ad, as might be the case online, a complaint may never come.

The disclaimer law was last amended in 2002, but online advertising has changed so rapidly that several experts said the FEC has had trouble keeping up. In 2002, the commission found that paid text message ads were exempt from disclosure under the “small-items exception” originally intended for buttons, pins and the like. What counts as small depends on the situation and is up to the FEC.

In 2010, the FEC considered ads on Google that had no graphics or photos and were limited to 95 characters of text. Google proposed that disclaimers not be part of the ads themselves but be included on the web pages that users would go to after clicking on the ads; the FEC agreed.

In 2011, Facebook asked the FEC to allow political ads on the social network to run without disclosures. At the time, Facebook limited all ads on its platform to small, “thumbnail” photos and brief text of only 100 or 160 characters, depending on the type of ad. In that case, the six-person FEC couldn’t muster the four votes needed to issue an opinion, with three commissioners saying only limited disclosure was required and three saying the ads needed no disclosure at all, because it would be “impracticable” for political ads on Facebook to contain more text than other ads. The result was that political ads on Facebook ran without the disclaimers seen on other types of election advertising.

Since then, though, ads on Facebook have expanded. They can now include much more text, as well as graphics or photos that take up a large part of the news feed’s width. Video ads can run for many minutes, giving advertisers plenty of time to show the disclaimer as text or play it in a voiceover.

Last October, a group called Take Back Action Fund decided to test whether these Facebook ads should still be exempt from the rules.

“For years now, people have said, ‘Oh, don’t worry about the rules, because the FEC doesn’t enforce anything on Facebook,’” said John Pudner, president of Take Back Action Fund, which advocates for campaign finance reform. Many political consultants “didn’t think you ever needed a disclaimer on a Facebook ad,” said Pudner, a longtime campaign consultant to conservative candidates.

Take Back Action Fund came up with a plan: Ask the FEC whether it should include disclosures on ads that the group thought clearly needed them.

The group told the FEC it planned to buy “express advocacy” ads on Facebook that included large images or videos on the news feed. In its filing, Take Back Action Fund provided some sample text it said it was thinking of using: “While [Candidate Name] accuses the Russians of helping President Trump get elected, [s/he] refuses to call out [his/her] own Democrat Party for paying to create fake documents that slandered Trump during his presidential campaign. [Name] is unfit to serve.”

In a comment filed with the FEC in the matter, the Internet Association trade group, of which Facebook is a member, asked the commission to follow the precedent of the 2010 Google case and allow a “one-click” disclosure that didn’t need to be on the ad itself but could be on the web page the ad led to.

The FEC didn’t follow that recommendation. It said unanimously that the ads needed full disclaimers.

The opinion, handed down Dec. 15, was narrow, saying that if any of the “facts or assumptions” presented in another case were different in a “material” way, the opinion could not be relied upon. But several legal experts who spoke with ProPublica said the opinion means anyone who would have to include disclaimers in traditional advertising should now do so on large Facebook image ads or video ads — including candidates, political committees and anyone using express advocacy.

“The functionality and capabilities of today’s Facebook Video and Image ads can accommodate the information without the same constrictions imposed by the character-limited ads that Facebook presented to the Commission in 2011,” three commissioners wrote in a concurring statement. A fourth commissioner went further, saying the commission’s earlier decision in the text messaging case should now be completely superseded. The remaining two commissioners didn’t comment beyond the published opinion.

“We are overjoyed at the decision and hope it will have the effect of stopping anonymous attacks,” said Pudner, of Take Back Action Fund. “We think that this is a matter of the voter’s right to know.” He added that the group doesn’t intend to purchase the ads.

This year, the FEC plans to tackle concerns about digital political advertising more generally. Facebook favors such an industry-wide approach, partly for competitive reasons, according to a comment it submitted to the commission.

“Facebook strongly supports the Commission providing further guidance to committees and other advertisers regarding their disclaimer obligations when running election-related Internet communications on any digital platform,” Facebook General Counsel Colin Stretch wrote to the FEC.

Facebook was concerned that its own transparency efforts “will apply only to advertising on Facebook’s platform, which could have the unintended consequence of pushing purchasers who wish to avoid disclosure to use other, less transparent platforms,” Stretch wrote.

He urged the FEC to adopt a “flexible” approach, on the grounds that there are many different types of online ads. “For example, allowing ads to include an icon or other obvious indicator that more information about an ad is available via quick navigation (like a single click) would give clear guidance.”

To test whether political advertisers were following the FEC guidelines, we searched for large U.S. political ads that our tool gathered between Dec. 20 — five days after the opinion — and Feb. 1. We excluded the small ads that run on the right column of Facebook’s website. To find ads that were most likely to fall under the purview of the FEC regulations, we searched for terms like “committee,” “donate” and “chip in.” We also searched for ads that used express advocacy language such as, “for Congress,” “vote against,” “elect” or “defeat.” We left out ads with state and local terms such as “governor” or “mayor,” as well as ads from groups such as the White House Historical Association or National Audubon Society that were obviously not election-oriented. Then we examined the ads, including the text and photos or graphics.
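As a rough sketch, that kind of keyword screen is simple to express in code. The term lists below are abbreviated and the matching is naive; the actual review also involved manually examining each ad.

```python
# Naive version of the keyword screen described above. Term lists are
# abbreviated; the real methodology included manual review of each ad.
FEC_TERMS = ["committee", "donate", "chip in",
             "for congress", "vote against", "elect", "defeat"]
EXCLUDE_TERMS = ["governor", "mayor"]  # state and local races are out of scope

def likely_needs_disclaimer(ad_text: str) -> bool:
    text = ad_text.lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return False
    return any(term in text for term in FEC_TERMS)

ads = ["Chip in $5 to elect Jane Doe to Congress!",
       "Vote Smith for mayor on Tuesday."]
print([ad for ad in ads if likely_needs_disclaimer(ad)])
# Only the federal-race ad is kept for closer examination.
```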

Of nearly 70 entities that ran ads with a large photo or graphic in addition to text, only two used all of the required disclaimer language. About 20 correctly indicated in some fashion the name of the committee associated with the ad but omitted other language, such as whether the ad was endorsed by a candidate. The rest had more significant shortcomings. Many of those that didn’t include disclosures were for relatively inexperienced candidates for Congress, but plenty of seasoned lawmakers and major groups failed to use the proper language as well.

For example, one ad said, “It’s time for Donald Trump, his family, his campaign, and all of his cronies to come clean about their collusion with Russia.” A photo of Donald Trump appeared over a black and red map of Russia, overlaid by the text, “Stop the Lies.” The ad urged people to “Demand Answers Today” and “Sign Up.”

At the top, the ad identified the Democratic Party as the sponsor, and linked to the party’s Facebook page. But, under FEC rules, it should have named the funder, the Democratic National Committee, and given the committee’s address or website. It should also have said whether the ad was endorsed by any candidate. It didn’t. The only nod to the national committee was a link to my.democrats.org, which is paid for by the DNC, at the bottom of the ad. As on all Facebook ads, the word “Sponsored” was included at the top.

Advertisers seemed more likely to put the proper disclaimers on video ads, especially when those ads appeared to have been created for television, where disclaimers have been mandatory for years. Videos that didn’t look made for TV were less likely to include a disclaimer.

One ad that said it was from Donald J. Trump consisted of 20 seconds of video with an American flag background and stirring music. The words “Donate Now! And Enter for a Chance To Win Dinner With Trump!” materialized on the screen with dramatic thuds and crashes. The ad linked to Trump’s Facebook page, and a “Donate” button at the bottom of the ad linked to a website that identified the president’s re-election committee, Donald J. Trump for President, Inc., as its funder. It wasn’t clear on the ad whether Trump himself or his committee paid for it, which should have been specified under FEC rules.

The large majority of advertisements we collected — both those that used disclosures and those that didn’t — were for liberal groups and politicians, possibly reflecting the allegiances of the ProPublica readers who installed our ad-collection tool. There were only four Republican advertisers among the ads we analyzed.

It’s not clear why advertisers aren’t following the FEC regulations. Keating, of the Institute for Free Speech, suggested that advertisers might think the word “Sponsored” and a link to their Facebook page are enough and that reasonable people would know they had paid for the ad.

Others said social media marketers may simply be slow in adjusting to the FEC opinion.

“It’s entirely possible that because disclaimers haven’t been included for years now, candidates and committees just aren’t used to putting them on there,” said Brendan Fischer, director of the Federal and FEC Reform Program at the Campaign Legal Center, the group that provided legal services to Take Back Action Fund. “But they should be on notice,” he added.

There were only two advertisers we saw that included the full, clear disclosures required by the FEC on their large image ads. One was Amy Klobuchar, a Democratic senator from Minnesota who is a co-sponsor of the Honest Ads Act. The other was John Moser, an IT security professional and Democratic primary candidate in Maryland’s 7th Congressional District who received $190 in contributions last year, according to his FEC filings.

Reached by Facebook Messenger, Moser said he is running because he has a plan for ending poverty in the U.S. by restructuring Social Security into a “universal dividend” that gives everyone over age 18 a portion of the country’s per capita income. He complained that Facebook doesn’t make it easy for political advertisers to include the required disclosures. “You have to wedge it in there somewhere,” said Moser, who faces an uphill battle against longtime U.S. Rep. Elijah Cummings. “They need to add specific support for that, honestly.”

Asked why he went to the trouble to put the words on his ad, Moser’s answer was simple: “I included a disclosure because you're supposed to.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


New Data Breach Legislation Proposed In North Carolina

After a surge in data breaches in North Carolina during 2017, state legislators have proposed stronger data breach laws. The National Law Review explained what prompted the legislative action:

"On January 8, 2018, the State of North Carolina released its Security Breach Report 2017, which highlights a 15 percent increase in breaches since 2016... Health care, financial services and insurance businesses accounted for 38 percent, with general businesses making up for just more than half of these data breaches. Almost 75 percent of all breaches resulted from phishing, hacking and unauthorized access, reflecting an overall increase of more than 3,500 percent in reported hacking incidents alone since 2006. Since 2015, phishing incidents increased over 2,300 percent. These numbers emphasize the warning to beware of emails or texts requesting personal information..."

So, fraudsters have tricked many North Carolina residents and employees into opening fraudulent e-mail and text messages, and then disclosing sensitive personal information in their replies. Not good.

Details about the proposed legislation:

"... named the Act to Strengthen Identity Theft Practices (ASITP), announced by Representative Jason Saine and Attorney General Josh Stein, attempts to combat the data breach epidemic by expanding North Carolina’s breach notification obligations, while reducing the time businesses have to comply with notification to the affected population and to the North Carolina Attorney General’s Office. If enacted, this new legislation will be one of the most aggressive U.S. breach notification statutes... The Fact Sheet concerning the ASITP as published by the North Carolina Attorney General proposes that the AG take a more direct role in the investigation of data breaches closer to their time of discovery...  To accomplish this goal, the ASITP proposes a significantly shorter period of time for an entity to provide notification to the affected population and to the North Carolina Attorney General. Currently, North Carolina’s statute mandates that notification be made to affected individuals and the Attorney General without “unreasonable delay.” Under the ASITP, the new deadline for all notifications would be 15 days following discovery of the data security incident. In addition to being the shortest deadline in the nation, it is important to note that notification vendors typically require 5 business days to process, print and mail notification letters... The proposed legislation also seeks to (1) expand the definition of “protected information” to include medical information and insurance account numbers, and (2) penalize those who fail to maintain reasonable security procedures by charging them with a violation under the Unfair and Deceptive Trade Practices Act for each person whose information is breached..."

Good. The National Law Review article also compared the breach notification deadlines across all 50 states and territories. It is worth a look to see how your state compares. A comparison of selected states:

Time after discovery of breach, for selected states/territories:

  • 10 calendar days: Puerto Rico (Dept. of Consumer Affairs)
  • 15 calendar days: North Carolina (proposed)
  • 15 business days: California (protected health information)
  • 30 calendar days: Florida
  • 45 calendar days: Ohio, Maryland
  • 90 calendar days: Connecticut
  • Most expedient time and without unreasonable delay: California (other), Massachusetts, New York, North Carolina, Pennsylvania, Puerto Rico (other)
  • As soon as possible: Texas
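The proposed 15-day clock is tighter than it looks once vendor lead time is factored in. A quick sketch of the date arithmetic, assuming (per the article) about five business days of vendor processing, printing, and mailing:

```python
# Sketch of the notification timeline under the proposed 15-day rule,
# assuming a vendor needs ~5 business days to process, print, and mail.
import datetime

def latest_vendor_handoff(discovery, deadline_days=15, vendor_business_days=5):
    deadline = discovery + datetime.timedelta(days=deadline_days)
    day, remaining = deadline, vendor_business_days
    while remaining > 0:
        day -= datetime.timedelta(days=1)
        if day.weekday() < 5:  # Monday through Friday
            remaining -= 1
    return deadline, day

discovery = datetime.date(2018, 3, 1)
deadline, handoff = latest_vendor_handoff(discovery)
print("Breach discovered:", discovery)
print("Notification deadline:", deadline)       # 15 calendar days later
print("Engage vendor no later than:", handoff)  # ~8 days left to
                                                # investigate and draft
```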

To learn more, download the North Carolina Security Breach Report 2017 (Adobe PDF), and the ASITP Fact Sheet (Adobe PDF).


Report: Air Travel Globally During 2017 Was The Safest Year On Record

The Independent UK newspaper reported:

"The Dutch-based aviation consultancy, To70, has released its Civil Aviation Safety Review for 2017. It reports only two fatal accidents, both involving small turbo-prop aircraft, with a total of 13 lives lost. No jets crashed in passenger service anywhere in the world... The chances of a plane being involved in a fatal accident is now one in 16 million, according to the lead researcher, Adrian Young... The report warns that electronic devices in checked-in bags pose a growing potential danger: “The increasing use of lithium-ion batteries in electronics creates a fire risk on board aeroplanes as such batteries are difficult to extinguish if they catch fire... The UK has the best air-safety record of any major country. No fatal accidents involving a British airline have happened since the 1980s. The last was on 10 January 1989... In contrast, sub-Saharan Africa has an accident rate 44 per cent worse than the global average, according to the International Air Transport Association (IATA)..."

Read the full 2017 aviation safety report by To70. Below is a chart from the report.

Accident Data Chart from To70 Air Safety Review for 2017. Click to view larger version


What We Discovered During a Year of Documenting Hate

[Editor's note: today's guest blog post, by the reporters at ProPublica, is the second in a series about law enforcement and hate crimes in the United States. Today's post is reprinted with permission.]

By Rachel Glickhouse, ProPublica

The days after Election Day last year seemed to bring with them a rise in hate crimes and bias incidents. Reports filled social media and appeared in local news. There were the letters calling for the genocide of Muslims that were sent to Islamic centers from California to Ohio. And the swastikas that were scrawled on buildings around the country. In Florida, “colored” and “whites only” signs were posted over water fountains at a high school. A man assaulted a Hispanic woman in San Francisco, telling her “No Latinos here.”

But were these horrible events indicative of an increase in crimes and incidents themselves, or did the reports simply reflect an increased awareness and willingness to come forward on the part of victims and witnesses? As data journalists, we went looking for answers and were not prepared for what we found: Nobody knows for sure. Hate crimes are so poorly tracked in America, there’s no way to undertake the kind of national analysis that we do in other areas, from bank robberies to virus outbreaks.

There is a vast discrepancy between the hate crimes numbers gathered by the FBI from police jurisdictions around the country and the estimate of hate crime victims in annual surveys by the Bureau of Justice Statistics. The FBI counted 6,121 hate crimes in 2016, while the BJS estimates 250,000 hate crimes a year, roughly 40 times the FBI figure.

We were told early on that while the law required the Department of Justice to report hate crime statistics, local and state police departments aren’t bound to report their numbers to the FBI — and many don't. Complicating matters further is that hate crime laws vary by state, with some including sexual orientation as a protected class of victims and some not. Five states have no hate crime statute at all.

We decided to try collecting data ourselves, using a mix of social media news gathering and asking readers to send in their personal stories. We assembled a coalition of more than 130 newsrooms to help us report on hate incidents by gathering and verifying tips, and worked on several lines of investigation in our own newsroom.

Along the way, we’ve learned a lot about how hate crimes fall through the cracks:

We’ve received thousands of tips so far through our embeddable incident reporting form. We’ve also added tips sent to us by civil rights groups such as the Southern Poverty Law Center.

ProPublica and reporters in newsrooms around the country used those tips to tell the stories of people who’ve come forward as victims or witnesses, and they’ve identified a number of patterns.

Impact

Our mission at ProPublica is to do journalism that has impact. We’ve seen significant impact from Documenting Hate.

  • The official Virginia state after-action report on the Charlottesville rally cited ProPublica’s reporting and made recommendations for better police practices based on our journalism.
  • Cloudflare changed their complaint policies following a ProPublica story on how the company helps support neo-Nazi sites. The company cited our reporting when they later shut down The Daily Stormer, a major neo-Nazi site.
  • After we asked for their records, the Jacksonville Sheriff’s Office, which had not sent a hate crime report to the state of Florida in years, began reporting hate crime data for the first time since 2013.
  • The Miami-Dade Police Department started an internal audit after we talked to them in October. Detective Carlos Rosario, a spokesman for the department, told us they found four hate crimes that they had failed to report to the state. Rosario also told us that they are in the process of creating a digital hate crime reporting process as a result of our reporting.
  • The Colorado Springs, Colorado, police department fixed a database problem that had caused the loss of at least 18 hate crime reports. The error was discovered after we asked them questions about their records.
  • The Madison, Wisconsin, police department changed how they categorize hate crimes before they send them to the FBI based on our records request.
  • A group of nine senators led by Sen. Patty Murray, D-Wash., sent a letter to Education Secretary Betsy DeVos asking what the administration will do in response to racist harassment in schools and universities, citing Buzzfeed’s reporting for the project.
  • The Daily Stormer in Spanish removed the name of a popular Spanish forum from its site after legal action was threatened following a Univision story.
  • The Matthew Shepard Foundation said it would increase resources dedicated to training police officers to identify and investigate hate crimes, citing our project.

Even after the 100 news stories produced by the Documenting Hate coalition, we’re by no means finished. ProPublica and our partners will spend next year collecting and telling more stories from victims and witnesses. And we still have a lot of questions that demand answers. You can help.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



The Limitations And Issues With Facial Recognition Software

We've all seen television shows where police technicians use facial recognition software to swiftly and accurately identify suspects, or catch the bad guys. How accurate is that? An article in The Guardian newspaper discussed the promises, limitations, and issues with facial recognition software used by law enforcement:

"The software, which has taken an expanding role among law enforcement agencies in the US over the last several years, has been mired in controversy because of its effect on people of color. Experts fear that the new technology may actually be hurting the communities the police claims they are trying to protect... "It’s considered an imperfect biometric," said Clare Garvie, who in 2016 created a study on facial recognition software, published by the Center on Privacy and Technology at Georgetown Law, called The Perpetual Line-Up. "There’s no consensus in the scientific community that it provides a positive identification of somebody"... [Garvie's] report found that black individuals, as with so many aspects of the justice system, were the most likely to be scrutinized by facial recognition software in cases. It also suggested that software was most likely to be incorrect when used on black individuals – a finding corroborated by the FBI's own research. This combination, which is making Lynch’s and other black Americans’ lives excruciatingly difficult, is born from another race issue that has become a subject of national discourse: the lack of diversity in the technology sector... According to a 2011 study by the National Institute of Standards and Technologies (Nist), facial recognition software is actually more accurate on Asian faces when it’s created by firms in Asian countries, suggesting that who makes the software strongly affects how it works... Law enforcement agencies often don’t review their software to check for baked-in racial bias – and there aren’t laws or regulations forcing them to."


Report: Several Impacts From Technology Changes Within The Financial Services Industry

For better or worse, the devices and technologies you use can identify and profile you in ways you may not expect, and the financial services industry is paying attention. A report by London-based Privacy International highlighted the changes within the industry:

"Financial services are changing, with technology being a key driver. It is affecting the nature of financial services from credit and lending through to insurance and even the future of money itself. The field known as “fintech” is where the attention and investment is flowing. Within it, new sources of data are being used by existing institutions and new entrants. They are using new forms of data analysis. These changes are significant to this sector and the lives of the people it serves. We are seeing dramatic changes in the ways that financial products make decisions. The nature of the decision-making is changing, transforming the products in the market and impacting on end results and bottom lines. However, this also means that treatment of individuals will change. This changing terrain of finance has implications for human rights, privacy and identity... Data that people would consider as having nothing to do with the financial sphere, such as their text-messages, is being used at an increasing rate by the financial sector...  Yet protections are weak or absent... It is essential that these innovations are subject to scrutiny... Fintech covers a broad array of sectors and technologies. A non-exhaustive list includes:

  • Alternative credit scoring (new data sources for credit scoring)
  • Payments (new ways of paying for goods and services that often have implications for the data generated)
  • Insurtech (the use of technology in the insurance sector)
  • Regtech (the use of technology to meet regulatory requirements)."

"Similarly, a breadth of technologies are used in the sector, including: Artificial Intelligence; Blockchain; the Internet of Things; Telematics and connected cars..."

While the study focused upon India and Kenya, it has implications for consumers worldwide. More observations and concerns:

"Social media is another source of data for companies in the fintech space. However, decisions are made not on just on the content of posts, but rather social media is being used in other ways: to authenticate customers via facial recognition, for instance... blockchain, or distributed ledger technology, is still best known for cryptocurrencies like BitCoin. However, the technology is being used more broadly, such as the World Bank-backed initiative in Kenya for blockchain-backed bonds10. Yet it is also used in other fields, like the push in digital identities11. A controversial example of this was a very small-scale scheme in the UK to pay benefits using blockchain technology, via an app developed by the fintech GovCoin12 (since renamed DISC). The trial raised concerns, with the BBC reporting a former member of the Government Digital Service describing this as "a potentially efficient way for Department of Work and Pensions to restrict, audit and control exactly what each benefits payment is actually spent on, without the government being perceived as a big brother13..."

Many consumers know that you can buy a wide variety of internet-connected devices for your home. That includes both devices you'd expect (e.g., televisions, printers, smart speakers and assistants, security systems, door locks and cameras, utility meters, hot water heaters, thermostats, refrigerators, robotic vacuum cleaners, lawn mowers) and devices you might not expect (e.g., sex toys, smart watches for children, mouse traps, wine bottles, crock pots, toy dolls, and trash/recycle bins). Add your car or truck to the list:

"With an increasing number of sensors being built into cars, they are increasingly “connected” and communicating with actors including manufacturers, insurers and other vehicles15. Insurers are making use of this data to make decisions about the pricing of insurance, looking for features like sharp acceleration and braking and time of day16. This raises privacy concerns: movements can be tracked, and much about the driver’s life derived from their car use patterns..."

And, there are hidden prices for the convenience of making payments with your favorite smart device:

"The payments sector is a key area of growth in the fintech sector: in 2016, this sector received 40% of the total investment in fintech22. Transactions paid by most electronic means can be tracked, even those in physical shops. In the US, Google has access to 70% of credit and debit card transactions—through Google’s "third-party partnerships", the details of which have not been confirmed23. The growth of alternatives to cash can be seen all over the world... There is a concerted effort against cash from elements of the development community... A disturbing aspect of the cashless debate is the emphasis on the immorality of cash—and, by extension, the immorality of anonymity. A UK Treasury minister, in 2012, said that paying tradesman by cash was "morally wrong"26, as it facilitated tax avoidance... MasterCard states: "Contrary to transactions made with a MasterCard product, the anonymity of digital currency transactions enables any party to facilitate the purchase of illegal goods or services; to launder money or finance terrorism; and to pursue other activity that introduces consumer and social harm without detection by regulatory or police authority."27"

The report cited a loss of control by consumers over their personal information. Going forward, the report included general and actor-specific recommendations. General recommendations:

  • "Protecting the human right to privacy should be an essential element of fintech.
  • Current national and international privacy regulations should be applicable to fintech.
  • Customers should be at the centre of fintech, not their product.
  • Fintech is not a single technology or business model. Any attempt to implement or regulate fintech should take these differences into account, and be based on the type activities they perform, rather than the type of institutions involved."

Want to learn more? Follow Privacy International on Facebook, on Twitter, or read about 10 ways of "Invisible Manipulation" of consumers.


Report: Patched Macs Still Vulnerable To Firmware Hacks

Apple Inc. logo I've heard consumers state this erroneous assumption numerous times: "Apple-branded devices don't get computer viruses." Well, they do. Ars Technica reported about a particularly nasty hack of vulnerabilities in devices' Extensible Firmware Interface (EFI). Never heard of EFI? Well:

"An analysis by security firm Duo Security of more than 73,000 Macs shows that a surprising number remained vulnerable to such attacks even though they received OS updates that were supposed to patch the EFI firmware. On average, 4.2 percent of the Macs analyzed ran EFI versions that were different from what was prescribed by the hardware model and OS version. 47 Mac models remained vulnerable to the original Thunderstrike, and 31 remained vulnerable to Thunderstrike 2. At least 16 models received no EFI updates at all. EFI updates for other models were inconsistently successful, with the 21.5-inch iMac released in late 2015 topping the list, with 43 percent of those sampled running the wrong version."

This is very bad. EFI hacks are particularly effective and nasty because:

"... they give attackers control that starts with the very first instruction a Mac receives... the level of control attackers get far exceeds what they gain by exploiting vulnerabilities in the OS... That means an attacker who compromises a computer's EFI can bypass higher-level security controls, such as those built into the OS or, assuming one is running for extra protection, a virtual machine hypervisor. An EFI infection is also extremely hard to detect and even harder to remedy, as it can survive even after a hard drive is wiped or replaced and a clean version of the OS is installed."

At-risk EFI versions mean that devices running the Windows and Linux operating systems are also vulnerable. Reportedly, the exploit requires plenty of computing resources and technical expertise, so hackers would probably pursue high-value targets (e.g., journalists, attorneys, government officials, contractors with government clearances) first.

The Duo Labs report (63 pages, Adobe PDF) lists the specific MacBook, MacBook Air, and MacBook Pro models at risk. The researchers shared a draft of the report with Apple before publication. The report's "Mitigation" section provides solutions, including but not limited to:

"Always deploy the full update package as released by Apple, do not remove separate packages from the bundle updater... When possible, deploy Combo OS updates instead of Delta updates... As a general rule of thumb, always run the latest version of macOS..."

Scary, huh? The nature of the attack means that hackers probably can disable the anti-virus software on your device(s), and you probably wouldn't know you've been hacked.


What We Know -- And Don't Know -- About Hate Crimes in America

[Editor's Note: today's guest blog post explores the problem of hate crimes. Recent surveys about harassment found that what happens online often doesn't stay online. Hopefully, future reports by ProPublica will explore the linkages. Today's blog post is reprinted with permission.]

By Rachel Glickhouse, ProPublica

"Go home. We need Americans here!" white supremacist Jeremy Joseph Christian yelled at two black women -- one wearing a hijab -- on a train in Portland, Oregon, in May. According to news reports, when several commuters tried to intervene, he went on a rampage, stabbing three people. Two of them died.

If the fatal stabbing was the worst racist attack in Portland this year, it was by no means the only one. In March, Buzzfeed reported on hate incidents in Oregon and the state's long history as a haven for white supremacists. Some of the incidents they found were gathered by Documenting Hate, a collaborative journalism project we launched earlier this year.

Documenting Hate is an attempt to overcome the inadequate data collection on hate crimes and bias incidents in America. We've been compiling incident reports from civil-rights groups, as well as news reports, social media and law enforcement records. We've also asked people to tell us their personal stories of witnessing or being the victim of hate.

It's been about six months since the project launched. Since then, we've been joined by more than 100 newsrooms around the country. Together, we're verifying the incidents that have been reported to us -- and telling people's stories.

We've received thousands of reports, with more coming every day. They come from cities big and small, and from states blue and red. People have reported hate incidents from every part of their communities: in schools, on the road, at private businesses, in the workplace. ProPublica and our partners have produced more than 50 stories using the tips from the database, from New York to Seattle, Minneapolis to Phoenix. Some examples:

Univision, HuffPost, and The New York Times opinion section identified a common thread in the reports we've received, in which people of color are harassed with "Go back to your country." This type of harassment affects immigrants and U.S. citizens alike, reporters found.

Several stories published by our partners focused on racial harassment on public transportation, using tips to illustrate something officials were also seeing. The New York City Commission on Human Rights observed a 480 percent increase in claims of discriminatory harassment between 2015 and 2016, according to The New York Times Opinion section. The Massachusetts Bay Transportation Authority recorded 24 cases of offensive graffiti through April, compared to 35 in all of last year, the Boston Globe found. Univision covered multiple incidents involving Latinos targeted in incidents on the New York City subway.

Combing through our database, Buzzfeed discovered there were dozens of reported incidents in K-12 schools in which students cited President Donald Trump's name or slogans to harass minority classmates. This echoed a pattern Univision had reported on: In November, the Teaching Tolerance project at the Southern Poverty Law Center received more than 10,000 responses to an educator survey indicating an uptick in anti-Semitic, anti-Muslim and anti-immigrant activity in schools.

Our local partners reported on how hate incidents affect communities across the country: anti-Semitic graffiti in Phoenix, Islamophobia in Minneapolis, racist vandalism and homophobic threats in Seattle, white supremacist activity at a California university, racist harassment and vandalism in Boston, racism in the workplace in New Orleans, and hate incidents throughout Florida.

There are a few questions for which answers continue to elude us: How many hate crimes happen each year, and why is the record keeping so inadequate?

The FBI, which is required to track hate crimes, counts between 5,000 and 6,000 of them annually. But the Bureau of Justice Statistics estimates the total is closer to 250,000. One explanation for the gap is that many victims -- more than half, according to a recent estimate -- don't report what happened to them to police.

Even if they do, law enforcement agencies aren't all required to report to the FBI, meaning their reports might never make it into the national tally. The federal government is hardly a model of best practices; many federal agencies don't report their data, either -- even though they're legally required to do so.

We'll spend the next six months continuing to tackle these questions and more. And we and our partners will keep working our way through the tips in our database, telling people's stories and doing our best to understand what's happening.

There are ways that you can help us move the project forward.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.