138 posts categorized "Reports & Studies"

Federal Reserve Released Its Non-cash Payments Fraud Report. Have Chip Cards Helped?

Many consumers prefer to pay for products and services using methods other than cash. How secure are these non-cash payment methods? The Federal Reserve Board (FRB) analyzed the payments landscape within the United States. Its October 2018 report found good and bad news. The good news: non-cash payments fraud remains a small fraction of total payment value. The bad news:

  • Overall, non-cash payments fraud is growing, and
  • Card payments fraud drove that growth.

Non-Cash Payment Activity And Fraud

Payment Type | 2012 | 2015 | Increase (Decrease)
Card payments & ATM withdrawal fraud | $4 billion | $6.5 billion | 62.5 percent
Check fraud | $1.1 billion | $710 million | (35) percent
Non-cash payments fraud | $6.1 billion | $8.3 billion | 37 percent
Total non-cash payments | $161.2 trillion | $180.3 trillion | 12 percent
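The percent changes in the table are easy to verify. Below is a minimal Python sketch of that arithmetic, using the rounded dollar figures reported above; the exact outputs differ slightly from the report's rounded percentages.

```python
# Percent-change arithmetic behind the table above, using the FRB report's rounded figures.
figures_2012_vs_2015 = {
    "Card payments & ATM withdrawal fraud": (4.0e9, 6.5e9),
    "Check fraud": (1.1e9, 0.71e9),
    "Non-cash payments fraud": (6.1e9, 8.3e9),
    "Total non-cash payments": (161.2e12, 180.3e12),
}

for category, (value_2012, value_2015) in figures_2012_vs_2015.items():
    change = (value_2015 - value_2012) / value_2012 * 100  # negative values are decreases
    print(f"{category}: {change:+.1f}%")
```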

The FRB report included:

"... fraud totals and rates for payments processed over general-purpose credit and debit card networks, including non-prepaid and prepaid debit card networks, the automated clearinghouse (ACH) transfer system, and the check clearing system. These payment systems form the core of the noncash payment and settlement systems used to clear and settle everyday payments made by consumers and businesses in the United States. The fraud data were collected as part of Federal Reserve surveys of depository institutions in 2012 and 2015 and payment card networks in 2015 and 2016. The types of fraudulent payments covered in the study are those made by an unauthorized third party."

Data from the card network survey included general-purpose credit and debit (non-prepaid and prepaid) card payments, but did not include ATM withdrawals. The card networks include Visa, MasterCard, Discover and others. Additional findings:

"... the rate of card fraud, by value, was nearly flat from 2015 to 2016, with the rate of in-person card fraud decreasing notably and the rate of remote card fraud increasing significantly..."

The industry defines several categories of card fraud:

  1. "Counterfeit card. Fraud is perpetrated using an altered or cloned card;
  2. Lost or stolen card. Fraud is undertaken using a legitimate card, but without the cardholder’s consent;
  3. Card issued but not received. A newly issued card sent to a cardholder is intercepted and used to commit fraud;
  4. Fraudulent application. A new card is issued based on a fake identity or on someone else’s identity;
  5. Fraudulent use of account number. Fraud is perpetrated without using a physical card. This type of fraud is typically remote, with the card number being provided through an online web form or a mailed paper form, or given orally over the telephone; and
  6. Other. Fraud including fraud from account take-over and any other types of fraud not covered above."

Card Fraud By Category

Fraud Category | 2015 | 2016 | Increase (Decrease)
Fraudulent use of account number | $2.88 billion | $3.46 billion | 20 percent
Counterfeit card fraud | $3.05 billion | $2.62 billion | (14) percent
Lost or stolen card fraud | $730 million | $810 million | 11 percent
Fraudulent application | $210 million | $360 million | 71 percent

The sharp increase in fraudulent applications suggests that criminals find it easy to intercept pre-screened credit and card offers sent via postal mail. It is easy for consumers to opt out of pre-screened credit and card offers, and to register phone numbers with the National Do Not Call Registry. Do both today if you haven't.

The report also covered EMV chip cards, which were introduced to stop counterfeit card fraud. Card networks distributed chip cards to consumers and chip-reader terminals to retailers, and the industry set an October 1, 2015 liability-shift deadline for the switch to chip cards. The FRB report charted the shift:

EMV Chip card fraud and payments. Federal Reserve Board. October 2018

The FRB concluded:

"Card systems brought EMV processing online, and a liability shift, beginning in October 2015, created an incentive for merchants to accept chip cards. By value, the share of non-fraudulent in-person payments made with [chip cards] shifted dramatically between 2015 and 2016, with chip-authenticated payments increasing from 3.2 percent to 26.4 percent. The share of fraudulent in-person payments made with [chip cards] also increased from 4.1 percent in 2015 to 22.8 percent in 2016. As [chip cards] are more secure, this growth in the share of fraudulent in-person chip payments may seem counter-intuitive; however, it reflects the overall increase in use. Note that in 2015, the share of fraudulent in-person payments with [chip cards] (4.1 percent) was greater than the share of non-fraudulent in-person payments with [chip cards] (3.2 percent), a relationship that reversed in 2016."


When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020, which leaves little time to resolve outstanding issues. One such issue is the "trolley problem": a situation where a fatal vehicle crash cannot be avoided and the self-driving car must decide whether to save its passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new; related issues have led some experts to call for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must be programmed with moral preferences that dictate whom to save and whom to sacrifice.

The answer is tricky. You might assume: always save the driver, since nobody would buy a self-driving car that would kill its owner. But what if the pedestrian is crossing against a 'do not cross' signal in a crosswalk? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elderly, or pregnant? Or a doctor? Does it matter if the passenger is older than the pedestrians?

To understand what the public wants -- and expects -- in self-driving cars, also known as autonomous vehicles (AVs), researchers from MIT asked consumers in a massive online global survey. The survey gathered responses from 2 million people across 233 countries and territories, and presented 13 accident scenarios with nine varying factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."

Besides recording the accident choices, the researchers also collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants, in order to identify clusters: groups, areas, countries, territories, or regions containing people with similar "moral preferences."

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."

This is huge. It makes one question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in clusters may resent an AV maker forcing its moral preferences upon them. Some clusters or countries may demand vehicles matching their moral preferences.

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

Several important limitations to remember. And there are more. The study didn't address self-driving trucks. Should an AV tractor-trailer semi -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other vehicles are motorcycles, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the pleasure craft matter? What if the other craft is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? What happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. AV makers (of autos, ships, trucks, and more) need to state explicitly what their vehicles will (and won't) do, not hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries or regions have varying preferences about online privacy, information sharing, and corporate surveillance. What are your opinions?


Survey: Most Home Users Satisfied With Voice-Controlled Assistants. Tech Adoption Barriers Exist

Recent survey results reported by MediaPost:

"Amazon Alexa and Google Assistant have the highest satisfaction levels among mobile users, each with an 85% satisfaction rating, followed by Siri and Bixby at 78% and Microsoft’s Cortana at 77%... As found in other studies, virtual assistants are being used for a range of things, including looking up things on the internet (51%), listening to music (48%), getting weather information (46%) and setting a timer (35%)... Smart speaker usage varies, with 31% of Amazon device owners using their speaker at least a few times a week, Google Home owners 25% and Apple HomePod 18%."

Additional survey results are available at Digital Trends and Experian. PwC found:

"Only 10% of surveyed respondents were not familiar with voice-enabled products and devices. Of the 90% who were, the majority have used a voice assistant (72%). Adoption is being driven by younger consumers, households with children, and households with an income of >$100k... Despite being accessible everywhere, three out of every four consumers (74%) are using their mobile voice assistants at home..."

Consumers seem to want privacy when using voice assistants, so usage tends to occur at home and not in public places. Also:

"... the bulk of consumers have yet to graduate to more advanced activities like shopping or controlling other smart devices in the home... 50% of respondents have made a purchase using their voice assistant, and an additional 25% would consider doing so in the future. The majority of items purchased are small and quick.. Usage will continue to increase but consistency must improve for wider adoption... Some consumers see voice assistants as a privacy risk... When forced to choose, 57% of consumers said they would rather watch an ad in the middle of a TV show than listen to an ad spoken by their voice assistant..."

Consumers want control over the presentation of advertisements by voice assistants. Desired control options include the ability to skip ads, select them, block them while listening to music, allow them only at pre-approved times, customize them based upon interests, have them integrated seamlessly, and match them to preferred brands. Thirty-eight percent of survey respondents said that they "don't want something 'listening in' on my life all the time."

What are your preferences with voice assistants? Any privacy concerns?


NPR Podcast: 'The Weaponization Of Social Media'

Any technology can be used for good, or for bad. Social media is no exception. A recent data breach study in Australia listed the vulnerabilities of social media. A 2016 study found social media "attractive to vulnerable narcissists."

How have social media sites and mobile apps been used as weapons? The podcast below features an interview with P.W. Singer and Emerson Brooking, authors of a new book, "LikeWar: The Weaponization of Social Media." The authors cite real-world examples of how social media sites and mobile apps have been used during conflicts and demonstrations around the globe -- and continue to be used.

A Kirkus book review stated:

"... Singer and Brooking sagely note the intensity of interpersonal squabbling online as a moral equivalent of actual combat, and they also discuss how "humans as a species are uniquely ill-equipped to handle both the instantaneity and the immensity of information that defines the social media age." The United States seems especially ill-suited, since in the Wild West of the internet, our libertarian tendencies have led us to resist what other nations have put in place, including public notices when external disinformation campaigns are uncovered and “legal action to limit the effect of poisonous super-spreaders.” Information literacy, by this account, becomes a “national security imperative,” one in which the U.S. is badly lagging..."

The new book "LikeWar" is available at several online bookstores, including Barnes and Noble, Powell's, and Amazon. Now, watch the podcast:


Study: Most Consumers Fear Companies Will 'Go Too Far' With Artificial Intelligence Technologies

New research has found that consumers are conflicted about artificial intelligence (AI) technologies. A national study of 697 adults during the Spring of 2018 by Elicit Insights found:

"Most consumers are conflicted about AI. They know there are benefits, but recognize the risks, too"

Several specific findings:

  • 73 percent of survey participants agreed (answering "Strongly Agree" or "Agree") that "some companies will go too far with AI"
  • 64 percent agreed ("Strongly Agree" or "Agree") with the statement: "I'm concerned about how companies will use artificial intelligence and the information they have about me to engage with me"
  • "Six out of 10 Americans agree or strongly agree that AI will never be as good as human interaction. Human interaction remains sacred and there is concern with at least a third of consumers that AI won’t stay focused on mundane tasks and leave the real thinking to humans."

Many of the concerns center on control. As AI applications become smarter and more powerful, they can operate independently, without their users' authorization. When presented with several smart-refrigerator scenarios, the less control users had over purchases, the fewer survey participants viewed the AI as a benefit:

Smart refrigerator and food purchase scenarios. AI study by Elicit Insights.

AI technologies can also be used to find and present possible matches for online dating services. Again, survey participants expressed similar control concerns:

Dating service scenarios. AI study by Elicit Insights.

Download Elicit Insights' complete Artificial Intelligence survey (Adobe PDF). What are your opinions? Do you prefer AI applications that operate independently, or which require your authorization?


Study: Performance Issues Impede IoT Device Trust And Usage Worldwide By Consumers

A recent global survey uncovered interesting findings about consumers' usage of, and satisfaction with, IoT (Internet of Things) devices. The survey of consumers in several countries found that 52 percent already use IoT devices, and that 64 percent of users have already encountered performance issues with their devices.

Dynatrace, a software intelligence company, commissioned Opinium Research to conduct a global survey of 10,002 participants, with 2,000 in the United States, 2,000 in the United Kingdom, and 1,000 respondents each in France, Germany, Australia, Brazil, Singapore, and China. Dynatrace announced several findings, chiefly:

"On average, consumers experience 1.5 digital performance problems every day, and 62% of people fear the number of problems they encounter, and the frequency, will increase due to the rise of IoT."

That seems like plenty of poor performance. Some findings were specific to travel, healthcare, and in-home retail sectors. Regarding travel:

"The digital performance failures consumers are already experiencing with everyday technology is potentially making them wary of other uses of IoT. 85% of respondents said they are concerned that self-driving cars will malfunction... 72% feel it is likely software glitches in self-driving cars will cause serious injuries and fatalities... 84% of consumers said they wouldn’t use self-driving cars due to a fear of software glitches..."

Regarding healthcare:

"... 62% of consumers stated they would not trust IoT devices to administer medication; this sentiment is strongest in the 55+ age range, with 74% expressing distrust. There were also specific concerns about the use of IoT devices to monitor vital signs, such as heart rate and blood pressure. 85% of consumers expressed concern that performance problems with these types of IoT devices could compromise clinical data..."

Regarding in-home retail devices:

"... 83% of consumers are concerned about losing control of their smart home due to digital performance problems... 73% of consumers fear being locked in or out of the smart home due to bugs in smart home technology... 68% of consumers are worried they won’t be able to control the temperature in the smart home due to malfunctions in smart home technology... 81% of consumers are concerned that technology or software problems with smart meters will lead to them being overcharged for gas, electricity, and water."

The findings are a clear call to IoT makers to improve the performance, security, and reliability of their internet-connected devices. To learn more, download the full Dynatrace report titled, "IoT Consumer Confidence Report: Challenges for Enterprise Cloud Monitoring on the Horizon."


Survey: Complexities And Consumer Fears With Checking Credit Reports For Errors

Many consumers know that they should check their credit reports yearly for errors, but most don't. A recent survey found much complexity and fears surrounding credit reports. WalletHub surveyed 500 adults in the United States during July, and found:

  • 84 percent of survey respondents know that they should check their credit reports at least once each year
  • Only 41 percent of respondents said they check their credit reports
  • 27 percent said they don't have the time to check their credit reports
  • 14 percent said they are afraid to see the contents of their credit reports

WalletHub found that women were twice as likely as men to have the above fear. Millennials were five times as likely as Baby Boomers to have this fear. More findings are listed below.

It is important for consumers to understand the industry. An inaccurate credit report can lower your credit score, the number used to indicate your creditworthiness. A low credit score can cost you money: denied credit applications, or approved loans with higher interest rates. Errors in credit reports can include another person's data commingled with yours (which obviously should never happen), a dead person's data commingled with yours, or a report that doesn't accurately reflect a loan you truly paid off on time and in full.

A 2013 study by the U.S. Federal Trade Commission (FTC) found problems with credit report accuracy: 26 percent of participants identified errors in their credit reports, meaning roughly one in four consumers was affected. Of the 572 credit reports where errors were identified, 399 (70%) were modified by a credit reporting agency and 211 (36%) resulted in a credit score change. So finding and reporting errors benefits consumers. Also in 2013, the 60 Minutes television news magazine documented problems with the dispute process: failures by the three largest credit reporting agencies to correct errors that consumers reported on their credit reports.

There are national and regional credit reporting agencies. The three national credit reporting agencies are Experian, Equifax, and TransUnion. Equifax also operates a secondary consumer reporting agency focused solely upon the telecommunications industry and broadband internet services.

Credit reporting agencies get their data from a variety of sources, including data brokers, so their business model is built on data sharing. Just about anyone can set up and operate a credit reporting agency; no special skills or expertise are required. Credit reporting agencies make money by selling credit reports to lenders, and those reports often contain errors. For better or worse regarding security, credit reporting agencies have historically outsourced work, sometimes internationally.

The industry and its executives have an arguably lackadaisical approach to data security. A massive data breach at Equifax in 2017 affected about 143 million persons. An independent investigation of that breach found a lengthy list of data security flaws and failures at Equifax. To compound matters, the Internal Revenue Service gave Equifax a no-bid contract in 2017.

The industry has a spotty history. In 2007, Equifax paid a $2.7 million fine for violating federal credit laws. In 2009, it paid a $65,000 fine to the state of Indiana for violating the state's security freeze law. In 2012, Equifax and some of its customers paid $1.6 million to settle allegations of improper list sales. A data breach at Experian in 2015 affected 15 million wireless carrier customers. In 2017, Equifax and TransUnion paid $23.1 million to settle allegations of deceptive advertising about credit scores.

See the graphic below for more findings from the WalletHub survey.

2018 Credit Report Complexity Survey by WalletHub.


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, Amazon's facial recognition software, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes aside about politicians, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

List of 28 Congressional legislators misidentified by Amazon Rekognition in the ACLU study.

With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeff Bezos, the Chief Executive Officer of Amazon. The letter read in part:

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.1 These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon performed on Rekognition, with details by race, gender, and age;
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition;
  • The list of all law enforcement agencies which have used or currently use Rekognition;
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices;
  • The protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons;
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA);
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses; and
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. " Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

It was good of Amazon to respond quickly, but its reply is still insufficient and troubling. Amazon may recommend a 95 percent similarity threshold, but the public does not know whether police departments actually use that higher setting, or use it consistently across all types of criminal investigations. Plus, the CPT report cast doubt on the human "backstop" intervention that Amazon's reply seems to rely upon heavily.
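To make the threshold dispute concrete, here is a minimal, hypothetical sketch using the CompareFaces operation in AWS's boto3 SDK, the same kind of call anyone can make against Rekognition. The image file names are placeholders, and this only illustrates how the similarity-threshold setting works; it is not a reconstruction of the ACLU's test harness or of any police department's workflow.

```python
# Hypothetical sketch: how Rekognition's similarity threshold filters face matches.
# Image file names are placeholders; this is not the ACLU's actual test.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def face_matches(source_path, target_path, threshold):
    """Return the similarity scores of matches at or above the given threshold."""
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,
        )
    return [match["Similarity"] for match in response["FaceMatches"]]

# The default threshold (80) is what the ACLU reportedly used; Amazon says it
# recommends 95 for law enforcement. A weak match can clear the first bar but not the second.
print(face_matches("legislator.jpg", "mugshot.jpg", threshold=80))
print(face_matches("legislator.jpg", "mugshot.jpg", threshold=95))
```

The point of contention is that nothing in the service forces a law enforcement customer to raise the threshold from the default to 95; it is simply a parameter the caller chooses.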

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and about what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships and trucks, or about preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important take-away. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's in a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


Researchers Find Mobile Apps Can Easily Record Screenshots And Videos of Users' Activities

New academic research highlights how easy it is for mobile apps to both spy upon consumers and violate our privacy. During a recent study to determine whether or not smartphones record users' conversations, researchers at Northeastern University (NU) found:

"... that some companies were sending screenshots and videos of user phone activities to third parties. Although these privacy breaches appeared to be benign, they emphasized how easily a phone’s privacy window could be exploited for profit."

The NU researchers tested 17,260 of the most popular mobile apps running on smartphones using the Android operating system. About 9,000 of the 17,260 apps had the ability to take screenshots. The vulnerability: screenshot and video captures could easily be used to record users' keystrokes, passwords, and related sensitive information:

"This opening will almost certainly be used for malicious purposes," said Christo Wilson, another computer science professor on the research team. "It’s simple to install and collect this information. And what’s most disturbing is that this occurs with no notification to or permission by users."

The NU researchers found one app already recording video of users' screen activity (links added):

"That app was GoPuff, a fast-food delivery service, which sent the screenshots to Appsee, a data analytics firm for mobile devices. All this was done without the awareness of app users. [The researchers] emphasized that neither company appeared to have any nefarious intent. They said that web developers commonly use this type of information to debug their apps... GoPuff has changed its terms of service agreement to alert users that the company may take screenshots of their use patterns. Google issued a statement emphasizing that its policy requires developers to disclose to users how their information will be collected."

May? A brief review of the Appsee site seems to confirm that video recordings of the screens on app users' mobile devices are integral to the service:

"RECORDING: Watch every user action and understand exactly how they use your app, which problems they're experiencing, and how to fix them.​ See the app through your users' eyes to pinpoint usability, UX and performance issues... TOUCH HEAT MAPS: View aggregated touch heatmaps of all the gestures performed in each​ ​screen in your app.​ Discover user navigation and interaction preferences... REALTIME ANALYTICS & ALERTS:Get insightful analytics on user behavior without pre-defining any events. Obtain single-user and aggregate insights in real-time..."

Sounds like a version of "surveillance capitalism" to me. According to the Appsee site, a variety of companies use the service, including eBay, Samsung, Virgin airlines, The Weather Network, and several advertising networks. Plus, the Appsee Privacy Policy dated May 23, 2018 stated:

"The Appsee SDK allows Subscribers to record session replays of their end-users' use of Subscribers' mobile applications ("End User Data") and to upload such End User Data to Appsee’s secured cloud servers."

In this scenario, GoPuff is a subscriber and consumers using the GoPuff mobile app are end users. The Appsee SDK is software code embedded within the GoPuff mobile app. The researchers said that this vulnerability, "will not be closed until the phone companies redesign their operating systems..."

Data-analytics services like Appsee raise several issues. First, there seems to be little need for digital agencies to conduct traditional eye-tracking and usability test sessions, since companies can now record, upload, and archive what, when, where, and how often users swipe and select in-app content. Previously, users were invited to such testing sessions and paid for their participation.

Second, this in-app tracking and data collection amounts to perpetual, unannounced user testing. Companies have gotten into plenty of trouble with their customers by performing secret user testing, especially when the service varies from the standard, expected configuration and the policies (e.g., privacy, terms of service) don't disclose it. Nobody wants to be a lab rat or crash-test dummy.

Third, surveillance agencies within several governments must be thrilled to learn of these new in-app tracking and spy tools, if they aren't already using them. A reasonable assumption is that Appsee also provides data to law enforcement upon demand.

Fourth, two of the researchers at NU are undergraduate students. Another startling disclosure:

"Coming into this project, I didn’t think much about phone privacy and neither did my friends," said Elleen Pan, who is the first author on the paper. "This has definitely sparked my interest in research, and I will consider going back to graduate school."

Given the tsunami of data breaches, privacy legislation in Europe, and demands by law enforcement for tech firms to build "back door" hacks into their mobile devices and smartphones, it is alarming that some college students "don't think much about phone privacy." This means that Pan and her classmates probably haven't read the privacy and terms-of-service policies for the apps and sites they've used. Maybe they will now.

Let's hope so.

Consumers interested in GoPuff should closely read the service's privacy and Terms of Service policies, since the latter includes dispute resolution via binding arbitration and prevents class-action lawsuits.

Hopefully, future studies about privacy and mobile apps will explore further the findings by Pan and her co-researchers. Download the study titled, "Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications" (Adobe PDF) by Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes.


Report: Software Failure In Fatal Accident With Self-Driving Uber Car

TechCrunch reported:

"The cause of the fatal crash of an Uber self-driving car appears to have been at the software level, specifically a function that determines which objects to ignore and which to attend to, The Information reported. This puts the fault squarely on Uber’s doorstep, though there was never much reason to think it belonged anywhere else.

Given the multiplicity of vision systems and backups on board any given autonomous vehicle, it seemed impossible that any one of them failing could have prevented the car’s systems from perceiving Elaine Herzberg, who was crossing the street directly in front of the lidar and front-facing cameras. Yet the car didn’t even touch the brakes or sound an alarm. Combined with an inattentive safety driver, this failure resulted in Herzberg’s death."

The TechCrunch story provides details about which software subsystem the report said failed.

Not good.

So autonomous, or self-driving, cars are only as good as the software they're programmed with (and how well that software is maintained). Anyone who has used computers during the last couple of decades has probably experienced software glitches, bugs, and failures. It happens.

This latest incident suggests self-driving cars aren't yet ready. What do you think?


How the Crowd Led ProPublica to Investigate IBM

[Editor's note: today's guest post, by the reporters at ProPublica, discusses employment practices at a major corporation in the United States. The investigation is as interesting as the "Cutting 'Old Heads' At IBM" report. This also caught my attention because a data breach at IBM in 2007 led to the creation of this blog. Today's article is reprinted with permission.]

By Ariana Tobin and Peter Gosselin, ProPublica

On March 22, we reported that over the past five years IBM has been removing older U.S. employees from their jobs, replacing some with younger, less experienced, lower-paid American workers and moving many other jobs overseas.

We’ve got documentation and details — most of which are the direct result of a questionnaire filled out by over 1,100 former IBMers.

We’ve gone to the company with our findings. IBM did not answer the specific questions we sent. Spokesman Edward Barbini said: “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years.”

We don’t know the exact size of the problem. Our questionnaire isn’t a scientific sample, nor did all the participants tell us they experienced age discrimination. But the hundreds of similar stories show a pattern of older employees being pushed out even when the company itself says they were doing a good job.

This project wasn’t inspired by a high-level leak or an errant line in secret documents. It came to us through reader engagement. Our investigation took us beyond some of our usual reporting techniques. We’d like to elaborate on this because:

  • We know readers will wonder how we sourced some pretty serious claims.
  • Many ex-employees trusted us with their stories and spent many hours in conversation with us. We think it’s good practice to let them know how we’ve used their information.
  • This is probably the first time we've been pointed to a big project by a community of people we found through digital outreach. We hope that by sharing our experiences, we can help others build on our work.

IBMers found us

This project started as a conversation between the two of us, both reporters at ProPublica. Peter had taken on the age discrimination beat for reasons both personal and professional. Ariana was newly minted into a job called “engagement reporter.”

Ariana suggested that Peter write up a short essay on his own experiences of being laid off at 63 and searching for a job in the aftermath. We attached a short questionnaire to the bottom and headlined it: "Over 50 and looking for a job? We'd like to hear from you."

Dozens of people responded within the first couple of weeks. As we looked through this first round of questionnaires, we noticed a theme: a whole lot of information and technology workers told us they were struggling to stay employed. And those who had lost their jobs? They were having a really hard time finding new work.

Of those IT workers, several mentioned IBM right off the bat. One woman wrote that she and her coworkers were working together to find new jobs in order to “ward off the dreaded old person layoff from IBM.”

Another wrote: “I can probably help you get a lot more stories, contact me if you want to discuss this possibility.”

Another wrote: “Part of the separation agreement was that I not seek collective action against IBM for age discrimination. I was not going to sign as a law firm was planning to file a grievance. However they needed 10 people to agree and they could not get the numbers.”

… and then they connected us with more IBMers

We started making some calls. One of the first people we talked to was Brian Paulson, a 57-year-old senior manager with 18 years at IBM, who was fired for “performance reasons” that the company refused to explain. He was still job-hunting two years later.

Another ex-IBM employee told us that she had seen examples of older workers laid off from many parts of the company on a public Facebook page called WatchingIBM. Ariana spent a day looking through the posts, which were, as promised, crawling with stories, questions, and calls for support from workers of all kinds, as shown in the accompanying screenshot.

We decided to reach out to the page’s administrator, who was a longtime IBM workplace activist, Lee Conrad. He shared our age discrimination questionnaire in the group and more responses poured in.

With dozens of interviews already on the books, we decided to launch a second, more specific questionnaire — this time about IBM

We realized that we had been pointed toward an angry, sad and motivated group. The older ex-IBM workers we called were trying to figure out whether their own layoffs were unique or part of a larger trend. And if they were part of a larger trend... how many people were affected?

A major frustration we saw in comment after comment: These workers couldn’t get information on how many others had been forced out with them.

This was an information gap that immediately struck Peter, because that information is exactly what the law requires employers to disclose at the time of a layoff.

On top of that, many of these sources mentioned having been forced to sign agreements that kept them from going to court or even talking about what had happened to them. They were scared to do anything in violation of those agreements, a fear that kept them from finding out the answers to some big open questions: Why would IBM have stopped releasing the ages and positions of those let go, as they had done before 2014 to comply with federal law? How many workers out there believed they had been “retired” against their will? What did managers really tell their subordinates when the time came to let them go? Who was left to do all of their work?

So we wrote up another questionnaire asking those specific questions.

We learned from the responses, and also the response rate

We contacted people on listservs, found them on open petitions, joined closed LinkedIn networks, and followed each posting on ex-IBM groups. We tweeted the questionnaire out on days that IBM reported its earnings, including the company’s ticker symbol. We talked to trade magazines and IBM historians and organizers who still work at IBM. We bought ads on Facebook and aimed them toward cities and towns where we knew IBM had been cutting its workforce.

As the responses came in, we tried to figure out where most of them were coming from. To identify any meaningful trends, we needed to know who was answering, what was working, and why. We also realized that we needed to introduce ourselves in order to persuade anyone it was worth participating.

When something worked, we’d double down:

We know what worked best: when people filled out the questionnaire, they'd also share their contact information with us. So we asked them to forward the questionnaire around within their own networks.

And we got more leads

We read through all of the responses and identified themes: 183 respondents said the company recorded them as having retired by choice even though they had no desire to retire or flat-out objected to the idea. Forty-five people were told they’d have to uproot their lives and move sometimes thousands of miles from the communities where they had worked for years, or else resign. Fifty-three said their jobs had been moved overseas. Some were happy they’d left. Some were company luminaries, given top ratings throughout their career. Some were still fighting over benefits and health care. Some were worried about finding work ever again.

Inevitably, this categorization process led us to identify new patterns as we went along, and as new responses accumulated. For each new pattern, we would go back and see how many earlier respondents fit it.

One of the first and most interesting of these categories was people who had received emails congratulating them on their retirement at the same time they were informed of their layoff. We realized there would be power in numbers there, so we set up a SecureDrop for people who were willing to send us their paperwork.

Eventually, we also created a category called “legal action.” We’d stumbled upon support groups of ex-IBM employees who had filed formal complaints with the Equal Employment Opportunity Commission. Some sent us the company’s responses to their individual complaints, giving us insight into the way the company responded to allegations of discrimination. These seemed, of course, very useful.

In other words: we sent some rather complicated mass emails and were surprised over and over again by the specificity of the responses.

IBM undoubtedly has information that would shed light on the documents, its layoff practices or the overall extent and nature of its job cuts. The company chose not to respond to our questions about those issues.

So we tried to answer ex-IBMers’ questions ourselves, including one of the most basic: How many employees ages 40 and over were let go or left in recent years?

IBM won’t say. In fact, over the years, the company has stopped releasing almost all information about its U.S. workforce. In 2009, it stopped publishing its American employment total. In 2014, it stopped disclosing the numbers and ages of older employees it was laying off, a requirement of the nation’s basic anti-age bias law, the Age Discrimination in Employment Act (ADEA).

So we’ve sought to estimate the number, relying on one of the few remaining bits of company-provided information — a technique developed by a veteran financial analyst who follows IBM for investors — as well as patterns we spotted in internal company documents.

We began with a line in the company’s quarterly and annual filings with the U.S. Securities and Exchange Commission for “workforce rebalancing,” a company term for layoffs, firings and other non-retirement departures. It’s a gauge of what IBM spends to let people go. In the past five years, workforce rebalancing charges have totaled $4.3 billion.

The technique was used by veteran IBM analyst Toni Sacconaghi of Bernstein Research. Sacconaghi is a respected Wall Street analyst who has been named to Institutional Investor’s All-America Research Team every year since 2001. His technique and layoff estimates have been widely cited by news organizations including The Wall Street Journal and Fortune.

Some years ago, Sacconaghi estimated that IBM’s average per-employee cost for laying off a worker was $70,000.

Dividing $4.3 billion by $70,000 suggests that during the past five years IBM’s worldwide job cuts totaled about 62,000. If anything, that number is low, given IBM executives’ comments at a recent investor conference. Internal company documents we reviewed suggest that 50 to 60 percent of cuts were made in the U.S., with older workers representing roughly 60 percent of those. That translates to about 20,000 older American workers let go.
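For readers who want to check the arithmetic, here is a minimal sketch of that back-of-the-envelope estimate. The dollar figures and percentages are the ones cited above; the 55 percent U.S. share is simply the midpoint of the 50 to 60 percent range, used for illustration only.

```python
# Back-of-the-envelope sketch of the layoff estimate described above.
# Figures are the ones cited in this story; the 55 percent U.S. share is an
# assumed midpoint of the 50-60 percent range, used only for illustration.

rebalancing_charges = 4.3e9      # five years of "workforce rebalancing" charges
cost_per_employee = 70_000       # Sacconaghi's estimated per-employee layoff cost

worldwide_cuts = rebalancing_charges / cost_per_employee   # roughly 62,000
us_share = 0.55                  # assumed midpoint of the 50-60 percent range
older_share = 0.60               # share of U.S. cuts who were older workers

older_us_cuts = worldwide_cuts * us_share * older_share
print(f"Estimated worldwide cuts: ~{worldwide_cuts:,.0f}")
print(f"Estimated older U.S. workers let go: ~{older_us_cuts:,.0f}")  # roughly 20,000
```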

Our analysis suggests the total of U.S. layoffs is almost certainly higher.

First, as Sacconaghi said in a recent interview, IBM’s per-employee rebalancing costs are likely much lower now because, starting in 2016, the company reduced severance payments to departing employees from six months to just 30 days. That means IBM can lay off or fire more people for the same or lower overall costs.

Second, because, as those ex-IBMers told us, the company often converted their layoffs into retirements, the workforce rebalancing numbers don’t tell the whole story.

Right below the line for “workforce rebalancing” in its SEC filings, IBM adds another line for “retirement-related costs,” which reflects how much the company spends each year retiring people out. Some — perhaps a substantial amount of that — went to retirements that were less than fully voluntary. This could add up to thousands more people.

By coming up with answers and investigating in the open, we’ve gotten more sources

Many of the conversations we’ve had during our reporting didn’t make it into the final story. People allowed us to review internal company documents. They let us see long email exchanges with their managers. They dug back through closets and garages to find memos they had saved out of frustration or fatigue or just plain anger.

We can’t go into detail about all of the ways the community helped us report out this story, because we also promised many of our sources that we would protect their confidentiality. The beauty is that they talked to us anyway. They knew where to find us, because our contact information had been spread far and wide.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Airlines Want To Extend 'Dynamic Pricing' Capabilities To Set Ticket Prices By Each Person

In the near future, what you post on social media sites (e.g., Facebook, Instagram, Pinterest, etc.) could affect the price you pay for airline tickets. How's that?

First, airlines already use what the travel industry calls "dynamic pricing" to vary prices by date, time of day, and season. We've all seen higher ticket prices during the holidays and peak travel times. The Telegraph UK reported that airlines want to extend dynamic pricing to set fares by person:

"... the advent of setting fares by the person, rather than the flight, are fast approaching. According to John McBride, director of product management for PROS, a software provider that works with airlines including Lufthansa, Emirates and Southwest, a number of operators have already introduced dynamic pricing on some ticket searches. "2018 will be a very phenomenal year in terms of traction," he told Travel Weekly..."

And, there was a preliminary industry study about how to do it:

" "The introduction of a Dynamic Pricing Engine will allow an airline to take a base published fare that has already been calculated based on journey characteristics and broad segmentation, and further adjust the fare after evaluating details about the travelers and current market conditions," explains a white paper on pricing written by the Airline Tariff Publishing Company (ATPCO), which counts British Airways, Delta and KLM among its 430 airline customers... An ATPCO working group met [in late February] to discuss dynamic pricing, but it is likely that any roll out to its customers would be incremental."

What does "incremental" mean? Experts say the first step would be to vary ticket prices in search results at the airline's site, or at an intermediary's site. There would be virtually no way for a traveler to know whether the personal price they see is higher (or lower) than the prices presented to others.

With dynamic pricing per person, business travelers would pay more. And an airline could automatically bundle several fees (e.g., priority boarding, luggage, meals, etc.) into the ticket prices of its loyalty program members, reducing transparency and making fair price comparison difficult. Of course, airlines would pitch this as a convenience, but alert consumers know that any convenience always has its price.
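To make the per-person idea concrete, here is a purely hypothetical sketch of how such a fare adjustment could work. Every attribute name, multiplier, and fee amount below is invented for illustration; none of it comes from PROS, ATPCO, or any airline.

```python
# Hypothetical illustration only. The traveler attributes, multipliers, and
# bundled-fee amounts are invented; no airline's actual pricing logic is shown.

def personalized_fare(base_fare: float, traveler: dict, load_factor: float) -> float:
    """Adjust a base published fare using traveler details and market conditions."""
    fare = base_fare
    if traveler.get("likely_business_trip"):      # e.g., inferred from booking patterns
        fare *= 1.15                              # business travelers tolerate higher fares
    if traveler.get("loyalty_member"):
        fare += 35.00                             # auto-bundled fees (bag, priority boarding)
    # Scarcity adjustment: raise the fare as the flight fills beyond 80 percent.
    fare *= 1.0 + 0.25 * max(0.0, load_factor - 0.80)
    return round(fare, 2)

# Example: a $220 base fare for a likely business traveler on a 92-percent-full flight.
print(personalized_fare(220.00, {"likely_business_trip": True, "loyalty_member": True}, 0.92))
```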

Thankfully, some politicians in the United States are paying attention. The Shear Social Media Law & Technology blog summarized the situation very well:

"[Dynamic pricing by person] demonstrates why technology companies and the data collection industry needs greater regulation to protect the personal privacy and free speech rights of Americans. Until Silicon Valley and data brokers are properly regulated Americans will continue to be discriminated against based upon the information that technology companies are collecting about us."

Just because something can be done with technology, doesn't mean it should be done. What do you think?


Report: Little Progress Since 2016 To Replace Old, Vulnerable Voting Machines In United States

We've known for some time that a sizeable portion of voting machines in the United States are vulnerable to hacking and errors. Too many states, cities, and towns use antiquated equipment, or equipment without paper backups. The latter makes meaningful recounts impossible.

Has any progress been made to fix the vulnerabilities? The Brennan Center For Justice (BCJ) reported:

"... despite manifold warnings about election hacking for the past two years, the country has made remarkably little progress since the 2016 election in replacing antiquated, vulnerable voting machines — and has done even less to ensure that our country can recover from a successful cyberattack against those machines."

It is important to remember this warning in January 2017 from the Director of National Intelligence (DNI):

"Russian effortsto influence the 2016 US presidential election represent the most recent expression of Moscow’s longstanding desire to undermine the US-led liberal democratic order, but these activities demonstrated a significant escalation in directness, level of activity, and scope of effort compared to previous operations. We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process... Russian intelligence accessed elements of multiple state or local electoral boards. Since early 2014, Russian intelligence has researched US electoral processes and related technology and equipment. DHS assesses that the types of systems we observed Russian actors targeting or compromising are not involved in vote tallying... We assess Moscow will apply lessons learned from its Putin-ordered campaign aimed at the US presidential election to future influence efforts worldwide, including against US allies and their election processes... "

Detailed findings in the BCJ report about the lack of progress:

  1. "This year, most states will use computerized voting machines that are at least 10 years old, and which election officials say must be replaced before 2020.
    While the lifespan of any electronic voting machine varies, systems over a decade old are far more likely to need to be replaced, for both security and reliability reasons... older machines are more likely to use outdated software like Windows 2000. Using obsolete software poses serious security risks: vendors may no longer write security patches for it; jurisdictions cannot replace critical hardware that is failing because it is incompatible with their new, more secure hardware... In 2016, jurisdictions in 44 states used voting machines that were at least a decade old. Election officials in 31 of those states said they needed to replace that equipment by 2020... This year, 41 states will be using systems that are at least a decade old, and officials in 33 say they must replace their machines by 2020. In most cases, elections officials do not yet have adequate funds to do so..."
  2. "Since 2016, only one state has replaced its paperless electronic voting machines statewide.
    Security experts have long warned about the dangers of continuing to use paperless electronic voting machines. These machines do not produce a paper record that can be reviewed by the voter, and they do not allow election officials and the public to confirm electronic vote totals. Therefore, votes cast on them could be lost or changed without notice... In 2016, 14 states (Arkansas, Delaware, Georgia, Indiana, Kansas, Kentucky, Louisiana, Mississippi, New Jersey, Pennsylvania, South Carolina, Tennessee, Texas, and Virginia) used paperless electronic machines as the primary polling place equipment in at least some counties and towns. Five of these states used paperless machines statewide. By 2018 these numbers have barely changed: 13 states will still use paperless voting machines, and 5 will continue to use such systems statewide. Only Virginia decertified and replaced all of its paperless systems..."
  3. "Only three states mandate post-election audits to provide a high-level of confidence in the accuracy of the final vote tally.
    Paper records of votes have limited value against a cyberattack if they are not used to check the accuracy of the software-generated total to confirm that the veracity of election results. In the last few years, statisticians, cybersecurity professionals, and election experts have made substantial advances in developing techniques to use post-election audits of voter verified paper records to identify a computer error or fraud that could change the outcome of a contest... Specifically, “risk limiting audits” — a process that employs statistical models to consistently provide a high level of confidence in the accuracy of the final vote tally – are now considered the “gold standard” of post-election audits by experts... Despite this fact, risk limiting audits are required in only three states: Colorado, New Mexico, and Rhode Island. While 13 state legislatures are currently considering new post-election audit bills, since the 2016 election, only one — Rhode Island — has enacted a new risk limiting audit requirement."
  4. "43 states are using machines that are no longer manufactured.
    The problem of maintaining secure and reliable voting machines is particularly challenging in the many jurisdictions that use machines models that are no longer produced. In 2015... the Brennan Center estimated that 43 states and the District of Columbia were using machines that are no longer manufactured. In 2018, that number has not changed. A primary challenge of using machines no longer manufactured is finding replacement parts and the technicians who can repair them. These difficulties make systems less reliable and secure... In a recent interview with the Brennan Center, Neal Kelley, registrar of voters for Orange County, California, explained that after years of cannibalizing old machines and hoarding spare parts, he is now forced to take systems out of service when they fail..."
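For readers curious about the "risk limiting audits" mentioned in finding 3, here is a minimal sketch in the spirit of the BRAVO ballot-polling method. It is a simplified, two-candidate illustration only; real audits handle multiple contests, sampling logistics, and escalation rules omitted here.

```python
import random

def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05):
    """Simplified BRAVO-style risk-limiting audit for a two-candidate race.

    ballots: list of booleans, True if the paper ballot shows the reported winner.
    reported_winner_share: the winner's reported vote share (must exceed 0.5).
    Returns (confirmed, ballots_examined).
    """
    assert reported_winner_share > 0.5
    threshold = 1.0 / risk_limit                 # stop once evidence reaches 1/alpha
    statistic = 1.0
    order = random.sample(range(len(ballots)), len(ballots))  # random sampling order
    for count, i in enumerate(order, start=1):
        if ballots[i]:
            statistic *= reported_winner_share / 0.5           # ballot for winner: evidence up
        else:
            statistic *= (1.0 - reported_winner_share) / 0.5   # ballot for loser: evidence down
        if statistic >= threshold:
            return True, count                   # outcome confirmed at the stated risk limit
    return False, len(ballots)                   # not confirmed; escalate to a full hand count
```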

That is embarrassing for a country that prides itself on having an effective democracy. According to BCJ, the solution would be for Congress to fund, via grants, the replacement of paperless and antiquated equipment, plus fund post-election audits.

Rather than protect the integrity of our democracy, the government passed a massive tax cut that will increase federal deficits in the coming years, while pursuing both a costly military parade and an unfunded border wall. Seems like questionable priorities to me. What do you think?


2017 FTC Complaints Report: Debt Collection Tops The List. Older Consumers Better At Spotting Scams

Earlier this month, the U.S. Federal Trade Commission (FTC) released its annual report of complaints submitted by consumers in the United States. The report is helpful for understanding the most frequent types of scams and fraud consumers experienced.

The latest report, titled 2017 Consumer Sentinel Network Data Book, includes complaints from 2.68 million consumers, a decrease from 2.98 million in 2016. However, consumers reported losing a total of $905 million to fraud in 2017, which is $63 million more than in 2016. The most frequent complaints were about debt collection (23 percent), identity theft (14 percent), and imposter scams (13 percent). The top 20 complaint categories:

Rank Category # Of Reports % Of Reports
1 Debt Collection 608,535 22.74%
2 Identity Theft 371,061 13.87%
3 Imposter Scams 347,829 13.00%
4 Telephone & Mobile Services 149,578 5.59%
5 Banks & Lenders 149,316 5.58%
6 Prizes, Sweepstakes & Lotteries 142,870 5.34%
7 Shop-at-Home & Catalog Sales 126,387 4.72%
8 Credit Bureaus, Information Furnishers & Report Users 107,473 4.02%
9 Auto Related 86,289 3.23%
10 Television and Electronic Media 47,456 1.77%
11 Credit Cards 45,428 1.70%
12 Internet Services 45,093 1.69%
13 Foreign Money Offers & Counterfeit Check Scams 31,980 1.20%
14 Health Care 27,660 1.03%
15 Travel, Vacations & Timeshare Plans 22,264 0.83%
16 Business & Job Opportunities 19,082 0.71%
17 Advance Payments for Credit Services 17,762 0.66%
18 Investment Related 15,079 0.56%
19 Computer Equipment & Software 9,762 0.36%
20 Mortgage Foreclosure Relief & Debt Management 8,973 0.34%

While the median loss for all fraud reports in 2017 was $429, consumers reported larger losses in certain types of scams: travel, vacations and timeshare plans ($1,710); mortgage foreclosure relief and debt management ($1,200); and business/job opportunities ($1,063).

The telephone was the most frequently-reported method (70 percent) scammers used to contact consumers, and wire transfers were the most frequently-reported payment method for fraud ($333 million in losses reported). Also:

"The states with the highest per capita rates of fraud reports in 2017 were Florida, Georgia, Nevada, Delaware, and Michigan. For identity theft, the top states in 2017 were Michigan, Florida, California, Maryland, and Nevada."

What's new in this report is that it details financial losses by age group. The FTC report concluded:

"Consumers in their twenties reported losing money to fraud more often than those over age 70. For example, among people aged 20-29 who reported fraud, 40 percent indicated they lost money. In comparison, just 18 percent of those 70 and older who reported fraud indicated they lost any money. However, when these older adults did report losing money to a scammer, the median amount lost was greater. The median reported loss for people age 80 and older was $1,092 compared to $400 for those aged 20-29."

Detailed information supporting this conclusion:

Chart: 2017 FTC Consumer Sentinel complaints report, reports and losses by age group.

Chart: 2017 FTC Consumer Sentinel complaints report, median losses by age group.

The second chart is key. More than twice as many younger consumers (40 percent of those ages 20-29) reported losing money to fraud, compared with 18 percent of consumers ages 70 and older. At the same time, the older consumers who did lose money lost more of it. So, older consumers were better at spotting scams and fewer fell victim, though their losses were larger when they did. It seems both groups could learn from each other.

CBS News interviewed a millennial who fell victim to a mystery-shopper scam, which seemed to be a slick version of the old check scam. It seems wise for all consumers, regardless of age, to maintain awareness of current scam types. Pick a news source or blog you trust. Hopefully, this blog.

Below is a graphic summarizing the 2017 FTC report:

Graphic: summary of the 2017 FTC Consumer Sentinel complaints report.


Security Experts: Artificial Intelligence Is Ripe For Misuse By Bad Actors

Over the years, bad actors (e.g., criminals, terrorists, rogue states, ethically-challenged business executives) have used a variety of online technologies to remotely hack computers, track users online without consent or notice, and circumvent the privacy settings consumers apply to their internet-connected devices. During the past year or two, reports surfaced about bad actors using advertising and social networking technologies to sway public opinion.

Security researchers and experts have warned in a new report that two of the newest technologies can be also be used maliciously:

"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis... Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."

Companies currently use or test artificial intelligence (A.I.) to automate mundane tasks, upgrade and improve existing automated processes, and/or personalize employee (and customer) experiences in a variety of applications and business functions, including sales, customer service, and human resources. "Machine learning" refers to the development of digital systems to improve the performance of a task using experience. Both are part of a business trend often referred to as "digital transformation" or the "intelligent workplace." The CXO Talk site, featuring interviews with business leaders and innovators, is a good resource to learn more about A.I. and digital transformation.

A survey last year of employees in the USA, France, Germany, and the United Kingdom found that they "see A.I. as the technology that will cause the most disruption to the workplace." The survey also found that 70 percent of employees surveyed expect A.I. to impact their jobs during the next ten years, half expect impacts within the next three years, and about a third see A.I. as a job creator.

This new report was authored by 26 security experts from a variety of educational institutions including American University, Stanford University, Yale University, the University of Cambridge, the University of Oxford, and others. The report cited three general ways bad actors could misuse A.I.:

"1. Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.

2. Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.

3. Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems."

So, A.I. could make it easier for the bad guys to automate labor-intensive cyber-attacks such as spear-phishing. The bad guys could also create new cyber-attacks by combining A.I. with speech synthesis. The authors of the report cited examples of more threats:

"The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones)... The use of AI to automate tasks involved in surveillance (e.g. analyzing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation..."

BBC News reported even more possible threats:

"Technologies such as AlphaGo - an AI developed by Google's DeepMind and able to outwit human Go players - could be used by hackers to find patterns in data and new exploits in code. A malicious individual could buy a drone and train it with facial recognition software to target a certain individual. Bots could be automated or "fake" lifelike videos for political manipulation. Hackers could use speech synthesis to impersonate targets."

From all of this, one can conclude that the 2016 elections interference cited by intelligence officials is probably mild compared to what will come: more serious, sophisticated, and numerous attacks. The report included four high-level recommendations:

"1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges."

Download the 101-page report, "The Malicious Use Of Artificial Intelligence: Forecasting, Prevention, And Mitigation." A copy of the report is also available here (Adobe PDF; 1,400 KB).

To prepare, both corporate and government executives would be wise to both harden their computer networks and (re)train their employees to recognize and guard against cyber attacks. What do you think?


I Approved This Facebook Message — But You Don’t Know That

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking sites. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Hundreds of federal political ads — including those from major players such as the Democratic National Committee and the Donald Trump 2020 campaign — are running on Facebook without adequate disclaimer language, likely violating Federal Election Commission (FEC) rules, a review by ProPublica has found.

An FEC opinion in December clarified that the requirement for political ads to say who paid for and approved them, which has long applied to print and broadcast outlets, extends to ads on Facebook. So we checked more than 300 ads that had run on the world’s largest social network since the opinion, and that election-law experts told us met the criteria for a disclaimer. Fewer than 40 had disclosures that appeared to satisfy FEC rules.

“I’m totally shocked,” said David Keating, president of the nonprofit Institute for Free Speech in Alexandria, Virginia, which usually opposes restrictions on political advertising. “There’s no excuse,” he said, looking through our database of ads.

The FEC can investigate possible violations of the law and fine people up to thousands of dollars for breaking it — fines double if the violation was “knowing and willful,” according to the regulations. Under the law, it’s up to advertisers, not Facebook, to ensure they have the right disclaimers. The FEC has not imposed penalties on any Facebook advertiser for failing to disclose.

An FEC spokeswoman declined to say whether the commission has any recent complaints about lack of disclosure on Facebook ads. Enforcement matters are confidential until they are resolved, she said.

None of the individuals or groups we contacted whose ads appeared to have inadequate disclaimers, including the Democratic National Committee and the Trump campaign, responded to requests for comment. Facebook declined to comment on ProPublica’s findings or the December opinion. In public documents, the company has urged the FEC to be “flexible” in what it allows online, and to develop a policy for all digital advertising rather than focusing on Facebook.

Insufficient disclaimers can be minor technicalities, not necessarily evidence of intent to deceive. But the pervasiveness of the lapses ProPublica found suggests a larger problem that may raise concerns about the upcoming midterm elections — that political advertising on the world’s largest social network isn’t playing by rules intended to protect the public.

Unease about political ads on Facebook and other social networking sites has intensified since internet companies acknowledged that organizations associated with the Russian government bought ads to influence U.S. voters during the 2016 election. Foreign contributions to campaigns for U.S. federal office are illegal. Online, advertisers can target ads to relatively small groups of people. Once the marketing campaign is over, the ads disappear. This makes it difficult for the public to scrutinize them.

The FEC opinion is part of a push toward more transparency in online political advertising that has come in response to these concerns. In addition to handing down the opinion in a specific case, the FEC is preparing new rules to address ads on social media more broadly. Three senators are sponsoring a bill called the Honest Ads Act, which would require internet companies to provide more information on who is buying political ads. And earlier this month, the election authority in Seattle said Facebook was violating a city law on election-ad disclosures, marking a milestone in municipal attempts to enforce such transparency.

Facebook itself has promised more transparency about political ads in the coming months, including “paid for by” disclosures. Since late October it has been conducting tests in Canada that publish ads on an advertiser’s Facebook page, where people can see them even without being part of the advertiser’s target audience. Those ads are only up while the ad campaign is running, but Facebook says it will create a searchable archive for federal election advertising in the U.S. starting this summer.

ProPublica found the ads using a tool called the Political Ad Collector, which allows Facebook users to automatically send us the political ads that were displayed on their news feeds. Because they reflect what users of the tool are seeing, the ads in our database aren’t a representative sample.

The disclaimers required by the FEC are familiar to anyone who has seen a print or television political ad — think of a candidate saying, “I’m ____, and I approved this message,” at the end of a TV commercial, or a “paid for by” box at the bottom of a newspaper advertisement. They’re intended to make sure the public knows who is paying to support a candidate, and to prevent people from falsely claiming to speak on a candidate’s behalf.

The system does have limitations, reflecting concerns that overuse of disclaimers could inhibit free speech. For starters, the rules apply only to certain types of political ads. Political committees and candidates have to include disclaimers, as do people seeking donations or conducting “express advocacy.” To count as express advocacy, an ad typically must mention a candidate and use certain words clearly campaigning for or against a candidate — such as “vote for,” “reject” or “re-elect.” And the regulations only apply to federal elections, not state and local ones.

The rules also don’t address so-called “issue” ads that advocate a policy stance. These ads may include a candidate’s name without a disclaimer, as long as they aren’t funded by a political committee or candidate and don’t use express-advocacy language. Many of the political ads purchased by Russian groups in 2016 attempted to influence public opinion without mentioning candidates at all — and would not require disclosure even today.

Enforcement of the law often relies on political opponents or a member of the public complaining to the FEC. If only supporters see an ad, as might be the case online, a complaint may never come.

The disclaimer law was last amended in 2002, but online advertising has changed so rapidly that several experts said the FEC has had trouble keeping up. In 2002, the commission found that paid text message ads were exempt from disclosure under the “small-items exception” originally intended for buttons, pins and the like. What counts as small depends on the situation and is up to the FEC.

In 2010, the FEC considered ads on Google that had no graphics or photos and were limited to 95 characters of text. Google proposed that disclaimers not be part of the ads themselves but be included on the web pages that users would go to after clicking on the ads; the FEC agreed.

In 2011, Facebook asked the FEC to allow political ads on the social network to run without disclosures. At the time, Facebook limited all ads on its platform to small, “thumbnail” photos and brief text of only 100 or 160 characters, depending on the type of ad. In that case, the six-person FEC couldn’t muster the four votes needed to issue an opinion, with three commissioners saying only limited disclosure was required and three saying the ads needed no disclosure at all, because it would be “impracticable” for political ads on Facebook to contain more text than other ads. The result was that political ads on Facebook ran without the disclaimers seen on other types of election advertising.

Since then, though, ads on Facebook have expanded. They can now include much more text, as well as graphics or photos that take up a large part of the news feed’s width. Video ads can run for many minutes, giving advertisers plenty of time to show the disclaimer as text or play it in a voiceover.

Last October, a group called Take Back Action Fund decided to test whether these Facebook ads should still be exempt from the rules.

“For years now, people have said, ‘Oh, don’t worry about the rules, because the FEC doesn’t enforce anything on Facebook,’” said John Pudner, president of Take Back Action Fund, which advocates for campaign finance reform. Many political consultants “didn’t think you ever needed a disclaimer on a Facebook ad,” said Pudner, a longtime campaign consultant to conservative candidates.

Take Back Action Fund came up with a plan: Ask the FEC whether it should include disclosures on ads that the group thought clearly needed them.

The group told the FEC it planned to buy “express advocacy” ads on Facebook that included large images or videos on the news feed. In its filing, Take Back Action Fund provided some sample text it said it was thinking of using: “While [Candidate Name] accuses the Russians of helping President Trump get elected, [s/he] refuses to call out [his/her] own Democrat Party for paying to create fake documents that slandered Trump during his presidential campaign. [Name] is unfit to serve.”

In a comment filed with the FEC in the matter, the Internet Association trade group, of which Facebook is a member, asked the commission to follow the precedent of the 2010 Google case and allow a “one-click” disclosure that didn’t need to be on the ad itself but could be on the web page the ad led to.

The FEC didn’t follow that recommendation. It said unanimously that the ads needed full disclaimers.

The opinion, handed down Dec. 15, was narrow, saying that if any of the “facts or assumptions” presented in another case were different in a “material” way, the opinion could not be relied upon. But several legal experts who spoke with ProPublica said the opinion means anyone who would have to include disclaimers in traditional advertising should now do so on large Facebook image ads or video ads — including candidates, political committees and anyone using express advocacy.

“The functionality and capabilities of today’s Facebook Video and Image ads can accommodate the information without the same constrictions imposed by the character-limited ads that Facebook presented to the Commission in 2011,” three commissioners wrote in a concurring statement. A fourth commissioner went further, saying the commission’s earlier decision in the text messaging case should now be completely superseded. The remaining two commissioners didn’t comment beyond the published opinion.

“We are overjoyed at the decision and hope it will have the effect of stopping anonymous attacks,” said Pudner, of Take Back Action Fund. “We think that this is a matter of the voter’s right to know.” He added that the group doesn’t intend to purchase the ads.

This year, the FEC plans to tackle concerns about digital political advertising more generally. Facebook favors such an industry-wide approach, partly for competitive reasons, according to a comment it submitted to the commission.

“Facebook strongly supports the Commission providing further guidance to committees and other advertisers regarding their disclaimer obligations when running election-related Internet communications on any digital platform,” Facebook General Counsel Colin Stretch wrote to the FEC.

Facebook was concerned that its own transparency efforts “will apply only to advertising on Facebook’s platform, which could have the unintended consequence of pushing purchasers who wish to avoid disclosure to use other, less transparent platforms,” Stretch wrote.

He urged the FEC to adopt a “flexible” approach, on the grounds that there are many different types of online ads. “For example, allowing ads to include an icon or other obvious indicator that more information about an ad is available via quick navigation (like a single click) would give clear guidance.”

To test whether political advertisers were following the FEC guidelines, we searched for large U.S. political ads that our tool gathered between Dec. 20 — five days after the opinion — and Feb. 1. We excluded the small ads that run on the right column of Facebook’s website. To find ads that were most likely to fall under the purview of the FEC regulations, we searched for terms like “committee,” “donate” and “chip in.” We also searched for ads that used express advocacy language such as, “for Congress,” “vote against,” “elect” or “defeat.” We left out ads with state and local terms such as “governor” or “mayor,” as well as ads from groups such as the White House Historical Association or National Audubon Society that were obviously not election-oriented. Then we examined the ads, including the text and photos or graphics.
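A rough sketch of that kind of keyword screen appears below. The term lists mirror the examples mentioned in this story, but the function itself is illustrative; it is not ProPublica's actual tooling, and a real screen would still require human review of every flagged ad.

```python
# Illustrative keyword screen, not ProPublica's actual tooling.
FEC_TERMS = ("committee", "donate", "chip in")
EXPRESS_ADVOCACY = ("for congress", "vote against", "elect", "defeat")
STATE_LOCAL = ("governor", "mayor")

def likely_needs_fec_disclaimer(ad_text: str) -> bool:
    """Flag ads that probably fall under FEC disclaimer rules for closer human review."""
    text = ad_text.lower()
    if any(term in text for term in STATE_LOCAL):
        return False                      # state and local races are outside FEC rules
    return any(term in text for term in FEC_TERMS + EXPRESS_ADVOCACY)

print(likely_needs_fec_disclaimer("Chip in $5 to help us defeat the incumbent!"))  # True
```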

Of nearly 70 entities that ran ads with a large photo or graphic in addition to text, only two used all of the required disclaimer language. About 20 correctly indicated in some fashion the name of the committee associated with the ad but omitted other language, such as whether the ad was endorsed by a candidate. The rest had more significant shortcomings. Many of those that didn’t include disclosures were for relatively inexperienced candidates for Congress, but plenty of seasoned lawmakers and major groups failed to use the proper language as well.

For example, one ad said, “It’s time for Donald Trump, his family, his campaign, and all of his cronies to come clean about their collusion with Russia.” A photo of Donald Trump appeared over a black and red map of Russia, overlaid by the text, “Stop the Lies.” The ad urged people to “Demand Answers Today” and “Sign Up.”

At the top, the ad identified the Democratic Party as the sponsor, and linked to the party’s Facebook page. But, under FEC rules, it should have named the funder, the Democratic National Committee, and given the committee’s address or website. It should also have said whether the ad was endorsed by any candidate. It didn’t. The only nod to the national committee was a link to my.democrats.org, which is paid for by the DNC, at the bottom of the ad. As on all Facebook ads, the word “Sponsored” was included at the top.

Advertisers seemed more likely to put the proper disclaimers on video ads, especially when those ads appeared to have been created for television, where disclaimers have been mandatory for years. Videos that didn’t look made for TV were less likely to include a disclaimer.

One ad that said it was from Donald J. Trump consisted of 20 seconds of video with an American flag background and stirring music. The words “Donate Now! And Enter for a Chance To Win Dinner With Trump!” materialized on the screen with dramatic thuds and crashes. The ad linked to Trump’s Facebook page, and a “Donate” button at the bottom of the ad linked to a website that identified the president’s re-election committee, Donald J. Trump for President, Inc., as its funder. It wasn’t clear on the ad whether Trump himself or his committee paid for it, which should have been specified under FEC rules.

The large majority of advertisements we collected — both those that used disclosures and those that didn’t — were for liberal groups and politicians, possibly reflecting the allegiances of the ProPublica readers who installed our ad-collection tool. There were only four Republican advertisers among the ads we analyzed.

It’s not clear why advertisers aren’t following the FEC regulations. Keating, of the Institute for Free Speech, suggested that advertisers might think the word “Sponsored” and a link to their Facebook page are enough and that reasonable people would know they had paid for the ad.

Others said social media marketers may simply be slow in adjusting to the FEC opinion.

“It’s entirely possible that because disclaimers haven’t been included for years now, candidates and committees just aren’t used to putting them on there,” said Brendan Fischer, director of the Federal and FEC Reform Program at the Campaign Legal Center, the group that provided legal services to Take Back Action Fund. “But they should be on notice,” he added.

There were only two advertisers we saw that included the full, clear disclosures required by the FEC on their large image ads. One was Amy Klobuchar, a Democratic senator from Minnesota who is a co-sponsor of the Honest Ads Act. The other was John Moser, an IT security professional and Democratic primary candidate in Maryland’s 7th Congressional District who received $190 in contributions last year, according to his FEC filings.

Reached by Facebook Messenger, Moser said he is running because he has a plan for ending poverty in the U.S. by restructuring Social Security into a “universal dividend” that gives everyone over age 18 a portion of the country’s per capita income. He complained that Facebook doesn’t make it easy for political advertisers to include the required disclosures. “You have to wedge it in there somewhere,” said Moser, who faces an uphill battle against longtime U.S. Rep. Elijah Cummings. “They need to add specific support for that, honestly.”

Asked why he went to the trouble to put the words on his ad, Moser’s answer was simple: “I included a disclosure because you're supposed to.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


New Data Breach Legislation Proposed In North Carolina

After a surge in data breaches in North Carolina during 2017, state legislators have proposed stronger data breach laws. The National Law Review explained what prompted the legislative action:

"On January 8, 2018, the State of North Carolina released its Security Breach Report 2017, which highlights a 15 percent increase in breaches since 2016... Health care, financial services and insurance businesses accounted for 38 percent, with general businesses making up for just more than half of these data breaches. Almost 75 percent of all breaches resulted from phishing, hacking and unauthorized access, reflecting an overall increase of more than 3,500 percent in reported hacking incidents alone since 2006. Since 2015, phishing incidents increased over 2,300 percent. These numbers emphasize the warning to beware of emails or texts requesting personal information..."

So, fraudsters have tricked many North Carolina residents and employees into opening fraudulent e-mail and text messages, and then disclosing sensitive personal information. Not good.

Details about the proposed legislation:

"... named the Act to Strengthen Identity Theft Practices (ASITP), announced by Representative Jason Saine and Attorney General Josh Stein, attempts to combat the data breach epidemic by expanding North Carolina’s breach notification obligations, while reducing the time businesses have to comply with notification to the affected population and to the North Carolina Attorney General’s Office. If enacted, this new legislation will be one of the most aggressive U.S. breach notification statutes... The Fact Sheet concerning the ASITP as published by the North Carolina Attorney General proposes that the AG take a more direct role in the investigation of data breaches closer to their time of discovery...  To accomplish this goal, the ASITP proposes a significantly shorter period of time for an entity to provide notification to the affected population and to the North Carolina Attorney General. Currently, North Carolina’s statute mandates that notification be made to affected individuals and the Attorney General without “unreasonable delay.” Under the ASITP, the new deadline for all notifications would be 15 days following discovery of the data security incident. In addition to being the shortest deadline in the nation, it is important to note that notification vendors typically require 5 business days to process, print and mail notification letters... The proposed legislation also seeks to (1) expand the definition of “protected information” to include medical information and insurance account numbers, and (2) penalize those who fail to maintain reasonable security procedures by charging them with a violation under the Unfair and Deceptive Trade Practices Act for each person whose information is breached..."

Good. The National Law Review article also compared the breach notification deadlines across all 50 states and territories. It is worth a look to see how your state compares. A comparison of selected states:

Time After Discovery of Breach, by Selected States/Territories:
10 calendar days: Puerto Rico (Dept. of Consumer Affairs)
15 calendar days: North Carolina (Proposed)
15 business days: California (Protected Health Information)
30 calendar days: Florida
45 calendar days: Ohio, Maryland
90 calendar days: Connecticut
Most expedient time & without unreasonable delay: California (other), Massachusetts, New York, North Carolina, Pennsylvania, Puerto Rico (other)
As soon as possible: Texas
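To see how little working time the proposed 15-day window leaves once a notification vendor's 5 business days are reserved, here is a small illustrative calculation; the discovery date is hypothetical.

```python
# Hypothetical illustration of the proposed North Carolina 15-day notification window.
from datetime import date, timedelta

def subtract_business_days(d: date, business_days: int) -> date:
    """Step backward over weekends to reserve the vendor's processing time."""
    while business_days > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:               # Monday through Friday only
            business_days -= 1
    return d

discovery = date(2018, 3, 1)              # hypothetical breach-discovery date
notification_deadline = discovery + timedelta(days=15)
vendor_handoff = subtract_business_days(notification_deadline, 5)

print("Notify affected individuals and the AG by:", notification_deadline)  # 2018-03-16
print("Letters must reach the notification vendor by:", vendor_handoff)     # 2018-03-09
print("Calendar days left to investigate and draft:", (vendor_handoff - discovery).days)  # 8
```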

To learn more, download the North Carolina Security Breach Report 2017 (Adobe PDF), and the ASITP Fact Sheet (Adobe PDF).


Report: Air Travel Globally During 2017 Was The Safest Year On Record

The Independent UK newspaper reported:

"The Dutch-based aviation consultancy, To70, has released its Civil Aviation Safety Review for 2017. It reports only two fatal accidents, both involving small turbo-prop aircraft, with a total of 13 lives lost. No jets crashed in passenger service anywhere in the world... The chances of a plane being involved in a fatal accident is now one in 16 million, according to the lead researcher, Adrian Young... The report warns that electronic devices in checked-in bags pose a growing potential danger: “The increasing use of lithium-ion batteries in electronics creates a fire risk on board aeroplanes as such batteries are difficult to extinguish if they catch fire... The UK has the best air-safety record of any major country. No fatal accidents involving a British airline have happened since the 1980s. The last was on 10 January 1989... In contrast, sub-Saharan Africa has an accident rate 44 per cent worse than the global average, according to the International Air Transport Association (IATA)..."

Read the full 2017 aviation safety report by To70. Below is a chart from the report.

Accident Data Chart from the To70 Air Safety Review for 2017.


What We Discovered During a Year of Documenting Hate

[Editor's note: today's guest blog post, by the reporters at ProPublica, is second in a series about law enforcement and hate crimes in the United States. Today's post is reprinted with permission.]

By Rachel Glickhouse, ProPublica

The days after Election Day last year seemed to bring with them a rise in hate crimes and bias incidents. Reports filled social media and appeared in local news. There were the letters calling for the genocide of Muslims that were sent to Islamic centers from California to Ohio. And the swastikas that were scrawled on buildings around the country. In Florida, “colored” and “whites only” signs were posted over water fountains at a high school. A man assaulted a Hispanic woman in San Francisco, telling her “No Latinos here.”

But were these horrible events indicative of an increase in crimes and incidents themselves, or did the reports simply reflect an increased awareness and willingness to come forward on the part of victims and witnesses? As data journalists, we went looking for answers and were not prepared for what we found: Nobody knows for sure. Hate crimes are so poorly tracked in America, there’s no way to undertake the kind of national analysis that we do in other areas, from bank robberies to virus outbreaks.

There is a vast discrepancy between the hate crimes numbers gathered by the FBI from police jurisdictions around the country and the estimate of hate crime victims in annual surveys by the Bureau of Justice Statistics. The FBI counts 6,121 hate crimes in 2016, and the BJS estimates 250,000 hate crimes a year.

We were told early on that while the law required the Department of Justice to report hate crime statistics, local and state police departments aren’t bound to report their numbers to the FBI — and many don't. Complicating matters further is that hate crime laws vary by state, with some including sexual orientation as a protected class of victims and some not. Five states have no hate crime statute at all.

We decided to try collecting data ourselves, using a mix of social media news gathering and asking readers to send in their personal stories. We assembled a coalition of more than 130 newsrooms to help us report on hate incidents by gathering and verifying tips, and worked on several lines of investigation in our own newsroom.

Along the way, we’ve learned a lot about how hate crimes fall through the cracks:

We’ve received thousands of tips so far through our embeddable incident reporting form. We’ve also added tips sent to us by civil rights groups such as the Southern Poverty Law Center.

ProPublica and reporters in newsrooms around the country used those tips to tell the stories of people who’ve come forward as victims or witnesses. They’ve identified a number of patterns:

Impact

Our mission at ProPublica is to do journalism that has impact. We’ve seen significant impact from Documenting Hate.

  • The official Virginia state after-action report on the Charlottesville rally cited ProPublica’s reporting and made recommendations for better police practices based on our journalism.
  • Cloudflare changed their complaint policies following a ProPublica story on how the company helps support neo-Nazi sites. The company cited our reporting when they later shut down The Daily Stormer, a major neo-Nazi site.
  • After we asked for their records, the Jacksonville Sheriff’s Office, which had not sent a hate crime report to the state of Florida in years, began reporting hate crime data for the first time since 2013.
  • The Miami-Dade Police Department started an internal audit after we talked to them in October. Detective Carlos Rosario, a spokesman for the department, told us they found four hate crimes that they had failed to report to the state. Rosario also told us that they are in the process of creating a digital hate crime reporting process as a result of our reporting.
  • The Colorado Springs, Colorado, police department fixed a database problem that had caused the loss of at least 18 hate crime reports. The error was discovered after we asked them questions about their records.
  • The Madison, Wisconsin, police department changed how they categorize hate crimes before they send them to the FBI based on our records request.
  • A group of nine senators led by Sen. Patty Murray, D-Wash., sent a letter to Education Secretary Betsy DeVos asking what the administration will do in response to racist harassment in schools and universities, citing Buzzfeed’s reporting for the project.
  • The Daily Stormer in Spanish removed the name of a popular Spanish forum from its site after legal action was threatened following a Univision story.
  • The Matthew Shepard Foundation said it would increase resources dedicated to training police officers to identify and investigate hate crimes, citing our project.

Even after the 100 news stories produced by the Documenting Hate coalition, we’re by no means finished. ProPublica and our partners will spend next year collecting and telling more stories from victims and witnesses. And we still have a lot of questions that demand answers. You can help.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.