
Report: Auto Emergency Braking With Pedestrian Detection Systems Fail When Needed Most

Image from the AAA report on automatic emergency braking and pedestrian detection, October 2019. The American Automobile Association (AAA) reported new research results from tests of automatic emergency braking with pedestrian detection systems in automobiles. The AAA found that these systems performed inconsistently and failed when most needed: at night. Chief findings from the report:

"... automatic emergency braking systems with pedestrian detection perform inconsistently, and proved to be completely ineffective at night. An alarming result, considering 75% of pedestrian fatalities occur after dark. The systems were also challenged by real-world situations, like a vehicle turning right into the path of an adult. AAA’s testing found that in this simulated scenario, the systems did not react at all, colliding with the adult pedestrian target every time..."

The testing was performed jointly with the Automobile Club of Southern California's Automotive Research Center in Los Angeles, California. Track testing was conducted on closed surface streets on the grounds of the Auto Club Speedway in Fontana, California. Four test vehicles were used: a 2019 Chevy Malibu, 2019 Honda Accord, 2019 Tesla Model 3, and 2019 Toyota Camry. The testing included four scenarios:

  1. "An adult crossing in front of a vehicle traveling at 20 mph and 30 mph during the day and at 25 mph at night;
  2. A child darting out from between two parked cars in front of a vehicle traveling at 20 mph and 30 mph;
  3. A vehicle turning right onto an adjacent road with an adult crossing at the same time; and
  4. Two adults standing along the side of the road with their backs to traffic, with a vehicle approaching at 20 mph and 30 mph."

For scenario #1, with the vehicle moving at 20 mph, a collision resulted 60 percent of the time (that is, the systems avoided a collision 40 percent of the time). For scenario #2, a collision occurred 89 percent of the time for vehicles moving at 20 mph. For scenario #3, collisions resulted 100 percent of the time. For scenario #4, a collision resulted 80 percent of the time for vehicles moving at 20 mph. Additional test results:

"... the systems were ineffective in all scenarios where the vehicle was traveling at 30 mph. At night, none of the systems detected or reacted to the adult pedestrian."

The October 2019 "Automatic Emergency Braking With Pedestrian Detection" AAA report is available here (Adobe PDF).


The National Auto Surveillance Database You Haven't Heard About Has Plenty Of Privacy Issues

Some consumers have heard of Automated License Plate Recognition (ALPR) cameras, the high-speed, computer-controlled technology that automatically reads and records vehicle license plates. Local governments have installed ALPR cameras on stationary objects such as street-light poles, traffic lights, overpasses, highway exit ramps, and electronic toll collection (ETC) gantries.

Mobile ALPR cameras have been installed on police cars and/or police surveillance vans. The Houston Police Department explained in this 2016 video how it uses the technology. Last year, a blog post discussed ALPR usage in San Diego and its data-sharing with Vigilant Solutions.

What you probably don't know: the auto repossession industry also uses the technology. Many "repo men" have ALPR cameras installed on their vehicles. The data they collect is fed into a massive, nationwide, and privately-owned database which archives license-plate images. Reporters at Motherboard obtained a private demo of the database tool to understand its capabilities.

The demo included tracking a license plate with the vehicle owner's consent. Vice reported:

"This tool, called Digital Recognition Network (DRN), is not run by a government, although law enforcement can also access it. Instead, DRN is a private surveillance system crowdsourced by hundreds of repo men who have installed cameras that passively scan, capture, and upload the license plates of every car they drive by to DRN's database. DRN stretches coast to coast and is available to private individuals and companies focused on tracking and locating people or vehicles. The tool is made by a company that is also called Digital Recognition Network... DRN has more than 600 of these "affiliates" collecting data, according to the contract. These affiliates are paid a monthly bonus for gathering the data..."

Affiliates are repo men and others who both use the database tool and upload images to it. DRN even offers financing to help affiliates buy ALPR cameras, as shown in an image of a financing offer captured from the DRN site on September 20, 2019.

When consumers fail to pay their bills, lenders and insurance companies have valid needs to retrieve (or repossess) the unpaid assets. Lenders hire repo men, who then use the DRN database to find vehicles they've been hired to repossess. Those applications are valid, but there are plenty of privacy issues and opportunities for abuse.

Plenty.

First, the data collection is indiscriminate and broad. As repo men (and women) drive through cities and towns to retrieve wanted vehicles, the ALPR cameras mounted on their cars scan all nearby vehicles: both moving and parked vehicles. Scans are not limited solely to vehicles they've been hired to repossess, nor to vehicles of known/suspected criminals. So, innocent consumers are caught in the massive data collection. According to Vice:

"... in fact, the vast majority of vehicles captured are connected to innocent people. DRN claims to have more than 9 billion license plate scans, according to a DRN contract obtained by Motherboard..."

Second, the data is archived forever. That can provide a very detailed history of a vehicle's (or a person's) movements:

"The results popped up: dozens of sightings, spanning years. The system could see photos of the car parked outside the owner's house; the car in another state as its driver went to visit family; and the car parked in other spots in the owner's city... Some showed the car's location as recently as a few weeks before."

Third, to facilitate searches, metadata is automatically attached to the images: GPS or geolocation, date, time, day of week, and more. The metadata helps provide a pretty detailed history of each vehicle's -- or person's -- movements: where and when a vehicle (or person) travels, patterns such as which days of the week certain locations are visited, and how long the vehicle (or person) parked at specific locations. Vice explained:

"The data is easy to query, according to a DRN training video obtained by Motherboard. The system adds a "tag" to each result, categorising what sort of location the vehicle was likely spotted at, such as "workplace" or "home."

So, DRN can help users associate specific addresses (work, home, school, doctors, etc.) with specific vehicles. How accurate might those tags be? While the tool might help repo men and insurance companies spot fraud via out-of-state registered vehicles whose owners are trying to avoid detection and/or higher premiums, it raises other concerns.

Fourth, consumers -- vehicle owners -- have no control over the data describing them. Vehicle owners cannot opt out of the data collection. Vehicle owners cannot review nor correct any errors in their DRN profiles.

That sounds out of control to me.

The persons whom the archived data directly describes have no say. None. That's a huge concern.

Also, I wonder about single women -- victims of domestic violence -- who have protective orders for their safety. Some states, such as Massachusetts, have Address Confidentiality Programs (ACPs) to protect victims of domestic violence, sexual assault, and stalking. Does DRN accommodate ACPs? If so, how? If not, why not? How does DRN prevent perps from using its database tool? (Yes, DRN access is an issue. Keep reading.) The Vice report didn't say. Hopefully, future reporting will discuss this.

Fifth, DRN is robust. It can be used to track vehicles in or near real time:

"DRN charges $20 to look up a license plate, or $70 for a "live alert", according to the contract. With a live alert, a user can enter a license plate they wish to receive updates on; when the DRN system spots the vehicle, it'll send an email to the user with the newly discovered location."

That makes DRN highly appealing to both valid users (e.g., police, repo men, insurance companies, private investigators) and bad actors posing as valid users. Who might those bad actors be? The Electronic Frontier Foundation (EFF) warned:

"Taken in the aggregate, ALPR data can paint an intimate portrait of a driver’s life and even chill First Amendment protected activity. ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of religious worship."

Sixth is the problem of access. Anybody can use DRN. According to Vice:

"... a private investigator, or a repo man, or an insurance company does not need a warrant to search for someone's movements over years; they just need to pay to access the DRN system, or find someone willing to share or leverage their access..."

Users simply need to comply with DRN's policies. The company says that a) users can use its database tool only for certain applications, and b) its contract prohibits users from sharing search results with third parties. We consumers have only DRN's word that it enforces its policies and that users comply. As we have seen with Facebook data breaches, it is easy for bad actors to pose as valid users in order to do end runs around such policies.

What are your opinions of ALPR cameras and DRN?


Report: World Shipments Of Smart Home Devices Forecasted To Grow To 815 Million In 2019, And To 1.39 Billion in 2023

A report by the International Data Corporation (IDC) has forecasted worldwide shipments of smart home devices to grow 23.5% in 2019 over 2018, to nearly 815 million. The report also forecasted a 14.4 percent compound annual growth rate, reaching about 1.39 billion shipments in 2023.
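For readers who want to check the arithmetic, here is a quick sanity check of how the two forecast figures relate through the compound annual growth rate; the rounded shipment numbers come from the IDC announcement above.

```python
# Sanity check: does a ~14.4% CAGR connect IDC's 2019 and 2023 forecasts?
base_2019 = 815e6        # ~815 million smart home device shipments in 2019
forecast_2023 = 1.39e9   # ~1.39 billion shipments forecast for 2023
years = 2023 - 2019      # four years of compounding

# Compound annual growth rate: (end / start)^(1/years) - 1
cagr = (forecast_2023 / base_2019) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~14.3%; matches IDC's 14.4% after rounding
```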

According to the announcement about the report:

"Video entertainment devices are expected to maintain the largest volume of shipments, accounting for 29.9% of all shipments in 2023... Home monitoring/security devices like smart cameras and smart locks will account for 22.1% of the shipments in 2023... Growth in smart speakers and displays is expected to slow to single digits in the next few years... as the installed base of these devices approaches saturation and consumers look to other form factors to access smart assistants in the home, such as thermostats, appliances, and TVs to name a few."

The report, titled "Worldwide Quarterly Smart Home Device Tracker," includes familiar products such as Amazon Echo, Google Home, Philips Hue bulbs, smart speakers, smart thermostats, and connected doorbells. The report covers Asia/Pacific, Canada, Central and Eastern Europe, China, Japan, Latin America, the Middle East and Africa, the United States, and Western Europe.

Surveys in 2018 found that most consumers are satisfied with in-home voice-controlled assistants, and that performance issues hinder trust and device adoption. A survey in 2017 found that 90 percent of consumers want security built into smart-home devices. Also in 2017, researchers warned that a hacked Amazon Echo could be turned into an always-on surveillance device.

And, consumers should use these privacy tips for smart speakers in their homes.

Today's smart homes contain a variety of internet-connected appliances -- televisions, utility meters, hot water heaters, thermostats, refrigerators, security systems, solar panels -- and internet-connected devices you might not expect:  mouse traps, water bowls and feeders for your pets, wine bottles, crock pots, toy dolls, trash/recycle bins, vibrators, orgasm trackers, and adult sex toys. It is a connected world, indeed.


Study: Anonymized Data Cannot Be Totally Anonymous. And 'Homomorphic Encryption' Explained

Many online users have encountered situations where companies collect data with the promise that it is safe because the data has been anonymized -- all personally-identifiable data elements have been removed. How safe is this, really? A recent study reinforced earlier findings that it isn't as safe as promised. Anonymized data can be de-anonymized, that is, re-identified to individual persons.

The Guardian UK reported:

"... data can be deanonymised in a number of ways. In 2008, an anonymised Netflix data set of film ratings was deanonymised by comparing the ratings with public scores on the IMDb film website in 2014; the home addresses of New York taxi drivers were uncovered from an anonymous data set of individual trips in the city; and an attempt by Australia’s health department to offer anonymous medical billing data could be reidentified by cross-referencing “mundane facts” such as the year of birth for older mothers and their children, or for mothers with many children. Now researchers from Belgium’s Université catholique de Louvain (UCLouvain) and Imperial College London have built a model to estimate how easy it would be to deanonymise any arbitrary dataset. A dataset with 15 demographic attributes, for instance, “would render 99.98% of people in Massachusetts unique”. And for smaller populations, it gets easier..."

According to the U.S. Census Bureau, the population of Massachusetts was about 6.9 million on July 1, 2018. How did this de-anonymization problem happen? Scientific American explained:

"Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model."

Data brokers, including credit-reporting agencies, have collected a massive number of demographic data attributes about nearly every person. According to this 2018 report, Acxiom has compiled about 5,000 data elements for each of 700 million persons worldwide.

It's reasonable to assume that credit-reporting agencies and other data brokers have similar capabilities. So, data brokers' massive databases can make it relatively easy to re-identify data that has supposedly been anonymized. This means consumers don't have the privacy promised.

What's the solution? Researchers suggest that data brokers must develop new anonymization methods, and rigorously test them to ensure anonymization truly works. And data brokers must be held to higher data security standards.
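As for the headline's second topic: homomorphic encryption is one approach researchers are exploring, because it lets a third party compute on data while it remains encrypted, so analysts never see the raw values. As a toy illustration of the core idea only -- textbook RSA happens to be multiplicatively homomorphic -- here is a sketch with tiny, insecure demonstration numbers; this is not a production scheme.

```python
# Toy demo of a homomorphic property using textbook RSA (illustration only;
# real homomorphic-encryption schemes such as Paillier or BFV are far more
# sophisticated and, unlike textbook RSA, safe to deploy).
n = 3233          # tiny RSA modulus (61 * 53), for demonstration only
e = 17            # public exponent
d = 2753          # private exponent

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 12, 5
# Multiplying ciphertexts multiplies the underlying plaintexts:
product_ciphertext = (encrypt(a) * encrypt(b)) % n
assert decrypt(product_ciphertext) == (a * b) % n
print(decrypt(product_ciphertext))  # 60, computed without ever decrypting a or b
```

Production schemes extend this idea: Paillier supports addition on ciphertexts, and fully homomorphic schemes such as BFV support general computations.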

Any legislation serious about protecting consumers' privacy must address this, too. What do you think?


Researcher Uncovers Several Browser Extensions That Track Users' Online Activity And Share Data

Many consumers use web browsers since websites contain full content and functionality, versus the pared-down versions in mobile apps. A researcher has found that as many as four million consumers have been affected by browser extensions -- optional add-ons for web browsers -- which collected sensitive personal and financial information.

Ars Technica reported about DataSpii, the name of the online privacy issue:

"The term DataSpii was coined by Sam Jadali, the researcher who discovered—or more accurately re-discovered—the browser extension privacy issue. Jadali intended for the DataSpii name to capture the unseen collection of both internal corporate data and personally identifiable information (PII).... DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics..."

At first glance, this may not sound important, but it is. Why? First, the data collected included the most sensitive and personal information:

"Home and business surveillance videos hosted on Nest and other security services; tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive, Intuit.com, and other online services; vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers; patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services; travel itineraries hosted on Priceline, Booking.com, and airline websites; Facebook Messenger attachments..."

I'll bet you thought your Facebook Messenger stuff was truly private. Second, because:

"... the published URLs wouldn’t open a page unless the person following them supplied an account password or had access to the private network that hosted the content. But even in these cases, the combination of the full URL and the corresponding page name sometimes divulged sensitive internal information. DataSpii is known to have affected 50 companies..."

Ars Technica also reported:

"Principals with both Nacho Analytics and the browser extensions say that any data collection is strictly "opt in." They also insist that links are anonymized and scrubbed of sensitive data before being published. Ars, however, saw numerous cases where names, locations, and other sensitive data appeared directly in URLs, in page titles, or by clicking on the links. The privacy policies for the browser extensions do give fair warning that some sort of data collection will occur..."

So, the data collection may be legal, but is it ethical -- especially if the anonymization is partial? After the researcher's report went public, many of the suspect browser extensions were deleted from online stores. However, extensions already installed locally on users' browsers can still collect data:

"Beginning on July 3—about 24 hours after Jadali reported the data collection to Google—Fairshare Unlock, SpeakIt!, Hover Zoom, PanelMeasurement, Branded Surveys, and Panel Community Surveys were no longer available in the Chrome Web Store... While the notices say the extensions violate the Chrome Web Store policy, they make no mention of data collection nor of the publishing of data by Nacho Analytics. The toggle button in the bottom-right of the notice allows users to "force enable" the extension. Doing so causes browsing data to be collected just as it was before... In response to follow-up questions from Ars, a Google representative didn't explain why these technical changes failed to detect or prevent the data collection they were designed to stop... But removing an extension from an online marketplace doesn't necessarily stop the problems. Even after the removals of Super Zoom in February or March, Jadali said, code already installed by the Chrome and Firefox versions of the extension continued to collect visited URL information..."

Since browser developers haven't remotely disabled leaky browser extensions, and since online stores can't seem to consistently police extensions for privacy compliance, the burden falls upon consumers. The Ars Technica report lists the leaky browser extensions by name.

The bottom line: browser extensions can easily compromise your online privacy and security. That means that, as with any other software, wise consumers read independent online reviews first, read the developer's terms of use and privacy policy before installing a browser extension, and use a privacy-focused web browser.

Consumer Reports advises consumers to, a) install browser extensions only from companies you trust, and b) uninstall browser extensions you don't need or use. For consumers who don't know how, the Consumer Reports article lists step-by-step instructions to uninstall browser extensions in the Google Chrome, Firefox, Safari, and Internet Explorer branded web browsers.


Emotion Recognition: Facial Recognition Software Based Upon Valid Science or Malarkey?

The American Civil Liberties Union (ACLU) reported:

"Emotion recognition is a hot new area, with numerous companies peddling products that claim to be able to read people’s internal emotional states, and artificial intelligence (A.I.) researchers looking to improve computers’ ability to do so. This is done through voice analysis, body language analysis, gait analysis, eye tracking, and remote measurement of physiological signs like pulse and breathing rates. Most of all, though, it’s done through analysis of facial expressions.

A new study, however, strongly suggests that these products are built on a bed of intellectual quicksand... after reviewing over 1,000 scientific papers in the psychological literature, these experts came to a unanimous conclusion: there is no scientific support for the common assumption “that a person’s emotional state can be readily inferred from his or her facial movements.” The scientists conclude that there are three specific misunderstandings “about how emotions are expressed and perceived in facial movements.” The link between facial expressions and emotions is not reliable (i.e., the same emotions are not always expressed in the same way), specific (the same facial expressions do not reliably indicate the same emotions), or generalizable (the effects of different cultures and contexts has not been sufficiently documented)."

Another reason why this is important:

"... an entire industry of automated purported emotion-reading technologies is quickly emerging. As we wrote in our recent paper on “Robot Surveillance,” the market for emotion recognition software is forecast to reach at least $3.8 billion by 2025. Emotion recognition (aka “affect recognition” or “affective computing”) is already being incorporated into products for purposes such as marketing, robotics, driver safety, and audio “aggression detectors.”

Regular readers of this blog are familiar with aggression detectors and the variety of industries where the technology is already deployed. And, one police body-cam maker says it won't deploy facial recognition in its products due to problems with the technology.

Yes, reliability matters -- especially when used for surveillance purposes. Nobody wants law enforcement making decisions about persons based upon software built using unreliable or fake science masquerading as reliable, valid science. Nobody wants education and school officials making decisions about students using unreliable software. Nobody wants hospital administrators and physicians making decisions about patients based upon unreliable software.

What are your opinions?


Police Body Cam Maker Says It Won't Use Facial Recognition Due To Problems With The Technology

We've all heard of the following three technologies: police body cameras, artificial intelligence, and facial recognition software. Across the nation, some police departments use body cameras.

Do the three technologies go together -- work well together? The Washington Post reported:

"Axon, the country’s biggest seller of police body cameras, announced that it accepts the recommendation of an ethics board and will not use facial recognition in its devices... the company convened the independent board last year to assess the possible consequences and ethical costs of artificial intelligence and facial-recognition software. The board’s first report, published June 27, concluded that “face recognition technology is not currently reliable enough to ethically justify its use” — guidance that Axon plans to follow."

So, a major U.S. corporation assembled an ethics board to guide its activities. Good. That's not something you read about often. Then, the same corporation followed that board's advice. Even better.

Why reject using facial recognition with body cameras? Axon explained in a statement:

"Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board's recommendation, Axon will not be commercializing face matching products on our body cameras at this time. We do believe face matching technology deserves further research to better understand and solve for the key issues identified in the report, including evaluating ways to de-bias algorithms as the board recommends. Our AI team will continue to evaluate the state of face recognition technologies and will keep the board informed about our research..."

Two types of inaccuracies occur with facial recognition software: i) persons falsely identified (a/k/a "false positives"); and ii) persons who should have been identified but were not (a/k/a "false negatives"). The ethics board's report provided detailed explanations:

"The truth is that current technology does not perform as well on people of color compared to whites, on women compared to men, or young people compared to older people, to name a few disparities. These disparities exist in both directions — a greater false positive rate and false negative rate."

The ethics board's report also explained the problem of bias:

"One cause of these biases is statistically unrepresentative training data — the face images that engineers use to “train” the face recognition algorithm. These images are unrepresentative for a variety of reasons but in part because of decisions that have been made for decades that have prioritized certain groups at the cost of others. These disparities make real-world face recognition deployment a complete nonstarter for the Board. Until we have something approaching parity, this technology should remain on the shelf. Policing today already exhibits all manner of disparities (particularly racial). In this undeniable context, adding a tool that will exacerbate this disparity would be unacceptable..."

So, well-meaning software engineers can create bias in their algorithms by using sets of images that are not representative of the population. The ethics board's 42-page report, titled "First Report Of The Axon A.I. & Policing Technology Ethics Board" (Adobe PDF; 3.1 Megabytes), listed six general conclusions:

"1: Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras. At the least, face recognition technology should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups. Whether face recognition on body-worn cameras can ever be ethically justifiable is an issue the Board has begun to discuss in the context of the use cases outlined in Part IV.A, and will take up again if and when these prerequisites are met."

"2: When assessing face recognition algorithms, rather than talking about “accuracy,” we prefer to discuss false positive and false negative rates. Our tolerance for one or the other will depend on the use case."

"3: The Board is unwilling to endorse the development of face recognition technology of any sort that can be completely customized by the user. It strongly prefers a model in which the technologies that are made available are limited in what functions they can perform, so as to prevent misuse by law enforcement."

"4: No jurisdiction should adopt face recognition technology without going through open, transparent, democratic processes, with adequate opportunity for genuinely representative public analysis, input, and objection."

"5: Development of face recognition products should be premised on evidence-based benefits. Unless and until those benefits are clear, there is no need to discuss costs or adoption of any particular product."

"6: When assessing the costs and benefits of potential use cases, one must take into account both the realities of policing in America (and in other jurisdictions) and existing technological limitations."

The board included persons with legal, technology, law enforcement, and civil rights backgrounds, plus members from the affected communities. Axon management listened to the report's conclusions and is following the board's recommendations (emphasis added):

"Respond publicly to this report, including to the Board’s conclusions and recommendations regarding face recognition technology. Commit, based on the concerns raised by the Board, not to proceed with the development of face matching products, including adding such capabilities to body-worn cameras or to Axon Evidence (Evidence.com)... Invest company resources to work, in a transparent manner and in tandem with leading independent researchers, to ensure training data are statistically representative of the appropriate populations and that algorithms work equally well across different populations. Continue to comply with the Board’s Operating Principles, including by involving the Board in the earliest possible stages of new or anticipated products. Work with the Board to produce products and services designed to improve policing transparency and democratic accountability, including by developing products in ways that assure audit trails or that collect information that agencies can release to the public about their use of Axon products..."

Admirable. Encouraging. The Washington Post reported:

"San Francisco in May became the first U.S. city to ban city police and agencies from using facial-recognition software... Somerville, Massachusetts became the second, with other cities, including Berkeley and Oakland, Calif., considering similar measures..."

Clearly, this topic bears monitoring. Consumers and government officials are concerned about accuracy and bias. So, too, are some corporations.

And, more news seems likely. Will other technology companies and local governments utilize similar A.I. ethics boards? Will schools, healthcare facilities, and other customers of surveillance devices demand products with accuracy and without bias supported by evidence?


UK Parliamentary Committee Issued Its Final Report on Disinformation And Fake News. Facebook And Six4Three Discussed

On February 18th, a United Kingdom (UK) parliamentary committee published its final report on disinformation and "fake news." The 109-page report by the Digital, Culture, Media and Sport Committee (DCMS) updates its interim report from July 2018.

The report covers many issues: political advertising (including "dark adverts" from unidentifiable entities), Brexit and UK elections, data breaches, privacy, and recommendations for UK regulators and government officials. It seems wise to understand the report's findings regarding the business practices of the U.S.-based companies mentioned, since those business practices affect consumers globally, including consumers in the United States.

Issues Identified

First, the DCMS' final report built upon issues identified in its:

"... Interim Report: the definition, role and legal liabilities of social media platforms; data misuse and targeting, based around the Facebook, Cambridge Analytica and Aggregate IQ (AIQ) allegations, including evidence from the documents we obtained from Six4Three about Facebook’s knowledge of and participation in data-sharing; political campaigning; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy..."

The final report includes input from 23 "oral evidence sessions," more than 170 written submissions, interviews of at least 73 witnesses, and more than 4,350 questions asked at hearings. The DCMS Committee sought input from individuals, organizations, industry experts, and other governments. Some of the information sources:

"The Canadian Standing Committee on Access to Information, Privacy and Ethics published its report, “Democracy under threat: risks and solutions in the era of disinformation and data monopoly” in December 2018. The report highlights the Canadian Committee’s study of the breach of personal data involving Cambridge Analytica and Facebook, and broader issues concerning the use of personal data by social media companies and the way in which such companies are responsible for the spreading of misinformation and disinformation... The U.S. Senate Select Committee on Intelligence has an ongoing investigation into the extent of Russian interference in the 2016 U.S. elections. As a result of data sets provided by Facebook, Twitter and Google to the Intelligence Committee -- under its Technical Advisory Group -- two third-party reports were published in December 2018. New Knowledge, an information integrity company, published “The Tactics and Tropes of the Internet Research Agency,” which highlights the Internet Research Agency’s tactics and messages in manipulating and influencing Americans... The Computational Propaganda Research Project and Graphika published the second report, which looks at activities of known Internet Research Agency accounts, using Facebook, Instagram, Twitter and YouTube between 2013 and 2018, to impact US users"

Why Disinformation

Second, definitions matter. According to the DCMS Committee:

"We have even changed the title of our inquiry from “fake news” to “disinformation and ‘fake news’”, as the term ‘fake news’ has developed its own, loaded meaning. As we said in our Interim Report, ‘fake news’ has been used to describe content that a reader might dislike or disagree with... We were pleased that the UK Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation'..."

Overall Recommendations

Summary recommendations from the report:

  1. "Compulsory Code of Ethics for tech companies overseen by independent regulator,
  2. Regulator given powers to launch legal action against companies breaching code,
  3. Government to reform current electoral communications laws and rules on overseas involvement in UK elections, and
  4. Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation"

Role And Liability Of Tech Companies

Regarding detailed observations and findings about the role and liability of tech companies, the report stated:

"Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company..."

The UK Government and its regulators may adopt some, all, or none of the report's recommendations. More observations and findings in the report:

"... both social media companies and search engines use algorithms, or sequences of instructions, to personalize news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritized. Therefore, negative stories will always be prioritized by algorithms, as they are shared more frequently than positive stories... Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers... Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms.” Facebook should be taking a more active and urgent role in tackling such inherent biases..."

Based upon this, the report recommended that the UK's new Centre for Data Ethics and Innovation should play a key role as an advisor to the UK Government by continually analyzing and anticipating gaps in governance and regulation, suggesting best practices and corporate codes of conduct, and setting standards for artificial intelligence (AI) and related technologies.

Inferred Data

The report also discussed a critical issue related to algorithms (emphasis added):

"... When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will. However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users... In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through the data profile matching and the ‘lookalike audience’ advertising targeting tool... Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power of where that data goes and what it is used for..."

The distinction between uploaded and inferred data cannot be overemphasized. It is critical when evaluating tech companies' statements, policies (e.g., privacy, terms of use), and promises about what "data" users have control over. Wise consumers must insist upon clear definitions to avoid being misled or duped.

What might be an example of inferred data? What comes to mind is Facebook's Ad Preferences feature, which allows users to review and delete the "Interests" -- advertising categories -- Facebook assigns to each user's profile. (The service's algorithms assign Interests based upon the groups/pages/events/advertisements users "Liked" or clicked on, posts submitted, posts commented upon, and more.) These "Interests" are inferred data, since Facebook assigned them and users didn't.

In fact, Facebook doesn't notify its users when it assigns new Interests. It just does it. And, Facebook can assign Interests whether you interacted with an item once or many times. How relevant is an Interest assigned after a single interaction, "Like," or click? Most people would say: not relevant. So, does the Interests list assigned to users' profiles accurately describe users? Do Facebook users own the Interests list assigned to their profiles? Any control Facebook users have seems minimal. Why? Facebook users can delete Interests assigned to their profiles, but users cannot stop Facebook from applying new Interests. Users cannot prevent Facebook from re-applying Interests previously deleted. Deleting Interests doesn't reduce the number of ads users see on Facebook.

The only way to know what Interests have been assigned is for Facebook users to visit the Ad Preferences section of their profiles and browse the list. Depending upon how frequently a person uses Facebook, it may be necessary to prune the Interests list at least once monthly -- a cumbersome and time-consuming task, probably designed that way to discourage reviews and pruning. And that's just one example of inferred data. There are probably plenty more examples, and, as the report emphasizes, users don't have access to all of the inferred data associated with their profiles.

Now, back to the report. To fix problems with inferred data, the DCMS recommended:

"We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information."

Business Practices At Facebook

Next, the DCMS Committee's report said plenty about Facebook, its management style, and executives (emphasis added):

"Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinized... Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated... He discussed the California Consumer Privacy Act, which Facebook supported in public, but lobbied against, behind the scenes... By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world. The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which -- unsurprisingly -- failed to address all of our questions. We are left in no doubt that this strategy was deliberate."

So, based upon Facebook's actions (or lack thereof), the DCMS concluded that Facebook executives intentionally ducked and dodged issues and questions.

While discussing data use and targeting, the report said more about data breaches and Facebook:

"The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests..."

So, internal management failed. That's not all. After a detailed review of the GSR/Cambridge Analytica breach and Facebook's 2011 Consent Decree with the U.S. Federal Trade Commission (FTC), the DCMS Committee concluded (emphasis and text link added):

"The Cambridge Analytica scandal was facilitated by Facebook’s policies. If it had fully complied with the FTC settlement, it would not have happened. The FTC Complaint of 2011 ruled against Facebook -- for not protecting users’ data and for letting app developers gain as much access to user data as they liked, without restraint -- and stated that Facebook built their company in a way that made data abuses easy. When asked about Facebook’s failure to act on the FTC’s complaint, Elizabeth Denham, the Information Commissioner, told us: “I am very disappointed that Facebook, being such an innovative company, could not have put more focus, attention and resources into protecting people’s data”. We are equally disappointed."

Wow! Not good. There's more:

"... a current court case at the San Mateo Superior Court in California also concerns Facebook’s data practices. It is alleged that Facebook violated the privacy of US citizens by actively exploiting its privacy policy... The published ‘corrected memorandum of points and authorities to defendants’ special motions to strike’, by the complainant in the case, the U.S.-based app developer Six4Three, describes the allegations against Facebook; that Facebook used its users’ data to persuade app developers to create platforms on its system, by promising access to users’ data, including access to data of users’ friends. The case also alleges that those developers that became successful were targeted and ordered to pay money to Facebook... Six4Three lodged its original case in 2015, after Facebook removed developers’ access to friends’ data, including its own. The DCMS Committee took the unusual, but lawful, step of obtaining these documents, which spanned between 2012 and 2014... Since we published these sealed documents, on 14 January 2019 another court agreed to unseal 135 pages of internal Facebook memos, strategies and employee emails from between 2012 and 2014, connected with Facebook’s inappropriate profiting from business transactions with children. A New York Times investigation published in December 2018 based on internal Facebook documents also revealed that the company had offered preferential access to users data to other major technology companies, including Microsoft, Amazon and Spotify."

"We believed that our publishing the documents was in the public interest and would also be of interest to regulatory bodies... The documents highlight Facebook’s aggressive action against certain apps, including denying them access to data that they were originally promised. They highlight the link between friends’ data and the financial value of the developers’ relationship with Facebook. The main issues concern: ‘white lists’; the value of friends’ data; reciprocity; the sharing of data of users owning Android phones..."

You can read the report's detailed descriptions of those issues. A summary: a) Facebook allegedly used promises of access to users' data to lure developers (often by overriding Facebook users' privacy settings); b) some developers got priority treatment based upon unclear criteria; c) developers who didn't spend enough money with Facebook were denied access to data previously promised; d) Facebook's reciprocity clause demanded that developers also share their users' data with Facebook; e) Facebook's mobile app for Android OS phone users collected far more data about users, allegedly without consent, than users were told; and f) Facebook allegedly targeted certain app developers (emphasis added):

"We received evidence that showed that Facebook not only targeted developers to increase revenue, but also sought to switch off apps where it considered them to be in competition or operating in a lucrative areas of its platform and vulnerable to takeover. Since 1970, the US has possessed high-profile federal legislation, the Racketeer Influenced and Corrupt Organizations Act (RICO); and many individual states have since adopted similar laws. Originally aimed at tackling organized crime syndicates, it has also been used in business cases and has provisions for civil action for damages in RICO-covered offenses... Despite specific requests, Facebook has not provided us with one example of a business excluded from its platform because of serious data breaches. We believe that is because it only ever takes action when breaches become public. We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that “we’ve never sold anyone’s data” is simply untrue.” The evidence that we obtained from the Six4Three court documents indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers—such as Six4Three—of that data, thereby causing them to lose their business. It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission settlement."

"The Information Commissioner told the Committee that Facebook needs to significantly change its business model and its practices to maintain trust. From the documents we received from Six4Three, it is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws. The ICO should carry out a detailed investigation into the practices of the Facebook Platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ of the sharing of data."

The Information Commissioner's Office (ICO) is one of the regulatory agencies within the UK. So, the Committee concluded that Facebook's real business model is "data transfer for value" -- in other words: have money, get access to data (regardless of Facebook users' privacy settings).

One quickly gets the impression that Facebook acted like a monopoly in its treatment of both users and developers... or worse, like organized crime. The report concluded (emphasis added):

"The Competitions and Market Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media. The Committee made this recommendation its interim report, and we are pleased that it has also been supported in the independent Cairncross Report commissioned by the government and published in February 2019. Given the contents of the Six4Three documents that we have published, it should also investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail... Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law."

The DCMS Committee's report also discussed findings from the Cairncross Report. In summary, Damian Collins MP, Chair of the DCMS Committee, said:

“... we cannot delay any longer. Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalized ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use everyday. Much of this is directed from agencies working in foreign countries, including Russia... Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers... We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct..."

So, the report seems extensive, comprehensive, and detailed. Read the DCMS Committee's announcement, and/or download the full DCMS Committee report (Adobe PDF format, 3,507 kilobytes).

One can assume that governments' intelligence and spy agencies will continue to do what they've always done: collect data about targets and adversaries, and use disinformation and other tools to attempt to meddle in other governments' activities. It is clear that social media makes these tasks far easier than before. The DCMS Committee's report provided recommendations about what the UK Government's response should be. Other countries' governments face similar decisions about their responses, if any, to the threats.

Given the data in the DCMS report, it will be interesting to see how the FTC and lawmakers in the United States respond. If increased regulation of social media results, tech companies arguably have only themselves to blame. What do you think?


The Federal Reserve Introduced A New Publication For And About Consumers

The Federal Reserve Board (FRB) has introduced a new publication titled, "Consumer & Community Context." According to the FRB announcement, the new publication will feature:

"... original analyses about the financial conditions and experiences of consumers and communities, including traditionally under-served and economically vulnerable households and neighborhoods. The goal of the series is to increase public understanding of the financial conditions and concerns of consumers and communities... The inaugural issue covers the theme of student loans, and includes articles on the effect that rising student loan debt levels may have on home ownership rates among young adults; and the relationship between the amount of student loan debt and individuals' decisions to live in rural or urban areas."

Authors are employees of the FRB or the Federal Reserve System (FRS). As the central bank of the United States, the FRS performs five general functions to "promote the effective operation of the U.S. economy and, more generally, the public interest:" i) conducts the nation’s monetary policy to promote maximum employment, stable prices, and moderate long-term interest rates; ii) promotes the stability of the financial system and seeks to minimize and contain systemic risks; iii) promotes the safety and soundness of individual financial institutions; iv) fosters payment and settlement system safety and efficiency through services to the banking industry; and v) promotes consumer protection and community development through consumer-focused supervision, examination, and monitoring of the financial system. Learn more about the Federal Reserve.

The first issue of Consumer & Community Context is available, in Adobe PDF format, at the FRB site. Economists, bank executives, consumer advocates, researchers, teachers, and policy makers may be particularly interested. To better understand the publication's content, below is an excerpt.

In their analysis of student loan debt and home ownership among young adults, the researchers found:

"... home ownership rate in the United States fell approximately 4 percentage points in the wake of the financial crisis, from a peak of 69 percent in 2005 to 65 percent in 2014. The decline in home ownership was even more pronounced among young adults. Whereas 45 percent of household heads ages 24 to 32 in 2005 owned their own home, just 36 percent did in 2014 — a marked 9 percentage point drop... We found that a $1,000 increase in student loan debt (accumulated during the prime college-going years and measured in 2014 dollars) causes a 1 to 2 percentage point drop in the home ownership rate for student loan borrowers during their late 20s and early 30s... higher student loan debt early in life leads to a lower credit score later in life, all else equal. We also find that, all else equal, increased student loan debt causes borrowers to be more likely to default on their student loan debt, which has a major adverse effect on their credit scores, thereby impacting their ability to qualify for a mortgage..."

The FRB announcement described the publication schedule as "periodically." Perhaps this is due to the partial government shutdown. Hopefully, the FRB will commit to a more regular publication schedule in the near future.


Report: Navient Tops List Of Student Loan Complaints

The Consumer Financial Protection Bureau (CFPB), a federal government agency in the United States, collects complaints about banks and other financial institutions. That includes lenders of student loans.

The CFPB and private-sector firms analyze these complaints, looking for patterns. Forbes magazine reported:

"The team at Make Lemonade analyzed these complaints [submitted during 2018], and found that there were 8,752 related to student loans. About 64% were related to federal student loans and 36% were related to private student loans. Nearly 67% of complaints were related to an issue with a student loan lender or student loan servicer."

"Navient, one of the nation's largest student loan servicers, ranked highest in terms of student loan complaints. In 2018, student loan borrowers submitted 4,032 complaints about Navient to the CFPB, which represents 46% of all student loan complaints. AES/PHEAA and Nelnet, two other major student loan servicers, received approximately 20% and 7%, respectively."

When looking for a student loan, wise consumers do their research and shop around, since some lenders are better than others. The Forbes article is very helpful, as it contains links to additional resources and information for consumers.

Learn more about the CFPB and its complaint database, which is designed to help both consumers and regulators.


To Estimate The Value Of Facebook, A Study Asked How Much Money Users Would Demand As Payment To Quit The Service

What is the value of Facebook to its users? In a recent study, researchers explored answers to that question:

"Because [Facebook] users do not pay for the service, its benefits are hard to measure. We report the results of a series of three non-hypothetical auction experiments where winners are paid to deactivate their Facebook accounts for up to one year..."

The study was published in PLOS One, a peer-reviewed journal published by the Public Library of Science. The study is important and of interest to economists because:

"... If Facebook were a country, it would be the world’s largest in terms of population with over 2.20 billion monthly active users, 1.45 billion of whom are active on a daily basis, spending an average of 50 minutes each day on Facebook-owned platforms (e.g., Facebook, Messenger, Instagram)... Despite concerns about loss of relevance due to declining personal posts by users, diminished interest in adoption and use by teens and young adults, claims about potential manipulation of its content for political purposes, and leaks that question the company’s handling of private user data, Facebook remains the top social networking site in the world and the third most visited site on the Internet after Google and YouTube...  Since its launch in 2004, Facebook has redefined how we communicate... Facebook had 23,165 employees as of September 30, 2017. This is less than 1% the number employed by Walmart, the world’s largest private employer... Because Facebook’s users pay nothing for the service, Facebook does not contribute directly to gross domestic product (GDP), economists’ standard metric of a nation’s output. In this context, it may seem surprising then that Facebook is the world’s fifth most valuable company with a market capitalization of $541.56 billion in May 2018... In 2017, the company had $40.65 billion in revenues, primarily from advertising, and $20.20 billion in net income..."

The detailed methodology of the study included:

"... a Vickrey second-price approach. In a typical experimental auction, participants bid to purchase a good or service. The highest bidder wins the auction and pays a price equal to the second-highest bid. This approach is designed such that participants’ best strategy is to bid their true willingness-to-pay... Because our study participants already had free access to Facebook, we could not ask people how much they would be willing to pay for access to the service. Instead, people bid for how much they would need in compensation to give up using Facebook. Economists have used these “willingness-to-accept” (WTA) auctions to assess the value of mundane items such as pens and chocolate bars, but also more abstract or novel items such as food safety, goods free of genetically modified ingredients, the stigma associated with HIV, battery life in smartphones, and the payment people require to endure an unpleasant experience... In this study, each bid can be interpreted as the minimum dollar amount a person would be willing to accept in exchange for not using Facebook for a given time period. The three auctions differ in the amount of time winners would have to go without using Facebook..."

The authors also discussed "consumer surplus," an economics term defined as:

"... a measure of value equal to the difference between the most a consumer would be willing to pay for a service and the price she actually pays to use it. When considering all consumers, Figure 1 below shows consumer surplus is the area under the demand curve, which shows consumers’ willingness to pay, and above the price; it is generally interpreted as consumers’ net benefit from being able to access a good or service in the marketplace. GDP, by contrast, is the market value of all final goods and services produced domestically in a given year..."

[Figure 1 from the study: consumer surplus shown as the area under the demand curve and above the price.]
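
The definition lends itself to a short numerical illustration. Below is a minimal sketch that integrates the area under a demand curve and above a price; the linear demand function and the $40 price are illustrative assumptions, not figures from the study.

```python
# Consumer surplus = area under the demand curve and above the market price.
import numpy as np

def demand(q):
    """Willingness to pay for the q-th unit (hypothetical linear demand)."""
    return 100.0 - 0.5 * q

price = 40.0
quantities = np.linspace(0, 120, 1_000)
wtp = demand(quantities)
surplus_per_unit = np.clip(wtp - price, 0.0, None)  # only units actually bought
consumer_surplus = np.trapz(surplus_per_unit, quantities)
print(f"Consumer surplus: {consumer_surplus:,.0f}")  # analytic answer: 0.5*120*60 = 3,600
```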

For comparison, the researchers cited related studies:

"... Bapna, Jank, and Shmueli [24] found that eBay users received a median of $4 in consumer surplus per transaction in 2003, or $7 billion in total. Ghose, Smith, and Telang [25] found that Amazon’s used-book market generated $67 million in annual consumer surplus. Brynjolfsson, Hu, and Smith [26] found that the increased variety of books available on Amazon created $1 billion in consumer surplus in 2000. Widening the lens to focus on the entire Internet, Greenstein and McDevitt [27] found that high-speed Internet access (as opposed to dial-up) generated $4.8 billion to $6.7 billion of consumer surplus in total between 1999 and 2006. Dutz, Orszag, and Willig [28] estimated that high-speed internet access generated $32 billion in consumer surplus in 2008 alone..."

Across the three auctions, study participants submitted bids ranging from $1,130 to $2,076 on average. The researchers found:

"... across all three samples, the mean bid to deactivate Facebook for a year exceeded $1,000. Even the most conservative of these mean WTA estimates, if applied to Facebook’s 214 million U.S. users, suggests an annual value of over $240 billion to users... Facebook reached a market capitalization of $542 billion in May 2018. At 2.20 billion active users in March 2018, this suggests a value to investors of almost $250 per user, which is less than one fourth of the annual value of [payments demanded by study participants to quit the service]. This reinforces the idea that the vast majority of benefits of new inventions go not to the inventors but to users."

To summarize, users in the study demanded at least $1,000 yearly each to quit the service. That's a measure of the value of Facebook to users. And, that value far exceeds the $250 value of each user to investors. The authors concluded:

"Concerns about data privacy, such as Cambridge Analytica’s alleged problematic handling of users’ private information, which are thought to have been used to influence the 2016 United States presidential election, only underscore the value Facebook’s users must derive from the service. Despite the parade of negative publicity surrounding the Cambridge Analytica revelations in mid-March 2018, Facebook added 70 million users between the end of 2017 and March 31, 2018. This implies the value users derive from the social network more than offsets the privacy concerns."

The conclusion suggests that a mass exodus of users is unlikely. I guess Facebook executives will find some comfort in that. However, more research is needed. Different sub-groups of users might demand different payment amounts. For example, a sub-group of users who have had their accounts hacked or cloned might demand a different -- perhaps lower -- annual payment amount to quit Facebook.

Another sub-group of users who have been identity theft and fraud victims might demand a higher annual payment to cover the costs of credit monitoring services and/or fraud resolution fees. A third sub-group -- parents and grandparents -- might demand a different payment amount due to the loss of access to family, children and grandchildren.

A one-size-fits-all approach to a WTA value doesn't seem very useful. Follow-up studies could explore values by these sub-groups and by users with different types of behaviors (e.g., dissatisfaction levels):

  1. Quit the service's mobile apps and use only its browser interface,
  2. Reduced their time on the site (e.g., fewer posts, not using quizzes, not posting photos, not using Facebook Messenger, etc.),
  3. Daily usage ranges (e.g., less than 30 minutes, 31 to 59 minutes, 60 to 79 minutes, 80 to 99 minutes, 100 minutes or more, etc.),
  4. Disabled the API interface with their accounts (e.g., don't use Facebook credentials to sign into other sites), and
  5. Tightened their privacy settings to display less (e.g., don't display Friends list, suppress personal newsfeed, don't display personal data, don't allow friends to post to their personal newsfeed page, etc.).

Clearly, more research is needed. Would you quit Facebook? If so, how much money would you demand as payment? What follow-up studies are you interested in?


If You're Over 50, Chances Are The Decision To Leave a Job Won't Be Yours

[Editor's note: today's guest post, by reporters at ProPublica, discusses workplace discrimination. It is reprinted with permission. Older than 50? Some of the employment experiences below may be familiar. Younger than 50? Save as much money as you can -- now.]

By Peter Gosselin, ProPublica

Tom Steckel hunched over a laptop in the overheated basement of the state Capitol building in Pierre, South Dakota, early last week, trying to figure out how a newly awarded benefit claims contract will make it easier for him to do his job. Steckel is South Dakota’s director of employee benefits. His department administers programs that help the state’s 13,500 public employees pay for health care and prepare for retirement.

It’s steady work and, for that, Steckel, 62, is grateful. After turning 50, he was laid off three times before landing his current position in 2014, weathering unemployment stints of up to eight months. When he started, his $90,000-a-year salary was only 60 percent of what he made at his highest-paying job. Even with a subsequent raise, he’s nowhere close to matching his peak earnings.

Money is hardly the only trade-off Steckel has made to hang onto the South Dakota post.

He spends three weeks of every four away from his wife, Mary, and the couple’s three children, who live 700 miles away in Plymouth, Wisconsin, in a house the family was unable to sell for most of the last decade.

Before Christmas, he set off late on Dec. 18 for the 11-hour drive home. After the holiday was over, he drove back to Pierre. “I’m glad to be employed,” he said, “but this isn’t what I would have planned for this point in my life.”

Many Americans assume that by the time they reach their 50s they’ll have steady work, time to save and the right to make their own decisions about when to retire. But as Steckel’s situation suggests, that’s no longer the reality for many — indeed, most — people.

ProPublica and the Urban Institute, a Washington think tank, analyzed data from the Health and Retirement Study, or HRS, the premier source of quantitative information about aging in America. Since 1992, the study has followed a nationally representative sample of about 20,000 people from the time they turn 50 through the rest of their lives.

Through 2016, our analysis found that between the time older workers enter the study and when they leave paid employment, 56 percent are laid off at least once or leave jobs under such financially damaging circumstances that it’s likely they were pushed out rather than choosing to go voluntarily.

Only one in 10 of these workers ever again earns as much as they did before their employment setbacks, our analysis showed. Even years afterward, the household incomes of over half of those who experience such work disruptions remain substantially below those of workers who don’t.

“This isn’t how most people think they’re going to finish out their work lives,” said Richard Johnson, an Urban Institute economist and veteran scholar of the older labor force who worked on the analysis. “For the majority of older Americans, working after 50 is considerably riskier and more turbulent than we previously thought.”

The HRS is based on employee surveys, not employer records, so it can’t definitively identify what’s behind every setback, but it includes detailed information about the circumstances under which workers leave jobs and the consequences of these departures.

We focused on workers who enter their 50s with stable, full-time jobs and who’ve been with the same employer for at least five years — those who HRS data and other economic studies show are least likely to encounter employment problems. We considered only separations that result in at least six months of unemployment or at least a 50 percent drop in earnings from pre-separation levels.

Then, we sorted job departures into voluntary and involuntary and, among involuntary departures, distinguished between those likely driven by employers and those resulting from personal issues, such as poor health or family problems. (See the full analysis here.)
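
For readers who want that screening logic in concrete form, here is a minimal sketch applied to a hypothetical worker table. The column names, example rows, and flags are assumptions that restate the criteria above; this is not the Urban Institute's actual code.

```python
# Minimal sketch of the screening described above, applied to a hypothetical
# table of workers who entered their 50s in stable, long-held jobs.
import pandas as pd

workers = pd.DataFrame({
    "tenure_years":      [12, 7, 5, 20],               # >= 5 years with one employer
    "months_unemployed": [8, 2, 0, 14],                # after the separation
    "earnings_drop":     [0.55, 0.10, 0.60, 0.20],     # vs. pre-separation earnings
    "employer_driven":   [True, False, True, True],    # vs. health/family reasons
})

# A separation "counts" only if it is financially damaging...
damaging = (workers["months_unemployed"] >= 6) | (workers["earnings_drop"] >= 0.50)
# ...and is attributed to the employer rather than personal circumstances.
pushed_out = damaging & workers["employer_driven"]

print(f"Share with a damaging, employer-driven separation: {pushed_out.mean():.0%}")
```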

We found that 28 percent of stable, longtime employees sustain at least one damaging layoff by their employers between turning 50 and leaving work for retirement.

“We’ve known that some workers get a nudge from their employers to exit the work force and some get a great big kick,” said Gary Burtless, a prominent labor economist with the Brookings Institution in Washington. “What these results suggest is that a whole lot more are getting the great big kick.”

An additional 13 percent of workers who start their 50s in long-held positions unexpectedly retire under conditions that suggest they were forced out. They begin by telling survey takers they plan to keep working for many years, but, within a couple of years, they suddenly announce they’ve retired, amid a substantial drop in earnings and income.

Jeffrey Wenger, a senior labor economist with the RAND Corp., said some of these people likely were laid off, but they cover it up by saying they retired. “There’s so much social stigma around being separated from work,” he said, “even people who are fired or let go will say they retired to save face.”

Finally, a further 15 percent of over-50 workers who begin with stable jobs quit or leave them after reporting that their pay, hours, work locations or treatment by supervisors have deteriorated. These, too, indicate departures that may well not be freely chosen.

Taken together, the scale of damage sustained by older workers is substantial. According to the U.S. Census Bureau, there are currently 40 million Americans age 50 and older who are working. Our analysis of the HRS data suggests that as many as 22 million of these people have or will suffer a layoff, forced retirement or other involuntary job separation. Of these, only a little over 2 million have recovered or will.
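
The scale estimate follows from simple multiplication; here is a quick check using only the figures in the paragraph above:

```python
# Reproducing the scale estimate in the paragraph above.
older_workers = 40_000_000     # Americans 50+ currently working (Census)
share_separated = 0.56         # laid off or pushed out at least once (HRS analysis)
share_recovering = 0.10        # ever again earn as much as before

separated = older_workers * share_separated
recovered = separated * share_recovering
print(f"Suffer an involuntary separation: {separated / 1e6:.0f} million")  # ~22 million
print(f"Ever recover prior earnings:      {recovered / 1e6:.1f} million")  # ~2 million
```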

“These findings tell us that a sizable percentage, possibly a majority, of workers who hold career jobs in their 50s will get pushed out of those jobs on their way to retirement,” Burtless said. “Yes, workers can find jobs after a career job comes to an early, unexpected end. But way too often, the replacement job is a whole lot worse than the career job. This leaves little room for the worker to rebuild.”

When you add in those forced to leave their jobs for personal reasons such as poor health or family trouble, the share of Americans pushed out of regular work late in their careers rises to almost two-thirds. That’s a far cry from the voluntary glide path to retirement that most economists assume, and many Americans expect.

Steckel knows a lot about how tough the labor market can be for older workers, and not just because of his own job losses. He’s spent much of his career in human resources, often helping employers show workers — including many, like him, over 50 — the door.

In most instances, he said he’s understood the business rationale for the cuts. Employers need to reduce costs, boost profits and beat the competition. But he also understands the frustration and loss of control older workers feel at having their experience work against them and their expectations come undone.

“Nobody plans to lose their job. If there’s work to do and you’re doing it, you figure you’ll get to keep doing it,” he said recently. But once employers start pushing people out, no amount of hard work will save you, he added, and “nothing you do at your job really prepares you for being out” of work.

For 50 years, it has been illegal under the federal Age Discrimination in Employment Act, or ADEA, for employers to treat older workers differently than younger ones with only a few exceptions, such as when a job requires great stamina or quick reflexes.

For decades, judges and policymakers treated the age law’s provisions as part and parcel of the nation’s fundamental civil rights guarantee against discrimination on the basis of race, sex, ethnic origin and other categories.

But in recent years, employers’ pleas for greater freedom to remake their workforces to meet global competition have won an increasingly sympathetic hearing. Federal appeals courts and the U.S. Supreme Court have reacted by widening the reach of the ADEA’s exceptions and restricting the law’s protections.

Meanwhile, most employers have stopped offering traditional pensions, which once delivered a double-barreled incentive for older workers to retire voluntarily: maximum payouts for date-certain departures and the assurance that benefits would last as long as the people receiving them. That’s left workers largely responsible for financing their own retirements and many in need of continued work.

“There’s no safe haven in today’s labor market,” said Carl Van Horn, a public policy professor and director of the Heldrich Center for Workforce Development at Rutgers University in New Jersey. “Even older workers who have held jobs with the same employer for decades may be laid off without warning” or otherwise cut.

In a story this year, ProPublica described how IBM has forced out more than 20,000 U.S. workers aged 40 and over in just the past five years in order to, in the words of one internal company planning document, “correct seniority mix.” To accomplish this, the company used a combination of layoffs and forced retirements, as well as tactics such as mandatory relocations seemingly designed to push longtime workers to quit.

In response, IBM issued a statement that said, in part, “We are proud of our company and our employees’ ability to reinvent themselves era after era, while always complying with the law.”

As an older tech firm trying to keep up in what’s seen as a young industry, IBM might seem unique, but our analysis of the HRS data suggests the company is no outlier in how it approaches shaping its workforce.

The share of U.S. workers who’ve suffered financially damaging, employer-driven job separations after age 50 has risen steadily from just over 10 percent in 1998 to almost 30 percent in 2016, the analysis shows.

The turbulence experienced by older workers is about the same regardless of their income, education, geography or industry.

Some 58 percent of those with high school educations who reach their 50s working steadily in long-term jobs subsequently face a damaging layoff or other involuntary separation. Yet more education provides little additional protection; 55 percent of those with college or graduate degrees experience similar job losses.

Across major industrial sectors and regions of the country, more than half of older workers experience involuntary separations. The same is true across sexes, races and ethnicities, although a larger share of older African-American and Hispanic workers than whites are forced out of work by poor health and family crises, the data shows. This could indicate that minority workers are more likely to have jobs that take a bigger toll on health.

Once out, older workers only rarely regain the income and stability they once enjoyed.

Jaye Crist, 58, of Lancaster, Pennsylvania, was a mid-level executive with printing giant RR Donnelley until his May 2016 layoff. Today, he supports his family on less than half his previous $100,000-a-year salary, working 9 a.m. to 5 p.m. at a local print shop, 7 p.m. to 2 a.m. at the front desk of a Planet Fitness gym and bartending on Sundays.

Linda Norris, 62, of Nashua, New Hampshire, earned a similar amount doing engineering work for defense contractors before being laid off in late 2015. She spent much of 2016 campaigning for then-candidate Donald Trump and is convinced her fortunes will change now that he’s president. In the meantime, she hasn’t been able to find a permanent full-time job and said she has $25 to her name.

The HRS is widely considered the gold standard for information about the economic lives and health of older Americans. It’s funded by the National Institutes of Health and the Social Security Administration and is administered by the University of Michigan. It has been cited in thousands of academic papers and has served as the basis for a generation of business and government policymaking.

Our analysis suggests that some of those policies, as well as a good deal of what analysts and advocates focus on when it comes to aging, don’t grapple with the key challenges faced by working Americans during the last third or so of their lives.

Much public discussion of aging focuses on Social Security, Medicare and how to boost private retirement savings. But our analysis shows that many, perhaps most, older workers encounter trouble well before they’re eligible for these benefits and that their biggest economic challenge may be hanging onto a job that allows for any kind of savings at all.

“We’re talking about the wrong issues,” said Anne Colamosca, an economic commentator who co-authored one of the earliest critiques of tax-advantaged savings plans, “The Great 401(k) Hoax.” “Having a stable job with good wages is more important to most people than what’s in their 401(k). Getting to the point where you can collect Social Security and Medicare can be every bit as hard as trying to live on the benefits once you start getting them.”

Layoffs are the most common way workers over 50 get pushed out of their jobs, and more than a third of those who sustain one major involuntary departure go on to experience additional ones, as the last decade of Steckel’s work life illustrates.

Steckel spent 27 years with the U.S. affiliate of Maersk, the world’s largest container cargo company, working at several of its operations across the country. It was while managing a trucking terminal in Chicago that he met his wife, an MBA student who went on to become the marketing director at Thorek Memorial Hospital on the city’s North Side.

In the late 1990s, Steckel was promoted to a human resources position. It required the family to relocate to the company’s headquarters in northern New Jersey, but the salary — which, with bonuses, would eventually reach about $130,000 — allowed Mary to be a stay-at-home mom.

Steckel saw himself continuing to climb the company’s ranks, but as shipping technology changed and business slumped in the middle of the last decade, Maersk started consolidating operations and laying people off. Steckel flew around the country to notify employees, including some he knew personally.

“It was pretty hard not to notice that many — not all, but many — were over 50,” he said. A Maersk spokesman confirmed Steckel worked for the company but otherwise declined to comment.

In early 2007, Steckel, then 51, was laid off. He and Mary moved back to the Midwest, where the cost of living was lower and they had relatives.

Layoffs are common in the U.S. economy; there were 20.7 million of them last year alone, according to the Bureau of Labor Statistics. In most instances, those who lose their jobs find new ones quickly. Steckel certainly assumed he would.

But laid-off workers in their 50s and beyond are more apt than those in their 30s or 40s to be unemployed for long periods and land poorer subsequent jobs, the HRS data shows. “Older workers don’t lose their jobs any more frequently than younger ones,” said Princeton labor economist Henry Farber, “but when they do, they’re substantially less likely to be re-employed.”

Steckel was out of work for eight months. The family made do, buoyed by generous severance pay and a short consulting contract. They did without dinners out, vacations or big purchases, but were basically okay.

Steckel was hired again in January 2008, this time as a benefits manager for Kohler, a manufacturer of bathroom fixtures. At about $90,000, his salary was 30 percent lower than what he’d made at Maersk, but Wisconsin was so affordable that the family was able to buy the house and five acres in Plymouth.

Kohler seemed like a safe bet. Many of its employees had never worked anywhere else, following their parents and grandparents into lifetime jobs with the company. But as Steckel started in his new position, the U.S. financial crisis cratered real estate and home construction and, with them, Kohler’s business.

This time, Steckel’s role in executing layoffs was explaining severance packages to the company’s shellshocked factory workers.

“Most of these people were in their late 40s and 50s and there was nothing out there for them,” he said. “They’d come in with their wives and some of them would break down and cry.”

After three years, Kohler’s problems leapt from the factory to the front office. Steckel, by then 54, was laid off again in April 2010. A Kohler spokeswoman did not reply to phone calls and emails.

Still the family’s sole breadwinner, with kids in fourth, eighth and ninth grades, he scrambled for new work and, after a string of interviews, landed a job just four months later as the manager of retirement plans at Alpha Natural Resources.

Alpha, in the coal mining business, was riding a double wave of demand from China and U.S. steel producers, snapping up smaller companies on its way to becoming an industry behemoth.

Steckel’s job was a big one, overseeing complicated, union-negotiated pensions and savings arrangements. At $145,000, the salary represented a substantial raise from what he’d been making at Kohler and was even more than he’d earned at Maersk. The Steckels relocated again, this time to the tiny southwest Virginia town of Abingdon.

“We started thinking: ‘This may be it. This is where we’ll stay,’” Mary Steckel said. “Then, all that changed.”

In January 2011, Alpha bought Massey Energy for $8.5 billion and with it the responsibility for reaching financial settlements with the families of 29 miners killed the previous year in an explosion at Massey’s Upper Big Branch mine in West Virginia. The combination of the settlement costs and a sustained fall in coal prices forced layoffs at Alpha and eventually led to the company’s bankruptcy.

Steckel struggled to collect decades of paper records on wages and years of service in order to calculate pension payments for laid-off miners, virtually all in their 50s and 60s. “There were no jobs for them, but they were owed [pension benefits] and they wanted their money yesterday,” he said. A spokesman for the successor company to Alpha, Contura Energy, did not return phone calls or emails.

Once again, he processed other employees’ layoffs right up until his own, in March 2013. He was 56. The Steckels packed the kids and the family’s belongings into their Mercury Sable station wagon and went back to Wisconsin.

There, Mary took a job at Oshkosh Defense, which builds Humvees and other equipment for the military. Tom was out of work almost six months before landing a consulting contract to work in Milwaukee with Harley-Davidson, the motorcycle maker.

If it had lasted, the position would have paid about $90,000, or about what he’d made at Kohler, and, for a time, it seemed possible that it might turn into a regular job. But it didn’t, and he was out again that December.

Unlike Steckel, Jean Potter of Dallas, Georgia, seemed to leave her longtime job at BellSouth by her own choice, taking early retirement in 2009, when she was 55.

But that wasn’t the full story, she said. Potter, who’d had a 27-year career with the telephone company, rising from operator services to pole-climbing line work to technical troubleshooting, said she only retired after hearing she was going to lose her $54,000-a-year job along with thousands of other employees being laid off as part of the company’s acquisition by AT&T.

Under the law, retirements are supposed to be voluntary decisions made by employees. The 1967 ADEA barred companies from setting a mandatory retirement age lower than 65. Congress raised that to 70 and then, in 1986, largely prohibited mandatory retirement at any age. Outraged by companies’ giving employees the unpalatable choice of retiring or getting laid off, lawmakers subsequently added a requirement that people’s retirement decisions must be “knowing and voluntary.”

Yet for almost two decades now, when HRS respondents who’ve recently retired have been asked whether their retirements were “something you wanted to do or something you felt forced into,” the share answering that they were forced or partially forced has risen steadily, from 33 percent in 1998 to 55 percent in 2014, the last year for which comparable figures are available.

“The expectation that American workers decide when they want to retire is no longer realistic for a significant number of older workers who are pushed out before they are ready to retire,” said Rutgers’ Van Horn.

Potter was convinced she’d secured money and benefits by leaving as a retiree that she would not otherwise have received. She felt better for making the decision herself and figured she’d go back to school, get a college degree and find a better job.

“I thought I’d gotten the drop on them by retiring,” she said.

But looking back, Potter acknowledges, her decision to retire was hardly freely chosen.

“If I had to do it over, I’d take early retirement again, but you can’t very well call it voluntary,” she said recently. “All the old people were toast. They were going to get laid off, me included.”

Jim Kimberly, a spokesman for AT&T, said the company could not confirm Potter’s employment at BellSouth because of privacy concerns. Speaking more generally, Kimberly said, “We’re recognized for our longstanding commitment to diversity. We don’t tolerate discrimination based on an employee’s age.”

There was a time when older workers thought they could use early retirements as a stepping stone, locking in years of payments for leaving and then adding income from new jobs on top of that.

But many have discovered they can’t land comparable new jobs, or, in many cases, any jobs at all. In the decade since she left BellSouth, Potter, now 65, has yet to find stable, long-term work.

After getting her bachelor’s degree in Spanish in 2014, Potter applied to teach in the Cobb County, Georgia, public schools but could only get substitute work. She got certified to teach English as a second language but said she was told she’d need a master’s degree to land anything beyond temporary jobs.

She’s scheduled to receive her master’s degree next June. In the meantime, she tutors grade-school students in math, English and Spanish and works as a graduate assistant in the office of multicultural student affairs at Kennesaw State University. She makes do on $1,129 a month from Social Security and a graduate-student stipend of $634, while applying, so far unsuccessfully, for other work.

She’s applied for jobs selling cellphones in a mall, providing call-center customer service and even being a waitress at a Waffle House. For the Waffle House job, she said she was told she wouldn’t be hired because she’d just leave when she got a better offer.

“Isn’t that what every waitress does?” she recalled replying. “Why hire them and not me?”

As with retirements, our analysis of the HRS data shows that, among older workers, quitting a job isn’t always the voluntary act most people, including economists, assume it to be.

The survey asks why people leave their jobs, including when they quit. It includes questions about whether their supervisors encouraged the departure, whether their wages or hours were reduced prior to their exit and whether they thought they “would have been laid off” if they didn’t leave.

We found that even when we excluded all but the most consequential cases — those in which workers subsequently experienced at least six months of unemployment or a 50 percent wage decline — 15 percent of workers over 50 who’d had long-term, stable jobs quit or left their positions after their working conditions deteriorated or they felt pressured to do so.

Quitting a job carries far greater risk for older workers than for younger ones, both because it’s harder to get rehired and because there’s less time to make up for what’s lost in being out of work.

After a simmering disagreement with a supervisor, David Burns, 50, of Roswell, Georgia, quit his $90,000-a-year logistics job with a major shipping company last February. He figured that the combination of his education and experience, and the fact that unemployment nationally was at a 20-year low, assured that he’d easily land a new position. But 10 months on, he says he’s yet to receive a single offer of comparable work. To help bring in some money, he’s doing woodworking for $20 an hour.

Burns has an MBA from Georgia State University and two decades in shipping logistics. A quick scan of online job ads turns up dozens for logistics management positions like the one he had in the area where he lives.

When he’d last lost a job at the age of 35, he said it took him only a couple of months and four applications to get three offers and a new spot. But in the years since, he said, he seems to have crossed a line that he wasn’t aware existed, eliminating his appeal to employers.

He keeps a spreadsheet of his current efforts to find new work. Through November, it shows he filed 160 online job applications and landed 14 phone interviews, nine face-to-face meetings and zero offers.

“My skills are in high demand,” he said. “But what’s not in high demand is me, a 50-year-old dude!”

“People can quibble about exactly why this kind of thing is going on or what to do about it, but it’s going on.”

Meg Bourbonniere had a similar experience just as she seemingly had reached the pinnacle of a successful career.

Two weeks after being appointed to a $200,000-a-year directorship managing a group of researchers at Massachusetts General Hospital in Boston in March 2015, Bourbonniere, then 59, said her supervisor called with an odd question: When did she think she’d be retiring?

“I kept asking myself, ‘Why would that be important today?’” she recalled. “The only thing I could come up with was they think I’m too old for the job.‘’

After she answered, “I’ll be here as long as you are,” she said she ran into an array of problems on the job: her decisions were countermanded, she was given what she saw as an unfairly negative job review and she was put on a “personal improvement plan” that required her to step up her performance or risk dismissal. Finally, a year after being hired, she was demoted from director to nurse scientist, the title held by those she’d managed.

Michael Morrison, a spokesman for Mass General’s parent organization, Partners HealthCare, confirmed the dates of Bourbonniere’s employment but said there was nothing further he could share as the company doesn’t comment on individual employees.

Bourbonniere said she accepted the demotion because her husband was unemployed at the time. “I couldn’t not work,” she said. “I was the chief wage earner.”

Through a friend, she found out about an opening for an assistant professor of nursing at the University of Rhode Island that, at about $75,000, paid only a third as much as the Mass General job. She told the friend she’d apply on one condition. “I said she had to tell the dean how old I was so I wouldn’t go through the same experience all over again.”

On paper, Bourbonniere quit Mass General of her own accord to take the position at URI. But, in her eyes, there was nothing voluntary about the move. “I had to go find another job,” she said. “They demoted me; I couldn’t stay.”

Soon after Steckel’s consulting contract ended in late 2013, he got what he saw as a sharp reminder of the role age was playing in his efforts to get and keep a job.

While searching job sites on his computer, Steckel stumbled across what seemed like his dream job on LinkedIn. Business insurer CNA Financial was looking for an assistant vice president to head its employee benefits operation. Best yet, the position was at CNA’s Chicago headquarters, a mere 145 miles from Plymouth. He immediately applied.

The application asked for the year he’d graduated from college.

Older job seekers are almost universally counseled not to answer questions like this. The ADEA bars employers from putting age requirements in help-wanted ads, but as job searches have moved online, companies have found other ways to target or exclude applicants by age. Last year, ProPublica and The New York Times reported that employers were using platforms like Facebook to micro-target jobs ads to younger users. Companies also digitally scour resumes for age indicators, including graduation dates.

Steckel left the field in the CNA application blank, but when he pushed “submit,” the system kicked it back, saying it was incomplete. He reluctantly filled in 1978. This time, the system accepted the application and sent back an automated response that he was in the top 10 percent of applicants based on his LinkedIn resume.

Hours later, however, he received a second automated response saying CNA had decided to “move forward with other candidates.” The rejection rankled Steckel enough that he tracked down the email address of the CNA recruiter responsible for filling the slot.

“Apparently, CNA believes a college application date is so important that it is a mandatory element in your job application process,” his email to the recruiter said. “Please cite a credible, peer-reviewed study that affirms the value of the year and date of one’s college graduation as a valid and reliable predictor of job success.”

He never got an answer.

Contacted by ProPublica, CNA spokesman Brandon Davis did not respond to questions but issued a statement. “CNA adheres to all applicable federal, state and local employment laws, and our policy prohibits any form of discrimination,” it said.

Steckel landed his current job with the state of South Dakota in March 2014.

Going back and forth between Pierre and Plymouth since then, he’s driven the equivalent of once around the world. If, as he hopes, he can hang onto the position until he retires, he figures he’ll make it around a second time.

During his off hours in the spring, when he’s not with his family, he fishes in the Black Hills. In the fall, he goes out with his Mossberg 12-gauge shotgun and hunts duck. The loneliest months are January and February. That’s when the Legislature is in session, so he can’t go home, and it’s usually too cold to do much outside. He spends a lot of time at the Y.

A half-century ago, in a report that led to enactment of the ADEA, then-U.S. Labor Secretary W. Willard Wirtz said that half of all private-sector job ads at the time explicitly barred anyone over the age of 55 from applying and a quarter barred anyone over 45.

Wirtz lambasted the practice in terms that, although backward in their depiction of work as solely a male concern, still ring true for older workers like Steckel and their families.

“There is no harsher verdict in most men’s lives than someone else’s judgment that they are no longer worth their keep,” he wrote. “It is then, when the answer at the hiring gate is ‘You’re too old,’ that a man turns away … finding nothing to look backwards to with pride [or] forward to with hope.”

Asked how the years of job turmoil and now separation have affected her family, Mary Steckel resists anger or bitterness. “The children know they are loved by two parents, even if Tom is not always here,” she said. She doesn’t dwell on the current arrangement. “I just deal with it.”

As for Tom?

“He hasn’t admitted defeat,” Mary said, although something has changed. “He’s not hopeful anymore.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.


House Oversight Committee Report On The Equifax Data Breach. Did The Recommendations Go Far Enough?

On Monday, the U.S. House of Representatives Committee on Oversight and Government Reform released its report (Adobe PDF) on the massive Equifax data breach, where the most sensitive personal and payment information of more than 148 million consumers -- nearly half of the population -- was accessed and stolen. The report summary:

"In 2005, former Equifax Chief Executive Officer(CEO) Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, information technology (IT) systems, and data. While the acquisition strategy was successful for Equifax’s bottom line and stock price, this growth brought increasing complexity to Equifax’s IT systems, and expanded data security risks... Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable."

The report cited several failures by Equifax. First:

"On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Equifax used Apache Struts to run certain applications on legacy operating systems. The following day, the Department of Homeland Security alerted Equifax to this critical vulnerability. Equifax’s Global Threate and Vulnerability Management (GTVM) team emailed this alert to over 400 people on March 9, instructing anyone who had Apache Struts running on their system to apply the necessary patch within 48 hours. The Equifax GTVM team also held a March 16 meeting about this vulnerability. Equifax, however, did not fully patch its systems. Equifax’s Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed."

As bad as that is, it gets worse:

"On May 13, 2017, attackers began a cyberattack on Equifax. The attack lasted for 76 days. The attackers dropped “web shells” (a web-based backdoor) to obtain remote control over Equifax’s network. They found a file containing unencrypted credentials (usernames and passwords), enabling the attackers to access sensitive data outside of the ACIS environment. The attackers were able to use these credentials to access 48 unrelated databases."

"Attackers sent 9,000 queries on these 48 databases, successfully locating unencrypted personally identifiable information (PII) data 265 times. The attackers transferred this data out of the Equifax environment, unbeknownst to Equifax. Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate. On July 29, 2017, Equifax updated the expired certificate and immediately noticed suspicious web traffic..."

Findings so far: 1) growth prioritized over security while archiving highly valuable data; 2) antiquated computer systems; 3) a known critical vulnerability left unpatched; 4) unprotected user credentials; and 5) a failed intrusion detection mechanism. Geez!

Only after updating its expired security certificate did Equifax notice the intrusion. After that, you'd think that Equifax would have implemented a strong post-breach response. You'd be wrong. More failures:

"When Equifax informed the public of the breach on September 7, the company was unprepared to support the large number of affected consumers. The dedicated breach website and call centers were immediately overwhelmed, and consumers were not able to obtain timely information about whether they were affected and how they could obtain identity protection services."

"Equifax should have addressed at least two points of failure to mitigate, or even prevent, this data breach. First, a lack of accountability and no clear lines of authority in Equifax’s IT management structure existed, leading to an execution gap between IT policy development and operation. This also restricted the company’s implementation of other security initiatives in a comprehensive and timely manner. As an example, Equifax had allowed over 300 security certificates to expire, including 79 certificates for monitoring business critical domains. "Second, Equifax’s aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and antiquated nature of Equifax’s IT systems made IT security especially challenging..."

Findings so far: 6) inadequate post-breach response; and 7) complicated IT structure making updates difficult. Geez!

The report listed the executives who retired and/or were fired. That's a small start for a company archiving the most sensitive personal and payment information of most U.S. consumers. The report included seven recommendations:

"1: Empower Consumers through Transparency. Consumer reporting agencies (CRAs) should provide more transparency to consumers on what data is collected and how it is used. A large amount of the public’s concern after Equifax’s data breach announcement stemmed from the lack of knowledge regarding the extensive data CRAs hold on individuals. CRAs must invest in and deploy additional tools to empower consumers to better control their own data..."

"2: Review Sufficiency of FTC Oversight and Enforcement Authorities. Currently, the FTC uses statutory authority under Section 5 of the Federal Trade Commission Act to hold businesses accountable for making false or misleading claims about their data security or failing to employ reasonable security measures. Additional oversight authorities and enforcement tools may be needed to enable the FTC to effectively monitor CRA data security practices..."

"3: Review Effectiveness of Identity Monitoring and Protection Services Offered to Breach Victims. The General Accounting Office (GAO) should examine the effectiveness of current identity monitoring and protection services and provide recommendations to Congress. In particular, GAO should review the length of time that credit monitoring and protection services are needed after a data breach to mitigate identity theft risks. Equifax offered free credit monitoring and protection services for one year to any consumer who requested it... This GAO study would help clarify the value of credit monitoring services and the length of time such services should be maintained. The GAO study should examine alternatives to credit monitoring services and identify addit ional or complimentary services..."

"4: Increase Transparency of Cyber Risk in Private Sector. Federal agencies and the private sector should work together to increase transparency of a company’s cybersecurity risks and steps taken to mitigate such risks. One example of how a private entity can increase transparency related to the company’s cyber risk is by making disclosures in its Securities and Exchange Commission (SEC) filings. In 2011, the SEC developed guidance to assist companies in disclosing cybersecurity risks and incidents. According to the SEC guidance, if cybersecurity risks or incidents are “sufficiently material to investors” a private company may be required to disclose the information... Equifax did not disclose any cybersecurity risks or cybers ecurity incidents in its SEC filings prior to the 2017 data breach..."

"5: Hold Federal Contractors Accountable for Cybersecurity with Clear Requirements. The Equifax data breach and federal customers’ use of Equifax identity validation services highlight the need for the federal government to be vigilant in mitigating cybersecurity risk in federal acquisition. The Office of Management and Budget (OMB) should continue efforts to develop a clear set of requirements for federal contractors to address increasing cybersecurity risks, particularly as it relates to handling of PII. There should be a government-wide framework of cybersecurity and data security risk-based requirements. In 2016, the Committee urged OMB to focus on improving and updating cybersecurity requirements for federal acquisition... The Committee again urges OMB to expedite development of a long-promised cybersecurity acquisition memorandum to provide guidance to federal agencies and acquisition professionals..."

"6: Reduce Use of Social Security Numbers as Personal Identifiers. The executive branch should work with the private sector to reduce reliance on Social Security numbers. Social Security numbers are widely used by the public and private sector to both identify and authenticate individuals. Authenticators are only useful if they are kept confidential. Attackers stole the Social Security numbers of an estimated 145 million consumers from Equifax. As a result of this breach, nearly half of the country’s Social Security numbers are no longer confidential. To better protect consumers from identity theft, OMB and other relevant federal agencies should pursue emerging technology solutions as an alternative to Social Security number use."

"7: Implement Modernized IT Solutions. Companies storing sensitive consumer data should transition away from legacy IT and implement modern IT security solutions. Equifax failed to modernize its IT environments in a timely manner. The complexity of the legacy IT environment hosting the ACIS application allowed the attackers to move throughout the Equifax network... Equifax’s legacy IT was difficult to scan, patch, and modify... Private sector companies, especially those holding sensitive consumer data like Equifax, must prioritize investment in modernized tools and technologies...."

The history of corporate data breaches and the above list of corporate failures by Equifax both should be warnings to anyone in government promoting the privatization of current government activities. Companies screw up stuff, too.

Recommendation #6 is frightening in that it hasn't been implemented. Yikes! No federal agency should do business with a private sector firm operating with antiquated computer systems. And, if Equifax can't protect the information it archives, it should cease to exist. While that sounds harsh, it ain't. Continual data breaches place risks and burdens upon already burdened consumers trying to control and protect their data.

What are your opinions of the report? Did it go far enough?


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

On Friday November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of an investigation by the DPC of the LinkedIn.com social networking site, after a 2017 complaint by a person who didn't use the social networking service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could then use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without instruction from the data controller (LinkedIn Ireland), as the law requires. Not good.
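
The report says the addresses were processed in hashed form. For context, email-based ad targeting typically works roughly as sketched below: the advertiser normalizes and hashes addresses, and the ad platform matches the hashes against its own user base. This is a generic illustration; the DPC report does not describe LinkedIn's exact pipeline.

```python
# Generic sketch of hashed-email audience matching.
import hashlib

def normalize_and_hash(email: str) -> str:
    """SHA-256 of a lowercased, whitespace-trimmed address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Advertiser-side upload (hypothetical addresses).
uploaded = {normalize_and_hash(e) for e in ["alice@example.com", "bob@example.com"]}

# Platform-side hashes of its own users' addresses.
platform_users = {normalize_and_hash("ALICE@example.com ")}

matched = uploaded & platform_users
print(f"Matched {len(matched)} of {len(uploaded)} uploaded addresses")  # 1 of 2
```

Note that hashing here enables matching rather than preventing it, which is why regulators still treat hashed email addresses as personal data.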

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

That the DPC ordered LinkedIn to stop this particular data processing strongly suggests that the social networking service's activity probably violated data protection law. The European Union (EU) now has stronger privacy rules, known as the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The GDPR applies both to companies operating within the EU and to companies located outside the EU that offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure that companies both comply with GDPR and act consistently with any promises they have made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak louder than words,
  • The DPC probably didn't fine LinkedIn because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied to violations occurring on or after that date, and
  • People in different areas of the world view privacy and data protection differently - as they should, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize these regional differences, find ways to sell products and services across areas without degrading the customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Federal Reserve Released Its Non-cash Payments Fraud Report. Have Chip Cards Helped?

Many consumers prefer to pay for products and services using methods other than cash. How secure are these non-cash payment methods? The Federal Reserve Board (FRB) analyzed the payments landscape within the United States. Its October 2018 report found good and bad news. The good news: non-cash payments fraud is small. The bad news:

  • Overall, non-cash payments fraud is growing, and
  • Card payments fraud drove the growth.

Non-Cash Payment Activity And Fraud

Payment Type | 2012 | 2015 | Increase (Decrease)
Card payments & ATM withdrawal fraud | $4 billion | $6.5 billion | 62.5 percent
Check fraud | $1.1 billion | $710 million | (35) percent
Non-cash payments fraud | $6.1 billion | $8.3 billion | 37 percent
Total non-cash payments | $161.2 trillion | $180.3 trillion | 12 percent
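For readers who want to check the arithmetic, each figure in the right-hand column follows from the standard percent-change formula, (new - old) / old * 100, with decreases shown in parentheses per accounting convention. A quick sanity check in Python, using the figures from the table above (the same check works for the card-fraud table later in this post):

```python
def pct_change(old, new):
    """Percent change from old to new; negative means a decrease."""
    return (new - old) / old * 100

# (2012 value, 2015 value), in dollars, from the table above.
rows = {
    "Card payments & ATM withdrawal fraud": (4.0e9, 6.5e9),
    "Check fraud": (1.1e9, 0.71e9),
    "Non-cash payments fraud": (6.1e9, 8.3e9),
    "Total non-cash payments": (161.2e12, 180.3e12),
}

for name, (v2012, v2015) in rows.items():
    print(f"{name}: {pct_change(v2012, v2015):+.1f}%")

# Card payments & ATM withdrawal fraud: +62.5%
# Check fraud: -35.5%  (shown as "(35)" above)
# Non-cash payments fraud: +36.1%  (the report states 37 percent,
#                                   presumably from unrounded figures)
# Total non-cash payments: +11.8%  (rounded to 12 percent)
```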

The FRB report included:

"... fraud totals and rates for payments processed over general-purpose credit and debit card networks, including non-prepaid and prepaid debit card networks, the automated clearinghouse (ACH) transfer system, and the check clearing system. These payment systems form the core of the noncash payment and settlement systems used to clear and settle everyday payments made by consumers and businesses in the United States. The fraud data were collected as part of Federal Reserve surveys of depository institutions in 2012 and 2015 and payment card networks in 2015 and 2016. The types of fraudulent payments covered in the study are those made by an unauthorized third party."

Data from the card network survey included general-purpose credit and debit (non-prepaid and prepaid) card payments, but did not include ATM withdrawals. The card networks include Visa, MasterCard, Discover and others. Additional findings:

"... the rate of card fraud, by value, was nearly flat from 2015 to 2016, with the rate of in-person card fraud decreasing notably and the rate of remote card fraud increasing significantly..."

The industry defines several categories of card fraud:

  1. "Counterfeit card. Fraud is perpetrated using an altered or cloned card;
  2. Lost or stolen card. Fraud is undertaken using a legitimate card, but without the cardholder’s consent;
  3. Card issued but not received. A newly issued card sent to a cardholder is intercepted and used to commit fraud;
  4. Fraudulent application. A new card is issued based on a fake identity or on someone else’s identity;
  5. Fraudulent use of account number. Fraud is perpetrated without using a physical card. This type of fraud is typically remote, with the card number being provided through an online web form or a mailed paper form, or given orally over the telephone; and
  6. Other. Fraud including fraud from account take-over and any other types of fraud not covered above."

Card Fraud By Category

Fraud Category | 2015 | 2016 | Increase/(Decrease)
Fraudulent use of account number | $2.88 billion | $3.46 billion | 20 percent
Counterfeit card fraud | $3.05 billion | $2.62 billion | (14) percent
Lost or stolen card fraud | $730 million | $810 million | 11 percent
Fraudulent application | $210 million | $360 million | 71 percent

The increase in fraudulent applications suggests that criminals find it easy to intercept pre-screened credit and card offers sent via postal mail. It is easy for consumers to opt out of pre-screened credit and card offers, and there is also the National Do Not Call Registry. Do both today if you haven't.

The report also covered EMV chip cards, which were introduced to stop counterfeit card fraud. Card networks distributed both chip cards to consumers and chip-reader terminals to retailers, and the banking industry set an October 1, 2015 deadline for the switch to chip cards. From the FRB report:

Chart from the FRB report: EMV chip card fraud and payments. October 2018.

The FRB concluded:

"Card systems brought EMV processing online, and a liability shift, beginning in October 2015, created an incentive for merchants to accept chip cards. By value, the share of non-fraudulent in-person payments made with [chip cards] shifted dramatically between 2015 and 2016, with chip-authenticated payments increasing from 3.2 percent to 26.4 percent. The share of fraudulent in-person payments made with [chip cards] also increased from 4.1 percent in 2015 to 22.8 percent in 2016. As [chip cards] are more secure, this growth in the share of fraudulent in-person chip payments may seem counter-intuitive; however, it reflects the overall increase in use. Note that in 2015, the share of fraudulent in-person payments with [chip cards] (4.1 percent) was greater than the share of non-fraudulent in-person payments with [chip cards] (3.2 percent), a relationship that reversed in 2016."


When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020, and outstanding issues need to be resolved before then. One such issue is the "trolley problem" - a situation where a fatal vehicle crash cannot be avoided and the self-driving car must decide whether to save the passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new, and some experts have called for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must encode moral preferences which dictate which person to sacrifice.

The answer is tricky. You might assume: always save the driver, since nobody would buy a self-driving car that would kill its owner. But what if the pedestrian is crossing against a 'do not cross' signal within a crosswalk? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elderly, or pregnant? Or a doctor? Does it matter if the passenger is older than the pedestrians?
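To make concrete what "programmed with moral preferences" means, here is a deliberately oversimplified, purely hypothetical sketch. The roles, weights, and scoring below are invented for illustration and reflect no real AV maker's code; the point is only that some such ranking must exist, explicitly or implicitly:

```python
from dataclasses import dataclass

@dataclass
class Person:
    role: str                    # "passenger" or "pedestrian"
    crossing_legally: bool = True

def priority(person: Person) -> float:
    """Hypothetical moral-preference score; higher means 'save first'."""
    score = 1.0
    if person.role == "passenger":
        score += 0.5             # favor the vehicle's occupant?
    if not person.crossing_legally:
        score -= 0.3             # penalize jaywalking?
    return score

def choose_whom_to_save(candidates: list[Person]) -> Person:
    # In an unavoidable crash, the control software effectively maximizes
    # some such score -- whether or not anyone wrote the weights down.
    return max(candidates, key=priority)

# Example: passenger vs. a jaywalking pedestrian.
print(choose_whom_to_save([
    Person("passenger"),
    Person("pedestrian", crossing_legally=False),
]))
```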

To understand what the public wants -- and expects -- in self-driving cars, also known as autonomous vehicles (AV), researchers from MIT asked consumers in a massive online global survey of 2 million people from 233 countries and territories. The survey presented 13 accident scenarios that varied nine factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."

Besides recording the accident choices, the researchers collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants in order to identify clusters: countries, territories, or regions whose people share similar "moral preferences."
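The clustering step is worth unpacking. The study's exact pipeline isn't described here, but one plausible approach -- treat this as an assumption-laden sketch, with invented country names and numbers -- is to summarize each country as a vector of its average preference scores across the nine factors, then apply hierarchical clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# One row per country; columns are the nine preference dimensions
# (sparing humans over pets, more lives over fewer, young over old, ...).
# All values below are invented for illustration.
countries = ["US", "France", "Japan", "Brazil"]
prefs = np.array([
    [0.9, 0.2, 0.4, 0.8, 0.5, 0.7, 0.6, 0.5, 0.3],
    [0.9, 0.2, 0.3, 0.7, 0.6, 0.6, 0.5, 0.6, 0.3],
    [0.8, 0.3, 0.5, 0.6, 0.5, 0.3, 0.8, 0.5, 0.4],
    [0.8, 0.2, 0.2, 0.7, 0.6, 0.8, 0.4, 0.6, 0.5],
])

# Ward linkage groups countries whose preference vectors sit close
# together; cutting the tree yields cluster labels like those the
# researchers describe below.
tree = linkage(prefs, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
print(dict(zip(countries, labels)))
```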

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These findings have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."

This is huge. It calls into question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in one cluster may resent an AV maker forcing another region's moral preferences upon them, and some clusters or countries may demand vehicles matching their own.
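The country-level correlations quoted above (GDP per capita, Rule of Law, Gini coefficient) are conceptually simple: each country contributes one indicator value and one average preference score, and the two columns are correlated. A minimal sketch with invented numbers -- not the study's data:

```python
import numpy as np

# One entry per country: GDP per capita (USD) and the country's average
# preference for sparing pedestrians who cross legally over jaywalkers.
# All values are invented for illustration.
gdp_per_capita = np.array([62_000, 48_000, 39_000, 9_000, 6_500])
legal_crossing_pref = np.array([0.71, 0.66, 0.62, 0.41, 0.38])

# Pearson correlation; a strongly positive r matches the finding that
# richer countries show a greater preference against illegal crossers.
r = np.corrcoef(gdp_per_capita, legal_crossing_pref)[0, 1]
print(f"r = {r:.2f}")
```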

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

Several important limitations to remember. And there are more: the study didn't address self-driving trucks. Should an AV tractor-trailer -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other vehicles are motorcycles, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the other craft matter? What if it is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? And what happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. AV makers (e.g., autos, ships, trucks) need to explicitly state what their vehicles will (and won't) do - not hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries or regions have varying preferences about online privacy, information sharing, and corporate surveillance. What are your opinions?


Survey: Most Home Users Satisfied With Voice-Controlled Assistants. Tech Adoption Barriers Exist

Recent survey results reported by MediaPost:

"Amazon Alexa and Google Assistant have the highest satisfaction levels among mobile users, each with an 85% satisfaction rating, followed by Siri and Bixby at 78% and Microsoft’s Cortana at 77%... As found in other studies, virtual assistants are being used for a range of things, including looking up things on the internet (51%), listening to music (48%), getting weather information (46%) and setting a timer (35%)... Smart speaker usage varies, with 31% of Amazon device owners using their speaker at least a few times a week, Google Home owners 25% and Apple HomePod 18%."

Additional survey results are available at Digital Trends and Experian. PwC found:

"Only 10% of surveyed respondents were not familiar with voice-enabled products and devices. Of the 90% who were, the majority have used a voice assistant (72%). Adoption is being driven by younger consumers, households with children, and households with an income of >$100k... Despite being accessible everywhere, three out of every four consumers (74%) are using their mobile voice assistants at home..."

Consumers seem to want privacy when using voice assistants, so usage tends to occur at home and not in public places. Also:

"... the bulk of consumers have yet to graduate to more advanced activities like shopping or controlling other smart devices in the home... 50% of respondents have made a purchase using their voice assistant, and an additional 25% would consider doing so in the future. The majority of items purchased are small and quick.. Usage will continue to increase but consistency must improve for wider adoption... Some consumers see voice assistants as a privacy risk... When forced to choose, 57% of consumers said they would rather watch an ad in the middle of a TV show than listen to an ad spoken by their voice assistant..."

Consumers want control over how voice assistants present advertisements. Desired control options include: skip, select, never while listening to music, only at pre-approved times, customized based upon interests, seamless integration, and matched to preferred brands. Thirty-eight percent of survey respondents said they "don't want something 'listening in' on my life all the time."

What are your preferences with voice assistants? Any privacy concerns?


NPR Podcast: 'The Weaponization Of Social Media'

Any technology can be used for good or for bad. Social media is no exception. A recent data breach study in Australia listed the vulnerabilities of social media, and a 2016 study found social media "attractive to vulnerable narcissists."

How have social media sites and mobile apps been used as weapons? The podcast below features an interview with P.W. Singer and Emerson Brooking, authors of a new book, "LikeWar: The Weaponization of Social Media." The authors cite real-world examples of how social media sites and mobile apps have been -- and continue to be -- used during conflicts and demonstrations around the globe.

A Kirkus book review stated:

"... Singer and Brooking sagely note the intensity of interpersonal squabbling online as a moral equivalent of actual combat, and they also discuss how "humans as a species are uniquely ill-equipped to handle both the instantaneity and the immensity of information that defines the social media age." The United States seems especially ill-suited, since in the Wild West of the internet, our libertarian tendencies have led us to resist what other nations have put in place, including public notices when external disinformation campaigns are uncovered and “legal action to limit the effect of poisonous super-spreaders.” Information literacy, by this account, becomes a “national security imperative,” one in which the U.S. is badly lagging..."

The new book "LikeWar" is available at several online bookstores, including Barnes and Noble, Powell's, and Amazon. Now, listen to the podcast:


Study: Most Consumers Fear Companies Will 'Go Too Far' With Artificial Intelligence Technologies

New research has found that consumers are conflicted about artificial intelligence (AI) technologies. A national study of 697 adults during the Spring of 2018 by Elicit Insights found:

"Most consumers are conflicted about AI. They know there are benefits, but recognize the risks, too"

Several specific findings:

  • 73 percent of survey participants agreed (Strongly Agree or Agree) that "some companies will go too far with AI,"
  • 64 percent agreed (Strongly Agree or Agree) with the statement: "I'm concerned about how companies will use artificial intelligence and the information they have about me to engage with me," and
  • "Six out of 10 Americans agree or strongly agree that AI will never be as good as human interaction. Human interaction remains sacred and there is concern with at least a third of consumers that AI won’t stay focused on mundane tasks and leave the real thinking to humans."

Many of the concerns center on control. As AI applications become smarter and more powerful, they are able to operate independently, without human -- that is, users' -- authorization. When participants were presented with several smart-refrigerator scenarios, the less control users had over purchases, the fewer viewed the AI as a benefit:

Smart refrigerator and food purchase scenarios. AI study by Elicit Insights.
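The control spectrum in those scenarios maps directly onto a familiar design choice: does the AI act on its own, or must it get the user's authorization first? A hypothetical sketch (the fridge scenario and mode names are invented here, not taken from the survey):

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = 1      # AI recommends; the user decides and acts
    ASK_FIRST = 2         # AI acts, but only after explicit approval
    FULLY_AUTONOMOUS = 3  # AI acts on its own -- the least-trusted mode

def reorder_milk(mode: Autonomy, user_approves) -> bool:
    """Return True if the smart fridge places the order."""
    if mode is Autonomy.SUGGEST_ONLY:
        return False               # surface a suggestion; never purchase
    if mode is Autonomy.ASK_FIRST:
        return user_approves()     # gate the purchase on user consent
    return True                    # purchase without asking

# The survey pattern: the closer a scenario sits to FULLY_AUTONOMOUS,
# the fewer consumers viewed the AI as a benefit.
print(reorder_milk(Autonomy.ASK_FIRST, user_approves=lambda: True))
```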

AI technologies can also be used to find and present possible matches for online dating services. Again, survey participants expressed similar control concerns:

Dating service scenarios. AI study by Elicit Insights.

Download Elicit Insights' complete Artificial Intelligence survey (Adobe PDF). What are your opinions? Do you prefer AI applications that operate independently, or ones that require your authorization?


Study: Performance Issues Impede IoT Device Trust And Usage Worldwide By Consumers

A global survey recently uncovered interesting findings about consumers' usage of, and satisfaction with, IoT (internet of things) devices. The survey, covering consumers in several countries, found that 52 percent already use IoT devices, and that 64 percent of users have already encountered performance issues with their devices.

Dynatrace, a software intelligence company, commissioned Opinium Research to conduct a global survey of 10,002 participants, with 2,000 in the United States, 2,000 in the United Kingdom, and 1,000 respondents each in France, Germany, Australia, Brazil, Singapore, and China. Dynatrace announced several findings, chiefly:

"On average, consumers experience 1.5 digital performance problems every day, and 62% of people fear the number of problems they encounter, and the frequency, will increase due to the rise of IoT."

That seems like plenty of poor performance. Some findings were specific to travel, healthcare, and in-home retail sectors. Regarding travel:

"The digital performance failures consumers are already experiencing with everyday technology is potentially making them wary of other uses of IoT. 85% of respondents said they are concerned that self-driving cars will malfunction... 72% feel it is likely software glitches in self-driving cars will cause serious injuries and fatalities... 84% of consumers said they wouldn’t use self-driving cars due to a fear of software glitches..."

Regarding healthcare:

"... 62% of consumers stated they would not trust IoT devices to administer medication; this sentiment is strongest in the 55+ age range, with 74% expressing distrust. There were also specific concerns about the use of IoT devices to monitor vital signs, such as heart rate and blood pressure. 85% of consumers expressed concern that performance problems with these types of IoT devices could compromise clinical data..."

Regarding in-home retail devices:

"... 83% of consumers are concerned about losing control of their smart home due to digital performance problems... 73% of consumers fear being locked in or out of the smart home due to bugs in smart home technology... 68% of consumers are worried they won’t be able to control the temperature in the smart home due to malfunctions in smart home technology... 81% of consumers are concerned that technology or software problems with smart meters will lead to them being overcharged for gas, electricity, and water."

The findings are a clear call to IoT makers to improve the performance, security, and reliability of their internet-connected devices. To learn more, download the full Dynatrace report titled, "IoT Consumer Confidence Report: Challenges for Enterprise Cloud Monitoring on the Horizon."