
How To Spot Fake News And Not Get Duped

You may have heard about the "pizzagate" conspiracy -- fake news about a supposed child-sex ring operating from a pizzeria in Washington, DC. A heavily armed citizen drove from North Carolina to the pizzeria to investigate the bogus child-sex ring supposedly run by presidential candidate Hillary Clinton. The reality: no sex ring. That citizen had been duped by fake news. Shots were fired, and thankfully nobody was hurt.

CBS News reported that the pizzagate conspiracy had been promoted by Michael G. Flynn, son of retired General Michael T. Flynn, Donald Trump's pick for national security adviser. As a result, the younger Flynn resigned Tuesday from President-Elect Trump's transition team.

I use the phrase "fake news" for several types of misleading content: propaganda, unproven or fact-free conspiracy theories, disinformation, and clickbait. The pizzagate incident highlighted two issues: a) fake news has consequences, and b) many people don't know how to distinguish real news from fake news. Political operatives reportedly have used a combination of fake news, ads, and social media both to encourage supporters to vote and to discourage opponents from voting, but there clearly are other real-life consequences, too.

To help people spot fake news, NPR reported:

"Stopping the proliferation of fake news isn't just the responsibility of the platforms used to spread it. Those who consume news also need to find ways of determining if what they're reading is true. We offer several tips below. The idea is that people should have a fundamental sense of media literacy. And based on a study recently released by Stanford University researchers, many people don't."

The report is enlightening. In the "Evaluating Information: The Cornerstone of Civic Online Reasoning" report, researchers at Stanford University tested 7,804 students in 12 states between January 2015 and June 2016. They found:

"... at each level—middle school, high school, and college—these variations paled in comparison to a stunning and dismaying consistency. Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak. Our “digital natives” may be able to flit between Facebook and Twitter while simultaneously uploading a selfie to Instagram and texting a friend. But when it comes to evaluating information that flows through social media channels, they are easily duped... We would hope that middle school students could distinguish an ad from a news story. By high school, we would hope that students reading about gun laws would notice that a chart came from a gun owners’ political action committee. And, in 2016, we would hope college students, who spend hours each day online, would look beyond a .org URL and ask who’s behind a site that presents only one side of a contentious issue. But in every case and at every level, we were taken aback by students’ lack of preparation... Many [people] assume that because young people are fluent in social media they are equally savvy about what they find there. Our work shows the opposite."

This is important for both individuals and the future of the nation because:

"For every challenge facing this nation, there are scores of websites pretending to be something they are not. Ordinary people once relied on publishers, editors, and subject matter experts to vet the information they consumed. But on the unregulated Internet, all bets are off... Never have we had so much information at our fingertips. Whether this bounty will make us smarter and better informed or more ignorant and narrow-minded will depend on our awareness of this problem and our educational response to it. At present, we worry that democracy is threatened by the ease at which disinformation about civic issues is allowed to spread and flourish."

While the study focused upon students, older persons have been duped, too. The suspect in the pizzeria incident was 28 years old. The Stanford report focused upon what teachers and educators can do to better prepare students. According to the researchers, additional solutions are forthcoming.

What can you do to spot fake news? Don't wait for sites and/or social media to do it for you. Become a smarter consumer. The NPR report suggested:

  1. Pay attention to the domain and URL
  2. Read the "About Us" section of the site
  3. Look at the quotes in a story
  4. Look at who said the quotes

All of the suggestions require readers to take the time to understand the website, publication, and/or publisher. A little skepticism is healthy. Also verify that the persons quoted exist and are who the article claims they are. And verify that any images used actually relate to the event.
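The first tip -- paying attention to the domain and URL -- can even be partly automated. Below is a minimal, hypothetical Python sketch that flags lookalike domain suffixes (such as "abcnews.com.co" imitating "abcnews.com"); the suffix list and heuristics are illustrative assumptions, not a real fake-news detector:

```python
from urllib.parse import urlparse

# Hypothetical red flags: suffixes known to be used by sites imitating
# legitimate news domains. This list is illustrative only.
SUSPICIOUS_SUFFIXES = (".com.co", ".com.de", ".co.tv")

def check_url(url: str) -> list:
    """Return a list of warnings about the URL; an empty list means no obvious red flags."""
    warnings = []
    host = urlparse(url).netloc.lower()
    if any(host.endswith(suffix) for suffix in SUSPICIOUS_SUFFIXES):
        warnings.append("lookalike domain suffix: " + host)
    if host.count("-") > 2:
        warnings.append("unusually hyphenated domain: " + host)
    return warnings

print(check_url("http://abcnews.com.co/some-story"))
```

A tool like this can only flag the most obvious impostors; it cannot replace reading the "About Us" page and checking the quotes and sources yourself.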

We all have to be smarter consumers of news in order to stay informed and meet our civic duties, which includes voting. Nobody wants to vote for politicians that don't represent their interests because they've been duped. To the above list, I would add:

  • Read news wires. These sites include the raw, unfiltered news about who, when, where, and what happened. Some suggested sources: Associated Press (AP), Reuters, and United Press International (UPI)
  • Learn to recognize advertisements
  • Learn the differences between different types of content: news, opinion, analysis, satire/humor, and entertainment. Reputable sites will label them to help readers.

If you don't know the differences and can't spot each type, then you are likely to get duped.


Voting Technologies By County Across The United States

State and local governments across the United States use a variety of voting technologies. Chances are, you voted on Tuesday using one of two dominant technologies: optical-scan ballots or direct-recording electronic (DRE) devices. Optical-scan ballots are paper ballots where voters fill in bubbles or other machine-readable marks. DRE devices include touch-screen devices that store votes in computer memory.

The Pew Research Center analyzed data from the Verified Voting Foundation, a nongovernmental organization, and found that almost:

"... half of registered voters (47%) live in jurisdictions that use only optical-scan as their standard voting system, and about 28% live in DRE-only jurisdictions... Another 19% of registered voters live in jurisdictions where both optical-scan and DRE systems are in use... Around 5% of registered voters live in places that conduct elections entirely by mail – the states of Colorado, Oregon and Washington, more than half of the counties in North Dakota, 10 counties in Utah and two in California. And in more than 1,800 small counties, cities and towns – mostly in New England, the Midwest and the inter-mountain West – more than a million voters still use paper ballots that are counted by hand."

Previously, voting systems nationwide used punch-card devices and "lever machines," which have been slowly replaced since 1980 by optical-scan and DRE devices. You may remember voting with one of the old-style lever machines, a self-contained voting booth where voters flipped switches for candidates and then pulled a large lever to record their votes:

"Punch cards hung on throughout the 1990s but gradually lost ground to optical-scan and electronic systems – a decline that accelerated sharply after the 2000 Florida election recount debacle that brought the term “hanging chad” to brief prominence. But as punch cards faded away (the last two jurisdictions to use them, Franklin and Shoshone counties in Idaho, abandoned them after the 2014 elections), some voters became concerned that fully electronic voting would not generate any “paper trail” for future recounts. According to Verified Voting, of the 53,608 jurisdictions that use DRE equipment as their major voting method, almost three-quarters use systems that don’t create paper receipts or other hard-copy records of voters’ choices."

In August of this year, Wired reported about the state of security of the DRE devices:

"What people may not remember is the resulting Help America Vote Act (HAVA), passed in 2002, which among other objectives worked to phase out the use of the punchcard voting systems that had caused millions of ballots to be tossed. In many cases, those dated machines were replaced with electronic voting systems. The intentions were pure. The consequences were a technological train wreck.

“People weren’t thinking about voting system security or all the additional challenges that come with electronic voting systems,” says the Brennan Center’s Lawrence Norden. “Moving to electronic voting systems solved a lot of problems, but created a lot of new ones.”

The list of those problems is what you’d expect from any computer or, more specifically, any computer that’s a decade or older. Most of these machines are running Windows XP, for which Microsoft hasn’t released a security patch since April 2014. Though there’s no evidence of direct voting machine interference to date, researchers have demonstrated that many of them are susceptible to malware or, equally if not more alarming, a well-timed denial of service attack."

Experts have said that, besides better built and more secure DREs, post-election auditing -- checking vote totals against paper ballots -- is the best way to ensure accurate vote totals. Reportedly, more than half of states perform post-election audits.

So, it seems appropriate for citizens living in counties that use antiquated DREs, or that don't perform post-election audits, to contact their elected representatives and demand improvements. Good entities to contact are the elections department in your city, or the Secretary of State in your state. Find your state in this list. Below is an image of voting technologies by county:

Pew Research: Voting technologies by county in the United States.


Connected Cars: 4 Tips For Drivers To Stay Safe Online

With the increasing dominance of the Internet of Things (IoT), connected cars are more common than ever. We’ve long heard warnings from the media about staying safe online, but few consumers consider data hacks and other security compromises while driving a car connected to the internet.

According to the infographic below from Arxan, an app protection company, 75 percent of all cars shipped globally will have internet connectivity by 2020, and current connected cars have more than 100 million lines of code. Connected features are designed to improve safety, fuel efficiency, and overall convenience. These features range from Bluetooth, WiFi, cellular network connections, and keyless entry systems to deeper “cyberphysical” features like automated braking, parking assist, and lane assist.

More Features Means More Vulnerability
However, with this increasing connectivity come risks from malicious hacking. Today, connected cars have many attack points malicious hackers can exploit, including the OBD2 port used to connect third-party devices and the software running on infotainment systems.

According to Arxan, some of the more vulnerable attack points are mobile apps that unlock vehicles and start a vehicle remotely, diagnostic devices, and insurance dongles, including the ones insurance companies give to monitor and reward safe drivers. These plug into the OBD2 port, but hackers could essentially access any embedded system in the car after lifting cryptographic keys, as the Arxan page on application protection for connected cars describes.

Vulnerabilities are usually demonstrated in conferences like Black Hat. Example: in 2010, researchers at the University of Washington and the University of California San Diego hacked a car that had a variety of wireless capabilities. The vulnerable attack points they targeted included its Bluetooth, the cellular radio, an Android app on the owner’s phone that was connected to the car’s network, and an audio file burned onto a CD in the car’s stereo. In 2013, hackers Charlie Miller and Chris Valasek hijacked the steering and brake systems of both a Ford Escape and Toyota Prius with only their laptops.

How To Protect Yourself
According to a public service announcement from the FBI and the Department of Transportation, it’s crucial that consumers follow these recommendations to best protect themselves:

  1. Keep your vehicle’s software up to date
  2. Stay aware of recalls that require manual security patches to your car’s code
  3. Avoid unauthorized changes to your car’s software
  4. Use caution when plugging insecure devices into the car’s ports and network

With the latest remote hack of a Tesla Model S, it seems that the response time between discovering a breach and issuing a patch to correct it is thankfully getting shorter. As more automakers become tech-oriented like Tesla, they will also need to cooperate with OEMs to make sure the operating-system software in their vehicles is designed securely. This will take time, coordination with vendors, and money to bring these operations in house.

Arxan connected vehicles infographic

What do you do to protect your Internet-connected vehicle? What security tools and features would you prefer automakers and security vendors provide?


Report Documents The Problems And Privacy Risks With Unregulated Facial Recognition Databases By Law Enforcement

According to a report by the Center on Privacy and Technology (CPT) at Georgetown Law, about 48 percent of adult Americans -- 117 million people -- are already profiled in law enforcement facial-recognition databases. The U.S. Federal Bureau of Investigation (FBI) maintains a facial-recognition database, but local police departments do, too.

Issues raised by findings in the report:

"Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems. And we don’t know how any of these systems—local, state, or federal—affect racial and ethnic minorities."

Facial recognition software is not new, and the report acknowledges that its use by law enforcement is inevitable. The facts include:

"FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos. Roughly one in two American adults has their photos searched this way... Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic..."

The report does not seek to stop the use of facial-recognition software, and it acknowledges that most law enforcement personnel do not want to invade citizens' privacy. The report raises concerns because the data collected primarily includes law-abiding citizens and not just criminals, and because of the lack of transparency and regulation regarding accuracy, training, and deployment. Some of the uses that raise concerns:

"Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera... at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras, bought technology that can do so, or expressed a written interest in buying it... A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible. While some agencies, like the San Diego Association of Governments, limit themselves to more targeted use of the technology, others are embracing high and very high risk deployments."

The report described specific examples of usage at the state and local levels:

"No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences. The Maricopa County Sheriff’s Office enrolled all of Honduras’ driver’s licenses and mug shots into its database. The Pinellas County Sheriff’s Office system runs 8,000 monthly searches on the faces of seven million Florida drivers—without requiring that officers have even a reasonable suspicion before running a search..."

A major concern the report discussed is the:

"... real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."

Another major concern the report discussed:

"Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing. One major face recognition company, FaceFirst, publicly advertises a 95% accuracy rate but disclaims liability for failing to meet that threshold in contracts with the San Diego Association of Governments... Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time... an FBI co-authored study suggests that face recognition may be less accurate on black people..."

Regarding the lack of transparency by law enforcement:

"Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy... Maryland’s system, which includes the license photos of over two million residents, was launched in 2011. It has never been audited. The Pinellas County Sheriff’s Office system is almost 15 years old and may be the most frequently used system in the country. When asked if his office audits searches for misuse, Sheriff Bob Gualtieri replied, “No, not really.” Despite assurances to Congress, the FBI has not audited use of its face recognition system, either..."

Learn more about the expanded facial-recognition system the FBI deployed in 2014. The New York Times reported last year about some of the problems:

"Facial recognition software, which American military and intelligence agencies used for years in Iraq and Afghanistan to identify potential terrorists, is being eagerly adopted by dozens of police departments around the country to pursue drug dealers, prostitutes and other conventional criminal suspects. But because it is being used with few guidelines and with little oversight or public disclosure... Law enforcement officers say the technology is much faster than fingerprinting at identifying suspects, although it is unclear how much it is helping the police make arrests... "

The CPT report proposed the following solutions to address privacy concerns:

  • Use mug-shot databases (and not driver’s license databases and ID photos) as the default for facial recognition searches. Periodically purge them of innocent persons,
  • Searches of driver's license databases and ID photos should require a court order showing probable cause, except in instances of identity theft and fraud,
  • Notify the public if the policy includes searches of databases maintained by motor-vehicle agencies,
  • Local communities should decide whether real-time facial recognition surveillance is used in public places and/or with police-worn body cameras. Real-time facial recognition surveillance should be a last resort, used only in life-threatening emergencies supported by probable cause, with limits as to scope and duration.

The year-long investigation by the CPT included more than 100 records requests to police departments around the country. Read the full report: "The Perpetual Line-up: Unregulated Police Face Recognition in America."

We know the National Security Agency (NSA) uses facial recognition software. Some agencies probably acquire photos and related information from them, too. If so, this should be disclosed. In 2012, the U.S. Federal Trade Commission (FTC) proposed guidelines for facial-recognition by social networking sites, companies, and retail stores. Since governments are supposed to report to and serve citizens, similar guidelines should apply to law enforcement.

What are your opinions of real-time facial recognition surveillance? Of the issues raised by the CPT report?


Proposed Legislation in Michigan For Driverless Cars

The Stanford Center For Internet & Society (CIS) analyzed several draft driverless-car bills under consideration by legislators in Michigan. The analysis highlighted the issues and inconsistencies in the proposed legislation. First, the good news. While SB 995 repeals existing laws that ban driverless cars, it:

"... would return Michigan law to flexible ambiguity on the question of the legality of automated driving in general. The bill probably goes even further by expressly authorizing automated driving: It provides that "[a]n automated motor vehicle may be operated on a street or highway on this state," and the summary of the bill as reported from committee similarly concludes that SB 995 would "[a]llow an automated motor vehicle to be operated on a street or highway in Michigan." (This provision is somewhat confusing because it would be added to an existing statutory section that currently addresses only research and testing and because it would seem to subvert many restrictions on research tests and "on-demand automated motor vehicle networks.") Regardless, this bill would also exempt groups of closely spaced and tightly coordinated vehicles from certain following-distance requirements that are incompatible with platooning."

Platooning is a method for several driverless vehicles to operate together on highways with less space between them than otherwise. Advocates claim this maximizes the capacity of highways. What does this mean for safety? Do consumers want platooning? Can drivers opt out? If platooning is allowed, then the driverless vehicle you ultimately buy must be outfitted with that software feature.

The drawbacks of the draft legislation:

"... The currently proposed language could mean that automated driving is lawful only in the context of research and development and "on-demand motor vehicle networks." Or it could mean that automated driving is lawful generally and that these networks are subject to more restrictive requirements. It could mean that any company could run a driverless taxi service, including motor vehicle manufacturers that might otherwise face unrelated and unspecified legal impediments. Or it could mean that a company seeking to run a driverless taxi service must partner with a motor vehicle manufacturer -- or that such a company must at least purchase production vehicles, the modification of which might then be restricted by SB 927 and 928 (see below). It could also mean that municipalities could regulate and tax only those driverless taxi services that do not involve a manufacturer..."

And:

"... SB 995 and 996 understandably struggle to reconcile an existing vehicle code with automated driving. Under existing Michigan law, a "driver" is "every person who drives or is in actual physical control of a vehicle," an "operator" is "a person, other than a chauffeur, who "[o]perates" either "a motor vehicle" or "an automated motor vehicle," and "operate" means either "[b]eing in actual physical control of a vehicle" or "[c]ausing an automated motor vehicle to move under its own power in automatic mode," which "includes engaging the automated technology of that automated motor vehicle for that purpose." The new bills would not change this language, but they would further complicate these concepts in several ways..."

I encourage you to read the long list of complications in the CIS analysis. Another key issue:

"Consider the provision that "an automated driving system ... shall be considered the driver or operator ... for purposes of determining conformance to any applicable traffic or motor vehicle laws." This provision says nothing about who or what the driver is for purposes of determining liability for a violation of those laws, particularly when there is no crash. SB 996 does provide that "a motor vehicle manufacturer shall assume liability for each incident in which the automated driving system is at fault," subject to the state's existing insurance code..."

The proposed legislation is important for several reasons. Besides platooning and the list of complications, it decides: a) which types of companies can operate driverless-car networks, b) who is liable and under what conditions, and c) who can repair driverless cars. All items affect consumers' rights. A narrow definition of "A" (e.g., only automakers) would mean fewer competitors, and probably higher prices due to a lack of competition. Similarly, a narrow definition of "C" could mean fewer options and choices for consumers, with higher repair prices. Liability must be clear for instances when a driverless vehicle violates road laws, and especially when there is a crash and/or fatality.

Consistency and clarity matter, too. The final legislation and definitions also should be forward-thinking. It's not just driverless vehicles but also remotely-operated vehicles. Companies want remotely-operated ships on the oceans, and remotely-operated trucks are already used off-road for mining purposes. It seems wise to anticipate that off-road use will probably migrate to roads and highways.

Clearly, the proposed legislation in Michigan is not ready yet for prime time. This topic definitely bears monitoring.


Oklahoma Closes 37 'Disposal Wells' After Quake. Report Listed Susceptible Areas In 6 States

During the holiday weekend, CNN reported:

"Five months before Saturday's 5.6 magnitude temblor in central Oklahoma, government scientists warned that oil and natural gas drilling had made a wide swath of the country more susceptible to earthquakes.

The U.S. Geological Survey (USGS), in a March report on "induced earthquakes," said as many as 7.9 million people in parts of Kansas, Colorado, New Mexico, Texas, Oklahoma and Arkansas now face the same earthquake risks as those in California. The report found that oil and gas drilling activity, particularly practices like hydraulic fracturing or fracking, is at issue... Saturday's earthquake spurred state regulators in Oklahoma to order 37 disposal wells, which are used by frackers, to shut down over a 725-square mile area... The quake that struck Saturday is at least the second of its size to affect central Oklahoma since 2011."

What are "disposal wells?" A variety of activities produce waste stored using "Class I Disposal Wells:" petroleum refining, metal production, chemical production, pharmaceutical production, commercial disposal, food production, and municipal wastewater treatment. According to the U.S. Environmental Protection Agency (EPA), these Class I wells are further categorized into four types: municipal, non-hazardous, hazardous, and radioactive. The EPA site also explains the other Classes of wells: II, III, IV, V, and VI.

So, a lot of industries besides fracking pump liquids into the ground -- deep into the ground; both to extract resources and to deposit waste.

Given the earthquake activity, the closed wells, and damage to business and residential properties, it seems wise to read the March 2016 report by the USGS, which discussed the risks and potential for damage from both natural and induced earthquakes:

"The most significant hazards from induced seismicity are in six states, listed in order from highest to lowest potential hazard: Oklahoma, Kansas, Texas, Colorado, New Mexico and Arkansas. Oklahoma and Texas have the largest populations exposed to induced earthquakes."

So, that's a list you wouldn't want to see mention your state, much less place it at the top. The USGS report included maps highlighting specific areas with risks ranging from less than one percent to a 12 percent probability. The report also stated:

"In the past five years, the USGS has documented high shaking and damage in areas of these six states, mostly from induced earthquakes... the USGS Did You Feel It? website has archived tens of thousands of reports from the public who experienced shaking in those states, including about 1,500 reports of strong shaking or damage... In developing this new product, USGS scientists identified 21 areas with increased rates of induced seismicity. Induced earthquakes have occurred within small areas of Alabama and Ohio, but a recent decrease in induced earthquake activity has resulted in a lower hazard forecast in these states for the next year. In other areas of Alabama and small parts of Mississippi, there has been an increase in activity, and scientists are still investigating whether those events were induced or natural."

Let's unpack this. First, risk varies based upon where you live. Second, risk varies with time. The USGS risk models include both one-year and 50-year outlooks. So, the risk in an area may be low during the coming year, but very different (e.g., higher) when considering what might happen during the next 50 years. That sounds a lot like floods: a huge, devastating flood may not happen often -- perhaps once every 50 or 100 years -- but when it does, the damage and costs are considerable. Third, you don't need to live near or adjacent to a well to be affected.
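The difference between a one-year and a 50-year outlook can be made concrete with a little arithmetic. As a rough sketch (assuming, as a simplification, that each year's risk is independent and constant -- real USGS hazard models are more sophisticated), an annual probability compounds over the years:

```python
def multi_year_probability(annual_p, years=50):
    """Probability of at least one event over `years`, assuming an
    independent, constant annual probability -- a simplification of
    real seismic hazard models."""
    return 1 - (1 - annual_p) ** years

# A 2% chance in any one year becomes roughly a 64% chance over 50 years.
print(round(multi_year_probability(0.02, 50), 2))
```

This is why a risk that sounds small on a one-year map can look very different on a 50-year map, just as with the flood analogy above.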

Below is the USGS map with 21 susceptible areas:

USGS map with seismic activity during 1980 to 2015.

Note the areas named: Alice, Ashtabula, Brewton, Cogdell, Dagger Draw, El Dorado, Fashing, Greeley, Irving, North-Central Arkansas, North Texas, Oklahoma-Kansas, Paradox Valley, Perry, Raton Basin, Rangely, Rocky Mountain Arsenal, Sun City, Timpson, Venus, and Youngstown. The USGS advises persons living in areas with higher earthquake risks to learn how to prepare, and visit FEMA's Ready Campaign website.

A USGS report in 2015 titled, "6 Facts About Human-Caused Earthquakes" described the types of human activities:

"Injecting fluid underground can induce earthquakes, a fact that was established decades ago by USGS scientists. This process increases the fluid pressure within fault zones, essentially loosening the fault zones and making them more likely to fail in an earthquake... even faults that have not moved in historical times can be made to slip and cause an earthquake... There are several purposes for injecting fluid underground. The three main reasons are wastewater injection, hydraulic fracturing and enhanced oil recovery. Within the United States, each of these three activities has induced earthquakes to varying degrees in the past few years. All three types of wells used for these purposes are regulated under the Safe Drinking Water Act with minimum standards set by the U.S. Environmental Protection Agency. Additional regulations vary by state and municipality. Other purposes for injecting fluid underground include enhanced geothermal systems and geologic carbon sequestration."

That same report also mentioned this:

"Fact 5: Induced seismicity can occur at significant distances from injection wells and at different depths. Earthquakes can be induced at distances of 10 miles or more away from the injection point and at significantly greater depths than the injection point."

So, to be affected you don't have to live near or adjacent to a disposal well or injection point. Alert readers will notice that the EPA's classification system for wells and injection points largely mirrors the different types of human activities... which really seem to be mostly corporate activities.

Do you live in or near one of the 21 areas? What are your opinions?


Study Confirms Consumers Ignore Online Policies And Agree To Anything

Researchers have confirmed what privacy advocates and government regulators have long suspected: Internet users often ignore online policies -- both privacy policies and terms of service. And those consumers who do read the policies pay insufficient attention.

In a working paper titled, "The Biggest Lie On The Internet," researchers tested 543 college students (from a communications class) by having them sign up for NameDrop, a fictitious social networking site (SNS). 47 percent of test participants were female, and the average age of all participants was 19. 62 percent identified as Caucasian, 15 percent as Asian, 6 percent as Black, 2 percent as Hispanic/Latin, and 3 percent as mixed race/ethnicity.

Authors of the working paper were Jonathan A. Obar, a Research Associate at the Quello Center for Telecommunications Management and Law at Michigan State University, and Anne Oeldorf-Hirsch, at the University of Connecticut. The paper was submitted for peer review and to the U.S. Federal Communications Commission (FCC).

The study found that almost three of four test participants -- 74 percent -- skipped reading the privacy policy by clicking on a "Quick Join" button. Those who did read the privacy policy spent a little over a minute -- 73 seconds -- reading the 7,977-word policy. Test participants spent less time -- 51 seconds -- reading the 4,316-word TOS policy.

The researchers expected test participants to spend longer times reading the policies because persons with a 12th-grade or college education read about 250 to 280 words per minute. So, it should have taken 29 to 32 minutes to read the 7,977-word privacy policy. The range of actual reading times was 2.96 seconds to 37 minutes, with 80 percent of test participants spending less than one minute of reading time.
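The expected-reading-time figures above are simple arithmetic: word count divided by reading speed. Below is a minimal sketch of that calculation; the function name is ours, and the 250-280 words-per-minute range comes from the figures quoted above, not from the paper itself.

```python
import math

# Expected reading time, assuming the study's figure that readers with a
# 12th-grade or college education read about 250 to 280 words per minute.
def expected_reading_minutes(word_count, wpm_fast=280, wpm_slow=250):
    """Return (fastest, slowest) expected reading times, in whole minutes."""
    return math.ceil(word_count / wpm_fast), math.ceil(word_count / wpm_slow)

print(expected_reading_minutes(7977))  # privacy policy -> (29, 32) minutes
print(expected_reading_minutes(4316))  # TOS policy
```

Compare those 29-to-32-minute estimates with the 73-second median actually observed, and the scale of the gap is obvious.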

The paper did not mention if reading times varied by device (e.g., phone, tablet, laptop, desktop). The researchers identified three factors that predict policy reading times:

  1. Information Overload: persons perceive the policies to be too long and too much work,
  2. Nothing to Hide: persons view the policies as irrelevant because they do nothing wrong, and
  3. Difficult to Understand: persons believe that they can't understand the language in the policies.

The researchers inserted problematic clauses into the policies which test participants should have spotted and inquired about:

"Implications were revealed as 98 percent missed NameDrop TOS 'gotcha clauses' about data sharing with the National Security Agency (NSA) and employers, and about providing a first-born child as payment for SNS access."

Only 15 percent (83 persons) expressed concerns about NameDrop's policies. Of the 83 persons who expressed concerns, 11 mentioned the NSA clause, and nine mentioned the child-assignment clause. The rest mentioned concerns about the length of the policies and the trustworthiness of the SNS.

The study also asked test participants how long they spent reading policies. The findings supported the "privacy paradox" found by other researchers:

"The paradox suggests that when asked, individuals appear to value privacy, but when behaviors are examined, individual actions suggest that privacy is not a high priority... When participants were asked to self-report their engagement with privacy and TOS policies, results suggested average reading times of approximately five minutes..."

So, test participants said they spent about 5 minutes reading policies while their actual times were about a minute or less, if they read the policies at all.

Because most consumers skip online policies, companies have the power to insert any clauses they desire. This has implications for consumers' ability to control their online reputation and privacy, and to resolve conflicts (e.g., binding arbitration instead of courts).

This also has implications for how governments enforce data protection for their citizens. Historically:

"... approaches to privacy and increasingly reputation protections by governments throughout the world often draw from a contentious model referred to as the 'notice and choice' privacy framework. Notice and choice evolved from the U.S. Federal Trade Commission's (FTC) Fair Information Practice Principles, developed in the 1970s to address growing information privacy concerns raised by digitization. In the early 1980s, the FIPPs were promoted by the OECD as part of an international set of privacy guidelines, contributing to the implementation of data protection laws and guidelines in the U.S., Canada, the EU, Australia, and elsewhere... The notice and choice privacy framework was designed to "put individuals in charge of the collection and use of their personal information" (Reidenberg et al, 2014: 3)..."

The researchers focused upon the:

"... notice component, noted by the FTC as "the most fundamental principle" (FTC, 1998: 7) of personal information protection... As the FTC (1998) notes, choice and related principles attempting to offer data control "are only meaningful when a consumer has notice of an entity's policies, and his or her rights with respect thereto." Notice policies typically... appear on websites, applications, are sent in the mail, provided in-person, generally when an individual connects with the entity in question for the first time, and increasingly when policies change. Despite suggestions that notice policy in particular is deeply flawed, strategies for strengthening notice policy continue to be seen as central to address, for example, privacy concerns associated with corporate and government surveillance, and consumer protection concerns about Big Data..."

So, the biggest lie on the Internet is that consumers agree to policies, which they really can't because they haven't read them. Governments, privacy advocates, companies, and usability professionals need to find a better way, because the current approach clearly isn't working:

"The policy implications of these findings contribute to the community of critique suggesting that notice and choice policy is deeply flawed, if not an absolute failure. Transparency is a great place to start, as is notice and choice policy; however, all are terrible places to finish. They leave digital citizens with nothing more than an empty promise of protection, an impractical opportunity for data privacy self-management, and as Daniel Solove (2012) analogizes, too much homework. This doesn't even begin to address the challenges unique to children in the realm of digital reputation..."

Absolutely, since many sites allow children as young as 14 to sign up. Policy reading rates are probably worse among children ages 14 to 17.

Download the working paper: "The Biggest Lie on The Internet" (Adobe PDF). The paper is also available here. The study used students majoring in communications. I wonder if the results would have been different with business majors or law students. What do you think?


Coming Soon: Autonomous Freighters On The Oceans

Technology races forward in several industries. The military uses remote-controlled drones, vendors use drones to inspect buildings, companies test driver-less cars, automakers introduce cars with more automation, and retailers pursue delivery drones. Add shipping to the list of industries.

Experts predict that robotic ships will sail the oceans by 2020. The Infinity Leap site reported:

"The concept of robotic ships was revealed by Rolls Royce back in 2014. According to reports, the Advanced Autonomous Waterborne Applications (AAWA) project guided by Rolls-Royce recently came up with a white paper which provides comprehensive details about the robotic ships or the autonomous vessels and the problems associated with them as far as their operation is concerned... the AAWA whitepaper is developed by Rolls-Royce with the support of partners like ESL Shipping, Finferries, Brighthouse Intelligence and the Tampere University of Technology. The AAWA whitepaper talks extensively about autonomous applications, and the issues related to the safety and certainty of designing and running the distantly controlled ships."

So, there's some new terminology to learn. Obviously, manned ships include on-board human crews that operate all ship's functions. There are subtle but important differences between automated, remote-controlled, and autonomous ships. The Maritime Unmanned Navigation through Intelligent Networks (MUNIN) website provides some helpful definitions and diagrams:

"The remote ship is where the tasks of operating the ship are performed via a remote control mechanism (e.g. by a shore based human operator), and

The automated ship is where advanced decision support systems on board undertake all the operational decisions independently without intervention of a human operator."

I found this diagram helpful with understanding the different types of robotic ships:

MUNIN. Types of robotic ships. Click to view larger version

So, the remote human operator could be on land, on board another ship, or on board an airplane. And, remote-controlled ships will use augmented reality displays. Again, from Infinity Leap:

"According to reports, Rolls-Royce has developed a unique new bridge called ‘oX’ or the Future Operator Experience Concept in collaboration with Finland’s VTT Technical Research Centre and Aalto University. It is learned that the bridge’s windows serve as augmented reality displays, which help in displaying necessary information and improve the visibility around the ship with the support of high-end cameras and sensors. That means the augmented reality windows help in displaying navigation tracks and give necessary warnings and information about the ships sailing nearby, ice and a whole lot of other invisible things."

The MUNIN site also provides a view of how decisions might be made by autonomous ships:

MUNIN. Decision making by autonomous ships. Click to view larger version

All of this makes one wonder how much of this automation the passenger cruise ship industry will adopt. It is a reminder of the importance of applying similar distinctions in types of automation to land-based commercial vehicles: delivery vans, school buses, inter-city buses, tractor-trailers, buses and trains in mass-transit systems, and construction equipment.

Would you want your children riding in autonomous school buses? How do you feel about riding in autonomous mass-transit buses or subways? Commuter trains?


In The Modern Era, More Young Adults Live With Their Parents

As a parent of three children who are now adults, this news item caught my attention. The Pew Research Center reported:

"Broad demographic shifts in marital status, educational attainment and employment have transformed the way young adults in the U.S. are living, and an analysis of census data highlights the implications of these changes for the most basic element of their lives – where they call home. In 2014, for the first time in more than 130 years, adults ages 18 to 34 were slightly more likely to be living in their parents’ home than they were to be living with a spouse or partner in their own household."

The data:

  Percent of Adults Ages 18 to 34

  Living Arrangement                                         1880  1940  1960  2014
  Living at home with parents                                  30    35    20  32.1
  Married or co-habitation in own household                    45    46    62  31.6
  Living alone, single parents, and other head of household     3     3     5    14
  Other living arrangement                                     22    16    13    22

Several factors contributed to this shift:

"The first is the postponement of, if not retreat from, marriage. The median age of first marriage has risen steadily for decades. In addition, a growing share of young adults may be eschewing marriage altogether. A previous Pew Research Center analysis projected that as many as one-in-four of today’s young adults may never marry. While cohabitation has been on the rise, the overall share of young adults either married or living with an unmarried partner has substantially fallen since 1990.

In addition... employed young men are much less likely to live at home than young men without a job, and employment among young men has fallen significantly in recent decades. The share of young men with jobs peaked around 1960 at 84%. In 2014, only 71% of 18- to 34-year-old men were employed. Similarly with earnings, young men’s wages (after adjusting for inflation) have been on a downward trajectory since 1970 and fell significantly from 2000 to 2010. As wages have fallen, the share of young men living in the home of their parent(s) has risen."

And there are differences by gender:

"For men ages 18 to 34, living at home with mom and/or dad has been the dominant living arrangement since 2009. In 2014, 28 percent of young men were living with a spouse or partner in their own home, while 35 percent were living in the home of their parent(s). For their part, young women are on the cusp of crossing over this threshold: They are still more likely to be living with a spouse or romantic partner (35%) than they are to be living with their parent(s) (29%). In 2014, more young women (16%) than young men (13%) were heading up a household without a spouse or partner. This is mainly because women are more likely than men to be single parents living with their children..."

Additional findings:

"In 2014, 40 percent of 18- to 34-year-olds who had not completed high school lived with parent(s), the highest rate observed since the 1940 Census when information on educational attainment was first collected.

Young adults in states in the South Atlantic, West South Central and Pacific United States have recently experienced the highest rates on record of living with parent(s).

With few exceptions, since 1880 young men across all races and ethnicities have been more likely than young women to live in the home of their parent(s)."

The methodology included decennial census data and large samples, typically 1 percent of young adults nationwide.


Social Networking Sites With The Largest Number of News Users

Recently, some friends and I were discussing the wisdom of getting your news from social networking websites (e.g., Facebook, Twitter, Snapchat, YouTube, LinkedIn, etc.) instead of directly from news media sites. Apparently, many consumers get their news from such sites.

The Pew Research Center reported that most adults in the United States, 62 percent, get their news from social networking sites. The corresponding statistic in 2012 was 49 percent. Fewer social media site users get their news from other platforms: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites/apps (28 percent), radio (25 percent), and print newspapers (20 percent). 

Pew analyzed which social networking sites were used the most for news, and whether consumers used multiple sites to obtain news. The Pew Research Center found:

"Two-thirds of Facebook users (66 percent) get news on the site, nearly six-in-ten Twitter users (59 percent) get news on Twitter, and seven-in-ten Reddit users get news on that platform. On Tumblr, the figure sits at 31 percent..."

The corresponding statistics are 23 percent for Instagram, 21 percent for YouTube, 19 percent for LinkedIn, and 17 percent for Snapchat. The implications:

"Facebook is by far the largest social networking site, reaching 67% of U.S. adults. The two-thirds of Facebook users who get news there, then, amount to 44% of the general population. YouTube has the next greatest reach in terms of general usage, at 48% of U.S. adults. But only about a fifth of its users get news there, which amounts to 10% of the adult population. That puts it on par with Twitter, which has a smaller user base (16% of U.S. adults) but a larger portion getting news there."
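Pew's reach arithmetic multiplies two survey shares: the portion of U.S. adults who use a platform, and the portion of those users who get news there. A quick sketch of the calculation (the function is ours; the figures come from the passage quoted above):

```python
# Share of all U.S. adults who get news on a platform =
# (share of adults using the platform) x (share of its users who get news there)
def news_reach(platform_share, news_share):
    return platform_share * news_share

print(f"Facebook: {news_reach(0.67, 0.66):.0%}")  # ~44% of U.S. adults
print(f"YouTube:  {news_reach(0.48, 0.21):.0%}")  # ~10%
```

This is why YouTube, despite nearly half of adults using it, ends up with the same news reach as the much smaller Twitter: a small news share applied to a large user base.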

Regarding audience overlap, Pew found that most people (64 percent) get their news from just one social media site, while 26 percent use two sites and 10 percent use three. Pew also found that more users at Reddit, Twitter, and LinkedIn seek out news rather than stumbling across it by accident:

  Percent of news users of each site who mostly get news online

  Social Networking Site   While doing other things   Because they're looking for it
  Instagram                                      63                               37
  Facebook                                       62                               38
  YouTube                                        58                               41
  LinkedIn                                       46                               51
  Twitter                                        45                               54
  Reddit                                         42                               55

Who are the news users at the five largest social sites with news users? The users vary by site:

"... while there is some crossover, each site appeals to a somewhat different group. Instagram news consumers stand out from other groups as more likely to be non-white, young and, for all but Facebook, female. LinkedIn news consumers are more likely to have a college degree than news users of the other four platforms; Twitter news users are the second most likely."

The demographic data:

Pew Research Center chart: demographics of news users by social networking site. Click to view larger version

Some of you are probably wondering about Google+ and Pinterest. Pew removed three social media sites because:

"... Pinterest, which has been shown to have a small portion of users who use it for news; Myspace, which has largely transitioned to a music site; and Google+, which through its recent transformations is being phased out as a social networking site."

The survey was conducted from January 12 to February 8, 2016 and included 4,654 respondents (4,339 by web and 315 by mail). The methodology included a randomly selected subset of U.S. adults (6,301 total web-based persons and 474 total mail persons).


Courts To Use Risk Scores More Frequently. Analysis Found Scores Unreliable And Racially Biased

ProPublica investigated the use of risk assessment scores by the courts and justice system in the United States:

"... risk assessments — are increasingly common in courtrooms across the nation. They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts... to even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing. Rating a defendant’s risk of future crime is often done in conjunction with an evaluation of a defendant’s rehabilitation needs. The Justice Department’s National Institute of Corrections now encourages the use of such combined assessments at every stage of the criminal justice process. And a landmark sentencing reform bill currently pending in Congress would mandate the use of such assessments in federal prisons."

Some important background:

"In 2014, then U.S. Attorney General Eric Holder warned that the risk scores might be injecting bias into the courts. He called for the U.S. Sentencing Commission to study their use... The sentencing commission did not, however, launch a study of risk scores. So ProPublica did, as part of a larger examination of the powerful, largely hidden effect of algorithms in American life. [ProPublica] obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm."

ProPublica analyzed data for Broward County in the State of Florida, and found the risk assessment scores to be unreliable:

"... in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so. When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years."

ProPublica also found biases based upon race:

"In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants."
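The disparity ProPublica describes is a difference in false positive rates: among defendants who did not re-offend, what share were nonetheless flagged as high risk? Below is a sketch of that computation. The records are hypothetical toy data, purely to illustrate the metric; the real analysis used Broward County case data not reproduced here.

```python
# Each record: (flagged_high_risk, re_offended)
def false_positive_rate(records):
    """Among people who did NOT re-offend, the share wrongly flagged high risk."""
    negatives = [flagged for flagged, reoffended in records if not reoffended]
    return sum(negatives) / len(negatives)

# Hypothetical toy records for two groups, to show the calculation only:
group_a = [(True, False), (True, False), (False, False), (False, False), (True, True)]
group_b = [(True, False), (False, False), (False, False), (False, False), (True, True)]
print(false_positive_rate(group_a))  # 0.5
print(false_positive_rate(group_b))  # 0.25
```

Two groups can show the same overall error rate while differing sharply on this metric, which is exactly the pattern ProPublica reported.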

ProPublica re-checked the analysis. Same results. Northpointe, the for-profit company that produced the Broward County, Florida risk scores, disagreed:

"... it criticized ProPublica’s methodology and defended the accuracy of its test: “Northpointe does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model.” Northpointe’s software is among the most widely used assessment tools in the country. The company does not publicly disclose the calculations used to arrive at defendants’ risk scores, so it is not possible for either defendants or the public to see what might be driving the disparity... Northpointe’s core product is a set of scores derived from 137 questions that are either answered by defendants or pulled from criminal records. Race is not one of the questions..."

Formed in 1989, Northpointe is a wholly owned subsidiary of the Volaris Group. Northpointe works with a variety of federal, state, and local justice agencies in the United States and Canada. The company's website also states that it works with policy makers.

Besides Northpointe, several companies provide risk assessment tools to courts and the judicial system. The National Center For State Courts (NCSC) provides a list of risk assessment tools (Adobe PDF).

All of this points to a larger problem: risk scores still haven't been adequately studied, nor the techniques vetted:

"There have been few independent studies of these criminal risk assessments. In 2013, researchers Sarah Desmarais and Jay Singh examined 19 different risk methodologies used in the United States and found that “in most cases, validity had only been examined in one or two studies” and that “frequently, those investigations were completed by the same people who developed the instrument.” Their analysis of the research through 2012 found that the tools “were moderate at best in terms of predictive validity,”... there have been some attempts to explore racial disparities in risk scores. One 2016 study examined the validity of a risk assessment tool, not Northpointe’s, used to make probation decisions for about 35,000 federal convicts. The researchers, Jennifer Skeem at University of California, Berkeley, and Christopher T. Lowenkamp from the Administrative Office of the U.S. Courts, found that blacks did get a higher average score but concluded the differences were not attributable to bias."

I wonder if the biases found started in the data rather than in the algorithm. The algorithm may have been developed and tested using existing prison populations which are known to be skewed, plus overly aggressive policing via school-to-prison pipelines and for-profit prisons in many states. Both the State of Florida and Broward County have histories with school-to-prison pipelines.

Plus, it seems crazy to make decisions about persons' lives based upon scores without knowing how the scores were calculated, and without adequate research or vetting of techniques. Transparency matters.

Thoughts? Opinions?


Study: Many Sharing Economy Companies Not There Yet On Privacy And Transparency

You've probably heard the term "sharing economy" (a/k/a digital economy). It refers to a variety of companies that link buyers and sellers online. These companies include taxi-like ride-sharing services (e.g., Uber, Lyft), home sharing services (e.g., HomeAway, Airbnb, VRBO), delivery services (e.g., Postmates), and on-demand labor services (e.g., TaskRabbit).

The 2016 "Who Has Your Back?" report by the Electronic Frontier Foundation (EFF) focused upon companies in the sharing economy, and their policies and practices for inquiries by law enforcement. Prior annual reports included social networking websites, email providers, Internet service providers (ISPs), cloud storage providers, and other companies. The EFF observed that companies in the sharing economy:

"... also collect sensitive information about the habits of millions of people across the United States. Details about what consumers buy, where they sleep, and where they travel are really just scratching the surface of this data trove. These apps may also obtain detailed records of where your cell phone is at a given time, when you are logged on or active in an app, and with whom you communicate.

It’s not just the purchasers in the gig economy who have to trust their data to the startups developing these apps. Individuals offering services are users just like the buyers, and also leave behind a digital trail as (or more) detailed than that of the purchasers. From Lyft drivers to Airbnb hosts to Instacart shoppers, people providing services are entrusting enormous amounts of data to these apps... As with any rich trove of data, law enforcement is increasingly turning to the distributed workforce as part of their investigations. That’s not necessarily a bad thing, but we need to know how and when these companies actually stand up for user privacy..."

So, it is sensible and appropriate to evaluate how well (or poorly) these companies protect consumers' privacy and communicate their activities. The EFF found overall:

"Many sharing economy companies have not yet stepped up to meet accepted tech industry best practices related to privacy and transparency, according to our analysis of their published policies. This analysis is specific to government access requests for user data, and within that context we see ample room for improvement by this budding industry... however, some gig economy companies [are] leading the field on this issue..."

Regarding ride-sharing companies, the EFF found:

"We analyzed 10 companies as part of this report. Of them, both Uber and Lyft earned credit in all of the categories we examined. We commend these two companies for their transparency around government access requests, commitments to protecting Fourth Amendment rights in relation to user communications and location data, advocacy on the federal level for user privacy, and commitment to providing users with notice about law enforcement requests. These two companies are setting a strong example for other distributed workforce companies... In contrast, another ride-sharing company, Getaround, received no stars in this year’s report."

The EFF also found improvements by home-sharing companies (links added):

"... FlipKey (owned by TripAdvisor) has adopted several policies related to government access of user data. FlipKey requires a warrant for user content or location data and promises to inform users of law enforcement access requests. It is also a member of the Digital Due Process Coalition, fighting for reform to outdated communications privacy law. Of the home sharing companies we reviewed, FlipKey does the most to stand up for user privacy against government demands.

Only two other companies from our research set earned credit in any categories: Airbnb and Instacart, each earning credit in three categories. Both of these companies require a warrant for content, publish law enforcement guidelines, and are members of the Digital Due Process Coalition..."

The Digital Due Process Coalition (DDPC) seeks reforms to the Electronic Communications Privacy Act (ECPA) because:

"Technology has advanced dramatically since 1986, and ECPA has been outpaced. The statute has not undergone a significant revision since it was enacted in 1986... As a result, ECPA is a patchwork of confusing standards that have been interpreted inconsistently by the courts, creating uncertainty for both service providers and law enforcement agencies. ECPA can no longer be applied in a clear and consistent way, and, consequently, the vast amount of personal information generated by today’s digital communication services may no longer be adequately protected. At the same time, ECPA must be flexible enough to allow law enforcement agencies and services providers to work effectively together..."

DDPC members include Adobe, Airbnb, Amazon.com, Apple, AT&T, Dell, Dropbox, eBay, Facebook, IBM, Intel, Lyft, Reddit, Snapchat, and many more well-known brands.

The EFF report also found (links added):

"... half of the companies we reviewed—Getaround, Postmates, TaskRabbit, Turo, and VRBO—received no credit in any of our categories. This finding is disappointing... most of the companies we analyzed were not yet publishing transparency reports. Only two companies in the field—Lyft and Uber—have published reports outlining how many law enforcement access requests they’ve received. As a result, the general public has little insight into how often the government is pressuring gig economy companies for access to user data. This concerns us, as one way to make surveillance without due process worse is to allow it to happen entirely in secret. Publicizing reports of law enforcement access requests can help illuminate patterns of overzealous policing, shine a light on efforts by companies to resist overly broad requests, and perhaps give pause to law enforcement officials who might otherwise seek to grab more user data than they need..."

Read the 2016 EFF "Who Has Your Back?" executive summary, or the full report (Adobe PDF). Kudos to the EFF for providing a very timely and valuable report. What are your opinions?


Report: Lawsuits Resulting From Corporate Data Breaches

Chart 1: Bryan Cave LLP: 2016 Breach Litigation Report. Click to view larger version

This week, the law firm of Bryan Cave LLP released its annual review of litigation related to data breaches. 83 cases were filed, representing a 25 percent decline compared to the prior year. Other key findings from the 2016 report:

"Approximately 5% of publicly reported data breaches led to class action litigation. The conversion rate has remained relatively consistent as compared to prior years... When multiple filings against single defendants are removed, there were only 21 unique defendants during the Period. This indicates a continuation of the “lightning rod” effect noted in the 2015 Report, wherein plaintiffs’ attorneys are filing multiple cases against companies connected to the largest and most publicized breaches, and are not filing cases against the vast majority of other companies that experience data breaches..."

Slightly more than half (51 percent) of all cases were national. The most popular locations where lawsuits were filed included the Northern District of Georgia, the Central District of California, the Northern District of California, and the Northern District of Illinois. However:

"Choice of forum, however, continues to be primarily motivated by the states in which the company-victims of data breaches are based."

Charges of negligence were cited in 75 percent of lawsuits. Which industries were frequently sued, and which weren't:

"... the medical industry was disproportionately targeted by the plaintiffs’ bar. While only 24% of publicly reported breaches related to the medical industry, nearly 33% of data breach class actions targeted medical or insurance providers. The overweighting of the medical industry was due, however, to multiple lawsuits filed in connection with two large scale breaches... There was a 76% decline in the percentage of class actions involving the breach of credit cards... The decline most likely reflects a reduction in the quantity of high profile credit card breaches, difficulties by plaintiffs’ attorneys to prove economic harm following such breaches, and relatively small awards and settlements..."

57 percent of cases involved sensitive personal information (e.g., Social Security numbers), 23 percent involved debit/credit card information, and 18 percent involved credit reports. The law firm reviewed lawsuits filed during a 15-month period ending in December 2015. Data sources included Westlaw Pleadings, Westlaw Dockets, and PACER databases.

Historically, some lawsuits by consumers haven't succeeded when courts have dismissed cases because plaintiffs weren't able to prove injuries. According to the Financial Times:

"However, decisions from a number of high-profile cases are likely to make it easier for consumers to bring suits against companies in the event of a data breach... For example, in July 2015, the Seventh US Circuit Court of Appeals, overturning a previous judgment, ruled that customers of Neiman marcus could potentially sue the retailer because they were at substantial risk of identity theft or becoming victims of fraud..."

Learn more about the Neiman Marcus class-action. Criminals hack corporate databases specifically to reuse (or resell) victims' stolen sensitive personal and payment information to obtain fraudulent credit, drain bank accounts, and/or hack online accounts -- injuries which often don't happen immediately after the breach. That's what identity thieves do. Hopefully, courts will take a broader, more enlightened view.

I look forward to reading future reports which discuss drivers' license data, children's online privacy, and the Internet of Things (IoT). View the "2016 Data Breach Litigation Report" by Bryan Cave LLP. Below is another chart from the report.

Chart 2: Bryan Cave LLP: 2016 Breach Litigation Report. Click to view larger version


Report: Significant Security Risks With Healthcare And Financial Services Mobile Apps

Arxan Technologies recently released its fifth annual report about the state of application security. This latest report also highlighted some differences between how information technology (I.T.) professionals and consumers view the security of healthcare and financial services mobile apps. Overall, Arxan found critical vulnerabilities:

"84 percent of the US FDA-approved apps tested did not adequately address at least two of the Open Web Application Security Project (OWASP) Mobile Top 10 Risks. Similarly, 80 percent of the apps tested that were formerly approved by the UK National Health Service (NHS) did not adequately address at least two of the OWASP Mobile Top 10 Risks... 95 percent of the FDA-approved apps, and 100 percent of the apps formerly approved by the NHS, lacked binary protection, which could result in privacy violations, theft of personal health information, and tampering... 100 percent of the mobile finance apps tested, which are commonly used for mobile banking and for electronic payments, were shown to be susceptible to code tampering and reverse-engineering..."

Some background about the U.S. Food and Drug Administration (FDA): the FDA revised its guidelines for mobile medical apps in September 2015. The top of that document clearly states, "Contains Nonbinding Recommendations." The document also explained which apps the FDA regulates (link added):

"Many mobile apps are not medical devices (meaning such mobile apps do not meet the definition of a device under section 201(h) of the Federal Food, Drug, and Cosmetic Act (FD&C Act)), and FDA does not regulate them. Some mobile apps may meet the definition of a medical device but because they pose a lower risk to the public, FDA intends to exercise enforcement discretion over these devices (meaning it will not enforce requirements under the FD&C Act). The majority of mobile apps on the market at this time fit into these two categories. Consistent with the FDA’s existing oversight approach that considers functionality rather than platform, the FDA intends to apply its regulatory oversight to only those mobile apps that are medical devices and whose functionality could pose a risk to a patient’s safety if the mobile app were to not function as intended. This subset of mobile apps the FDA refers to as mobile medical apps."

The Arxan report found that consumers are concerned about mobile app security:

"80 percent of mobile app users would change providers if they knew the apps they were using were not secure. 82 percent would change providers if they knew alternative apps offered by similar service providers were more secure."

Arxan commissioned a third party, which surveyed 1,083 persons in the United States, United Kingdom, Germany, and Japan during November 2015. 268 survey participants were I.T. professionals and 815 participants were consumers. Also, Arxan hired Mi3 to test mobile apps during October and November 2015. Those tests included 126 health and financial mobile apps covering both the Apple iOS and Android platforms, 19 mobile health apps approved by the FDA, and 15 mobile health apps approved by the UK NHS.

One difference in app security perceptions between the two groups: 82 percent of I.T. professionals believe "everything is being done to protect my apps" while only 57 percent of consumers hold that belief. To maintain privacy and protect sensitive personal information, Arxan advises consumers to:

  1. Buy apps only from reputable app stores,
  2. Don't "jail break" your mobile devices, and
  3. Demand that app developers disclose upfront the security methods and features in their apps.

The infographic below presents more results from the consolidated report. Three reports by Arxan Technologies are available: consolidated, healthcare, and financial services.

Arxan Technologies. 5th Annual State of App Security infographic
Infographic reprinted with permission.


Facts About Debt Collection Scams And Other Consumer Complaints

The Consumer Financial Protection Bureau (CFPB) recently released a report about debt collection scams. The report is based upon more than 834,000 complaints filed by consumers nationally with the CFPB about financial products and services: checking and savings accounts, mortgages, credit cards, prepaid cards, consumer loans, student loans, money transfers, payday loans, debt settlement, credit repair, and credit reports. Complaints about debt collection scams accounted for 26 percent of all complaints.

The most frequent scam is the attempt to collect money from consumers for debts they don't owe, which accounted for 38 percent of all debt-collection-scam complaints submitted. These complaints included harassment:

"Consumers complained about receiving multiple calls weekly and sometimes daily from debt collectors. Consumers often complained that the collector continued to call even after being repeatedly told that the alleged debtor could not be contacted at the dialed number. Consumers also complained about debt collectors calling their places of employment... Consumers complained that they were not given enough information to verify whether or not they owed the debt that someone was attempting to collect. "

The two companies with the most complaints:

"... were Encore Capital Group and Portfolio Recovery Associates, Inc. Both companies, which are among the largest debt buyers in the country, averaged over 100 complaints submitted to the Bureau each month between October and December 2015. In 2015, the CFPB took enforcement actions against these two large debt buyers for using deceptive tactics to collect bad debts."

Compared to a year ago, debt collection complaints increased the most in Indiana (38 percent), Arizona (27 percent), and New Hampshire (26 percent) during December 2015 through February 2016. Debt collection complaints decreased the most in Maine (-34 percent), Wyoming (-26 percent), and North Dakota (-23 percent). And:

"Of the five most populated states, California (10 percent) experienced the greatest percentage increase and Illinois (-4 percent) experienced the greatest percentage decrease in debt collection complaints..."

The report lists 20 companies with the most debt-collection complaints during October through December 2015. The top five companies with average monthly complaints about debt collection are Encore Capital Group (139.3), Portfolio Recovery Associates, Inc. (112.3), Enhanced Recovery Company, LLC (65.7), Transworld Systems Inc. (63.7), and Citibank (54.7). This top-20 list also includes several banks: Synchrony Bank, Capital One, JPMorgan Chase, Bank of America, and Wells Fargo.

While the March Monthly Complaint Report by the CFPB focused upon debt collection complaints, it also provides plenty of detailed information about all categories of complaints. From December 2015 through February 2016, the CFPB received on average every month about 6,856 debt collection complaints, 4,211 mortgage complaints, 3,556 credit reporting complaints, 2,021 complaints about bank accounts or services, and 1,995 complaints about credit cards. Most categories showed increased complaint volumes compared to the same period a year ago. Only two categories showed a decline in average monthly complaints: credit reporting and payday loans. Debt collection complaints were up 6 percent.

Compared to a year ago, average monthly complaint volume (all categories) increased in 40 states and decreased in 11 states. The top five states with the largest increases (all categories) included Connecticut (31 percent), Kansas (30 percent), Georgia (25 percent), Louisiana (25 percent), and Indiana (24 percent). The top five states with the largest decreases (all categories) included Hawaii (-25 percent), Maine (-19 percent), South Dakota (-14 percent), District of Columbia (-8 percent), and Idaho (-6 percent). Also:

"Of the five most populated states, New York (12 percent) experienced the greatest complaint volume percentage increase, and Texas (-8 percent) experienced the greatest complaint volume percentage decrease from December 2014 to February 2015 to December 2015 to February 2016."

The chart below lists the 10 companies with the most complaints (all categories) during October through December 2015:

Companies with the most complaints. CFPB March 2016 Monthly Complaints Report. Click to view larger image

The "Other" category includes consumer loans, student loans, prepaid cards, payday loans, prepaid cards, money transfers, and more. During this three-month period, complaints about these companies totaled 46 percent of all complaints. Consumers submit complaints about the national big banks covering several categories. According to the CFPB March complaints report (links added):

"By average monthly complaint volume, Equifax (988), Experian (841), and TransUnion (810) were the most-complained-about companies for October - December 2015. Equifax experienced the greatest percentage increase in average monthly complaint volume (32 percent)... Ocwen experienced the greatest percentage decrease in average monthly complaint volume (-18 percent)... Empowerment Ventures (parent company of RushCard) debuted as the 10th most-complained-about company..."

To learn more about the CFPB, there are plenty of posts in this blog. Simply enter "CFPB" in the search box in the right column.


Survey: Bankers Expect Consumers To Use Wearable And Smart Home Devices For Banking

Would you use a smart watch, fitness band, or other wearable device for banking? How about your smart television or refrigerator? Many bankers think you will, and are racing to integrate a broader range of mobile devices and technologies into their banking services. A recent survey of financial executives found that:

"... 20 per cent expect it to be common for consumers to make financial transactions using wearables within one year, 59 per cent within two years and 91 per cent within five years... 87 per cent expect it to be common for consumers to make financial transactions using Smart TVs and 68 per cent via home appliances."

The survey included 500 executives globally in several financial areas: banking, financial advice, consumer finance, investment management, insurance, and payments. So, consumers are likely to see these changes not just at their banks, but in a variety of financial and insurance transactions. Here's why:

"... too many banks are out of touch with what customers really want: one survey found 62 per cent of retail banking executives believed their bank offered excellent service compared to just 35 per cent of customers.... Millennials will have annual spending power of US$1. trillion [in 2020] and represent 30 per cent of total retail sales... Millennials not only have an appetite for disruptive new technologies but also an affinity with brand-savvy digital leaders... The Millennial Disruption Index, a three-year study of industry disruption conducted by Viacom subsidiary Scratch, found that banking was most vulnerable to disruption..."

The report discussed the desire by executives to serve customers via a variety of methods:

"Today’s customers expect a flawless end-to-end experience across all channels, yet fewer than 4 per cent of our respondents say they have achieved full omni-channel integration... by 2020, 89 per cent of our respondents expect to achieve full omni-channel integration. This either suggests a massive surge of investment over the next five years – or an industry in denial about the scale of the task ahead... 70 per cent expect video chat to largely replace branch appointments. Indeed, six out of ten now believe a digital-only channel model is viable."

Bankers view the Internet-of-Things (IoT) as both a collection of endpoint devices to provide services through, and a rich source of data:

"...93 per cent agree that finding innovative ways to provide value-added services to customers based on data-driven insight will be crucial to long-term success... 86 per cent agree that once consumers recognize the data potential of the IoT they will increasingly seek to benchmark their own behavior against their peers..."

Banks will probably develop more non-human (e.g., self-service) interfaces:

"... 76 per cent agree the widespread use of virtual assistants such as Siri on the iPhone means customers are more willing to engage with automated assistance and advice... almost three quarters of our respondents agree that in the future customers will interact with a human-like avatar..."

Another technology being considered:

"... 60 per cent [of survey respondents] believe that blockchain, a distributed public ledger which can securely record any information and the ownership of any asset, will prove to be the most significant technology development to affect financial services since the Internet and 45 per cent think the combination of blockchain wallets and peerto-peer (P2P) lending could herald the end of banking as we know it... 12 per cent expect the settlement of insurance claims using IoT data, blockchain and smart contracts to be mainstream practice within two years and 74 per cent expect it to be mainstream by 2025..."

Don't expect your bank to provide these new services next week or next month. It will take them time. New systems must be built, tested, debugged, and integrated with legacy computer systems and processes. All of this suggests that to fund their investments in innovation projects, banks probably won't lower their retail banking prices and fees (e.g., checking, savings, etc.) any time soon. While writing this blog the past 8+ years, I've found it wise to always keep an eye on the banks.

Download "The Future of Retail Financial Services" report by Cognizant, Marketforce, and Pegasystems.


New Federal Agency For Stronger Protections Of Background Investigations

Fallout continues from the massive data breach at the Office of Personnel Management (OPM) in 2015. The U.S. Federal government announced a reorganization to provide stronger protections of sensitive information collected during background investigations for federal employees and contractors. The reorganization features several changes including a new agency, the National Background Investigations Bureau (NBIB). The WhiteHouse.gov site announced:

"... the establishment of the National Background Investigations Bureau (NBIB), which will absorb the U.S. Office of Personnel Management’s (OPM) existing Federal Investigative Services (FIS), and be headquartered in Washington, D.C.  This new government-wide service provider for background investigations will be housed within the OPM. Its mission will be to provide effective, efficient, and secure background investigations for the Federal Government. Unlike the previous structure, the Department of Defense will assume the responsibility for the design, development, security, and operation of the background investigations IT systems for the NBIB."

After the massive data breach at OPM, several federal agencies conducted a joint 90-Day Suitability and Security review. The agencies involved included the Performance Accountability Council (PAC), the Office of Management and Budget (OMB), the Director of National Intelligence (DNI), the Director of the U.S. OPM, the Departments of Defense (DOD), the Treasury, Homeland Security, State, Justice, Energy, the Federal Bureau of Investigation, and others.

According to its Fact Sheet, the OPM’s Federal Investigative Services (FIS) unit currently conducts investigations for more than 100 Federal agencies. The FIS conducts more than 600,000 security clearance investigations and 400,000 suitability investigations annually. An NBIB Transition Team will oversee the migration to the new information technology systems and procedures. Transition project goals include:

  1. Establish a five-year re-investigation requirement for all personnel with security clearances, regardless of the level of access,
  2. Reduce the number of personnel with active security clearances by 17 percent,
  3. Introduce programs to continuously evaluate personnel with security clearances to determine whether ongoing security clearances are necessary, and
  4. Develop recommendations to enhance information sharing between State, local, and Federal Law Enforcement agencies regarding background investigations.

The changes were announced jointly on January 22, 2016 by James R. Clapper (the Director of National Intelligence), Beth Cobert (Acting Director of the OPM), Marcel Lettre (Under Secretary of Defense for Intelligence, Department of Defense), Tony Scott (U.S. Chief Information Officer), and J. Michael Daniel (Special Assistant to the President and Cybersecurity Coordinator, National Security Council, The White House).


Are You A Lab Rat, Social Addict, And Crash Test Dummy? Facebook Acted Like You Are

After unannounced tests in 2014, when Facebook manipulated its customers' news feeds without notice or consent, users complained bitterly. Well, Facebook has done it again. Either executives at the social networking giant haven't learned from their 2014 experience, or they don't care.

This time, the unannounced test targeted Android app users, whose apps Facebook intentionally crashed. Forbes magazine reported:

"Facebook conducted secret tests to determine the magnitude of its Android users’ Facebook addiction, according to a new report published yesterday. Like a bunch of crash test dummies, users of the Facebook app for Android were (several years ago) subject to intentional Facebook for Android app crashes without being informed of the tests. These tests were reportedly conducted so Facebook could determine user resilience to app deprivation–that is, whether users would find ways to use Facebook on their Android devices without the Google Play store app..."

Similarly, the dating service OKCupid irritated its users in 2014 after secret tests. People don't like being treated like lab rats. Ethically-challenged executives don't seem to understand this.

Supposedly, Facebook wanted to know if those Android app users would get replacement apps from other sources, or use the browser interface. Reportedly, Facebook has one billion Android app users. The news article didn't say whether Facebook performed similar tests on Apple iPhone app users. It seems wise to assume so.

The news report didn't mention whether Facebook slowed or manipulated the browser interface to see if users would switch to one of its mobile apps. It seems wise to assume so.

What are your opinions of the secret tests? Is this an acceptable "cost" for a service that promises to remain free?


The Ethical Dilemmas Of Self-Driving Cars

There have been plenty of articles in the news media about self-driving cars. What hasn't been discussed so much are the ethical dilemmas. What are they? MIT Technology Review explored the topic:

"Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?”

If one programs self-driving cars to always minimize the loss of life, then in this scenario the owner is sacrificed. Will consumers buy self-driving cars knowing this? Would you?
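A strictly utilitarian rule like the one described above can be reduced to a toy decision function: among the available maneuvers, pick the one with the lowest expected number of deaths, regardless of who dies. This is a deliberately simplified sketch with made-up probabilities (the names and numbers are mine), not a real vehicle control algorithm:

```python
def utilitarian_choice(maneuvers):
    """Pick the maneuver with the lowest expected fatalities.

    `maneuvers` maps a maneuver name to a list of
    (probability_of_death, count_of_people) tuples.
    """
    def expected_deaths(outcomes):
        return sum(p * n for p, n in outcomes)
    return min(maneuvers, key=lambda m: expected_deaths(maneuvers[m]))

# The dilemma from the article: continue into the crowd of 10,
# or swerve into a wall, likely killing the single occupant.
dilemma = {
    "continue": [(0.9, 10)],        # expected deaths: 9.0 (pedestrians)
    "swerve_into_wall": [(0.9, 1)], # expected deaths: 0.9 (the owner)
}
print(utilitarian_choice(dilemma))  # swerve_into_wall
```

The sketch makes the dilemma concrete: the rule is trivial to write down, yet it mechanically sacrifices the owner whenever that minimizes the expected body count, which is exactly the property the surveyed buyers balked at.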

Researchers posed this and similar ethical dilemmas to workers at Amazon Mechanical Turk, a crowdsourcing marketplace where people complete tasks that require human intelligence. The researchers found that while people wanted self-driving cars programmed to minimize the loss of life:

"This utilitarian approach is certainly laudable but the participants were willing to go only so far. [Participants] were not as confident that autonomous vehicles would be programmed that way in reality – and for good reason. They actually wished others to cruise in utilitarian autonomous vehicle more than they wanted to buy a utilitarian autonomous vehicle themselves”

So, few people are willing to sacrifice themselves; they would rather others take that risk.

There are plenty of ethical dilemmas with self-driving cars:

"Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the card than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chooses one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

You can probably think of more dilemmas. I know I can. Should self-driving car manufacturers offer different algorithms so each driver can use the algorithm they want? Or should all cars have the same algorithm? If the approach is differing algorithms, how will this affect insurance rates? If you drive from one country to another, must drivers adjust their car's algorithm for each country?

Lastly, I prefer the term "self-driving" to describe the new technology. While some technology sites and news organizations have used the term "driverless," the term "self-driving" is a more accurate description, and it places the responsibility where it should be. Something is driving the car, just not a person.

And, there may be hybrid applications in the future, where a driver operates the vehicle remotely, as drone operators do today. So, there will always be drivers: somebody or something.

Read the MIT Technology Review article titled, "Why Self-Driving Cars Must Be Programmed To Kill." Share below your opinions about how self-driving cars should be programmed.


American Adults Who Don't Use The Internet. Who They Are And Why

A few weeks ago, the Pew Research Center released the results of a survey about adults in the United States who don't use the Internet. You're probably thinking: everyone uses the Internet, right? After all, 64 percent of Americans have smartphones, and 19 percent rely on their phones to go online.

Actually, a substantial chunk of the population doesn't go online. The Pew Research Center survey described American adults who don't use the Internet.

Overall, in 2015 about 15 percent of American adults don't use the Internet. Things have improved across the years: the comparable figure was 48 percent in 2000 and 24 percent in 2010. In 2015, equal portions of men (15 percent) and women (15 percent) don't use the Internet. The numbers vary more by race, income, age, education, and residence:

U.S. Adults: % Who Don't Use The Internet

Race/ethnicity:
White 14
Black 20
Hispanic 18
Asian 5

Household income:
Less than $30K 25
$30K - $49.9K 14
$50K - $74.9K 5
$75K or more 3

Age:
18 - 29 3
30 - 49 6
50 - 64 19
65 or older 39

Education:
Less than high school 33
High school 23
Some college 9
College graduates 4

Community type:
Urban 13
Suburban 13
Rural 24

The 2015 findings are based upon three surveys of 5,005 adults in the United States. In 2013, the Pew Research Center asked American adults why they don't use the Internet:

Reason For Not Using The Internet: % of Adults
Not interested 21
Don't have a computer 13
Too difficult or frustrating 10
Don't know how / don't have the skills 8
Too old to learn 8
Don't have access 7
Too expensive 6
Don't need it / don't want it 6
Consider it a waste of time 4
Physically unable (e.g., poor eyesight, disabled) 4
Too busy / don't have the time 3
Worried about privacy / spam / spyware / hackers 3

Of these adults that don't use the Internet:

  • 44 percent have asked a friend or family member to look up something online for them,
  • 23 percent live in households where somebody else in that household uses the Internet, and
  • 14 percent used the Internet previously and stopped.

What to make of this? I look at the people who said Internet access is too expensive or that they don't have access. While overall our country appears strong, there are areas of the country where citizens lack one or several services the rest of us take for granted. There are Internet deserts, broadband deserts, banking deserts, public library deserts, and food deserts.