Some Surprising Facts About Facebook And Its Users

The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of adults ages 18 to 29, and 41% of adults ages 65 and older.
  • Usage by teenagers has fallen to 51% (as of March/April 2018) from 71% during 2014 to 2015. More teens use other social media services: YouTube (85%), Instagram (72%), and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays items it predicts users will engage with, meaning click the "Like" or other reaction buttons. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.
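The filtering described above can be sketched with a small, hypothetical example: score each candidate post by a predicted engagement probability and show only the top-scoring items. The posts, scores, and cutoff below are invented for illustration; Facebook's actual algorithm and its signals are proprietary.

```python
# Hypothetical engagement-based feed ranking -- illustrative only.
# Facebook's real ranking algorithm and its inputs are proprietary.

def rank_feed(posts, top_n=2):
    """Sort candidate posts by a toy 'predicted engagement' score
    and keep only the top_n items; the rest are never shown."""
    scored = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
    return scored[:top_n]

candidate_posts = [
    {"id": "news-article", "predicted_engagement": 0.10},
    {"id": "friend-photo", "predicted_engagement": 0.85},
    {"id": "group-debate", "predicted_engagement": 0.40},
]

feed = rank_feed(candidate_posts, top_n=2)
# The low-scoring news article never appears in the feed,
# even though a friend or group posted it.
print([p["id"] for p in feed])
```

The point of the sketch: whatever the model predicts you won't click simply disappears, which is why the feed tends toward items you already agree with.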

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it, PBS has broadcast a two-part documentary titled "The Facebook Dilemma" (see trailer below), which arguably could have been titled "The Dark Side of Sharing." The Frontline documentary rightly discusses Facebook's approaches to news and privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNET explained that during an April:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for Mr. Zuckerberg's claims of greater transparency and control. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably began building shadow profiles when it introduced its mobile app. That means, if you uploaded your phone's address book to Facebook, then you helped the service collect information about nonusers, too. This means Facebook acts more like a massive advertising network than simply a social media service.
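The contact-matching mechanism described in the quote above can be sketched as follows. This is a hypothetical illustration of the mechanism, not Facebook's actual implementation: uploaded address book entries are matched against existing profiles, and details about people with no account are retained anyway.

```python
# Hypothetical sketch of contact matching and "shadow profile" accumulation.
# Not Facebook's code -- an illustration of the mechanism described above.

profiles = {"mom@example.com": {"name": "Mom", "emails": {"mom@example.com"}}}
shadow_profiles = {}

def ingest_contact(contact):
    """Match an uploaded contact against known profiles; otherwise
    retain the details in a shadow profile keyed by email address."""
    for email in contact["emails"]:
        if email in profiles:
            # Merge new details (e.g., a second email address the person
            # never gave the service) into the existing profile.
            profiles[email]["emails"].update(contact["emails"])
            return "matched"
    # No match: keep the data anyway, about someone who never signed up.
    key = next(iter(contact["emails"]))
    shadow_profiles[key] = contact
    return "shadow"

# Your mom is a user; uploading your contacts adds her second email.
print(ingest_contact({"name": "Mom",
                      "emails": {"mom@example.com", "mom.second@example.com"}}))
# Your neighbor never signed up; a shadow profile is created anyway.
print(ingest_contact({"name": "Neighbor", "emails": {"neighbor@example.com"}}))
```

Either way, one person's address book upload enlarges the service's records about other people, with or without their consent.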

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


Federal Reserve Released Its Non-cash Payments Fraud Report. Have Chip Cards Helped?

Many consumers prefer to pay for products and services using methods other than cash. How secure are these non-cash payment methods? The Federal Reserve Board (FRB) analyzed the payments landscape within the United States. Its October 2018 report found good and bad news. The good news: non-cash payments fraud is small. The bad news:

  • Overall, non-cash payments fraud is growing, and
  • Card payments fraud drove that growth.
Non-Cash Payment Activity And Fraud

Payment Type                            2012             2015             Increase (Decrease)
Card payments & ATM withdrawal fraud    $4 billion       $6.5 billion     62.5 percent
Check fraud                             $1.1 billion     $710 million     (35) percent
Non-cash payments fraud                 $6.1 billion     $8.3 billion     37 percent
Total non-cash payments                 $161.2 trillion  $180.3 trillion  12 percent
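The "Increase (Decrease)" figures follow from ordinary percent-change arithmetic; a quick check of two rows (dollar amounts expressed in billions):

```python
def percent_change(old, new):
    """Percent change from old to new; negative means a decrease."""
    return (new - old) / old * 100

# Card payments & ATM withdrawal fraud: $4 billion (2012) -> $6.5 billion (2015)
print(round(percent_change(4.0, 6.5), 1))   # 62.5

# Check fraud: $1.1 billion -> $710 million, a decrease
# (the table rounds this to (35) percent)
print(round(percent_change(1.1, 0.71), 1))
```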

The FRB report included:

"... fraud totals and rates for payments processed over general-purpose credit and debit card networks, including non-prepaid and prepaid debit card networks, the automated clearinghouse (ACH) transfer system, and the check clearing system. These payment systems form the core of the noncash payment and settlement systems used to clear and settle everyday payments made by consumers and businesses in the United States. The fraud data were collected as part of Federal Reserve surveys of depository institutions in 2012 and 2015 and payment card networks in 2015 and 2016. The types of fraudulent payments covered in the study are those made by an unauthorized third party."

Data from the card network survey included general-purpose credit and debit (non-prepaid and prepaid) card payments, but did not include ATM withdrawals. The card networks include Visa, MasterCard, Discover and others. Additional findings:

"... the rate of card fraud, by value, was nearly flat from 2015 to 2016, with the rate of in-person card fraud decreasing notably and the rate of remote card fraud increasing significantly..."

The industry defines several categories of card fraud:

  1. "Counterfeit card. Fraud is perpetrated using an altered or cloned card;
  2. Lost or stolen card. Fraud is undertaken using a legitimate card, but without the cardholder’s consent;
  3. Card issued but not received. A newly issued card sent to a cardholder is intercepted and used to commit fraud;
  4. Fraudulent application. A new card is issued based on a fake identity or on someone else’s identity;
  5. Fraudulent use of account number. Fraud is perpetrated without using a physical card. This type of fraud is typically remote, with the card number being provided through an online web form or a mailed paper form, or given orally over the telephone; and
  6. Other. Fraud including fraud from account take-over and any other types of fraud not covered above."
Card Fraud By Category

Fraud Category                      2015           2016           Increase (Decrease)
Fraudulent use of account number    $2.88 billion  $3.46 billion  20 percent
Counterfeit card fraud              $3.05 billion  $2.62 billion  (14) percent
Lost or stolen card fraud           $730 million   $810 million   11 percent
Fraudulent application              $210 million   $360 million   71 percent

The increase in fraudulent applications suggests that criminals find it easy to intercept pre-screened credit and card offers sent via postal mail. It is easy for consumers to opt out of pre-screened credit and card offers. There is also the National Do Not Call Registry. Do both today if you haven't.

The report also covered EMV chip cards, which were introduced to stop counterfeit card fraud. Card networks distributed chip cards to consumers and chip-reader terminals to retailers. The banking industry set an October 1, 2015 deadline for the switch to chip cards. The FRB report:

[Chart: EMV chip card fraud and payments. Federal Reserve Board. October 2018]

The FRB concluded:

"Card systems brought EMV processing online, and a liability shift, beginning in October 2015, created an incentive for merchants to accept chip cards. By value, the share of non-fraudulent in-person payments made with [chip cards] shifted dramatically between 2015 and 2016, with chip-authenticated payments increasing from 3.2 percent to 26.4 percent. The share of fraudulent in-person payments made with [chip cards] also increased from 4.1 percent in 2015 to 22.8 percent in 2016. As [chip cards] are more secure, this growth in the share of fraudulent in-person chip payments may seem counter-intuitive; however, it reflects the overall increase in use. Note that in 2015, the share of fraudulent in-person payments with [chip cards] (4.1 percent) was greater than the share of non-fraudulent in-person payments with [chip cards] (3.2 percent), a relationship that reversed in 2016."


Senator Wyden Introduces Bill To Help Consumers Regain Online Privacy And Control Over Sensitive Data

Late last week, Senator Ron Wyden (Dem - Oregon) introduced a "discussion draft" of legislation to help consumers recover online privacy and control over their sensitive personal data. Senator Wyden said:

"Today’s economy is a giant vacuum for your personal information – Everything you read, everywhere you go, everything you buy and everyone you talk to is sucked up in a corporation’s database. But individual Americans know far too little about how their data is collected, how it’s used and how it’s shared... It’s time for some sunshine on this shadowy network of information sharing. My bill creates radical transparency for consumers, gives them new tools to control their information and backs it up with tough rules with real teeth to punish companies that abuse Americans’ most private information.”

The press release by Senator Wyden's office explained the need for new legislation:

"The government has failed to respond to these new threats: a) Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities; b) Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data; c) Consumers have no effective way to control companies’ use and sharing of their data."

Consumers in the United States lost both control and privacy protections when the U.S. Federal Communications Commission (FCC), led by Ajit Pai, a President Trump appointee and former Verizon lawyer, repealed both broadband privacy and net neutrality protections last year. A December 2017 study of 1,077 voters found that most want net neutrality protections. President Trump signed the privacy-rollback legislation in April 2017. A prior blog post listed many historical abuses of consumers by some internet service providers (ISPs).

With broadband privacy protections repealed, ISPs are free to collect and archive as much data about consumers as they desire, without having to notify consumers, get their approval, or disclose with whom they share the archived data. That's 100 percent freedom for ISPs and zero for consumers.

By repealing online privacy and net neutrality protections for consumers, the FCC essentially punted responsibility to the U.S. Federal Trade Commission (FTC). According to Senator Wyden's press release:

"The FTC, the nation’s main privacy and data security regulator, currently lacks the authority and resources to address and prevent threats to consumers’ privacy: 1) The FTC cannot fine first-time corporate offenders. Fines for subsequent violations of the law are tiny, and not a credible deterrent; 2) The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money; 3) The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator; and 4) The FTC does not have enough staff, especially skilled technology experts. Currently about 50 people at the FTC police the entire technology sector and credit agencies."

This means consumers have no protections nor legal options unless a company or website violates its published terms and conditions or privacy policy. To close the above gaps, Senator Wyden's new legislation, titled the Consumer Data Privacy Act (CDPA), contains several new and stronger protections. It:

"... allows consumers to control the sale and sharing of their data, gives the FTC the authority to be an effective cop on the beat, and will spur a new market for privacy-protecting services. The bill empowers the FTC to: i) Establish minimum privacy and cybersecurity standards; ii) Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives; iii) Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information. It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized; iv) Give consumers a way to review what personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it; v) Hire 175 more staff to police the largely unregulated market for private data; and vi) Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy, and security."

Permitting companies to charge consumers who opt out of data collection and sharing is a good thing. Why? Monthly payments by consumers are leverage -- a strong incentive for companies to provide better cybersecurity.

Business as usual -- cybersecurity methods by corporate executives and government enforcement -- isn't enough. The tsunami of data breaches during October alone, plus several notable breach events from earlier this year, is an indication.

The status quo, or business as usual, is unacceptable. Executives' behavior won't change without stronger consequences like jail time, since companies perform cost-benefit analyses regarding how much to spend on cybersecurity versus the probability of breaches and fines. Opt-outs of data collection and sharing by consumers, steeper fines, and criminal penalties could change those cost-benefit calculations.

Four former chief technologists at the FCC support Senator Wyden's legislation. Gabriel Weinberg, the Chief Executive Officer of DuckDuckGo, also supports it:

"Senator Wyden’s proposed consumer privacy bill creates needed privacy protections for consumers, mandating easy opt-outs from hidden tracking. By forcing companies that sell and monetize user data to be more transparent about their data practices, the bill will also empower consumers to make better-informed privacy decisions online, enabling companies like ours to compete on a more level playing field."

Regular readers of this blog know that the DuckDuckGo search engine (unlike the Google, Bing, and Yahoo search engines) doesn't track users, doesn't collect or archive data about users and their devices, and doesn't store users' search queries. So, DuckDuckGo users can search knowing their data isn't being sold to advertisers, data brokers, and others.

Lastly, Wyden's proposed legislation includes several key definitions (emphasis added):

"... The term "automated decision system" means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers... The term "automated decision system impact assessment" means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security that includes... The term "data protection impact assessment" means a study evaluating the extent to which an information system protects the privacy and security of personal information the system processes... "

The draft legislation requires companies to perform both automated decision system impact assessments and data protection impact assessments, and requires the FTC to set the frequency and conditions for both. A copy of the CDPA draft is also available here (Adobe PDF; 67.7 k bytes).

This is a good start. It is important... critical... to hold accountable both corporate executives and the automated decision systems they approve and deploy. Based upon history, outsourcing has been one corporate tactic to manage liability by shifting it to providers. It is good to close now any loopholes through which executives could abuse artificial intelligence and related technologies to avoid responsibility.

What are your thoughts, opinions of the proposed legislation?


Mail-Only Voting In Oregon: Easy, Simple, And Secure. Why Not In All 50 States?

Hopefully, you voted today. A democracy works best when citizens participate. And voting is one way to participate.

If you already stood in line to vote, or if your state was one which closed some polling places, know that it doesn't have to be this way. Consider Oregon. Not only is the process there easier and simpler, but elections officials in Oregon don't have to worry as much as officials in other states about hacks and tampering. Why? They don't have voting machines. Yes, that's correct. No voting machines. No polling places either.

NBC News explained:

"Twenty years ago, Oregon became the first state in the nation to conduct all statewide elections entirely by mail. Three weeks before each election, all of Oregon's nearly 2.7 million registered voters are sent a ballot by the U.S. Postal Service. Then they mark and sign their ballots and send them in. You don't have to ask for the ballot, it just arrives. There are no forms to fill out, no voter ID, no technology except paper and stamps. If you don't want to pay for a stamp, you can drop your ballot in a box at one of the state's hundreds of collection sites."

Reportedly, Washington and Colorado also have mail-only voting. Perhaps most importantly, Oregon gets higher voter participation:

"In the 2014 election, records showed that 45 percent of registered voters 34 and under marked a ballot — twice the level of many other states."

State and local governments across the United States use a variety of voting technologies. The two dominant technologies are optical-scan ballots and direct-recording electronic (DRE) devices. Optical-scan ballots are paper ballots on which voters fill in bubbles or other machine-readable marks. DRE devices include touch-screen devices that store votes in computer memory. A study in 2016 found that nearly half of registered voters (47%) live in areas that use only optical-scan as their standard voting system, about 28% live in DRE-only areas, 19% live in areas with both optical-scan and DRE systems, and about 5% live in areas that conduct elections entirely by mail.

Some voters and many experts worry about areas using old, obsolete DRE devices that lack software and security upgrades. An analysis earlier this year found that the USA has made little progress since the 2016 election in replacing antiquated, vulnerable voting machines; and done even less to improve capabilities to recover from cyberattacks.

Last week, the Pew Research Center released results of its latest survey. Key findings: while nearly nine-in-ten (89%) Americans have confidence in poll workers in their community to do a good job, 67% of Americans say it is very or somewhat likely that Russia (or other foreign governments) will try to influence the midterm elections, and less than half (45%) are very or somewhat confident that election systems are secure from hacking. The survey also found that younger voters (ages 18 - 29) are less likely to view voting as convenient, compared to older voters.

Oregon's process is more secure. There are no local, electronic DRE devices scattered across towns and cities that can be hacked or tampered with, and which provide no paper backups. If there is a question about the count, the paper ballots are stored in a secure place after the election, so elections officials can perform recounts when needed. According to the NBC News report, Oregon's Secretary of State, Dennis Richardson, said:

"You can't hack paper"

Oregon posts results online at results.oregonvotes.gov starting at 8:00 pm on Tuesday. Residents of Oregon can use the oregonvotes.gov site to check their voter record, track their ballot, find an official drop box, check election results, and find other relevant information.

Oregon's process sounds simple, comprehensive, more secure, and easy for voters. Voters don't have to stand in long lines, nor take time off from work to vote. If online retailers can reliably fulfill consumers' online purchases via package delivery, then elections officials in local towns and cities can -- and should -- do the same with paper ballots. Many states already provide absentee ballots via postal mail, so a mail-only process isn't a huge stretch.


When Fatal Crashes Can't Be Avoided, Who Should Self-Driving Cars Save? Or Sacrifice? Results From A Global Survey May Surprise You

Experts predict that there will be 10 million self-driving cars on the roads by 2020. Any outstanding issues need to be resolved before then. One outstanding issue is the "trolley problem": a situation where a fatal vehicle crash cannot be avoided, and the self-driving car must decide whether to save the passenger or a nearby pedestrian. Ethical issues with self-driving cars are not new. There are related issues, and some experts have called for a code of ethics.

Like it or not, the software in self-driving cars must be programmed to make decisions like this. Which person in a "trolley problem" should the self-driving car save? In other words, the software must be programmed with moral preferences which dictate which person to sacrifice.

The answer is tricky. You might assume: always save the passenger, since nobody would buy a self-driving car that would kill its owner. What if the pedestrian is crossing against a 'do not cross' signal within a crosswalk? Does the answer change if there are multiple pedestrians in the crosswalk? What if the pedestrians are children, elderly, or pregnant? Or a doctor? Does it matter if the passenger is older than the pedestrians?

To understand what the public wants -- and expects -- in self-driving cars, also known as autonomous vehicles (AV), researchers from MIT asked consumers in a massive online global survey. The survey included 2 million people from 233 countries and territories, and presented 13 accident scenarios with nine varying factors:

  1. "Sparing people versus pets/animals,
  2. Staying on course versus swerving,
  3. Sparing passengers versus pedestrians,
  4. Sparing more lives versus fewer lives,
  5. Sparing men versus women,
  6. Sparing the young versus the elderly,
  7. Sparing pedestrians who cross legally versus jaywalking,
  8. Sparing the fit versus the less fit, and
  9. Sparing those with higher social status versus lower social status."
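The trade-offs in the list above can be sketched as a toy scoring function: each possible outcome (the group saved if the car stays on course versus the group saved if it swerves) gets a score from weighted preference factors, and the higher-scoring outcome wins. The weights below are invented for illustration; they are not the study's fitted preference strengths.

```python
# Toy "moral preference" scoring for an unavoidable-crash dilemma.
# Weights are hypothetical -- the MIT study estimated real preference
# strengths from millions of survey responses.

WEIGHTS = {
    "human": 10.0,          # sparing humans over animals
    "per_life": 3.0,        # sparing more lives over fewer
    "young": 2.0,           # sparing the young over the elderly
    "legal_crossing": 1.0,  # sparing lawful pedestrians over jaywalkers
}

def outcome_score(group):
    """Score the group of characters saved by a given maneuver."""
    score = 0.0
    for person in group:
        if person.get("human", True):
            score += WEIGHTS["human"]
        score += WEIGHTS["per_life"]
        if person.get("young"):
            score += WEIGHTS["young"]
        if person.get("crossing_legally"):
            score += WEIGHTS["legal_crossing"]
    return score

def choose(save_if_stay, save_if_swerve):
    """Return 'stay' or 'swerve', whichever saves the higher-scoring group."""
    return "stay" if outcome_score(save_if_stay) >= outcome_score(save_if_swerve) else "swerve"

# One adult passenger (saved by staying) vs. two children crossing
# legally (saved by swerving):
passenger = [{"human": True}]
pedestrians = [{"human": True, "young": True, "crossing_legally": True},
               {"human": True, "young": True, "crossing_legally": True}]
print(choose(passenger, pedestrians))  # swerve
```

The design question the survey raises is exactly where those weights come from, and whose values they encode: the manufacturer's, a regulator's, or a region's.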

Besides recording the accident choices, the researchers also collected demographic information (e.g., gender, age, income, education, attitudes about religion and politics, geo-location) about the survey participants, in order to identify clusters: groups, areas, countries, territories, or regions containing people with similar "moral preferences."

Newsweek reported:

"The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to," Edmond Awad, lead author of the study from the MIT Media Lab, said in a statement. "We don't know yet how they should do that."

And the overall findings:

"First, human lives should be spared over those of animals; many people should be saved over a few; and younger people should be preserved ahead of the elderly."

These have implications for policymakers. The researchers noted:

"... given the strong preference for sparing children, policymakers must be aware of a dual challenge if they decide not to give a special status to children: the challenge of explaining the rationale for such a decision, and the challenge of handling the strong backlash that will inevitably occur the day an autonomous vehicle sacrifices children in a dilemma situation."

The researchers found regional differences about who should be saved:

"The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster."

The researchers also observed:

"... systematic differences between individualistic cultures and collectivistic cultures. Participants from individualistic cultures, which emphasize the distinctive value of each individual, show a stronger preference for sparing the greater number of characters. Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community, show a weaker preference for sparing younger characters... prosperity (as indexed by GDP per capita) and the quality of rules and institutions (as indexed by the Rule of Law) correlate with a greater preference against pedestrians who cross illegally. In other words, participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation... higher country-level economic inequality (as indexed by the country’s Gini coefficient) corresponds to how unequally characters of different social status are treated. Those from countries with less economic equality between the rich and poor also treat the rich and poor less equally... In nearly all countries, participants showed a preference for female characters; however, this preference was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable..."

This is huge. It makes one question the wisdom of a one-size-fits-all programming approach by AV makers wishing to sell cars globally. Citizens in clusters may resent an AV maker forcing its moral preferences upon them. Some clusters or countries may demand vehicles matching their moral preferences.

The researchers concluded (emphasis added):

"Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them... Our data helped us to identify three strong preferences that can serve as building blocks for discussions of universal machine ethics, even if they are not ultimately endorsed by policymakers: the preference for sparing human lives, the preference for sparing more lives, and the preference for sparing young lives. Some preferences based on gender or social status vary considerably across countries, and appear to reflect underlying societal-level preferences..."

And the researchers advised caution, given this study's limitations (emphasis added):

"Even with a sample size as large as ours, we could not do justice to all of the complexity of autonomous vehicle dilemmas. For example, we did not introduce uncertainty about the fates of the characters, and we did not introduce any uncertainty about the classification of these characters. In our scenarios, characters were recognized as adults, children, and so on with 100% certainty, and life-and-death outcomes were predicted with 100% certainty. These assumptions are technologically unrealistic, but they were necessary... Similarly, we did not manipulate the hypothetical relationship between respondents and characters (for example, relatives or spouses)... Indeed, we can embrace the challenges of machine ethics as a unique opportunity to decide, as a community, what we believe to be right or wrong; and to make sure that machines, unlike humans, unerringly follow these moral preferences. We might not reach universal agreement: even the strongest preferences expressed through the [survey] showed substantial cultural variations..."

Several important limitations to remember. And there are more. The study didn't address self-driving trucks. Should an AV tractor-trailer semi -- often called a robotruck -- carrying $2 million worth of goods sacrifice its load (and passenger) to save one or more pedestrians? What about one or more drivers on the highway? Does it matter if the other vehicles are motorcycles, school buses, or ambulances?

What about autonomous freighters? Should an AV cargo ship be programmed to sacrifice its $80 million load to save a pleasure craft? Does the size (e.g., number of passengers) of the pleasure craft matter? What if the other craft is a cabin cruiser with five persons? Or a cruise ship with 2,000 passengers and a crew of 800? What happens in international waters between AV ships from different countries programmed with different moral preferences?

Regardless, this MIT research seems invaluable. It's a good start. AV makers (e.g., autos, ships, trucks) need to explicitly state what their vehicles will (and won't) do. They shouldn't hide behind legalese similar to what exists today in too many online terms-of-use and privacy policies.

Hopefully, corporate executives and government policymakers will listen, consider the limitations, demand follow-up research, and not dive headlong into the AV pool without looking first. After reading this study, it struck me that similar research would have been wise before building a global social media service, since people in different countries or regions have varying preferences about online privacy, information sharing, and corporate surveillance. What are your opinions?


More Consequences From The Phony-Accounts Scandal At Wells Fargo Bank

Wells Fargo logo Consequences continue after the bank's phony-accounts scandal. Last week, Wells Fargo announced several changes in senior management:

"Chief Administrative Officer Hope Hardison and Chief Auditor David Julian have begun leaves of absence from Wells Fargo and will no longer be members of the company’s Operating Committee. These leaves relate to previously disclosed ongoing reviews by regulatory agencies in connection with historical retail banking sales practices. These leaves of absence are unrelated to the company’s reported financial results..."

An investigation in 2017 found a new total of 3.5 million phony consumer and small-business accounts set up by employees trying to game an internal sales compensation system. The phony accounts, many of which incurred fees and charges, had been set up without customers' knowledge or approval. Under a 2016 settlement agreement with the Consumer Financial Protection Bureau (CFPB), Wells Fargo paid a $185 million fine for alleged unlawful sales practices involving the 1.5 million phony accounts known at that time. In 2016, about 5,300 mostly lower-level employees were fired as a result of the scandal.

The latest announcement listed more executive changes:

"David Galloreese continues as head of Human Resources and will report directly to CEO and President Tim Sloan and join the Operating Committee. Cara Peck, who heads the Culture and Change Management teams, will report directly to Galloreese.

Jim Rowe continues as head of Stakeholder Relations and will report directly to Sloan. Stakeholder Relations will expand to include Corporate Philanthropy and Community Relations, headed by Jon Campbell... Kimberly Bordner, currently executive audit director, will become the company’s acting Chief Auditor..."

The bank is conducting an executive search for a new Chief Auditor.

Executives at the bank have plenty to fix. In April, federal regulators assessed a $1 billion fine against the bank for violations of the Consumer Financial Protection Act (CFPA) in the way it administered mandatory insurance for auto loans. In August, reports surfaced that, due to a software bug, the bank had foreclosed on 400 homeowners it shouldn't have.

In June 2017, U.S. Senator Elizabeth Warren (D-Massachusetts) called for the firing of all 12 board members at Wells Fargo bank for failing to adequately protect account holders. Let's hope these latest senior executive changes bring about needed reforms.


Yahoo Agrees To $50 Million Payment To Settle 2013 Breach

Fortune magazine reported that Yahoo:

"... has agreed to pay a $50 million settlement to roughly 200 million people affected by the email service’s 2013 data breach... Up to 3 billion accounts had their emails and other personal information stolen in the hacking, but the settlement filed late Monday only applies to an estimated 1 billion accounts, held by 200 million people in the United States and Israel between 2012 and 2016... A hearing to approve this proposed end to the two-year lawsuit will be held in California on Nov. 29. If approved, the affected account holders will be emailed a notice."


Security Researcher Warns 'Hack A Smart Meter And Kill The Grid'

Everyone has utility meters that measure their gas and/or electricity consumption. Many of those are smart meters, installed in homes in both the United States and Britain. How secure are smart meters? First, some background, since few consumers know what's installed in their homes.

According to the U.S. Energy Information Administration (EIA), there were 78.9 million smart meters installed in the United States by 2017, with residential installations accounting for 88 percent of that total. So, about half of all electricity customers in the United States use smart utility meters.

All smart meters wirelessly transmit usage to the utility provider. That's why you never see utility technicians visiting homes to manually "read" utility meters. There are two types of smart meters. In 2013, the number of two-way (AMI) smart meters in the United States exceeded the number of one-way (AMR) smart meters. The EIA explained:

"Two-way or AMI (advanced metering infrastructure) meters allow utilities and customers to interact to support smart consumption applications...The deployment and use of AMI and AMR meters vary greatly by state. Only 5 states had an AMI penetration rate above 80 percent. High penetration rates are seen in northern New England, Western states, Georgia and Texas. California added the most AMI meters of any state in 2013... There were 6 states with AMR penetration rates above 80 percent with Rhode Island the leader at 95 percent. The highest penetration rates are in the Rocky Mountain, Upper Plains and Southern Atlantic states. New York added nearly 540,000 AMR meters in 2013. Pennsylvania lost 580,000 AMR meters, but gained almost 620,000 AMI meters..."

Chart with 2-way smart utility meter penetration by state in 2013 by EIA. See the chart on the right for two-way installations by state. Readers of this blog are aware of the privacy issues. A few states allow users to opt out of smart meters. Because the wireless transmissions occur throughout each month, smart meters can be used to profile consumers with a high degree of accuracy (e.g., the number of persons living in the dwelling, when you're home and for how long, which electric appliances are used, the presence of security and alarm systems, special equipment such as in-home medical equipment, etc.).
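To make the profiling risk concrete, here is a minimal sketch (the threshold and readings below are invented) of how hourly consumption data alone can reveal when a household is awake and active:

```python
# Hypothetical sketch of smart-meter profiling: inferring occupancy from
# hourly interval readings. The baseline threshold and data are invented.
def occupied_hours(readings_kwh: list[float], baseline_kwh: float = 0.5) -> list[int]:
    """Return hour indices whose consumption exceeds the always-on baseline
    (fridge, standby loads), suggesting someone is home and active."""
    return [h for h, kwh in enumerate(readings_kwh) if kwh > baseline_kwh]

# 24 hourly readings: low overnight and during the workday,
# spikes in the morning (hours 6-7) and evening (hours 17-20).
day = [0.2]*6 + [1.1, 1.4] + [0.25]*9 + [0.9, 1.6, 1.8, 1.2] + [0.4, 0.3, 0.2]
print(occupied_hours(day))  # -> [6, 7, 17, 18, 19, 20]
```

Even this naive one-line rule recovers a morning and evening routine; real analytics on minute-level data can go much further, which is why the month-long stream of transmissions matters for privacy.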

Now, back to Britain. It seems that the security of smart utility meters is questionable. Smart Grid Awareness follows the security and privacy issues associated with smart utility meters. The site reported that a British security expert:

"... is more convinced than ever that evidence now exists that rogue chips may be embedded into electronic circuit boards during the manufacturing process, such as those contained within utility smart meters. Smart meters can be considered high value targets for hackers due to the existence of the “remote disconnect” feature included as an option for most smart meters deployed today."

So, smart meters are part of the "smart power grid." If smart utility meters can be hacked, then the power grid -- and residential utility services -- can be interrupted or disabled. The remote-disconnect feature allows the utility to remotely turn off a meter. Smart meters in the United States have this feature, too. Reportedly, utilities say that they've disabled the feature. However, if the supply chain has been compromised with hacked chips (as this Bloomberg report claimed with computers), then bad actors may indeed be able to turn on and use the remote-disconnect feature in smart utility meters.

You can read for yourself the report by security researcher Nick Hunn. Clearly, there is more news to come about this. I look forward to energy providers assuring consumers how they've protected their supply chains. What do you think?


Survey: Most Home Users Satisfied With Voice-Controlled Assistants. Tech Adoption Barriers Exist

Recent survey results reported by MediaPost:

"Amazon Alexa and Google Assistant have the highest satisfaction levels among mobile users, each with an 85% satisfaction rating, followed by Siri and Bixby at 78% and Microsoft’s Cortana at 77%... As found in other studies, virtual assistants are being used for a range of things, including looking up things on the internet (51%), listening to music (48%), getting weather information (46%) and setting a timer (35%)... Smart speaker usage varies, with 31% of Amazon device owners using their speaker at least a few times a week, Google Home owners 25% and Apple HomePod 18%."

Additional survey results are available at Digital Trends and Experian. PWC found:

"Only 10% of surveyed respondents were not familiar with voice-enabled products and devices. Of the 90% who were, the majority have used a voice assistant (72%). Adoption is being driven by younger consumers, households with children, and households with an income of >$100k... Despite being accessible everywhere, three out of every four consumers (74%) are using their mobile voice assistants at home..."

Consumers seem to want privacy when using voice assistants, so usage tends to occur at home and not in public places. Also:

"... the bulk of consumers have yet to graduate to more advanced activities like shopping or controlling other smart devices in the home... 50% of respondents have made a purchase using their voice assistant, and an additional 25% would consider doing so in the future. The majority of items purchased are small and quick... Usage will continue to increase but consistency must improve for wider adoption... Some consumers see voice assistants as a privacy risk... When forced to choose, 57% of consumers said they would rather watch an ad in the middle of a TV show than listen to an ad spoken by their voice assistant..."

Consumers want control over the presentation of advertisements by voice assistants. Control options desired include skip, select, never while listening to music, only at pre-approved times, customized based upon interests, seamless integration, and match to preferred brands. Thirty-eight percent of survey respondents said that they "don't want something 'listening in' on my life all the time."

What are your preferences with voice assistants? Any privacy concerns?


Billions Of Data Points About Consumers Exposed During Data Breach At Data Aggregator

It's not only social media companies and credit reporting agencies that experience data breaches where massive amounts of sensitive, personal information about millions of consumers are exposed and/or stolen. Data aggregators and analytics firms also have data breaches. Wired Magazine reported:

"The sales intelligence firm Apollo sent a notice to its customers disclosing a data breach it suffered over the summer... Apollo is a data aggregator and analytics service aimed at helping sales teams know who to contact, when, and with what message to make the most deals... Apollo also claims in its marketing materials to have 200 million contacts and information from over 10 million companies in its vast reservoir of data. That's apparently not just spin. Night Lion Security founder Vinny Troia, who routinely scans the internet for unprotected, freely accessible databases, discovered Apollo's trove containing 212 million contact listings as well as nine billion data points related to companies and organizations. All of which was readily available online, for anyone to access. Troia disclosed the exposure to the company in mid-August."

This is especially problematic for several reasons. First, data aggregators like Apollo (and social media companies and credit reporting agencies) are high-value targets: plenty of data is stored in one location. That's both convenient and risky. It also places a premium upon data security.

When data like this is exposed or stolen, it makes it easy for fraudsters, scammers, and spammers to create sophisticated and more effective phishing (and vishing) attacks to trick consumers and employees into revealing sensitive payment and financial information.

Second, data breaches like this make it easier for governments' intelligence agencies to compile data about persons and targets. Third, Apollo's database reportedly also contained sensitive data about clients. That's proprietary information. Wired explained:

"Some client-imported data was also accessed without authorization... Customers access Apollo's data and predictive features through a main dashboard. They also have the option to connect other data tools they might use, for example authorizing their Salesforce accounts to port data into Apollo..."

Salesforce, a customer relationship management (CRM) platform, uses cloud services and other online technologies to help its clients, companies with sales representatives, manage their sales, service, and marketing activities. This breach also suggests that some employee training is needed about what to, and what not to, upload to outsourcing vendors' sites. What do you think?


Data Breach Affects 75,000 Healthcare.gov Users

On Friday, the Centers For Medicare and Medicaid Services (CMS) announced a data breach at a computer system which interacts with the Healthcare.gov site. Files for about 75,000 users -- agents and brokers -- were accessed by unauthorized persons. The announcement stated:

"Earlier this week, CMS staff detected anomalous activity in the Federally Facilitated Exchanges, or FFE’s Direct Enrollment pathway for agents and brokers. The Direct Enrollment pathway, first launched in 2013, allows agents and brokers to assist consumers with applications for coverage in the FFE... CMS began the initial investigation of anomalous system activity in the Direct Enrollment pathway for agents and brokers on October 13, 2018 and a breach was declared on October 16, 2018. The agent and broker accounts that were associated with the anomalous activity were deactivated, and – out of an abundance of caution – the Direct Enrollment pathway for agents and brokers was disabled."

CMS has notified and is working with Federal law enforcement. It expects to restore the Direct Enrollment pathway for agents and brokers within the next 7 days, before the start of the sign-up period on November 1st for health care coverage under the Affordable Care Act.

CMS Administrator Seema Verma said:

"I want to make clear to the public that HealthCare.gov and the Marketplace Call Center are still available, and open enrollment will not be negatively impacted. We are working to identify the individuals potentially impacted as quickly as possible so that we can notify them and provide resources such as credit protection."

Sadly, data breaches happen -- all too often within government agencies and corporations. It should be noted that this breach was detected quickly -- within 3 days. Other data breaches have gone undetected for weeks or months; and too many corporate data breaches affected millions.



New York State Attorney General Expands Investigation Into Fraudulent 'Net Neutrality' Comments Submitted To FCC

The Attorney General (AG) for New York State has expanded its fraud investigation regarding net neutrality comments submitted to the U.S. Federal Communications Commission (FCC) website in 2017. The New York Times reported that the New York State AG has:

"... subpoenaed more than a dozen telecommunications trade groups, lobbying contractors and Washington advocacy organizations on Tuesday, seeking to determine whether the groups submitted millions of fraudulent public comments to sway a critical federal decision on internet regulation... The attorney general, Barbara D. Underwood, is investigating the source of more than 22 million public comments submitted to the F.C.C. during the battle over the regulations. Millions of comments were provided using temporary or duplicate email addresses, while others recycled identical phrases. Seven popular comments, repeated verbatim, accounted for millions more. The noise from the fake or orchestrated comments appears to have broadly favored the telecommunications industry..."

Also this month, the Center For Internet & Society reported the results of a study at Stanford University (bold emphasis added):

"In the leadup to the FCC's historic vote in December 2017 to repeal all net neutrality protections, 22 million comments were filed to the agency. But unfortunately, millions of those comments were fake. Some of the fake comments were part of sophisticated campaigns that filed fake comments using the names of real people - including journalists, Senators and dead people. The FCC did nothing to try to prevent comment stuffing and comment fraud, and even after the vote, made no attempt to help the public, journalists, and policy makers actually understand what Americans actually told the FCC... This report used the 800,000 comments Kao identified as semantic standouts from form letter and fraud campaigns. These unique comments were overwhelmingly in support of keeping the 2015 Open Internet Order - in fact, 99.7% of comments opposed the repeal of net neutrality protections. This report then matched and sorted those comments to geographic areas, including the 50 states and every Congressional District..."

An investigation in 2017 by the New York State AG found that about 2 million of the comments submitted to the FCC about net neutrality "stole real Americans' identities." A follow-up investigation found that more than 9 million comments "used stolen identities."

The FCC, led by Trump appointee Ajit Pai, a former Verizon lawyer, repealed both broadband privacy and net neutrality protections for consumers last year. The FCC has ignored requests to investigate comments fraud. A December 2017 study of 1,077 voters found that most want net neutrality protections. President Trump signed the privacy-rollback legislation in April 2017. A prior blog post listed many historical abuses of consumers by some ISPs.

Some of the organizations subpoenaed by the New York State AG include (links added):

"... Broadband for America, Century Strategies, and MediaBridge. Broadband for America is a coalition supported by cable and telecommunications companies; Century Strategies is a political consultancy founded by Ralph Reed, the former director of the Christian Coalition; and MediaBridge is a conservative messaging firm..."

Reportedly, the New York AG has requested information from groups that both opposed and supported net neutrality protections. The New York AG operates a website where consumers can check for fake comments submitted to the FCC. (When you check, enter your name in quotes for a more precise search. And check the street address, since many people have the same name.) I checked. You can read my valid comment submitted to the FCC.

This whole affair is another reminder of how to attack and undermine a democracy by abusing online tools. A prior post discussed how social media has been abused.