
New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy at the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach of the Federal Communications Commission. It is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that this approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of the Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators, allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, the infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns about the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a military drone program that uses AI to identify people and objects in surveillance footage. Reportedly, these are the first known mass resignations at the company.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance and draft a clear corporate policy that neither it nor its contractors will ever build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.”

With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor wanted -- that their personal e-mail, photos, videos, calendars, social media activity, map usage, and more would be used for automated military applications. What are your opinions?


U.S. Senate Approves Resolution To Reinstate Net Neutrality Rules. FCC Chairman Pai Repeats Claims While Ignoring Consumers

Yesterday, the United States Senate approved a bipartisan resolution to preserve net neutrality rules, the set of internet protections established in 2015 which require wireless and wired internet service providers (ISPs) to give customers equal access to all websites. That meant no throttling, blocking, or slowing of selected sites, and no prioritizing of internet traffic into "fast" or "slow" lanes.

Earlier this month, the Federal Communications Commission (FCC) said that current net neutrality rules would expire on June 11, 2018. Politicians promised that tax cuts would create new jobs, and that repeal of net neutrality rules would encourage investments by ISPs. FCC Chairman Ajit Pai, appointed by President Trump, released a statement on May 10, 2018:

"Now, on June 11, these unnecessary and harmful Internet regulations will be repealed and the bipartisan, light-touch approach that served the online world well for nearly 20 years will be restored. The Federal Trade Commission will once again be empowered to target any unfair or deceptive business practices of Internet service providers and to protect American’s broadband privacy. Armed with our strengthened transparency rule, we look forward to working closely with the FTC to safeguard a free and open Internet. On June 11, we will have a framework in place that encourages innovation and investment in our nation’s networks so that all Americans, no matter where they live, can have access to better, cheaper, and faster Internet access and the jobs, opportunities, and platform for free expression that it provides. And we will embrace a modern, forward-looking approach that will help the United States lead the world in 5G..."

Chairman Pai's claims sound hollow, since reality says otherwise. Telecommunications companies have fired workers and reduced staff despite getting tax cuts, broadband privacy repeal, and net neutrality repeal. In December, more than 1,000 startups and investors signed an open letter to Pai opposing the elimination of net neutrality. Entrepreneurs and executives are concerned that the loss of net neutrality will harm or hinder start-up businesses.

CNet provided a good overview of events surrounding the Senate's resolution:

"Democrats are using the Congressional Review Act to try to halt the FCC's December repeal of net neutrality. The law gives Congress 60 legislative days to undo regulations imposed by a federal agency. What's needed to roll back the FCC action are simple majorities in both the House and Senate, as well as the president's signature. Senator Ed Markey (Democrat, Massachusetts), who's leading the fight in the Senate to preserve the rules, last week filed a so-called discharge petition, a key step in this legislative effort... Meanwhile, Republican lawmakers and broadband lobbyists argue the existing rules hurt investment and will stifle innovation. They say efforts by Democrats to stop the FCC's repeal of the rules do nothing to protect consumers. All 49 Democrats in the Senate support the effort to undo the FCC's vote. One Republican, Senator Susan Collins of Maine, also supports the measure. One more Republican is needed to cross party lines to pass it."

"No touch" is probably a more accurate description of the internet under Chairman Pai's leadership, given many historical problems and abuses of consumers by some ISPs. The loss of net neutrality protections will likely result in huge price increases for internet access for consumers, which will also hurt public libraries, the poor, and disabled users. The loss of net neutrality will allow ISPs the freedom to carve up, throttle, block, and slow down the internet traffic they choose, while consumers will lose the freedom to use as they choose the broadband service they've paid for. And, don't forget the startup concerns above.

After the Senate's vote, FCC Chairman Pai released this statement:

“The Internet was free and open before 2015, when the prior FCC buckled to political pressure from the White House and imposed utility-style regulation on the Internet. And it will continue to be free and open once the Restoring Internet Freedom Order takes effect on June 11... our light-touch approach will deliver better, faster, and cheaper Internet access and more broadband competition to the American people—something that millions of consumers desperately want and something that should be a top priority. The prior Administration’s regulatory overreach took us in the opposite direction, reducing investment in broadband networks and particularly harming small Internet service providers in rural and lower-income areas..."

The internet was free and open before 2015? Mr. Pai is guilty of revisionist history. The lack of ISP competition in key markets means consumers in the United States pay more for broadband and get slower speeds than consumers in other countries. There were numerous complaints by consumers about usage-based Internet pricing. There were privacy abuses and settlement agreements by ISPs involving technologies such as deep-packet inspection and 'Supercookies' to track customers online, despite consumers' wishes not to be tracked. Many consumers didn't get the broadband speeds their ISPs promised. Some consumers sued their ISPs, and the New York State Attorney General had residents check their broadband speed with this tool.

Tim Berners-Lee, the inventor of the World Wide Web, cited three reasons why the internet is in trouble. His number one reason: consumers have lost control of their personal information.

There's more. Some consumers found that their ISP hijacked their online search results without notice or consent. An ISP in Kansas admitted in 2008 to secret snooping after pressure from Congress. Given this, something had to be done. The FCC stepped up to the plate and acted when it was legally able to, reclassifying broadband after open hearings. Proposed rules were circulated prior to adoption. It was done in the open.

Yet, Chairman Pai would have us now believe the internet was free and open before 2015, and that regulation was unnecessary. I say BS.

FCC Commissioner Jessica Rosenworcel released a statement yesterday:

"Today the United States Senate took a big step to fix the serious mess the FCC made when it rolled back net neutrality late last year. The FCC's net neutrality repeal gave broadband providers extraordinary new powers to block websites, throttle services and play favorites when it comes to online content. This put the FCC on the wrong side of history, the wrong side of the law, and the wrong side of the American people. Today’s vote is a sign that the fight for internet freedom is far from over. I’ll keep raising a ruckus to support net neutrality and I hope others will too."

A mess, indeed, created by Chairman Pai. A December 2017 study of 1,077 voters found that most want net neutrality protections:

Do you favor or oppose the proposal to give ISPs the freedom to: a) provide websites the option to give their visitors the ability to download material at a higher speed, for a fee, while providing a slower speed for other websites; b) block access to certain websites; and c) charge their customers an extra fee to gain access to certain websites?
Group          Favor    Opposed    Refused/Don't Know
National       15.5%    82.9%      1.6%
Republicans    21.0%    75.4%      3.6%
Democrats      11.0%    88.5%      0.5%
Independents   14.0%    85.9%      0.1%

Why did the FCC, President Trump, and most GOP politicians pursue the elimination of net neutrality protections despite consumers' wishes otherwise? For the same reasons they repealed broadband privacy protections despite most consumers wanting broadband privacy. (Remember, President Trump signed the privacy-rollback legislation in April 2017.) They are doing the bidding of the corporate ISPs at the expense of consumers. Profits before people. Whenever Mr. Pai mentions a "free and open internet," he's referring to corporate ISPs and not consumers. What do you think?


News Media Alliance Challenges Tech Companies To 'Accept Accountability' And Responsibility For Filtering News In Their Platforms

Last week, David Chavern, the President and CEO of the News Media Alliance (NMA), testified before the House Judiciary Committee. The NMA is a nonprofit trade association representing over 2,000 news organizations across the United States. Mr. Chavern's testimony focused upon the problem of fake news, often aided by social networking platforms.

His comments first described current conditions:

"... Quality journalism is essential to a healthy and functioning democracy -- and my members are united in their desire to fight for its future.

Too often in today’s information-driven environment, news is included in the broad term "digital content." It’s actually much more important than that. While some low-quality entertainment or posts by friends can be disappointing, inaccurate information about world events can be immediately destructive. Civil society depends upon the availability of real, accurate news.

The internet represents an extraordinary opportunity for broader understanding and education. We have never been more interconnected or had easier and quicker means of communication. However, as currently structured, the digital ecosystem gives tremendous viewpoint control and economic power to a very small number of companies – the tech platforms that distribute online content. That control and power must come with new responsibilities... Historically, newspapers controlled the distribution of their product; the news. They invested in the journalism required to deliver it, and then printed it in a form that could be handed directly to readers. No other party decided who got access to the information, or on what terms. The distribution of online news is now dominated by the major technology platforms. They decide what news is delivered and to whom – and they control the economics of digital news..."

Last month, a survey found that roughly two-thirds of U.S. adults (68%) use Facebook.com, and about three-quarters of those use the social networking site daily. In 2016, a survey found that 62 percent of adults in the United States get their news from social networking sites. The corresponding statistic in 2012 was 49 percent. That 2016 survey also found that fewer social media users get their news from other platforms: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites/apps (28 percent), radio (25 percent), and print newspapers (20 percent).

Mr. Chavern then described the problems with two specific tech companies:

"The First Amendment prohibits the government from regulating the press. But it doesn’t prevent Facebook and Google from acting as de facto regulators of the news business.

Neither Google nor Facebook are – or have ever been – "neutral pipes." To the contrary, their businesses depend upon their ability to make nuanced decisions through sophisticated algorithms about how and when content is delivered to users. The term “algorithm” makes these decisions seem scientific and neutral. The fact is that, while their decision processes may be highly-automated, both companies make extensive editorial judgments about accuracy, relevance, newsworthiness and many other criteria.

The business models of Facebook and Google are complex and varied. However, we do know that they are both immense advertising platforms that sell people’s time and attention. Their "secret algorithms" are used to cultivate that time and attention. We have seen many examples of the types of content favored by these systems – namely, click-bait and anything that can generate outrage, disgust and passion. Their systems also favor giving users information like that which they previously consumed, thereby generating intense filter bubbles and undermining common understandings of issues and challenges.

All of these things are antithetical to a healthy news business – and a healthy democracy..."

Earlier this month, Apple and Facebook executives exchanged criticisms about each other's business models and privacy practices. Mr. Chavern's testimony before Congress also described more problems and threats:

"Good journalism is factual, verified and takes into account multiple points of view. It can take a lot of time and investment. Most particularly, it requires someone to take responsibility for what is published. Whether or not one agrees with a particular piece of journalism, my members put their names on their product and stand behind it. Readers know where to send complaints. The same cannot be said of the sea of bad information that is delivered by the platforms in paid priority over my members’ quality information. The major platforms’ control over distribution also threatens the quality of news for another reason: it results in the “commoditization” of news. Many news publishers have spent decades – often more than a century – establishing their brands. Readers know the brands that they can trust — publishers whose reporting demonstrates the principles of verification, accuracy and fidelity to facts. The major platforms, however, work hard to erase these distinctions. Publishers are forced to squeeze their content into uniform, homogeneous formats. The result is that every digital publication starts to look the same. This is reinforced by things like the Google News Carousel, which encourages users to flick back and forth through articles on the same topic without ever noticing the publisher. This erosion of news publishers’ brands has played no small part in the rise of "fake news." When hard news sources and tabloids all look the same, how is a customer supposed to tell the difference? The bottom line is that while Facebook and Google claim that they do not want to be "arbiters of truth," they are continually making huge decisions on how and to whom news content is delivered. These decisions too often favor free and commoditized junk over quality journalism. The platforms created by both companies could be wonderful means for distributing important and high-quality information about the world. But, for that to happen, they must accept accountability for the power they have and the ultimate impacts their decisions have on our economic, social and political systems..."

Download Mr. Chavern's complete testimony. Industry watchers argue that recent changes by Facebook have hurt local news organizations. MediaPost reported:

"When Facebook changed its algorithm earlier this year to focus on “meaningful” interactions, publishers across the board were hit hard. However, local news seemed particularly vulnerable to the alterations. To assuage this issue, the company announced that it would prioritize news related to local towns and metro areas where a user resided... To determine how positively that tweak affected local news outlets, the Tow Center measured interactions for posts from publications coming from 13 metro areas... The survey found that 11 out of those 13 have consistently seen a drop in traffic between January 1 and April 1 of 2018, allowing the results to show how outlets are faring nine weeks after the algorithm change. According to the Tow Center study, three outlets saw interactions on their pages decrease by a dramatic 50%. These include The Dallas Morning News, The Denver Post, and The San Francisco Chronicle. The Atlanta Journal-Constitution saw interactions drop by 46%."

So, huge problems persist.

Early in my business career, I had the opportunity to develop and market an online service using content from Dow Jones News/Retrieval. That experience taught me that news -- hard news -- includes the who, what, when, and where of what happened. Everything else is either opinion, commentary, analysis, an advertisement, or fiction. And, it is critical to know the differences and learn to spot each type. Otherwise, you are likely to be misled, misinformed, or fooled.


Federal Regulators Assess $1 Billion Fine Against Wells Fargo Bank

On Friday, several federal regulators announced the assessment of a $1 billion fine against Wells Fargo Bank for violations of the "Consumer Financial Protection Act (CFPA) in the way it administered a mandatory insurance program related to its auto loans..."

The Consumer Financial Protection Bureau (CFPB) announced the fine and settlement with Wells Fargo Bank, N.A., and its coordinated action with the Office of the Comptroller of the Currency (OCC). The announcement stated that the CFPB:

"... also found that Wells Fargo violated the CFPA in how it charged certain borrowers for mortgage interest rate-lock extensions. Under the terms of the consent orders, Wells Fargo will remediate harmed consumers and undertake certain activities related to its risk management and compliance management. The CFPB assessed a $1 billion penalty against the bank and credited the $500 million penalty collected by the OCC toward the satisfaction of its fine."

This is not the first fine against Wells Fargo. The bank paid a $185 million fine in 2016 to settle charges of alleged unlawful sales practices during the prior five years. To game an internal sales system, employees allegedly created about 1.5 million bogus bank accounts, and both issued and activated debit cards associated with the secret accounts. Then, employees also created PINs for the accounts, all without customers' knowledge or consent. An investigation in 2017 found 1.4 million more bogus accounts than originally identified in 2016. Also in 2017, irregularities were reported about how the bank handled mortgages.

The OCC explained that it took action:

"... given the severity of the deficiencies and violations of law, the financial harm to consumers, and the bank’s failure to correct the deficiencies and violations in a timely manner. The OCC found deficiencies in the bank’s enterprise-wide compliance risk management program that constituted reckless, unsafe, or unsound practices and resulted in violations of the unfair practices prong of Section 5 of the Federal Trade Commission (FTC) Act. In addition, the agency found the bank violated the FTC Act and engaged in unsafe and unsound practices relating to improper placement and maintenance of collateral protection insurance policies on auto loan accounts and improper fees associated with interest rate lock extensions. These practices resulted in consumer harm which the OCC has directed the bank to remediate.

The $500 million civil money penalty reflects a number of factors, including the bank’s failure to develop and implement an effective enterprise risk management program to detect and prevent the unsafe or unsound practices, and the scope and duration of the practices..."

MarketWatch explained the bank's unfair and unsound practices:

"When consumers buy a vehicle through a lender, the lender often requires the consumer to also purchase “collateral protection insurance.” That means the vehicle itself is collateral — or essentially, could be repossessed — if the loan is not paid... Sometimes, the fine print of the contracts say that if borrowers do not buy their own insurance (enough to satisfy the terms of the loan), the lender will go out and purchase that insurance on their behalf, and charge them for it... That is a legal practice. But in the case of Wells Fargo, borrowers said they actually did buy that insurance, and Wells Fargo still bought more insurance on their behalf and charged them for it."

So, the bank forced consumers to buy unwanted and unnecessary auto insurance. The lesson for consumers: don't accept the first auto loan offered, and closely read the fine print of contracts from lenders. Wells Fargo said in a news release that it:

"... will adjust its first quarter 2018 preliminary financial results by an additional accrual of $800 million, which is not tax deductible. The accrual reduces reported first quarter 2018 net income by $800 million, or $0.16 cents per diluted common share, to $4.7 billion, or 96 cents per diluted common share. Under the consent orders, Wells Fargo will also be required to submit, for review by its board, plans detailing its ongoing efforts to strengthen its compliance and risk management, and its approach to customer remediation efforts."

Kudos to the OCC and CFPB for taking this action against a bank with a spotty history. Will executives at Wells Fargo learn their lessons from the massive fine? The Washington Post reported that the bank will:

"... benefit from a massive corporate tax cut passed by Congress last year. he bank’s effective tax rate this year will fall from about 33 percent to 22 percent, according to a Goldman Sachs analysis released in December. The change could boost its profits by 18 percent, according to the analysis. Just in the first quarter, Wells Fargo’s effective tax rate fell from about 28 percent to 18 percent, saving it more than $600 million. For the entire year, the tax cut is expected to boost the company’s profits by $3.7 billion..."

So, don't worry about the bank. Its tax savings will easily offset the fine, which makes one doubt the fine is a sufficient deterrent. And, I found the OCC's announcement forceful and appropriate, while the CFPB's announcement seemed to soft-pedal things by saying the absolute minimum.
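To see why, here is a minimal back-of-the-envelope sketch in Python. The quarterly pre-tax income figure is a hypothetical chosen only so the numbers line up with the quoted $600 million of first-quarter savings; it is not a reported figure.

```python
# Rough, illustrative arithmetic only -- not Wells Fargo's reported figures.
# Hypothetical assumption: quarterly pre-tax income sized so that a drop in the
# effective tax rate from 28% to 18% yields roughly $600 million in savings.

pretax_income_q1 = 6_000_000_000        # hypothetical: $600M / (0.28 - 0.18)
old_rate, new_rate = 0.28, 0.18

q1_tax_savings = pretax_income_q1 * (old_rate - new_rate)
full_year_profit_boost = 3_700_000_000  # per the Washington Post / Goldman Sachs figure
fine_accrual = 800_000_000              # first-quarter accrual tied to the $1B penalty

print(f"Illustrative Q1 tax savings: ${q1_tax_savings:,.0f}")
print(f"Full-year tax boost vs. fine accrual: ${full_year_profit_boost:,.0f} vs ${fine_accrual:,.0f}")
```

On these numbers, the annual tax benefit cited above is more than four times the fine accrual.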

What do you think? Will the fine curb executive wrongdoing?


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.
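For a sense of scale, here is a purely illustrative Python sketch. It assumes, hypothetically, that each affected user counts as a single violation for a single day; how violations would actually be counted under the consent decree is a legal question this article does not settle.

```python
# Illustrative only: how a per-violation, per-day penalty can scale.
# Hypothetical assumption: one violation per affected user, for one day.

PENALTY_PER_VIOLATION_PER_DAY = 16_000   # dollars, per the 2011 settlement clause
affected_users = 87_000_000              # Facebook's revised estimate

theoretical_exposure = PENALTY_PER_VIOLATION_PER_DAY * affected_users
print(f"Theoretical exposure: ${theoretical_exposure:,.0f}")  # roughly $1.4 trillion
```

Even far more conservative counting assumptions would leave the kind of ten-figure penalty predicted below within reach.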

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said that a similar enforcement arm that would work with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — or about 50 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)
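A quick bit of arithmetic shows how a quiz taken by 270,000 people could expose tens of millions. The sketch below assumes no overlap between friend lists, which overstates each quiz taker's unique reach, so treat it as illustrative only.

```python
# Back-of-the-envelope math on friend-network amplification (illustrative only).
# Assumes no overlap between quiz takers' friend lists.

quiz_takers = 270_000
people_exposed = 50_000_000   # Facebook later revised the total to 87 million

implied_unique_friends = people_exposed / quiz_takers
print(f"Implied unique friends per quiz taker: {implied_unique_friends:.0f}")  # about 185
```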

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is that a company’s terms of service contain a blanket statement of consent that users never read or agree to.

James Grimmelmann, a law professor and computer scientist, argued in a 2015 paper that the technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelmann said in an interview, “we should ask for appropriate supervisions for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook.com has dominated the news during the past three weeks. The news media have reported about many issues, but there are more -- whether or not you use Facebook. Things began about mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."

So, data that should have been deleted wasn't. Worse, Facebook relied upon certifications from entities that had lied previously. Not good. Then, Facebook posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether or not the data which the perpetrators said was destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder how many other developers, and corporate and academic users, have violated Facebook's rules by sharing sensitive members' data they shouldn't have.

Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.
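To illustrate item 4, here is a minimal sketch of what an app might fetch after the change, using Python and Facebook's Graph API. The access token is a placeholder from a hypothetical Login flow, and the exact set of fields available without app review is an assumption based on the wording of the announcement (name, profile photo, and email address).

```python
# Minimal sketch: requesting only the basic profile fields that, per the
# announcement, an app can obtain without going through app review.
# ACCESS_TOKEN is a hypothetical placeholder; a real token comes from Facebook Login.
import requests

ACCESS_TOKEN = "<user-access-token>"
GRAPH_URL = "https://graph.facebook.com/v2.12/me"

response = requests.get(
    GRAPH_URL,
    params={
        "fields": "name,picture,email",   # name, profile photo, email address
        "access_token": ACCESS_TOKEN,
    },
)
print(response.json())
```

Anything beyond these basics -- friend lists, likes, posts -- would require Facebook's app review under the announced policy.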

Those actions seem good, but too little too late. Facebook needs to do more... perhaps revise its Terms of Use to include large fines for violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure sensitive members' information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics with political campaigns, it seems wise to assume that the practice will continue with improvements. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. I don't. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions -- who gets what protections, and where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect the users' data it collects. Some of its privacy snafus: the settlement of a lawsuit over alleged privacy abuses by its Beacon advertising program, changing members' ad settings without notice or consent, an advertising platform that allegedly facilitates discrimination against older workers, health and privacy concerns about a new service for children ages 6 to 13, transparency concerns about political ads, and new lawsuits over the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now, we have yet another apology.

In a press release this afternoon, Facebook revised upward the number of people affected by the Facebook/CA breach from 50 million to 87 million. Most, about 70.6 million, are in the United States. The breakdown by country:

[Chart: Number of affected persons by country in the Facebook - Cambridge Analytica breach]

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on, which is specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too, especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg will testify before the U.S. Congress on April 11th. His testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to those in the European Union. And, there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation?" Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And, that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, if it refuses to provide protections equal to those in the European Union, or if it backtracks later on its promises. Thoughts? Comments?


The 'CLOUD Act' - What It Is And What You Need To Know

Chances are you have not heard of the "CLOUD Act." I hadn't heard about it until recently. A draft of the legislation is available on the website of U.S. Senator Orrin Hatch (Republican, Utah).

Many people who already use cloud services to store and back up data might assume: if it has to do with the cloud, then it must be good. Such an assumption would be foolish. The bill's full name is the "Clarifying Lawful Overseas Use of Data (CLOUD) Act." What problem does this bill solve? Senator Hatch stated last month why he thinks this bill is needed:

"... the Supreme Court will hear arguments in a case... United States v. Microsoft Corp., colloquially known as the Microsoft Ireland case... The case began back in 2013, when the US Department of Justice asked Microsoft to turn over emails stored in a data center in Ireland. Microsoft refused on the ground that US warrants traditionally have stopped at the water’s edge. Over the last few years, the legal battle has worked its way through the court system up to the Supreme Court... The issues the Microsoft Ireland case raises are complex and have created significant difficulties for both law enforcement and technology companies... law enforcement officials increasingly need access to data stored in other countries for investigations, yet no clear enforcement framework exists for them to obtain overseas data. Meanwhile, technology companies, who have an obligation to keep their customers’ information private, are increasingly caught between conflicting laws that prohibit disclosure to foreign law enforcement. Equally important, the ability of one nation to access data stored in another country implicates national sovereignty... The CLOUD Act bridges the divide that sometimes exists between law enforcement and the tech sector by giving law enforcement the tools it needs to access data throughout the world while at the same time creating a commonsense framework to encourage international cooperation to resolve conflicts of law. To help law enforcement, the bill creates incentives for bilateral agreements—like the pending agreement between the US and the UK—to enable investigators to seek data stored in other countries..."

Senators Coons, Graham, and Whitehouse support the CLOUD Act, along with House Representatives Collins, Jeffries, and others. The American Civil Liberties Union (ACLU) opposes the bill and warned:

"Despite its fluffy sounding name, the recently introduced CLOUD Act is far from harmless. It threatens activists abroad, individuals here in the U.S., and would empower Attorney General Sessions in new disturbing ways... the CLOUD Act represents a dramatic change in our law, and its effects will be felt across the globe... The bill starts by giving the executive branch dramatically more power than it has today. It would allow Attorney General Sessions to enter into agreements with foreign governments that bypass current law, without any approval from Congress. Under these agreements, foreign governments would be able to get emails and other electronic information without any additional scrutiny by a U.S. judge or official. And, while the attorney general would need to consider a country’s human rights record, he is not prohibited from entering into an agreement with a country that has committed human rights abuses... the bill would for the first time allow these foreign governments to wiretap in the U.S. — even in cases where they do not meet Wiretap Act standards. Paradoxically, that would give foreign governments the power to engage in surveillance — which could sweep in the information of Americans communicating with foreigners — that the U.S. itself would not be able to engage in. The bill also provides broad discretion to funnel this information back to the U.S., circumventing the Fourth Amendment. This information could potentially be used by the U.S. to engage in a variety of law enforcement actions."

Given that warning, I read the draft legislation. One portion immediately struck me:

"A provider of electronic communication service or remote computing service shall comply with the obligations of this chapter to preserve, backup, or disclose the contents of a wire or electronic communication and any record or other information pertaining to a customer or subscriber within such provider’s possession, custody, or control, regardless of whether such communication, record, or other information is located within or outside of the United States."

While I am not an attorney, this bill definitely sounds like an end-run around the Fourth Amendment. The review process is largely governed by the House of Representatives, a body not known for internet knowledge or savvy. The bill also smells like an attack on internet services consumers regularly use for privacy, such as search engines that don't collect or archive search data, and Virtual Private Networks (VPNs).

Today, many consumers in the United States protect their online privacy with VPN software and services from vendors located offshore. Why? Despite a national poll in 2017 which found the Republican rollback of FCC broadband privacy rules very unpopular among consumers, the Republican-led Congress proceeded with that rollback, and President Trump signed the privacy-rollback legislation on April 3, 2017. Hopefully, skilled and experienced privacy attorneys will continue to review and monitor the draft legislation.

The ACLU emphasized in its warning:

"Today, the information of global activists — such as those that fight for LGBTQ rights, defend religious freedom, or advocate for gender equality are protected from being disclosed by U.S. companies to governments who may seek to do them harm. The CLOUD Act eliminates many of these protections and replaces them with vague assurances, weak standards, and largely unenforceable restrictions... The CLOUD Act represents a major change in the law — and a major threat to our freedoms. Congress should not try to sneak it by the American people by hiding it inside of a giant spending bill. There has not been even one minute devoted to considering amendments to this proposal. Congress should robustly debate this bill and take steps to fix its many flaws, instead of trying to pull a fast one on the American people."

I agree. Seems like this bill creates far more problems than it solves. Plus, something this important should be openly and thoroughly discussed, not buried in a spending bill. What do you think?


Securities & Exchange Commission Charges Former Equifax Executive With Insider Trading

Last week, the U.S. Securities and Exchange Commission (SEC) charged a former Equifax executive with insider trading. While employed there, Jun Ying allegedly used confidential information to dump stock and avoid losses before Equifax announced its massive data breach in September 2017.

The SEC announced on March 14th that it had:

"... charged a former chief information officer of a U.S. business unit of Equifax with insider trading in advance of the company’s September 2017 announcement about a massive data breach that exposed the social security numbers and other personal information of about 148 million U.S. customers... The SEC’s complaint charges Ying with violating the antifraud provisions of the federal securities laws and seeks disgorgement of ill-gotten gains plus interest, penalties, and injunctive relief... According to the SEC’s complaint, Jun Ying, who was next in line to be the company’s global CIO, allegedly used confidential information entrusted to him by the company to conclude that Equifax had suffered a serious breach. The SEC alleges that before Equifax’s public disclosure of the data breach, Ying exercised all of his vested Equifax stock options and then sold the shares, reaping proceeds of nearly $1 million. According to the complaint, by selling before public disclosure of the data breach, Ying avoided more than $117,000 in losses... The U.S. Attorney’s Office for the Northern District of Georgia today announced parallel criminal charges against Ying."

The breach was originally estimated to affect about 143 million persons. In March 2018, Equifax announced that even more people were affected than it had estimated in its September 2017 announcement: roughly 148 million in total.

MarketWatch reported that Ying:

"... found out about the breach on Friday afternoon, August 25, 2017... The SEC complaint says that Ying’s internet browsing history shows he learned that Experian’s stock price had dropped approximately 4% after the public announcement of [a prior 2015] Experian breach. Later Monday morning, Ying exercised all of his available stock options for 6,815 shares of Equifax stock that he immediately sold for over $950,000, and a gain of over $480,000... on Aug. 30, the global CIO for Equifax officially told Ying that it was Equifax that had been breached. One of the company’s attorneys, unaware that Ying had already traded on the information, told Ying that the news about the breach was confidential, should not be shared with anyone, and that Ying should not trade in Equifax securities. According the SEC complaint, Ying did not volunteer the fact that he had exercised and sold all of his vested Equifax options two days before. Equifax finally announced the breach on Sept. 7, and Equifax common stock closed at $123.23 the next day, a drop of $19.49 or nearly 14%..."


Banking Legislation Advances In U.S. Senate

The Economic Growth, Regulatory Relief, and Consumer Protection Act (Senate Bill 2155) was approved Wednesday by the United States Senate. The vote was 67 for, 31 against, and 2 not voting. The voting roll call by name:

Alexander (R-TN), Yea
Baldwin (D-WI), Nay
Barrasso (R-WY), Yea
Bennet (D-CO), Yea
Blumenthal (D-CT), Nay
Blunt (R-MO), Yea
Booker (D-NJ), Nay
Boozman (R-AR), Yea
Brown (D-OH), Nay
Burr (R-NC), Yea
Cantwell (D-WA), Nay
Capito (R-WV), Yea
Cardin (D-MD), Nay
Carper (D-DE), Yea
Casey (D-PA), Nay
Cassidy (R-LA), Yea
Cochran (R-MS), Yea
Collins (R-ME), Yea
Coons (D-DE), Yea
Corker (R-TN), Yea
Cornyn (R-TX), Yea
Cortez Masto (D-NV), Nay
Cotton (R-AR), Yea
Crapo (R-ID), Yea
Cruz (R-TX), Yea
Daines (R-MT), Yea
Donnelly (D-IN), Yea
Duckworth (D-IL), Nay
Durbin (D-IL), Nay
Enzi (R-WY), Yea
Ernst (R-IA), Yea
Feinstein (D-CA), Nay
Fischer (R-NE), Yea
Flake (R-AZ), Yea
Gardner (R-CO), Yea
Gillibrand (D-NY), Nay
Graham (R-SC), Yea
Grassley (R-IA), Yea
Harris (D-CA), Nay
Hassan (D-NH), Yea
Hatch (R-UT), Yea
Heinrich (D-NM), Not Voting
Heitkamp (D-ND), Yea
Heller (R-NV), Yea
Hirono (D-HI), Nay
Hoeven (R-ND), Yea
Inhofe (R-OK), Yea
Isakson (R-GA), Yea
Johnson (R-WI), Yea
Jones (D-AL), Yea
Kaine (D-VA), Yea
Kennedy (R-LA), Yea
King (I-ME), Yea
Klobuchar (D-MN), Nay
Lankford (R-OK), Yea
Leahy (D-VT), Nay
Lee (R-UT), Yea
Manchin (D-WV), Yea
Markey (D-MA), Nay
McCain (R-AZ), Not Voting
McCaskill (D-MO), Yea
McConnell (R-KY), Yea
Menendez (D-NJ), Nay
Merkley (D-OR), Nay
Moran (R-KS), Yea
Murkowski (R-AK), Yea
Murphy (D-CT), Nay
Murray (D-WA), Nay
Nelson (D-FL), Yea
Paul (R-KY), Yea
Perdue (R-GA), Yea
Peters (D-MI), Yea
Portman (R-OH), Yea
Reed (D-RI), Nay
Risch (R-ID), Yea
Roberts (R-KS), Yea
Rounds (R-SD), Yea
Rubio (R-FL), Yea
Sanders (I-VT), Nay
Sasse (R-NE), Yea
Schatz (D-HI), Nay
Schumer (D-NY), Nay
Scott (R-SC), Yea
Shaheen (D-NH), Yea
Shelby (R-AL), Yea
Smith (D-MN), Nay
Stabenow (D-MI), Yea
Sullivan (R-AK), Yea
Tester (D-MT), Yea
Thune (R-SD), Yea
Tillis (R-NC), Yea
Toomey (R-PA), Yea
Udall (D-NM), Nay
Van Hollen (D-MD), Nay
Warner (D-VA), Yea
Warren (D-MA), Nay
Whitehouse (D-RI), Nay
Wicker (R-MS), Yea
Wyden (D-OR), Nay
Young (R-IN), Yea

The bill now proceeds to the House of Representatives. If it passes the House, it will be sent to the President for signature.


Report: Little Progress Since 2016 To Replace Old, Vulnerable Voting Machines In United States

We've known for some time that a sizeable portion of voting machines in the United States are vulnerable to hacking and errors. Too many states, cities, and towns use antiquated equipment, or equipment without paper backups. The latter makes meaningful recounts impossible.

Has any progress been made to fix the vulnerabilities? The Brennan Center for Justice (BCJ) reported:

"... despite manifold warnings about election hacking for the past two years, the country has made remarkably little progress since the 2016 election in replacing antiquated, vulnerable voting machines — and has done even less to ensure that our country can recover from a successful cyberattack against those machines."

It is important to remember this warning in January 2017 from the Director of National Intelligence (DNI):

"Russian effortsto influence the 2016 US presidential election represent the most recent expression of Moscow’s longstanding desire to undermine the US-led liberal democratic order, but these activities demonstrated a significant escalation in directness, level of activity, and scope of effort compared to previous operations. We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process... Russian intelligence accessed elements of multiple state or local electoral boards. Since early 2014, Russian intelligence has researched US electoral processes and related technology and equipment. DHS assesses that the types of systems we observed Russian actors targeting or compromising are not involved in vote tallying... We assess Moscow will apply lessons learned from its Putin-ordered campaign aimed at the US presidential election to future influence efforts worldwide, including against US allies and their election processes... "

Detailed findings in the BCJ report about the lack of progress (a short illustrative sketch of the "risk limiting audits" mentioned in item 3 appears after the list):

  1. "This year, most states will use computerized voting machines that are at least 10 years old, and which election officials say must be replaced before 2020.
    While the lifespan of any electronic voting machine varies, systems over a decade old are far more likely to need to be replaced, for both security and reliability reasons... older machines are more likely to use outdated software like Windows 2000. Using obsolete software poses serious security risks: vendors may no longer write security patches for it; jurisdictions cannot replace critical hardware that is failing because it is incompatible with their new, more secure hardware... In 2016, jurisdictions in 44 states used voting machines that were at least a decade old. Election officials in 31 of those states said they needed to replace that equipment by 2020... This year, 41 states will be using systems that are at least a decade old, and officials in 33 say they must replace their machines by 2020. In most cases, elections officials do not yet have adequate funds to do so..."
  2. "Since 2016, only one state has replaced its paperless electronic voting machines statewide.
    Security experts have long warned about the dangers of continuing to use paperless electronic voting machines. These machines do not produce a paper record that can be reviewed by the voter, and they do not allow election officials and the public to confirm electronic vote totals. Therefore, votes cast on them could be lost or changed without notice... In 2016, 14 states (Arkansas, Delaware, Georgia, Indiana, Kansas, Kentucky, Louisiana, Mississippi, New Jersey, Pennsylvania, South Carolina, Tennessee, Texas, and Virginia) used paperless electronic machines as the primary polling place equipment in at least some counties and towns. Five of these states used paperless machines statewide. By 2018 these numbers have barely changed: 13 states will still use paperless voting machines, and 5 will continue to use such systems statewide. Only Virginia decertified and replaced all of its paperless systems..."
  3. "Only three states mandate post-election audits to provide a high-level of confidence in the accuracy of the final vote tally.
    Paper records of votes have limited value against a cyberattack if they are not used to check the accuracy of the software-generated total to confirm that the veracity of election results. In the last few years, statisticians, cybersecurity professionals, and election experts have made substantial advances in developing techniques to use post-election audits of voter verified paper records to identify a computer error or fraud that could change the outcome of a contest... Specifically, “risk limiting audits” — a process that employs statistical models to consistently provide a high level of confidence in the accuracy of the final vote tally – are now considered the “gold standard” of post-election audits by experts... Despite this fact, risk limiting audits are required in only three states: Colorado, New Mexico, and Rhode Island. While 13 state legislatures are currently considering new post-election audit bills, since the 2016 election, only one — Rhode Island — has enacted a new risk limiting audit requirement."
  4. "43 states are using machines that are no longer manufactured.
    The problem of maintaining secure and reliable voting machines is particularly challenging in the many jurisdictions that use machine models that are no longer produced. In 2015... the Brennan Center estimated that 43 states and the District of Columbia were using machines that are no longer manufactured. In 2018, that number has not changed. A primary challenge of using machines no longer manufactured is finding replacement parts and the technicians who can repair them. These difficulties make systems less reliable and secure... In a recent interview with the Brennan Center, Neal Kelley, registrar of voters for Orange County, California, explained that after years of cannibalizing old machines and hoarding spare parts, he is now forced to take systems out of service when they fail..."
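
For readers wondering what the "risk limiting audits" mentioned in item 3 actually involve, here is a deliberately simplified sketch of one published ballot-polling approach (the BRAVO method) for a two-candidate contest. It illustrates only the statistical idea: keep drawing random paper ballots and updating a running test statistic until the reported outcome is confirmed at a chosen risk limit, or escalate toward a full hand count. Real audits handle multiple contests, invalid ballots, and formal escalation rules; the code and numbers below are illustrative, not any state's actual procedure.

    import random

    def bravo_audit(reported_winner_share, risk_limit, ballots, max_draws=None):
        """Toy ballot-polling audit (BRAVO-style) for a two-candidate contest.

        Each sampled ballot for the reported winner multiplies the test statistic
        by p/0.5; each ballot for the loser multiplies it by (1 - p)/0.5.
        If the statistic reaches 1/risk_limit, the reported outcome is confirmed
        at that risk limit; otherwise a real audit would escalate, possibly to a
        full hand count.
        """
        p = reported_winner_share
        t = 1.0
        draws = 0
        limit = max_draws or len(ballots)
        for ballot in random.sample(ballots, limit):   # draw paper ballots at random
            draws += 1
            t *= (p / 0.5) if ballot == "winner" else ((1 - p) / 0.5)
            if t >= 1 / risk_limit:
                return True, draws                     # outcome confirmed
        return False, draws                            # not confirmed; escalate

    # Example: 10,000 paper ballots, reported (and true) winner share 55 percent,
    # 5 percent risk limit. The audit typically confirms after several hundred draws.
    true_ballots = ["winner"] * 5500 + ["loser"] * 4500
    confirmed, n = bravo_audit(0.55, 0.05, true_ballots)
    print(confirmed, n)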

Those findings are embarrassing for a country that prides itself on having an effective democracy. According to BCJ, the solution is for Congress to fund, via grants, the replacement of paperless and antiquated equipment, plus post-election audits.

Rather than protect the integrity of our democracy, the government passed a massive tax cut which will increase federal deficits during the coming years while pursuing both a costly military parade and an unfunded border wall. Seems like questionable priorities to me. What do you think?


Legislation Moving Through Congress To Loosen Regulations On Banks

Legislation is moving through Congress that would loosen regulations on banks. Is this an improvement? Is it risky? Is it a good deal for consumers? Before answering those questions, here is a summary of the Economic Growth, Regulatory Relief, and Consumer Protection Act (Senate Bill 2155):

"This bill amends the Truth in Lending Act to allow institutions with less than $10 billion in assets to waive ability-to-repay requirements for certain residential-mortgage loans... The bill amends the Bank Holding Company Act of 1956 to exempt banks with assets valued at less than $10 billion from the "Volcker Rule," which prohibits banking agencies from engaging in proprietary trading or entering into certain relationships with hedge funds and private-equity funds... The bill amends the United States Housing Act of 1937 to reduce inspection requirements and environmental-review requirements for certain smaller, rural public-housing agencies.

Provisions relating to enhanced prudential regulation for financial institutions are modified, including those related to stress testing, leverage requirements, and the use of municipal bonds for purposes of meeting liquidity requirements. The bill requires credit reporting agencies to provide credit-freeze alerts and includes consumer-credit provisions related to senior citizens, minors, and veterans."

Well, that definitely sounds like relief for banks. Fewer regulations mean it's easier to do business... and make more money. Next questions: is it good for consumers? Is it risky? Keep reading.

The non-partisan Congressional Budget Office (CBO) analyzed the proposed legislation in the Senate, and concluded (bold emphasis added):

"S. 2155 would modify provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd Frank Act) and other laws governing regulation of the financial industry. The bill would change the regulatory framework for small depository institutions with assets under $10 billion (community banks) and for large banks with assets over $50 billion. The bill also would make changes to consumer mortgage and credit-reporting regulations and to the authorities of the agencies that regulate the financial industry. CBO estimates that enacting the bill would increase federal deficits by $671 million over the 2018-2027 period... CBO’s estimate of the bill’s budgetary effect is subject to considerable uncertainty, in part because it depends on the probability in any year that a systemically important financial institution (SIFI) will fail or that there will be a financial crisis. CBO estimates that the probability is small under current law and would be slightly greater under the legislation..."

So, the proposed legislation means there is a greater risk of banks either failing or needing government assistance (e.g., bailout funds). Are there risks to consumers? To taxpayers? CNN interviewed U.S. Senator Elizabeth Warren (D-Massachusetts), who said:

"Frankly, I just don't see how any senator can vote to weaken the regulations on Wall Street banks.. [weakened regulations] puts us at greater risk that there will be another taxpayer bailout, that there will be another crash and another taxpayer bailout..."

So, there are risks for consumers/taxpayers. How? Why? Let's count the ways.

First, the proposed legislation increases federal deficits. Somebody has to pay for that: with higher taxes, fewer services, more debt, or a combination of all three. That doesn't sound good. Does it sound good to you?

Second, looser regulations mean some banks may lend money to people they shouldn't: borrowers who default on their loans. To compensate, those banks would raise prices (e.g., more fees, higher fees, higher interest rates) on borrowers to cover their losses. If those banks can't cover their losses, then they will fail. If enough banks fail at about the same time, then bingo... another financial crisis.

If key banks fail, then the government will bail out banks (again) to keep the financial system running. (Remember too-big-to-fail banks?) Somebody has to pay for bailouts... with higher taxes, fewer services, more debt, or a combination of all three. Does that sound good to you? It doesn't sound good to me. If it doesn't, I encourage you to contact your elected officials.

It's critical to remember banking history in the United States. Nobody wants a repeat of the 2008 melt-down. There are always consequences when government... Congress decides to help bankers by loosening regulations. What do you think?


Cozy Relationship Between The FBI And A Computer Repair Service Spurs 4th Amendment Concerns

The Electronic Frontier Foundation (EFF) has learned more about the relationship between Geek Squad, a computer repair service, and the U.S. Federal Bureau of Investigation (FBI). In a March 6th announcement, the EFF said it filed a:

"... FOIA lawsuit last year to learn more about how the FBI uses Geek Squad employees to flag illegal material when people pay Best Buy to repair their computers. The relationship potentially circumvents computer owners’ Fourth Amendment rights."

Founded in 1966, the Best Buy retail chain operates more than 1,500 stores in North America and employs more than 125,000 people. The chain sells home appliances and electronics both online and at stores in the United States, Canada, and Mexico. Located in about 1,100 Best Buy stores, Geek Squad provides repair services via phone, in-store, or at home. This means that Geek Squad employees configure and fix popular smart devices many consumers have purchased for their homes: cameras and camcorders, cell phones, computers and tablets, home theater, car electronics, home security (e.g., smart doorbells, smart locks, smart thermostats, wireless cameras), smart appliances (e.g., refrigerators, ovens, washing machines, dryers, etc.), smart speakers, video game consoles, wearables (e.g., fitness bands, smart watches), and more.

The 4th Amendment of the U.S. Constitution states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

It is most puzzling how a broken computer translates into probable cause for a search. The FOIA request was prompted by the prosecution of a doctor in California, "who was charged with possession of child pornography after Best Buy sent his computer to the Kentucky Geek Squad repair facility."

The FOIA request yielded documents which showed:

"... that Best Buy officials have enjoyed a particularly close relationship with the agency for at least 10 years. For example, an FBI memo from September 2008 details how Best Buy hosted a meeting of the agency’s “Cyber Working Group” at the company’s Kentucky repair facility... Another document records a $500 payment from the FBI to a confidential Geek Squad informant... over the years of working with Geek Squad employees, FBI agents developed a process for investigating and prosecuting people who sent their devices to the Geek Squad for repairs..."

The EFF announcement described that process in detail:

"... a series of FBI investigations in which a Geek Squad employee would call the FBI’s Louisville field office after finding what they believed was child pornography. The FBI agent would show up, review the images or video and determine whether they believe they are illegal content. After that, they would seize the hard drive or computer and send it to another FBI field office near where the owner of the device lived. Agents at that local FBI office would then investigate further, and in some cases try to obtain a warrant to search the device... For example, documents reflect that Geek Squad employees only alert the FBI when they happen to find illegal materials during a manual search of images on a device and that the FBI does not direct those employees to actively find illegal content. But some evidence in the case appears to show Geek Squad employees did make an affirmative effort to identify illegal material... Other evidence showed that Geek Squad employees were financially rewarded for finding child pornography..."

Finding child pornography and prosecuting perpetrators is a worthy goal, but the FBI-Geek Squad program seems to blur the line between computer repair and law enforcement. The program and FOIA documents raise several questions:

  1. What are the program details (e.g., training, qualifications for informants, payments, conditions for payments, scope, etc.) for financially rewarding Geek Squad employees who find child pornography?
  2. What other computer/appliance repair vendors does the FBI operate similar programs with?
  3. What quality control measures does the program contain to prevent wrongful prosecutions?
  4. What penalties or consequences, if any, are there for Geek Squad employees who falsely report child pornography?
  5. Is this Geek Squad program nationwide, or if not, in which states does it operate?
  6. In cases of suspected child pornography, what other information on targets' devices is collected and archived by the FBI through this program?
  7. Were/are whole hard drives copied and archived?
  8. How long is information archived?
  9. Does the program between the FBI and Geek Squad target other types of crime and threats (e.g., terrorism)?
  10. What other law enforcement or security agencies does Geek Squad have cozy relationships with?

I'm sure there are more questions to be asked. What are your opinions?



2017 FTC Complaints Report: Debt Collection Tops The List. Older Consumers Better At Spotting Scams

Earlier this month, the U.S. Federal Trade Commission (FTC) released its annual report of complaints submitted by consumers in the United States. The report is helpful for understanding the most frequent types of scams and complaints consumers reported.

The latest report, titled 2017 Consumer Sentinel Network Data Book, includes complaints from 2.68 million consumers, a decrease from 2.98 million in 2016. However, consumers reported losing a total of $905 million to fraud in 2017, which is $63 million more than in 2016. The most frequent complaints were about debt collection (23 percent), identity theft (14 percent), and imposter scams (13 percent). The top 20 complaint categories:

Rank  Category                                                  Reports     % of Reports
  1   Debt Collection                                           608,535     22.74%
  2   Identity Theft                                            371,061     13.87%
  3   Imposter Scams                                            347,829     13.00%
  4   Telephone & Mobile Services                               149,578      5.59%
  5   Banks & Lenders                                           149,316      5.58%
  6   Prizes, Sweepstakes & Lotteries                           142,870      5.34%
  7   Shop-at-Home & Catalog Sales                              126,387      4.72%
  8   Credit Bureaus, Information Furnishers & Report Users     107,473      4.02%
  9   Auto Related                                               86,289      3.23%
 10   Television and Electronic Media                            47,456      1.77%
 11   Credit Cards                                               45,428      1.70%
 12   Internet Services                                          45,093      1.69%
 13   Foreign Money Offers & Counterfeit Check Scams             31,980      1.20%
 14   Health Care                                                27,660      1.03%
 15   Travel, Vacations & Timeshare Plans                        22,264      0.83%
 16   Business & Job Opportunities                               19,082      0.71%
 17   Advance Payments for Credit Services                       17,762      0.66%
 18   Investment Related                                         15,079      0.56%
 19   Computer Equipment & Software                               9,762      0.36%
 20   Mortgage Foreclosure Relief & Debt Management               8,973      0.34%

While the median loss for all fraud reports in 2017 was $429, consumers reported larger losses in certain types of scams: travel, vacations and timeshare plans ($1,710); mortgage foreclosure relief and debt management ($1,200); and business/job opportunities ($1,063).
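
A side note on why the FTC reports the median rather than the average: a handful of very large losses can pull an average far above what a typical victim loses. A minimal illustration, using made-up numbers rather than FTC data:

    from statistics import mean, median

    # Made-up losses for ten hypothetical fraud victims (not FTC data).
    losses = [100, 150, 200, 300, 400, 450, 500, 600, 800, 25_000]

    print(f"average loss: ${mean(losses):,.0f}")   # $2,850 -- pulled up by the single $25,000 loss
    print(f"median loss:  ${median(losses):,.0f}") # $425 -- closer to what a typical victim lost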

The telephone was the most frequently reported method (70 percent) scammers used to contact consumers, and wire transfer was the most frequently reported payment method in fraud complaints ($333 million in losses reported). Also:

"The states with the highest per capita rates of fraud reports in 2017 were Florida, Georgia, Nevada, Delaware, and Michigan. For identity theft, the top states in 2017 were Michigan, Florida, California, Maryland, and Nevada."

What's new in this report is that it details financial losses by age group. The FTC report concluded:

"Consumers in their twenties reported losing money to fraud more often than those over age 70. For example, among people aged 20-29 who reported fraud, 40 percent indicated they lost money. In comparison, just 18 percent of those 70 and older who reported fraud indicated they lost any money. However, when these older adults did report losing money to a scammer, the median amount lost was greater. The median reported loss for people age 80 and older was $1,092 compared to $400 for those aged 20-29."

Detailed information supporting this conclusion:

[Chart 1: 2017 FTC Consumer Sentinel complaints report -- fraud reports and losses by age group]

[Chart 2: 2017 FTC Consumer Sentinel complaints report -- median losses by age group]

The second chart is key. More than twice as many younger consumers reported losing money to fraud (40 percent of those ages 20 - 29, versus 18 percent of those ages 70 and older). At the same time, the older consumers who did lose money lost more of it. So, older consumers seem better at spotting scams, but they suffer bigger losses when a scam does get through. It seems both groups could learn from each other.

CBS News interviewed a millennial who fell victim to a mystery-shopper scam, which seemed to be a slick version of the old check scam. It seems wise for all consumers, regardless of age, to maintain awareness about the types of scams. Pick a news source or blog you trust. Hopefully, this blog.

Below is a graphic summarizing the 2017 FTC report:

[Infographic: summary of the 2017 FTC Consumer Sentinel complaints report]


Analysis: Closing The 'Regulatory Donut Hole' - The 9th Circuit Appeals Court, AT&T, The FCC And The FTC

The International Association of Privacy Professionals (IAPP) site has a good article explaining what a recent appeals court decision means for everyone who uses the internet:

"When the 9th U.S. Circuit Court of Appeals ruled, in September 2016, that the Federal Trade Commission did not have the authority to regulate AT&T because it was a “common carrier,” which only the Federal Communications Commission can regulate, the decision created what many in privacy foresaw as a “regulatory doughnut hole.” Indeed, when the FCC, in repealing its broadband privacy rules, decided to hand over all privacy regulation of internet service providers to the FTC, the predicted situation came about: The courts said “common carriers” could only be regulated by the FCC, but the FCC says only the FTC should be regulating privacy. So, was there no regulator to oversee a company like AT&T’s privacy practices?

Indeed, argued Gigi Sohn, formerly counsel to then-FCC Chair Tom Wheeler, “The new FCC/FTC relationship lets consumers know they’re getting screwed. But much beyond that, they don’t have any recourse.” Now, things have changed once again. With an en banc decision, the 9th Circuit has reversed itself... This reversal of its previous decision by the 9th Circuit now allows the FTC to go forward with its case against AT&T and what it says were deceptive throttling practices, but it also now allows the FTC to once again regulate internet service providers’ data-handling and cybersecurity practices if they come in the context of activities that are outside their activities as common carriers."

Somebody has to oversee Internet service providers (ISPs). Somebody has to do that job, and it's an important one. The Republican-led FCC, chaired by Trump appointee Ajit Pai, has clearly stated it won't, given its "light touch" approach to broadband regulation and its repeal last year of both the broadband privacy and net neutrality rules. Earlier this month, the National Rifle Association (NRA) honored FCC Chairman Pai for repealing net neutrality rules.

"No touch" is probably a more accurate description. A prior blog post listed many historical problems and abuses of consumers by some ISPs. Consumers should buckle up, as ISPs slowly unveiled their plans in a world without net neutrality protections for consumers. What might that look like? What has AT&T said about this?

Bob Quinn, the Vice President of External and Legislative Affairs for AT&T, claimed today in a blog post:

"Net neutrality has been an emotional issue for a lot of people over the past 10 years... For much of those 10 years, there has been relative agreement over what those rules should be: don’t block websites; censor online content; or throttle, degrade or discriminate in network performance based on content; and disclose to consumers how you manage your network to make that happen. AT&T has been publicly committed to those principles... But no discussion of net neutrality would be complete without also addressing the topic of paid prioritization. Let me start by saying that the issue of paid prioritization has always been hazy and theoretical. The business models for services that would require end-to-end management have only recently begun to come into focus... Let me clear about this – AT&T is not interested in creating fast lanes and slow lanes on anyone’s internet."

Really? The Ars Technica blog called out AT&T and Quinn on his claim:

"AT&T is talking up the benefits of paid prioritization schemes in preparation for the death of net neutrality rules while claiming that charging certain content providers for priority access won't create fast lanes and slow lanes... What Quinn did not mention is that the net neutrality rules have a specific carve-out that already allows such services to exist... without violating the paid prioritization ban. Telemedicine, automobile telematics, and school-related applications and content are among the services that can be given isolated capacity... The key is that the FCC maintained the right to stop ISPs from using this exception to violate the spirit of the net neutrality rules... In contrast, AT&T wants total control over which services are allowed to get priority."

Moreover, fast and slow lanes by AT&T already exist:

"... AT&T provides only DSL service in many rural areas, with speeds of just a few megabits per second or even less than a megabit. AT&T has a new fixed wireless service for some rural areas, but the 10Mbps download speeds fall well short of the federal broadband standard of 25Mbps. In areas where AT&T has brought fiber to each home, the company might be able to implement paid prioritization and manage its network in a way that prevents most customers from noticing any slowdown in other services..."

So, rural (e.g., DSL) consumers are more likely to suffer and notice service slowdowns. Once the final FCC rules are available without net neutrality protections for consumers and the lawsuits have been resolved, then AT&T probably won't have to worry about violating any prioritization bans.

The bottom line for consumers: expect ISPs to start with changes consumers won't see directly. Remember the old story about a frog in a pot of water? The way to kill it is to turn up the heat slowly. Expect ISPs to take the same approach in a post-net-neutrality world. (Yes, in this analogy we consumers are the frog, and the heat is higher internet prices.) Paid prioritization is one method consumers won't see directly: it forces content producers, not ISPs, to be the ones raising prices on consumers. Make no mistake about where the money will go.

Consumers will likely see ISPs introduce tiered broadband services, with lower-priced service options that exclude video streaming content... spun as greater choice for consumers. (Some hotels in the United States already sell to their guests WiFi services with tiered content.) Also, expect to see more "sponsored data programs," where video content owned by your ISP doesn't count against wireless data caps. Read more about other possible changes.

Seems to me the 9th Circuit Appeals Court made the best of a bad situation. I look forward to the FTC doing an important job which the FCC chose to run away from. What do you think?


Investigative Report By Senator Warren Details Equifax Failures In Massive Data Breach

Earlier this month, U.S. Senator Elizabeth Warren (Democrat - Massachusetts) issued a report about her office's investigation into the massive Equifax data breach. Key findings from the report:

  1. "Equifax Set up a Flawed System to Prevent and Mitigate Data Security Problems. The breach was made possible because Equifax adopted weak cybersecurity measures that did not adequately protect consumer data. The company failed to prioritize cybersecurity and failed to follow basic procedures that would have prevented or mitigated the impact of the breach. For example, Equifax was warned of the vulnerability in the web application software Apache Struts that was used to breach its system, and emailed staff to tell them to fix the vulnerability – but then failed to confirm that the fixes were made...
  2. Equifax Ignored Numerous Warnings of Risks to Sensitive Data. Equifax had ample warning of weaknesses and risks to its systems. Equifax received a specific warning from the Department of Homeland Security about the precise vulnerability that hackers took advantage of to breach the company’s systems. The company had been subject to several smaller breaches in the years prior to the massive 2017 breach, and several outside experts identified and reported weaknesses...
  3. Equifax Failed to Notify Consumers, Investors, and Regulators about the Breach in a Timely and Appropriate Fashion. The breach occurred on May 13, 2017, and Equifax first observed suspicious signs of a problem on July 29, 2017. But Equifax failed to notify consumers, investors, business partners, and the appropriate regulators until 40 days after the company discovered the breach. By failing to provide adequate information in a timely fashion, Equifax robbed consumers of the ability to take precautionary measures to protect themselves...
  4. Equifax Took Advantage of Federal Contracting Loopholes and Failed to Adequately Protect Sensitive IRS Taxpayer Data. Soon after the breach was announced, Equifax and the IRS were engulfed in controversy amid news that the IRS was signing a new $7.2 million contract with the company. Senator Warren’s investigation revealed that Equifax used contracting loopholes to force the IRS into signing this “bridge” contract, and the contract was finally cancelled weeks later by the IRS after the agency learned of additional weaknesses in Equifax security that potentially endangered taxpayer data.
  5. Equifax’s Assistance and Information Provided to Consumers Following the Breach was Inadequate. Equifax took 40 days to prepare a response for the public before finally announcing the extent of the breach – and even after this delay, the company failed to respond appropriately. Equifax had an inadequate crisis management plan and failed to follow their own procedures for notifying consumers. Consumers who called the Equifax call center had hours-long waits. The website set up by Equifax to assist consumers was initially unable to give individuals clarity other than to tell them that their information “may” have been hacked – and that website had a host of security problems in its own right. Equifax delayed their public notice in part because the company spent almost two weeks trying to determine precisely which consumers were affected..."

Senator Warren's investigation was one of several underway. The importance of this investigative report cannot be overstated, for several reasons. First, the three national credit reporting agencies (Equifax, Experian, and TransUnion) maintain reports about the credit histories and creditworthiness of nearly all adults in the United States. That's extremely sensitive -- and valuable -- information that affects just about everyone. And, the country's economy relies on the accuracy and security of credit reports.

Second, Mick Mulvaney, the interim director appointed by President Trump to head the Consumer Financial Protection Bureau (CFPB), announced a halt to its investigation of the Equifax breach. This makes Senator Warren's investigative report even more important. Third, the massive Equifax data breach affected at least 143 million persons in the United States... about 44 percent of the United States population... almost half. Nobody in their right mind wants to experience that again, so a thorough investigation seems wise, appropriate, and necessary.

The credit reporting industry includes national agencies, regional agencies, and a larger list of "consumer reporting companies" -- businesses that collect information about consumers into reports for a variety of decisions about credit, employment, residential rental housing, insurance, and more. The CFPB compiled this larger list in 2017 (Adobe PDF; 264k bytes).

Senator Warren's report highlighted fixes needed:

"Federal Legislation is Necessary to Prevent and Respond to Future Breaches. Equifax and other credit reporting agencies collect consumer data without permission, and consumers have no way to prevent their data from being collected and held by the company – which was more focused on its own profits and growth than on protecting the sensitive personal information of millions of consumers. This breach and the response by Equifax illustrate the need for federal legislation that (1) establishes appropriate fines for credit reporting agencies that allow serious cybersecurity breaches on their watches; and (2) empowers the Federal Trade Commission to establish basic standards to ensure that credit reporting agencies are adequately protecting consumer data."

Download the full report (Adobe PDF; 672k bytes), titled "Bad Credit: Uncovering Equifax's Failure to Protect Americans' Personal Information." The CFPB's list of consumer reporting companies is linked above.

My personal view: data breaches like Equifax's will stop only after executives at credit reporting agencies suffer direct consequences for failed information security, such as jail time or massive personal fines. There have to be consequences. What do you think?


I Approved This Facebook Message — But You Don’t Know That

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series about advertising and social networking sites. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Hundreds of federal political ads — including those from major players such as the Democratic National Committee and the Donald Trump 2020 campaign — are running on Facebook without adequate disclaimer language, likely violating Federal Election Commission (FEC) rules, a review by ProPublica has found.

An FEC opinion in December clarified that the requirement for political ads to say who paid for and approved them, which has long applied to print and broadcast outlets, extends to ads on Facebook. So we checked more than 300 ads that had run on the world’s largest social network since the opinion, and that election-law experts told us met the criteria for a disclaimer. Fewer than 40 had disclosures that appeared to satisfy FEC rules.

“I’m totally shocked,” said David Keating, president of the nonprofit Institute for Free Speech in Alexandria, Virginia, which usually opposes restrictions on political advertising. “There’s no excuse,” he said, looking through our database of ads.

The FEC can investigate possible violations of the law and fine people up to thousands of dollars for breaking it — fines double if the violation was “knowing and willful,” according to the regulations. Under the law, it’s up to advertisers, not Facebook, to ensure they have the right disclaimers. The FEC has not imposed penalties on any Facebook advertiser for failing to disclose.

An FEC spokeswoman declined to say whether the commission has any recent complaints about lack of disclosure on Facebook ads. Enforcement matters are confidential until they are resolved, she said.

None of the individuals or groups we contacted whose ads appeared to have inadequate disclaimers, including the Democratic National Committee and the Trump campaign, responded to requests for comment. Facebook declined to comment on ProPublica’s findings or the December opinion. In public documents, the company has urged the FEC to be “flexible” in what it allows online, and to develop a policy for all digital advertising rather than focusing on Facebook.

Insufficient disclaimers can be minor technicalities, not necessarily evidence of intent to deceive. But the pervasiveness of the lapses ProPublica found suggests a larger problem that may raise concerns about the upcoming midterm elections — that political advertising on the world’s largest social network isn’t playing by rules intended to protect the public.

Unease about political ads on Facebook and other social networking sites has intensified since internet companies acknowledged that organizations associated with the Russian government bought ads to influence U.S. voters during the 2016 election. Foreign contributions to campaigns for U.S. federal office are illegal. Online, advertisers can target ads to relatively small groups of people. Once the marketing campaign is over, the ads disappear. This makes it difficult for the public to scrutinize them.

The FEC opinion is part of a push toward more transparency in online political advertising that has come in response to these concerns. In addition to handing down the opinion in a specific case, the FEC is preparing new rules to address ads on social media more broadly. Three senators are sponsoring a bill called the Honest Ads Act, which would require internet companies to provide more information on who is buying political ads. And earlier this month, the election authority in Seattle said Facebook was violating a city law on election-ad disclosures, marking a milestone in municipal attempts to enforce such transparency.

Facebook itself has promised more transparency about political ads in the coming months, including “paid for by” disclosures. Since late October it has been conducting tests in Canada that publish ads on an advertiser’s Facebook page, where people can see them even without being part of the advertiser’s target audience. Those ads are only up while the ad campaign is running, but Facebook says it will create a searchable archive for federal election advertising in the U.S. starting this summer.

ProPublica found the ads using a tool called the Political Ad Collector, which allows Facebook users to automatically send us the political ads that were displayed on their news feeds. Because they reflect what users of the tool are seeing, the ads in our database aren’t a representative sample.

The disclaimers required by the FEC are familiar to anyone who has seen a print or television political ad — think of a candidate saying, “I’m ____, and I approved this message,” at the end of a TV commercial, or a “paid for by” box at the bottom of a newspaper advertisement. They’re intended to make sure the public knows who is paying to support a candidate, and to prevent people from falsely claiming to speak on a candidate’s behalf.

The system does have limitations, reflecting concerns that overuse of disclaimers could inhibit free speech. For starters, the rules apply only to certain types of political ads. Political committees and candidates have to include disclaimers, as do people seeking donations or conducting “express advocacy.” To count as express advocacy, an ad typically must mention a candidate and use certain words clearly campaigning for or against a candidate — such as “vote for,” “reject” or “re-elect.” And the regulations only apply to federal elections, not state and local ones.

The rules also don’t address so-called “issue” ads that advocate a policy stance. These ads may include a candidate’s name without a disclaimer, as long as they aren’t funded by a political committee or candidate and don’t use express-advocacy language. Many of the political ads purchased by Russian groups in 2016 attempted to influence public opinion without mentioning candidates at all — and would not require disclosure even today.

Enforcement of the law often relies on political opponents or a member of the public complaining to the FEC. If only supporters see an ad, as might be the case online, a complaint may never come.

The disclaimer law was last amended in 2002, but online advertising has changed so rapidly that several experts said the FEC has had trouble keeping up. In 2002, the commission found that paid text message ads were exempt from disclosure under the “small-items exception” originally intended for buttons, pins and the like. What counts as small depends on the situation and is up to the FEC.

In 2010, the FEC considered ads on Google that had no graphics or photos and were limited to 95 characters of text. Google proposed that disclaimers not be part of the ads themselves but be included on the web pages that users would go to after clicking on the ads; the FEC agreed.

In 2011, Facebook asked the FEC to allow political ads on the social network to run without disclosures. At the time, Facebook limited all ads on its platform to small, “thumbnail” photos and brief text of only 100 or 160 characters, depending on the type of ad. In that case, the six-person FEC couldn’t muster the four votes needed to issue an opinion, with three commissioners saying only limited disclosure was required and three saying the ads needed no disclosure at all, because it would be “impracticable” for political ads on Facebook to contain more text than other ads. The result was that political ads on Facebook ran without the disclaimers seen on other types of election advertising.

Since then, though, ads on Facebook have expanded. They can now include much more text, as well as graphics or photos that take up a large part of the news feed’s width. Video ads can run for many minutes, giving advertisers plenty of time to show the disclaimer as text or play it in a voiceover.

Last October, a group called Take Back Action Fund decided to test whether these Facebook ads should still be exempt from the rules.

“For years now, people have said, ‘Oh, don’t worry about the rules, because the FEC doesn’t enforce anything on Facebook,’” said John Pudner, president of Take Back Action Fund, which advocates for campaign finance reform. Many political consultants “didn’t think you ever needed a disclaimer on a Facebook ad,” said Pudner, a longtime campaign consultant to conservative candidates.

Take Back Action Fund came up with a plan: Ask the FEC whether it should include disclosures on ads that the group thought clearly needed them.

The group told the FEC it planned to buy “express advocacy” ads on Facebook that included large images or videos on the news feed. In its filing, Take Back Action Fund provided some sample text it said it was thinking of using: “While [Candidate Name] accuses the Russians of helping President Trump get elected, [s/he] refuses to call out [his/her] own Democrat Party for paying to create fake documents that slandered Trump during his presidential campaign. [Name] is unfit to serve.”

In a comment filed with the FEC in the matter, the Internet Association trade group, of which Facebook is a member, asked the commission to follow the precedent of the 2010 Google case and allow a “one-click” disclosure that didn’t need to be on the ad itself but could be on the web page the ad led to.

The FEC didn’t follow that recommendation. It said unanimously that the ads needed full disclaimers.

The opinion, handed down Dec. 15, was narrow, saying that if any of the “facts or assumptions” presented in another case were different in a “material” way, the opinion could not be relied upon. But several legal experts who spoke with ProPublica said the opinion means anyone who would have to include disclaimers in traditional advertising should now do so on large Facebook image ads or video ads — including candidates, political committees and anyone using express advocacy.

“The functionality and capabilities of today’s Facebook Video and Image ads can accommodate the information without the same constrictions imposed by the character-limited ads that Facebook presented to the Commission in 2011,” three commissioners wrote in a concurring statement. A fourth commissioner went further, saying the commission’s earlier decision in the text messaging case should now be completely superseded. The remaining two commissioners didn’t comment beyond the published opinion.

“We are overjoyed at the decision and hope it will have the effect of stopping anonymous attacks,” said Pudner, of Take Back Action Fund. “We think that this is a matter of the voter’s right to know.” He added that the group doesn’t intend to purchase the ads.

This year, the FEC plans to tackle concerns about digital political advertising more generally. Facebook favors such an industry-wide approach, partly for competitive reasons, according to a comment it submitted to the commission.

“Facebook strongly supports the Commission providing further guidance to committees and other advertisers regarding their disclaimer obligations when running election-related Internet communications on any digital platform,” Facebook General Counsel Colin Stretch wrote to the FEC.

Facebook was concerned that its own transparency efforts “will apply only to advertising on Facebook’s platform, which could have the unintended consequence of pushing purchasers who wish to avoid disclosure to use other, less transparent platforms,” Stretch wrote.

He urged the FEC to adopt a “flexible” approach, on the grounds that there are many different types of online ads. “For example, allowing ads to include an icon or other obvious indicator that more information about an ad is available via quick navigation (like a single click) would give clear guidance.”

To test whether political advertisers were following the FEC guidelines, we searched for large U.S. political ads that our tool gathered between Dec. 20 — five days after the opinion — and Feb. 1. We excluded the small ads that run on the right column of Facebook’s website. To find ads that were most likely to fall under the purview of the FEC regulations, we searched for terms like “committee,” “donate” and “chip in.” We also searched for ads that used express advocacy language such as, “for Congress,” “vote against,” “elect” or “defeat.” We left out ads with state and local terms such as “governor” or “mayor,” as well as ads from groups such as the White House Historical Association or National Audubon Society that were obviously not election-oriented. Then we examined the ads, including the text and photos or graphics.
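To make the filtering criteria concrete, here is a minimal sketch in Python of the kind of keyword screen described above. It is an illustration only, assuming hypothetical ad records with "advertiser" and "text" fields; it is not ProPublica's actual code or full methodology.

    # Illustrative keyword screen for likely FEC-covered ads. The field names,
    # sample records, and keyword lists below are assumptions for demonstration.
    FEC_TERMS = ("committee", "donate", "chip in")
    EXPRESS_ADVOCACY = ("for congress", "vote against", "elect", "defeat")
    STATE_LOCAL_TERMS = ("governor", "mayor")

    def likely_federal_political_ad(text: str) -> bool:
        """Flag ad text that matches federal political terms and no state/local terms."""
        t = text.lower()
        if any(term in t for term in STATE_LOCAL_TERMS):
            return False  # exclude state and local races
        return any(term in t for term in FEC_TERMS + EXPRESS_ADVOCACY)

    ads = [
        {"advertiser": "Example PAC", "text": "Chip in $3 to elect Jane Doe for Congress."},
        {"advertiser": "Example Group", "text": "Vote Smith for governor on Tuesday."},
    ]
    flagged = [ad for ad in ads if likely_federal_political_ad(ad["text"])]
    print(flagged)  # only the first, federal ad is flagged for manual review

In practice, a screen like this only narrows the pool; each flagged ad would still need manual review of its text, images, and disclaimer language.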

Of nearly 70 entities that ran ads with a large photo or graphic in addition to text, only two used all of the required disclaimer language. About 20 correctly indicated in some fashion the name of the committee associated with the ad but omitted other language, such as whether the ad was endorsed by a candidate. The rest had more significant shortcomings. Many of those that didn’t include disclosures were for relatively inexperienced candidates for Congress, but plenty of seasoned lawmakers and major groups failed to use the proper language as well.

For example, one ad said, “It’s time for Donald Trump, his family, his campaign, and all of his cronies to come clean about their collusion with Russia.” A photo of Donald Trump appeared over a black and red map of Russia, overlaid by the text, “Stop the Lies.” The ad urged people to “Demand Answers Today” and “Sign Up.”

At the top, the ad identified the Democratic Party as the sponsor, and linked to the party’s Facebook page. But, under FEC rules, it should have named the funder, the Democratic National Committee, and given the committee’s address or website. It should also have said whether the ad was endorsed by any candidate. It didn’t. The only nod to the national committee was a link to my.democrats.org, which is paid for by the DNC, at the bottom of the ad. As on all Facebook ads, the word “Sponsored” was included at the top.

Advertisers seemed more likely to put the proper disclaimers on video ads, especially when those ads appeared to have been created for television, where disclaimers have been mandatory for years. Videos that didn’t look made for TV were less likely to include a disclaimer.

One ad that said it was from Donald J. Trump consisted of 20 seconds of video with an American flag background and stirring music. The words “Donate Now! And Enter for a Chance To Win Dinner With Trump!” materialized on the screen with dramatic thuds and crashes. The ad linked to Trump’s Facebook page, and a “Donate” button at the bottom of the ad linked to a website that identified the president’s re-election committee, Donald J. Trump for President, Inc., as its funder. It wasn’t clear on the ad whether Trump himself or his committee paid for it, which should have been specified under FEC rules.

The large majority of advertisements we collected — both those that used disclosures and those that didn’t — were for liberal groups and politicians, possibly reflecting the allegiances of the ProPublica readers who installed our ad-collection tool. There were only four Republican advertisers among the ads we analyzed.

It’s not clear why advertisers aren’t following the FEC regulations. Keating, of the Institute for Free Speech, suggested that advertisers might think the word “Sponsored” and a link to their Facebook page are enough and that reasonable people would know they had paid for the ad.

Others said social media marketers may simply be slow in adjusting to the FEC opinion.

“It’s entirely possible that because disclaimers haven’t been included for years now, candidates and committees just aren’t used to putting them on there,” said Brendan Fischer, director of the Federal and FEC Reform Program at the Campaign Legal Center, the group that provided legal services to Take Back Action Fund. “But they should be on notice,” he added.

There were only two advertisers we saw that included the full, clear disclosures required by the FEC on their large image ads. One was Amy Klobuchar, a Democratic senator from Minnesota who is a co-sponsor of the Honest Ads Act. The other was John Moser, an IT security professional and Democratic primary candidate in Maryland’s 7th Congressional District who received $190 in contributions last year, according to his FEC filings.

Reached by Facebook Messenger, Moser said he is running because he has a plan for ending poverty in the U.S. by restructuring Social Security into a “universal dividend” that gives everyone over age 18 a portion of the country’s per capita income. He complained that Facebook doesn’t make it easy for political advertisers to include the required disclosures. “You have to wedge it in there somewhere,” said Moser, who faces an uphill battle against longtime U.S. Rep. Elijah Cummings. “They need to add specific support for that, honestly.”

Asked why he went to the trouble to put the words on his ad, Moser’s answer was simple: “I included a disclosure because you're supposed to.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Advertising Agency Paid $2 Million To Settle Deceptive Advertising Charges

The U.S. Federal Trade Commission (FTC) announced that Minneapolis-based Marketing Architects, Inc. (MAI):

"... an advertising agency that created and disseminated allegedly deceptive radio ads for weight-loss products marketed by its client, Direct Alternatives, has agreed to pay $2 million to the Federal Trade Commission and State of Maine Attorney General’s Office to settle their complaint..."

First, some background. According to the FTC, MAI created advertising for several Direct Alternatives products (e.g., Puranol, Pur-Hoodia Plus, Acai Fresh, AF Plus, and Final Trim) from 2006 through February 2015. Then, in 2016, the FTC and the State of Maine settled allegations against Direct Alternatives, requiring the company to halt deceptive advertising and illegal billing practices.

Additional background according to the FTC: MAI previously created weight-loss ads for Sensa Products, LLC between March 2009 and May 2011. The FTC filed a complaint against Sensa in 2014, and subsequently Sensa agreed to refund $26.5 million to defrauded consumers. So, there's important, relevant history.

In the latest action, the joint complaint alleged that MAI created and disseminated radio ads with false or unsubstantiated weight-loss claims for AF Plus and Final Trim. Besides:

"... receiving FTC’s Sensa order, MAI was previously made aware of the need to have competent and reliable scientific evidence to back up health claims. Among other things, the complaint alleges that Direct Alternatives provided MAI with documents indicating that some of the weight-loss claims later challenged by the FTC needed to be supported by scientific evidence.

The complaint further charges that MAI developed and disseminated fictitious weight-loss testimonials and created radio ads for weight-loss products falsely disguised as news stories. Finally, the complaint charges MAI with creating inbound call scripts that failed to adequately disclose that consumers would be automatically enrolled in negative-option (auto-ship) continuity plans."

The latest action includes a proposed court order banning MAI from making the weight-loss claims the FTC has already identified as false, and:

"... requires MAI to have competent and reliable scientific evidence to support any other claims about the health benefits or efficacy of weight-loss products, and prohibits it from misrepresenting the existence or outcome of tests or studies. In addition, the order prohibits MAI from misrepresenting the experience of consumer testimonialists or that paid commercial advertising is independent programming."

This action is a reminder to advertising and digital agency executives everywhere: ensure that claims are supported by competent, reliable scientific evidence.

Good. Kudos to the FTC for these enforcement actions and for protecting consumers.


CFPB Backs Off Investigating The Massive Equifax Breach

MarketWatch reported on Monday that the Consumer Financial Protection Bureau (CFPB) has:

"...  scaled back its investigation into a data breach at credit reporting agency Equifax Reuters reported Monday. The CFPB's interim director Mick Mulvaney, appointed by the Trump administration, has not followed "routine steps" that would be involved in a probe, including issuing subpoenas against Equifax and seeking sworn testimony from its executives, Reuters reported.

And when regulators at the Federal Reserve, Federal Deposit Insurance Corp. and Office of the Comptroller of the Currency have offered to help examine the credit bureaus, the CFPB reportedly declined the help... several politicians and consumer advocates said this is the latest sign the CFPB under Mulvaney will be weak in its prosecution of financial firms... The Federal Trade Commission is also investigating the breach, but imposes financial penalties more rarely than the CFPB does... Mulvaney wrote in an op-ed published in January in The Wall Street Journal that the bureau will no longer “push the envelope.” “When it comes to enforcement, we will focus on quantifiable and unavoidable harm to the consumer,” he wrote..."

The massive Equifax data breach affected at least 143 million persons in the United States. That was about 44 percent of the United States population... almost half. Nobody in their right mind wants to experience that again, so a thorough investigation seems wise, appropriate, and necessary.

The CFPB began supervision of the credit reporting industry in 2012. While the news report by MarketWatch is very troubling, sadly there is even more bad news:

"Consumer advocates are also concerned that the CFPB will get rid of the database of complaints related to current investigations, which allows the public to air complaints publicly. It also provided a direct way for the public to engage with the CFPB’s activities. The database contains hundreds of thousands of complaints filed by consumers about issues ranging from predatory debt collectors to errors on credit reports. Republicans have argued that the database shouldn’t be public, while consumer advocates say the public list of complaints is an important tool for consumers.

A public database has been “a powerful mechanism for keeping financial predators accountable to consumers,” Melissa Stegman, senior policy counsel at the Center for Responsible Lending, a nonprofit based in Durham, N.C., told MarketWatch... Mulvaney announced in January the CFPB may reconsider a rule Cordray implemented for payday lenders that was designed to protect consumers and limit the amount lenders are allowed to loan them, if they do not meet certain borrowing criteria."

Now you know why you should be concerned, too, about the CFPB's foot-dragging on its Equifax probe. There is plenty of evidence that the CFPB has done a spectacular job protecting consumers and their money.

While campaigning for President, Donald Trump positioned himself as a populist... promoting "populist nationalism." A true populist would not appoint a CFPB director that weakens or abandons protection for consumers. What do you think?


Fresenius Medical Care To Pay $3.5 Million For 5 Small Data Breaches During 2012

Fresenius Medical Care Holdings, Inc. has agreed to a $3.5 million settlement regarding five small data breaches the Massachusetts-based healthcare organization experienced during 2012. Fresenius Medical Care Holdings, Inc. does business under the name Fresenius Medical Care North America (FMCNA). This represents one of the largest HIPAA settlements ever by the U.S. Department of Health & Human Services (HHS).

The five small data breaches, at different locations across the United States, affected about 521 persons:

  1. Bio-Medical Applications of Florida, Inc. d/b/a Fresenius Medical Care Duval Facility: On February 23, 2012, two desktop computers were stolen during a break-in. One of the computers contained the electronic Protected Health Information (ePHI) of 200 persons, including patient name, admission date, date of first dialysis, days and times of treatments, date of birth, and Social Security number.
  2. Bio-Medical Applications of Alabama, Inc. d/b/a Fresenius Medical Care Magnolia Grove: On April 3, 2012, an unencrypted USB drive was stolen from a worker's car while parked in the organization's parking lot. The USB device contained the ePHI of 245 persons, including patient name, address, date of birth, telephone number, insurance company, insurance account number (a potential social security number derivative for some patients) and the covered entity location where each patient was seen.
  3. Renal Dimensions, LLC d/b/a Fresenius Medical Care Ak-Chin: On June 18, 2012, an anonymous phone tip reported that a hard drive was missing from a desktop computer, which had been taken out of service. The hard drive contained the ePHI of 35 persons, including name, date of birth, Social Security number and Zip code. While the worker notified a manager about the missing hard drive, the manager failed to notify the FMCNA Corporate Risk Management Department.
  4. Fresenius Vascular Care Augusta, LLC: On June 16, 2012, a worker's unencrypted laptop was stolen from her car while parked overnight at home. The laptop bag also included a list of her passwords. The laptop contained the ePHI of 10 persons, including patient name, insurance account number (which could be a social security number derivative) and other insurance information.
  5. WSKC Dialysis Services, Inc. d/b/a Fresenius Medical Care Blue Island Dialysis: On or about June 17 - 18, 2012, three desktop computers and one encrypted laptop were stolen from the office. One of the desktop computers contained the ePHI of 31 persons, including patient names, dates of birth, addresses, telephone numbers, and either full or partial Social Security numbers.

Besides the hefty payment, terms of the settlement agreement (Adobe PDF) require FMCNA to implement and complete a Corrective Action Plan:

  • Conduct a risk analysis,
  • Develop and implement a risk management plan,
  • Implement a process for evaluating workplace operational changes,
  • Develop an Encryption Report,
  • Review and revise internal policies and procedures to control devices and storage media,
  • Review and revise policies to control access to facilities,
  • Develop a privacy and security awareness training program for workers, and
  • Submit progress reports at regular intervals to HHS.

The Encryption Report identifies and describes the devices and equipment (e.g., desktops, laptops, tablets, smartphones, etc.) that may be used to access, store, or transmit patients' ePHI; records the number of such devices, including which of them use encryption; and provides a detailed plan for implementing encryption on the devices and media that should be encrypted but currently are not.
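As a rough sketch of the kind of device inventory such a report implies, the snippet below tallies which ePHI-handling devices still lack encryption. The fields and records are hypothetical assumptions for illustration, not the actual format HHS requires.

    # Hypothetical device inventory for an encryption report; fields and
    # records are illustrative assumptions, not the HHS-mandated format.
    devices = [
        {"type": "desktop", "site": "Duval", "handles_ephi": True, "encrypted": False},
        {"type": "laptop", "site": "Augusta", "handles_ephi": True, "encrypted": False},
        {"type": "laptop", "site": "Blue Island", "handles_ephi": True, "encrypted": True},
    ]

    to_encrypt = [d for d in devices if d["handles_ephi"] and not d["encrypted"]]
    print(f"{len(devices)} devices inventoried; {len(to_encrypt)} still need encryption")
    for d in to_encrypt:
        print(f"- {d['type']} at {d['site']}: schedule full-disk encryption")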

Some readers may wonder why such a large fine was levied for relatively small data breaches, when news reports often cite breaches affecting thousands or millions of persons. HHS explained that the investigation by its Office for Civil Rights (OCR) unit:

"... revealed FMCNA covered entities failed to conduct an accurate and thorough risk analysis of potential risks and vulnerabilities to the confidentiality, integrity, and availability of all of its ePHI. The FMCNA covered entities impermissibly disclosed the ePHI of patients by providing unauthorized access for a purpose not permitted by the Privacy Rule... Five breaches add up to millions in settlement costs for entity that failed to heed HIPAA’s risk analysis and risk management rules.."

OCR Director Roger Severino added:

"The number of breaches, involving a variety of locations and vulnerabilities, highlights why there is no substitute for an enterprise-wide risk analysis for a covered entity... Covered entities must take a thorough look at their internal policies and procedures to ensure they are protecting their patients’ health information in accordance with the law."