
The DIY Revolution: Consumers Alter Or Build Items Previously Not Possible. Is It A Good Thing?

Recent advances in technology allow consumers to alter, customize, or build items locally that previously weren't possible. These items are often referred to as Do-It-Yourself (DIY) products. You've probably heard the term DIY used in home repair and renovation projects on television. DIY now happens in some unexpected areas. Today's blog post highlights two of them.

DIY Glucose Monitors

Earlier this year, CNet described the bag an eight-year-old patient carries with her everywhere, every day:

"... It houses a Dexcom glucose monitor and a pack of glucose tablets, which work in conjunction with the sensor attached to her arm and the insulin pump plugged into her stomach. The final item in her bag was an iPhone 5S. It's unusual for such a young child to have a smartphone. But Ruby's iPhone, which connects via Bluetooth to her Dexcom monitor, allowing [her mother] to read it remotely, illustrates the way technology has transformed the management of diabetes from an entirely manual process -- pricking fingers to measure blood sugar, writing down numbers in a notebook, calculating insulin doses and injecting it -- to a semi-automatic one..."

Some people have access to these new technologies, but many don't. Others want more connectivity and better capabilities. So, some creative "hacking" has resulted:

"There are people who are unwilling to wait, and who embrace unorthodox methods. (You can find them on Twitter via the hashtag #WeAreNotWaiting.) The Nightscout Foundation, an online diabetes community, figured out a workaround for the Pebble Watch. Groups such as Nightscout, Tidepool and OpenAPS are developing open-source fixes for diabetes that give major medical tech companies a run for their money... One major gripe of many tech-enabled diabetes patients is that the two devices they wear at all times -- the monitor and the pump -- don't talk to each other... diabetes will never be a hands-off disease to manage, but an artificial pancreas is basically as close as it gets. The FDA approved the first artificial pancreas -- the Medtronic 670G -- in October 2017. But thanks to a little DIY spirit, people have had them for years."

CNet shared the experience of another tech-enabled patient:

"Take Dana Lewis, founder of the open-source artificial pancreas system, or OpenAPS. Lewis started hacking her glucose monitor to increase the volume of the alarm so that it would wake her in the night. From there, Lewis tinkered with her equipment until she created a closed-loop system, which she's refined over time in terms of both hardware and algorithms that enable faster distribution of insulin. It has massively reduced the "cognitive burden" on her everyday life... JDRF, one of the biggest global diabetes research charities, said in October that it was backing the open-source community by launching an initiative to encourage rival manufacturers like Dexcom and Medtronic to open their protocols and make their devices interoperable."

Convenience and affordability are huge drivers. As you might have guessed, there are risks:

"Hacking a glucose monitor is not without risk -- inaccurate readings, failed alarms or the wrong dose of insulin distributed by the pump could have fatal consequences... Lewis and the OpenAPS community encourage people to embrace the build-your-own-pancreas method rather than waiting for the tech to become available and affordable."

Are DIY glucose monitors a good thing? Some patients think so, seeing them as a way to achieve convenient, affordable healthcare. That might lead you to conclude that anything DIY is an improvement. Right? Keep reading.

DIY Guns

Got a 3-D printer? If so, then you can print your own DIY gun. How did this happen? How did the USA get here? Wired explained:

"Five years ago, 25-year-old radical libertarian Cody Wilson stood on a remote central Texas gun range and pulled the trigger on the world’s first fully 3-D-printed gun... he drove back to Austin and uploaded the blueprints for the pistol to his website, Defcad.com... In the days after that first test-firing, his gun was downloaded more than 100,000 times. Wilson made the decision to go all in on the project, dropping out of law school at the University of Texas, as if to confirm his belief that technology supersedes law..."

The law intervened. Wilson stopped, took down his site, and then pursued a legal remedy:

"Two months ago, the Department of Justice quietly offered Wilson a settlement to end a lawsuit he and a group of co-plaintiffs have pursued since 2015 against the United States government. Wilson and his team of lawyers focused their legal argument on a free speech claim: They pointed out that by forbidding Wilson from posting his 3-D-printable data, the State Department was not only violating his right to bear arms but his right to freely share information. By blurring the line between a gun and a digital file, Wilson had also successfully blurred the lines between the Second Amendment and the First."

So, now you... anybody with an internet connection and a 3-D printer (and a computer-controlled milling machine for some advanced parts)... can produce a DIY gun. No registration required. No licenses or permits. No training required. And that's anyone, anywhere in the world.

Oh, there's more:

"The Department of Justice's surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet. In the meantime, it gives Wilson a unique license to publish data about those weapons anywhere he chooses."

As you might have guessed, Wilson is re-launching his website, this time with blueprints for more DIY weapons besides pistols: AR-15 rifles and other semi-automatic firearms. So, it will be easier for people to skirt federal and state gun laws. Is that a good thing?

You probably have some thoughts and concerns. I do. There are plenty of issues and questions. Are DIY products a good thing? Who is liable? How should laws be upgraded? How can society facilitate one set of DIY products and not the other? What related issues do you see? Any other notable DIY products?


FTC Requests Input From The Public And Will Hold Hearings About 'Competition And Consumer Protection'

During the coming months, the U.S. Federal Trade Commission (FTC) will hold a series of meetings and seek input from the public about "Competition And Consumer Protection," specifically:

"... whether broad-based changes in the economy, evolving business practices, new technologies, or international developments might require adjustments to competition and consumer protection enforcement law, enforcement priorities, and policy."

The FTC expects to conduct 15 to 20 hearings starting in September 2018 and ending in January 2019. Before each topical hearing, input from the public will be sought. The FTC seeks input about the following topics:

  1. "The state of antitrust and consumer protection law and enforcement, and their development, since the Pitofsky hearings;
  2. Competition and consumer protection issues in communication, information, and media technology networks;
  3. The identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring “platform” businesses;
  4. The intersection between privacy, big data, and competition;
  5. The Commission’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters;
  6. Evaluating the competitive effects of corporate acquisitions and mergers;
  7. Evidence and analysis of monopsony power, including but not limited to, in labor markets;
  8. The role of intellectual property and competition policy in promoting innovation; 
  9. The consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics;
  10. The interpretation and harmonization of state and federal statutes and regulations that prohibit unfair and deceptive acts and practices; and
  11. The agency’s investigation, enforcement, and remedial processes."

The public can submit written comments now through August 20, 2018. For more information, see the FTC site about each topic. Additional instructions for comment submissions:

"Each topic description includes issues of particular interest to the Commission, but comments need not be restricted to these subjects... the FTC will invite comments on the topic of each hearing session... The FTC will also invite public comment upon completion of the entire series of hearings. Public comments may address one or more of the above topics generally, or may address them with respect to a specific industry, such as the health care, high-tech, or energy industries... "

Comments must be submitted in writing. The public can submit comments online to the FTC, or via postal mail. Comments submitted via postal mail must include "Competition and Consumer Protection in the 21st Century Hearing, Project Number P181201" on both the comment and the envelope. Mail comments to:

Federal Trade Commission
Office of the Secretary
600 Pennsylvania Avenue NW., Suite CC–5610 (Annex C)
Washington, DC 20580

See the FTC website for instructions for courier deliveries.

The "light touch" enforcement approach by the Federal Communications Commission (FCC) in its oversight of the internet, the repeal of broadband privacy rules, and the repeal of net neutrality have highlighted the importance of oversight and enforcement by the FTC for consumer protection.

Given the broad range of topical hearings and input it could receive, the FTC may consider and/or pursue major changes to its operations. What do you think?


Federal Investigation Into Facebook Widens. Company Stock Price Drops

The Boston Globe reported on Tuesday (links added):

"A federal investigation into Facebook’s sharing of data with political consultancy Cambridge Analytica has broadened to focus on the actions and statements of the tech giant and now involves three agencies, including the Securities and Exchange Commission, according to people familiar with the official inquiries.

Representatives for the FBI, the SEC, and the Federal Trade Commission have joined the Justice Department in its inquiries about the two companies and the sharing of personal information of 71 million Americans... The Justice Department and the other federal agencies declined to comment. The FTC in March disclosed that it was investigating Facebook over possible privacy violations..."

About 87 million people were affected by the Facebook breach involving Cambridge Analytica. In May, the new Commissioner at the U.S. Federal Trade Commission (FTC) suggested stronger enforcement against tech companies like Google and Facebook.

After news broke about the wider probe, shares of Facebook stock fell about 18 percent in value and then recovered somewhat, for a net drop of 2 percent. That 2 percent drop represents about $12 billion in valuation. Clearly, there will be more news (and stock price fluctuations) to come.

During the last few months, there has been plenty of news about Facebook.


The Wireless Carrier With At Least 8 'Hidden Spy Hubs' Helping The NSA

During the late 1970s and 1980s, AT&T conducted an iconic “reach out and touch someone” advertising campaign to encourage consumers to call their friends, family, and classmates. Back then, it was old school -- landlines. The campaign ranked #80 on Ad Age's list of the 100 top ad campaigns from the last century.

Now, we learn a little more about how extensive the surveillance activities are at AT&T facilities that help law enforcement reach out and touch persons. Yesterday, The Intercept reported:

"The NSA considers AT&T to be one of its most trusted partners and has lauded the company’s “extreme willingness to help.” It is a collaboration that dates back decades. Little known, however, is that its scope is not restricted to AT&T’s customers. According to the NSA’s documents, it values AT&T not only because it "has access to information that transits the nation," but also because it maintains unique relationships with other phone and internet providers. The NSA exploits these relationships for surveillance purposes, commandeering AT&T’s massive infrastructure and using it as a platform to covertly tap into communications processed by other companies.”

The new report describes in detail the activities at eight AT&T facilities in major cities across the United States. Consumers who use other branded wireless service providers are also affected:

"Because of AT&T’s position as one of the U.S.’s leading telecommunications companies, it has a large network that is frequently used by other providers to transport their customers’ data. Companies that “peer” with AT&T include the American telecommunications giants Sprint, Cogent Communications, and Level 3, as well as foreign companies such as Sweden’s Telia, India’s Tata Communications, Italy’s Telecom Italia, and Germany’s Deutsche Telekom."

It was five years ago this month that the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, the Guardian UK newspaper reported about a court order allowing the NSA to spy on U.S. citizens. The revelations continued, and by 2016 we'd learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences from broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, the inclusion of ordinary (e.g., innocent) citizens besides legal targets in data collection programs, and how most hi-tech and telecommunications companies assisted the government with its spy programs. We knew before that AT&T was probably the best collaborator, and now we know more about why.

Content vacuumed up during the surveillance includes consumers' phone calls, text messages, e-mail messages, and internet activity. The latest report by the Intercept also described:

"The messages that the NSA had unlawfully collected were swept up using a method of surveillance known as “upstream,” which the agency still deploys for other surveillance programs authorized under both Section 702 of FISA and Executive Order 12333. The upstream method involves tapping into communications as they are passing across internet networks – precisely the kind of electronic eavesdropping that appears to have taken place at the eight locations identified by The Intercept."

Former NSA contractor Edward Snowden also commented about the report on Twitter.


Supreme Court Ruling Requires Government To Obtain Search Warrants To Collect Users' Location Data

On Friday, the Supreme Court of the United States (SCOTUS) issued a decision which requires the government to obtain warrants in order to collect information from wireless carriers such as geo-location data. 9to5Mac reported that the court case resulted from:

"... a 2010 case of armed robberies in Detroit in which prosecutors used data from wireless carriers to make a conviction. In this case, lawyers had access to about 13,000 location data points. The sticking point has been whether access and use of data like this violates the Fourth Amendment. Apple, along with Google and Facebook had previously submitted a brief to the Supreme Court arguing for privacy protection..."

The Fourth Amendment in the U.S. Constitution states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

The New York Times reported:

"The 5-to-4 ruling will protect "deeply revealing" records associated with 400 million devices, the chief justice wrote. It did not matter, he wrote, that the records were in the hands of a third party. That aspect of the ruling was a significant break from earlier decisions. The Constitution must take account of vast technological changes, Chief Justice Roberts wrote, noting that digital data can provide a comprehensive, detailed — and intrusive — overview of private affairs that would have been impossible to imagine not long ago. The decision made exceptions for emergencies like bomb threats and child abductions..."

Background regarding the Fourth Amendment:

"In a pair of recent decisions, the Supreme Court expressed discomfort with allowing unlimited government access to digital data. In United States v. Jones, it limited the ability of the police to use GPS devices to track suspects’ movements. And in Riley v. California, it required a warrant to search cellphones. Chief Justice Roberts wrote that both decisions supported the result in the new case."

The Supreme Court's decision also discussed law enforcement's historical use of the "third-party doctrine":

"In 1979, for instance, in Smith v. Maryland, the Supreme Court ruled that a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his landline phone. The court reasoned that the suspect had voluntarily turned over that information to a third party: the phone company. Relying on the Smith decision’s “third-party doctrine,” federal appeals courts have said that government investigators seeking data from cellphone companies showing users’ movements do not require a warrant. But Chief Justice Roberts wrote that the doctrine is of limited use in the digital age. “While the third-party doctrine applies to telephone numbers and bank records, it is not clear whether its logic extends to the qualitatively different category of cell-site records,” he wrote."

The ruling also covered the Stored Communications Act, which requires:

"... prosecutors to go to court to obtain tracking data, but the showing they must make under the law is not probable cause, the standard for a warrant. Instead, they must demonstrate only that there were “specific and articulable facts showing that there are reasonable grounds to believe” that the records sought “are relevant and material to an ongoing criminal investigation.” That was insufficient, the court ruled. But Chief Justice Roberts emphasized the limits of the decision. It did not address real-time cell tower data, he wrote, “or call into question conventional surveillance techniques and tools, such as security cameras.” "

What else this Supreme Court decision might mean:

"The decision thus has implications for all kinds of personal information held by third parties, including email and text messages, internet searches, and bank and credit card records. But Chief Justice Roberts said the ruling had limits. "We hold only that a warrant is required in the rare case where the suspect has a legitimate privacy interest in records held by a third party," the chief justice wrote. The court’s four more liberal members — Justices Ruth Bader Ginsburg, Stephen G. Breyer, Sonia Sotomayor and Elena Kagan — joined his opinion."

Dissenting opinions by conservative Justices cited restrictions on law enforcement's abilities and predicted further litigation. Breitbart News focused upon divisions within the Supreme Court and the dissenting Justices' opinions, rather than a comprehensive explanation of the majority's opinion and the law. Some conservatives say that President Trump will have an opportunity to appoint two Supreme Court Justices.

Albert Gidari, the Consulting Director of Privacy at the Stanford Law Center for Internet and Society, discussed the Court's ruling:

"What a Difference a Week Makes. The government sought seven days of records from the carrier; it got two days. The Court held that seven days or more was a search and required a warrant. So can the government just ask for 6 days with a subpoena or court order under the Stored Communications Act? Here’s what Justice Roberts said in footnote 3: “[W]e need not decide whether there is a limited period for which the Government may obtain an individual’s historical CSLI free from Fourth Amendment scrutiny, and if so, how long that period might be. It is sufficient for our purposes today to hold that accessing seven days of CSLI constitutes a Fourth Amendment search.” You can bet that will be litigated in the coming years, but the real question is what will mobile carriers do in the meantime... Where You Walk and Perhaps Your Mere Presence in Public Spaces Can Be Private. The Court said this clearly: “A person does not surrender all Fourth Amendment protection by venturing into the public sphere. To the contrary, “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”” This is the most important part of the Opinion in my view. It’s potential impact is much broader than the location record at issue in the case..."

Mr. Gidari's essay explored several more issues:

  • Does the Decision Really Make a Difference to Law Enforcement?
  • Are All Business Records in the Hands of Third Parties Now Protected?
  • Does It Matter Whether You Voluntarily Give the Data to a Third Party?

And:

Most people carry their smartphones with them 24/7 and everywhere they go. Hence, the geo-location data trail contains unique and very personal movements: where and whom you visit, how often and how long you visit, who else (e.g., their smartphones) is nearby, and what you do (e.g., calls, mobile apps) at certain locations. The Supreme Court, or at least a majority of its Justices, seems to recognize and value this.

What are your opinions of the Supreme Court ruling?


Google To Exit Weaponized Drone Contract And Pursue Other Defense Projects

Last month, protests by current and former Google employees, plus academic researchers, cited ethical and transparency concerns with the artificial intelligence (AI) help the company provides to the U.S. Department of Defense for Project Maven, a weaponized drone program to identify people. Gizmodo reported that Google plans not to renew its contract for Project Maven:

"Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract... The company plans to unveil new ethical principles about its use of AI this week... Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft... Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion."


FBI Warns Sophisticated Malware Targets Wireless Routers In Homes And Small Businesses

The U.S. Federal Bureau of Investigation (FBI) issued a Public Service Announcement (PSA) warning consumers and small businesses that "foreign cyber actors" have targeted their wireless routers. The May 25th PSA explained the threat:

"The actors used VPNFilter malware to target small office and home office routers. The malware is able to perform multiple functions, including possible information collection, device exploitation, and blocking network traffic... The malware targets routers produced by several manufacturers and network-attached storage devices by at least one manufacturer... VPNFilter is able to render small office and home office routers inoperable. The malware can potentially also collect information passing through the router. Detection and analysis of the malware’s network activity is complicated by its use of encryption and misattributable networks."

The "VPN" acronym usually refers to a Virtual Private Network. Why use the VPNFilter name for sophisticated malware? Wired magazine explained:

"... the versatile code is designed to serve as a multipurpose spy tool, and also creates a network of hijacked routers that serve as unwitting VPNs, potentially hiding the attackers' origin as they carry out other malicious activities."

The FBI's PSA advised users to: (a) reboot (i.e., turn off and then back on) their routers; (b) disable remote management features, which attackers could take over to gain access; and (c) update their routers with the latest software and security patches. For routers purchased independently, security experts advise consumers to contact the router manufacturer's tech support or customer service site.
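For readers comfortable with a command line, the "disable remote management" advice can be sanity-checked from inside the home network. Below is a minimal, hypothetical Python sketch (the 192.168.1.1 address and the port list are assumptions; your router's address and management ports may differ) that probes common management ports to see which ones answer:

```python
import socket

# Ports commonly exposed by consumer routers for management interfaces:
# Telnet, HTTP, HTTPS, and two frequently used alternate web-admin ports.
# (Assumed list for illustration; consult your router's documentation.)
MANAGEMENT_PORTS = [23, 80, 443, 8080, 8443]

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_router(host: str = "192.168.1.1") -> dict:
    """Probe each management port on the router and report which ones answered."""
    return {port: port_open(host, port) for port in MANAGEMENT_PORTS}

if __name__ == "__main__":
    for port, is_open in check_router().items():
        print(f"port {port}: {'OPEN' if is_open else 'closed'}")
```

A port that answers from inside the home network is not necessarily reachable from the internet, so treat the results only as a prompt to review the router's administration settings.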

For routers leased or purchased from an internet service provider (ISP), consumers should contact their ISP's customer service or technical support department for software updates and security patches. Example: the Verizon FiOS forums site lists the brands and models affected by the VPNFilter malware, since several manufacturers produce routers for the Verizon FiOS service.

It is critical for consumers to heed this PSA. The New York Times reported:

"An analysis by Talos, the threat intelligence division for the tech giant Cisco, estimated that at least 500,000 routers in at least 54 countries had been infected by the [VPNFilter] malware... A global network of hundreds of thousands of routers is already under the control of the Sofacy Group, the Justice Department said last week. That group, which is also known as A.P.T. 28 and Fancy Bear and believed to be directed by Russia’s military intelligence agency... To disrupt the Sofacy network, the Justice Department sought and received permission to seize the web domain toknowall.com, which it said was a critical part of the malware’s “command-and-control infrastructure.” Now that the domain is under F.B.I. control, any attempts by the malware to reinfect a compromised router will be bounced to an F.B.I. server that can record the I.P. address of the affected device..."

Readers wanting technical details about VPNFilter should read the Talos Intelligence blog post.

When consumers contact their ISP about router software updates, it is wise to also inquire about security patches for the KRACK Wi-Fi vulnerability, which bad actors have also exploited recently. Example: the Verizon site also provides information about KRACK.

The latest threat provides several strong reminders:

  1. The conveniences of wireless internet connectivity that consumers demand and enjoy also benefit the bad guys;
  2. The bad guys are persistent and will continue to target internet-connected devices with weak or no protection, including devices consumers fail to protect;
  3. Wireless benefits come with a responsibility for consumers to shop wisely for internet-connected devices featuring easy, continual software updates and security patches. Otherwise, that shiny new device you recently purchased is nothing more than an expensive "brick"; and
  4. Manufacturers have a responsibility to provide consumers with easy, continual software updates and security patches for the internet-connected devices they sell.

What are your opinions of the VPNFilter malware? What has been your experience securing your wireless home router?


Federal Watchdog Launches Investigation of Age Bias at IBM

[Editor's note: today's guest post, by reporters at ProPublica, updates a prior post about employment practices. It is reprinted with permission. A data breach at IBM in 2007 led to the creation of this blog.]

By Peter Gosselin, ProPublica

The U.S. Equal Employment Opportunity Commission has launched a nationwide probe of age bias at IBM in the wake of a ProPublica investigation showing the company has flouted or outflanked laws intended to protect older workers from discrimination.

More than five years after IBM stopped providing legally required disclosures to older workers being laid off, the EEOC’s New York district office has begun consolidating individuals’ complaints from across the country and asking the company to explain practices recounted in the ProPublica story, according to ex-employees who’ve spoken with investigators and people familiar with the agency’s actions.

"Whenever you see the EEOC pulling cases and sending them to investigations, you know they’re taking things seriously," said the agency’s former general counsel, David Lopez. "I suspect IBM’s treatment of its later-career workers and older applicants is going to get a thorough vetting."

EEOC officials refused to comment on the agency’s investigation, but a dozen ex-IBM employees from California, Colorado, Texas, New Jersey and elsewhere allowed ProPublica to view the status screens for their cases on the agency’s website. The screens show the cases being transferred to EEOC’s New York district office shortly after the March 22 publication of ProPublica’s original story, and then being shifted to the office’s investigations division, in most instances, between April 5 and April 10.

The agency’s acting chair, Victoria Lipnic, a Republican, has made age discrimination a priority. The EEOC’s New York office won a settlement last year from Kentucky-based national restaurant chain Texas Roadhouse in the largest age-related case, as measured by the number of workers covered, to go to trial in more than three decades.

IBM did not respond to questions about the EEOC investigation. In response to detailed questions for our earlier story, the company issued a brief statement, saying in part, "We are proud of our company and its employees’ ability to reinvent themselves era after era while always complying with the law."

Just prior to publication of the story, IBM issued a video recounting its long history of support for equal employment and diversity. In it, CEO Virginia "Ginni" Rometty said, "Every generation of IBMers has asked ‘How can we in our own time expand our understanding of inclusion?’ "

ProPublica reported in March that the tech giant, which has an annual revenue of about $80 billion, has ousted an estimated 20,000 U.S. employees ages 40 and over since 2014, about 60 percent of its American job cuts during those years. In some instances, it earmarked money saved by the departures to hire young replacements in order to, in the words of one internal company document, "correct seniority mix."

ProPublica reported that IBM regularly denied older workers information the law says they’re entitled to in order to decide whether they’ve been victims of age bias, and used point systems and other methods to pick older workers for removal, even when the company rated them high performers.

In some cases, IBM treated job cuts as voluntary retirements, even over employees’ objections. This reduced the number of departures counted as layoffs, which can trigger public reporting requirements in high enough numbers, and prevented employees from seeking jobless benefits for which voluntary retirees can’t file.

In addition to the complaints covered in the EEOC probe, a number of current and former employees say they have recently filed new complaints with the agency about age bias and are contemplating legal action against the company.

Edvin Rusis of Laguna Niguel, a suburb south of Los Angeles, said IBM has told him he’ll be laid off June 27 from his job of 15 years as a technical specialist. Rusis refused to sign a severance agreement and hired a class-action lawyer. They have filed an EEOC complaint claiming Rusis was one of "thousands" discriminated against by IBM.

If the agency issues a right-to-sue letter indicating Rusis has exhausted administrative remedies for his claim, they can take IBM to court. "I don’t see a clear reason for why they’re laying me off," the 59-year-old Rusis said in an interview. "I can only assume it’s age, and I don’t want to go silently."

Coretta Roddey of suburban Atlanta, 49, an African-American Army veteran and former IBM employee, said she’s applied more than 50 times to return to the company, but has been turned down or received no response. She’s hired a lawyer and filed an age discrimination complaint with EEOC.

"It’s frustrating," she said of the multiple rejections. "It makes you feel you don’t have the qualifications (for the job) when you really do."

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given the fact that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC's practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators, allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. "Regardless of their size and clout, these offenders, too, should be stopped cold," he wrote.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about the users of those services. That collection has implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns with the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a drone surveillance program that identifies people and objects in video footage. Reportedly, these are the first known mass resignations at the company.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor wanted -- that their personal e-mail, photos, calendars, videos, social media activity, map usage, and more would be used for automated military applications. What are your opinions?


U.S. Senate Vote Approves Resolution To Reinstate Net Neutrality Rules. FCC Chairman Pai Repeats Claims While Ignoring Consumers

Yesterday, the United States Senate approved a bipartisan resolution to preserve net neutrality rules, the set of internet protections established in 2015 that required wireless and internet service providers (ISPs) to give customers equal access to all websites. That meant no throttling, blocking, or slow-downs of selected sites, and no prioritizing of internet traffic into "fast" or "slow" lanes.

Earlier this month, the Federal Communications Commission (FCC) said that current net neutrality rules would expire on June 11, 2018. Politicians promised that tax cuts would create new jobs, and that repeal of net neutrality rules would encourage investments by ISPs. FCC Chairman Ajit Pai, appointed by President Trump, released a statement on May 10, 2018:

"Now, on June 11, these unnecessary and harmful Internet regulations will be repealed and the bipartisan, light-touch approach that served the online world well for nearly 20 years will be restored. The Federal Trade Commission will once again be empowered to target any unfair or deceptive business practices of Internet service providers and to protect American’s broadband privacy. Armed with our strengthened transparency rule, we look forward to working closely with the FTC to safeguard a free and open Internet. On June 11, we will have a framework in place that encourages innovation and investment in our nation’s networks so that all Americans, no matter where they live, can have access to better, cheaper, and faster Internet access and the jobs, opportunities, and platform for free expression that it provides. And we will embrace a modern, forward-looking approach that will help the United States lead the world in 5G..."

Chairman Pai's claims ring hollow, since reality says otherwise. Telecommunications companies have cut staff despite receiving tax cuts, broadband privacy repeal, and net neutrality repeal. In December, more than 1,000 startups and investors signed an open letter to Pai opposing the elimination of net neutrality. Entrepreneurs and executives are concerned that the loss of net neutrality will harm or hinder start-up businesses.

CNet provided a good overview of events surrounding the Senate's resolution:

"Democrats are using the Congressional Review Act to try to halt the FCC's December repeal of net neutrality. The law gives Congress 60 legislative days to undo regulations imposed by a federal agency. What's needed to roll back the FCC action are simple majorities in both the House and Senate, as well as the president's signature. Senator Ed Markey (Democrat, Massachusetts), who's leading the fight in the Senate to preserve the rules, last week filed a so-called discharge petition, a key step in this legislative effort... Meanwhile, Republican lawmakers and broadband lobbyists argue the existing rules hurt investment and will stifle innovation. They say efforts by Democrats to stop the FCC's repeal of the rules do nothing to protect consumers. All 49 Democrats in the Senate support the effort to undo the FCC's vote. One Republican, Senator Susan Collins of Maine, also supports the measure. One more Republican is needed to cross party lines to pass it."

"No touch" is probably a more accurate description of the internet under Chairman Pai's leadership, given many historical problems and abuses of consumers by some ISPs. The loss of net neutrality protections will likely result in huge price increases for internet access for consumers, which will also hurt public libraries, the poor, and disabled users. The loss of net neutrality will allow ISPs the freedom to carve up, throttle, block, and slow down the internet traffic they choose, while consumers will lose the freedom to use as they choose the broadband service they've paid for. And, don't forget the startup concerns above.

After the Senate's vote, FCC Chairman Pai released this statement:

“The Internet was free and open before 2015, when the prior FCC buckled to political pressure from the White House and imposed utility-style regulation on the Internet. And it will continue to be free and open once the Restoring Internet Freedom Order takes effect on June 11... our light-touch approach will deliver better, faster, and cheaper Internet access and more broadband competition to the American people—something that millions of consumers desperately want and something that should be a top priority. The prior Administration’s regulatory overreach took us in the opposite direction, reducing investment in broadband networks and particularly harming small Internet service providers in rural and lower-income areas..."

The internet was free and open before 2015? Mr. Pai is guilty of revisionist history. The lack of ISP competition in key markets meant consumers in the United States paid more for broadband and got slower speeds compared to other countries. There were numerous complaints by consumers about usage-based internet pricing. There were privacy abuses and settlement agreements by ISPs involving technologies such as deep-packet inspection and "supercookies" used to track customers online, despite consumers' wishes not to be tracked. Many consumers didn't get the broadband speeds their ISPs promised. Some consumers sued their ISPs, and the New York State Attorney General had residents check their broadband speed with this tool.

Tim Berners-Lee, the inventor of the World Wide Web, cited three reasons why the internet is in trouble. His number one reason: consumers have lost control of their personal information.

There's more. Some consumers found that their ISP hijacked their online search results without notice or consent. An ISP in Kansas admitted in 2008 to secret snooping after pressure from Congress. Given all this, something had to be done. The FCC stepped up and acted when it was legally able to, reclassifying broadband after open hearings. Proposed rules were circulated prior to adoption. It was done in the open.

Yet, Chairman Pai would now have us believe that the internet was free and open before 2015, and that regulation was unnecessary. I say BS.

FCC Commissioner Jessica Rosenworcel released a statement yesterday:

"Today the United States Senate took a big step to fix the serious mess the FCC made when it rolled back net neutrality late last year. The FCC's net neutrality repeal gave broadband providers extraordinary new powers to block websites, throttle services and play favorites when it comes to online content. This put the FCC on the wrong side of history, the wrong side of the law, and the wrong side of the American people. Today’s vote is a sign that the fight for internet freedom is far from over. I’ll keep raising a ruckus to support net neutrality and I hope others will too."

A mess, indeed, created by Chairman Pai. A December 2017 study of 1,077 voters found that most want net neutrality protections:

Do you favor or oppose the proposal to give ISPs the freedom to: a) provide websites the option to give their visitors the ability to download material at a higher speed, for a fee, while providing a slower speed for other websites; b) block access to certain websites; and c) charge their customers an extra fee to gain access to certain websites?
Group          Favor    Opposed    Refused/Don't Know
National       15.5%    82.9%      1.6%
Republicans    21.0%    75.4%      3.6%
Democrats      11.0%    88.5%      0.5%
Independents   14.0%    85.9%      0.1%

Why did the FCC, President Trump, and most GOP politicians pursue the elimination of net neutrality protections despite consumers' wishes? For the same reason they repealed broadband privacy protections despite most consumers wanting them. (Remember, President Trump signed the privacy-rollback legislation in April 2017.) They are doing the bidding of the corporate ISPs at the expense of consumers. Profits before people. Whenever Mr. Pai mentions a "free and open internet," he's referring to corporate ISPs, not consumers. What do you think?


News Media Alliance Challenges Tech Companies To 'Accept Accountability' And Responsibility For Filtering News In Their Platforms

Last week, David Chavern, the President and CEO of the News Media Alliance (NMA), testified before the House Judiciary Committee. The NMA is a nonprofit trade association representing over 2,000 news organizations across the United States. Mr. Chavern's testimony focused on the problem of fake news, which is often aided by social networking platforms.

His comments first described current conditions:

"... Quality journalism is essential to a healthy and functioning democracy -- and my members are united in their desire to fight for its future.

Too often in today’s information-driven environment, news is included in the broad term "digital content." It’s actually much more important than that. While some low-quality entertainment or posts by friends can be disappointing, inaccurate information about world events can be immediately destructive. Civil society depends upon the availability of real, accurate news.

The internet represents an extraordinary opportunity for broader understanding and education. We have never been more interconnected or had easier and quicker means of communication. However, as currently structured, the digital ecosystem gives tremendous viewpoint control and economic power to a very small number of companies – the tech platforms that distribute online content. That control and power must come with new responsibilities... Historically, newspapers controlled the distribution of their product; the news. They invested in the journalism required to deliver it, and then printed it in a form that could be handed directly to readers. No other party decided who got access to the information, or on what terms. The distribution of online news is now dominated by the major technology platforms. They decide what news is delivered and to whom – and they control the economics of digital news..."

Last month, a survey found that roughly two-thirds of U.S. adults (68 percent) use Facebook, and about three-quarters of those use the social networking site daily. In 2016, a survey found that 62 percent of adults in the United States get news from social networking sites, up from 49 percent in 2012. That 2016 survey also found smaller shares of U.S. adults often get their news from other sources: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites and apps (28 percent), radio (25 percent), and print newspapers (20 percent).

Mr. Chavern then described the problems with two specific tech companies:

"The First Amendment prohibits the government from regulating the press. But it doesn’t prevent Facebook and Google from acting as de facto regulators of the news business.

Neither Google nor Facebook are – or have ever been – "neutral pipes." To the contrary, their businesses depend upon their ability to make nuanced decisions through sophisticated algorithms about how and when content is delivered to users. The term “algorithm” makes these decisions seem scientific and neutral. The fact is that, while their decision processes may be highly-automated, both companies make extensive editorial judgments about accuracy, relevance, newsworthiness and many other criteria.

The business models of Facebook and Google are complex and varied. However, we do know that they are both immense advertising platforms that sell people’s time and attention. Their "secret algorithms" are used to cultivate that time and attention. We have seen many examples of the types of content favored by these systems – namely, click-bait and anything that can generate outrage, disgust and passion. Their systems also favor giving users information like that which they previously consumed, thereby generating intense filter bubbles and undermining common understandings of issues and challenges.

All of these things are antithetical to a healthy news business – and a healthy democracy..."

Earlier this month, Apple Computer and Facebook executives exchanged criticisms about each other's business models and privacy. Mr. Chavern's testimony before Congress also described more problems and threats:

"Good journalism is factual, verified and takes into account multiple points of view. It can take a lot of time and investment. Most particularly, it requires someone to take responsibility for what is published. Whether or not one agrees with a particular piece of journalism, my members put their names on their product and stand behind it. Readers know where to send complaints. The same cannot be said of the sea of bad information that is delivered by the platforms in paid priority over my members’ quality information. The major platforms’ control over distribution also threatens the quality of news for another reason: it results in the “commoditization” of news. Many news publishers have spent decades – often more than a century – establishing their brands. Readers know the brands that they can trust — publishers whose reporting demonstrates the principles of verification, accuracy and fidelity to facts. The major platforms, however, work hard to erase these distinctions. Publishers are forced to squeeze their content into uniform, homogeneous formats. The result is that every digital publication starts to look the same. This is reinforced by things like the Google News Carousel, which encourages users to flick back and forth through articles on the same topic without ever noticing the publisher. This erosion of news publishers’ brands has played no small part in the rise of "fake news." When hard news sources and tabloids all look the same, how is a customer supposed to tell the difference? The bottom line is that while Facebook and Google claim that they do not want to be "arbiters of truth," they are continually making huge decisions on how and to whom news content is delivered. These decisions too often favor free and commoditized junk over quality journalism. The platforms created by both companies could be wonderful means for distributing important and high-quality information about the world. 
But, for that to happen, they must accept accountability for the power they have and the ultimate impacts their decisions have on our economic, social and political systems..."

Download Mr. Chavern's complete testimony. Industry watchers argue that recent changes by Facebook have hurt local news organizations. MediaPost reported:

"When Facebook changed its algorithm earlier this year to focus on “meaningful” interactions, publishers across the board were hit hard. However, local news seemed particularly vulnerable to the alterations. To assuage this issue, the company announced that it would prioritize news related to local towns and metro areas where a user resided... To determine how positively that tweak affected local news outlets, the Tow Center measured interactions for posts from publications coming from 13 metro areas... The survey found that 11 out of those 13 have consistently seen a drop in traffic between January 1 and April 1 of 2018, allowing the results to show how outlets are faring nine weeks after the algorithm change. According to the Tow Center study, three outlets saw interactions on their pages decrease by a dramatic 50%. These include The Dallas Morning News, The Denver Post, and The San Francisco Chronicle. The Atlanta Journal-Constitution saw interactions drop by 46%."

So, huge problems persist.

Early in my business career, I had the opportunity to develop and market an online service using content from Dow Jones News/Retrieval. That experience taught me that news -- hard news -- includes who, where, when, and what happened. Everything else is opinion, commentary, analysis, advertisement, or fiction. It is critical to know the differences and learn to spot each type. Otherwise, you are likely to be misled, misinformed, or fooled.


Federal Regulators Assess $1 Billion Fine Against Wells Fargo Bank

On Friday, several federal regulators announced the assessment of a $1 billion fine against Wells Fargo Bank for violations of the "Consumer Financial Protection Act (CFPA) in the way it administered a mandatory insurance program related to its auto loans..."

The Consumer Financial Protection Bureau (CFPB) announced the fine and settlement with Wells Fargo Bank, N.A., and its coordinated action with the Office of the Comptroller of the Currency (OCC). The announcement stated that the CFPB:

"... also found that Wells Fargo violated the CFPA in how it charged certain borrowers for mortgage interest rate-lock extensions. Under the terms of the consent orders, Wells Fargo will remediate harmed consumers and undertake certain activities related to its risk management and compliance management. The CFPB assessed a $1 billion penalty against the bank and credited the $500 million penalty collected by the OCC toward the satisfaction of its fine."

This is not the first fine against Wells Fargo. The bank paid a $185 million fine in 2016 to settle charges over alleged unlawful sales practices during the prior five years. To game an internal sales system, employees allegedly created about 1.5 million bogus accounts, and both issued and activated debit cards associated with the secret accounts. Employees also created PIN numbers for the accounts, all without customers' knowledge or consent. An investigation in 2017 found 1.4 million more bogus accounts than originally identified in 2016. Also in 2017, irregularities were reported in how the bank handled mortgages.

The OCC explained that it took action:

"... given the severity of the deficiencies and violations of law, the financial harm to consumers, and the bank’s failure to correct the deficiencies and violations in a timely manner. The OCC found deficiencies in the bank’s enterprise-wide compliance risk management program that constituted reckless, unsafe, or unsound practices and resulted in violations of the unfair practices prong of Section 5 of the Federal Trade Commission (FTC) Act. In addition, the agency found the bank violated the FTC Act and engaged in unsafe and unsound practices relating to improper placement and maintenance of collateral protection insurance policies on auto loan accounts and improper fees associated with interest rate lock extensions. These practices resulted in consumer harm which the OCC has directed the bank to remediate.

The $500 million civil money penalty reflects a number of factors, including the bank’s failure to develop and implement an effective enterprise risk management program to detect and prevent the unsafe or unsound practices, and the scope and duration of the practices..."

MarketWatch explained the bank's unfair and unsound practices:

"When consumers buy a vehicle through a lender, the lender often requires the consumer to also purchase “collateral protection insurance.” That means the vehicle itself is collateral — or essentially, could be repossessed — if the loan is not paid... Sometimes, the fine print of the contracts say that if borrowers do not buy their own insurance (enough to satisfy the terms of the loan), the lender will go out and purchase that insurance on their behalf, and charge them for it... That is a legal practice. But in the case of Wells Fargo, borrowers said they actually did buy that insurance, and Wells Fargo still bought more insurance on their behalf and charged them for it."

So, the bank forced consumers to buy unwanted and unnecessary auto insurance. The lesson for consumers: don't accept the first auto loan offered, and closely read the fine print of contracts from lenders. Wells Fargo said in a news release that it:

"... will adjust its first quarter 2018 preliminary financial results by an additional accrual of $800 million, which is not tax deductible. The accrual reduces reported first quarter 2018 net income by $800 million, or $0.16 cents per diluted common share, to $4.7 billion, or 96 cents per diluted common share. Under the consent orders, Wells Fargo will also be required to submit, for review by its board, plans detailing its ongoing efforts to strengthen its compliance and risk management, and its approach to customer remediation efforts."

Kudos to the OCC and CFPB for taking this action against a bank with a spotty history. Will executives at Wells Fargo learn their lessons from the massive fine? The Washington Post reported that the bank will:

"... benefit from a massive corporate tax cut passed by Congress last year. he bank’s effective tax rate this year will fall from about 33 percent to 22 percent, according to a Goldman Sachs analysis released in December. The change could boost its profits by 18 percent, according to the analysis. Just in the first quarter, Wells Fargo’s effective tax rate fell from about 28 percent to 18 percent, saving it more than $600 million. For the entire year, the tax cut is expected to boost the company’s profits by $3.7 billion..."

So, don't worry about the bank. Its tax savings will easily offset the fine, which makes one doubt the fine is a sufficient deterrent. And, I found the OCC's announcement forceful and appropriate, while the CFPB's announcement seemed to soft-pedal things by saying the absolute minimum.
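
To put "easily offset" in perspective, here is the arithmetic. The fine and the savings estimates come from the consent orders and the Goldman Sachs analysis quoted above; the comparison itself is my own.

```python
fine = 1.0e9              # total civil penalty assessed by the CFPB
annual_tax_savings = 3.7e9  # estimated full-year profit boost from the tax cut
q1_savings = 600e6        # Q1 2018 savings alone (effective rate 28% -> 18%)

# The expected annual tax savings cover the fine several times over,
# and under two quarters of savings at the Q1 pace would pay it off entirely.
print(annual_tax_savings / fine)     # 3.7 -- savings-to-fine ratio
print(round(fine / q1_savings, 1))   # 1.7 -- quarters needed to cover the fine
```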

What do you think? Will the fine curb executive wrongdoing?


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.
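To see why the "$1 billion or more" estimate above is plausible, it helps to multiply out the consent decree's $16,000 per-violation, per-day penalty. The sketch below uses purely hypothetical violation counts and durations chosen for illustration; nothing here reflects actual FTC findings.

```python
# Back-of-envelope sketch of how per-violation, per-day penalties compound.
# The $16,000 figure comes from the 2011 consent decree as reported above;
# the violation count and day count below are hypothetical assumptions.

PENALTY_PER_VIOLATION_PER_DAY = 16_000  # dollars, per the consent decree

def potential_fine(violations: int, days: int) -> int:
    """Maximum statutory exposure: penalty x violations x days."""
    return PENALTY_PER_VIOLATION_PER_DAY * violations * days

# Even a tiny number of violations sustained for a few months reaches
# the billion-dollar range.
print(potential_fine(violations=1_000, days=90))  # 1,440,000,000
```

The point of the arithmetic is that with tens of millions of affected users, even conservative counting produces a theoretical maximum far beyond any fine the FTC would realistically levy, which is why estimates vary so widely.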

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said that a similar enforcement arm that would work with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — or about 50 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is that a company’s terms of service contain a blanket statement of consent that users never read or agree to.

James Grimmelman, a law professor and computer scientist, argued in a 2015 paper that the technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelman said in an interview, “we should ask for appropriate supervisions for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook.com has dominated the news during the past three weeks. The news media have reported about many issues, but there are more -- whether or not you use Facebook. Things began about mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."

So, data that should have been deleted wasn't. Then, Facebook relied upon certifications from entities that had lied previously. Not good. Then, Facebook posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether the data the perpetrators said was destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder how many other developers, corporate users, and academic users have violated Facebook's rules by sharing sensitive member data they shouldn't have.

Facebook announced on March 21st that it will, 1) investigate all apps that had access to large amounts of information and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little too late. Facebook needs to do more... perhaps, revise its Terms Of Use to include large fines for violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure members' sensitive information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics with political campaigns, it seems wise to assume that the practice will continue with improvements. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.
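The mechanism behind likes-based prediction, as described in the excerpt above, can be sketched simply: each "like" is a binary feature that co-occurs more often with some trait groups than others. The toy below fabricates all users, pages, and labels for illustration; the actual 2013 study used large real datasets and proper regression models, not this naive per-like counting.

```python
# Toy illustration of predicting a trait from page likes.
# All data here is fabricated; page names and labels are hypothetical.
from collections import Counter

# Hypothetical training data: (set of page likes, trait label)
training = [
    ({"page_a", "page_b"}, "group1"),
    ({"page_a", "page_c"}, "group1"),
    ({"page_d", "page_e"}, "group2"),
    ({"page_d", "page_c"}, "group2"),
]

# Count how often each like co-occurs with each trait label.
like_counts = {}
for likes, label in training:
    for like in likes:
        like_counts.setdefault(like, Counter())[label] += 1

def predict(likes):
    """Score each label by summing per-like co-occurrence counts."""
    scores = Counter()
    for like in likes:
        scores.update(like_counts.get(like, Counter()))
    return scores.most_common(1)[0][0]

print(predict({"page_a"}))             # likes associated with group1
print(predict({"page_d", "page_e"}))   # likes associated with group2
```

The sketch also shows why scale matters: with only a handful of likes the signal is weak, but aggregated over hundreds of likes per user and millions of users, even crude co-occurrence statistics become highly predictive, which is what made the friend-network harvesting so valuable.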

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. I don't. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions, nor who gets which protections where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect users' data it collects. Some of its privacy snafus: settlement of a lawsuit resulting from alleged privacy abuses by its Beacon advertising program, changed members' ad settings without notice or consent, an advertising platform which allegedly facilitates abuses of older workers, health and privacy concerns about a new service for children ages 6 to 13, transparency concerns about political ads, and new lawsuits about the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now, we have yet another apology.

In a press release this afternoon, Facebook revised upward the number affected by the Facebook/CA breach from 50 to 87 million persons. Most, about 70.6 million, are in the United States. The breakdown by country:

Number of affected persons by country in the Facebook - Cambridge Analytica breach.

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protections as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. And, users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on, specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too, especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg, the CEO of Facebook, will testify before the U.S. Congress on April 11th. His upcoming testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to what's available in the United Kingdom. And, there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation?" Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And, that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, it refuses to provide protections equal to those in the United Kingdom, or if it backtracks later on its promises. Thoughts? Comments?


The 'CLOUD Act' - What It Is And What You Need To Know

Chances are, you probably have not heard of the "CLOUD Act." I hadn't heard about it until recently. A draft of the legislation is available on the website for U.S. Senator Orrin Hatch (Republican - Utah).

Many people who already use cloud services to store and backup data might assume: if it has to do with the cloud, then it must be good. Such an assumption would be foolish. The full name of the bill: the "Clarifying Lawful Overseas Use of Data (CLOUD) Act." What problem does this bill solve? Senator Hatch stated last month why he thinks this bill is needed:

"... the Supreme Court will hear arguments in a case... United States v. Microsoft Corp., colloquially known as the Microsoft Ireland case... The case began back in 2013, when the US Department of Justice asked Microsoft to turn over emails stored in a data center in Ireland. Microsoft refused on the ground that US warrants traditionally have stopped at the water’s edge. Over the last few years, the legal battle has worked its way through the court system up to the Supreme Court... The issues the Microsoft Ireland case raises are complex and have created significant difficulties for both law enforcement and technology companies... law enforcement officials increasingly need access to data stored in other countries for investigations, yet no clear enforcement framework exists for them to obtain overseas data. Meanwhile, technology companies, who have an obligation to keep their customers’ information private, are increasingly caught between conflicting laws that prohibit disclosure to foreign law enforcement. Equally important, the ability of one nation to access data stored in another country implicates national sovereignty... The CLOUD Act bridges the divide that sometimes exists between law enforcement and the tech sector by giving law enforcement the tools it needs to access data throughout the world while at the same time creating a commonsense framework to encourage international cooperation to resolve conflicts of law. To help law enforcement, the bill creates incentives for bilateral agreements—like the pending agreement between the US and the UK—to enable investigators to seek data stored in other countries..."

Senators Coons, Graham, and Whitehouse, support the CLOUD Act, along with House Representatives Collins, Jeffries, and others. The American Civil Liberties Union (ACLU) opposes the bill and warned:

"Despite its fluffy sounding name, the recently introduced CLOUD Act is far from harmless. It threatens activists abroad, individuals here in the U.S., and would empower Attorney General Sessions in new disturbing ways... the CLOUD Act represents a dramatic change in our law, and its effects will be felt across the globe... The bill starts by giving the executive branch dramatically more power than it has today. It would allow Attorney General Sessions to enter into agreements with foreign governments that bypass current law, without any approval from Congress. Under these agreements, foreign governments would be able to get emails and other electronic information without any additional scrutiny by a U.S. judge or official. And, while the attorney general would need to consider a country’s human rights record, he is not prohibited from entering into an agreement with a country that has committed human rights abuses... the bill would for the first time allow these foreign governments to wiretap in the U.S. — even in cases where they do not meet Wiretap Act standards. Paradoxically, that would give foreign governments the power to engage in surveillance — which could sweep in the information of Americans communicating with foreigners — that the U.S. itself would not be able to engage in. The bill also provides broad discretion to funnel this information back to the U.S., circumventing the Fourth Amendment. This information could potentially be used by the U.S. to engage in a variety of law enforcement actions."

Given that warning, I read the draft legislation. One portion immediately struck me:

"A provider of electronic communication service or remote computing service shall comply with the obligations of this chapter to preserve, backup, or disclose the contents of a wire or electronic communication and any record or other information pertaining to a customer or subscriber within such provider’s possession, custody, or control, regardless of whether such communication, record, or other information is located within or outside of the United States."

While I am not an attorney, this bill sounds like an end-run around the Fourth Amendment. The review process is largely governed by the House of Representatives, a body not known for internet knowledge or savvy. The bill also smells like an attack on internet services consumers regularly use for privacy, such as search engines that don't collect or archive search data, and Virtual Private Networks (VPNs).

Today, many consumers in the United States protect their online privacy with VPN software and services provided by vendors located offshore. Why? Despite a national poll in 2017 which found the Republican rollback of FCC broadband privacy rules very unpopular among consumers, the Republican-led Congress proceeded with that rollback, and President Trump signed the privacy-rollback legislation on April 3, 2017. Hopefully, skilled and experienced privacy attorneys will continue to review and monitor the draft legislation.

The ACLU emphasized in its warning:

"Today, the information of global activists — such as those that fight for LGBTQ rights, defend religious freedom, or advocate for gender equality are protected from being disclosed by U.S. companies to governments who may seek to do them harm. The CLOUD Act eliminates many of these protections and replaces them with vague assurances, weak standards, and largely unenforceable restrictions... The CLOUD Act represents a major change in the law — and a major threat to our freedoms. Congress should not try to sneak it by the American people by hiding it inside of a giant spending bill. There has not been even one minute devoted to considering amendments to this proposal. Congress should robustly debate this bill and take steps to fix its many flaws, instead of trying to pull a fast one on the American people."

I agree. This bill seems to create far more problems than it solves. Plus, something this important should be openly and thoroughly discussed, not buried in a spending bill. What do you think?


Securities & Exchange Commission Charges Former Equifax Executive With Insider Trading

Last week, the U.S. Securities and Exchange Commission (SEC) charged a former Equifax executive with insider trading. While an employee, Jun Ying allegedly used confidential information to dump stock and avoid losses before Equifax announced its massive data breach in September 2017.

The SEC announced on March 14th that it had:

"... charged a former chief information officer of a U.S. business unit of Equifax with insider trading in advance of the company’s September 2017 announcement about a massive data breach that exposed the social security numbers and other personal information of about 148 million U.S. customers... The SEC’s complaint charges Ying with violating the antifraud provisions of the federal securities laws and seeks disgorgement of ill-gotten gains plus interest, penalties, and injunctive relief... According to the SEC’s complaint, Jun Ying, who was next in line to be the company’s global CIO, allegedly used confidential information entrusted to him by the company to conclude that Equifax had suffered a serious breach. The SEC alleges that before Equifax’s public disclosure of the data breach, Ying exercised all of his vested Equifax stock options and then sold the shares, reaping proceeds of nearly $1 million. According to the complaint, by selling before public disclosure of the data breach, Ying avoided more than $117,000 in losses... The U.S. Attorney’s Office for the Northern District of Georgia today announced parallel criminal charges against Ying."

Equifax originally estimated that the massive data breach affected about 143 million persons. In March 2018, the company announced that even more people were affected than it had estimated in its September 2017 announcement.

MarketWatch reported that Ying:

"... found out about the breach on Friday afternoon, August 25, 2017... The SEC complaint says that Ying’s internet browsing history shows he learned that Experian’s stock price had dropped approximately 4% after the public announcement of [a prior 2015] Experian breach. Later Monday morning, Ying exercised all of his available stock options for 6,815 shares of Equifax stock that he immediately sold for over $950,000, and a gain of over $480,000... on Aug. 30, the global CIO for Equifax officially told Ying that it was Equifax that had been breached. One of the company’s attorneys, unaware that Ying had already traded on the information, told Ying that the news about the breach was confidential, should not be shared with anyone, and that Ying should not trade in Equifax securities. According to the SEC complaint, Ying did not volunteer the fact that he had exercised and sold all of his vested Equifax options two days before. Equifax finally announced the breach on Sept. 7, and Equifax common stock closed at $123.23 the next day, a drop of $19.49 or nearly 14%..."
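The reported figures can be roughly sanity-checked with back-of-the-envelope arithmetic. This is an illustrative approximation only; the SEC's exact "more than $117,000" figure reflects Ying's actual sale prices, which were somewhat below the pre-announcement closing price implied below:

```python
# Back-of-the-envelope check of the figures reported above.
# Illustrative only; the SEC complaint's own numbers govern.

shares = 6_815              # vested options Ying exercised and sold
drop_per_share = 19.49      # one-day drop after the Sept. 7 announcement
post_drop_close = 123.23    # Equifax close the day after disclosure

# Implied pre-announcement close and percentage drop:
pre_drop_close = post_drop_close + drop_per_share    # about $142.72
pct_drop = drop_per_share / pre_drop_close           # about 13.7% -- "nearly 14%"

# The one-day drop applied to all 6,815 shares is consistent with the
# SEC's claim that Ying avoided "more than $117,000 in losses":
one_day_exposure = shares * drop_per_share           # about $132,800
print(f"{pct_drop:.1%}  ${one_day_exposure:,.2f}")
```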


Banking Legislation Advances In U.S. Senate

The Economic Growth, Regulatory Relief, and Consumer Protection Act (Senate Bill 2155) was approved Wednesday by the United States Senate. The vote was 67 for, 31 against, and 2 not voting. The voting roll call by name:

Alexander (R-TN), Yea
Baldwin (D-WI), Nay
Barrasso (R-WY), Yea
Bennet (D-CO), Yea
Blumenthal (D-CT), Nay
Blunt (R-MO), Yea
Booker (D-NJ), Nay
Boozman (R-AR), Yea
Brown (D-OH), Nay
Burr (R-NC), Yea
Cantwell (D-WA), Nay
Capito (R-WV), Yea
Cardin (D-MD), Nay
Carper (D-DE), Yea
Casey (D-PA), Nay
Cassidy (R-LA), Yea
Cochran (R-MS), Yea
Collins (R-ME), Yea
Coons (D-DE), Yea
Corker (R-TN), Yea
Cornyn (R-TX), Yea
Cortez Masto (D-NV), Nay
Cotton (R-AR), Yea
Crapo (R-ID), Yea
Cruz (R-TX), Yea
Daines (R-MT), Yea
Donnelly (D-IN), Yea
Duckworth (D-IL), Nay
Durbin (D-IL), Nay
Enzi (R-WY), Yea
Ernst (R-IA), Yea
Feinstein (D-CA), Nay
Fischer (R-NE), Yea
Flake (R-AZ), Yea
Gardner (R-CO), Yea
Gillibrand (D-NY), Nay
Graham (R-SC), Yea
Grassley (R-IA), Yea
Harris (D-CA), Nay
Hassan (D-NH), Yea
Hatch (R-UT), Yea
Heinrich (D-NM), Not Voting
Heitkamp (D-ND), Yea
Heller (R-NV), Yea
Hirono (D-HI), Nay
Hoeven (R-ND), Yea
Inhofe (R-OK), Yea
Isakson (R-GA), Yea
Johnson (R-WI), Yea
Jones (D-AL), Yea
Kaine (D-VA), Yea
Kennedy (R-LA), Yea
King (I-ME), Yea
Klobuchar (D-MN), Nay
Lankford (R-OK), Yea
Leahy (D-VT), Nay
Lee (R-UT), Yea
Manchin (D-WV), Yea
Markey (D-MA), Nay
McCain (R-AZ), Not Voting
McCaskill (D-MO), Yea
McConnell (R-KY), Yea
Menendez (D-NJ), Nay
Merkley (D-OR), Nay
Moran (R-KS), Yea
Murkowski (R-AK), Yea
Murphy (D-CT), Nay
Murray (D-WA), Nay
Nelson (D-FL), Yea
Paul (R-KY), Yea
Perdue (R-GA), Yea
Peters (D-MI), Yea
Portman (R-OH), Yea
Reed (D-RI), Nay
Risch (R-ID), Yea
Roberts (R-KS), Yea
Rounds (R-SD), Yea
Rubio (R-FL), Yea
Sanders (I-VT), Nay
Sasse (R-NE), Yea
Schatz (D-HI), Nay
Schumer (D-NY), Nay
Scott (R-SC), Yea
Shaheen (D-NH), Yea
Shelby (R-AL), Yea
Smith (D-MN), Nay
Stabenow (D-MI), Yea
Sullivan (R-AK), Yea
Tester (D-MT), Yea
Thune (R-SD), Yea
Tillis (R-NC), Yea
Toomey (R-PA), Yea
Udall (D-NM), Nay
Van Hollen (D-MD), Nay
Warner (D-VA), Yea
Warren (D-MA), Nay
Whitehouse (D-RI), Nay
Wicker (R-MS), Yea
Wyden (D-OR), Nay
Young (R-IN), Yea

The bill now proceeds to the House of Representatives. If it passes the House, it will be sent to the President for signature.
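For readers who like to verify tallies, the roll call above can be counted programmatically. A minimal sketch (the `roll_call` string is abbreviated here; the full 100-senator list follows the same one-vote-per-line format):

```python
from collections import Counter

# Abbreviated excerpt of the roll call; each line is "Name (Party-State), Vote".
roll_call = """\
Alexander (R-TN), Yea
Baldwin (D-WI), Nay
Heinrich (D-NM), Not Voting
McCain (R-AZ), Not Voting
"""

def tally(text):
    """Count votes in a roll-call listing, one vote per line."""
    counts = Counter()
    for line in text.strip().splitlines():
        vote = line.rsplit(",", 1)[1].strip()  # the vote follows the last comma
        counts[vote] += 1
    return counts

print(tally(roll_call))
```

Run against the full list, the counts should reproduce the 67 Yea, 31 Nay, 2 Not Voting result reported above.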


Report: Little Progress Since 2016 To Replace Old, Vulnerable Voting Machines In United States

We've known for some time that a sizeable portion of voting machines in the United States are vulnerable to hacking and errors. Too many states, cities, and towns use antiquated equipment, or equipment without paper backups. The latter makes recounts impossible.

Has any progress been made to fix the vulnerabilities? The Brennan Center For Justice (BCJ) reported:

"... despite manifold warnings about election hacking for the past two years, the country has made remarkably little progress since the 2016 election in replacing antiquated, vulnerable voting machines — and has done even less to ensure that our country can recover from a successful cyberattack against those machines."

It is important to remember this warning in January 2017 from the Director of National Intelligence (DNI):

"Russian efforts to influence the 2016 US presidential election represent the most recent expression of Moscow’s longstanding desire to undermine the US-led liberal democratic order, but these activities demonstrated a significant escalation in directness, level of activity, and scope of effort compared to previous operations. We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process... Russian intelligence accessed elements of multiple state or local electoral boards. Since early 2014, Russian intelligence has researched US electoral processes and related technology and equipment. DHS assesses that the types of systems we observed Russian actors targeting or compromising are not involved in vote tallying... We assess Moscow will apply lessons learned from its Putin-ordered campaign aimed at the US presidential election to future influence efforts worldwide, including against US allies and their election processes... "

Detailed findings in the BCJ report about the lack of progress:

  1. "This year, most states will use computerized voting machines that are at least 10 years old, and which election officials say must be replaced before 2020.
    While the lifespan of any electronic voting machine varies, systems over a decade old are far more likely to need to be replaced, for both security and reliability reasons... older machines are more likely to use outdated software like Windows 2000. Using obsolete software poses serious security risks: vendors may no longer write security patches for it; jurisdictions cannot replace critical hardware that is failing because it is incompatible with their new, more secure hardware... In 2016, jurisdictions in 44 states used voting machines that were at least a decade old. Election officials in 31 of those states said they needed to replace that equipment by 2020... This year, 41 states will be using systems that are at least a decade old, and officials in 33 say they must replace their machines by 2020. In most cases, elections officials do not yet have adequate funds to do so..."
  2. "Since 2016, only one state has replaced its paperless electronic voting machines statewide.
    Security experts have long warned about the dangers of continuing to use paperless electronic voting machines. These machines do not produce a paper record that can be reviewed by the voter, and they do not allow election officials and the public to confirm electronic vote totals. Therefore, votes cast on them could be lost or changed without notice... In 2016, 14 states (Arkansas, Delaware, Georgia, Indiana, Kansas, Kentucky, Louisiana, Mississippi, New Jersey, Pennsylvania, South Carolina, Tennessee, Texas, and Virginia) used paperless electronic machines as the primary polling place equipment in at least some counties and towns. Five of these states used paperless machines statewide. By 2018 these numbers have barely changed: 13 states will still use paperless voting machines, and 5 will continue to use such systems statewide. Only Virginia decertified and replaced all of its paperless systems..."
  3. "Only three states mandate post-election audits to provide a high level of confidence in the accuracy of the final vote tally.
    Paper records of votes have limited value against a cyberattack if they are not used to check the accuracy of the software-generated total and confirm the veracity of election results. In the last few years, statisticians, cybersecurity professionals, and election experts have made substantial advances in developing techniques to use post-election audits of voter verified paper records to identify a computer error or fraud that could change the outcome of a contest... Specifically, “risk limiting audits” — a process that employs statistical models to consistently provide a high level of confidence in the accuracy of the final vote tally — are now considered the “gold standard” of post-election audits by experts... Despite this fact, risk limiting audits are required in only three states: Colorado, New Mexico, and Rhode Island. While 13 state legislatures are currently considering new post-election audit bills, since the 2016 election, only one — Rhode Island — has enacted a new risk limiting audit requirement."
  4. "43 states are using machines that are no longer manufactured.
    The problem of maintaining secure and reliable voting machines is particularly challenging in the many jurisdictions that use machine models that are no longer produced. In 2015... the Brennan Center estimated that 43 states and the District of Columbia were using machines that are no longer manufactured. In 2018, that number has not changed. A primary challenge of using machines no longer manufactured is finding replacement parts and the technicians who can repair them. These difficulties make systems less reliable and secure... In a recent interview with the Brennan Center, Neal Kelley, registrar of voters for Orange County, California, explained that after years of cannibalizing old machines and hoarding spare parts, he is now forced to take systems out of service when they fail..."
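The "risk limiting audit" idea in item 3 can be made concrete with a toy simulation. This is my own simplified illustration, not any state's actual procedure; real risk-limiting audits use formal statistical tests with a stated risk limit. The sketch draws random samples of paper ballots and measures how often the sample agrees with the reported winner, for an honest count versus a corrupted one:

```python
import random

def sample_leader(ballots, sample_size, rng):
    """Return the candidate leading in a random sample of paper ballots."""
    sample = rng.sample(ballots, sample_size)
    return max(set(sample), key=sample.count)

def audit_agreement(ballots, reported_winner, sample_size, trials, seed=0):
    """Fraction of random samples whose leader matches the reported winner."""
    rng = random.Random(seed)
    hits = sum(sample_leader(ballots, sample_size, rng) == reported_winner
               for _ in range(trials))
    return hits / trials

# Honest election: A truly wins, 55% to 45%, among 10,000 paper ballots.
honest = ["A"] * 5_500 + ["B"] * 4_500
# Corrupted count: officials report A as the winner, but the paper favors B.
corrupted = ["B"] * 5_500 + ["A"] * 4_500

print(audit_agreement(honest, "A", sample_size=400, trials=200))     # near 1.0
print(audit_agreement(corrupted, "A", sample_size=400, trials=200))  # near 0.0
```

Even a 400-ballot sample out of 10,000 almost always flags the corrupted count, which is why auditing voter-verified paper records is considered such a powerful and comparatively cheap safeguard.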

That is embarrassing for a country that prides itself on having an effective democracy. According to the BCJ, the solution is for Congress to fund, via grants, the replacement of paperless and antiquated voting equipment, plus fund post-election audits.

Rather than protect the integrity of our democracy, the government passed a massive tax cut which will increase federal deficits during the coming years while pursuing both a costly military parade and an unfunded border wall. Seems like questionable priorities to me. What do you think?


Legislation Moving Through Congress To Loosen Regulations On Banks

Legislation is moving through Congress which will loosen regulations on banks. Is this an improvement? Is it risky? Is it a good deal for consumers? Before answering those questions, a summary of the Economic Growth, Regulatory Relief, and Consumer Protection Act (Senate Bill 2155):

"This bill amends the Truth in Lending Act to allow institutions with less than $10 billion in assets to waive ability-to-repay requirements for certain residential-mortgage loans... The bill amends the Bank Holding Company Act of 1956 to exempt banks with assets valued at less than $10 billion from the "Volcker Rule," which prohibits banking agencies from engaging in proprietary trading or entering into certain relationships with hedge funds and private-equity funds... The bill amends the United States Housing Act of 1937 to reduce inspection requirements and environmental-review requirements for certain smaller, rural public-housing agencies.

Provisions relating to enhanced prudential regulation for financial institutions are modified, including those related to stress testing, leverage requirements, and the use of municipal bonds for purposes of meeting liquidity requirements. The bill requires credit reporting agencies to provide credit-freeze alerts and includes consumer-credit provisions related to senior citizens, minors, and veterans."

Well, that definitely sounds like relief for banks. Fewer regulations mean it's easier to do business... and make more money. Next questions: is it good for consumers? Is it risky? Keep reading.

The non-partisan Congressional Budget Office (CBO) analyzed the proposed legislation in the Senate, and concluded (bold emphasis added):

"S. 2155 would modify provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd Frank Act) and other laws governing regulation of the financial industry. The bill would change the regulatory framework for small depository institutions with assets under $10 billion (community banks) and for large banks with assets over $50 billion. The bill also would make changes to consumer mortgage and credit-reporting regulations and to the authorities of the agencies that regulate the financial industry. CBO estimates that enacting the bill would increase federal deficits by $671 million over the 2018-2027 period... CBO’s estimate of the bill’s budgetary effect is subject to considerable uncertainty, in part because it depends on the probability in any year that a systemically important financial institution (SIFI) will fail or that there will be a financial crisis. CBO estimates that the probability is small under current law and would be slightly greater under the legislation..."

So, the proposed legislation means there is a greater risk of banks either failing or needing government assistance (e.g., bailout funds). Are there risks to consumers? To taxpayers? CNN interviewed U.S. Senator Elizabeth Warren (D-Mass.), who said:

"Frankly, I just don't see how any senator can vote to weaken the regulations on Wall Street banks... [weakened regulations] puts us at greater risk that there will be another taxpayer bailout, that there will be another crash and another taxpayer bailout..."

So, there are risks for consumers/taxpayers. How? Why? Let's count the ways.

First, the proposed legislation increases federal deficits. Somebody has to pay for that: either higher taxes, fewer services, more debt, or a combination of all three. That doesn't sound good. Does it sound good to you?

Second, looser regulations mean some banks may lend money to more people they shouldn't: borrowers who default on their loans. To compensate, those banks would raise prices (e.g., more fees, higher fees, higher interest rates) for borrowers to cover their losses. If those banks can't cover their losses, then they will fail. If enough banks fail at about the same time, then bingo... another financial crisis.

If key banks fail, then the government will bail out banks (again) to keep the financial system running. (Remember "too big to fail" banks?) Somebody has to pay for bailouts: either higher taxes, fewer services, more debt, or a combination of all three. Does that sound good to you? It doesn't sound good to me. If not, I encourage you to contact your elected officials.

It's critical to remember banking history in the United States. Nobody wants a repeat of the 2008 meltdown. There are always consequences when government... when Congress... decides to help bankers by loosening regulations. What do you think?