
European Regulators Fine Google $5 Billion For 'Breaching EU Antitrust Rules'

On Wednesday, European antitrust regulators fined Google €4.34 billion (U.S. $5 billion) and ordered the tech company to stop using its Android operating system software to block competition. ComputerWorld reported:

"The European Commission found that Google has abused its dominant market position in three ways: tying access to the Play store to installation of Google Search and Google Chrome; paying phone makers and network operators to exclusively install Google Search, and preventing manufacturers from making devices running forks of Android... Google won't let smartphone manufacturers install Play on their phones unless they also make its search engine and Chrome browser the defaults on their phones. In addition, they must only use a Google-approved version of Android. This has prevented companies like Amazon.com, which developed a fork of Android it calls FireOS, from persuading big-name manufacturers to produce phones running its OS or connecting to its app store..."

Reportedly, fewer than 10% of Android phone users download a browser other than the pre-installed default, and fewer than 1% use a different search app. View the archive of European Commission Android OS documents.

Yesterday, the European Commission announced on social media:

European Commission tweet: Google Android OS restrictions graphic

European Commission tweet: Vestager's comments

And, The Guardian newspaper reported:

"Soon after Brussels handed down its verdict, Google announced it would appeal. "Android has created more choice for everyone, not less," a Google spokesperson said... Google has 90 days to end its "illegal conduct" or its parent company Alphabet could be hit with fines amounting to 5% of its daily [revenues] for each day it fails to comply. Wednesday’s verdict ends a 39-month investigation by the European commission’s competition authorities into Google’s Android operating system but it is only one part of an eight-year battle between Brussels and the tech giant."

According to the Reuters news service, a third EU case against Google, involving accusations that the tech company's AdSense advertising service blocks users from displaying search ads from competitors, is still ongoing.


New Jersey to Suspend Prominent Psychologist for Failing to Protect Patient Privacy

[Editor's note: today's guest blog post, by reporters at ProPublica, explores privacy issues within the healthcare industry. The post is reprinted with permission.]

By Charles Ornstein, ProPublica

A prominent New Jersey psychologist is facing the suspension of his license after state officials concluded that he failed to keep details of mental health diagnoses and treatments confidential when he sued his patients over unpaid bills.

The state Board of Psychological Examiners last month upheld a decision by an administrative law judge that the psychologist, Barry Helfmann, “did not take reasonable measures to protect the confidentiality of his patients’ protected health information,” Lisa Coryell, a spokeswoman for the state attorney general’s office, said in an e-mail.

The administrative law judge recommended that Helfmann pay a fine and a share of the investigative costs. The board went further, ordering that Helfmann’s license be suspended for two years, Coryell wrote. During the first year, he will not be able to practice; during the second, he can practice, but only under supervision. Helfmann also will have to pay a $10,000 civil penalty, take an ethics course and reimburse the state for some of its investigative costs. The suspension is scheduled to begin in September.

New Jersey began to investigate Helfmann after a ProPublica article, published in The New York Times in December 2015, described the lawsuits and the information they contained. The allegations involved Helfmann’s patients as well as those of his colleagues at Short Hills Associates in Clinical Psychology, a New Jersey practice where he has been the managing partner.

Helfmann is a leader in his field, serving as president of the American Group Psychotherapy Association, and as a past president of the New Jersey Psychological Association.

ProPublica identified 24 court cases filed by Short Hills Associates from 2010 to 2014 over unpaid bills in which patients’ names, diagnoses and treatments were listed in documents. The defendants included lawyers, business people and a manager at a nonprofit. In cases involving patients who were minors, the lawsuits included children’s names and diagnoses.

The information was subsequently redacted from court records after a patient counter-sued Helfmann and his partners, the psychology group and the practice’s debt collection lawyers. The patient’s lawsuit was settled.

Helfmann has denied wrongdoing, saying his former debt collection lawyers were responsible for attaching patients’ information to the lawsuits. His current lawyer, Scott Piekarsky, said he intends to file an immediate appeal before the discipline takes effect.

"The discipline imposed is ‘so disproportionate as to be shocking to one’s sense of fairness’ under New Jersey case law," Piekarsky said in a statement.

Piekarsky also noted that the administrative law judge who heard the case found no need for any license suspension and raised questions about the credibility of the patient who sued Helfmann. "We feel this is a political decision due to Dr. Helfmann’s aggressive stance" in litigation, he said.

Helfmann sued the state of New Jersey and Joan Gelber, a senior deputy attorney general, claiming that he was not provided due process and equal protection under the law. He and Short Hills Associates sued his prior debt collection firm for legal malpractice. Those cases have been dismissed, though Helfmann has appealed.

Helfmann and Short Hills Associates also are suing the patient who sued him, as well as the man’s lawyer, claiming the patient and lawyer violated a confidential settlement agreement by talking to a ProPublica reporter and sharing information with a lawyer for the New Jersey attorney general’s office without providing advance notice. In court pleadings, the patient and his lawyer maintain that they did not breach the agreement. Helfmann brought all three of these lawsuits in state court in Union County.

Throughout his career, Helfmann has been an advocate for patient privacy, helping to push a state law limiting the information an insurance company can seek from a psychologist to determine the medical necessity of treatment. He also was a plaintiff in a lawsuit against two insurance companies and a New Jersey state commission, accusing them of requiring psychologists to turn over their treatment notes in order to get paid.

"It is apparent that upholding the ethical standards of his profession was very important to him," Carol Cohen, the administrative law judge, wrote. "Having said that, it appears that in the case of the information released to his attorney and eventually put into court papers, the respondent did not use due diligence in being sure that confidential information was not released and his patients were protected."


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


FTC Requests Input From The Public And Will Hold Hearings About 'Competition And Consumer Protection'

During the coming months, the U.S. Federal Trade Commission (FTC) will hold a series of meetings and seek input from the public about "Competition And Consumer Protection" and:

"... whether broad-based changes in the economy, evolving business practices, new technologies, or international developments might require adjustments to competition and consumer protection enforcement law, enforcement priorities, and policy."

The FTC expects to conduct 15 to 20 hearings starting in September 2018 and ending in January 2019. Before each topical hearing, input from the public will be sought. The FTC seeks input on the following topics:

  1. "The state of antitrust and consumer protection law and enforcement, and their development, since the Pitofsky hearings;
  2. Competition and consumer protection issues in communication, information, and media technology networks;
  3. The identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring “platform” businesses;
  4. The intersection between privacy, big data, and competition;
  5. The Commission’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters;
  6. Evaluating the competitive effects of corporate acquisitions and mergers;
  7. Evidence and analysis of monopsony power, including but not limited to, in labor markets;
  8. The role of intellectual property and competition policy in promoting innovation; 
  9. The consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics;
  10. The interpretation and harmonization of state and federal statutes and regulations that prohibit unfair and deceptive acts and practices; and
  11. The agency’s investigation, enforcement, and remedial processes."

The public can submit written comments now through August 20, 2018. For more information, see the FTC site about each topic. Additional instructions for comment submissions:

"Each topic description includes issues of particular interest to the Commission, but comments need not be restricted to these subjects... the FTC will invite comments on the topic of each hearing session... The FTC will also invite public comment upon completion of the entire series of hearings. Public comments may address one or more of the above topics generally, or may address them with respect to a specific industry, such as the health care, high-tech, or energy industries... "

Comments must be submitted in writing. The public can submit comments online to the FTC, or via postal mail. Comments submitted via postal mail must include "Competition and Consumer Protection in the 21st Century Hearing, Project Number P181201" on both the comment and the envelope. Mail comments to:

Federal Trade Commission
Office of the Secretary
600 Pennsylvania Avenue NW., Suite CC–5610 (Annex C)
Washington, DC 20580

See the FTC website for instructions for courier deliveries.

The "light touch" enforcement approach by the Federal Communications Commission (FCC) with oversight of the internet, the repeal of broadband privacy, and the repeal of net neutrality repeal, has highlighted the importance of oversight and enforcement by the FTC for consumer protection.

Given the broad range of topical hearings and input it could receive, the FTC may consider and/or pursue major changes to its operations. What do you think?


North Carolina Provides Its Residents With an Opt-out From Smart Meter Installations. Will It Last?

Wise consumers know how smart utility meters operate. Unlike conventional analog meters, which must be read manually on-site by a technician from the utility, smart meters perform two-way digital communication with the service provider, contain memory to store a year's worth of usage data, and transmit usage readings at regular intervals (e.g., every 15 minutes). Plus, consumers have little or no control over smart meters installed on their property.

There is some good news. Residents in North Carolina can say "no" to smart meter installations by their power company. The Charlotte Observer reported:

"Residents who say they suffer from acute sensitivity to radio-frequency waves can say no to Duke's smart meters — as long as they have a notarized doctor's note to attest to their rare condition. The N.C. Utilities Commission, which sets utility rates and rules, created the new standard on Friday, possibly making North Carolina the first state to limit the smart meter technology revolution by means of a medical opinion... Duke Energy's two North Carolina utility subsidiaries are in the midst of switching its 3.4 million North Carolina customers to smart meters..."

While it currently is free to opt out and get an analog meter instead, that could change:

"... Duke had proposed charging customers extra if they refused a smart meter. Duke wanted to charge an initial fee of $150 plus $11.75 a month to cover the expense of sending someone out to that customer's house to take a monthly meter reading. But the Utilities Commission opted to give the benefit of the doubt to customers with smart meter health issues until the Federal Communications Commission determines the health risks of the devices."

The Smart Grid Awareness blog contains more information about activities in North Carolina. There are privacy concerns with smart meters, which can be used to profile consumers with a high degree of accuracy and detail. One can easily deduce the number of persons living in the dwelling, when they are home and for how long, which electric appliances are used while they are home, the presence of security and alarm systems, and any special conditions (e.g., in-home medical equipment, baby appliances, etc.).
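
To make that profiling risk concrete, below is a minimal, hypothetical Python sketch that flags 15-minute intervals whose usage rises well above an assumed idle baseline. The baseline, threshold, and sample readings are all invented for illustration; this is not any utility's actual analytics.

```python
# Illustrative sketch only: a toy occupancy inference over 15-minute
# smart meter readings. The baseline load, threshold, and sample data
# are invented; this is not any utility's actual algorithm.
from datetime import datetime, timedelta

READ_INTERVAL = timedelta(minutes=15)  # typical reporting interval
BASELINE_KW = 0.3                      # assumed idle load (fridge, standby devices)

def active_intervals(readings, start, threshold=2 * BASELINE_KW):
    """Return the timestamps whose usage is well above the idle baseline."""
    return [start + i * READ_INTERVAL
            for i, kw in enumerate(readings)
            if kw > threshold]

# One day = 96 readings; a year of 15-minute data is ~35,000 points per home.
day = ([BASELINE_KW] * 28 + [1.8, 2.4, 1.1] + [BASELINE_KW] * 40
       + [2.9, 3.5, 2.2] + [BASELINE_KW] * 22)
assert len(day) == 96
print(active_intervals(day, datetime(2018, 7, 1))[:3])
# -> activity at 07:00, 07:15, 07:30 suggests someone is home in the morning
```

Even this toy version shows why interval data is sensitive: morning and evening usage spikes alone sketch a household's daily routine.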

Other states are considering similar measures. The Kentucky Public Service Commission (PSC) will hold a public meeting on July 9th and accept public comments about planned smart meter deployments by Kentucky Utilities Co. (KU) and Louisville Gas & Electric Company (LG&E). Smart meters are also being deployed in New Jersey.

When Maryland lawmakers considered legislation to provide law enforcement with access to consumers' smart meters, the Electronic Privacy Information Center (EPIC) responded with a January 16, 2018 letter outlining the privacy concerns:

"HB 56 is a sensible and effective response to an emerging privacy issue facing Maryland residents. Smart meters collect detailed personal data about the use of utility services. With a smart meter, it is possible to determine when a person is in a residence, and what they are doing. Moreover the routine collection of this data, without adequate privacy safeguards, would enable ongoing surveillance of Maryland residents without regard to any criminal suspicion."

"HB 56 does not prevent law enforcement use of data generated by smart meters; it simply requires that law enforcement follow clear procedures, subject to judicial oversight, to access the data generated by smart meters. HB 56 is an example of a model privacy law that enables innovation while safeguarding personal privacy."

That's a worthy goal of government: balance the competing needs of the business sector to innovate while protecting consumers' privacy. Is a medical opt-out sufficient? Should Fourth Amendment constitutional concerns apply? What are your opinions?


Supreme Court Ruling Requires Government To Obtain Search Warrants To Collect Users' Location Data

On Friday, the Supreme Court of the United States (SCOTUS) issued a decision which requires the government to obtain warrants in order to collect information from wireless carriers such as geo-location data. 9to5Mac reported that the court case resulted from:

"... a 2010 case of armed robberies in Detroit in which prosecutors used data from wireless carriers to make a conviction. In this case, lawyers had access to about 13,000 location data points. The sticking point has been whether access and use of data like this violates the Fourth Amendment. Apple, along with Google and Facebook had previously submitted a brief to the Supreme Court arguing for privacy protection..."

The Fourth Amendment in the U.S. Constitution states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

The New York Times reported:

"The 5-to-4 ruling will protect "deeply revealing" records associated with 400 million devices, the chief justice wrote. It did not matter, he wrote, that the records were in the hands of a third party. That aspect of the ruling was a significant break from earlier decisions. The Constitution must take account of vast technological changes, Chief Justice Roberts wrote, noting that digital data can provide a comprehensive, detailed — and intrusive — overview of private affairs that would have been impossible to imagine not long ago. The decision made exceptions for emergencies like bomb threats and child abductions..."

Background regarding the Fourth Amendment:

"In a pair of recent decisions, the Supreme Court expressed discomfort with allowing unlimited government access to digital data. In United States v. Jones, it limited the ability of the police to use GPS devices to track suspects’ movements. And in Riley v. California, it required a warrant to search cellphones. Chief Justice Roberts wrote that both decisions supported the result in the new case.

The Supreme Court's decision also discussed the historical use of the "third-party doctrine" by law enforcement:

"In 1979, for instance, in Smith v. Maryland, the Supreme Court ruled that a robbery suspect had no reasonable expectation that his right to privacy extended to the numbers dialed from his landline phone. The court reasoned that the suspect had voluntarily turned over that information to a third party: the phone company. Relying on the Smith decision’s “third-party doctrine,” federal appeals courts have said that government investigators seeking data from cellphone companies showing users’ movements do not require a warrant. But Chief Justice Roberts wrote that the doctrine is of limited use in the digital age. “While the third-party doctrine applies to telephone numbers and bank records, it is not clear whether its logic extends to the qualitatively different category of cell-site records,” he wrote."

The ruling also covered the Stored Communications Act, which requires:

"... prosecutors to go to court to obtain tracking data, but the showing they must make under the law is not probable cause, the standard for a warrant. Instead, they must demonstrate only that there were “specific and articulable facts showing that there are reasonable grounds to believe” that the records sought “are relevant and material to an ongoing criminal investigation.” That was insufficient, the court ruled. But Chief Justice Roberts emphasized the limits of the decision. It did not address real-time cell tower data, he wrote, “or call into question conventional surveillance techniques and tools, such as security cameras.” "

What else this Supreme Court decision might mean:

"The decision thus has implications for all kinds of personal information held by third parties, including email and text messages, internet searches, and bank and credit card records. But Chief Justice Roberts said the ruling had limits. "We hold only that a warrant is required in the rare case where the suspect has a legitimate privacy interest in records held by a third party," the chief justice wrote. The court’s four more liberal members — Justices Ruth Bader Ginsburg, Stephen G. Breyer, Sonia Sotomayor and Elena Kagan — joined his opinion."

Dissenting opinions by the conservative Justices cited restrictions on law enforcement's abilities and the prospect of further litigation. Breitbart News focused upon divisions within the Supreme Court and the dissenting Justices' opinions, rather than a comprehensive explanation of the majority's opinion and the law. Some conservatives say that President Trump will have an opportunity to appoint two Supreme Court Justices.

Albert Gidari, the Consulting Director of Privacy at the Stanford Law Center for Internet and Society, discussed the Court's ruling:

"What a Difference a Week Makes. The government sought seven days of records from the carrier; it got two days. The Court held that seven days or more was a search and required a warrant. So can the government just ask for 6 days with a subpoena or court order under the Stored Communications Act? Here’s what Justice Roberts said in footnote 3: “[W]e need not decide whether there is a limited period for which the Government may obtain an individual’s historical CSLI free from Fourth Amendment scrutiny, and if so, how long that period might be. It is sufficient for our purposes today to hold that accessing seven days of CSLI constitutes a Fourth Amendment search.” You can bet that will be litigated in the coming years, but the real question is what will mobile carriers do in the meantime... Where You Walk and Perhaps Your Mere Presence in Public Spaces Can Be Private. The Court said this clearly: “A person does not surrender all Fourth Amendment protection by venturing into the public sphere. To the contrary, “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”” This is the most important part of the Opinion in my view. It’s potential impact is much broader than the location record at issue in the case..."

Mr. Gidari's essay explored several more issues:

  • Does the Decision Really Make a Difference to Law Enforcement?
  • Are All Business Records in the Hands of Third Parties Now Protected?
  • Does It Matter Whether You Voluntarily Give the Data to a Third Party?

And:

Most people carry their smartphones with them 24/7 and everywhere they go. Hence, the geo-location data trail contains unique and very personal movements: where and whom you visit, how often and long you visit, who else (e.g., their smartphones) is nearby, and what you do (e.g., calls, mobile apps) at certain locations. The Supreme Court, or at least a majority of its Justices, seem to recognize and value this.

What are your opinions of the Supreme Court ruling?


Lawmakers In California Cave To Industry Lobbying, And Backtrack With Weakened Net Neutrality Bill

After the U.S. Federal Communications Commission (FCC) acted last year to repeal net neutrality rules, those protections officially expired on June 11th. Meanwhile, legislators in California have acted to protect their state's residents. In January, State Senator Weiner introduced a proposed bill, which the California Senate passed three weeks ago.

Since then, some politicians have countered with a modified bill lacking strong protections. C/Net reported:

"The vote on Wednesday in a California Assembly committee hearing advanced a bill that implements some net neutrality protections, but it scaled back all the measures of the bill that had gone beyond the rules outlined in the Federal Communications Commission's 2015 regulation, which was officially taken off the books by the Trump Administration's commission last week. In a surprise move, the vote happened before the hearing officially started,..."

Weiner's original bill was considered the "gold standard" of net neutrality protections for consumers because:

"... it went beyond the FCC's 2015 net neutrality "bright line" rules by including provisions like a ban on zero-rating, a business practice that allows broadband providers like AT&T to exempt their own services from their monthly wireless data caps, while services from competitors are counted against those limits. The result is a market controlled by internet service providers like AT&T, who can shut out the competition by creating an economic disadvantage for those competitors through its wireless service plans."

State Senator Weiner summarized the modified legislation:

"It is, with the amendments, a fake net neutrality bill..."

A key supporter of the modified, weak bill was Assemblyman Miguel Santiago, a Democrat from Los Angeles. Motherboard reported:

"Spearheading the rushed dismantling of the promising law was Committee Chair Miguel Santiago, a routine recipient of AT&T campaign contributions. Santiago’s office failed to respond to numerous requests for comment from Motherboard and numerous other media outlets... Weiner told the San Francisco Chronicle that the AT&T fueled “evisceration” of his proposal was “decidedly unfair.” But that’s historically how AT&T, a company with an almost comical amount of control over state legislatures, tends to operate. The company has so much power in many states, it’s frequently allowed to quite literally write terrible state telecom law..."

Supporters of this weakened bill either forgot or ignored the results of a December 2017 study of 1,077 voters, which found that most consumers want net neutrality protections:

Do you favor or oppose the proposal to give ISPs the freedom to: a) provide websites the option to give their visitors the ability to download material at a higher speed, for a fee, while providing a slower speed for other websites; b) block access to certain websites; and c) charge their customers an extra fee to gain access to certain websites?
Group Favor Opposed Refused/Don't Know
National 15.5% 82.9% 1.6%
Republicans 21.0% 75.4% 3.6%
Democrats 11.0% 88.5% 0.5%
Independents 14.0% 85.9% 0.1%

Why would politicians pursue weak net neutrality bills with few protections, while constituents want those protections? They are doing the bidding of the corporate internet service providers (ISPs) at the expense of their constituents. Profits before people. These politicians promote the freedom for ISPs to do as they please while restricting consumers' freedoms to use the bandwidth they've purchased however they please.

Broadcasting and Cable reported:

"These California democrats will go down in history as among the worst corporate shills that have ever held elected office," said Evan Greer of net neutrality activist group Fight for the Future. "Californians should rise up and demand that at their Assembly members represent them. The actions of this committee are an attack not just on net neutrality, but on our democracy.” According to Greer, the vote passed 8-0, with Democrats joining Republicans to amend the bill."

According to C/Net, more than 24 states are considering net neutrality legislation to protect their residents:

"... New York, Connecticut, and Maryland, are also considering legislation to reinstate net neutrality rules. Oregon and Washington state have already signed their own net neutrality legislation into law. Governors in several states, including New Jersey and Montana, have signed executive orders requiring ISPs that do business with the state adhere to net neutrality principles."

So, we have AT&T (plus politicians more interested in corporate donors than their constituents, the FCC, President Trump, and probably other telecommunications companies) to thank for this mess. What do you think?


Several States Updated Their Existing Breach Notification Laws, Or Introduced New Laws

Given the increased usage of data in digital formats, new access methods, and continual data breaches within corporations and governments, several state governments have updated their data breach notification laws, and/or passed new laws:

Alabama

Alabama was the last state without a breach notification law. In March, Governor Kay Ivey signed the state's first: the Alabama Data Breach Notification Act of 2018 (SB 318), which became effective on June 1, 2018. Some of the key provisions: a) like other states' laws, it defines the format and types of data elements which must be protected, including health information; b) it defines "covered entities" to include state government agencies and "third-party agents" contracted to maintain, store, process, and/or access protected data; c) it requires notification of affected individuals, and of the state Attorney General, within 45 days; and d) while penalties aren't mandatory, the law allows civil penalties of up to $5,000 per day for "each consecutive day that the covered entity fails to take reasonable action to comply with the notice provisions of this act."

Arizona

Earlier this year, Arizona Governor Doug Ducey signed legislation updating the state's breach notification laws. Some of the key modifications: a) expanded the definition of personal information to include medical or mental health treatment/diagnosis, passport numbers, taxpayer ID numbers, biometric data, and e-mail addresses in combination with online passwords and security questions; b) set the notification window for affected persons at 45 days; c) allows e-mail notification of affected persons; and d) if the breach affected more than 1,000 persons, notification must also be provided to the three national credit-reporting agencies and to the state Attorney General.

Colorado

Colorado Governor John Hickenlooper signed on May 29th several laws including HB-1128, which will go into effect on September 1, 2018. Some experts view HB-1128 as providing the strongest protections in the country. Some of the key modifications: a) expanded "covered entities" to include certain "third-party service providers" contracted to maintain, store, process, and/or access protected data; b) expanded the definition of "personal information" to include biometric data, plus e-mail addresses in combination with online passwords and security questions; c) allows substitute notification methods (e.g., e-mail, a post on the company's website, statewide news media) if the cost of basic notification would exceed $250,000; d) allows e-mail notification of affected persons; e) sets the notification window at 30 days if the breach affected more than 500 Colorado residents; and f) expanded requirements for companies to protect personal information.

Louisiana

Louisiana Governor John Edwards signed in May 2018 an amendment to the state’s Database Security Breach Notification Law (Act 382), which will take effect August 1, 2018. Some of the key modifications: a) expanded the definition of "personal information" to include a state identification card number, passport number, and "biometric data" (e.g., fingerprints, voice prints, eye retina or iris, or other unique biological characteristics used to access systems); b) removed vagueness by defining the notification window as within 60 days; c) allows substitute notification methods (e.g., e-mail, posts on the affected company's website, statewide news media); and d) tightened requirements that companies utilizing "computerized data" better protect the information they archive.

South Dakota

South Dakota was the next-to-last state without a breach notification law. In March, Governor Dennis Daugaard signed into law the state’s first breach notification law (SB 62). Like breach laws in other states, it defines what a breach is, the personal information which must be protected, the covered entities (e.g., companies, government agencies) subject to the law, the notification requirements, and the conditions under which substitute notification methods (e.g., e-mail, posts on the affected entity's website, statewide news media) are allowed.

To Summarize

New Mexico enacted its breach notification law (HB 15) in March 2017. With the additions of Alabama and South Dakota, every state finally has a breach notification law. Sadly, it has taken 16 years. California was the first state to enact a breach notification law, in 2002. It has taken that long for other states to catch up... not only with California, but also with the technological changes driven by the internet.

California has led the way for a long time. It banned RFID skimming in 2008, co-hosted privacy workshops with the U.S. Federal Trade Commission in 2008, strengthened its existing breach law in 2011, and introduced in 2013 privacy guidelines for mobile app developers. Other states' legislatures can learn from this leadership.

Want to learn more? Detailed reviews of new and updated breach laws are available on the National Law Review website.


San Diego Police Widely Share Data From License Plate Database

Image: ALPR device mounted on a patrol car

Many police departments use automated license plate reader (ALPR or LPR) technology to monitor the movements of drivers and their vehicles. The surveillance has several implications beyond the extensive data collection.

The Voice of San Diego reported that the San Diego Police Department (SDPD) shares its database of ALPR data with many other agencies:

"SDPD shares that database with the San Diego sector of Border Patrol – and with another 600 agencies across the country, including other agencies within the Department of Homeland Security. The nationwide database is enabled by Vigilant Solutions, a private company that provides data management and software services to agencies across the country for ALPR systems... A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data..."

San Diego's ALPR database is fed by a network of cameras which record images plus the date, time and GPS location of the cars that pass by them. So, the associated metadata for each database record probably includes the license plate number, license plate state, vehicle owner, GPS location, travel direction, date and time, road/street/highway name or number, and the LPR device ID number.
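
For illustration, a record like the one just described might be modeled as the following Python data structure. The field names are assumptions drawn from this description, not Vigilant Solutions' actual schema.

```python
# Hypothetical sketch of one ALPR database record as a Python data
# structure. Field names are assumptions drawn from the description
# above, not Vigilant Solutions' actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AlprRecord:
    plate_number: str
    plate_state: str
    gps_lat: float
    gps_lon: float
    heading: str          # travel direction, e.g., "NB"
    captured_at: datetime
    roadway: str
    device_id: str        # stationary camera or patrol-car unit

record = AlprRecord("7ABC123", "CA", 32.7157, -117.1611, "NB",
                    datetime(2018, 6, 1, 14, 30), "I-5", "unit-042")
print(record.plate_number, record.captured_at)
```

Even without the vehicle owner's name attached, a stream of such records indexed by plate number amounts to a location history.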

Information about San Diego's ALPR activities became public after a data request from the Electronic Frontier Foundation (EFF), a digital privacy organization. ALPRs are a popular tool, and were used in about 38 states in 2014. Typically, the surveillance collects data about both criminals and innocent drivers.

Image: ALPR devices mounted on unmarked patrol cars

There are several valid applications: find stolen vehicles, find stolen license plates, find wanted vehicles (e.g., abductions), execute search warrants, and find wanted parolees. Some ALPR devices are stationary (e.g., mounted on street lights), while others are mounted on (marked and unmarked) patrol cars. Both deployments scan moving vehicles, while the latter also facilitates the scanning of parked vehicles.

Earlier this year, the EFF issued hundreds of similar requests across the country to learn how law enforcement currently uses ALPR technology. The ALPR training manual for the Elk Grove, Illinois police department listed the data archival policies for several states: New Jersey, 5 years; Vermont, 18 months; Utah, 9 months; Minnesota, 48 hours; Arkansas, 150 days; New Hampshire, not allowed; and California, no set time. The document also stated that more than "50 million captures" are added each month to the Vigilant database. And, the Elk Grove PD seems to broadly share its ALPR data with other police departments and agencies.
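
To make those retention windows concrete, here is a minimal sketch mapping each state's policy (as listed in the manual) to a purge date for a given capture. The purge computation itself is an assumption, not taken from any agency's system.

```python
# Sketch: applying the per-state retention windows listed above to
# compute when a capture should be purged. The mapping mirrors the
# figures in the manual; the purge logic itself is an assumption.
from datetime import datetime, timedelta

RETENTION = {
    "New Jersey":    timedelta(days=5 * 365),
    "Vermont":       timedelta(days=18 * 30),  # months approximated as 30 days
    "Utah":          timedelta(days=9 * 30),
    "Minnesota":     timedelta(hours=48),
    "Arkansas":      timedelta(days=150),
    "New Hampshire": None,  # ALPR use not allowed
    "California":    None,  # no set retention time
}

def purge_date(state, captured_at):
    """Return when a record must be deleted, or None if no limit applies."""
    window = RETENTION.get(state)
    return captured_at + window if window else None

print(purge_date("Minnesota", datetime(2018, 6, 1)))  # 2018-06-03 00:00:00
```

The spread alone, from 48 hours in Minnesota to no limit at all in California, shows how uneven the rules are from state to state.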

The SDPD website includes a "License Plate Recognition: Procedures" document (Adobe PDF), dated May 2015, which describes its ALPR usage and policies:

"The legitimate law enforcement purposes of LPR systems include the following: 1) Locating stolen, wanted, or subject of investigation vehicles; 2) Locating witnesses and victims of a violent crime; 3) Locating missing or abducted children and at risk individuals.

LPR Strategies: 1) LPR equipped vehicles should be deployed as frequently as possible to maximize the utilization of the system; 2) Regular operation of LPR should be considered as a force multiplying extension of an officer’s regular patrol efforts to observe and detect vehicles of interest and specific wanted vehicles; 3) LPR may be legitimately used to collect data that is within public view, but should not be used to gather intelligence of First Amendment activities; 4) Reasonable suspicion or probable cause is not required for the operation of LPR equipment; 5) Use of LPR equipped cars to conduct license plate canvasses and grid searches is encouraged, particularly for major crimes or incidents as well as areas that are experiencing any type of crime series... LPR data will be retained for a period of one year from the time the LPR record was captured by the LPR device..."

The document does not describe its data security methods to protect this sensitive information from breaches, hacks, and unauthorized access. Perhaps most importantly, the 2015 SDPD document describes the data sharing policy:

"Law enforcement officers shall not share LPR data with commercial or private entities or individuals. However, law enforcement officers may disseminate LPR data to government entities with an authorized law enforcement or public safety purpose for access to such data."

However, the Voice of San Diego reported:

"A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data... SDPD’s sharing doesn’t stop at Border Patrol. The list of agencies with near immediate access to the travel habits of San Diegans includes law enforcement partners you might expect, like the Carlsbad Police Department – with which SDPD has for years shared license plate reader data, through a countywide arrangement overseen by SANDAG – but also obscure agencies like the police department in Meigs, Georgia, population 1,038, and a private group that is not itself a police department, the Missouri Police Chiefs Association..."

So, the accuracy of the 2015 document is questionable, if it isn't already obsolete. Moreover, what's really critical are the data retention and sharing policies of Vigilant and the other agencies.


Medicare Scams Still Operate. How To Avoid Getting Your Identity Information Stolen

To minimize fraud, the new Medicare cards display a unique 11-digit identification number instead of patients' Social Security numbers. However, scammers have created a new tactic to trick patients into revealing their sensitive Medicare information. The Oregon Department of Justice warned:

"If someone calls and asks you for your personal information, money to activate the new card, or threatens to cancel your Medicare benefits if you don’t share your personal information, just hang up! It is a scam," said Attorney General Ellen Rosenblum.

Medicare will not call you, nor ask for your Social Security number or bank information. That's good advice for patients nationwide. Experts estimate that Medicare loses about $60 billion yearly to con artists via a variety of scams.

Oregon residents suspecting healthcare fraud, or wanting to report scammers, should contact the Oregon Department of Justice’s Consumer Protection unit (hotline: 1-877-877-9392, or www.oregonconsumer.gov). Consumers in other states should contact their state's attorney general, and/or report suspected fraud directly to Medicare.

The video below from 2017 includes advice about how patients should protect their Medicare cards.


Connecticut And Federal Regulators Announce $1.3 Million Settlement With Substance Abuse Healthcare Provider

Connecticut and federal regulators recently announced a settlement agreement to resolve allegations that New Era Rehabilitation Center (New Era), operating in New Haven and Bridgeport, submitted false claims to both state and federal healthcare programs. The office of George Jepsen, Connecticut Attorney General, announced that New Era:

"... and its co-founders and owners – Dr. Ebenezer Kolade and Dr. Christina Kolade – are enrolled as providers in the Connecticut Medical Assistance Program (CMAP), which includes the state's Medicaid program. As part of their practice, they provide methadone treatment services for patients dealing with opioid addiction. Most of their patients are CMAP beneficiaries.

During the relevant time period, CMAP reimbursed methadone clinics by paying a weekly bundled rate that included all of the services associated with methadone maintenance, including the patient's doses of methadone; the initial intake evaluation; a physical examination; periodic drug testing; and individual, group and family drug counseling... The state and federal governments alleged that, from October 2009 to November 2013, New Era and the Kolades engaged in a pattern and practice of billing CMAP weekly for the methadone bundled service rate and then also submitting a separate claim to the CMAP for virtually every drug counseling session provided to clients by using a billing code for outpatient psychotherapy. The state and federal governments further alleged that those psychotherapy sessions were actually the drug counseling sessions already included and reimbursed through the bundled rate."

These actions were part of the State of Connecticut's Inter-agency Fraud Task Force, created in 2013 to investigate and prosecute healthcare fraud. The joint investigation included the Connecticut AG's office, the office of Connecticut U.S. Attorney John H. Durham, and the U.S. Department of Health and Human Services, Office of Inspector General – Office of Investigations.

Terms of the settlement agreement require New Era to pay $1,378,533 in settlement funds. Of that amount, $881,945 will be returned to the CMAP.

Connecticut residents suspecting healthcare fraud or abuse should contact the Attorney General’s Antitrust and Government Program Fraud Department (phone: 860-808-5040; email: ag.fraud@ct.gov), or the Department of Social Services (fraud hotline: 1-800-842-2155; online: www.ct.gov/dss/reportingfraud; email: providerfraud.dss@ct.gov). Residents of other states can contact their state attorney general's office.


Oakland Law Mandates 'Technology Impact Reports' By Local Government Agencies Before Purchasing Surveillance Equipment

Popular surveillance tools used by law enforcement include stingrays (devices which mimic cellular phone towers) and automated license plate readers (ALPRs), both used to track the movements of persons. Historically, these technologies have often been deployed without notice, tracking both the bad guys (e.g., criminals and suspects) and innocent citizens.

To better balance the privacy needs of citizens versus the surveillance needs of law enforcement, some areas are implementing new laws. The East Bay Times reported about a new law in Oakland:

"... introduced at Tuesday’s city council meeting, creates a public approval process for surveillance technologies used by the city. The rules also lay a groundwork for the City Council to decide whether the benefits of using the technology outweigh the cost to people’s privacy. Berkeley and Davis have passed similar ordinances this year.

However, Oakland’s ordinance is unlike any other in the nation in that it requires any city department that wants to purchase or use the surveillance technology to submit a "technology impact report" to the city’s Privacy Advisory Commission, creating a “standardized public format” for technologies to be evaluated and approved... city departments must also submit a “surveillance use policy” to the Privacy Advisory Commission for consideration. The approved policy must be adopted by the City Council before the equipment is to be used..."

Reportedly, the city council will review the ordinance a second time before final passage.

The Northern California chapter of the American Civil Liberties Union (ACLU) discussed the problem, the need for transparency, and legislative actions:

"Public safety in the digital era must include transparency and accountability... the ACLU of California and a diverse coalition of civil rights and civil liberties groups support SB 1186, a bill that helps restores power at the local level and makes sure local voices are heard... the use of surveillance technology harms all Californians and disparately harms people of color, immigrants, and political activists... The Oakland Police Department concentrated their use of license plate readers in low income and minority neighborhoods... Across the state, residents are fighting to take back ownership of their neighborhoods... Earlier this year, Alameda, Culver City, and San Pablo rejected license plate reader proposals after hearing about the Immigration & Customs Enforcement (ICE) data [sharing] deal. Communities are enacting ordinances that require transparency, oversight, and accountability for all surveillance technologies. In 2016, Santa Clara County, California passed a groundbreaking ordinance that has been used to scrutinize multiple surveillance technologies in the past year... SB 1186 helps enhance public safety by safeguarding local power and ensuring transparency, accountability... SB 1186 covers the broad array of surveillance technologies used by police, including drones, social media surveillance software, and automated license plate readers. The bill also anticipates – and covers – AI-powered predictive policing systems on the rise today... Without oversight, the sensitive information collected by local governments about our private lives feeds databases that are ripe for abuse by the federal government. This is not a hypothetical threat – earlier this year, ICE announced it had obtained access to a nationwide database of location information collected using license plate readers – potentially sweeping in the 100+ California communities that use this technology. Many residents may not be aware their localities also share their information with fusion centers, federal-state intelligence warehouses that collect and disseminate surveillance data from all levels of government.

Statewide legislation can build on the nationwide Community Control Over Police Surveillance (CCOPS) movement, a reform effort spearheaded by 17 organizations, including the ACLU, that puts local residents and elected officials in charge of decisions about surveillance technology. If passed in its current form, SB 1186 would help protect Californians from intrusive, discriminatory, and unaccountable deployment of law enforcement surveillance technology."

Is there similar legislation in your state?


4 Ways to Fix Facebook

[Editor's Note: today's guest post, by ProPublica reporters, explores solutions to the massive privacy and data security problems at Facebook.com. It is reprinted with permission.]

By Julia Angwin, ProPublica

Gathered in a Washington, D.C., ballroom last Thursday for their annual “tech prom,” hundreds of tech industry lobbyists and policy makers applauded politely as announcers read out the names of the event’s sponsors. But the room fell silent when “Facebook” was proclaimed — and the silence was punctuated by scattered boos and groans.

These days, it seems the only bipartisan agreement in Washington is to hate Facebook. Democrats blame the social network for costing them the presidential election. Republicans loathe Silicon Valley billionaires like Facebook founder and CEO Mark Zuckerberg for their liberal leanings. Even many tech executives, boosters and acolytes can’t hide their disappointment and recriminations.

The tipping point appears to have been the recent revelation that a voter-profiling outfit working with the Trump campaign, Cambridge Analytica, had obtained data on 87 million Facebook users without their knowledge or consent. News of the breach came after a difficult year in which, among other things, Facebook admitted that it allowed Russians to buy political ads, advertisers to discriminate by race and age, hate groups to spread vile epithets, and hucksters to promote fake news on its platform.

Over the years, Congress and federal regulators have largely left Facebook to police itself. Now, lawmakers around the world are calling for it to be regulated. Congress is gearing up to grill Zuckerberg. The Federal Trade Commission is investigating whether Facebook violated its 2011 settlement agreement with the agency. Zuckerberg himself suggested, in a CNN interview, that perhaps Facebook should be regulated by the government.

The regulatory fever is so strong that even Peter Swire, a privacy law professor at Georgia Institute of Technology who testified last year in an Irish court on behalf of Facebook, recently laid out the legal case for why Google and Facebook might be regulated as public utilities. Both companies, he argued, satisfy the traditional criteria for utility regulation: They have large market share, are natural monopolies, and are difficult for customers to do without.

While the political momentum may not be strong enough right now for something as drastic as that, many in Washington are trying to envision what regulating Facebook would look like. After all, the solutions are not obvious. The world has never tried to rein in a global network with 2 billion users that is built on fast-moving technology and evolving data practices.

I talked to numerous experts about the ideas bubbling up in Washington. They identified four concrete, practical reforms that could address some of Facebook’s main problems. None are specific to Facebook alone; potentially, they could be applied to all social media and the tech industry.

1. Impose Fines for Data Breaches

The Cambridge Analytica data loss was the result of a breach of contract, rather than a technical breach in which a company gets hacked. But either way, it’s far too common for institutions to lose customers’ data — and they rarely suffer significant financial consequences for the loss. In the United States, companies are only required to notify people if their data has been breached in certain states and under certain circumstances — and regulators rarely have the authority to penalize companies that lose personal data.

Consider the Federal Trade Commission, which is the primary agency that regulates internet companies these days. The FTC doesn’t have the authority to demand civil penalties for most data breaches. (There are exceptions for violations of children’s privacy and a few other offenses.) Typically, the FTC can only impose penalties if a company has violated a previous agreement with the agency.

That means Facebook may well face a fine for the Cambridge Analytica breach, assuming the FTC can show that the social network violated a 2011 settlement with the agency. In that settlement, the FTC charged Facebook with eight counts of unfair and deceptive behavior, including allowing outside apps to access data that they didn’t need — which is what Cambridge Analytica reportedly did years later. The settlement carried no financial penalties but included a clause stating that Facebook could face fines of $16,000 per violation per day.

David Vladeck, former FTC director of consumer protection, who crafted the 2011 settlement with Facebook, said he believes Facebook’s actions in the Cambridge Analytica episode violated the agreement on multiple counts. “I predict that if the FTC concludes that Facebook violated the consent decree, there will be a heavy civil penalty that could well be in the amount of $1 billion or more,” he said.
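
To see how the penalty clause could plausibly reach the scale Vladeck predicts, here is a hypothetical back-of-envelope calculation. The violation count and duration below are invented for illustration, not the FTC's actual method.

```python
# Hypothetical back-of-envelope only: the consent decree allows fines of
# $16,000 per violation per day; the violation count and duration below
# are invented to show how quickly that reaches the billions.
PER_VIOLATION_PER_DAY = 16_000
violations = 1_000   # assumed number of distinct violations
days = 90            # assumed days of noncompliance

total = PER_VIOLATION_PER_DAY * violations * days
print(f"${total:,}")  # $1,440,000,000
```

With millions of affected users potentially counting as separate violations, the per-violation-per-day structure compounds very quickly, which is why estimates in the billions are not far-fetched.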

Facebook maintains it has abided by the agreement. “Facebook rejects any suggestion that it violated the consent decree,” spokesman Andy Stone said. “We respected the privacy settings that people had in place.”

If a fine had been levied at the time of the settlement, it might well have served as a stronger deterrent against any future breaches. Daniel J. Weitzner, who served in the White House as the deputy chief technology officer at the time of the Facebook settlement, says that technology should be policed by something similar to the Department of Justice’s environmental crimes unit. The unit has levied hundreds of millions of dollars in fines. Under previous administrations, it filed felony charges against people for such crimes as dumping raw sewage or killing a bald eagle. Some ended up sentenced to prison.

“We know how to do serious law enforcement when we think there’s a real priority and we haven’t gotten there yet when it comes to privacy,” Weitzner said.

2. Police Political Advertising

Last year, Facebook disclosed that it had inadvertently accepted thousands of advertisements that were placed by a Russian disinformation operation — in possible violation of laws that restrict foreign involvement in U.S. elections. Special counsel Robert Mueller has charged 13 Russians who worked for an internet disinformation organization with conspiring to defraud the United States, but it seems unlikely that Russia will compel them to face trial in the U.S.

Facebook has said it will introduce a new regime of advertising transparency later this year, which will require political advertisers to submit a government-issued ID and to have an authentic mailing address. It said political advertisers will also have to disclose which candidate or organization they represent and that all election ads will be displayed in a public archive.

But Ann Ravel, a former commissioner at the Federal Election Commission, says that more could be done. While she was at the commission, she urged it to consider what it could do to make internet advertising contain as much disclosure as broadcast and print ads. “Do we want Vladimir Putin or drug cartels to be influencing American elections?” she presciently asked at a 2015 commission meeting.

However, the election commission — which is often deadlocked between its evenly split Democratic and Republican commissioners — has not yet ruled on new disclosure rules for internet advertising. Even if it does pass such a rule, the commission’s definition of election advertising is so narrow that many of the ads placed by the Russians may not have qualified for scrutiny. It’s limited to ads that mention a federal candidate and appear within 60 days prior to a general election or 30 days prior to a primary.
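
A simplified sketch of that window test follows, assuming only the 60/30-day rule described above; the statute's other criteria are omitted, so it is illustrative only.

```python
# Simplified sketch of the narrow FEC window described above (60 days
# before a general election, 30 before a primary). It ignores the
# statute's other criteria, so it is illustrative only.
from datetime import date, timedelta

def within_fec_window(ad_date, election_date, is_primary):
    window = timedelta(days=30 if is_primary else 60)
    return timedelta(0) <= election_date - ad_date <= window

# An ad placed months before the general election escapes scrutiny:
print(within_fec_window(date(2018, 5, 1), date(2018, 11, 6), False))   # False
print(within_fec_window(date(2018, 9, 10), date(2018, 11, 6), False))  # True
```

The first example shows the gap Ravel describes: identical content placed in May rather than September falls entirely outside the rule.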

This definition, Ravel said, is not going to catch new forms of election interference, such as ads placed months before an election, or the practice of paying individuals or bots to spread a message that doesn’t identify a candidate and looks like authentic communications rather than ads.

To combat this type of interference, Ravel said, the current definition of election advertising needs to be broadened. The FEC, she suggested, should establish “a multi-faceted test” to determine whether certain communications should count as election advertisements. For instance, communications could be examined for their intent, and whether they were paid for in a nontraditional way — such as through an automated bot network.

And to help the tech companies find suspect communications, she suggested setting up an enforcement arm similar to the Treasury Department’s Financial Crimes Enforcement Network, known as FinCEN. FinCEN combats money laundering by investigating suspicious account transactions reported by financial institutions. Ravel said that a similar enforcement arm that would work with tech companies would help the FEC.

“The platforms could turn over lots of communications and the investigative agency could then examine them to determine if they are from prohibited sources,” she said.

3. Make Tech Companies Liable for Objectionable Content

Last year, ProPublica found that Facebook was allowing advertisers to buy discriminatory ads, including ads targeting people who identified themselves as “Jew-haters,” and ads for housing and employment that excluded audiences based on race, age and other protected characteristics under civil rights laws.

Facebook has claimed that it has immunity against liability for such discrimination under section 230 of the 1996 federal Communications Decency Act, which protects online publishers from liability for third-party content.

“Advertisers, not Facebook, are responsible for both the content of their ads and what targeting criteria to use, if any,” Facebook stated in legal filings in a federal case in California challenging Facebook’s use of racial exclusions in ad targeting.

But sentiment is growing in Washington to interpret the law more narrowly. Last month, the House of Representatives passed a bill that carves out an exemption in the law, making websites liable if they aid and abet sex trafficking. Despite fierce opposition by many tech advocates, a version of the bill has already passed the Senate.

And many staunch defenders of the tech industry have started to suggest that more exceptions to section 230 may be needed. In November, Harvard Law professor Jonathan Zittrain wrote an article rethinking his previous support for the law and declared it has become, in effect, “a subsidy” for the tech giants, who don’t bear the costs of ensuring the content they publish is accurate and fair.

“Any honest account must acknowledge the collateral damage it has permitted to be visited upon real people whose reputations, privacy, and dignity have been hurt in ways that defy redress,” Zittrain wrote.

In a December 2017 paper titled “The Internet Will Not Break: Denying Bad Samaritans 230 Immunity,” University of Maryland law professors Danielle Citron and Benjamin Wittes argue that the law should be amended — either through legislation or judicial interpretation — to deny immunity to technology companies that enable and host illegal content.

“The time is now to go back and revise the words of the statute to make clear that it only provides shelter if you take reasonable steps to address illegal activity that you know about,” Citron said in an interview.

4. Install Ethics Review Boards

Cambridge Analytica obtained its data on Facebook users by paying a psychology professor to build a Facebook personality quiz. When 270,000 Facebook users took the quiz, the researcher was able to obtain data about them and all of their Facebook friends — or about 50 million people altogether. (Facebook later ended the ability for quizzes and other apps to pull data on users’ friends.)
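How, mechanically, could one quiz reach tens of millions of people? Under the early Graph API, an app could ask a consenting user for permissions that also exposed data about that user's friends. The sketch below illustrates the pattern in Python; the endpoint version, field names, and permission behavior describe the long-retired v1.0-era API and are assumptions for illustration, not calls that work against Facebook today.

    import requests

    GRAPH = "https://graph.facebook.com/v1.0"  # v1.0-era API, long since retired

    def harvest(user_token: str) -> dict:
        """Illustrative sketch: one quiz-taker's token exposes profile
        fields for that user AND for every friend (deprecated behavior)."""
        me = requests.get(f"{GRAPH}/me",
                          params={"access_token": user_token,
                                  "fields": "id,name,location,likes"}).json()
        records = {me["id"]: me}
        friends = requests.get(f"{GRAPH}/me/friends",
                               params={"access_token": user_token}).json()
        for friend in friends.get("data", []):
            # With the old friends_* permissions, a friend's likes and
            # profile fields were readable even though that friend never
            # installed, or even saw, the app.
            detail = requests.get(f"{GRAPH}/{friend['id']}",
                                  params={"access_token": user_token,
                                          "fields": "id,name,likes"}).json()
            records[detail["id"]] = detail
        return records

At roughly 185 friends per user, 270,000 quiz-takers could plausibly expose data on about 50 million distinct people, which matches the figures reported above.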

Cambridge Analytica then used the data to build a model predicting the psychology of those people, on metrics such as “neuroticism,” political views and extroversion. It then offered that information to political consultants, including those working for the Trump campaign.

The company claimed that it had enough information about people’s psychological vulnerabilities that it could effectively target ads to them that would sway their political opinions. It is not clear whether the company actually achieved its desired effect.

But there is no question that people can be swayed by online content. In a controversial 2014 study, Facebook tested whether it could manipulate the emotions of its users by filling some users’ news feeds with only positive news and other users’ feeds with only negative news. The study found that Facebook could indeed manipulate feelings — and sparked outrage from Facebook users and others who claimed it was unethical to experiment on them without their consent.

Such studies, if conducted by a professor on a college campus, would require approval from an institutional review board, or IRB, overseeing experiments on human subjects. But there is no such standard online. The usual practice is for a company's terms of service to contain a blanket statement of consent, which users accept without ever reading.

James Grimmelmann, a law professor and computer scientist, argued in a 2015 paper that the technology companies should stop burying consent forms in their fine print. Instead, he wrote, “they should seek enthusiastic consent from users, making them into valued partners who feel they have a stake in the research.”

Such a consent process could be overseen by an independent ethics review board, based on the university model, which would also review research proposals and ensure that people’s private information isn’t shared with brokers like Cambridge Analytica.

“I think if we are in the business of requiring IRBs for academics,” Grimmelmann said in an interview, “we should ask for appropriate supervisions for companies doing research.”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



Facebook Update: 87 Million Affected By Its Data Breach With Cambridge Analytica. Considerations For All Consumers

Facebook.com has dominated the news during the past three weeks. The news media have reported on many issues, but there are more to consider -- whether or not you use Facebook. Things began about mid-March, when Bloomberg reported:

"Yes, Cambridge Analytica... violated rules when it obtained information from some 50 million Facebook profiles... the data came from someone who didn’t hack the system: a professor who originally told Facebook he wanted it for academic purposes. He set up a personality quiz using tools that let people log in with their Facebook accounts, then asked them to sign over access to their friend lists and likes before using the app. The 270,000 users of that app and their friend networks opened up private data on 50 million people... All of that was allowed under Facebook’s rules, until the professor handed the information off to a third party... "

So, an authorized user shared members' sensitive information with unauthorized users. Facebook confirmed these details on March 16:

"We are suspending Strategic Communication Laboratories (SCL), including their political data analytics firm, Cambridge Analytica (CA), from Facebook... In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/CA, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked... When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. CA, Kogan and Wylie all certified to us that they destroyed the data... Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted..."

So, data that should have been deleted wasn't, and Facebook relied upon certifications from entities that had already lied to it. Not good. Facebook then posted this addendum on March 17:

"The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked."

Why the rush to deny a breach? It seems wise to complete a thorough investigation before making such a claim. In the 11+ years I've written this blog, whenever unauthorized persons access data they shouldn't have, it's a breach. You can read about plenty of similar incidents where credit reporting agencies sold sensitive consumer data to ID-theft services and/or data brokers, who then re-sold that information to criminals and fraudsters. Seems like a breach to me.

Facebook announced on March 19th that it had hired a digital forensics firm:

"... Stroz Friedberg, to conduct a comprehensive audit of Cambridge Analytica (CA). CA has agreed to comply and afford the firm complete access to their servers and systems. We have approached the other parties involved — Christopher Wylie and Aleksandr Kogan — and asked them to submit to an audit as well. Mr. Kogan has given his verbal agreement to do so. Mr. Wylie thus far has declined. This is part of a comprehensive internal and external review that we are conducting to determine the accuracy of the claims that the Facebook data in question still exists... Independent forensic auditors from Stroz Friedberg were on site at CA’s London office this evening. At the request of the UK Information Commissioner’s Office, which has announced it is pursuing a warrant to conduct its own on-site investigation, the Stroz Friedberg auditors stood down."

That's a good start. An audit would determine whether data the perpetrators claimed was destroyed actually had been destroyed. However, Facebook seems to have built a leaky system which allows data harvesting:

"Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to CA, according to a new whistleblower. Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach..."

Reportedly, Parakilas added that Facebook "did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused." Not good. The incident makes one wonder how many other developers -- corporate and academic -- have violated Facebook's rules and shared members' sensitive data they shouldn't have.

Facebook announced on March 21st that it will: 1) investigate all apps that had access to large amounts of information, and conduct full audits of any apps with suspicious activity; 2) inform users affected by apps that have misused their data; 3) disable an app's access to a member's information if that member hasn't used the app within the last three months; 4) change Login to "reduce the data that an app can request without app review to include only name, profile photo and email address;" 5) encourage members to manage the apps they use; and 6) reward users who find vulnerabilities.

Those actions seem good, but too little, too late. Facebook needs to do more... perhaps revise its Terms of Use to impose large fines on violators of its data security rules. Meanwhile, there has been plenty of news about CA. The Guardian UK reported on March 19:

"The company at the centre of the Facebook data breach boasted of using honey traps, fake news campaigns and operations with ex-spies to swing election campaigns around the world, a new investigation reveals. Executives from Cambridge Analytica spoke to undercover reporters from Channel 4 News about the dark arts used by the company to help clients, which included entrapping rival candidates in fake bribery stings and hiring prostitutes to seduce them."

Geez. After these news reports surfaced, CA's board suspended Alexander Nix, its CEO, pending an internal investigation. So, besides Facebook's failure to secure its members' sensitive information, another key issue seems to be the misuse of social media data by a company that openly brags about unethical, and perhaps illegal, behavior.

What else might be happening? The Intercept explained on March 30th that CA:

"... has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research. The question of whether OCEAN made a difference in the presidential election remains unanswered. Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy. Less clear is their power as a tool for targeted persuasion; CA has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages..."

So, while experts disagree about the effectiveness of data analytics in political campaigns, it seems wise to assume that the practice will continue, and improve. Data analytics fueled by social media input means political campaigns can bypass traditional news media outlets to distribute information and disinformation. That highlights the need for Facebook (and other social media) to improve their data security and compliance audits.
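The underlying technique in the 2013 likes study is ordinary supervised learning: represent each user as a row of binary page-likes and fit a classifier against a known trait. A minimal sketch with scikit-learn follows, using synthetic data; the real study used millions of users and applied dimensionality reduction first, which this sketch skips.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy data: rows are users, columns are pages; 1 means the user liked
    # the page. Labels stand in for a trait such as political party.
    n_users, n_pages = 2000, 300
    X = rng.integers(0, 2, size=(n_users, n_pages))
    weights = rng.normal(size=n_pages)
    y = (X @ weights > weights.sum() / 2).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Plain logistic regression over the like matrix.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

With real likes and real labels, accuracies like the 95 percent for race and 85 percent for political party quoted above become plausible; whether such predictions can also persuade is the separate, disputed question.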

While the UK Information Commissioner's Office aggressively investigates CA, things seem to move at a much slower pace in the USA. TechCrunch reported on April 4th:

"... Facebook’s founder Mark Zuckerberg believes North America users of his platform deserve a lower data protection standard than people everywhere else in the world. In a phone interview with Reuters yesterday Mark Zuckerberg declined to commit to universally implementing changes to the platform that are necessary to comply with the European Union’s incoming General Data Protection Regulation (GDPR). Rather, he said the company was working on a version of the law that would bring some European privacy guarantees worldwide — declining to specify to the reporter which parts of the law would not extend worldwide... Facebook’s leadership has previously implied the product changes it’s making to comply with GDPR’s incoming data protection standard would be extended globally..."

Do users in the USA want weaker data protections than users in other countries? I think not. Read for yourself the April 4th announcement by Facebook about changes to its terms of service and data policy. It didn't mention specific countries or regions, nor which protections would apply where. Not good.

Mark Zuckerberg apologized and defended his company in a March 21st post:

"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue. We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it... This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that... at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward."

Nice-sounding words, but actions speak louder. Wired magazine said:

"Zuckerberg didn't mention in his Facebook post why it took him five days to respond to the scandal... The groundswell of outrage and attention following these revelations has been greater than anything Facebook predicted—or has experienced in its long history of data privacy scandals. By Monday, its stock price nosedived. On Tuesday, Facebook shareholders filed a lawsuit against the company in San Francisco, alleging that Facebook made "materially false and misleading statements" that led to significant losses this week. Meanwhile, in Washington, a bipartisan group of senators called on Zuckerberg to testify before the Senate Judiciary Committee. And the Federal Trade Commission also opened an investigation into whether Facebook had violated a 2011 consent decree, which required the company to notify users when their data was obtained by unauthorized sources."

Frankly, Zuckerberg has lost credibility with me. Why? Facebook's history suggests it can't (or won't) protect the users' data it collects. Consider some of its privacy snafus: the settlement of a lawsuit over alleged privacy abuses by its Beacon advertising program; changing members' ad settings without notice or consent; an advertising platform that allegedly facilitates discrimination against older workers; health and privacy concerns about a new service for children ages 6 to 13; transparency concerns about political ads; and new lawsuits about the company's advertising platform. Plus, Zuckerberg made promises in January to clean up the service's advertising. Now, we have yet another apology.

In a press release this afternoon, Facebook revised the number of persons affected by the Facebook/CA breach upward, from 50 million to 87 million. Most, about 70.6 million, are in the United States. The breakdown by country:

Number of affected persons by country in the Facebook - Cambridge Analytica breach.

So, what should consumers do?

You have options. If you use Facebook, see these instructions by Consumer Reports to deactivate or delete your account. Some people I know simply stopped using Facebook, but left their accounts active. That doesn't seem wise. A better approach is to adjust the privacy settings on your Facebook account to get as much privacy and protection as possible.

Facebook has a new tool for members to review and disable, in bulk, all of the apps with access to their data. Follow these handy step-by-step instructions by Mashable. Users should also disable the Facebook API platform for their account. If you use the Firefox web browser, then install the new Facebook Container add-on, specifically designed to prevent Facebook from tracking you. Don't use Firefox? You might try the Privacy Badger add-on instead. I've used it happily for years.

Of course, you should submit feedback directly to Facebook demanding that it extend GDPR privacy protections to your country, too. And, wise online users always read the terms and conditions of all Facebook quizzes before taking them.

Don't use Facebook? There are considerations for you, too, especially if you use a different social networking site (or app). Reportedly, Mark Zuckerberg, the CEO of Facebook, will testify before the U.S. Congress on April 11th. His testimony will be worth monitoring for everyone. Why? The outcome may prod Congress to act by passing new laws giving consumers in the USA data security and privacy protections equal to what's available in the United Kingdom. And there may be demands for Cambridge Analytica executives to testify before Congress, too.

Or, consumers may demand stronger, faster action by the U.S. Federal Trade Commission (FTC), which announced on March 26th:

"The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act. Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices."

An "open non-public investigation?" Either the investigation is public, or it isn't. Hopefully, an attorney will explain. And, that announcement read like weak tea. I expect more. Much more.

USA citizens may want stronger data security laws, especially if Facebook's solutions are less than satisfactory, if it refuses to provide protections equal to those in the United Kingdom, or if it later backtracks on its promises. Thoughts? Comments?


The 'CLOUD Act' - What It Is And What You Need To Know

Chances are, you have not heard of the "CLOUD Act." I hadn't heard about it until recently. A draft of the legislation is available on the website for U.S. Senator Orrin Hatch (Republican - Utah).

Many people who already use cloud services to store and back up data might assume: if it has to do with the cloud, then it must be good. Such an assumption would be foolish. The full name of the bill: "Clarifying Lawful Overseas Use Of Data." What problem does this bill solve? Senator Hatch stated last month why he thinks this bill is needed:

"... the Supreme Court will hear arguments in a case... United States v. Microsoft Corp., colloquially known as the Microsoft Ireland case... The case began back in 2013, when the US Department of Justice asked Microsoft to turn over emails stored in a data center in Ireland. Microsoft refused on the ground that US warrants traditionally have stopped at the water’s edge. Over the last few years, the legal battle has worked its way through the court system up to the Supreme Court... The issues the Microsoft Ireland case raises are complex and have created significant difficulties for both law enforcement and technology companies... law enforcement officials increasingly need access to data stored in other countries for investigations, yet no clear enforcement framework exists for them to obtain overseas data. Meanwhile, technology companies, who have an obligation to keep their customers’ information private, are increasingly caught between conflicting laws that prohibit disclosure to foreign law enforcement. Equally important, the ability of one nation to access data stored in another country implicates national sovereignty... The CLOUD Act bridges the divide that sometimes exists between law enforcement and the tech sector by giving law enforcement the tools it needs to access data throughout the world while at the same time creating a commonsense framework to encourage international cooperation to resolve conflicts of law. To help law enforcement, the bill creates incentives for bilateral agreements—like the pending agreement between the US and the UK—to enable investigators to seek data stored in other countries..."

Senators Coons, Graham, and Whitehouse support the CLOUD Act, along with House Representatives Collins, Jeffries, and others. The American Civil Liberties Union (ACLU) opposes the bill and warned:

"Despite its fluffy sounding name, the recently introduced CLOUD Act is far from harmless. It threatens activists abroad, individuals here in the U.S., and would empower Attorney General Sessions in new disturbing ways... the CLOUD Act represents a dramatic change in our law, and its effects will be felt across the globe... The bill starts by giving the executive branch dramatically more power than it has today. It would allow Attorney General Sessions to enter into agreements with foreign governments that bypass current law, without any approval from Congress. Under these agreements, foreign governments would be able to get emails and other electronic information without any additional scrutiny by a U.S. judge or official. And, while the attorney general would need to consider a country’s human rights record, he is not prohibited from entering into an agreement with a country that has committed human rights abuses... the bill would for the first time allow these foreign governments to wiretap in the U.S. — even in cases where they do not meet Wiretap Act standards. Paradoxically, that would give foreign governments the power to engage in surveillance — which could sweep in the information of Americans communicating with foreigners — that the U.S. itself would not be able to engage in. The bill also provides broad discretion to funnel this information back to the U.S., circumventing the Fourth Amendment. This information could potentially be used by the U.S. to engage in a variety of law enforcement actions."

Given that warning, I read the draft legislation. One portion immediately struck me:

"A provider of electronic communication service or remote computing service shall comply with the obligations of this chapter to preserve, backup, or disclose the contents of a wire or electronic communication and any record or other information pertaining to a customer or subscriber within such provider’s possession, custody, or control, regardless of whether such communication, record, or other information is located within or outside of the United States."

While I am not an attorney, this bill definitely sounds like an end-run around the Fourth Amendment. The review process is largely governed by the House of Representatives, a body not known for internet knowledge or savvy. The bill also smells like an attack on internet services consumers regularly use for privacy, such as search engines that don't collect or archive search data, and Virtual Private Networks (VPNs).

Today, for online privacy many consumers in the United States use VPN software and services provided by vendors located offshore. Why? Despite a national poll in 2017 which found the Republican rollback of FCC broadband privacy rules very unpopular among consumers, the Republican-led Congress proceeded with that rollback, and President Trump signed the privacy-rollback legislation on April 3, 2017. Hopefully, skilled and experienced privacy attorneys will continue to review and monitor the draft legislation.

The ACLU emphasized in its warning:

"Today, the information of global activists — such as those that fight for LGBTQ rights, defend religious freedom, or advocate for gender equality are protected from being disclosed by U.S. companies to governments who may seek to do them harm. The CLOUD Act eliminates many of these protections and replaces them with vague assurances, weak standards, and largely unenforceable restrictions... The CLOUD Act represents a major change in the law — and a major threat to our freedoms. Congress should not try to sneak it by the American people by hiding it inside of a giant spending bill. There has not been even one minute devoted to considering amendments to this proposal. Congress should robustly debate this bill and take steps to fix its many flaws, instead of trying to pull a fast one on the American people."

I agree. Seems like this bill creates far more problems than it solves. Plus, something this important should be openly and thoroughly discussed, not buried in a spending bill. What do you think?


Report: Little Progress Since 2016 To Replace Old, Vulnerable Voting Machines In United States

We've known for some time that a sizeable portion of voting machines in the United States are vulnerable to hacking and errors. Too many states, cities, and towns use antiquated equipment, or equipment without paper backups. The latter makes meaningful recounts impossible.

Has any progress been made to fix the vulnerabilities? The Brennan Center For Justice (BCJ) reported:

"... despite manifold warnings about election hacking for the past two years, the country has made remarkably little progress since the 2016 election in replacing antiquated, vulnerable voting machines — and has done even less to ensure that our country can recover from a successful cyberattack against those machines."

It is important to remember this warning in January 2017 from the Director of National Intelligence (DNI):

"Russian effortsto influence the 2016 US presidential election represent the most recent expression of Moscow’s longstanding desire to undermine the US-led liberal democratic order, but these activities demonstrated a significant escalation in directness, level of activity, and scope of effort compared to previous operations. We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process... Russian intelligence accessed elements of multiple state or local electoral boards. Since early 2014, Russian intelligence has researched US electoral processes and related technology and equipment. DHS assesses that the types of systems we observed Russian actors targeting or compromising are not involved in vote tallying... We assess Moscow will apply lessons learned from its Putin-ordered campaign aimed at the US presidential election to future influence efforts worldwide, including against US allies and their election processes... "

Detailed findings in the BCJ report about the lack of progress:

  1. "This year, most states will use computerized voting machines that are at least 10 years old, and which election officials say must be replaced before 2020.
    While the lifespan of any electronic voting machine varies, systems over a decade old are far more likely to need to be replaced, for both security and reliability reasons... older machines are more likely to use outdated software like Windows 2000. Using obsolete software poses serious security risks: vendors may no longer write security patches for it; jurisdictions cannot replace critical hardware that is failing because it is incompatible with their new, more secure hardware... In 2016, jurisdictions in 44 states used voting machines that were at least a decade old. Election officials in 31 of those states said they needed to replace that equipment by 2020... This year, 41 states will be using systems that are at least a decade old, and officials in 33 say they must replace their machines by 2020. In most cases, elections officials do not yet have adequate funds to do so..."
  2. "Since 2016, only one state has replaced its paperless electronic voting machines statewide.
    Security experts have long warned about the dangers of continuing to use paperless electronic voting machines. These machines do not produce a paper record that can be reviewed by the voter, and they do not allow election officials and the public to confirm electronic vote totals. Therefore, votes cast on them could be lost or changed without notice... In 2016, 14 states (Arkansas, Delaware, Georgia, Indiana, Kansas, Kentucky, Louisiana, Mississippi, New Jersey, Pennsylvania, South Carolina, Tennessee, Texas, and Virginia) used paperless electronic machines as the primary polling place equipment in at least some counties and towns. Five of these states used paperless machines statewide. By 2018 these numbers have barely changed: 13 states will still use paperless voting machines, and 5 will continue to use such systems statewide. Only Virginia decertified and replaced all of its paperless systems..."
  3. "Only three states mandate post-election audits to provide a high-level of confidence in the accuracy of the final vote tally.
    Paper records of votes have limited value against a cyberattack if they are not used to check the accuracy of the software-generated total to confirm the veracity of election results. In the last few years, statisticians, cybersecurity professionals, and election experts have made substantial advances in developing techniques to use post-election audits of voter verified paper records to identify a computer error or fraud that could change the outcome of a contest... Specifically, “risk limiting audits” — a process that employs statistical models to consistently provide a high level of confidence in the accuracy of the final vote tally — are now considered the “gold standard” of post-election audits by experts... Despite this fact, risk limiting audits are required in only three states: Colorado, New Mexico, and Rhode Island. While 13 state legislatures are currently considering new post-election audit bills, since the 2016 election, only one — Rhode Island — has enacted a new risk limiting audit requirement."
  4. "43 states are using machines that are no longer manufactured.
    The problem of maintaining secure and reliable voting machines is particularly challenging in the many jurisdictions that use machines models that are no longer produced. In 2015... the Brennan Center estimated that 43 states and the District of Columbia were using machines that are no longer manufactured. In 2018, that number has not changed. A primary challenge of using machines no longer manufactured is finding replacement parts and the technicians who can repair them. These difficulties make systems less reliable and secure... In a recent interview with the Brennan Center, Neal Kelley, registrar of voters for Orange County, California, explained that after years of cannibalizing old machines and hoarding spare parts, he is now forced to take systems out of service when they fail..."

That is embarrassing for a country that prides itself on having an effective democracy. According to BCJ, the solution would be for Congress to fund, via grants, the replacement of paperless and antiquated equipment, plus post-election audits.
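For readers curious what a "risk limiting audit" actually computes: in the simplest ballot-polling form, auditors draw paper ballots at random and run a sequential statistical test against the hypothesis that the reported winner actually tied or lost. The sketch below follows the general shape of the BRAVO method for a two-candidate race with a 5 percent risk limit; it is a simplified illustration, and real audit procedures add details omitted here.

    import random

    def ballot_polling_audit(reported_winner_share, ballots, risk_limit=0.05, seed=1):
        """Sequential probability ratio test against a tie. Returns the
        number of ballots sampled before the audit confirms the reported
        outcome, or -1 if the sample is exhausted first."""
        random.seed(seed)
        t, threshold = 1.0, 1.0 / risk_limit
        for n, ballot in enumerate(random.sample(ballots, len(ballots)), start=1):
            if ballot == "winner":
                t *= reported_winner_share / 0.5        # evidence for the outcome
            else:
                t *= (1 - reported_winner_share) / 0.5  # evidence against it
            if t >= threshold:
                return n   # risk limit satisfied; outcome confirmed
        return -1          # could not confirm; escalate toward a full hand count

    # A 10,000-ballot race reported at 55/45 is typically confirmed after
    # hand-examining only several hundred paper ballots.
    ballots = ["winner"] * 5500 + ["loser"] * 4500
    print(ballot_polling_audit(0.55, ballots))

The appeal is efficiency: a close race forces a larger sample (ultimately a full hand count), while a comfortable margin is confirmed quickly, and the risk limit caps the chance of certifying a wrong outcome. None of this works, of course, without the voter-verified paper records discussed above.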

Rather than protect the integrity of our democracy, the government passed a massive tax cut which will increase federal deficits during the coming years while pursuing both a costly military parade and an unfunded border wall. Seems like questionable priorities to me. What do you think?


Legislation Moving Through Congress To Loosen Regulations On Banks

Legislation is moving through Congress that would loosen regulations on banks. Is this an improvement? Is it risky? Is it a good deal for consumers? Before answering those questions, a summary of the Economic Growth, Regulatory Relief, and Consumer Protection Act (Senate Bill 2155):

"This bill amends the Truth in Lending Act to allow institutions with less than $10 billion in assets to waive ability-to-repay requirements for certain residential-mortgage loans... The bill amends the Bank Holding Company Act of 1956 to exempt banks with assets valued at less than $10 billion from the "Volcker Rule," which prohibits banking agencies from engaging in proprietary trading or entering into certain relationships with hedge funds and private-equity funds... The bill amends the United States Housing Act of 1937 to reduce inspection requirements and environmental-review requirements for certain smaller, rural public-housing agencies.

Provisions relating to enhanced prudential regulation for financial institutions are modified, including those related to stress testing, leverage requirements, and the use of municipal bonds for purposes of meeting liquidity requirements. The bill requires credit reporting agencies to provide credit-freeze alerts and includes consumer-credit provisions related to senior citizens, minors, and veterans."

Well, that definitely sounds like relief for banks. Fewer regulations mean it's easier to do business... and make more money. Next questions: is it good for consumers? Is it risky? Keep reading.

The non-partisan Congressional Budget Office (CBO) analyzed the proposed legislation in the Senate, and concluded (bold emphasis added):

"S. 2155 would modify provisions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd Frank Act) and other laws governing regulation of the financial industry. The bill would change the regulatory framework for small depository institutions with assets under $10 billion (community banks) and for large banks with assets over $50 billion. The bill also would make changes to consumer mortgage and credit-reporting regulations and to the authorities of the agencies that regulate the financial industry. CBO estimates that enacting the bill would increase federal deficits by $671 million over the 2018-2027 period... CBO’s estimate of the bill’s budgetary effect is subject to considerable uncertainty, in part because it depends on the probability in any year that a systemically important financial institution (SIFI) will fail or that there will be a financial crisis. CBO estimates that the probability is small under current law and would be slightly greater under the legislation..."

So, the proposed legislation means there is a greater risk of banks either failing or needing government assistance (e.g., bailout funds). Are there risks to consumers? To taxpayers? CNN interviewed U.S. Senator Elizabeth Warren (D-Mass.), who said:

"Frankly, I just don't see how any senator can vote to weaken the regulations on Wall Street banks.. [weakened regulations] puts us at greater risk that there will be another taxpayer bailout, that there will be another crash and another taxpayer bailout..."

So, there are risks for consumers/taxpayers. How? Why? Let's count the ways.

First, the proposed legislation increases federal deficits. Somebody has to pay for that: with higher taxes, fewer services, more debt, or a combination of all three. That doesn't sound good. Does it sound good to you?

Second, looser regulations mean some banks may lend money to people they shouldn't: borrowers who default on their loans. To compensate, those banks would raise prices (e.g., more fees, higher fees, higher interest rates) on borrowers to cover their losses. If those banks can't cover their losses, then they will fail. If enough banks fail at about the same time, then bingo... another financial crisis.

If key banks fail, then the government will (again) bail out banks to keep the financial system running. (Remember "too big to fail" banks?) Somebody has to pay for bailouts: with higher taxes, fewer services, more debt, or a combination of all three. Does that sound good to you? It doesn't sound good to me. If it doesn't sound good, I encourage you to contact your elected officials.

It's critical to remember banking history in the United States. Nobody wants a repeat of the 2008 melt-down. There are always consequences when government... Congress decides to help bankers by loosening regulations. What do you think?


Facebook’s Experiment in Ad Transparency Is Like Playing Hide And Seek

[Editor's note: today's guest post, by the reporters at ProPublica, explores a new global program Facebook introduced in Canada. It is reprinted with permission.]

By Jennifer Valentino-DeVries, ProPublica

Shortly before a Toronto City Council vote in December on whether to tighten regulation of short-term rental companies, an entity called Airbnb Citizen ran an ad on the Facebook news feeds of a selected audience, including Toronto residents over the age of 26 who listen to Canadian public radio. The ad featured a photo of a laughing couple from downtown Toronto, with the caption, “Airbnb hosts from the many wards of Toronto raise their voices in support of home sharing. Will you?”

Placed by an interested party to influence a political debate, this is exactly the sort of ad on Facebook that has attracted intense scrutiny. Facebook has acknowledged that a group with ties to the Russian government placed more than 3,000 such ads to influence voters during the 2016 U.S. presidential campaign.

Facebook has also said it plans to avoid a repeat of the Russia fiasco by improving transparency. An approach it’s rolling out in Canada now, and plans to expand to other countries this summer, enables Facebook users outside an advertiser’s targeted audience to see ads. The hope is that enhanced scrutiny will keep advertisers honest and make it easier to detect foreign interference in politics. So we used a remote connection, called a virtual private network, to log into Facebook from Canada and see how this experiment is working.

The answer: It’s an improvement, but nowhere near the openness sought by critics who say online political advertising is a Wild West compared with the tightly regulated worlds of print and broadcast.

The new strategy — which Facebook announced in October, just days before a U.S. Senate hearing on the Russian online manipulation efforts — requires every advertiser to have a Facebook page. Whenever the advertiser is running an ad, the post is automatically placed in a new “Ads” section of the Facebook page, where any users in Canada can view it even if they aren’t part of the intended audience.

Facebook has said that the Canada experiment, which has been running since late October, is the first step toward a more robust setup that will let users know which group or company placed an ad and what other ads it’s running. “Transparency helps everyone, especially political watchdog groups and reporters, keep advertisers accountable for who they say they are and what they say to different groups,” Rob Goldman, Facebook’s vice president of ads, wrote before the launch.

While the new approach makes ads more accessible, they’re only available temporarily, can be hard to find, and can still mislead users about the advertiser’s identity, according to ProPublica’s review. The Airbnb Citizen ad — which we discovered via a ProPublica tool called the Political Ad Collector — is a case in point. Airbnb Citizen professed on its Facebook page to be a “community of hosts, guests and other believers in the power of home sharing to help tackle economic, environmental and social challenges around the world.” Its Facebook page didn’t mention that it is actually a marketing and public policy arm of Airbnb, a for-profit company.

The ad was part of an effort by the company to drum up support as it fought rental restrictions in Toronto. “These ads were one of the many ways that we engaged in the process before the vote,” Airbnb said. However, anyone who looked on Airbnb’s own Facebook page wouldn’t have found it.

Airbnb told ProPublica that it is clear about its connection to Airbnb Citizen. Airbnb’s webpage links to Airbnb Citizen’s webpage, and Airbnb Citizen’s webpage is copyrighted by Airbnb and uses part of the Airbnb logo. Airbnb said Airbnb Citizen provides information on local home-sharing rules to people who rent out their homes through Airbnb. “Airbnb has always been transparent about our advertising and public engagement efforts,” the statement said.

Political parties in Canada are already benefiting from the test to investigate ads from rival groups, said Nader Mohamed, digital director of Canada’s New Democratic Party, which has the third largest representation in Canada’s Parliament. “You’re going to be more careful with what you put out now, because you could get called on it at any time,” he said. Mohamed said he still expects heavy spending on digital advertising in upcoming campaigns.

After launching the test, Facebook demonstrated its new process to Elections Canada, the independent agency responsible for conducting federal elections there. Elections Canada recommended adding an archive function, so that ads no longer running could still be viewed, said Melanie Wise, the agency’s assistant director for media relations and issues management. The initiative is “helpful” but should go further, Wise said.

Some experts were more critical. Facebook’s new test is “useless,” said Ben Scott, a senior advisor at the think tank New America and a fellow at the Brookfield Institute for Innovation + Entrepreneurship in Toronto who specializes in technology policy. “If an advertiser is inclined to do something unethical, this level of disclosure is not going to stop them. You would have to have an army of people checking pages constantly.”

More effective ways of policing ads, several experts said, might involve making more information about advertisers and their targeting strategies readily available to users from links on ads and in permanent archives. But such tactics could alienate advertisers reluctant to share information with competitors, cutting into Facebook’s revenue. Instead, in Canada, Facebook automatically puts ads up on the advertiser’s Facebook page, and doesn’t indicate the target audience there.

Facebook’s test represents the least the company can do and still avoid stricter regulation on political ads, particularly in the U.S., said Mark Surman, a Toronto resident and executive director of Mozilla, a nonprofit Internet advocacy group that makes the Firefox web browser. “There are lots of people in the company who are trying to do good work. But it’s obvious if you’re Facebook that you’re trying not to get into a long conversation with Congress,” Surman said.

Facebook said it’s listening to its critics. “We’re talking to advertisers, industry folks and watchdog groups and are taking this kind of feedback seriously,” Rob Leathern, Facebook director of product management for ads, said in an email. “We look forward to continue working with lawmakers on the right solution, but we also aren’t waiting for legislation to start getting solutions in place,” he added. The company declined to provide data on how many people in Canada were using the test tools.

Facebook is not the only internet company facing questions about transparency in advertising. Twitter also pledged in October before the Senate hearing that “in the coming weeks” it would build a platform that would “offer everyone visibility into who is advertising on Twitter, details behind those ads, and tools to share your feedback.” So far, nothing has been launched.

Facebook has more than 23 million monthly users in Canada, according to the company. That’s more than 60 percent of Canada’s population but only about 1 percent of Facebook’s user base. The company has said it is launching its new ad-transparency plan in Canada because it already has a program there called the Canadian Election Integrity Initiative. That initiative was in response to a Canadian federal government report, “Cyber Threats to Canada’s Democratic Process,” which warned that “multiple hacktivist groups will very likely deploy cyber capabilities in an attempt to influence the democratic process during the 2019 federal election.” The election integrity plan promotes news literacy and offers a guide for politicians and political parties to avoid getting hacked.

Compared to the U.S., Canada’s laws allow for much stricter government regulation of political advertising, said Michael Pal, a law professor at the University of Ottawa. He said Facebook’s transparency initiative was a good first step but that he saw the extension of strong campaign rules into internet advertising as inevitable in Canada. “This is the sort of question that, in Canada, is going to be handled by regulation,” Pal said.

Several Canadian technology policy experts who spoke with ProPublica said Facebook’s new system was too inconvenient for the average user. There’s no central place where people can search the millions of ads on Facebook to see what ads are running about a certain subject, so unless users are part of the target audience, they wouldn’t necessarily know that a group is even running an ad. If users somehow hear about an ad or simply want to check whether a company or group is running one, they must first navigate to the group’s Facebook page and then click a small tab on the side labeled “Ads” that runs alongside other tabs such as “Videos” and “Community.” Once the user clicks the “Ads” tab, a page opens showing every ad that the page owner is running at that time, one after another.

The group’s Facebook page isn’t always linked from the text of the ad. If it isn’t, users can still find the Facebook page by navigating to the “Why am I seeing this?” link in a drop-down menu at the top right of each ad in their news feed.

As soon as a marketing campaign is over, an ad can no longer be found on the “Ads” page at all. When ProPublica checked the Airbnb Citizen Facebook page a week after collecting the ad, it was no longer there.

Because the “Ads” page also doesn’t disclose the demographics of the advertiser’s target audience, people can only see that data on ads that were aimed at them and were on their own Facebook news feed. Without this information, people outside an ad’s selected audience can’t see to whom companies or politicians are tailoring their messages. ProPublica reported last year that dozens of major companies directed recruitment ads on Facebook only to younger people — information that would likely interest older workers, but would still be concealed from them under the new policy. One recent ad by Prime Minister Justin Trudeau was directed at “people who may be similar to” his supporters, according to the Political Ad Collector data. Under the new system, people who don’t support Trudeau could see the ad on his Facebook page, but wouldn’t know why it was excluded from their news feeds.

Facebook has promised new measures to make political ads more accessible. When it expands the initiative to the U.S., it will start building a searchable electronic archive of ads related to U.S. federal elections. This archive will include details on the amount of money spent and demographic information about the people the ads reached. Facebook will initially limit its definition of political ads to those that “refer to or discuss a political figure” in a federal election, the company said.

The company hasn’t said what, if any, archive will be created for ads for state and local contests, or for political ads in other countries. It has said it will eventually require political advertisers in other countries, and in state elections in the U.S., to provide more documentation, but it’s not clear when that will happen.

Ads that aren’t political will be available under the same system being tested in Canada now.

Even an archive of the sort Facebook envisions wouldn’t solve the problems of misleading advertising on Facebook, Surman said. “It would be interesting to journalists and researchers trying to track this issue. But it won’t help users make informed choices about what ads they see,” he said. That’s because users need more information alongside the ads they are seeing on their news feeds, not in a separate location, he said.

The Airbnb Citizen ad wasn’t the only tactic that Airbnb adopted in an apparent attempt to sway the Toronto City Council. It also packed the council galleries with supporters on the morning of the vote, according to The Globe and Mail. Still, its efforts appear to have been unsuccessful.

On Dec. 6, two days after a reader sent us the ad, the City Council voted to keep people from renting a space that wasn’t their primary residence and stop homeowners from listing units such as basement apartments.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Advertising Agency Paid $2 Million To Settle Deceptive Advertising Charges

The U.S. Federal Trade Commission (FTC) announced that Minneapolis-based Marketing Architects, Inc. (MAI):

"... an advertising agency that created and disseminated allegedly deceptive radio ads for weight-loss products marketed by its client, Direct Alternatives, has agreed to pay $2 million to the Federal Trade Commission and State of Maine Attorney General’s Office to settle their complaint..."

First, some background. According to the FTC, MAI created advertising for several products (e.g., Puranol, Pur-Hoodia Plus, Acai Fresh, AF Plus, and Final Trim) sold by Direct Alternatives from 2006 through February 2015. Then, in 2016, the FTC and the State of Maine settled allegations against Direct Alternatives; the settlement required the company to halt deceptive advertising and illegal billing practices.

Additional background according to the FTC: MAI previously created weight-loss ads for Sensa Products, LLC between March 2009 and May 2011. The FTC filed a complaint against Sensa in 2014, and subsequently Sensa agreed to refund $26.5 million to defrauded consumers. So, there's important, relevant history.

In the latest action, the joint complaint alleged that MAI created and disseminated radio ads with false or unsubstantiated weight-loss claims for AF Plus and Final Trim. Besides:

"... receiving FTC’s Sensa order, MAI was previously made aware of the need to have competent and reliable scientific evidence to back up health claims. Among other things, the complaint alleges that Direct Alternatives provided MAI with documents indicating that some of the weight-loss claims later challenged by the FTC needed to be supported by scientific evidence.

The complaint further charges that MAI developed and disseminated fictitious weight-loss testimonials and created radio ads for weight-loss products falsely disguised as news stories. Finally, the complaint charges MAI with creating inbound call scripts that failed to adequately disclose that consumers would be automatically enrolled in negative-option (auto-ship) continuity plans."

The latest action includes a proposed court order to ban MAI from making the weight-loss claims that the FTC has already identified as false, and:

"... requires MAI to have competent and reliable scientific evidence to support any other claims about the health benefits or efficacy of weight-loss products, and prohibits it from misrepresenting the existence or outcome of tests or studies. In addition, the order prohibits MAI from misrepresenting the experience of consumer testimonialists or that paid commercial advertising is independent programming."

This action is a reminder to advertising and digital agency executives everywhere: ensure that claims are supported by competent, reliable scientific evidence.

Good. Kudos to the FTC for these enforcement actions and for protecting consumers.


New Data Breach Legislation Proposed In North Carolina

After a surge in data breaches in North Carolina during 2017, state legislators have proposed stronger data breach laws. The National Law Review explained what prompted the legislative action:

"On January 8, 2018, the State of North Carolina released its Security Breach Report 2017, which highlights a 15 percent increase in breaches since 2016... Health care, financial services and insurance businesses accounted for 38 percent, with general businesses making up for just more than half of these data breaches. Almost 75 percent of all breaches resulted from phishing, hacking and unauthorized access, reflecting an overall increase of more than 3,500 percent in reported hacking incidents alone since 2006. Since 2015, phishing incidents increased over 2,300 percent. These numbers emphasize the warning to beware of emails or texts requesting personal information..."

So, fraudsters have tricked many North Carolina residents and employees into opening fraudulent e-mail and text messages, and then disclosing sensitive personal information. Not good.

Details about the proposed legislation:

"... named the Act to Strengthen Identity Theft Practices (ASITP), announced by Representative Jason Saine and Attorney General Josh Stein, attempts to combat the data breach epidemic by expanding North Carolina’s breach notification obligations, while reducing the time businesses have to comply with notification to the affected population and to the North Carolina Attorney General’s Office. If enacted, this new legislation will be one of the most aggressive U.S. breach notification statutes... The Fact Sheet concerning the ASITP as published by the North Carolina Attorney General proposes that the AG take a more direct role in the investigation of data breaches closer to their time of discovery...  To accomplish this goal, the ASITP proposes a significantly shorter period of time for an entity to provide notification to the affected population and to the North Carolina Attorney General. Currently, North Carolina’s statute mandates that notification be made to affected individuals and the Attorney General without “unreasonable delay.” Under the ASITP, the new deadline for all notifications would be 15 days following discovery of the data security incident. In addition to being the shortest deadline in the nation, it is important to note that notification vendors typically require 5 business days to process, print and mail notification letters... The proposed legislation also seeks to (1) expand the definition of “protected information” to include medical information and insurance account numbers, and (2) penalize those who fail to maintain reasonable security procedures by charging them with a violation under the Unfair and Deceptive Trade Practices Act for each person whose information is breached..."

Good. The National Law Review article also compared the breach notification deadlines across all 50 states and territories. It is worth a look to see how your state compares. A comparison of selected states:

Time after discovery of breach, for selected states and territories:

  10 calendar days: Puerto Rico (Dept. of Consumer Affairs)
  15 calendar days: North Carolina (proposed)
  15 business days: California (protected health information)
  30 calendar days: Florida
  45 calendar days: Ohio, Maryland
  90 calendar days: Connecticut
  Most expedient time and without unreasonable delay: California (other), Massachusetts, New York, North Carolina, Pennsylvania, Puerto Rico (other)
  As soon as possible: Texas
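One practical wrinkle in the list above: the deadlines mix calendar days and business days, and the article notes that notification vendors typically need about 5 business days to process, print, and mail letters. Here is a minimal sketch of the arithmetic under the proposed North Carolina rule, with the 5-day vendor lead time taken from the article as an assumption:

    from datetime import date, timedelta

    def add_business_days(start: date, days: int) -> date:
        """Advance a date by the given number of weekdays (holidays ignored)."""
        current = start
        while days > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Monday=0 ... Friday=4
                days -= 1
        return current

    discovery = date(2018, 4, 1)
    nc_deadline = discovery + timedelta(days=15)  # proposed NC: 15 calendar days
    vendor_lead = 5                               # ~5 business days to print and mail

    # Walk backward to find the last day letters can go to the vendor.
    handoff = nc_deadline
    while add_business_days(handoff, vendor_lead) > nc_deadline:
        handoff -= timedelta(days=1)
    print(f"notify by {nc_deadline}; hand off to the vendor by {handoff}")

Under those assumptions, a breach discovered on April 1 leaves barely a week to investigate, draft, and approve notices before the letters must be at the printer.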

To learn more, download the North Carolina Security Breach Report 2017 (Adobe PDF), and the ASITP Fact Sheet (Adobe PDF).


Uber's Ripley Program To Thwart Law Enforcement

Uber is in the news again, and not in a good way. TechCrunch reported:

"Between spring 2015 until late 2016 the ride-hailing giant routinely used a system designed to thwart police raids in foreign countries, according to Bloomberg, citing three people with knowledge of the system. It reports that Uber’s San Francisco office used the protocol — which apparently came to be referred to internally as ‘Ripley’ — at least two dozen times. The system enabled staff to remotely change passwords and “otherwise lock up data on company-owned smartphones, laptops, and desktops as well as shut down the devices”, it reports. We’ve also been told — via our own sources — about multiple programs at Uber intended to prevent company data from being accessed by oversight authorities... according to Bloomberg Uber created the system in response to raids on its offices in Europe: Specifically following a March 2015 raid on its Brussel’s office in which police gained access to its payments system and financial documents as well as driver and employee information; and after a raid on its Paris office in the same week."

In November of last year, reports emerged that the popular ride-sharing service experienced a data breach affecting 57 million users. Regulators said then that Uber tried to cover it up.

In March of last year, reports surfaced about Greyball, a worldwide program within Uber to thwart code enforcement inspections by governments. TechCrunch also described uLocker:

"We’ve also heard of the existence of a program at Uber called uLocker, although one source with knowledge of the program told us that the intention was to utilize a ransomware cryptolocker exploit and randomize the tokens — with the idea being that if Uber got raided it would cryptolocker its own devices in order to render data inaccessible to oversight authorities. The source said uLocker was being written in-house by Uber’s eng-sec and Marketplace Analytics divisions..."

Geez. First Greyball. Then Ripley and uLocker. And these are just the known programs. This raises the question: how many more are there?

Earlier today, Wired reported:

"The engineer at the heart of the upcoming Waymo vs Uber trial is facing dramatic new allegations of commercial wrongdoing, this time from a former nanny. Erika Wong, who says she cared for Anthony Levandowski’s two children from December 2016 to June 2017, filed a lawsuit in California this month accusing him of breaking a long list of employment laws. The complaint alleges the failure to pay wages, labor and health code violations... In her complaint, Wong alleges that Levandowski was paying a Tesla engineer for updates on its electric truck program, selling microchips abroad, and creating new startups using stolen trade secrets. Her complaint also describes Levandowski reacting to the arrival of the Waymo lawsuit against Uber, strategizing with then-Uber CEO Travis Kalanick, and discussing fleeing to Canada to escape prosecution... Levandowski’s outside dealings while employed at Google and Uber have been central themes in Waymo’s trade secrets case. Waymo says that Levandowski took 14,000 technical files related to laser-ranging lidar and other self-driving technologies with him when he left Google to work at Uber..."

Is this a corporation or organized crime? It seems difficult to tell the difference. What do you think?