
You Snooze, You Lose: Insurers Make The Old Adage Literally True

[Editor's note: today's guest post, by reporters at ProPublica, is part of a series which explores data collection, data sharing, and privacy issues within the healthcare industry. It is reprinted with permission.]

By Marshall Allen, ProPublica

Last March, Tony Schmidt discovered something unsettling about the machine that helps him breathe at night. Without his knowledge, it was spying on him.

From his bedside, the device was tracking when he was using it and sending the information not just to his doctor, but to the maker of the machine, to the medical supply company that provided it and to his health insurer.

Schmidt, an information technology specialist from Carrollton, Texas, was shocked. “I had no idea they were sending my information across the wire.”

Schmidt, 59, has sleep apnea, a disorder that causes worrisome breaks in his breathing at night. Like millions of people, he relies on a continuous positive airway pressure, or CPAP, machine that streams warm air into his nose while he sleeps, keeping his airway open. Without it, Schmidt would wake up hundreds of times a night; then, during the day, he’d nod off at work, sometimes while driving and even as he sat on the toilet.

“I couldn’t keep a job,” he said. “I couldn’t stay awake.” The CPAP, he said, saved his career, maybe even his life.

As many CPAP users discover, the life-altering device comes with caveats: Health insurance companies are often tracking whether patients use them. If they aren’t, the insurers might not cover the machines or the supplies that go with them.

In fact, faced with the popularity of CPAPs, which can cost $400 to $800, and their need for replacement filters, face masks and hoses, health insurers have deployed a host of tactics that can make the therapy more expensive or even price it out of reach.

Patients have been required to rent CPAPs at rates that total much more than the retail price of the devices, or they’ve discovered that the supplies would be substantially cheaper if they didn’t have insurance at all.

Experts who study health care costs say insurers’ CPAP strategies are part of the industry’s playbook of shifting the costs of widely used therapies, devices and tests to unsuspecting patients.

“The doctors and providers are not in control of medicine anymore,” said Harry Lawrence, owner of Advanced Oxy-Med Services, a New York company that provides CPAP supplies. “It’s strictly the insurance companies. They call the shots.”

Insurers say their concerns are legitimate. The masks and hoses can be cumbersome and noisy, and studies show that about a third of patients don’t use their CPAPs as directed.

But the companies’ practices have spawned lawsuits and concerns by some doctors who say that policies that restrict access to the machines could have serious, or even deadly, consequences for patients with severe conditions. And privacy experts worry that data collected by insurers could be used to discriminate against patients or raise their costs.

Schmidt’s privacy concerns began the day after he registered his new CPAP unit with ResMed, its manufacturer. He opted out of receiving any further information. But he had barely wiped the sleep out of his eyes the next morning when a peppy email arrived in his inbox. It was ResMed, praising him for completing his first night of therapy. “Congratulations! You’ve earned yourself a badge!” the email said.

Then came this exchange with his supply company, Medigy: Schmidt had emailed the company to praise the “professional, kind, efficient and competent” technician who set up the device. A Medigy representative wrote back, thanking him, then adding that Schmidt’s machine “is doing a great job keeping your airway open.” A report detailing Schmidt’s usage was attached.

Alarmed, Schmidt complained to Medigy and learned his data was also being shared with his insurer, Blue Cross Blue Shield. He’d known his old machine had tracked his sleep because he’d taken its removable data card to his doctor. But this new invasion of privacy felt different. Was the data encrypted to protect his privacy as it was transmitted? What else were they doing with his personal information?

He filed complaints with the Better Business Bureau and the federal government to no avail. “My doctor is the ONLY one that has permission to have my data,” he wrote in one complaint.

In an email, a Blue Cross Blue Shield spokesperson said that it’s standard practice for insurers to monitor sleep apnea patients and deny payment if they aren’t using the machine. And privacy experts said that sharing the data with insurance companies is allowed under federal privacy laws. A ResMed representative said once patients have given consent, it may share the data it gathers, which is encrypted, with the patients’ doctors, insurers and supply companies.

Schmidt returned the new CPAP machine and went back to a model that allowed him to use a removable data card. His doctor can verify his compliance, he said.

Luke Petty, the operations manager for Medigy, said a lot of CPAP users direct their ire at companies like his. The complaints online number in the thousands. But insurance companies set the prices and make the rules, he said, and suppliers follow them, so they can get paid.

“Every year it’s a new hurdle, a new trick, a new game for the patients,” Petty said.

A Sleep Saving Machine Gets Popular

The American Sleep Apnea Association estimates about 22 million Americans have sleep apnea, although it’s often not diagnosed. The number of people seeking treatment has grown along with awareness of the disorder. Left untreated, sleep apnea can raise the risk of heart disease, diabetes, cancer and cognitive disorders. CPAP is one of the only treatments that works for many patients.

Exact numbers are hard to come by, but ResMed, the leading device maker, said it’s monitoring the CPAP use of millions of patients.

Sleep apnea specialists and health care cost experts say insurers have countered the deluge by forcing patients to prove they’re using the treatment.

Medicare, the government insurance program for seniors and the disabled, began requiring CPAP “compliance” after a boom in demand. Because of the discomfort of wearing a mask, hooked up to a noisy machine, many patients struggle to adapt to nightly use. Between 2001 and 2009, Medicare payments for individual sleep studies almost quadrupled to $235 million. Many of those studies led to a CPAP prescription. Under Medicare rules, patients must use the CPAP for four hours a night for at least 70 percent of the nights in any 30-day period within three months of getting the device. Medicare requires doctors to document the adherence and effectiveness of the therapy.
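The Medicare usage rule can be sketched in code. This is only an illustration of the rule as described above, not Medicare's actual verification software; the function name and inputs are hypothetical:

```python
from typing import Sequence

def meets_medicare_compliance(nightly_hours: Sequence[float],
                              min_hours: float = 4.0,
                              min_fraction: float = 0.7) -> bool:
    """Check the rule described above: at least `min_hours` of CPAP use
    per night on at least `min_fraction` of nights in a 30-day window."""
    if len(nightly_hours) != 30:
        raise ValueError("expected a 30-day window of nightly usage")
    compliant_nights = sum(1 for h in nightly_hours if h >= min_hours)
    return compliant_nights / len(nightly_hours) >= min_fraction

# Example: 22 compliant nights out of 30 (about 73%) clears the 70% bar.
usage = [4.5] * 22 + [2.0] * 8
print(meets_medicare_compliance(usage))  # True
```

Note how unforgiving the threshold is: a patient who used the machine faithfully but fell short on just 10 of 30 nights would fail.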

Sleep apnea experts deemed Medicare’s requirements arbitrary. But private insurers soon adopted similar rules, verifying usage with data from patients’ machines — with or without their knowledge.

Kristine Grow, spokeswoman for the trade association America’s Health Insurance Plans, said monitoring CPAP use is important because if patients aren’t using the machines, a less expensive therapy might be a smarter option. Monitoring patients also helps insurance companies advise doctors about the best treatment for patients, she said. When asked why insurers don’t just rely on doctors to verify compliance, Grow said she didn’t know.

Many insurers also require patients to rack up monthly rental fees rather than simply pay for a CPAP.

Dr. Ofer Jacobowitz, a sleep apnea expert at ENT and Allergy Associates and assistant professor at The Mount Sinai Hospital in New York, said his patients often pay rental fees for a year or longer before meeting the prices insurers set for their CPAPs. But since patients’ deductibles — the amount they must pay before insurance kicks in — reset at the beginning of each year, they may end up covering the entire cost of the rental for much of that time, he said.

The rental fees can surpass the retail cost of the machine, patients and doctors say. Alan Levy, an attorney who lives in Rahway, New Jersey, bought an individual insurance plan through the now-defunct Health Republic Insurance of New Jersey in 2015. When his doctor prescribed a CPAP, the company that supplied his device, At Home Medical, told him he needed to rent the device for $104 a month for 15 months. The company told him the cost of the CPAP was $2,400.

Levy said he wouldn’t have worried about the cost if his insurance had paid it. But Levy’s plan required him to reach a $5,000 deductible before his insurance plan paid a dime. So Levy looked online and discovered the machine actually cost about $500.

Levy said he called At Home Medical to ask if he could avoid the rental fee and pay $500 up front for the machine, and a company representative said no. “I’m being overcharged simply because I have insurance,” Levy recalled protesting.

Levy refused to pay the rental fees. “At no point did I ever agree to enter into a monthly rental subscription,” he wrote in a letter disputing the charges. He asked for documentation supporting the cost. The company responded that he was being billed under the provisions of his insurance carrier.

Levy’s law practice focuses, ironically, on defending insurance companies in personal injury cases. So he sued At Home Medical, accusing the company of violating the New Jersey Consumer Fraud Act. Levy didn’t expect the case to go to trial. “I knew they were going to have to spend thousands of dollars on attorney’s fees to defend a claim worth hundreds of dollars,” he said.

Sure enough, At Home Medical agreed to allow Levy to pay $600 — still more than the retail cost — for the machine.

The company declined to comment on the case. Suppliers said that Levy’s case is extreme, but acknowledged that patients’ rental fees often add up to more than the device is worth.

Levy said that he was happy to abide by the terms of his plan, but that didn’t mean the insurance company could charge him an unfair price. “If the machine’s worth $500, no matter what the plan says, or the medical device company says, they shouldn’t be charging many times that price,” he said.

Dr. Douglas Kirsch, president of the American Academy of Sleep Medicine, said high rental fees aren’t the only problem. Patients can also get better deals on CPAP filters, hoses, masks and other supplies when they don’t use insurance, he said.

Cigna, one of the largest health insurers in the country, currently faces a class-action suit in U.S. District Court in Connecticut over its billing practices, including for CPAP supplies. One of the plaintiffs, Jeffrey Neufeld, who lives in Connecticut, contends that Cigna directed him to order his supplies through a middleman who jacked up the prices.

Neufeld declined to comment for this story. But his attorney, Robert Izard, said Cigna contracted with a company called CareCentrix, which coordinates a network of suppliers for the insurer. Neufeld decided to contact his supplier directly to find out what it had been paid for his supplies and compare that to what he was being charged. He discovered that he was paying substantially more than the supplier said the products were worth. For instance, Neufeld owed $25.68 for a disposable filter under his Cigna plan, while the supplier was paid $7.50. He owed $147.78 for a face mask through his Cigna plan while the supplier was paid $95.
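For illustration, the markups implied by those figures work out as follows. This is a back-of-the-envelope calculation; the `markup_multiple` helper is our own, not anything from the suit:

```python
# Ratio of what the patient was billed to what the supplier was paid,
# using the figures cited in the Cigna suit.
def markup_multiple(patient_charge: float, supplier_paid: float) -> float:
    return patient_charge / supplier_paid

print(round(markup_multiple(25.68, 7.50), 2))    # disposable filter: 3.42x
print(round(markup_multiple(147.78, 95.00), 2))  # face mask: 1.56x
```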

ProPublica found all the CPAP supplies billed to Neufeld online at even lower prices than those the supplier had been paid. Longtime CPAP users say it’s well known that supplies are cheaper when they are purchased without insurance.

Neufeld’s cost “should have been based on the lower amount charged by the actual provider, not the marked-up bill from the middleman,” Izard said. Patients covered by other insurance companies may have fallen victim to similar markups, he said.

Cigna would not comment on the case. But in documents filed in the suit, it denied misrepresenting costs or overcharging Neufeld. The supply company did not return calls for comment.

In a statement, Stephen Wogen, CareCentrix’s chief growth officer, said insurers may agree to pay higher prices for some services, while negotiating lower prices for others, to achieve better overall value. For this reason, he said, isolating select prices doesn’t reflect the overall value of the company’s services. CareCentrix declined to comment on Neufeld’s allegations.

Izard said Cigna and CareCentrix benefit from such behind-the-scenes deals by shifting the extra costs to patients, who often end up covering the marked-up prices out of their deductibles. And even once their insurance kicks in, the amount the patients must pay will be much higher.

The ubiquity of CPAP insurance concerns struck home during the reporting of this story, when a ProPublica colleague discovered how his insurer was using his data against him.

Sleep Aid or Surveillance Device?

Without his CPAP, Eric Umansky, a deputy managing editor at ProPublica, wakes up repeatedly through the night and snores so insufferably that he is banished to the living room couch. “My marriage depends on it.”

In September, his doctor prescribed a new mask and airflow setting for his machine. Advanced Oxy-Med Services, the medical supply company approved by his insurer, sent him a modem that he plugged into his machine, giving the company the ability to change the settings remotely if needed.

But when the mask hadn’t arrived a few days later, Umansky called Advanced Oxy-Med. That’s when he got a surprise: His insurance company might not pay for the mask, a customer service representative told him, because he hadn’t been using his machine enough. “On Tuesday night, you only used the mask for three-and-a-half hours,” the representative said. “And on Monday night, you only used it for three hours.”

“Wait — you guys are using this thing to track my sleep?” Umansky recalled saying. “And you are using it to deny me something my doctor says I need?”

Umansky’s new modem had been beaming his personal data from his Brooklyn bedroom to the Newburgh, New York-based supply company, which, in turn, forwarded the information to his insurance company, UnitedHealthcare.

Umansky was bewildered. He hadn’t been using the machine all night because he needed a new mask. But his insurance company wouldn’t pay for the new mask until he proved he was using the machine all night — even though, in his case, he, not the insurance company, is the owner of the device.

“You view it as a device that is yours and is serving you,” Umansky said. “And suddenly you realize it is a surveillance device being used by your health insurance company to limit your access to health care.”

Privacy experts said such concerns are likely to grow as a host of devices now gather data about patients, including insertable heart monitors and blood glucose meters, as well as Fitbits, Apple Watches and other lifestyle applications. Privacy laws have lagged behind this new technology, and patients may be surprised to learn how little control they have over how the data is used or with whom it is shared, said Pam Dixon, executive director of the World Privacy Forum.

“What if they find you only sleep a fitful five hours a night?” Dixon said. “That’s a big deal over time. Does that affect your health care prices?”

UnitedHealthcare said in a statement that it only uses the data from CPAPs to verify patients are using the machines.

Lawrence, the owner of Advanced Oxy-Med Services, conceded that his company should have told Umansky his CPAP use would be monitored for compliance, but it had to follow the insurers’ rules to get paid.

As for Umansky, it’s now been two months since his doctor prescribed him a new airflow setting for his CPAP machine. The supply company has been paying close attention to his usage, Umansky said, but it still hasn’t updated the setting.

The irony is not lost on Umansky: “I wish they would spend as much time providing me actual care as they do monitoring whether I’m ‘compliant.’”

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.

 


Google Admitted Tracking Users' Location Even When Phone Setting Disabled

If you are considering buying, or already own, a smartphone running Google's Android operating system (OS), take note. ZDNet reported (emphasis added):

"Phones running Android have been gathering data about a user's location and sending it back to Google when connected to the internet, with Quartz first revealing the practice has been occurring since January 2017. According to the report, Android phones and tablets have been collecting the addresses of nearby cellular towers and sending the encrypted data back, even when the location tracking function is disabled by the user... Google does not make this explicitly clear in its Privacy Policy, which means Android users that have disabled location tracking were still being tracked by the search engine giant..."

This is another reminder of the cost of free services and/or cheaper smartphones. You're gonna be tracked... extensively... whether you want it or not. The term "surveillance capitalism" is often used.

A reader shared a blunt assessment, "There is no way to avoid being Google’s property (a/k/a its bitch) if you use an Android phone." Harsh, but accurate. What is your opinion?


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators identify and prioritize terrorism threats -- removing posts automatically only when the tool's confidence is very high -- and the news item also mentioned its appeal process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leans heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts by a variety of bad actors. Hmmmm... sounds good, but... it makes one wonder about the digital arms race happening. If the bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, then not good.
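For reference, the arithmetic behind that figure -- our own quick check of Facebook's reported Q2 and Q3 numbers:

```python
# Fake accounts Facebook reported removing, per its November 15th post.
q2_removed = 800_000_000
q3_removed = 754_000_000
total = q2_removed + q3_removed
print(f"{total / 1e9:.2f} billion")  # 1.55 billion
```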

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey due to:

"... a $10,000 [donation] to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries, which value citizens' privacy, will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published a detailed, comprehensive "Delay, Deny, and Deflect" investigative report based on interviews with at least 50 people:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth and lack of focus to identify and respond to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and Democratic politicians. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described the history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that investing more equals doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported on 13 previously undisclosed national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company that wants its social networking service to be a source of reliable, unbiased news. This news is not good for a company wanting its users to accept it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.


Facebook To Remove Onavo VPN App From Apple App Store

Not all Virtual Private Network (VPN) software is created equal. Some do a better job at protecting your privacy than others. Mashable reported that Facebook:

"... plans to remove its Onavo VPN app from the App Store after Apple warned the company that the app was in violation of its policies governing data gathering... For those blissfully unaware, Onavo sold itself as a virtual private network that people could run "to take the worry out of using smartphones and tablets." In reality, Facebook used data about users' internet activity collected by the app to inform acquisitions and product decisions. Essentially, Onavo allowed Facebook to run market research on you and your phone, 24/7. It was spyware, dressed up and neatly packaged with a Facebook-blue bow. Data gleaned from the app, notes the Wall Street Journal, reportedly played into the social media giant's decision to start building a rival to the Houseparty app. Oh, and its decision to buy WhatsApp."

Thanks, Apple! We've all heard of the #FakeNews hashtag on social media. Yes, there is a #FakeVPN hashtag, too. So, buyer beware... online user beware.


Test Finds Amazon's Facial Recognition Software Wrongly Identified Members Of Congress As Persons Arrested. A Few Legislators Demand Answers

In a test of Rekognition, Amazon's facial recognition software, the American Civil Liberties Union (ACLU) found that the software falsely matched 28 members of the United States Congress to mugshot photographs of persons arrested for crimes. Jokes about politicians aside, this is serious stuff. According to the ACLU:

"The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country... To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza... The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."

List of the 28 Congressional legislators misidentified by Amazon Rekognition in the ACLU study. With 535 members of Congress, the implied error rate was 5.23 percent. On Thursday, three of the misidentified legislators sent a joint letter to Jeff Bezos, the Chief Executive Officer of Amazon. The letter read in part:
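The implied error rate is simple arithmetic, but worth checking. A quick sketch using the figures from the ACLU test:

```python
# Back-of-the-envelope check of the implied error rate from the ACLU test:
# 28 false matches out of 535 members of Congress.
members_of_congress = 535
false_matches = 28

error_rate = false_matches / members_of_congress
print(f"{error_rate:.2%}")  # prints 5.23%
```

That rate is for a small, well-photographed population of public figures; error rates against large mugshot databases in the field could differ substantially.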

"We write to express our concerns and seek more information about Amazon's facial recognition technology, Rekognition... While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood. In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.1 These concerns, including recent reports that Rekognition could lead to mis-identifications, raise serious questions regarding whether Amazon should be selling its technology to law enforcement... One study estimates that more than 117 million American adults are in facial recognition databases that can be searched in criminal investigations..."

The letter was sent by Senator Edward J. Markey (Massachusetts), Representative Luis V. Gutiérrez (Illinois), and Representative Mark DeSaulnier (California). Why only three legislators? Where are the other 25? Does nobody else care about software accuracy?

The three legislators asked Amazon to provide answers by August 20, 2018 to several key requests:

  • The results of any internal accuracy or bias assessments Amazon has performed on Rekognition, with details by race, gender, and age;
  • The list of all law enforcement or intelligence agencies Amazon has communicated with regarding Rekognition;
  • The list of all law enforcement agencies which have used or currently use Rekognition;
  • Whether any law enforcement agencies which used Rekognition have been investigated, sued, or reprimanded for unlawful or discriminatory policing practices;
  • The protections, if any, Amazon has built into Rekognition to protect the privacy rights of innocent citizens caught in the biometric databases used by law enforcement for comparisons;
  • Whether Rekognition can identify persons younger than age 13, and what protections Amazon uses to comply with the Children's Online Privacy Protection Act (COPPA);
  • Whether Amazon conducts any audits of Rekognition to ensure its appropriate and legal use, and what actions Amazon has taken to correct any abuses;
  • Whether Rekognition is integrated with police body cameras and/or "public-facing camera networks."

The letter cited a 2016 report by the Center on Privacy and Technology (CPT) at Georgetown Law School, which found:

"... 16 states let the Federal Bureau of Investigation (FBI) use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, creating a virtual line-up of their state residents. In this line-up, it’s not a human that points to the suspect—it’s an algorithm... Across the country, state and local police departments are building their own face recognition systems, many of them more advanced than the FBI’s. We know very little about these systems. We don’t know how they impact privacy and civil liberties. We don’t know how they address accuracy problems..."

Everyone wants law enforcement to quickly catch criminals, prosecute criminals, and protect the safety and rights of law-abiding citizens. However, accuracy matters. Experts warn that the facial recognition technologies used are unregulated, and the systems' impacts upon innocent citizens are not understood. Key findings in the CPT report:

  1. "Law enforcement face recognition networks include over 117 million American adults. Face recognition is neither new nor rare. FBI face recognition searches are more common than federal court-ordered wiretaps. At least one out of four state or local police departments has the option to run face recognition searches through their or another agency’s system. At least 26 states (and potentially as many as 30) allow law enforcement to run or request searches against their databases of driver’s license and ID photos..."
  2. "Different uses of face recognition create different risks. This report offers a framework to tell them apart. A face recognition search conducted in the field to verify the identity of someone who has been legally stopped or arrested is different, in principle and effect, than an investigatory search of an ATM photo against a driver’s license database, or continuous, real-time scans of people walking by a surveillance camera. The former is targeted and public. The latter are generalized and invisible..."
  3. "By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic."
  4. " Major police departments are exploring face recognition on live surveillance video. Major police departments are exploring real-time face recognition on live surveillance camera video. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. It may seem like science fiction. It is real. Contract documents and agency statements show that at least five major police departments—including agencies in Chicago, Dallas, and Los Angeles—either claimed to run real-time face recognition off of street cameras..."
  5. "Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition. We are not aware of any agency that requires warrants for searches or limits them to serious crimes. This has consequences..."
  6. "Law enforcement agencies are not taking adequate steps to protect free speech. There is a real risk that police face recognition will be used to stifle free speech. There is also a history of FBI and police surveillance of civil rights protests. Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech."
  7. "Most law enforcement agencies do little to ensure their systems are accurate. Face recognition is less accurate than fingerprinting, particularly when used in real-time or on large databases. Yet we found only two agencies, the San Francisco Police Department and the Seattle region’s South Sound 911, that conditioned purchase of the technology on accuracy tests or thresholds. There is a need for testing..."
  8. "The human backstop to accuracy is non-standardized and overstated. Companies and police departments largely rely on police officers to decide whether a candidate photo is in fact a match. Yet a recent study showed that, without specialized training, human users make the wrong decision about a match half the time...The training regime for examiners remains a work in progress."
  9. "Police face recognition will disproportionately affect African Americans. Police face recognition will disproportionately affect African Americans. Many police departments do not realize that... the Seattle Police Department says that its face recognition system “does not see race.” Yet an FBI co-authored study suggests that face recognition may be less accurate on black people. Also, due to disproportionately high arrest rates, systems that rely on mug shot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests internally, either."
  10. "Agencies are keeping critical information from the public. Ohio’s face recognition system remained almost entirely unknown to the public for five years. The New York Police Department acknowledges using face recognition; press reports suggest it has an advanced system. Yet NYPD denied our records request entirely. The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request. Of 52 agencies, only four (less than 10%) have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy."

The New York Times reported:

"Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company’s customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company’s face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

For one thing, she said, police departments do not typically use the software to make fully autonomous decisions about people’s identities... She also noted that the A.C.L.U. had used the system’s default setting for matches, called a “confidence threshold,” of 80 percent. That means the group counted any face matches the system proposed that had a similarity score of 80 percent or more. Amazon itself uses the same percentage in one facial recognition example on its site describing matching an employee’s face with a work ID badge. But Ms. Lindsey said Amazon recommended that police departments use a much higher similarity score — 95 percent — to reduce the likelihood of erroneous matches."

Good of Amazon to respond quickly, but its reply is still insufficient and troublesome. Amazon may recommend a 95 percent similarity score, but the public does not know whether police departments actually use the higher setting, or use it consistently across all types of criminal investigations. Plus, the CPT report cast doubt on the human "backstop" intervention on which Amazon's reply seems to rely heavily.
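To see why the threshold matters so much, consider a minimal sketch of threshold-based match filtering. The similarity scores below are invented for illustration; real systems return scores per candidate face, and the operator's chosen cutoff decides which candidates are reported as "matches":

```python
# Hypothetical similarity scores (percent) returned by a face-matching system
# for one probe photo against a mugshot database. Illustrative numbers only.
candidate_scores = [81.5, 84.0, 88.2, 90.7, 96.1]

def matches_above(scores, threshold):
    """Keep only candidates at or above the similarity threshold."""
    return [s for s in scores if s >= threshold]

# At the 80 percent default, all five candidates count as "matches";
# at the recommended 95 percent, only one does.
print(len(matches_above(candidate_scores, 80)))  # prints 5
print(len(matches_above(candidate_scores, 95)))  # prints 1
```

The point is that the same underlying scores can yield very different numbers of reported matches, which is why the default setting a vendor ships, and the setting a police department actually uses, both matter.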

Where is the rest of Congress on this? On Friday, three Senators sent a similar letter seeking answers from 39 federal law-enforcement agencies about their use of facial recognition technology, and what policies, if any, they have put in place to prevent abuse and misuse.

All of the findings in the CPT report are disturbing. Finding #3 is particularly troublesome. So, voters need to know what, if anything, has changed since these findings were published in 2016. Voters need to know what their elected officials are doing to address these findings. Some elected officials seem engaged on the topic, but not enough. What are your opinions?


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off the service (i.e., whether or not they are signed in). Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows the users it identifies to accept or decline name labels for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching and not display name labels for users who declined and/or disabled the display of name labels in photos (videos). To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember that in 2017 Apple launched its iPhone X with the Face ID feature, which lets users unlock their phones with their faces. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent for shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


The Wireless Carrier With At Least 8 'Hidden Spy Hubs' Helping The NSA

During the late 1970s and 1980s, AT&T conducted an iconic “reach out and touch someone” advertising campaign to encourage consumers to call their friends, family, and classmates. Back then, it was old school -- landlines. The campaign ranked #80 on Ad Age's list of the 100 top ad campaigns from the last century.

Now, we learn a little more about how extensive the surveillance activities at AT&T facilities are in helping law enforcement reach out and touch persons. Yesterday, The Intercept reported:

"The NSA considers AT&T to be one of its most trusted partners and has lauded the company’s “extreme willingness to help.” It is a collaboration that dates back decades. Little known, however, is that its scope is not restricted to AT&T’s customers. According to the NSA’s documents, it values AT&T not only because it "has access to information that transits the nation," but also because it maintains unique relationships with other phone and internet providers. The NSA exploits these relationships for surveillance purposes, commandeering AT&T’s massive infrastructure and using it as a platform to covertly tap into communications processed by other companies.”

The new report describes in detail the activities at eight AT&T facilities in major cities across the United States. Consumers who use other branded wireless service providers are also affected:

"Because of AT&T’s position as one of the U.S.’s leading telecommunications companies, it has a large network that is frequently used by other providers to transport their customers’ data. Companies that “peer” with AT&T include the American telecommunications giants Sprint, Cogent Communications, and Level 3, as well as foreign companies such as Sweden’s Telia, India’s Tata Communications, Italy’s Telecom Italia, and Germany’s Deutsche Telekom."

It was five years ago this month that the public learned about extensive surveillance by the U.S. National Security Agency (NSA). Back then, the Guardian UK newspaper reported about a court order allowing the NSA to spy on U.S. citizens. The revelations continued, and by 2016 we'd learned about NSA code inserted in Android operating system software, the FISA Court and how it undermines the public's trust, the importance of metadata and how much it reveals about you (despite some politicians' claims otherwise), the unintended consequences from broad NSA surveillance, U.S. government spy agencies' goal to break all encryption methods, warrantless searches of U.S. citizens' phone calls and e-mail messages, the NSA's facial image data collection program, the data collection programs that included ordinary (e.g., innocent) citizens besides legal targets, and how most hi-tech and telecommunications companies assisted the government with its spy programs. We knew before that AT&T was probably the best collaborator, and now we know more about why.

Content vacuumed up during the surveillance includes consumers' phone calls, text messages, e-mail messages, and internet activity. The latest report by the Intercept also described:

"The messages that the NSA had unlawfully collected were swept up using a method of surveillance known as “upstream,” which the agency still deploys for other surveillance programs authorized under both Section 702 of FISA and Executive Order 12333. The upstream method involves tapping into communications as they are passing across internet networks – precisely the kind of electronic eavesdropping that appears to have taken place at the eight locations identified by The Intercept."

Former NSA contractor Edward Snowden also commented on Twitter.


Apple To Close Security Hole Law Enforcement Frequently Used To Access iPhones

You may remember: in 2016, the U.S. Department of Justice attempted to force Apple to build a back door into its devices so law enforcement could access suspects' iPhones. After Apple refused, the government found a vendor to do the hacking for it. In 2017, multiple espionage campaigns targeted Apple devices with new malware.

Now, we learn that a future Apple operating system (iOS) software update will close a security hole frequently used by law enforcement. Reuters reported that the update will, by default, disable data transfer through the USB port when the device hasn't been unlocked within the past hour. Reportedly, that change may reduce law enforcement's ability to access locked iPhones by as much as 90 percent.

Kudos to the executives at Apple for keeping customers' privacy foremost.


Google To Exit Weaponized Drone Contract And Pursue Other Defense Projects

Last month, current and former Google employees, plus academic researchers, protested the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a drone surveillance program that uses AI to identify people, citing ethical and transparency concerns. Gizmodo reported that Google plans not to renew its contract for Project Maven:

"Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract... The company plans to unveil new ethical principles about its use of AI this week... Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft... Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion."


Privacy Badger Update Fights 'Link Tracking' And 'Link Shims'

Many internet users know that social media companies track both users and non-users. The Electronic Frontier Foundation (EFF) updated its Privacy Badger browser add-on to help consumers fight a specific type of surveillance technology called "link tracking," which Facebook and many other social networking sites use to track users both on and off their platforms. The EFF explained:

"Say your friend shares an article from EFF’s website on Facebook, and you’re interested. You click on the hyperlink, your browser opens a new tab, and Facebook is no longer a part of the equation. Right? Not exactly. Facebook—and many other companies, including Google and Twitter—use a variation of a technique called link shimming to track the links you click on their sites.

When your friend posts a link to eff.org on Facebook, the website will “wrap” it in a URL that actually points to Facebook.com: something like https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=ATPY93_4krP8Xwq6wg9XMEo_JHFVAh95wWm5awfXqrCAMQSH1TaWX6znA4wvKX8pNIHbWj3nW7M4F-ZGv3yyjHB_vRMRfq4_BgXDIcGEhwYvFgE7prU. This is a link shim.

When you click on that monstrosity, your browser first makes a request to Facebook with information about who you are, where you are coming from, and where you are navigating to. Then, Facebook quickly redirects you to the place you actually wanted to go... Facebook’s approach is a bit sneakier. When the site first loads in your browser, all normal URLs are replaced with their l.facebook.com shim equivalents. But as soon as you hover over a URL, a piece of code triggers that replaces the link shim with the actual link you wanted to see: that way, when you hover over a link, it looks innocuous. The link shim is stored in an invisible HTML attribute behind the scenes. The new link takes you to where you want to go, but when you click on it, another piece of code fires off a request to l.facebook.com in the background—tracking you just the same..."
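Mechanically, the wrapping is simple: the real destination sits, URL-encoded, in the shim's `u` query parameter. A minimal sketch of unwrapping such a link using only Python's standard library (the example URL below mimics the structure of the one quoted above, with a fake `h` token):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_link_shim(shim_url):
    """Extract the real destination from an l.facebook.com-style link shim.

    The wrapped target is stored, percent-encoded, in the `u` query
    parameter; parse_qs decodes it. Falls back to the input URL if no
    `u` parameter is present.
    """
    query = parse_qs(urlparse(shim_url).query)
    return query.get("u", [shim_url])[0]

# Illustrative shim; the `h` token here is fake.
shim = "https://l.facebook.com/l.php?u=https%3A%2F%2Feff.org%2Fpb&h=FAKE_TOKEN"
print(unwrap_link_shim(shim))  # prints https://eff.org/pb
```

Of course, unwrapping the URL after the fact does not stop the tracking request that fires when you click; that is what Privacy Badger's update addresses in the browser itself.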

Lovely. And, Facebook fails to deliver on privacy in more ways:

"According to Facebook's official post on the subject, in addition to helping Facebook track you, link shims are intended to protect users from links that are "spammy or malicious." The post states that Facebook can use click-time detection to save users from visiting malicious sites. However, since we found that link shims are replaced with their unwrapped equivalents before you have a chance to click on them, Facebook's system can't actually protect you in the way they describe.

Facebook also claims that link shims "protect privacy" by obfuscating the HTTP Referer header. With this update, Privacy Badger removes the Referer header from links on facebook.com altogether, protecting your privacy even more than Facebook's system claimed to."

Thanks to the EFF for focusing upon online privacy and delivering effective solutions.


Academic Professors, Researchers, And Google Employees Protest Warfare Programs By The Tech Giant

Many internet users know that Google's business model of free services comes with a steep price: the collection of massive amounts of information about the users of its services. There are implications you may not be aware of.

A Guardian UK article by three professors asked several questions:

"Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country’s military dominance? Should it use its state of the art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?"

These questions are relevant and necessary for several reasons. First, more than a dozen Google employees resigned, citing ethical and transparency concerns with the artificial intelligence (AI) assistance the company provides to the U.S. Department of Defense for Project Maven, a drone surveillance program that uses AI to identify people. Reportedly, these are the first known mass resignations at the company.

Second, more than 3,100 employees signed a public letter saying that Google should not be in the business of war. That letter (Adobe PDF) demanded that Google terminate its Maven program assistance, and draft a clear corporate policy that neither it, nor its contractors, will build warfare technology.

Third, more than 700 academic researchers, who study digital technologies, signed a letter in support of the protesting Google employees and former employees. The letter stated, in part:

"We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes... We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems... Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto "Don’t Be Evil" famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense. According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras... that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law. These operations also have raised significant questions of racial and gender bias..."

I'll bet that many people never imagined -- nor want -- that their personal e-mail, photos, calendars, videos, social media activity, map usage, and more would be used for automated military applications. What are your opinions?


San Diego Police Widely Share Data From License Plate Database

Image of an ALPR device mounted on a patrol car. Many police departments use automated license plate reader (ALPR, or LPR) technology to monitor the movements of drivers and their vehicles. The surveillance has several implications beyond the extensive data collection.

The Voice of San Diego reported that the San Diego Police Department (SDPD) shares its database of ALPR data with many other agencies:

"SDPD shares that database with the San Diego sector of Border Patrol – and with another 600 agencies across the country, including other agencies within the Department of Homeland Security. The nationwide database is enabled by Vigilant Solutions, a private company that provides data management and software services to agencies across the country for ALPR systems... A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data..."

San Diego's ALPR database is fed by a network of cameras which record images plus the date, time and GPS location of the cars that pass by them. So, the associated metadata for each database record probably includes the license plate number, license plate state, vehicle owner, GPS location, travel direction, date and time, road/street/highway name or number, and the LPR device ID number.
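As a sketch of what one such capture record might look like, here is a hypothetical data structure. The field names are assumptions inferred from the description above, not SDPD's or Vigilant's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ALPRRecord:
    """Hypothetical shape of a single ALPR capture (field names are guesses)."""
    plate_number: str      # license plate number as read by OCR
    plate_state: str       # issuing state
    latitude: float        # GPS location of the capture
    longitude: float
    heading: str           # travel direction, e.g. "NB" for northbound
    captured_at: datetime  # date and time of the capture
    road_name: str         # road/street/highway where the capture occurred
    device_id: str         # ID of the LPR camera that made the capture

# An illustrative record (all values invented):
record = ALPRRecord("7ABC123", "CA", 32.7157, -117.1611,
                    "NB", datetime(2015, 5, 1, 14, 30), "I-5", "LPR-042")
print(record.plate_state)  # prints CA
```

Even this minimal record shape makes clear why retention matters: a year of such rows for one plate is effectively a location history of the vehicle's owner.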

Information about San Diego's ALPR activities became public after a data request from the Electronic Frontier Foundation (EFF), a digital privacy organization. ALPRs are a popular tool, and were used in about 38 states in 2014. Typically, the surveillance collects data about both criminals and innocent drivers.

[Image: ALPR devices mounted on unmarked patrol cars]

There are several valid applications: find stolen vehicles, find stolen license plates, find wanted vehicles (e.g., abductions), execute search warrants, and find parolees and other wanted persons. Some ALPR devices are stationary (e.g., mounted on street lights), while others are mounted on (marked and unmarked) patrol cars. Both deployments scan moving vehicles, while the latter also facilitates the scanning of parked vehicles.

Earlier this year, the EFF issued hundreds of similar requests across the country to learn how law enforcement currently uses ALPR technology. The ALPR training manual for the Elk Grove, Illinois PD listed the data archival policies for several states: New Jersey - 5 years, Vermont - 18 months, Utah - 9 months,  Minnesota - 48 hours, Arkansas - 150 days, New Hampshire - not allowed, and California - no set time. The document also stated that more than "50 million captures" are added each month to the Vigilant database. And, the Elk Grove PD seems to broadly share its ALPR data with other police departments and agencies.

The SDPD website includes a "License Plate Recognition: Procedures" document (Adobe PDF), dated May 2015, which describes its ALPR usage and policies:

"The legitimate law enforcement purposes of LPR systems include the following: 1) Locating stolen, wanted, or subject of investigation vehicles; 2) Locating witnesses and victims of a violent crime; 3) Locating missing or abducted children and at risk individuals.

LPR Strategies: 1) LPR equipped vehicles should be deployed as frequently as possible to maximize the utilization of the system; 2) Regular operation of LPR should be considered as a force multiplying extension of an officer’s regular patrol efforts to observe and detect vehicles of interest and specific wanted vehicles; 3) LPR may be legitimately used to collect data that is within public view, but should not be used to gather intelligence of First Amendment activities; 4) Reasonable suspicion or probable cause is not required for the operation of LPR equipment; 5) Use of LPR equipped cars to conduct license plate canvasses and grid searches is encouraged, particularly for major crimes or incidents as well as areas that are experiencing any type of crime series... LPR data will be retained for a period of one year from the time the LPR record was captured by the LPR device..."

The document does not describe its data security methods to protect this sensitive information from breaches, hacks, and unauthorized access. Perhaps most importantly, the 2015 SDPD document describes the data sharing policy:

"Law enforcement officers shall not share LPR data with commercial or private entities or individuals. However, law enforcement officers may disseminate LPR data to government entities with an authorized law enforcement or public safety purpose for access to such data."

However, the Voice of San Diego reported:

"A memorandum of understanding between SDPD and Vigilant stipulates that each agency retains ownership of its data, and can take steps to determine who sees it. A Vigilant Solutions user manual spells out in detail how agencies can limit access to their data... SDPD’s sharing doesn’t stop at Border Patrol. The list of agencies with near immediate access to the travel habits of San Diegans includes law enforcement partners you might expect, like the Carlsbad Police Department – with which SDPD has for years shared license plate reader data, through a countywide arrangement overseen by SANDAG – but also obscure agencies like the police department in Meigs, Georgia, population 1,038, and a private group that is not itself a police department, the Missouri Police Chiefs Association..."

So, the accuracy of the 2015 document is questionable, if it isn't already obsolete. Moreover, what's really critical are the data retention and sharing policies of Vigilant and the other agencies.


Oakland Law Mandates 'Technology Impact Reports' By Local Government Agencies Before Purchasing Surveillance Equipment

Law enforcement uses several popular tools to track the movements of persons, including stingrays (devices which mimic cellular phone towers) and automated license plate readers (ALPRs). Historically, these technologies have often been deployed without notice to track both the bad guys (e.g., criminals and suspects) and innocent citizens.

To better balance the privacy needs of citizens versus the surveillance needs of law enforcement, some areas are implementing new laws. The East Bay Times reported about a new law in Oakland:

"... introduced at Tuesday’s city council meeting, creates a public approval process for surveillance technologies used by the city. The rules also lay a groundwork for the City Council to decide whether the benefits of using the technology outweigh the cost to people’s privacy. Berkeley and Davis have passed similar ordinances this year.

However, Oakland’s ordinance is unlike any other in the nation in that it requires any city department that wants to purchase or use the surveillance technology to submit a "technology impact report" to the city’s Privacy Advisory Commission, creating a “standardized public format” for technologies to be evaluated and approved... city departments must also submit a “surveillance use policy” to the Privacy Advisory Commission for consideration. The approved policy must be adopted by the City Council before the equipment is to be used..."

Reportedly, the city council will review the ordinance a second time before final passage.

The Northern California chapter of the American Civil Liberties Union (ACLU) discussed the problem, the need for transparency, and legislative actions:

"Public safety in the digital era must include transparency and accountability... the ACLU of California and a diverse coalition of civil rights and civil liberties groups support SB 1186, a bill that helps restores power at the local level and makes sure local voices are heard... the use of surveillance technology harms all Californians and disparately harms people of color, immigrants, and political activists... The Oakland Police Department concentrated their use of license plate readers in low income and minority neighborhoods... Across the state, residents are fighting to take back ownership of their neighborhoods... Earlier this year, Alameda, Culver City, and San Pablo rejected license plate reader proposals after hearing about the Immigration & Customs Enforcement (ICE) data [sharing] deal. Communities are enacting ordinances that require transparency, oversight, and accountability for all surveillance technologies. In 2016, Santa Clara County, California passed a groundbreaking ordinance that has been used to scrutinize multiple surveillance technologies in the past year... SB 1186 helps enhance public safety by safeguarding local power and ensuring transparency, accountability... SB 1186 covers the broad array of surveillance technologies used by police, including drones, social media surveillance software, and automated license plate readers. The bill also anticipates – and covers – AI-powered predictive policing systems on the rise today... Without oversight, the sensitive information collected by local governments about our private lives feeds databases that are ripe for abuse by the federal government. This is not a hypothetical threat – earlier this year, ICE announced it had obtained access to a nationwide database of location information collected using license plate readers – potentially sweeping in the 100+ California communities that use this technology. 
Many residents may not be aware their localities also share their information with fusion centers, federal-state intelligence warehouses that collect and disseminate surveillance data from all levels of government.

Statewide legislation can build on the nationwide Community Control Over Police Surveillance (CCOPS) movement, a reform effort spearheaded by 17 organizations, including the ACLU, that puts local residents and elected officials in charge of decisions about surveillance technology. If passed in its current form, SB 1186 would help protect Californians from intrusive, discriminatory, and unaccountable deployment of law enforcement surveillance technology."

Is there similar legislation in your state?


How Facebook Tracks Its Users, And Non-Users, Around the Internet

Many Facebook users wrongly believe that the social networking service doesn't track them around the internet when they aren't signed in. Also, many non-users of Facebook wrongly believe that they are not tracked.

Earlier this month, Consumer Reports explained the tracking:

"As you travel through the web, you’re likely to encounter Facebook Like or Share buttons, which the company calls Social Plugins, on all sorts of pages, from news outlets to shopping sites. Click on a Like button and you can see the number on the page’s counter increase by one; click on a Share button and a box opens up to let you post a link to your Facebook account.

But that’s just what’s happening on the surface. "If those buttons are on the page, regardless of whether you touch them or not, Facebook is collecting data," said Casey Oppenheim, co-founder of data security firm Disconnect."

This blog discussed social plugins back in 2010. However, the tracking includes more technologies:

"... every web page contains little bits of code that request the pictures, videos, and text that browsers need to display each item on the page. These requests typically go out to a wide swath of corporate servers—including Facebook—in addition to the website’s owner. And such requests can transmit data about the site you’re on, the browser you are using, and more. Useful data gets sent to Facebook whether you click on one of its buttons or not. If you click, Facebook finds out about that, too. And it learns a bit more about your interests.

In addition to the buttons, many websites also incorporate a Facebook Pixel, a tiny, transparent image file the size of just one of the millions of pixels on a typical computer screen. The web page makes a request for a Facebook Pixel, just as it would request a Like button. No user will ever notice the picture, but the request to get it is packaged with information... Facebook explains what data can be collected using a Pixel, such as products you’ve clicked on or added to a shopping cart, in its documentation for advertisers. Web developers can control what data is collected and when it is transmitted... Even if you’re not logged in, the company can still associate the data with your IP address and all the websites you’ve been to that contain Facebook code."

The article also explains "re-targeting," and how consumers who don't purchase anything at an online retail site will later see advertisements -- around the internet and not solely on the Facebook site -- for the items they viewed but did not purchase. Then, there is the database it assembles:

"In materials written for its advertisers, Facebook explains that it sorts consumers into a wide variety of buckets based on factors such as age, gender, language, and geographic location. Facebook also sorts its users based on their online activities—from buying dog food, to reading recipes, to tagging images of kitchen remodeling projects, to using particular mobile devices. The company explains that it can even analyze its database to build “look-alike” audiences that are similar... Facebook can show ads to consumers on other websites and apps as well through the company’s Audience Network."

So, several technologies are used to track both Facebook users and non-users, and assemble a robust, descriptive database. And, some website operators collaborate to facilitate the tracking, which is invisible to most users. Neat, eh?

Like it or not, internet users are automatically included in the tracking and data collection. Can you opt out? Consumer Reports also warns:

"The biggest tech companies don’t give you strong tools for opting out of data collection, though. For instance, privacy settings may let you control whether you see targeted ads, but that doesn’t affect whether a company collects and stores information about you."

Given this, one can conclude that Facebook is really a massive advertising network masquerading as a social networking service.

To minimize the tracking, consumers can: disable the Facebook API platform on their Facebook accounts, use the new tools (e.g., see these step-by-step instructions) by Facebook to review and disable the apps with access to their data, use ad-blocking software (e.g., Adblock Plus, Ghostery), use the opt-out mechanisms offered by the major data brokers, use the OptOutPrescreen.com site to stop pre-approved credit offers, and use VPN software and services.

If you use the Firefox web browser, configure it for Private Browsing and install the new Facebook Container add-on specifically designed to prevent Facebook from tracking you. Don't use Firefox? Several web browsers offer Incognito Mode. And, you might try the Privacy Badger add-on instead. I've used it happily for years.

To combat "canvas fingerprinting" (i.e., tracking users by identifying the unique attributes of their computer, browser, and software), security experts have advised consumers to use different web browsers. For example, you'd use one browser only for online banking, and a different web browser for surfing the internet. However, this security method may not work much longer given the rise of cross-browser fingerprinting.
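The core idea of fingerprinting, and why separate browsers used to help, can be sketched in a few lines: hash whatever attributes a site can observe into a stable identifier. Browser-level attributes (like the user agent) differ between browsers, so each browser yields a different ID; cross-browser fingerprinting instead relies on hardware-level attributes (GPU, screen, fonts) that stay the same across browsers on one machine. Attribute names below are illustrative only.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash a set of observable attributes into a short, stable identifier.
    Sorting the keys makes the result deterministic."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Browser-level attributes differ between two browsers on the same
# machine, so each browser gets a different ID...
firefox = fingerprint({"user_agent": "Firefox/58", "screen": "1920x1080", "gpu": "Intel HD 620"})
chrome = fingerprint({"user_agent": "Chrome/64", "screen": "1920x1080", "gpu": "Intel HD 620"})
assert firefox != chrome

# ...but hardware-level attributes alone are identical across those
# browsers, which is the weakness cross-browser fingerprinting exploits.
hw_only = {"screen": "1920x1080", "gpu": "Intel HD 620"}
assert fingerprint(hw_only) == fingerprint(dict(hw_only))
print(firefox, chrome, fingerprint(hw_only))
```

In other words, the two-browser defense only works as long as trackers hash browser-specific attributes; once they hash machine-specific ones, both browsers collapse into one identity.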

It seems that an arms race is underway between software for users to maintain privacy online versus technologies by advertisers to defeat users' privacy. Would Facebook and its affiliates/partners use cross-browser fingerprinting? My guess: yes it would, just like any other advertising network.

What do you think?


The 'CLOUD Act' - What It Is And What You Need To Know

Chances are, you probably have not heard of the "CLOUD Act." I hadn't heard about it until recently. A draft of the legislation is available on the website for U.S. Senator Orrin Hatch (Republican - Utah).

Many people who already use cloud services to store and back up data might assume: if it has to do with the cloud, then it must be good. Such an assumption would be foolish. The full name of the bill is the "Clarifying Lawful Overseas Use of Data (CLOUD) Act." What problem does this bill solve? Senator Hatch stated last month why he thinks this bill is needed:

"... the Supreme Court will hear arguments in a case... United States v. Microsoft Corp., colloquially known as the Microsoft Ireland case... The case began back in 2013, when the US Department of Justice asked Microsoft to turn over emails stored in a data center in Ireland. Microsoft refused on the ground that US warrants traditionally have stopped at the water’s edge. Over the last few years, the legal battle has worked its way through the court system up to the Supreme Court... The issues the Microsoft Ireland case raises are complex and have created significant difficulties for both law enforcement and technology companies... law enforcement officials increasingly need access to data stored in other countries for investigations, yet no clear enforcement framework exists for them to obtain overseas data. Meanwhile, technology companies, who have an obligation to keep their customers’ information private, are increasingly caught between conflicting laws that prohibit disclosure to foreign law enforcement. Equally important, the ability of one nation to access data stored in another country implicates national sovereignty... The CLOUD Act bridges the divide that sometimes exists between law enforcement and the tech sector by giving law enforcement the tools it needs to access data throughout the world while at the same time creating a commonsense framework to encourage international cooperation to resolve conflicts of law. To help law enforcement, the bill creates incentives for bilateral agreements—like the pending agreement between the US and the UK—to enable investigators to seek data stored in other countries..."

Senators Coons, Graham, and Whitehouse, support the CLOUD Act, along with House Representatives Collins, Jeffries, and others. The American Civil Liberties Union (ACLU) opposes the bill and warned:

"Despite its fluffy sounding name, the recently introduced CLOUD Act is far from harmless. It threatens activists abroad, individuals here in the U.S., and would empower Attorney General Sessions in new disturbing ways... the CLOUD Act represents a dramatic change in our law, and its effects will be felt across the globe... The bill starts by giving the executive branch dramatically more power than it has today. It would allow Attorney General Sessions to enter into agreements with foreign governments that bypass current law, without any approval from Congress. Under these agreements, foreign governments would be able to get emails and other electronic information without any additional scrutiny by a U.S. judge or official. And, while the attorney general would need to consider a country’s human rights record, he is not prohibited from entering into an agreement with a country that has committed human rights abuses... the bill would for the first time allow these foreign governments to wiretap in the U.S. — even in cases where they do not meet Wiretap Act standards. Paradoxically, that would give foreign governments the power to engage in surveillance — which could sweep in the information of Americans communicating with foreigners — that the U.S. itself would not be able to engage in. The bill also provides broad discretion to funnel this information back to the U.S., circumventing the Fourth Amendment. This information could potentially be used by the U.S. to engage in a variety of law enforcement actions."

Given that warning, I read the draft legislation. One portion immediately struck me:

"A provider of electronic communication service or remote computing service shall comply with the obligations of this chapter to preserve, backup, or disclose the contents of a wire or electronic communication and any record or other information pertaining to a customer or subscriber within such provider’s possession, custody, or control, regardless of whether such communication, record, or other information is located within or outside of the United States."

While I am not an attorney, this bill definitely sounds like an end-run around the Fourth Amendment. The review process is largely governed by the House of Representatives, a body not known for its internet knowledge or savvy. The bill also smells like an attack on internet services consumers regularly use for privacy, such as search engines that don't collect or archive search data, and Virtual Private Networks (VPNs).

Today, many consumers in the United States seeking online privacy use VPN software and services provided by vendors located offshore. Why? Despite a national poll in 2017 which found the Republican rollback of FCC broadband privacy rules very unpopular among consumers, the Republican-led Congress proceeded with that rollback, and President Trump signed the privacy-rollback legislation on April 3, 2017. Hopefully, skilled and experienced privacy attorneys will continue to review and monitor the draft legislation.

The ACLU emphasized in its warning:

"Today, the information of global activists — such as those that fight for LGBTQ rights, defend religious freedom, or advocate for gender equality are protected from being disclosed by U.S. companies to governments who may seek to do them harm. The CLOUD Act eliminates many of these protections and replaces them with vague assurances, weak standards, and largely unenforceable restrictions... The CLOUD Act represents a major change in the law — and a major threat to our freedoms. Congress should not try to sneak it by the American people by hiding it inside of a giant spending bill. There has not been even one minute devoted to considering amendments to this proposal. Congress should robustly debate this bill and take steps to fix its many flaws, instead of trying to pull a fast one on the American people."

I agree. Seems like this bill creates far more problems than it solves. Plus, something this important should be openly and thoroughly discussed; not be buried in a spending bill. What do you think?


Fitness Device Usage By U.S. Soldiers Reveals Sensitive Location And Movement Data

Useful technology can often have unintended consequences. The Washington Post reported about an interactive map:

"... posted on the Internet that shows the whereabouts of people who use fitness devices such as Fitbit also reveals highly sensitive information about the locations and activities of soldiers at U.S. military bases, in what appears to be a major security oversight. The Global Heat Map, published by the GPS tracking company Strava, uses satellite information to map the locations and movements of subscribers to the company’s fitness service over a two-year period, by illuminating areas of activity. Strava says it has 27 million users around the world, including people who own widely available fitness devices such as Fitbit and Jawbone, as well as people who directly subscribe to its mobile app. The map is not live — rather, it shows a pattern of accumulated activity between 2015 and September 2017... The U.S.-led coalition against the Islamic State said on Monday it is revising its guidelines on the use of all wireless and technological devices on military facilities as a result of the revelations. "

Takeaway #1: it's easier than you might think for the bad guys to track the locations and movements of high-value targets (e.g., soldiers, corporate executives, politicians, attorneys).

Takeaway #2: unintended consequences from mobile devices are not new, as CNN reported in 2015. Consumers love the convenience of their digital devices. It is wise to remember the warning from a famous economist: "There's no such thing as a free lunch."


Wisconsin Employer To Offer Its Employees ID Microchip Implants

[Image: Microchip implant to be used by Three Square Market]

A Wisconsin company said that, starting August 1, it will offer its employees the option of having microchip identification implants. The company, Three Square Market (32M), will allow employees with the microchip implants to make purchases in the employee break room, open locked doors, log in to computers, use the copy machine, and perform related office tasks.

Each microchip, about the size of a grain of rice (see photo above), would be implanted under the skin in an employee's hand. The microchips use radio-frequency identification (RFID), a technology that has existed for a while and has been used in a variety of devices: employee badges, payment cards, passports, package tracking, and more. Each microchip electronically stores identification information about the user, and uses near-field communications (NFC). Instead of swiping a payment card, employee badge, or smartphone, the employee can unlock a device by waving their hand near a chip reader attached to that device. Purchases in the employee break room can be made by waving their hand near a self-serve kiosk.

Reportedly, 32M would be the first employer in the USA to microchip its employees. CBS News reported in April about Epicenter, a startup based in Sweden:

"The [implant] injections have become so popular that workers at Epicenter hold parties for those willing to get implanted... Epicenter, which is home to more than 100 companies and some 2,000 workers, began implanting workers in January 2015. Now, about 150 workers have [chip implants]... as with most new technologies, it raises security and privacy issues. While biologically safe, the data generated by the chips can show how often an employee comes to work or what they buy. Unlike company swipe cards or smartphones, which can generate the same data, a person cannot easily separate themselves from the chip."

In an interview with Saint Paul-based KSTP, Todd Westby, the Chief Executive Officer of 32M, described the optional microchip program as:

"... the next thing that's inevitably going to happen, and we want to be a part of it..."

To implement its microchip implant program, 32M has partnered with Sweden-based BioHax International. Westby explained in a company announcement:

"Eventually, this technology will become standardized allowing you to use this as your passport, public transit, all purchasing opportunities... We see chip technology as the next evolution in payment systems, much like micro markets have steadily replaced vending machines... it is important that 32M continues leading the way with advancements such as chip implants..."

"Mico markets" are small stores located within employers' offices; typically the break rooms where employees relax and/or purchase food. 32M estimates 20,000 micro markets nationwide in the USA. According to its website, the company serves markets in North America, Europe, Asia, and Australia. 32M believes that micro markets, aided by chip implants and self-serve kiosk, offer employers greater employee productivity with lower costs.

Yes, the chip implants are similar to those many pet owners have inserted to identify their dogs or cats. 32M expects 50 employees to enroll in its chip implant program.

Reportedly, companies in Belgium and Sweden already use chip implants to identify employees. 32M's announcement did not list the data elements each employee's microchip would contain, nor whether the data in the microchips would be encrypted. Historically, unencrypted data stored by RFID technology has been vulnerable to skimming attacks by criminals using portable or hand-held RFID readers. Stolen information would then be used to clone devices to commit identity theft and fraud.
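The skimming risk comes down to whether the bytes on the chip mean anything to an arbitrary reader. A minimal sketch, using an entirely hypothetical payload layout (not 32M's or BioHax's actual chip format):

```python
def parse_plaintext_tag(payload: bytes) -> dict:
    """Decode an unencrypted tag payload. Any off-the-shelf reader can do
    this if the chip emits plaintext; the 'employee_id|door_access'
    layout here is purely hypothetical."""
    employee_id, access = payload.decode().split("|")
    return {"employee_id": employee_id, "door_access": access}

# An unencrypted, static payload gives a skimmer everything needed to
# clone the tag: capture once, replay forever.
tag = b"E-1047|BREAKROOM,FRONTDOOR"
print(parse_plaintext_tag(tag))

# By contrast, if the chip stored only ciphertext, or performed a
# challenge-response handshake instead of emitting a static ID, a
# skimmer's capture would be useless without the secret key.
```

This is why the absence of any encryption details in 32M's announcement matters: the same convenience that lets a kiosk read the chip in passing would let a hidden reader do so, too.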

Some states, such as Washington and California, passed anti-skimming laws. Prior government-industry workshops about RFID usage focused upon consumer products, and not employment concerns. Earlier this year, lawmakers in Nevada introduced legislation making it illegal to require employees to accept microchip implants.

A BBC News reporter discussed in 2015 what it is like to be "chipped." And as CBS News reported:

"... hackers could conceivably gain huge swathes of information from embedded microchips. The ethical dilemmas will become bigger the more sophisticated the microchips become. The data that you could possibly get from a chip that is embedded in your body is a lot different from the data that you can get from a smartphone..."

Example: if employers install RFID readers so employees can unlock bathrooms, then employers can track when, where, how often, and for how long employees use the bathrooms. How does that sound?

Hopefully, future announcements by 32M will discuss the security features and protections. What are your opinions? Are you willing to be an office cyborg? Should employees have a choice, or should employers be able to force their employees to accept microchip implants? How do you feel about your employer tracking what you eat and drink via purchases with your chip implant?

Many employers publish social media policies covering what employees should (shouldn't, or can't) publish online. Should employers have microchip implant policies, too? If so, what should these policies state?


Microsoft Fights Foreign Cyber Criminals And Spies

The Daily Beast explained how Microsoft fights cyber criminals and spies, some of whom have alleged ties to the Kremlin:

"Last year attorneys for the software maker quietly sued the hacker group known as Fancy Bear in a federal court outside Washington DC, accusing it of computer intrusion, cybersquatting, and infringing on Microsoft’s trademarks. The action, though, is not about dragging the hackers into court. The lawsuit is a tool for Microsoft to target what it calls “the most vulnerable point” in Fancy Bear’s espionage operations: the command-and-control servers the hackers use to covertly direct malware on victim computers. These servers can be thought of as the spymasters in Russia's cyber espionage, waiting patiently for contact from their malware agents in the field, then issuing encrypted instructions and accepting stolen documents.

Since August, Microsoft has used the lawsuit to wrest control of 70 different command-and-control points from Fancy Bear. The company’s approach is indirect, but effective. Rather than getting physical custody of the servers, which Fancy Bear rents from data centers around the world, Microsoft has been taking over the Internet domain names that route to them. These are addresses like “livemicrosoft[.]net” or “rsshotmail[.]com” that Fancy Bear registers under aliases for about $10 each. Once under Microsoft’s control, the domains get redirected from Russia’s servers to the company’s, cutting off the hackers from their victims, and giving Microsoft an omniscient view of those servers’ network of automated spies."
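Domains like "livemicrosoft[.]net" work because they embed a trusted brand name inside a domain the brand owner never registered, which is exactly what gave Microsoft standing to sue over trademark infringement. A crude version of the detection logic can be sketched as follows; real trademark and abuse monitoring is far more involved, and the allowlist here is a stand-in.

```python
def looks_like_brand_squat(domain: str, brands: set, legit: set) -> bool:
    """Flag domains that contain a protected brand string but are not on
    the brand owner's list of legitimate domains. Illustrative heuristic
    only; real monitoring also handles typos, homoglyphs, subdomains, etc."""
    name = domain.lower().rstrip(".")
    if name in legit:
        return False  # the brand owner's own domain is fine
    return any(brand in name for brand in brands)

BRANDS = {"microsoft", "hotmail"}
LEGIT = {"microsoft.com", "hotmail.com"}

for d in ["livemicrosoft.net", "rsshotmail.com", "microsoft.com", "example.org"]:
    print(d, looks_like_brand_squat(d, BRANDS, LEGIT))
```

Finding such a domain is the easy part; Microsoft's innovation was using the courts to transfer the name and sinkhole the traffic, rather than chasing the physical servers.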

Kudos to Microsoft and its attorneys.


Russian Cyber Attacks Against US Voting Systems Wider Than First Thought

Cyber attacks upon electoral systems in the United States were wider than originally thought, occurring in at least 39 states, according to Bloomberg. The Bloomberg report described online attacks in Illinois as an example:

"... investigators found evidence that cyber intruders tried to delete or alter voter data. The hackers accessed software designed to be used by poll workers on Election Day, and in at least one state accessed a campaign finance database. Details of the wave of attacks, in the summer and fall of 2016... In early July 2016, a contractor who works two or three days a week at the state board of elections detected unauthorized data leaving the network, according to Ken Menzel, general counsel for the Illinois board of elections. The hackers had gained access to the state’s voter database, which contained information such as names, dates of birth, genders, driver’s licenses and partial Social Security numbers on 15 million people, half of whom were active voters. As many as 90,000 records were ultimately compromised..."

Politicians have emphasized that the point of the disclosures isn't to embarrass any specific state, but to alert the public to past activities and to the ongoing threat. The Intercept reported:

"Russian military intelligence executed a cyberattack on at least one U.S. voting software supplier and sent spear-phishing emails to more than 100 local election officials just days before last November’s presidential election, according to a highly classified intelligence report obtained by The Intercept.

The top-secret National Security Agency document, which was provided anonymously to The Intercept and independently authenticated, analyzes intelligence very recently acquired by the agency about a months-long Russian intelligence cyber effort against elements of the U.S. election and voting infrastructure. The report, dated May 5, 2017, is the most detailed U.S. government account of Russian interference in the election that has yet come to light."

Spear-phishing is a tactic in which criminals send malware-laden e-mail messages to targeted individuals, whose names and demographic details may have been collected from social networking sites and other sources. The spam e-mail uses those details to masquerade as valid e-mail from a coworker, business associate, or friend. When the target opens the e-mail attachment, their computer and network are often infected with malware that collects and transmits log-in credentials to the criminals, or that remotely takes over the target's computer (e.g., ransomware) to demand ransom payments. Stolen log-in credentials are how criminals steal consumers' money by breaking into online bank accounts.

The Intercept report explained how the elections systems hackers adopted this tactic:

"... the Russian plan was simple: pose as an e-voting vendor and trick local government employees into opening Microsoft Word documents invisibly tainted with potent malware that could give hackers full control over the infected computers. But in order to dupe the local officials, the hackers needed access to an election software vendor’s internal systems to put together a convincing disguise. So on August 24, 2016, the Russian hackers sent spoofed emails purporting to be from Google to employees of an unnamed U.S. election software company... The spear-phishing email contained a link directing the employees to a malicious, faux-Google website that would request their login credentials and then hand them over to the hackers. The NSA identified seven “potential victims” at the company. While malicious emails targeting three of the potential victims were rejected by an email server, at least one of the employee accounts was likely compromised, the agency concluded..."

Experts believe the voting equipment company targeted was VR Systems, based in Florida. Reportedly, its electronic voting services and equipment are used in eight states. VR Systems posted online a Frequently Asked Questions document (Adobe PDF) about the cyber attacks against elections systems:

"Recent reports indicate that cyber actors impersonated VR Systems and other elections companies. Cyber actors sent an email from a fake account to election officials in an unknown number of districts just days before the 2016 general election. The fraudulent email asked recipients to open an attachment, which would then infect their computer, providing a gateway for more mischief... Because the spear-phishing email did not originate from VR Systems, we do not know how many jurisdictions were potentially impacted. Many election offices report that they never received the email or it was caught by their spam filters before it could reach recipients. It is our understanding that all jurisdictions, including VR Systems customers, have been notified by law enforcement agencies if they were a target of this spear-phishing attack... In August, a small number of phishing emails were sent to VR Systems. These emails were captured by our security protocols and the threat was neutralized. No VR Systems employee’s email was compromised. This prevented the cyber actors from accessing a genuine VR Systems email account. As such, the cyber actors, as part of their late October spear-phishing attack, resorted to creating a fake account to use in that spear-phishing campaign."

It is good news that VR Systems protected its employees' e-mail accounts. Let's hope that those employees were equally diligent about protecting their personal e-mail accounts and home computers, networks, and phones. We all know employees who often work from home.

The Intercept report highlighted a fact about life on the internet, which all internet users should know: stolen log-in credentials are highly valued by criminals:

"Jake Williams, founder of computer security firm Rendition Infosec and formerly of the NSA’s Tailored Access Operations hacking team, said stolen logins can be even more dangerous than an infected computer. “I’ll take credentials most days over malware,” he said, since an employee’s login information can be used to penetrate “corporate VPNs, email, or cloud services,” allowing access to internal corporate data. The risk is particularly heightened given how common it is to use the same password for multiple services. Phishing, as the name implies, doesn’t require everyone to take the bait in order to be a success — though Williams stressed that hackers “never want just one” set of stolen credentials."

So, a word to the wise for all internet users: don't use the same log-in credentials at multiple sites. Don't open e-mail attachments from strangers. If you weren't expecting an e-mail attachment from a coworker, friend, or business associate, call them on the phone first and verify that they indeed sent it. The internet has become a dangerous place.


60 Minutes Re-Broadcast Its 2014 Interview With FBI Director Comey

Last night, the 60 Minutes television show re-broadcast its 2014 interview with former Federal Bureau of Investigation (FBI) Director James Comey. The interview is important for several reasons.

Politically liberal people have criticized Comey for mentioning to Congress, just before the 2016 election, the FBI investigation of former Secretary of State Hillary Clinton's private e-mail server. Many believe that Comey's comments helped candidate Donald Trump win the Presidential election. Politically conservative people criticized Comey for not recommending prosecution of former Secretary Clinton.

The interview is a reminder of history, and that reality is often far more nuanced and complicated. Back in 2004, when the George W. Bush administration sought a re-authorization of warrantless e-mail and phone searches, 60 Minutes explained:

"At the time, Comey was in charge at the Justice Department because Attorney General John Ashcroft was in intensive care with near fatal pancreatitis. When Comey refused to sign off, the president's Chief of Staff Andy Card headed to the hospital to get Ashcroft's OK."

In the 2014 interview, Comey described his concerns in 2004 about key events:

"... [the government] cannot read your emails or listen to your calls without going to a federal judge, making a showing of probable cause that you are a terrorist, an agent of a foreign power, or a serious criminal of some sort, and get permission for a limited period of time to intercept those communications. It is an extremely burdensome process. And I like it that way... I was the deputy attorney general of the United States. We were not going to authorize, reauthorize or participate in activities that did not have a lawful basis."

During the 2014 interview, then-FBI Director Comey warned all Americans:

"I believe that Americans should be deeply skeptical of government power. You cannot trust people in power. The founders knew that. That's why they divided power among three branches, to set interest against interest... The promise I've tried to honor my entire career, that the rule of law and the design of the founders, right, the oversight of courts and the oversight of Congress will be at the heart of what the FBI does. The way you'd want it to be..."

The interview highlighted the letter Comey kept on his desk as a cautionary reminder of the excesses of government. That letter was about former FBI Director J. Edgar Hoover's investigations and excessive surveillance of the late Dr. Martin Luther King, Jr. Is Comey the bad guy that people on both sides of the political spectrum claim? Again, history is far more complicated and nuanced.

So, history is complex and nuanced... far more than any simplistic, self-serving tweet.

Many have paid close attention for years. After the Snowden disclosures in 2013 about broad, warrantless searches and data collection programs by government intelligence agencies, in 2014 Comey urged all USA citizens to participate in a national discussion about the balance between privacy and surveillance.

You can read the full transcript of the 2014 60 Minutes interview, watch this preview on YouTube, or watch last night's re-broadcast by 60 Minutes of the 2014 interview.