Internet-connected televisions, often referred to as "smart TVs," collect a wide variety of information about consumers. The devices track the videos you watch from several sources: cable, broadband, set-top box, DVD player, over-the-air broadcasts, and streaming devices. They also collect demographic details, such as sex, age, income, marital status, household size, education level, home ownership, and household value. The TV makers sell this information to third parties, such as advertisers and data brokers.
Some people might call this "surveillance capitalism."
Reliability and trust with smart devices are critical for consumers. Earlier this month, Vizio agreed to pay $2.2 million to settle privacy abuse charges by the U.S. Federal Trade Commission (FTC).
What's a consumer to do to protect their privacy? This CNET article provides good step-by-step instructions to turn off, or at least minimize, the tracking by your smart television. The instructions cover several smart TV brands: Samsung, Vizio, LG, Sony, and others. Sample instructions for one brand:
"Samsung: On 2016 TVs, click the remote's Home button, go to Settings (gear icon), scroll down to Support, then down to Terms & Policy. Under "Interest Based Advertisement" click "Disable Interactive Services." Under "Viewing Information Services" unclick "I agree." And under "Voice Recognition Services" click "Disable advanced features of the Voice Recognition services." If you want you can also disagree with the other two, Nuance Voice Recognition and Online Remote Management.
On older Samsung TVs, hit the remote's Menu button (on 2015 models only, then select Menu from the top row of icons), scroll down to Smart Hub, then select Terms & Policy. Disable "SynchPlus and Marketing." You can also disagree with any of the other policies listed there, and if your TV has them, disable the voice recognition and disagree with the Nuance privacy notice described above."
Browse the step-by-step instructions for your brand of television. If you disabled the tracking features on your smart TV, how did it go? If you used a different resource to learn about your smart TV's tracking features, please share it below.
Advocacy Groups And Legal Experts Denounce DHS Proposal Requiring Travelers To Disclose Social Media Credentials
Several dozen human rights organizations, civil liberties advocates, and legal experts published an open letter on February 21, 2017 condemning a proposal by the U.S. Department of Homeland Security to require the social media credentials (e.g., usernames and passwords) of all travelers from majority-Muslim countries. The letter was sent after testimony before Congress by Homeland Security Secretary John Kelly. NBC News reported on February 8:
"Homeland Security Secretary John Kelly told Congress on Tuesday the measure was one of several being considered to vet refugees and visa applicants from seven Muslim-majority countries. "We want to get on their social media, with passwords: What do you do, what do you say?" he told the House Homeland Security Committee. "If they don't want to cooperate then you don't come in."
His comments came the same day judges heard arguments over President Donald Trump's executive order temporarily barring entry to most refugees and travelers from Syria, Iraq, Iran, Somalia, Sudan, Libya and Yemen. Kelly, a Trump appointee, stressed that asking for people's passwords was just one of "the things that we're thinking about" and that none of the suggestions were concrete."
The letter, available at the Center For Democracy & Technology (CDT) website, stated in part (bold emphasis added):
"The undersigned coalition of human rights and civil liberties organizations, trade associations, and experts in security, technology, and the law expresses deep concern about the comments made by Secretary John Kelly at the House Homeland Security Committee hearing on February 7th, 2017, suggesting the Department of Homeland Security could require non-citizens to provide the passwords to their social media accounts as a condition of entering the country.
We recognize the important role that DHS plays in protecting the United States’ borders and the challenges it faces in keeping the U.S. safe, but demanding passwords or other account credentials without cause will fail to increase the security of U.S. citizens and is a direct assault on fundamental rights.
This proposal would enable border officials to invade people’s privacy by examining years of private emails, texts, and messages. It would expose travelers and everyone in their social networks, including potentially millions of U.S. citizens, to excessive, unjustified scrutiny. And it would discourage people from using online services or taking their devices with them while traveling, and would discourage travel for business, tourism, and journalism."
The letter was signed by about 75 organizations and individuals, including the American Civil Liberties Union, the American Library Association, the American Society of Journalists & Authors, the American Society of News Editors, Americans for Immigrant Justice, the Brennan Center for Justice at NYU School of Law, Electronic Frontier Foundation, Human Rights Watch, Immigrant Legal Resource Center, National Hispanic Media Coalition, Public Citizen, Reporters Without Borders, the World Privacy Forum, and many more.
The letter is also available here (Adobe PDF).
A privacy watchdog group in the European Union (EU) is concerned about privacy and data collection practices by Microsoft. The group, comprising 28 agencies and referred to as the Article 29 Working Party, sent a letter to Microsoft asking for explanations about privacy concerns with the software company's Windows 10 operating system software.
The February 2017 letter to Brendon Lynch, Chief Privacy Officer, and to Satya Nadella, Chief Executive Officer, was a follow-up to a prior letter sent in January. The February letter explained:
"Following the launch of Windows 10, a new version of the Windows operating system, a number of concerns have been raised, in the media and in signals from concerned citizens to the data protection authorities, regarding protection of your users’ personal data... the Working Party expressed significant concerns about the default installation settings and an apparent lack of control for a user to prevent collection or further processing of data, as well as concerns about the scope of data that are being collected and further processed... "
"Additionally, the purposes for which Microsoft collects personal data have to be specified, explicit and legitimate, and the data may not be further processed in a way incompatible with those purposes. Microsoft processes data collected through Windows 10 for different purposes, including personalised advertising. Microsoft should clearly explain what kinds of personal data are processed for what purposes. Without such information, consent cannot be informed, and therefore, not valid..."
"Republican Senator Jeff Flake, who opposes the Federal Communications Commission's broadband privacy rules, says he's readying a resolution to rescind them, Politico reports. Flake's confirmation to Politico comes days after Rep. Marsha Blackburn (R-Tennessee), the head of the House Communications Subcommittee, said she intends to work with the Senate to revoke the privacy regulations."
Blackburn's name is familiar. She was a key part of the GOP effort in 2014 to keep in place state laws that limit broadband competition by preventing citizens from forming local broadband providers. Many people want to form local broadband providers to get both higher speeds and lower prices compared to offerings by corporate internet service providers (ISPs). They can't, because 20 states have laws preventing broadband competition. A worldwide study in 2014 found that consumers in the United States get poor broadband value: they pay more and get slower speeds. Plus, the only consumers getting good value were community broadband customers. In June 2014, the FCC announced plans to challenge these restrictive state laws that limit competition and keep Internet prices high. That FCC effort failed. To encourage competition and lower prices, several Democratic representatives introduced the Community Broadband Act in 2015. That legislation went nowhere in a GOP-controlled Congress.
Pause for a moment and let that sink in. Blackburn and other GOP representatives have pursued policies where we consumers all pay more for broadband due to the lack of competition. The GOP, a party that supposedly dislikes regulation and prefers free-market competition, is happy to do the opposite to help their corporate donors. The GOP, a party that historically has promoted states' rights, now uses state laws to restrict the freedoms of constituents at the city, town, and local levels. And, that includes rural constituents.
Too many GOP voters seem oblivious to this. Why Democrats failed to capitalize on this broadband issue, especially during the Presidential campaign last year, is puzzling. Everyone needs broadband: work, play, school, travel, entertainment.
Now, back to the effort to revoke the FCC's broadband privacy rules. Several cable, telecommunications, and advertising lobbies sent a letter in January asking Congress to remove the broadband privacy rules. That letter said in part:
"... in adopting new broadband privacy rules late last year, the Federal Communications Commission (“FCC”) took action that jeopardizes the vibrancy and success of the internet and the innovations the internet has and should continue to offer. While the FCC’s Order applies only to Internet Service Providers (“ISPs”), the onerous and unnecessary rules it adopted establish a very harmful precedent for the entire internet ecosystem. We therefore urge Congress to enact a resolution of disapproval pursuant to the Congressional Review Act (“CRA”) vitiating the Order."
The new privacy rules by the FCC require broadband providers (a/k/a ISPs) to: obtain affirmative “opt-in” consent from consumers before using and sharing consumers' sensitive information; specify the types of information that are sensitive (e.g., geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history, and the content of communications); stop using and sharing information about consumers who have opted out of information sharing; meet transparency requirements to clearly notify customers about the information collection and sharing, and how to change their opt-in or opt-out preferences; prohibit "take-it-or-leave-it" offers where ISPs refuse to serve customers who don't consent to the information collection and sharing; and comply with "reasonable data security practices and guidelines" to protect the sensitive information collected and shared.
The new FCC privacy rules are common sense stuff, but clearly these companies view common-sense methods as a burden. They want to use consumers' information however they please without limits, and without consideration for consumers' desire to control their own personal information. And, GOP representatives in Congress are happy to oblige these companies in this abuse.
Alarmingly, there is more. Lots more.
The GOP-led Congress also seeks to roll back consumer protections in banking and financial services. According to Consumer Reports, the issue arose earlier this month in:
"... a memo by House Financial Services Committee Chairman Rep. Jeb Hensarling (R-Tex), which was leaked to the press yesterday... The fate of the database was first mentioned [February 9th] when Bloomberg reported on a memo by Hensarling, an outspoken critic of the CFPB. The memo outlined a new version of the Financial CHOICE Act (Creating Hope and Opportunity for Investors, Consumers and Entrepreneurs), a bill originally advanced by the House Financial Services Committee in September. The new bill would lead to the repeal of the Consumer Complaint Database. It would also eliminate the CFPB's authority to punish unfair, deceptive or abusive practices among banks and other lenders, and it would allow the President to handpick—and fire—the bureau's director at will."
Banks have paid billions in fines to resolve a variety of allegations and complaints about wrongdoing. Consumers have often been abused by banks. You may remember the massive $185 million fine for the phony accounts scandal at Wells Fargo. Or, you may remember consumers forced to use prison-release cards. Or, maybe you experienced debt collection scams. And, this blog has covered extensively much of the great work by the CFPB which has helped consumers.
Do these two legislative items bother you? I sincerely hope they do. Contact your elected officials today and demand that they support the FCC privacy rules.
If you travel for business, pleasure, or both then today's blog post will probably interest you. Wired Magazine reported:
"In the weeks since President Trump’s executive order ratcheted up the vetting of travelers from majority Muslim countries, or even people with Muslim-sounding names, passengers have experienced what appears from limited data to be a “spike” in cases of their devices being seized by customs officials. American Civil Liberties Union attorney Nathan Wessler says the group has heard scattered reports of customs agents demanding passwords to those devices, and even social media accounts."
Devices include smartphones, laptops, and tablets. Many consumers realize that relinquishing passwords to social networking sites (e.g., Facebook, Instagram, etc.) discloses sensitive information not just about themselves, but also all of their friends, family, classmates, neighbors, and coworkers -- anyone they are connected with online. The "Bring Your Own Device" policies at many companies mean that employees (and contractors) can use their personal devices in the workplace and/or connect them remotely to company networks. Those connected devices can easily divulge company trade secrets and other sensitive information when seized by Customs and Border Protection (CBP) agents for analysis and data collection.
Plus, professionals such as attorneys and consultants are required to protect their clients' sensitive information. These professionals, who also must travel, require data security and privacy for business.
Wired also reported:
"In fact, US Customs and Border Protection has long considered US borders and airports a kind of loophole in the Constitution’s Fourth Amendment protections, one that allows them wide latitude to detain travelers and search their devices. For years, they’ve used that opportunity to hold border-crossers on the slightest suspicion, and demand access to their computers and phones with little formal cause or oversight.
Even citizens are far from immune. CBP detainees from journalists to filmmakers to security researchers have all had their devices taken out of their hands by agents."
For travelers wanting privacy, what are the options? Remain at home? This may not be an option for workers who must travel for business. Leave your devices at home? Again, impractical for many. The Wired article provided several suggestions, including:
"If customs officials do take your devices, don’t make their intrusion easy. Encrypt your hard drive with tools like BitLocker, TrueCrypt, or Apple’s Filevault, and choose a strong passphrase. On your phone—preferably an iPhone, given Apple’s track record of foiling federal cracking—set a strong PIN and disable Siri from the lockscreen by switching off “Access When Locked” under the Siri menu in Settings.
Remember also to turn your devices off before entering customs: Hard drive encryption tools only offer full protection when a computer is fully powered down. If you use TouchID, your iPhone is safest when it’s turned off, too..."
What are the consequences when travelers refuse to disclose the passwords to their encrypted devices? Ars Technica also explored the issues:
"... Ars spoke with several legal experts, and contacted CBP itself (which did not provide anything beyond previously-published policies). The short answer is: your device probably will be seized (or "detained" in CBP parlance), and you might be kept in physical detention—although no one seems to be sure exactly for how long.
An unnamed CBP spokesman told The New York Times on Tuesday that such electronic searches are extremely rare: he said that 4,444 cellphones and 320 other electronic devices were inspected in 2015, or 0.0012 percent of the 383 million arrivals (presuming that all those people had one device)... The most recent public document to date on this topic appears to be an August 2009 Department of Homeland Security paper entitled "Privacy Impact Assessment for the Border Searches of Electronic Devices." That document states that "For CBP, the detention of devices ordinarily should not exceed five (5) days, unless extenuating circumstances exist." The policy also states that CBP or Immigration and Customs Enforcement "may demand technical assistance, including translation or decryption," citing a federal law, 19 US Code Section 507."
The Electronic Frontier Foundation (EFF) collects stories from travelers who've been detained and had their devices seized. Clearly, we will hear a lot more in the future about these privacy issues. What are your opinions of this?
[Editor's note: today's guest post is by reporters at ProPublica. I've posted it because, a) many consumers don't know how their personal information is bought, sold, and used by companies and social networking sites; b) the USA is a capitalist society and the sensitive personal data that describes consumers is consumers' personal property; c) a better appreciation of "a" and "b" will hopefully encourage more consumers to be less willing to trade their personal property for convenience, and to demand better privacy protections from products, services, software, apps, and devices; and d) when lobbyists and politicians act to erode consumers' property and privacy rights, hopefully more consumers will respond and act. Facebook is not the only social networking site that trades consumers' information. This news story is reprinted with permission.]
Facebook has long let users see all sorts of things the site knows about them, like whether they enjoy soccer, have recently moved, or like Melania Trump.
But the tech giant gives users little indication that it buys far more sensitive data about them, including their income, the types of restaurants they frequent and even how many credit cards are in their wallets.
Since September, ProPublica has been encouraging Facebook users to share the categories of interest that the site has assigned to them. Users showed us everything from "Pretending to Text in Awkward Situations" to "Breastfeeding in Public." In total, we collected more than 52,000 unique attributes that Facebook has used to classify users.
Facebook's site says it gets information about its users "from a few different sources."
What the page doesn't say is that those sources include detailed dossiers obtained from commercial data brokers about users' offline lives. Nor does Facebook show users any of the often remarkably detailed information it gets from those brokers.
"They are not being honest," said Jeffrey Chester, executive director of the Center for Digital Democracy. "Facebook is bundling a dozen different data companies to target an individual customer, and an individual should have access to that bundle as well."
When asked this week about the lack of disclosure, Facebook responded that it doesn't tell users about the third-party data because it's widely available and was not collected by Facebook.
"Our approach to controls for third-party categories is somewhat different than our approach for Facebook-specific categories," said Steve Satterfield, a Facebook manager of privacy and public policy. "This is because the data providers we work with generally make their categories available across many different ad platforms, not just on Facebook."
Satterfield said users who don't want that information to be available to Facebook should contact the data brokers directly. He said users can visit a page in Facebook's help center, which provides links to the opt-outs for six data brokers that sell personal data to Facebook.
Limiting commercial data brokers' distribution of your personal information is no simple matter. For instance, opting out of Oracle's Datalogix, which provides about 350 types of data to Facebook according to our analysis, requires "sending a written request, along with a copy of government-issued identification" in postal mail to Oracle's chief privacy officer.
Users can ask data brokers to show them the information stored about them. But that can also be complicated. One Facebook broker, Acxiom, requires people to send the last four digits of their social security number to obtain their data. Facebook changes its providers from time to time so members would have to regularly visit the help center page to protect their privacy.
One of us actually tried to do what Facebook suggests. While writing a book about privacy in 2013, reporter Julia Angwin tried to opt out from as many data brokers as she could. Of the 92 brokers she identified that accepted opt-outs, 65 of them required her to submit a form of identification such as a driver's license. In the end, she could not remove her data from the majority of providers.
ProPublica's experiment to gather Facebook's ad categories from readers was part of our Black Box series, which explores the power of algorithms in our lives. Facebook uses algorithms not only to determine the news and advertisements that it displays to users, but also to categorize its users in tens of thousands of micro-targetable groups.
Our crowd-sourced data showed us that Facebook's categories range from innocuous groupings of people who like southern food to sensitive categories such as "Ethnic Affinity," which categorizes people based on their affinity for African-Americans, Hispanics and other ethnic groups. Advertisers can target ads toward a group, or exclude ads from being shown to a particular group.
Last month, after ProPublica bought a Facebook ad in its housing categories that excluded African-Americans, Hispanics and Asian-Americans, the company said it would build an automated system to help it spot ads that illegally discriminate.
Facebook has been working with data brokers since 2012, when it signed a deal with Datalogix. This prompted Chester, the privacy advocate at the Center for Digital Democracy, to file a complaint with the Federal Trade Commission alleging that Facebook had violated a consent decree with the agency on privacy issues. The FTC has never publicly responded to that complaint, and Facebook subsequently signed deals with five other data brokers.
To find out exactly what type of data Facebook buys from brokers, we downloaded a list of 29,000 categories that the site provides to ad buyers. Nearly 600 of the categories were described as being provided by third-party data brokers. (Most categories were described as being generated by clicking pages or ads on Facebook.)
The categories from commercial data brokers were largely financial, such as "total liquid investible assets $1-$24,999," "People in households that have an estimated household income of between $100K and $125K," or even "Individuals that are frequent transactor at lower cost department or dollar stores."
We compared the data broker categories with the crowd-sourced list of what Facebook tells users about themselves. We found none of the data broker information among the tens of thousands of "interests" that Facebook showed users.
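The comparison ProPublica describes boils down to a set-intersection check: gather the category names from both lists and test whether any broker-supplied category appears among the categories shown to users. Here is a minimal sketch of that check, using a handful of made-up stand-in entries rather than the real 52,000-item crowd-sourced dataset:

```python
# Categories Facebook showed users about themselves
# (stand-in examples, not ProPublica's actual data).
shown_to_users = {
    "Pretending to Text in Awkward Situations",
    "Breastfeeding in Public",
    "NPR",
}

# Categories described in the ad-buyer list as coming from data brokers
# (stand-in examples).
from_brokers = {
    "total liquid investible assets $1-$24,999",
    "frequent transactor at lower cost department or dollar stores",
}

# Set intersection: which broker categories, if any, are disclosed to users?
overlap = shown_to_users & from_brokers
print(sorted(overlap))  # an empty list means no broker data is shown to users
```

With Python sets, the `&` operator computes the intersection in a single pass, which is why this kind of cross-list audit scales easily even to tens of thousands of categories.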
Our tool also allowed users to react to the categories they were placed in as being "wrong," "creepy" or "spot on." The category that received the most votes for "wrong" was "Farmville slots." The category that got the most votes for "creepy" was "Away from family." And the category that was rated most "spot on" was "NPR."
ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.
Today's blog post highlights how easy it is for manufacturers to make and sell smart-home devices that spy on consumers without notice or consent. VIZIO, Inc., one of the largest makers of smart televisions, agreed to pay $2.2 million to settle privacy abuse charges by the U.S. Federal Trade Commission (FTC) and the State of New Jersey Attorney General. The FTC announcement explained:
"... starting in February 2014, VIZIO, Inc. and an affiliated company have manufactured VIZIO smart TVs that capture second-by-second information about video displayed on the smart TV, including video from consumer cable, broadband, set-top box, DVD, over-the-air broadcasts, and streaming devices. In addition, VIZIO facilitated appending specific demographic information to the viewing data, such as sex, age, income, marital status, household size, education level, home ownership, and household value... VIZIO sold this information to third parties, who used it for various purposes, including targeting advertising to consumers across devices... VIZIO touted its “Smart Interactivity” feature that “enables program offers and suggestions” but failed to inform consumers that the settings also enabled the collection of consumers’ viewing data. The complaint alleges that VIZIO’s data tracking—which occurred without viewers’ informed consent—was unfair and deceptive, in violation of the FTC Act and New Jersey consumer protection laws."
The FTC complaint (Adobe PDF) named as defendants VIZIO, Inc. and VIZIO Inscape Services, LLC, its wholly-owned subsidiary. VIZIO has designed and sold televisions in the United States since 2002, and has sold more than 11 million Internet-connected televisions since 2010. The complaint also mentioned:
"... the successor entity to Cognitive Media Services, Inc., which developed proprietary automated content recognition (“ACR”) software to detect the content on internet-connected televisions and monitors."
This merits emphasis: consumers who think they can watch DVDs or locally recorded content in the privacy of their homes without advertisers knowing it really can't, because the ACR software can easily identify, archive, and transmit information about that content. The complaint also explained:
"Through the ACR software, VIZIO’s televisions transmit information about what a consumer is watching on a second-by-second basis. Defendants’ ACR software captures information about a selection of pixels on the screen and sends that data to VIZIO servers, where it is uniquely matched to a database of publicly available television, movie, and commercial content. Defendants collect viewing data from cable or broadband service providers, set-top boxes, external streaming devices, DVD players, and over-the-air broadcasts... the ACR software captures up to 100 billion data points each day from more than 10 million VIZIO televisions. Defendants store this data indefinitely. Defendants’ ACR software also periodically collects other information about the television, including IP address, wired and wireless MAC addresses, WiFi signal strength, nearby WiFi access points, and other items."
That's impressive. The ACR software enabled VIZIO to know and collect information about other devices (e.g., computers, tablets, phones, printers) connected to your home WiFi network. Then, besides the money consumers paid for their VIZIO smart TVs, the company also made money by reselling the information it collected to third parties... probably data brokers and advertisers. You'd think that the company might lower the price of its smart TVs given that additional revenue stream, but I guess not.
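To make the ACR technique concrete, here is a highly simplified, hypothetical sketch of pixel-based content matching. The function names and the tiny database are made up for illustration; VIZIO's actual software is proprietary and far more sophisticated. The idea: sample a fixed set of pixel values from the current frame, hash them into a compact fingerprint, and look that fingerprint up in a database of known content.

```python
import hashlib

def frame_fingerprint(pixels, sample_positions):
    """Build a compact fingerprint from a small sample of pixel values.

    pixels: dict mapping (x, y) -> (r, g, b) color values
    sample_positions: fixed list of (x, y) coordinates to sample
    """
    sampled = bytes(channel for pos in sample_positions for channel in pixels[pos])
    return hashlib.sha256(sampled).hexdigest()

# Fixed pixel coordinates sampled from every frame (hypothetical).
SAMPLE_POSITIONS = [(0, 0), (10, 20), (30, 40)]

# A one-entry stand-in for the server-side database of known content.
known_frame = {(0, 0): (255, 0, 0), (10, 20): (0, 255, 0), (30, 40): (0, 0, 255)}
content_db = {frame_fingerprint(known_frame, SAMPLE_POSITIONS): "Commercial #1234"}

def identify(frame):
    """Match a frame's fingerprint against the content database."""
    return content_db.get(frame_fingerprint(frame, SAMPLE_POSITIONS), "unknown")

print(identify(known_frame))  # matches the database entry
```

Because only a handful of pixel values (or a short hash of them) needs to leave the television, a scheme like this can identify what's on screen second by second while transmitting very little data, which is consistent with the complaint's description of matching screen samples against a server-side database.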
Now, here is where VIZIO created problems for itself. According to the FTC complaint, the only on-screen notice some consumers received was a pop-up message that disappeared after 30 seconds; other consumers received no notice at all. 30 seconds? Really?! If a consumer left the room to grab a bite to eat or visit the bathroom for a bio break, they easily missed this pop-up message. And no notice at all? Neither approach is good. VIZIO released a statement about the settlement:
"VIZIO is pleased to reach this resolution with the FTC and the New Jersey Division of Consumer Affairs. Going forward, this resolution sets a new standard for best industry privacy practices for the collection and analysis of data collected from today’s internet-connected televisions and other home devices,” stated Jerry Huang, VIZIO General Counsel. “The ACR program never paired viewing data with personally identifiable information such as name or contact information, and the Commission did not allege or contend otherwise. Instead, as the Complaint notes, the practices challenged by the government related only to the use of viewing data in the ‘aggregate’ to create summary reports measuring viewing audiences or behaviors... the FTC has made clear that all smart TV makers should get people’s consent before collecting and sharing television viewing information and VIZIO now is leading the way,” concluded Huang."
Terms of the settlement agreement and the Court Order (Adobe PDF) require VIZIO to:
B. Obtain the consumer’s affirmative express consent (1) at the time the disclosure...
C. Provide instructions, at any time the consumer’s affirmative express consent is sought under Part II.B, for how the consumer may revoke consent to collection of Viewing Data.
D. For the purposes of this Order, “Prominently” means that a required disclosure is difficult to miss (i.e., easily noticeable) and easily understandable by ordinary consumers..."
The Order also requires that disclosure be visual, audible, in all formats which VIZIO uses, in easy-to-understand language, and not contradicted by any legal statements elsewhere. Terms of the settlement require VIZIO to pay $1.5 million to the FTC and $1.0 million to the New Jersey Division of Consumer Affairs (which includes a $915,940.00 civil penalty and $84,060.00 for attorneys’ fees and investigative costs). VIZIO will not have to pay $300,000 of the amount due to the New Jersey Division of Consumer Affairs if the company complies with the court order and does not engage in acts that violate the New Jersey Consumer Fraud Act (CFA) during the next five years.
Additional terms of the settlement agreement require VIZIO to: destroy information collected before March 1, 2016; establish and implement a privacy program; designate one or several employees responsible for that program; identify risks of internal processes that cause the company to collect consumer information it shouldn't; design and implement a program to address those risks; develop and implement processes to identify service providers that will comply with the privacy program; and hire an independent third party to audit the privacy program every two years.
I guess the FTC and New Jersey AG felt this level of specificity was necessary given VIZIO's past behaviors. Kudos to the FTC and to the New Jersey AG for enforcing and protecting consumers' privacy. Given the rapid pace of technological change and the complexity of today's devices, oversight is required. Consumers simply don't have the skills or resources to do these types of investigations.
What are your opinions of the VIZIO settlement?
The Association of National Advertisers (ANA) and 15 other cable, telecommunications, and advertising lobbies sent a letter on January 27, 2017 to key leaders in Congress urging them to repeal the broadband privacy rules the U.S. Federal Communications Commission (FCC) adopted in October 2016, which require Internet service providers (ISPs) to protect the privacy of their customers. The groups co-signing the letter with the ANA include: the American Cable Association, the Competitive Carriers Association, CTIA-The Wireless Association (formerly known as the Cellular Communications Industry Association), the Data & Marketing Association, the Internet Advertising Bureau, the U.S. Chamber of Commerce, the U.S. Telecom Association, and others.
The letter, available at the ANA site and here (Adobe PDF; 354.4k), explained the groups' reasoning:
"Unfortunately, in adopting new broadband privacy rules late last year, the Federal Communications Commission (“FCC”) took action that jeopardizes the vibrancy and success of the internet and the innovations the internet has and should continue to offer. While the FCC’s Order applies only to Internet Service Providers (“ISPs”), the onerous and unnecessary rules it adopted establish a very harmful precedent for the entire internet ecosystem. We therefore urge Congress to enact a resolution of disapproval pursuant to the Congressional Review Act (“CRA”) vitiating the Order.
Adopted on a party-line 3-2 vote just ten days before the Presidential election, over strenuous objections by the minority and strong concerns expressed by entities throughout the internet ecosystem, the new rules impose overly prescriptive online privacy and data security requirements that will conflict with established law, policy, and practice and cause consumer confusion... the FCC Order would create confusion and interfere with the
ability of consumers to receive customized services and capabilities they enjoy and be informed of new products and discount offers. Further, the Order would also result in consumers being bombarded with trivial data breach notifications."
Data breach notifications are trivial? After writing this blog for almost 10 years, I have learned they aren't. Consumers deserve to know when companies fail to protect their sensitive personal information. Most states have laws requiring breach notifications. It seems these advertising groups don't want to be held responsible or accountable.
For background on the CRA, one commentator explained:
"The Congressional Review Act (CRA) has only worked precisely one time as a way for Congress to undo an executive branch regulation... The CRA was passed in 1996 as part of then-Speaker Newt Gingrich's (R-Ga.) "Contract with America." While executive branch agencies can only issue regulations pursuant to statutes passed by Congress, Congress wanted to find a way to make it easier to overturn those regulations. Previously there was a process by which, if one house of Congress voted to overturn the regulation, it was invalidated. This procedure was ruled unconstitutional by the Supreme Court in 1983.
Congress was still able to overturn an executive branch regulation by passing a law. Passing a law is, of course, subject to filibusters in the Senate. We've learned that the filibuster in recent years has made it quite difficult to pass laws. The CRA created a period of 60 "session days" (days in which Congress is in session) during which Congress could use expedited procedures to overturn a regulation."
Also on January 27, several consumer privacy advocates sent a letter (Adobe PDF) to the same Congressional representatives. The letter, signed by 20 privacy advocates including the American Civil Liberties Union, the Center for Democracy and Technology, the Center for Media Justice, Consumers Union, the National Hispanic Media Coalition, the Privacy Rights Clearinghouse, and others, urged the Congressional representatives:
"... to oppose the use of the Congressional Review Act (CRA) to adopt a Resolution of Disapproval overturning the FCC’s broadband privacy order. That order implements the mandates in Section 222 of the 1996 Telecommunications Act, which an overwhelming, bipartisan majority of Congress enacted to protect telecommunications users’ privacy. The cable, telecom, wireless, and advertising lobbies request for CRA intervention is just another industry attempt to overturn rules that empower users and give them a say in how their private information may be used.
Not satisfied with trying to appeal the rules of the agency, industry lobbyists have asked Congress to punish internet users by way of restraining the FCC, when all the agency did was implement Congress’ own directive in the 1996 Act. This irresponsible, scorched-earth tactic is as harmful as it is hypocritical. If Congress were to take the industry up on its request, a Resolution of Disapproval could exempt internet service providers (ISPs) from any and all privacy rules at the FCC... It could also preclude the FCC from addressing any of the other issues in the privacy order like requiring data breach notification and from revisiting these issues as technology continues to evolve in the future... Without these rules, ISPs could use and disclose customer information at will. The result could be extensive harm caused by breaches or misuse of data.
Broadband ISPs, by virtue of their position as gatekeepers to everything on the internet, have a largely unencumbered view into their customers’ online communications. That includes the websites they visit, the videos they watch, and the messages they send. Even when that traffic is encrypted, ISPs can gather vast troves of valuable information on their users’ habits; but researchers have shown that much of the most sensitive information remains unencrypted. The FCC’s order simply restores people’s control over their personal information and lets them choose the terms on which ISPs can use it, share it, or sell it..."
The new FCC broadband privacy rules kept consumers in control of their online privacy. The rules featured opt-in requirements: ISPs may collect consumers' sensitive personal information only after gaining customers' explicit consent.
So, advertisers have finally stated clearly how much they care about protecting consumers' privacy. They really don't. They don't want any constraints upon their ability to collect and archive consumers' (your) sensitive personal information. During the 2016 presidential campaign, candidate and now President Donald Trump promised:
"One of the keys to unlocking growth is scaling-back years of disastrous regulations unilaterally imposed by our out-of-control bureaucracy. In 2015 alone, federal agencies issued over 3,300 final rules and regulations, up from 2,400 the prior year. Every year, over-regulation costs our economy $2 trillion dollars a year and reduces household wealth by almost $15,000 dollars. Mr. Trump has proposed a moratorium on new federal regulations that are not compelled by Congress or public safety, and will ask agency and department heads to identify all needless job-killing regulations and they will be removed... A complete regulatory overhaul will level the playing field for American workers and add trillions in new wealth to our economy – keeping companies here, expanding hiring and investment, and bringing thousands of new companies to our shores."
Are FCC rules protecting your privacy "over-regulation," "onerous and unnecessary?" Are FCC privacy rules keeping consumers in control over their sensitive personal information "disastrous?" Will the Trump administration side with corporate lobbies or consumers' privacy protections? We shall quickly see.
There is a clue as to what the answer to that question will be. President Trump has named Ajit Pai, a Republican member of the Federal Communications Commission, as the new FCC chair, replacing Tom Wheeler, the former chair and Democrat, who stepped down on Friday. This will also give the Republicans a majority on the FCC.
Pai is also an opponent of the net neutrality rules the FCC adopted, which basically say that consumers (and not ISPs) decide where they go on the Internet with their broadband connections. Republicans in Congress and lobby groups have long opposed net neutrality. In 2014, more than 100 tech firms urged the FCC to protect net neutrality. With a new President in the White House opposing regulations, some companies and lobby groups seem ready to undo these consumer protections.
What do you think?
For the holidays, many consumers gave or received devices for their homes that are WiFi-connected, often referred to as the "Internet of Things" (IoT). Those devices include Internet routers, security cameras, home security systems, and a variety of appliances and electronics: televisions, refrigerators, clothes washers, lighting, heating/cooling systems, toys, DVRs, and more. Residences outfitted with these devices are often referred to as "Smart Homes" or "Connected Homes."
Experts forecast 50 billion devices globally by 2020. Plus, utilities have already installed smart meters in homes that regularly transmit consumers' water/oil/gas usage to their utility providers. Protecting those devices against hackers is critical.
While the FTC has published guidelines for manufacturers of IoT devices, those guidelines aren't mandatory. The privacy threats of IoT devices are known, and researchers have warned about vulnerabilities in specific products.
To help consumers manage their WiFi-connected home devices, the U.S. Federal Trade Commission (FTC) announced a prize competition called the "IoT Home Inspector Challenge." The FTC will award the $25,000 top prize to the solution that best helps consumers protect their IoT devices against vulnerabilities and manage passwords (e.g., replace factory defaults) for all home devices. Up to three honorable mention prizes of $3,000 each are also available.
Consumers working individually, or in teams, can register and submit entries beginning March 1, 2017. The deadline for entries is May 22, 2017. Winners will be announced on July 27, 2017. To be considered, entries must meet the following criteria:
- Provide a technical solution, rather than a policy or legal solution
- Work on home IoT devices that currently exist on the market
- Protect information it collects both in transit and at rest
- Explain how the tool or solution will avoid or mitigate any additional security risks that the tool itself might introduce into the consumer’s home (for example, via software upgrades)
The judges will rate each entry based upon how well it addresses the following four components:
- Recognize what IoT devices are operating in the consumer’s home. This may be automatic or provide instructions for consumer input,
- Determine what software version is already on those IoT devices. Again, this may be automatic or provide instructions for consumer input,
- Determine the latest software version each home IoT device should have, and
- Assist with updates.
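The flavor of one judging component — spotting devices still running factory-default credentials — can be illustrated with a short sketch. This is a toy example, not a contest entry; the vendor names, models, and default-password list below are all hypothetical:

```python
# Toy sketch: flag home IoT devices still using factory-default credentials.
# The inventory and the defaults table are hypothetical examples.

KNOWN_DEFAULTS = {
    # (vendor, model) -> factory-default (username, password)
    ("Acme", "CamPro 100"): ("admin", "admin"),
    ("Acme", "RouterMax"): ("admin", "password"),
}

def audit_devices(devices):
    """Return the names of devices still configured with factory defaults."""
    flagged = []
    for dev in devices:
        default = KNOWN_DEFAULTS.get((dev["vendor"], dev["model"]))
        if default and (dev["username"], dev["password"]) == default:
            flagged.append(dev["name"])
    return flagged

home = [
    {"name": "porch camera", "vendor": "Acme", "model": "CamPro 100",
     "username": "admin", "password": "admin"},
    {"name": "router", "vendor": "Acme", "model": "RouterMax",
     "username": "admin", "password": "x7!k2#qq"},
]

print(audit_devices(home))  # ['porch camera']
```

A real entry would also have to discover the devices on the home network automatically and check their firmware versions, which is exactly why the FTC framed this as a challenge.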
Visit the FTC IoT Home Inspector Challenge site for complete details about the competition, including contest rules, judges, FAQs, and the registration/submission process.
You may not know that hedge funds, in both the United Kingdom and in the United States, buy and sell a variety of information from data brokers: mobile app purchases, credit card purchases, posts at social networking sites, and lots more. You can bet that a lot of that mobile information includes geo-location data. The problem: consumers' privacy isn't protected consistently.
The industry claims the information sold is anonymous (e.g., it doesn't identify specific persons), but researchers have found it easy to de-anonymize the information. The Financial Times reported:
"The “alternative data” industry, which sells information such as app downloads and credit card purchases to investment groups, is failing to adequately erase personal details before sharing the material... big data is seen as an increasingly attractive source of information for asset managers seeking a vital investment edge, with data providers selling everything from social media chatter and emailed receipts to federal lobbying data and even satellite images from space..."
One part of the privacy problem:
“The vendors claim to strip out all the personal information, but we occasionally find phone numbers, zip codes and so on,” said Matthew Granade, chief market intelligence officer at Steven Cohen’s Point72. “It’s a big enough deal that we have a couple of full-time tech people wash the data ourselves.” The head of another major hedge fund said that even when personal information had been scrubbed from a data set, it was far too easy to restore..."
A second part of the privacy problem:
“... there is no overarching US privacy law to protect consumers, with standards set individually by different states, industries and even companies, according to Albert Gidari, director of privacy at the Stanford Center for Internet and Society..."
The third part of the privacy problem: consumers are too willing to trade personal information for convenience.
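Why is "anonymized" data so easy to restore? Because a handful of quasi-identifiers (ZIP code, birth year, sex) left in a record can often be matched against a named public data set. The sketch below is a minimal illustration of that linkage technique; all records in it are fabricated:

```python
# Minimal illustration of re-identification by linking quasi-identifiers
# in an "anonymized" data set to a named public record (fabricated data).

def reidentify(anon_records, public_records):
    """Match anonymized rows to named rows on (zip, birth_year, sex)."""
    index = {}
    for person in public_records:
        key = (person["zip"], person["birth_year"], person["sex"])
        index.setdefault(key, []).append(person["name"])
    matches = {}
    for row in anon_records:
        key = (row["zip"], row["birth_year"], row["sex"])
        names = index.get(key, [])
        if len(names) == 1:  # a unique match re-identifies the row
            matches[row["purchase"]] = names[0]
    return matches

anon = [{"zip": "02134", "birth_year": 1971, "sex": "F",
         "purchase": "clinic visit"}]
voter_roll = [
    {"name": "J. Smith", "zip": "02134", "birth_year": 1971, "sex": "F"},
    {"name": "A. Jones", "zip": "02134", "birth_year": 1985, "sex": "M"},
]

print(reidentify(anon, voter_roll))  # {'clinic visit': 'J. Smith'}
```

When vendors leave phone numbers and zip codes in the data, as the hedge fund executives quoted above admitted, this kind of join becomes trivial.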
Since the Snowden disclosures in 2013, there have been plenty of news reports about how technology companies have assisted the U.S. government with surveillance programs. Some of these activities included surveillance programs by the U.S. National Security Agency (NSA) that swept up innocent citizens, bulk collection of phone-call metadata, warrantless searches by the NSA of citizens' phone calls and emails, facial image collection, identification of corporate collaborators with NSA spying, fake cell phone towers (a/k/a 'stingrays') used by both federal government agencies and local police departments, and automated license plate readers to track drivers.
You may also remember, after Apple Computer's refusal to build a backdoor into its smartphones, the U.S. Federal Bureau of Investigation bought a hacking tool from a third party. Several tech companies built the reform government surveillance site, while others actively pursue "Surveillance Capitalism" business goals.
During the 2016 political campaign, candidate (and now President-elect) Donald Trump said he would require all Muslims in the United States to register. Mr. Trump's words matter greatly given his lack of government experience; his words are all voters had to rely upon.
So, The Intercept asked several technology companies a key question about the next logical step: whether or not they are willing to help build and implement a Muslim registry:
"Every American corporation, from the largest conglomerate to the smallest firm, should ask itself right now: Will we do business with the Trump administration to further its most extreme, draconian goals? Or will we resist? This question is perhaps most important for the country’s tech companies, which are particularly valuable partners for a budding authoritarian."
The companies queried included IBM, Microsoft, Google, Facebook, Twitter, and others. What's been the response? Well, IBM focused on other areas of collaboration:
"Shortly after the election, IBM CEO Ginni Rometty wrote a personal letter to President-elect Trump in which she offered her congratulations, and more importantly, the services of her company. The six different areas she identified as potential business opportunities between a Trump White House and IBM were all inoffensive and more or less mundane, but showed a disturbing willingness to sell technology to a man with open interest in the ways in which technology can be abused: Mosque surveillance, a “virtual wall” with Mexico, shutting down portions of the internet on command, and so forth."
The response from many other companies has mostly been crickets. So far, only executives at Twitter have flatly refused, and included with its reply a link to its blog post about developer policies:
"Recent reports about Twitter data being used for surveillance, however, have caused us great concern. As a company, our commitment to social justice is core to our mission and well established. And our policies in this area are long-standing. Using Twitter’s Public APIs or data products to track or profile protesters and activists is absolutely unacceptable and prohibited.
To be clear: We prohibit developers using the Public APIs and Gnip data products from allowing law enforcement — or any other entity — to use Twitter data for surveillance purposes. Period. The fact that our Public APIs and Gnip data products provide information that people choose to share publicly does not change our policies in this area. And if developers violate our policies, we will take appropriate action, which can include suspension and termination of access to Twitter’s Public APIs and data products.
We have an internal process to review use cases for Gnip data products when new developers are onboarded and, where appropriate, we may reject all or part of a requested use case..."
Meanwhile, one news report described how seriously some Trump supporters take the registry idea:
"A prominent supporter of Donald J. Trump drew concern and condemnation from advocates for Muslims’ rights on Wednesday after he cited World War II-era Japanese-American internment camps as a “precedent” for an immigrant registry suggested by a member of the president-elect’s transition team. The supporter, Carl Higbie, a former spokesman for Great America PAC, an independent fund-raising committee, made the comments in an appearance on “The Kelly File” on Fox News...
“We’ve done it based on race, we’ve done it based on religion, we’ve done it based on region,” Mr. Higbie said. “We’ve done it with Iran back — back a while ago. We did it during World War II with Japanese.”
You can read the replies from nine technology companies at the Intercept site. Will other companies besides Twitter show that they have a spine? Whether or not such a registry ultimately violates the U.S. Constitution, we will definitely hear a lot more about this subject in the near future.
A security firm has found a hidden feature that threatens the privacy of Apple iPhone and iCloud users. Forbes magazine reported:
"Whilst it was well-known that iCloud backups would store call logs, contacts and plenty of other valuable data, users should be concerned to learn that their communications records are consistently being sent to Apple servers without explicit permission, said Elcomsoft CEO Vladimir Katalov. Even if those backups are disabled, he added, the call logs continue making their way to the iCloud, Katalov said... All FaceTime calls are logged in the iCloud too, whilst as of iOS 10 incoming missed calls from apps like WhatsApp and Skype are uploaded..."
Reportedly, the feature is automatic and the only option for users wanting privacy is to not use Apple iCloud services. That's not user-friendly.
Should you switch from Apple iCloud to a commercial service? Privacy risks are not unique to Apple iCloud. Duane Morris LLP explained the risks of using cloud services such as Dropbox, SecuriSync, Citrix ShareFile, and Rackspace:
"Users of electronic file sharing and storage service providers are vulnerable to hacking... Dropbox as just one example: If a hacker was to get their hands on your encryption key, which is possible since Dropbox stores the keys for all of its users, hackers can then steal your personal information stored on Dropbox. Just recently, Dropbox reported that more than 68 million users’ email addresses and passwords were hacked and leaked onto the Internet... potentially even more concerning is the fact that because these service providers own their own servers, they also own any information residing on them. Hence, they can legally access any data on their servers at any time. Additionally, many of these companies house their servers outside of the United States, which means the use, operation, content and security of such servers may not be protected by U.S. law. Furthermore, consider the policies regarding the sharing of your information with third parties. Among others, Dropbox has said that if subpoenaed, it will voluntarily disclose your information to a third party, such as the Internal Revenue Service."
Regular readers of this blog know what that means. Many government entities besides the IRS, such as law enforcement and intelligence agencies, issue subpoenas.
This highlights the double-edged sword of syncing and file-sharing across multiple devices (e.g., phone, laptop, desktop, tablet). Sure, it is a huge benefit to have all of your files, music, videos, contacts, and data easily and conveniently available regardless of which device you use. Along with that benefit come downside privacy and security risks: data stored in cloud services is vulnerable to hacking and subject to government warrants, subpoenas, and court actions. As Duane Morris LLP emphasized, it doesn't matter whether your data is encrypted or not.
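One partial defense is to encrypt files yourself before uploading them, so the provider never holds your key (unlike Dropbox, which stores keys for its users). The sketch below is a toy illustration of that principle using only the standard library; it is not production cryptography — for real use, rely on a vetted library rather than a homemade cipher:

```python
# Toy illustration of client-side encryption: the provider stores only
# ciphertext and never sees the key. NOT production crypto -- use a
# vetted cryptography library for real data.
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key||nonce||counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt locally; only this output is ever uploaded."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

key = secrets.token_bytes(32)        # stays on your device, never uploaded
secret_note = b"my tax documents"
uploaded = encrypt(key, secret_note)  # all the cloud provider ever sees
print(decrypt(key, uploaded))         # b'my tax documents'
```

The design point is key custody: a subpoena served on the provider yields only ciphertext, because the key never left your device. Of course, you then bear the burden of never losing that key.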
Also, Forbes magazine reported:
"Katalov believes automated iCloud storage of up-to-date logs would be beneficial for law enforcement wanting to get access to valuable iPhone data. And, he claimed, Apple hadn’t properly disclosed just what data was being stored in the iCloud and, therefore, what information law enforcement could demand."
Well, law enforcement, intelligence agencies, and cyber-criminals now know what information to demand.
Security analysts recently discovered surveillance malware in some inexpensive smartphones that run the Android operating system (OS) software. The malware secretly transmits information about the device owner and usage to servers in China. The surveillance malware was installed in the phones' firmware. The New York Times reported:
"... you can get a smartphone with a high-definition display, fast data service and, according to security contractors, a secret feature: a backdoor that sends all your text messages to China every 72 hours. Security contractors recently discovered pre-installed software in some Android phones... International customers and users of disposable or prepaid phones are the people most affected by the software... The Chinese company that wrote the software, Shanghai Adups Technology Company, says its code runs on more than 700 million phones, cars and other smart devices. One American phone manufacturer, BLU Products, said that 120,000 of its phones had been affected and that it had updated the software to eliminate the feature."
Adups describes itself as a company that:
"... provides professional Firmware Over-The-Air (FOTA) update services. The company offers a cloud-based service, which includes cloud hosts and CDN service, as well as allows manufacturers to update all their device models. It serves smart device manufacturers, mobile operators, and semiconductor vendors worldwide."
Firmware is a special type of software stored in read-only memory (ROM) chips that operates a device, including how it controls, monitors, and manipulates data within the device. Kryptowire, a security firm, discovered the malware. The Kryptowire report identified:
"... several models of Android mobile devices that contained firmware that collected sensitive personal data about their users and transmitted this sensitive data to third-party servers without disclosure or the users' consent. These devices were available through major US-based online retailers (Amazon, BestBuy, for example)... These devices actively transmitted user and device information including the full-body of text messages, contact lists, call history with full telephone numbers, unique device identifiers including the International Mobile Subscriber Identity (IMSI) and the International Mobile Equipment Identity (IMEI). The firmware could target specific users and text messages matching remotely defined keywords. The firmware also collected and transmitted information about the use of applications installed on the monitored device, bypassed the Android permission model, executed remote commands with escalated (system) privileges, and was able to remotely reprogram the devices.
The firmware that shipped with the mobile devices and subsequent updates allowed for the remote installation of applications without the users' consent and, in some versions of the software, the transmission of fine-grained device location information... Our findings are based on both code and network analysis of the firmware. The user and device information was collected automatically and transmitted periodically without the users' consent or knowledge. The collected information was encrypted with multiple layers of encryption and then transmitted over secure web protocols to a server located in Shanghai. This software and behavior bypasses the detection of mobile anti-virus tools because they assume that software that ships with the device is not malware and thus, it is white-listed."
So, the malware was powerful, sophisticated, and impossible for consumers to detect.
This incident provides several reminders. First, there were efforts earlier this year by the U.S. Federal Bureau of Investigation (FBI) to force Apple to build "back doors" into its phones for law enforcement. It is unclear which specific law enforcement or intelligence services utilized the data streams produced by the surveillance malware, but it is probably wise to assume that the Ministry of State Security, China's intelligence agency, had or has access to them.
Second, the incident highlights supply chain concerns raised in 2015 about computer products manufactured in China. Third, the incident indicates how easily consumers' privacy can be compromised by data breaches during a product's supply chain: manufacturing, assembly, transport, and retail sale.
Fourth, the incident highlights Android phone security issues raised earlier this year. We know from prior reports that manufacturers and wireless carriers don't provide OS updates for all Android phones. Fifth, the incident highlights the need for automakers and software developers to ensure the security of both connected cars and driverless cars.
Sixth, the incident raises questions about how and what, if anything, President-elect Donald J. Trump and his incoming administration will do about this trade issue with China. The Trump-Pence campaign site stated about trade with China:
"5. Instruct the Treasury Secretary to label China a currency manipulator.
6. Instruct the U.S. Trade Representative to bring trade cases against China, both in this country and at the WTO. China's unfair subsidy behavior is prohibited by the terms of its entrance to the WTO.
7. Use every lawful presidential power to remedy trade disputes if China does not stop its illegal activities, including its theft of American trade secrets - including the application of tariffs consistent with Section 201 and 301 of the Trade Act of 1974 and Section 232 of the Trade Expansion Act of 1962..."
This incident places consumers in a difficult spot. According to the New York Times:
"Because Adups has not published a list of affected phones, it is not clear how users can determine whether their phones are vulnerable. “People who have some technical skills could,” Mr. Karygiannis, the Kryptowire vice president, said. “But the average consumer? No.” Ms. Lim [an attorney that represents Adups] said she did not know how customers could determine whether they were affected."
Until these supply-chain security issues get resolved, it is probably wise for consumers to inquire before purchase where their Android phone was made. There are plenty of customer service sites where existing Android phone owners can determine the country their device was made in. Example: Samsung phone info.
Should consumers avoid buying Android phones made in China, or Android phones with firmware made in China? That's a decision only you can make for yourself. Me? When I changed wireless carriers in July, I switched from an inexpensive Android phone I'd bought several years ago to an Apple iPhone.
What are your thoughts about the surveillance malware? Would you buy an Android phone?
The Facebook social networking site introduced on October 28, 2016 a new feature that provides its voting-age users with previews of ballot candidates and questions. The site presented users with the following ad:
Like other ads on the site, users can disable the ad. Users who select the "Preview Your Ballot" link will next see three pop-up pages which explain the new feature:
Then, users can preview their ballot based upon where they live, which includes national candidates running for office and ballot questions. To view local candidates running for office and local ballot questions, users must provide Facebook with their complete street address:
Within the new feature, users can preview information about each candidate: Issue Positions, Endorsements, Recent Posts, and Website. "Issue Positions" links to content within the candidate's Facebook page; the "Endorsements" and "Recent Posts" selections link to similar content. "Website" links to the candidate's external website. Issue Positions includes the topics you might expect: budget, civil rights, economy, education, energy, environment, foreign policy, guns, health, immigration, infrastructure, military, Social Security, taxes, terrorism, and more.
Why did Facebook introduce this new feature? According to a popup within the feature:
"You're seeing this because you may be in a state that has a voter registration deadline or election coming up. We want to help people have their voice heard in the elections this year, so we're showing this message to people who are old enough to vote - no matter who they support.
We send reminders about voting every now and then. If you'd rather not see these in the future, click or tap the in the top right corner of the reminder and select Hide Reminder, then Hide all voting reminders."
Facebook's October 28 announcement explained further:
"Voting is important... we’re encouraging civic participation. We want to make it easier for people who want to participate to do so, and to have a voice in the political process... Today, we’re introducing a new feature that shows you what’s on the ballot — from candidates to ballot initiatives. We also show you where the candidates stand on the issues... Not all states in America mail out sample ballots ahead of an election. This can make it challenging to find comprehensive information about the questions you’ll be expected to consider when you walk into the voting booth. Thanks to data gathered from election officials by the nonpartisan Center for Technology and Civic Life (CTCL), we can present you with a preview of the ballot you’ll receive on November 8. If you notice an issue with the CTCL data, we’ve built in a way for you to provide feedback and help correct the dataset."
Challenging to find information? What a load of bull. The Internet makes it easy to visit websites for candidates and ballot questions. Plus, the information is available in every state. Example: ballot information in Massachusetts is available at websites by the Secretary of the Commonwealth and the City of Boston. Sample ballots were available during the primaries, too. Every state in the Union has a Secretary of State whose website you should visit anyway for elections and other information. Find your state in this list.
I first saw Facebook's new Elections Ballot feature on November 2, 2016 -- five days after the announcement, and less than 6 days before the November 8 Elections Day. You'd think that Facebook would have introduced this feature sooner; ideally, as soon as the main parties had nominated their candidates. Facebook didn't. Not good. And, the feature's availability may be too late for early voters.
What else is happening with this new feature? Several items are worth mentioning. First, executives at Facebook are probably well aware that two-thirds of the site's users get their news at the site. This new feature is clearly an attempt to keep users within the Facebook bubble: increase the amount of time on site and the number of pages viewed within the site.
Second, the accuracy of the new feature is suspect. I have never shared my residential address with Facebook, so the elections feature displayed 4 questions when there are actually 5 where I live. The fifth question is a local ballot initiative. Users like me, who haven't provided street address information, may get a wrong impression of what's on their ballot -- if they fail to read the fine print. And, we know that too many consumers never read the fine print.
Third, the local candidates and ballot questions are a slick way for Facebook to force users to share their residential street address information. Fourth, the new feature is an opportunity to capture users' voting information. Of course, not the official ballots, but the next closest thing. Users can select which candidates are their Favorites and share that with their Friends: people, coworkers, classmates, family, neighbors, and others they are connected to at the site. Favoriting a candidate within this new feature seems like a pretty explicit and accurate proxy for an official ballot:
Fifth, armed with this ballot information about its users, Facebook can probably charge more to advertisers (e.g., political campaigns, political action committees, pollsters, data brokers) interested in purchasing information about voting populations and/or buying targeted ads at the site. Consider this report by BuzzFeed from November 2014:
"At some point in the next two years, the pollsters and ad makers who steer American presidential campaigns will be stumped: The nightly tracking polls are showing a dramatic swing in the opinions of the electorate, but neither of two typical factors — huge news or a major advertising buy — can explain it. They will, eventually, realize that the viral, mass conversation about politics on Facebook and other platforms has finally emerged as a third force in the core business of politics, mass persuasion.
Facebook is on the cusp — and I suspect 2016 will be the year this becomes clear — of replacing television advertising as the place where American elections are fought and won. The vast new network of some 185 million Americans opens the possibility, for instance, of a congressional candidate gaining traction without the expense of television, and of an inexpensive new viral populism. The way people share will shape the outcome of the presidential election."
It seems that day has arrived. Shape the conversation and outcome, indeed. It's all driven by data -- big data -- data mining.
Sixth, the new feature raises questions and issues for users. Should Facebook know your voting decisions? Does Facebook have a right to know your voting decisions? Has Facebook earned the right to know your voting decisions? Facebook is a money-making enterprise, so it will sell your information to as many other companies as possible. According to the October 28 announcement:
"How you vote is a personal matter, and we’ve taken steps to make sure that you have utmost control over your plan. After you make a selection, you have to choose who you want to be able to see it (“Only me” or “Friends”). For example, you may want to be private about your choice for president, but share with friends your pick for a congressional race or a ballot initiative."
The announcement's language confusingly refers to the Facebook feature as voting, when it isn't. Do all of your friends need to know your voting preferences? What about friends with Facebook profiles that are open to the general public? In the latter case, anybody wandering in can view your voting information. Is that what you really want?
Not me. What happens in the voting booth stays in the voting booth. I may express concerns on Facebook, but my final vote is private. No doubt, some consumers will share their voting preferences without considering the implications.
I visited the CTCL website and found it underwhelming and lacking key information to understand what this organization really is and does. Not good.
What are your opinions of Facebook's new elections and ballot feature?
Late last month, the U.S. Federal Communications Commission (FCC) adopted new privacy rules to require high-speed Internet service providers (ISPs) to protect the privacy of their customers. The FCC announcement explained the new privacy rules:
"Opt-in: ISPs are required to obtain affirmative “opt-in” consent from consumers to use and share sensitive information. The rules specify categories of information that are considered sensitive, which include precise geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history and the content of communications.
Opt-out: ISPs would be allowed to use and share non-sensitive information unless a customer “opts-out.” All other individually identifiable customer information – for example, email address or service tier information – would be considered non-sensitive and the use and sharing of that information would be subject to opt-out consent, consistent with consumer expectations.
Exceptions to consent requirements: Customer consent is inferred for certain purposes specified in the statute, including the provision of broadband service or billing and collection. For the use of this information, no additional customer consent is required beyond the creation of the customer-ISP relationship.
Transparency requirements that require ISPs to provide customers with clear, conspicuous and persistent notice about the information they collect, how it may be used and with whom it may be shared, as well as how customers can change their privacy preferences;
A requirement that broadband providers engage in reasonable data security practices and guidelines on steps ISPs should consider taking, such as implementing relevant industry best practices, providing appropriate oversight of security practices, implementing robust customer authentication tools, and proper disposal of data consistent with FTC best practices and the Consumer Privacy Bill of Rights.
Common-sense data breach notification requirements to encourage ISPs to protect the confidentiality of customer data, and to give consumers and law enforcement notice of failures to protect such information."
The new privacy rules prohibit “take-it-or-leave-it” offers, which means an ISP cannot refuse to serve customers who don’t consent to the use and sharing of their information for commercial purposes. The new rules also addressed the desire by ISPs to charge customers more fees for privacy. According to the FCC Fact Sheet:
"Recognizing that so-called “pay for privacy” offerings raise unique considerations, the rules require heightened disclosure for plans that provide discounts or other incentives in exchange for a customer’s express affirmative consent to the use and sharing of their personal information. The Commission will determine on a case-by-case basis the legitimacy of programs that relate service price to privacy protections. Consumers should not be forced to choose between paying inflated prices and maintaining their privacy."
ISPs like Comcast, AT&T, Charter, and Verizon opposed the stricter privacy rules. Google had argued for broader opt-out provisions and for privacy rules matching those already applied to websites, not stricter ones. The U.S. Chamber of Commerce, a political lobbying organization, opposed the stronger privacy rules the FCC proposed in March. Last week, Reuters reported:
"The final regulation is less restrictive than the initial plan proposed by FCC chairman Tom Wheeler in March and closer to rules imposed on websites by the Federal Trade Commission. Republican commissioners said the rules unfairly give websites the ability to harvest more data than service providers and dominate digital advertising."
FCC Chairman Wheeler released a statement on October 27 about the new broadband privacy rules:
"Last week, I visited Consumer Reports’ headquarters in Yonkers, New York, where I toured their product testing facility and met with senior leadership. When looking at a smart refrigerator that collects and shares data over the Internet, the discussion turned to privacy. Who would have ever imagined that what you have in your refrigerator would be information available to AT&T, Comcast, or whoever your network provider is?
The more our economy and our lives move online, the more information about us goes over our Internet Service Provider (ISP) – and the more consumers want to know how to protect their personal information in the digital age.
Today, the Commission takes a significant step to safeguard consumer privacy in this time of rapid technological change, as we adopt rules that will allow consumers to choose how their Internet Service Provider (ISP) uses and shares their personal data.
The bottom line is that it’s your data. How it’s used and shared should be your choice."
The last sentence cannot be over-emphasized. Consumers: it is our information -- property -- which ISPs use, sell, and make money with. Consumers should decide what data broadband and wireless providers share with marketers. Consumers must be in control.
And, there is more to come as the FCC oversees "pay-for-privacy" schemes by ISPs. So, thanks to the FCC and to Chairman Wheeler for fighting strongly for consumers' online privacy rights. What are your opinions of the new broadband privacy rules?
[Editor's Note: today's blog post is by guest author Cassie Phillips, a technology blogger who developed a special interest in cybersecurity after her webcam was hacked. While she’s interested to see how the Internet of Things changes how we use technology, she is very concerned about all the risks it poses.]
Many people and organizations have raised concerns about the potential risks related to the Internet of Things (IoT). It turns out that they were right to be concerned. Last month the France-based hosting provider OVH fell victim to an enormous distributed denial-of-service (DDoS) attack on the Minecraft servers that OVH was hosting.
DDoS attacks are attempts to make a resource (usually a website) inaccessible to its users through an inundation of requests, aiming to overburden the system. In the past, DDoS attacks were carried out by computers, with or without their owner’s consent. Hot Hardware reported:
“OVH was the victim of a wide-scale DDoS attack that was carried via a network of over 152,000 IoT devices… Of those IoT devices participating in the DDoS attack, they were primarily comprised of CCTV cameras and DVRs.”
Before the attack on OVH, there was another DDoS attack on prominent internet security researcher Brian Krebs' website. This attack was also carried out by IoT devices. Akamai Technologies Inc., a provider of security services worldwide for major companies, cut ties with Mr. Krebs because the DDoS attack on his website was so enormous that continuing to defend it became too costly. Josh Shaul, Akamai's vice president, said it was the worst DDoS attack the company had ever seen.
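The saturation mechanics behind attacks like these can be shown with a toy calculation. A server that can answer a fixed number of requests per second is overwhelmed once the botnet's combined request rate exceeds that capacity. The botnet size below matches the 152,000 devices reported in the OVH attack; the per-device rate and server capacity are hypothetical figures chosen only for illustration:

```python
# Toy model of DDoS saturation. Botnet size is from the OVH report;
# all other figures are hypothetical, for illustration only.
server_capacity = 100_000        # requests/second the server can answer (assumed)
requests_per_device = 2          # requests/second per hijacked device (assumed)
botnet_size = 152_000            # devices, per the reported OVH attack

attack_rate = requests_per_device * botnet_size   # 304,000 requests/second
legitimate_rate = 5_000                           # normal traffic (assumed)

# Any value above 1.0 means legitimate users start getting locked out.
overload = (attack_rate + legitimate_rate) / server_capacity
print(f"Server load: {overload:.1f}x capacity")   # prints "Server load: 3.1x capacity"
```

Even at a modest two requests per second per device, a botnet of this size generates triple the assumed server capacity -- which is why attackers don't need powerful machines, just lots of them.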
These broad attacks prove that the IoT does pose a significant security risk. And DDoS attacks are by no means the only security risks that the IoT presents. Let’s look at what the IoT is, the risks it presents and, most importantly, how to ensure that any IoT devices you use are secure.
What Is the Internet of Things?
The IoT is the idea that any device can be designed to be able to connect to the internet and other devices. These devices include mobile phones, washing machines, refrigerators, coffee makers, televisions, home thermostats, motion sensors, headphones, Barbie dolls and baby monitors. There is no limit except the imagination.
There are even buildings, cars, and health-related implants (such as pacemakers) that can connect to the internet and to each other. All of these devices can exchange information and collect data, creating a huge pool of information and an enormous network.
What Risks Does the Internet of Things Pose?
As mentioned above, the IoT poses a few risks and concerns. There are four key risks associated with the IoT, with the first being reliability. IoT devices are not necessarily reliable. While this may not be a crisis if the device in question is a refrigerator, it is deadly if devices such as cars fail or are hacked.
The second major risk related to the IoT is privacy. Each device in a network of the IoT can collect and share data. As consumers, we don’t always know who gets this data and what it is used for. The data will almost certainly be used to track consumers’ behavior, allowing companies to target each consumer with tailor-made advertising. While this data probably won’t always be used for nefarious purposes, it can be used in a way that violates our right to privacy. According to Buzzfeed:
"'We were sleeping in bed, and basically heard some music coming from the nursery, but then when we went into the room the music turned off,' said the anonymous mother. They tracked the IP address that had accessed their camera and discovered a website with 'thousands and thousands of pictures of cameras just like their own.' Anyone could use the site to access hacked cameras and monitors located in at least 15 different countries."
This leads to the third major risk associated with the IoT, namely security. Again, each of the IoT devices collects and transmits data. If these devices are hacked, criminals will have access to vast amounts of consumers' private information. Depending on the device, criminals can learn our routines, find out what valuables we keep in our homes, gain access to information about any security measures we use, and even collect sensitive information such as financial payment information.
Another security risk is the potential for hacking medical devices and implants. According to a report by the research and advisory firm Forrester, ransomware in medical devices is the single biggest cybersecurity threat for this year. Security researchers have already managed to hack into hospitals' networks, pacemakers, and other medical devices. This puts people's lives at risk.
The potential for cyberattacks is the fourth major risk associated with the IoT. Because all these devices are connected, they have the potential to spread malware across homes and entire companies. However, the greatest risk lies in criminals’ ability to use our IoT devices in massive cyberattacks, such as the DDoS attack on OVH. Widespread vulnerabilities are only a few missteps away, and that is a seriously concerning fact.
How to Protect Yourself When Using IoT Devices
Given the risks listed above, it's vital that we as consumers learn to protect our devices, our homes, and ourselves. The following actions are all essential to your security when using IoT devices:
- Carefully consider how much connectivity you need in your home and life. Then try to avoid any devices that unnecessarily connect to the internet. After all, you can always opt for a coffeemaker with a timer instead of one that connects to a mobile app on your phone.
- If you do decide to buy an IoT device, be sure to find one with the best security features possible.
- Read all the terms and conditions and privacy policies for any IoT device you intend to purchase. This will help you understand what data the device collects and what it does with the data.
- When you buy an IoT device, change its default password immediately. This also applies to any IoT devices that you already own. Be sure to use strong passwords and manage them effectively.
- Always keep the software on IoT devices up to date. Updates often contain essential bug fixes and security patches.
- If your IoT device supports security software, install it. Don’t forget that your mobile phone and tablet count as IoT devices!
- Use a reputable Virtual Private Network, such as one recommended by Secure Thoughts.
- If your IoT device allows it, use encryption technology.
- Switch off and unplug any IoT devices when you are not using them.
- If your IoT device uses location data unnecessarily, turn it off if possible.
- If your IoT device has a camera or monitor that you don’t think it needs, block the lens.
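The advice above about default passwords deserves special emphasis: the botnets behind the OVH and Krebs attacks were built largely from devices still using factory credentials. As a minimal sketch of how to generate a unique, strong password for each device, Python's standard `secrets` module (designed for cryptographic randomness) works well; the device names below are purely illustrative:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Illustrative only: one unique password per IoT device on the home network.
for device in ["router", "cctv-camera", "baby-monitor"]:
    print(device, "->", generate_password())
```

Note the use of `secrets.choice` rather than the `random` module: `random` is designed for simulations, not security, and its output can be predicted. Store the generated passwords in a password manager rather than reusing one across devices.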
While it would be best if security features were built into the design of IoT devices, that’s not always the case. So it’s crucial that you implement the security ideas discussed above. Hopefully, we’ll start seeing a move toward creating an international standard for all IoT devices in the future.
Have you had any bad experiences with IoT devices? How do you think the technology is progressing? Share your thoughts in the comments section below.