122 posts categorized "Behavioral Advertising"

UK Parliamentary Committee Issued Its Final Report on Disinformation And Fake News. Facebook And Six4Three Discussed

On February 18th, a United Kingdom (UK) parliamentary committee published its final report on disinformation and "fake news." The 109-page report by the Digital, Culture, Media and Sport (DCMS) Committee updates its interim report from July 2018.

The report covers many issues: political advertising (including "dark adverts" from unidentifiable sources), Brexit and UK elections, data breaches, privacy, and recommendations for UK regulators and government officials. The report's findings about the U.S.-based companies mentioned are worth understanding, since those companies' business practices affect consumers globally, including in the United States.

Issues Identified

First, the DCMS' final report built upon issues identified in its:

"... Interim Report: the definition, role and legal liabilities of social media platforms; data misuse and targeting, based around the Facebook, Cambridge Analytica and Aggregate IQ (AIQ) allegations, including evidence from the documents we obtained from Six4Three about Facebook’s knowledge of and participation in data-sharing; political campaigning; Russian influence in political campaigns; SCL influence in foreign elections; and digital literacy..."

The final report includes input from 23 "oral evidence sessions," more than 170 written submissions, interviews of at least 73 witnesses, and more than 4,350 questions asked at hearings. The DCMS Committee sought input from individuals, organizations, industry experts, and other governments. Some of the information sources:

"The Canadian Standing Committee on Access to Information, Privacy and Ethics published its report, “Democracy under threat: risks and solutions in the era of disinformation and data monopoly” in December 2018. The report highlights the Canadian Committee’s study of the breach of personal data involving Cambridge Analytica and Facebook, and broader issues concerning the use of personal data by social media companies and the way in which such companies are responsible for the spreading of misinformation and disinformation... The U.S. Senate Select Committee on Intelligence has an ongoing investigation into the extent of Russian interference in the 2016 U.S. elections. As a result of data sets provided by Facebook, Twitter and Google to the Intelligence Committee -- under its Technical Advisory Group -- two third-party reports were published in December 2018. New Knowledge, an information integrity company, published “The Tactics and Tropes of the Internet Research Agency,” which highlights the Internet Research Agency’s tactics and messages in manipulating and influencing Americans... The Computational Propaganda Research Project and Graphika published the second report, which looks at activities of known Internet Research Agency accounts, using Facebook, Instagram, Twitter and YouTube between 2013 and 2018, to impact US users"

Why Disinformation

Second, definitions matter. According to the DCMS Committee:

"We have even changed the title of our inquiry from “fake news” to “disinformation and ‘fake news’”, as the term ‘fake news’ has developed its own, loaded meaning. As we said in our Interim Report, ‘fake news’ has been used to describe content that a reader might dislike or disagree with... We were pleased that the UK Government accepted our view that the term ‘fake news’ is misleading, and instead sought to address the terms ‘disinformation’ and ‘misinformation'..."

Overall Recommendations

Summary recommendations from the report:

  1. "Compulsory Code of Ethics for tech companies overseen by independent regulator,
  2. Regulator given powers to launch legal action against companies breaching code,
  3. Government to reform current electoral communications laws and rules on overseas involvement in UK elections, and
  4. Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation"

Role And Liability Of Tech Companies

Regarding detailed observations and findings about the role and liability of tech companies, the report stated:

"Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites. We repeat the recommendation from our Interim Report that a new category of tech company is formulated, which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’. This approach would see the tech companies assume legal liability for content identified as harmful after it has been posted by users. We ask the Government to consider this new category of tech company..."

The UK Government and its regulators may adopt some, all, or none of the report's recommendations. More observations and findings in the report:

"... both social media companies and search engines use algorithms, or sequences of instructions, to personalize news and other content for users. The algorithms select content based on factors such as a user’s past online activity, social connections, and their location. The tech companies’ business models rely on revenue coming from the sale of adverts and, because the bottom line is profit, any form of content that increases profit will always be prioritized. Therefore, negative stories will always be prioritized by algorithms, as they are shared more frequently than positive stories... Just as information about the tech companies themselves needs to be more transparent, so does information about their algorithms. These can carry inherent biases, as a result of the way that they are developed by engineers... Monika Bickert, from Facebook, admitted that Facebook was concerned about “any type of bias, whether gender bias, racial bias or other forms of bias that could affect the way that work is done at our company. That includes working on algorithms.” Facebook should be taking a more active and urgent role in tackling such inherent biases..."

Based upon this, the report recommended that the UK's new Centre for Data Ethics and Innovation (CDEI) should play a key role as an advisor to the UK Government by continually analyzing and anticipating gaps in governance and regulation, suggesting best practices and corporate codes of conduct, and setting standards for artificial intelligence (AI) and related technologies.
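The report's point about engagement-driven prioritization can be made concrete with a small sketch. The following is a hypothetical illustration only -- the weights, field names, and scoring function are assumptions invented for this example, not any company's actual ranking algorithm -- but it shows how scoring purely on clicks and shares pushes the most-shared (often negative) content to the top:

```python
# Hypothetical sketch of engagement-driven ranking. The weights and
# fields are illustrative assumptions, not any company's real code.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    clicks: int              # past clicks by similar users
    shares: int              # shares drive further ad impressions
    matches_interests: bool  # inferred from the user's activity profile

def score(item: Item) -> float:
    s = 1.0 * item.clicks + 3.0 * item.shares  # engagement is all that counts
    if item.matches_interests:
        s *= 1.5  # personalization boost based on past activity and connections
    return s

feed = [
    Item("Calm policy explainer", clicks=120, shares=10, matches_interests=False),
    Item("Outrage-bait rumor", clicks=100, shares=90, matches_interests=True),
]

# The rumor outranks the explainer purely on engagement metrics;
# nothing in the score measures accuracy or harm.
for item in sorted(feed, key=score, reverse=True):
    print(f"{score(item):8.1f}  {item.title}")
```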

Inferred Data

The report also discussed a critical issue related to algorithms (emphasis added):

"... When Mark Zuckerberg gave evidence to Congress in April 2018, in the wake of the Cambridge Analytica scandal, he made the following claim: “You should have complete control over your data […] If we’re not communicating this clearly, that’s a big thing we should work on”. When asked who owns “the virtual you”, Zuckerberg replied that people themselves own all the “content” they upload, and can delete it at will. However, the advertising profile that Facebook builds up about users cannot be accessed, controlled or deleted by those users... In the UK, the protection of user data is covered by the General Data Protection Regulation (GDPR). However, ‘inferred’ data is not protected; this includes characteristics that may be inferred about a user not based on specific information they have shared, but through analysis of their data profile. This, for example, allows political parties to identify supporters on sites like Facebook, through the data profile matching and the ‘lookalike audience’ advertising targeting tool... Inferred data is therefore regarded by the ICO as personal data, which becomes a problem when users are told that they can own their own data, and that they have power of where that data goes and what it is used for..."

The distinction between uploaded and inferred data cannot be overemphasized. It is critical when evaluating tech companies' statements, policies (e.g., privacy, terms of use), and promises about what "data" users have control over. Wise consumers must insist upon clear definitions to avoid being misled or duped.

What might be an example of inferred data? Facebook's Ad Preferences feature comes to mind: it allows users to review and delete the "Interests" -- advertising categories -- Facebook assigns to each user's profile. (The service's algorithms assign Interests based upon the groups/pages/events/advertisements users "Liked" or clicked on, posts submitted, posts commented upon, and more.) These "Interests" are inferred data, since Facebook assigned them and users didn't.

In fact, Facebook doesn't notify its users when it assigns new Interests. It just does it. And, Facebook can assign an Interest whether a user interacted with an item once or many times. How relevant is an Interest assigned after a single interaction, "Like," or click? Most people would say: not relevant. So, does the Interests list assigned to users' profiles accurately describe users? Do Facebook users own the Interests lists assigned to their profiles? Any control Facebook users have seems minimal. Why? Users can delete Interests assigned to their profiles, but they cannot stop Facebook from applying new Interests, nor prevent it from re-applying Interests previously deleted. And deleting Interests doesn't reduce the number of ads users see on Facebook.

The only way to know what Interests have been assigned is for Facebook users to visit the Ad Preferences section of their profiles and browse the list. Depending upon how frequently a person uses Facebook, it may be necessary to prune the Interests list at least once monthly -- a cumbersome and time-consuming task, probably designed that way to discourage reviews and pruning. And that's just one example of inferred data. There are probably plenty more, and as the report emphasizes, users don't have access to all of the inferred data associated with their profiles.
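To illustrate how such inferred data could be produced, here is a minimal, hypothetical sketch; the event format, category names, and one-interaction threshold are assumptions made for illustration, not Facebook's actual logic:

```python
# Hypothetical sketch of interest inference from an interaction log.
# NOT Facebook's actual logic; names and thresholds are assumptions.
from collections import Counter

# Each event is (action, category), derived from a page, group, or ad.
activity_log = [
    ("like", "Cooking"),
    ("click", "Cooking"),
    ("comment", "Travel"),
    ("click", "Mortgages"),  # a single stray click...
]

def infer_interests(log):
    counts = Counter(category for _action, category in log)
    # One interaction is enough to assign an "Interest" -- which is why
    # a single click can attach a category to a profile.
    return {category for category, n in counts.items() if n >= 1}

profile_interests = infer_interests(activity_log)
print(sorted(profile_interests))  # ['Cooking', 'Mortgages', 'Travel']

# The user can delete an entry, but nothing stops the system from
# re-adding it after the next click:
profile_interests.discard("Mortgages")
profile_interests |= infer_interests([("click", "Mortgages")])
print(sorted(profile_interests))  # 'Mortgages' is back
```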

Now, back to the report. To fix problems with inferred data, the DCMS recommended:

"We support the recommendation from the ICO that inferred data should be as protected under the law as personal information. Protections of privacy law should be extended beyond personal information to include models used to make inferences about an individual. We recommend that the Government studies the way in which the protections of privacy law can be expanded to include models that are used to make inferences about individuals, in particular during political campaigning. This will ensure that inferences about individuals are treated as importantly as individuals’ personal information."

Business Practices At Facebook

Next, the DCMS Committee's report said plenty about Facebook, its management style, and executives (emphasis added):

"Despite all the apologies for past mistakes that Facebook has made, it still seems unwilling to be properly scrutinized... Ashkan Soltani, an independent researcher and consultant, and former Chief Technologist to the US Federal Trade Commission (FTC), called into question Facebook’s willingness to be regulated... He discussed the California Consumer Privacy Act, which Facebook supported in public, but lobbied against, behind the scenes... By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both the UK Parliament and the ‘International Grand Committee’, involving members from nine legislatures from around the world. The management structure of Facebook is opaque to those outside the business and this seemed to be designed to conceal knowledge of and responsibility for specific decisions. Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which -- unsurprisingly -- failed to address all of our questions. We are left in no doubt that this strategy was deliberate."

So, based upon Facebook's actions (or lack thereof), the DCMS concluded that Facebook executives intentionally ducked and dodged issues and questions.

While discussing data use and targeting, the report said more about data breaches and Facebook:

"The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests..."

So, internal management failed. That's not all. After a detailed review of the GSR/Cambridge Analytica breach and Facebook's 2011 Consent Decree with the U.S. Federal Trade Commission (FTC), the DCMS Committee concluded (emphasis and text link added):

"The Cambridge Analytica scandal was facilitated by Facebook’s policies. If it had fully complied with the FTC settlement, it would not have happened. The FTC Complaint of 2011 ruled against Facebook -- for not protecting users’ data and for letting app developers gain as much access to user data as they liked, without restraint -- and stated that Facebook built their company in a way that made data abuses easy. When asked about Facebook’s failure to act on the FTC’s complaint, Elizabeth Denham, the Information Commissioner, told us: “I am very disappointed that Facebook, being such an innovative company, could not have put more focus, attention and resources into protecting people’s data”. We are equally disappointed."

Wow! Not good. There's more:

"... a current court case at the San Mateo Superior Court in California also concerns Facebook’s data practices. It is alleged that Facebook violated the privacy of US citizens by actively exploiting its privacy policy... The published ‘corrected memorandum of points and authorities to defendants’ special motions to strike’, by the complainant in the case, the U.S.-based app developer Six4Three, describes the allegations against Facebook; that Facebook used its users’ data to persuade app developers to create platforms on its system, by promising access to users’ data, including access to data of users’ friends. The case also alleges that those developers that became successful were targeted and ordered to pay money to Facebook... Six4Three lodged its original case in 2015, after Facebook removed developers’ access to friends’ data, including its own. The DCMS Committee took the unusual, but lawful, step of obtaining these documents, which spanned between 2012 and 2014... Since we published these sealed documents, on 14 January 2019 another court agreed to unseal 135 pages of internal Facebook memos, strategies and employee emails from between 2012 and 2014, connected with Facebook’s inappropriate profiting from business transactions with children. A New York Times investigation published in December 2018 based on internal Facebook documents also revealed that the company had offered preferential access to users data to other major technology companies, including Microsoft, Amazon and Spotify."

"We believed that our publishing the documents was in the public interest and would also be of interest to regulatory bodies... The documents highlight Facebook’s aggressive action against certain apps, including denying them access to data that they were originally promised. They highlight the link between friends’ data and the financial value of the developers’ relationship with Facebook. The main issues concern: ‘white lists’; the value of friends’ data; reciprocity; the sharing of data of users owning Android phones..."

You can read the report's detailed descriptions of those issues. A summary: a) Facebook allegedly used promises of access to users' data to lure developers (often by overriding Facebook users' privacy settings); b) some developers got priority treatment based upon unclear criteria; c) developers who didn't spend enough money with Facebook were denied access to data previously promised; d) Facebook's reciprocity clause demanded that developers also share their users' data with Facebook; e) Facebook's mobile app for Android OS phone users collected far more data about users, allegedly without consent, than users were told; and f) Facebook allegedly targeted certain app developers (emphasis added):

"We received evidence that showed that Facebook not only targeted developers to increase revenue, but also sought to switch off apps where it considered them to be in competition or operating in a lucrative areas of its platform and vulnerable to takeover. Since 1970, the US has possessed high-profile federal legislation, the Racketeer Influenced and Corrupt Organizations Act (RICO); and many individual states have since adopted similar laws. Originally aimed at tackling organized crime syndicates, it has also been used in business cases and has provisions for civil action for damages in RICO-covered offenses... Despite specific requests, Facebook has not provided us with one example of a business excluded from its platform because of serious data breaches. We believe that is because it only ever takes action when breaches become public. We consider that data transfer for value is Facebook’s business model and that Mark Zuckerberg’s statement that “we’ve never sold anyone’s data” is simply untrue.” The evidence that we obtained from the Six4Three court documents indicates that Facebook was willing to override its users’ privacy settings in order to transfer data to some app developers, to charge high prices in advertising to some developers, for the exchange of that data, and to starve some developers—such as Six4Three—of that data, thereby causing them to lose their business. It seems clear that Facebook was, at the very least, in violation of its Federal Trade Commission settlement."

"The Information Commissioner told the Committee that Facebook needs to significantly change its business model and its practices to maintain trust. From the documents we received from Six4Three, it is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws. The ICO should carry out a detailed investigation into the practices of the Facebook Platform, its use of users’ and users’ friends’ data, and the use of ‘reciprocity’ of the sharing of data."

The Information Commissioner's Office (ICO) is one of the regulatory agencies within the UK. So, the Committee concluded that Facebook's real business model is, "data transfer for value" -- in other words: have money, get access to data (regardless of Facebook users' privacy settings).

One quickly gets the impression that Facebook acted like a monopoly in its treatment of both users and developers... or worse, like organized crime. The report concluded (emphasis added):

"The Competitions and Market Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media. The Committee made this recommendation its interim report, and we are pleased that it has also been supported in the independent Cairncross Report commissioned by the government and published in February 2019. Given the contents of the Six4Three documents that we have published, it should also investigate whether Facebook specifically has been involved in any anti-competitive practices and conduct a review of Facebook’s business practices towards other developers, to decide whether Facebook is unfairly using its dominant market position in social media to decide which businesses should succeed or fail... Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law."

The DCMS Committee's report also discussed findings from the Cairncross Report. In summary, Damian Collins MP, Chair of the DCMS Committee, said:

“... we cannot delay any longer. Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalized ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms we use everyday. Much of this is directed from agencies working in foreign countries, including Russia... Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers... We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct..."

So, the report seems extensive, comprehensive, and detailed. Read the DCMS Committee's announcement, and/or download the full DCMS Committee report (Adobe PDF format, 3,507 kilobytes).

One can assume that governments' intelligence and spy agencies will continue to do what they've always done: collect data about targets and adversaries, and use disinformation and other tools to attempt to meddle in other governments' activities. It is clear that social media makes these tasks far easier than before. The DCMS Committee's report provided recommendations about what the UK Government's response should be. Other countries' governments face similar decisions about their responses, if any, to the threats.

Given the data in the DCMS report, it will be interesting to see how the FTC and lawmakers in the United States respond. If increased regulation of social media results, tech companies arguably have only themselves to blame. What do you think?


Survey: Users Don't Understand Facebook's Advertising System. Some Disagree With Its Classifications

Most people know that many companies collect data about their online activities. Based upon the data collected, companies classify users for a variety of reasons and purposes. Do users agree with these classifications? Do the classifications accurately describe users' habits, interests, and activities?

To answer these questions, the Pew Research Center surveyed users of Facebook. Why Facebook? Besides being the most popular social media platform in the United States, it collects:

"... a wide variety of data about their users’ behaviors. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to allow advertisers to target ads... But how well do Americans understand these algorithm-driven classification systems, and how much do they think their lives line up with what gets reported about them?"

The findings are significant. First:

"Facebook makes it relatively easy for users to find out how the site’s algorithm has categorized their interests via a “Your ad preferences” page. Overall, however, 74% of Facebook users say they did not know that this list of their traits and interests existed until they were directed to their page as part of this study."

So, almost three-quarters of Facebook users surveyed didn't know what data Facebook had collected about them, nor how to view it, edit it, or opt out of the ad-targeting classifications. According to Wired magazine, Facebook's "Your Ad Preferences" page:

"... can be hard to understand if you haven’t looked at the page before. At the top, Facebook displays “Your interests.” These groupings are assigned based on your behavior on the platform and can be used by marketers to target you with ads. They can include fairly straightforward subjects, like “Netflix,” “Graduate school,” and “Entrepreneurship,” but also more bizarre ones, like “Everything” and “Authority.” Facebook has generated an enormous number of these categories for its users. ProPublica alone has collected over 50,000, including those only marketers can see..."

Now, back to the Pew survey. After survey participants viewed their Ad Preferences page:

"A majority of users (59%) say these categories reflect their real-life interests, while 27% say they are not very or not at all accurate in describing them. And once shown how the platform classifies their interests, roughly half of Facebook users (51%) say they are not comfortable that the company created such a list."

So, about half of persons surveyed use a site whose data collection they are uncomfortable with. Not good. Second, substantial groups said the classifications by Facebook were not accurate:

"... about half of Facebook users (51%) are assigned a political “affinity” by the site. Among those who are assigned a political category by the site, 73% say the platform’s categorization of their politics is very or somewhat accurate, while 27% say it describes them not very or not at all accurately. Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately..."

So, significant numbers of users disagree with the political classifications Facebook assigned to their profiles. Third, it's not only politics:

"... Facebook also lists a category called “multicultural affinity”... this listing is meant to designate a user’s “affinity” with various racial and ethnic groups, rather than assign them to groups reflecting their actual race or ethnic background. Only about a fifth of Facebook users (21%) say they are listed as having a “multicultural affinity.” Overall, 60% of users who are assigned a multicultural affinity category say they do in fact have a very or somewhat strong affinity for the group to which they are assigned, while 37% say their affinity for that group is not particularly strong. Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them."

The survey included a nationally representative sample of 963 Facebook users ages 18 and older from the United States. The survey was conducted September 4 to October 1, 2018. Read the entire survey at the Pew Research Center site.

What can consumers conclude from this survey? Social media users should understand that all social sites, and especially mobile apps, collect data about you, and then make judgements... classifications... about you. (Remember, some Samsung phone owners were unable to delete the Facebook mobile app and other preinstalled apps. And, everyone wants your geolocation data.) Use any tools the sites provide to edit or adjust your ad preferences to match your interests. Adjust the privacy settings on your profile to limit data sharing as much as possible.

Last, an important reminder. While Facebook users can edit their ad preferences and can opt out of the ad-targeting classifications, they cannot completely avoid ads. Facebook will still display less-targeted ads. That is simply Facebook being Facebook: making money. The same probably applies to other social sites, too.

What are your opinions of the survey's findings?


Google Fined 50 Million Euros For Violations Of New European Privacy Law

Google has been fined 50 million Euros (about U.S. $57 million) under the new European privacy law for failing to properly disclose to users how their data is collected and used for targeted advertising. The European Union's General Data Protection Regulation (GDPR), which went into effect in May 2018, gives EU residents more control over their information and how companies use it.

After receiving two complaints last year from privacy-rights groups, France's National Data Protection Commission (CNIL) announced earlier this month:

"... CNIL carried out online inspections in September 2018. The aim was to verify the compliance of the processing operations implemented by GOOGLE with the French Data Protection Act and the GDPR by analysing the browsing pattern of a user and the documents he or she can have access, when creating a GOOGLE account during the configuration of a mobile equipment using Android. On the basis of the inspections carried out, the CNIL’s restricted committee responsible for examining breaches of the Data Protection Act observed two types of breaches of the GDPR."

The first violation involved transparency failures:

"... information provided by GOOGLE is not easily accessible for users. Indeed, the general structure of the information chosen by the company does not enable to comply with the Regulation. Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information. The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions... some information is not always clear nor comprehensive. Users are not able to fully understand the extent of the processing operations carried out by GOOGLE. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner..."

So, important information is buried and scattered across several documents, making it difficult for users to access and understand. The second violation involved the legal basis for personalized ads processing:

"... GOOGLE states that it obtains the user’s consent to process data for ads personalization purposes. However, the restricted committee considers that the consent is not validly obtained for two reasons. First, the restricted committee observes that the users’ consent is not sufficiently informed. The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent. For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Playstore, Google pictures, etc.) and therefore of the amount of data processed and combined."

"[Second], the restricted committee observes that the collected consent is neither “specific” nor “unambiguous.” When an account is created, the user can admittedly modify some options associated to the account by clicking on the button « More options », accessible above the button « Create Account ». It is notably possible to configure the display of personalized ads. That does not mean that the GDPR is respected. Indeed, the user not only has to click on the button “More options” to access the configuration, but the display of the ads personalization is moreover pre-ticked. However, as provided by the GDPR, consent is “unambiguous” only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance). Finally, before creating an account, the user is asked to tick the boxes « I agree to Google’s Terms of Service» and « I agree to the processing of my information as described above and further explained in the Privacy Policy» in order to create the account. Therefore, the user gives his or her consent in full, for all the processing operations purposes carried out by GOOGLE based on this consent (ads personalization, speech recognition, etc.). However, the GDPR provides that the consent is “specific” only if it is given distinctly for each purpose."

So, not only is important information buried and scattered across multiple documents (again), but also critical boxes for users to give consent are pre-checked when they shouldn't be.
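To illustrate the distinction CNIL draws, here is a minimal sketch of what "specific" and "unambiguous" consent capture might look like; the purpose names and data model are illustrative assumptions, not Google's implementation or the regulation's text:

```python
# Hypothetical sketch of GDPR-style consent capture. Purpose names
# and the data model are assumptions for illustration only.
PURPOSES = ["ads_personalization", "speech_recognition", "location_history"]

def new_consent_form() -> dict:
    # "Unambiguous": every box starts un-ticked; silence is not consent.
    return {purpose: False for purpose in PURPOSES}

def record_choice(form: dict, purpose: str, granted: bool) -> None:
    # "Specific": each purpose is granted (or refused) distinctly;
    # there is no single "I agree to everything" switch.
    form[purpose] = granted

def may_process(form: dict, purpose: str) -> bool:
    return form.get(purpose, False)

form = new_consent_form()
record_choice(form, "ads_personalization", True)

print(may_process(form, "ads_personalization"))  # True: a clear affirmative act
print(may_process(form, "speech_recognition"))   # False: never pre-ticked
```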

CNIL explained its reasons for the massive fine:

"The amount decided, and the publicity of the fine, are justified by the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent. Despite the measures implemented by GOOGLE (documentation and configuration tools), the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations... Moreover, the violations are continuous breaches of the Regulation as they are still observed to date. It is not a one-off, time-limited, infringement..."

This is the largest fine, so far, under GDPR laws. Reportedly, Google will appeal the fine:

"We've worked hard to create a GDPR consent process for personalised ads that is as transparent and straightforward as possible, based on regulatory guidance and user experience testing... We're also concerned about the impact of this ruling on publishers, original content creators and tech companies in Europe and beyond... For all these reasons, we've now decided to appeal."

This is not the first EU fine for Google. CNet reported:

"Google is no stranger to fines under EU laws. It's currently awaiting the outcome of yet another antitrust investigation -- after already being slapped with a $5 billion fine last year for anticompetitive Android practices and a $2.7 billion fine in 2017 over Google Shopping."


Companies Want Your Location Data. Recent Examples: The Weather Channel And Burger King

It is easy to find examples where companies use mobile apps to collect consumers' real-time GPS location data, so they can archive and resell that information later for additional profits. First, ExpressVPN reported:

"The city of Los Angeles is suing the Weather Company, a subsidiary of IBM, for secretly mining and selling user location data with the extremely popular Weather Channel App. Stating that the app unfairly manipulates users into enabling their location settings for more accurate weather reports, the lawsuit affirms that the app collects and then sells this data to third-party companies... Citing a recent investigation by The New York Times that revealed more than 75 companies silently collecting location data (if you haven’t seen it yet, it’s worth a read), the lawsuit is basing its case on California’s Unfair Competition Law... the California Consumer Privacy Act, which is set to go into effect in 2020, would make it harder for companies to blindly profit off customer data... This lawsuit hopes to fine the Weather Company up to $2,500 for each violation of the Unfair Competition Law. With more than 200 million downloads and a reported 45+ million users..."

Long-term readers remember that a data breach in 2007 at IBM Inc. prompted this blog. It's not only internet service providers which collect consumers' location data. Advertisers, retailers, and data brokers want it, too.

Second, Burger King last month ran a national "Whopper Detour" promotion which offered customers a one-cent Whopper burger if they went near a competitor's store. News 5, the ABC News affiliate in Cleveland, reported:

"If you download the Burger King mobile app and drive to a McDonald’s store, you can get the penny burger until December 12, 2018, according to the fast-food chain. You must be within 600 feet of a McDonald's to claim your discount, and no, McDonald's will not serve you a Whopper — you'll have to order the sandwich in the Burger King app, then head to the nearest participating Burger King location to pick it up. More information about the deal can be found on the app on Apple and Android devices."

Next, the relevant portions from Burger King's privacy policy for its mobile apps (emphasis added):

"We collect information you give us when you use the Services. For example, when you visit one of our restaurants, visit one of our websites or use one of our Services, create an account with us, buy a stored-value card in-restaurant or online, participate in a survey or promotion, or take advantage of our in-restaurant Wi-Fi service, we may ask for information such as your name, e-mail address, year of birth, gender, street address, or mobile phone number so that we can provide Services to you. We may collect payment information, such as your credit card number, security code and expiration date... We also may collect information about the products you buy, including where and how frequently you buy them... we may collect information about your use of the Services. For example, we may collect: 1) Device information - such as your hardware model, IP address, other unique device identifiers, operating system version, and settings of the device you use to access the Services; 2) Usage information - such as information about the Services you use, the time and duration of your use of the Services and other information about your interaction with content offered through a Service, and any information stored in cookies and similar technologies that we have set on your device; and 3) Location information - such as your computer’s IP address, your mobile device’s GPS signal or information about nearby WiFi access points and cell towers that may be transmitted to us..."

So, for the low, low price of one hamburger, participants in this promotion gave RBI, the parent company which owns Burger King, perpetual access to their real-time location data. And, since RBI knows when, where, and how long its customers visit competitors' fast-food stores, it also knows similar details about everywhere else they go -- including school, work, doctors, hospitals, and more. A sweet deal for RBI. A poor deal for consumers.

Expect to see more corporate promotions like this, which privacy advocates call "surveillance capitalism."

Consumers' real-time location data is very valuable. Don't give it away for free. If you decide to share it, demand a fair, ongoing payment in exchange. Read privacy and terms-of-use policies before downloading mobile apps, so you don't get abused or taken. Opinions? Thoughts?


Welcome To The New, Terrifying World Of Fake Porn. Plenty Of Consequences And Implications

First, I'd like to thank all of my readers -- existing and new ones. Some have shared insightful comments on blog posts. Second, the last post of 2018 features a topic we will probably hear plenty about during 2019: artificial intelligence (AI) technologies.

To learn more about AI and related issues, watch or read the AI episodes within the CXO Talk site. And, MediaPost discussed the deployment of AI by retail stores:

"... retailers seem much more bullish on artificial intelligence, with 7% already using some form of AI in digital assistants or chatbots, and most (64%) planning to have implemented AI within the next three years, 21% of those within the next 12 months. The top reason for using AI in retail is personalization (42%), followed by pricing and promotions (31%), landing page optimization (15%) and fraud detection (21%)."

Like any other online (or offline) technology, AI can be used for good and for bad. The good guys and bad actors both have access to AI technologies. MotherBoard reported:

"There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation... The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together."

You may remember Gadot from the 2017 film, "Wonder Woman." Other actors have been victims, too. Where do bad actors get tools to make AI-assisted fake porn? The fake porn with Gadot was:

"... allegedly the work of one person—a Redditor who goes by the name 'deepfakes'—not a big special effects studio... deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning. Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we're on the verge of living in a world where it's trivially easy to fabricate believable videos of people doing and saying things they never did... the software is based on multiple open-source libraries, like Keras with TensorFlow backend. To compile the celebrities’ faces, deepfakes said he used Google image search, stock photos, and YouTube videos..."

There is also an AI App for fake porn. Yikes! As bad as this seems, it is worse. According to The Washington Post:

"... an anonymous online community of creators has in recent months removed many of the hurdles for interested beginners, crafting how-to guides, offering tips and troubleshooting advice — and fulfilling fake-porn requests on their own. To simplify the task, deepfake creators often compile vast bundles of facial images, called “facesets,” and sex-scene videos of women they call “donor bodies.” Some creators use software to automatically extract a woman’s face from her videos and social-media posts. Others have experimented with voice-cloning software to generate potentially convincing audio..."

This is beyond bad. It is terrifying.

The implications: many. Video, including speeches, can easily be faked. Fake porn can be used as a weapon to harass women and/or to discredit accusers of sexual abuse and/or battery. Today's fake porn could be tomorrow's fake videos and fake news, used to discredit others: politicians, business executives, government officials (e.g., judges, military officers, etc.), individuals in minority groups, or activists. This places a premium upon mainstream news outlets to provide reliable, trustworthy news. It also places a premium upon fact-checking sites.

The consequences: several. Social media users must first understand that they have made themselves vulnerable to these threats. Parents have made both themselves and their children vulnerable, too. How? The photographs and videos you've already uploaded to Facebook, Instagram, dating apps, and other social sites are source content for bad actors. So, parents must not only teach teenagers how to read terms-and-conditions and privacy policies, but also how to fact-check content to avoid being tricked by fake videos.

This means all online users must become skilled consumers of information and news: read several news sources, verify, and fact-check items. Otherwise, you are likely to be fooled... duped into joining or contributing to a bogus cause... tricked into voting for someone you otherwise wouldn't. It also means social media users must carefully consider their photographs before posting online, and whether the social app or service truly provides effective privacy.

And all social media users should NOT retweet or re-post every sensational item in their inboxes without fact-checking it first. Otherwise, you are part of the problem. Be part of the solution.

Video advertisements can easily be faked, too. So, it is in the interest of consumers, companies, and government agencies both to find solutions and to upgrade online privacy and digital laws -- which seem to constantly lag behind new technologies. There probably need to be stronger consequences for offenders.

The Brookings Institution advised:

"In order to maximize positive outcomes [from AI], organizations should hire ethicists who work with corporate decision-makers and software developers, have a code of AI ethics that lays out how various issues will be handled, organize an AI review board that regularly addresses corporate ethical questions, have AI audit trails that show how various coding decisions have been made, implement AI training programs so staff operationalizes ethical considerations in their daily work, and provide a means for remediation when AI solutions inflict harm or damages on people or organizations."

These recommendations seem to apply to social media sites, which are high-value targets for bad actors wanting to post fake porn or other fake videos. This raises the question: which social sites have AI ethics policies and/or have hired ethicists and related staff to enforce such policies?

To do nothing seems unwise. Sticking our collective heads in the sand regarding new threats seems unwise, too. What issues concern you about AI-assisted fake porn or fake videos? What solutions do you want?


A Series Of Recent Events And Privacy Snafus At Facebook Cause Multiple Concerns. Does Facebook Deserve Users' Data?

So much has happened lately at Facebook that it can be difficult to keep up with the data scandals, data breaches, privacy fumbles, and more at the global social service. To help, below is a review of recent events.

The New York Times reported on Tuesday, December 18th that for years:

"... Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules... The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices... Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent... and gave Netflix and Spotify the ability to read Facebook users’ private messages. The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier..."

According to the Reuters newswire, a Netflix spokesperson denied that Netflix accessed Facebook users' private messages, nor asked for that access. Facebook responded with denials the same day:

"... none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC... most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April. Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them."

Needed tighter management with its partners and developers? That's an understatement. During March and April of 2018 we learned that bad actors posed as researchers and used both quizzes and automated tools to vacuum up (and allegedly resell later) profile data for 87 million Facebook users. There's more news about this breach. The Office of the Attorney General for Washington, DC announced on December 19th that it has:

"... sued Facebook, Inc. for failing to protect its users’ data... In its lawsuit, the Office of the Attorney General (OAG) alleges Facebook’s lax oversight and misleading privacy settings allowed, among other things, a third-party application to use the platform to harvest the personal information of millions of users without their permission and then sell it to a political consulting firm. In the run-up to the 2016 presidential election, some Facebook users downloaded a “personality quiz” app which also collected data from the app users’ Facebook friends without their knowledge or consent. The app’s developer then sold this data to Cambridge Analytica, which used it to help presidential campaigns target voters based on their personal traits. Facebook took more than two years to disclose this to its consumers. OAG is seeking monetary and injunctive relief, including relief for harmed consumers, damages, and penalties to the District."

Sadly, there's still more. Facebook announced on December 14th another data breach:

"Our internal team discovered a photo API bug that may have affected people who used Facebook Login and granted permission to third-party apps to access their photos. We have fixed the issue but, because of this bug, some third-party apps may have had access to a broader set of photos than usual for 12 days between September 13 to September 25, 2018... the bug potentially gave developers access to other photos, such as those shared on Marketplace or Facebook Stories. The bug also impacted photos that people uploaded to Facebook but chose not to post... we believe this may have affected up to 6.8 million users and up to 1,500 apps built by 876 developers... Early next week we will be rolling out tools for app developers that will allow them to determine which people using their app might be impacted by this bug. We will be working with those developers to delete the photos from impacted users. We will also notify the people potentially impacted..."

We believe? That sounds like Facebook doesn't know for sure. Where was the quality assurance (QA) team on this? Who is performing the post-breach investigation to determine what happened so it doesn't happen again? This post-breach response seems sloppy. And, the "bug" description seems disingenuous. Anytime persons -- in this case developers -- have access to data they shouldn't have, it is a data breach.

One quickly gets the impression that Facebook has created so many niches, apps, APIs, and special arrangements for developers and advertisers that it really can't manage nor control the data it collects about its users. That implies Facebook users aren't in control of their data, either.

There were other notable stumbles. Reports surfaced of many users experiencing repeated bogus Friend Requests, due to hacked and/or cloned accounts. It can be difficult for users to distinguish valid Friend Requests from spammers or bad actors masquerading as friends.

In August, reports surfaced that Facebook had approached several major banks, asking them to share their customers' detailed financial information in order "to boost user engagement." Reportedly, the detailed financial information included debit/credit/prepaid card transactions and checking account balances. Not good.

Also in August, Facebook's Onavo VPN app was removed from the Apple App Store because the app violated data-collection policies. 9to5Mac reported on December 5th:

"The UK parliament has today publicly shared secret internal Facebook emails that cover a wide-range of the company’s tactics related to its free iOS VPN app that was used as spyware, recording users’ call and text message history, and much more... Onavo was an interesting effort from Facebook. It posed as a free VPN service/app labeled as Facebook’s “Protect” feature, but was more or less spyware designed to collect data from users that Facebook could leverage..."

Why spy? Why the deception? This seems unnecessary for a global social networking company already collecting massive amounts of content.

In November, an investigative report by ProPublica detailed the failures in Facebook's news transparency implementation. The failures mean Facebook hasn't made good on its promises to ensure trustworthy news content, nor stop foreign entities from using the social service to meddle in elections in democratic countries.

There is more. Facebook disclosed in October a massive data breach affecting 30 million users (emphasis added):

"For 15 million people, attackers accessed two sets of information – name and contact details (phone number, email, or both, depending on what people had on their profiles). For 14 million people, the attackers accessed the same two sets of information, as well as other details people had on their profiles. This included username, gender, locale/language, relationship status, religion, hometown, self-reported current city, birth date, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches..."

The stolen data allows bad actors to operate several types of attacks (e.g., spam, phishing, etc.) against Facebook users. The stolen data allows foreign spy agencies to collect useful information to target persons. Neither is good. Wired summarized the situation:

"Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook's big rethink is in big trouble... Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook's management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have over-promised and under-delivered since the 2016 election would doom any other management team... Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms..."

Hiring a firm to distribute misinformation elsewhere while promising to eliminate misinformation on its platform. Not good. Are Zuckerberg and Sandberg up to the task? The above list of breaches, scandals, fumbles, and stumbles suggest not. What do you think?

The bottom line is trust. Given recent events, a BuzzFeed News article posed a relevant question (emphasis added):

"Of all of the statements, apologies, clarifications, walk-backs, defenses, and pleas uttered by Facebook employees in 2018, perhaps the most inadvertently damning came from its CEO, Mark Zuckerberg. Speaking from a full-page ad displayed in major papers across the US and Europe, Zuckerberg proclaimed, "We have a responsibility to protect your information. If we can’t, we don’t deserve it." At the time, the statement was a classic exercise in damage control. But given the privacy blunders that followed, it hasn’t aged well. In fact, it’s become an archetypal criticism of Facebook and the set up for its existential question: Why, after all that’s happened in 2018, does Facebook deserve our personal information?"

Facebook executives have apologized often. Enough is enough. No more apologies. Just fix it! And, if Facebook users haven't asked themselves the above question yet, some surely will. Earlier this week, a friend posted on the site:

"To all my FB friends:
I will be deleting my FB account very soon as I am disgusted by their invasion of the privacy of their users. Please contact me by email in the future. Please note that it will take several days for this action to take effect as FB makes it hard to get out of its grip. Merry Christmas to all and with best wishes for a Healthy, safe, and invasive free New Year."

I reminded this friend to also delete any Instagram and WhatsApp accounts, since Facebook operates those services, too. If you want to quit the service but suffer from FOMO (Fear Of Missing Out), then read the experiences of a person who quit Apple, Google, Facebook, Microsoft, and Amazon for a month. It can be done. And, your social life will continue -- spectacularly. It did before Facebook.

Me? I have reduced my activity on Facebook. And there are certain activities I don't do on Facebook: take quizzes, make online payments, use its emotion reaction buttons (besides "Like"), use its mobile app, use the Messenger mobile app, nor use its voting and ballot previews content. Long ago I disabled the Facebook API platform on my Facebook account. You should, too. I never use my Facebook credentials (e.g., username, password) to sign into other sites. Never.

I will continue to post on Facebook links to posts in this blog, since it is helpful information for many Facebook users. In what ways have you reduced your usage of Facebook?


Oath To Pay Almost $5 Million To Settle Charges By New York AG Regarding Children's Privacy Violations

Barbara D. Underwood, the Attorney General (AG) for New York State, announced last week a settlement with Oath, Inc. for violating the Children’s Online Privacy Protection Act (COPPA). Oath Inc. is a wholly-owned subsidiary of Verizon Communications. Until June 2017, Oath was known as AOL Inc. ("AOL"). The announcement stated:

"The Attorney General’s Office found that AOL conducted billions of auctions for ad space on hundreds of websites the company knew were directed to children under the age of 13. Through these auctions, AOL collected, used, and disclosed personal information from the websites’ users in violation of COPPA, enabling advertisers to track and serve targeted ads to young children. The company has agreed to adopt comprehensive reforms to protect children from improper tracking and pay a record $4.95 million in penalties..."

The United States Congress enacted COPPA in 1998 to protect the safety and privacy of young children online. As many parents know, young children don't understand complicated legal documents such as terms-of-use and privacy policies. COPPA prohibits operators of certain websites from collecting, using, or disclosing personal information (e.g., first and last name, e-mail address) of children under the age of 13 without first obtaining parental consent.

The definition of "personal information" was revised in 2013 to include persistent identifiers that can be used to recognize a user over time and across websites, such as the ID found in a web browser cookie or an Internet Protocol (“IP”) address. The revision effectively prohibits covered operators from using cookies, IP addresses, and other persistent identifiers to track users across websites for most advertising purposes on COPPA-covered websites.
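For illustration, here is a minimal Python sketch -- hypothetical data, not AOL's actual system -- of how a persistent identifier such as a browser cookie ID lets an ad platform recognize the same user across many websites:

```python
# A minimal sketch -- hypothetical data, not AOL's actual system -- of how
# a persistent identifier (here, a cookie ID) links one browser's visits
# across many websites: the tracking the 2013 revision restricts.
from collections import defaultdict

ad_requests = [
    {"cookie_id": "abc123", "site": "kids-games.example"},
    {"cookie_id": "abc123", "site": "cartoon-videos.example"},
    {"cookie_id": "xyz789", "site": "news.example"},
    {"cookie_id": "abc123", "site": "toy-reviews.example"},
]

# Group requests by the persistent identifier to build a cross-site
# browsing profile keyed to a single -- possibly under-13 -- user.
profiles = defaultdict(list)
for request in ad_requests:
    profiles[request["cookie_id"]].append(request["site"])

print(profiles["abc123"])
# ['kids-games.example', 'cartoon-videos.example', 'toy-reviews.example']
```

Once visits are keyed to one identifier, serving targeted ads to that user is straightforward -- which is exactly what the revised rule prohibits on child-directed sites.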

The announcement by AG Underwood explained the alleged violations in detail. Despite policies to the contrary:

"... AOL nevertheless used its display ad exchange to conduct billions of auctions for ad space on websites that it knew to be directed to children under the age of 13 and subject to COPPA. AOL obtained this knowledge in two ways. First, several AOL clients provided notice to AOL that their websites were subject to COPPA. These clients identified more than a dozen COPPA-covered websites to AOL. AOL conducted at least 1.3 billion auctions of display ad space from these websites. Second, AOL itself determined that certain websites were directed to children under the age of 13 when it conducted a review of the content and privacy policies of client websites. Through these reviews, AOL identified hundreds of additional websites that were subject to COPPA. AOL conducted at least 750 million auctions of display ad space from these websites."

AG Underwood said in a statement:

"COPPA is meant to protect young children from being tracked and targeted by advertisers online. AOL flagrantly violated the law – and children’s privacy – and will now pay the largest-ever penalty under COPPA. My office remains committed to protecting children online and will continue to hold accountable those who violate the law."

A check at press time of both the press and "company values" sections of Oath's site failed to find any mentions of the settlement. TechCrunch reported on December 4th:

"We reached out to Oath with a number of questions about this privacy failure. But a spokesman did not engage with any of them directly — emailing a short statement instead, in which it writes: "We are pleased to see this matter resolved and remain wholly committed to protecting children’s privacy online." The spokesman also did not confirm nor dispute the contents of the New York Times report."

Hmmm. Almost a week has passed since AG Underwood's December 4th announcement. You'd think that Oath management would have released a statement by now. Maybe Oath isn't as committed to children's online privacy as it claims. Something for parents to note.

The National Law Review provided some context:

"...in 2016, the New York AG concluded a two-year investigation into the tracking practices of four online publishers for alleged COPPA violations... As recently as September of this year, the New Mexico AG filed a lawsuit for alleged COPPA violations against a children's game app company, Tiny Lab Productions, and the online ad companies that work within Tiny Lab's, including those run by Google and Twitter... The Federal Trade Commission (FTC) continues to vigorously enforce COPPA, closing out investigations of alleged COPPA violations against smart toy manufacturer VTech and online talent search company Explore Talent... there have been a total of 28 enforcement proceedings since the COPPA rule was issued in 2000."

You can read about many of these actions in this blog, and how COPPA was strengthened in 2013.

So, the COPPA law works well and it is being vigorously enforced. Kudos to AG Underwood, her staff, and other states' AGs for taking these actions. What are your opinions about the AOL/Oath settlement?


Ireland Regulator: LinkedIn Processed Email Addresses Of 18 Million Non-Members

On Friday, November 23rd, the Data Protection Commission (DPC) in Ireland released its annual report. That report includes the results of an investigation by the DPC of the LinkedIn.com social networking site, after a 2017 complaint by a person who didn't use the social networking service. Apparently, LinkedIn obtained 18 million email addresses of non-members so it could then use the Facebook platform to deliver advertisements encouraging them to join.

The DPC 2018 report (Adobe PDF; 827k bytes) stated on page 21:

"The DPC concluded its audit of LinkedIn Ireland Unlimited Company (LinkedIn) in respect of its processing of personal data following an investigation of a complaint notified to the DPC by a non-LinkedIn user. The complaint concerned LinkedIn’s obtaining and use of the complainant’s email address for the purpose of targeted advertising on the Facebook Platform. Our investigation identified that LinkedIn Corporation (LinkedIn Corp) in the U.S., LinkedIn Ireland’s data processor, had processed hashed email addresses of approximately 18 million non-LinkedIn members and targeted these individuals on the Facebook Platform with the absence of instruction from the data controller (i.e. LinkedIn Ireland), as is required pursuant to Section 2C(3)(a) of the Acts. The complaint was ultimately amicably resolved, with LinkedIn implementing a number of immediate actions to cease the processing of user data for the purposes that gave rise to the complaint."

So, in an attempt to gain more users, LinkedIn acquired and processed the email addresses of 18 million non-members without instruction from the data controller (LinkedIn Ireland), as the law requires. Not good.
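For readers wondering what "hashed email addresses" means in practice, here is a minimal Python sketch of the matching technique commonly used across the ad industry. The data and names are hypothetical; this is neither LinkedIn's nor Facebook's actual code:

```python
# A minimal sketch of hashed-email audience matching, a common industry
# technique; the data and names are hypothetical, and this is not
# LinkedIn's or Facebook's actual code.
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase and trim the address, then hash it, so raw e-mail
    addresses are never exchanged directly."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# The uploading company hashes the addresses it holds...
uploaded = {normalize_and_hash(e) for e in ["alice@example.com", "bob@example.com"]}

# ...and the ad platform hashes its own members' addresses the same way,
# so matching hashes reveal exactly which people to target with ads.
members = {normalize_and_hash("bob@example.com"): "member-42"}

matched = [member_id for h, member_id in members.items() if h in uploaded]
print(matched)  # ['member-42']
```

Hashing avoids exchanging raw addresses, but as the DPC's findings show, it still lets a company target specific, identifiable people.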

The DPC report covered the time frame from January 1st through May 24, 2018. The report did not mention the source(s) from which LinkedIn acquired the email addresses. The DPC report also discussed investigations of Facebook (e.g., WhatsApp, facial recognition) and Yahoo/Oath. Microsoft acquired LinkedIn in 2016. GDPR went into effect across the EU on May 25, 2018.

There is more. The investigation's findings raised concerns about broader compliance issues, so the DPC conducted a more in-depth audit:

"... to verify that LinkedIn had in place appropriate technical security and organisational measures, particularly for its processing of non-member data and its retention of such data. The audit identified that LinkedIn Corp was undertaking the pre-computation of a suggested professional network for non-LinkedIn members. As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

The DPC's order that LinkedIn stop this particular data processing strongly suggests the activity violated data protection law, which the European Union (EU) has since strengthened with the General Data Protection Regulation (GDPR). ZDNet explained in this primer:

".... GDPR is a new set of rules designed to give EU citizens more control over their personal data. It aims to simplify the regulatory environment for business so both citizens and businesses in the European Union can fully benefit from the digital economy... almost every aspect of our lives revolves around data. From social media companies, to banks, retailers, and governments -- almost every service we use involves the collection and analysis of our personal data. Your name, address, credit card number and more all collected, analysed and, perhaps most importantly, stored by organisations... Data breaches inevitably happen. Information gets lost, stolen or otherwise released into the hands of people who were never intended to see it -- and those people often have malicious intent. Under the terms of GDPR, not only will organisations have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it will be obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners - or face penalties for not doing so... There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation..."

The new GDPR applies both to companies operating within the EU and to companies located outside the EU that offer goods or services to customers or businesses inside the EU. As a result, some companies have changed their business processes. TechCrunch reported in April:

"Facebook has another change in the works to respond to the European Union’s beefed up data protection framework — and this one looks intended to shrink its legal liabilities under GDPR, and at scale. Late yesterday Reuters reported on a change incoming to Facebook’s [Terms & Conditions policy] that it said will be pushed out next month — meaning all non-EU international are switched from having their data processed by Facebook Ireland to Facebook USA. With this shift, Facebook will ensure that the privacy protections afforded by the EU’s incoming GDPR — which applies from May 25 — will not cover the ~1.5 billion+ international Facebook users who aren’t EU citizens (but current have their data processed in the EU, by Facebook Ireland). The U.S. does not have a comparable data protection framework to GDPR..."

What was LinkedIn's response to the DPC report? At press time, a search of LinkedIn's blog and press areas failed to find any mentions of the DPC investigation. TechCrunch reported statements by Dennis Kelleher, Head of Privacy, EMEA at LinkedIn:

"... Unfortunately the strong processes and procedures we have in place were not followed and for that we are sorry. We’ve taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

What does this mean? Plenty. There seem to be several takeaways for consumers and users of social networking services:

  • EU regulators are proactive and conduct detailed audits to ensure companies both comply with GDPR and act consistently with any promises they made,
  • LinkedIn wants consumers to accept another "we are sorry" corporate statement. No thanks. No more apologies. Actions speak more loudly than words,
  • The DPC didn't fine LinkedIn probably because GDPR didn't become effective until May 25, 2018. This suggests that fines will be applied to violations occurring on or after May 25, 2018, and
  • People in different areas of the world view privacy and data protection differently - as they should. That is fine, and it shouldn't be a surprise. (A global survey about self-driving cars found similar regional differences.) Smart executives in businesses -- and in governments -- worldwide recognize regional differences, find ways to sell products and services across areas without degraded customer experience, and don't try to force their country's approach on other countries or areas which don't want it.

What takeaways do you see?


Plenty Of Bad News During November. Are We Watching The Fall Of Facebook?

November has been an eventful month for Facebook, the global social networking giant. And not in a good way. So much has happened, it's easy to miss items. Let's review.

A November 1st investigative report by ProPublica described how some political advertisers exploit gaps in Facebook's advertising transparency policy:

"Although Facebook now requires every political ad to “accurately represent the name of the entity or person responsible,” the social media giant acknowledges that it didn’t check whether Energy4US is actually responsible for the ad. Nor did it question 11 other ad campaigns identified by ProPublica in which U.S. businesses or individuals masked their sponsorship through faux groups with public-spirited names. Some of these campaigns resembled a digital form of what is known as “astroturfing,” or hiding behind the mirage of a spontaneous grassroots movement... Adopted this past May in the wake of Russian interference in the 2016 presidential campaign, Facebook’s rules are designed to hinder foreign meddling in elections by verifying that individuals who run ads on its platform have a U.S. mailing address, governmental ID and a Social Security number. But, once this requirement has been met, Facebook doesn’t check whether the advertiser identified in the “paid for by” disclosure has any legal status, enabling U.S. businesses to promote their political agendas secretly."

So, political ad transparency -- however faulty it is -- has only been operating since May 2018. Not long. Not good.

The day before the November 6th election in the United States, Facebook announced:

"On Sunday evening, US law enforcement contacted us about online activity that they recently discovered and which they believe may be linked to foreign entities. Our very early-stage investigation has so far identified around 30 Facebook accounts and 85 Instagram accounts that may be engaged in coordinated inauthentic behavior. We immediately blocked these accounts and are now investigating them in more detail. Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages..."

This happened after Facebook removed 82 Pages, Groups and accounts linked to Iran on October 16th. Thankfully, law enforcement notified Facebook. Interested in more proactive action? Facebook announced on November 8th:

"We are careful not to reveal too much about our enforcement techniques because of adversarial shifts by terrorists. But we believe it’s important to give the public some sense of what we are doing... We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counter-terrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first. In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism..."

So, in 2018 Facebook deployed some artificial intelligence to help its human moderators identify and prioritize terrorism threats -- with automatic removal reserved for cases where the tool has very high confidence -- and the news item also mentioned its appeal process. Then, Facebook announced in a November 13th update:

"Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were mostly created after mid-2017... Last Tuesday, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said that they’d created. We had already blocked most of them, and based on our internal investigation, we blocked the rest... But finding and investigating potential threats isn’t something we do alone. We also rely on external partners, like the government or security experts...."

So, in 2018 Facebook leans heavily upon both law enforcement and security researchers to identify threats. You have to hunt a bit to find the total number of fake accounts removed. Facebook announced on November 15th:

"We also took down more fake accounts in Q2 and Q3 than in previous quarters, 800 million and 754 million respectively. Most of these fake accounts were the result of commercially motivated spam attacks trying to create fake accounts in bulk. Because we are able to remove most of these accounts within minutes of registration, the prevalence of fake accounts on Facebook remained steady at 3% to 4% of monthly active users..."

That's about 1.5 billion fake accounts created by a variety of bad actors. Hmmmm... sounds good, but it makes one wonder about the digital arms race under way. If the bad actors can programmatically create new fake accounts faster than Facebook can identify and remove them, then not good.

Meanwhile, CNet reported on November 11th that Facebook had ousted Oculus founder Palmer Luckey due to:

"... a $10,000 to an anti-Hillary Clinton group during the 2016 presidential election, he was out of the company he founded. Facebook CEO Mark Zuckerberg, during congressional testimony earlier this year, called Luckey's departure a "personnel issue" that would be "inappropriate" to address, but he denied it was because of Luckey's politics. But that appears to be at the root of Luckey's departure, The Wall Street Journal reported Sunday. Luckey was placed on leave and then fired for supporting Donald Trump, sources told the newspaper... [Luckey] was pressured by executives to publicly voice support for libertarian candidate Gary Johnson, according to the Journal. Luckey later hired an employment lawyer who argued that Facebook illegally punished an employee for political activity and negotiated a payout for Luckey of at least $100 million..."

Facebook acquired Oculus in 2014. Not good treatment of an executive.

The next day, TechCrunch reported that Facebook will provide regulators from France with access to its content moderation processes:

"At the start of 2019, French regulators will launch an informal investigation on algorithm-powered and human moderation... Regulators will look at multiple steps: how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image. This type of investigation is reminiscent of banking and nuclear regulation. It involves deep cooperation so that regulators can certify that a company is doing everything right... The investigation isn’t going to be limited to talking with the moderation teams and looking at their guidelines. The French government wants to find algorithmic bias and test data sets against Facebook’s automated moderation tools..."

Good. Hopefully, the investigation will be a deep dive. Maybe other countries, which value citizens' privacy, will perform similar investigations. Companies and their executives need to be held accountable.

Then, on November 14th The New York Times published a detailed, comprehensive "Delay, Deny, and Deflect" investigative report based upon interviews of at least 50 persons:

"When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem. And when that failed... Facebook went on the attack. While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters... In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform... Even so, trust in the social network has sunk, while its pell-mell growth has slowed..."

The New York Times' report also highlighted the history of Facebook's focus on revenue growth and lack of focus to identify and respond to threats:

"Like other technology executives, Mr. Zuckerberg and Ms. Sandberg cast their company as a force for social good... But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire... Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration..."

The report described cozy relationships between the company and politicians of both parties. Not good for a company wanting to deliver unbiased, reliable news. The New York Times' report also described the history of failing to identify and respond quickly to content abuses by bad actors:

"... in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos. Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees... Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it. Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016... Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook... Throughout the spring and summer of 2017, Facebook officials repeatedly played down Senate investigators’ concerns about the company, while publicly claiming there had been no Russian effort of any significance on Facebook. But inside the company, employees were tracing more ads, pages and groups back to Russia."

Facebook responded in a November 15th news release:

"There are a number of inaccuracies in the story... We’ve acknowledged publicly on many occasions – including before Congress – that we were too slow to spot Russian interference on Facebook, as well as other misuse. But in the two years since the 2016 Presidential election, we’ve invested heavily in more people and better technology to improve safety and security on our services. While we still have a long way to go, we’re proud of the progress we have made in fighting misinformation..."

So, Facebook wants its users to accept that it has invested more = doing better.

Regardless, the bottom line is trust. Can users trust what Facebook said about doing better? Is better enough? Can users trust Facebook to deliver unbiased news? Can users trust that Facebook's content moderation process is better? Or good enough? Can users trust Facebook to fix and prevent data breaches affecting millions of users? Can users trust Facebook to stop bad actors posing as researchers from using quizzes and automated tools to vacuum up (and allegedly resell later) millions of users' profiles? Can citizens in democracies trust that Facebook has stopped data abuses, by bad actors, designed to disrupt their elections? Is doing better enough?

The very next day, Facebook reported a huge increase in the number of government requests for data, including secret orders. TechCrunch reported on 13 historical national security letters:

"... dated between 2014 and 2017 for several Facebook and Instagram accounts. These demands for data are effectively subpoenas, issued by the U.S. Federal Bureau of Investigation (FBI) without any judicial oversight, compelling companies to turn over limited amounts of data on an individual who is named in a national security investigation. They’re controversial — not least because they come with a gag order that prevents companies from informing the subject of the letter, let alone disclosing its very existence. Companies are often told to turn over IP addresses of everyone a person has corresponded with, online purchase information, email records and cell-site location data... Chris Sonderby, Facebook’s deputy general counsel, said that the government lifted the non-disclosure orders on the letters..."

So, Facebook is a go-to resource for both bad actors and the good guys.

An eventful month, and the month isn't over yet. Taken together, this news is not good for a company wanting its social networking service to be a source of reliable, unbiased news. This news is not good for a company wanting its users to accept it is doing better -- and that better is enough. The situation raises the question: are we watching the fall of Facebook? Share your thoughts and opinions below.


Some Surprising Facts About Facebook And Its Users

The Pew Research Center announced findings from its latest survey of social media users:

  • About two-thirds (68%) of adults in the United States use Facebook. That is unchanged from April 2016, but up from 54% in August 2012. Only YouTube gets more adult usage (73%).
  • About three-quarters (74%) of adult Facebook users visit the site at least once a day. That's higher than Snapchat (63%) and Instagram (60%).
  • Facebook is popular across all demographic groups in the United States: 74% of women use it, as do 62% of men, 81% of persons ages 18 to 29, and 41% of persons ages 65 and older.
  • Usage by teenagers has fallen to 51% (at March/April 2018) from 71% during 2014 to 2015. More teens use other social media services: YouTube (85%), Instagram (72%) and Snapchat (69%).
  • 43% of adults use Facebook as a news source. That is higher than other social media services: YouTube (21%), Twitter (12%), Instagram (8%), and LinkedIn (6%). More women (61%) use Facebook as a news source than men (39%). More whites (62%) use Facebook as a news source than nonwhites (37%).
  • 54% of adult users said they adjusted their privacy settings during the past 12 months. 42% said they have taken a break from checking the platform for several weeks or more. 26% said they have deleted the app from their phone during the past year.

Perhaps the most troubling finding:

"Many adult Facebook users in the U.S. lack a clear understanding of how the platform’s news feed works, according to the May and June survey. Around half of these users (53%) say they do not understand why certain posts are included in their news feed and others are not, including 20% who say they do not understand this at all."

Facebook users should know that the service does not display in their news feed all posts by their friends and groups. Facebook's proprietary algorithm -- called its "secret sauce" by some -- displays items it thinks users will engage with = click the "Like" or other emotion buttons. This makes Facebook a terrible news source, since it doesn't display all news -- only the news you (probably already) agree with.
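To make the mechanics concrete, here is a toy Python sketch of engagement-based ranking. Facebook's actual algorithm is proprietary; the fields and weights below are invented:

```python
# A toy sketch of engagement-based feed ranking. Facebook's actual
# algorithm is proprietary; the fields and weights below are invented.
stories = [
    {"headline": "Local news report", "p_click": 0.10, "p_like": 0.05},
    {"headline": "Post you already agree with", "p_click": 0.60, "p_like": 0.45},
    {"headline": "Challenging opposing view", "p_click": 0.08, "p_like": 0.02},
]

def engagement_score(story):
    # Rank purely by predicted engagement -- not accuracy or importance.
    return 0.7 * story["p_click"] + 0.3 * story["p_like"]

for story in sorted(stories, key=engagement_score, reverse=True):
    print(story["headline"])
# The post you already agree with rises to the top; the
# challenging opposing view sinks to the bottom.
```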

That's like living life in an online bubble. Sadly, there is more.

If you haven't watched it, PBS has broadcast a two-part documentary titled "The Facebook Dilemma," which arguably could have been titled "the dark side of sharing." The Frontline documentary rightly discusses Facebook's approaches to news, privacy, its focus upon growth via advertising revenues, how various groups have used the service as a weapon, and Facebook's extensive data collection about everyone.

Yes, everyone. Obviously, Facebook collects data about its users. The service also collects data about nonusers in what the industry calls "shadow profiles." CNet explained that during an April:

"... hearing before the House Energy and Commerce Committee, the Facebook CEO confirmed the company collects information on nonusers. "In general, we collect data of people who have not signed up for Facebook for security purposes," he said... That data comes from a range of sources, said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. That includes brokers who sell customer information that you gave to other businesses, as well as web browsing data sent to Facebook when you "like" content or make a purchase on a page outside of the social network. It also includes data about you pulled from other Facebook users' contacts lists, no matter how tenuous your connection to them might be. "Those are the [data sources] we're aware of," Cardozo said."

So, there might be more data sources besides the ones we know about. Facebook isn't saying. So much for greater transparency and control claims by Mr. Zuckerberg. Moreover, data breaches highlight the problems with the service's massive data collection and storage:

"The fact that Facebook has [shadow profiles] data isn't new. In 2013, the social network revealed that user data had been exposed by a bug in its system. In the process, it said it had amassed contact information from users and matched it against existing user profiles on the social network. That explained how the leaked data included information users hadn't directly handed over to Facebook. For example, if you gave the social network access to the contacts in your phone, it could have taken your mom's second email address and added it to the information your mom already gave to Facebook herself..."

So, Facebook probably launched shadow profiles when it introduced its mobile app. That means, if you uploaded the address book in your phone to Facebook, then you helped the service collect information about nonusers, too. This means Facebook acts more like a massive advertising network than simply a social media service.
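Here is a minimal, hypothetical Python sketch of how an uploaded address book can enrich an existing profile -- like your mom's second email address in the example above -- or seed a shadow profile for a nonuser. All data and field names are invented:

```python
# A minimal, hypothetical sketch: matching an uploaded address book
# against existing accounts, and keeping whatever doesn't match.
users_by_name = {"Mom": {"emails": {"mom@example.com"}}}  # existing member
shadow_profiles = {}

uploaded_contacts = [
    {"name": "Mom", "email": "moms-second-address@example.com"},
    {"name": "Dave", "email": "dave@example.com"},  # not a member
]

for contact in uploaded_contacts:
    if contact["name"] in users_by_name:
        # Mom's second address gets attached to the profile she already has.
        users_by_name[contact["name"]]["emails"].add(contact["email"])
    else:
        # No account matches: the data is kept anyway -- a shadow profile.
        shadow_profiles.setdefault(contact["name"], set()).add(contact["email"])

print(users_by_name)    # Mom's profile now holds both addresses
print(shadow_profiles)  # {'Dave': {'dave@example.com'}}
```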

How has Facebook been able to collect massive amounts of data about both users and nonusers? According to the Frontline documentary, we consumers have lax privacy laws in the United States to thank for this massive surveillance advertising mechanism. What do you think?


FTC: How You Should Handle Robocalls. 4 Companies Settle Regarding Privacy Shield Claims

First, it seems that the number of robocalls has increased during the past two years. Some automated calls are in English. Some are in other languages. All try to trick consumers into sending money or disclosing sensitive financial and payment information. Advice from the U.S. Federal Trade Commission (FTC):

Second, the FTC announced a settlement agreement with four companies:

"In separate complaints, the FTC alleges that IDmission, LLC, mResource LLC (doing business as Loop Works, LLC), SmartStart Employment Screening, Inc., and VenPath, Inc. falsely claimed to be certified under the EU-U.S. Privacy Shield, which establishes a process to allow companies to transfer consumer data from European Union countries to the United States in compliance with EU law... The Department of Commerce administers the Privacy Shield framework, while the FTC enforces the promises companies make when joining the framework."

According to the lawsuits, IDmission, a cloud-based services firm, applied in 2017 for Privacy Shield certification with the U.S. Department of Commerce but never completed the necessary steps to be certified under the program. The other three companies each obtained Privacy Shield certification in 2016 but allowed their certifications to lapse. VenPath is a data analytics firm. SmartStart offers employment and background screening services. mResource provides talent management and recruitment services.

Terms of the settlement agreements prohibit all four companies from misrepresenting their participation in any privacy or data security program sponsored by the government. Also:

"... VenPath and SmartStart must also continue to apply the Privacy Shield protections to personal information they collected while participating in the program, protect it by another means authorized by the Privacy Shield framework, or return or delete the information within 10 days of the order."


Besieged Facebook Says New Ad Limits Aren’t Response to Lawsuits

[Editor's note: today's guest post, by reporters at ProPublica, is the latest in a series monitoring Facebook's attempts to clean up its advertising systems and tools. It is reprinted with permission.]

By Ariana Tobin and Jeremy B. Merrill, ProPublica

Facebook’s move to eliminate 5,000 options that enable advertisers on its platform to limit their audiences is unrelated to lawsuits accusing it of fostering housing and employment discrimination, the company said Wednesday.

“We’ve been building these tools for a long time and collecting input from different outside groups,” Facebook spokesman Joe Osborne told ProPublica.

Tuesday’s blog post announcing the elimination of categories that the company has described as “sensitive personal attributes” came four days after the Department of Justice joined a lawsuit brought by fair housing groups against Facebook in federal court in New York City. The suit contends that advertisers could use Facebook’s options to prevent racial and religious minorities and other protected groups from seeing housing ads.

Raising the prospect of tighter regulation, the Justice Department said that the Communications Decency Act of 1996, which gives immunity to internet companies from liability for content on their platforms, did not apply to Facebook’s advertising portal. Facebook has repeatedly cited the act in legal proceedings in claiming immunity from anti-discrimination law. Congress restricted the law’s scope in March by making internet companies more liable for ads and posts related to child sex-trafficking.

Around the same time the Justice Department intervened in the lawsuit, the Department of Housing and Urban Development (HUD) filed a formal complaint against Facebook, signaling that it had found enough evidence during an initial investigation to raise the possibility of legal action against the social media giant for housing discrimination. Facebook has said that its policies strictly prohibit discrimination, that over the past year it has strengthened its systems to protect against misuse, and that it will work with HUD to address the concerns.

“The Fair Housing Act prohibits housing discrimination including those who might limit or deny housing options with a click of a mouse,” Anna María Farías, HUD’s assistant secretary for fair housing and equal opportunity, said in a statement accompanying the complaint. “When Facebook uses the vast amount of personal data it collects to help advertisers to discriminate, it’s the same as slamming the door in someone’s face.”

Regulators in at least one state are also scrutinizing Facebook. Last month, the state of Washington imposed legally binding compliance requirements on the company, barring it from offering advertisers the option of excluding protected groups from seeing ads about housing, credit, employment, insurance or “public accommodations of any kind.”

Advertising is the primary source of revenue for the social media giant, which is under siege on several fronts. A recent study and media coverage have highlighted how hate speech and false rumors on Facebook have spurred anti-refugee discrimination in Germany and violence against minority ethnic groups such as the Rohingya in Myanmar. This week, Facebook said it had found evidence of Russian and Iranian efforts to influence elections in the U.S. and around the world through fake accounts and targeted advertising. It also said it had suspended more than 400 apps “due to concerns around the developers who built them or how the information people chose to share with the app may have been used.”

Facebook declined to identify most of the 5,000 options being removed, saying that the information might help bad actors game the system. It did say that the categories could enable advertisers to exclude racial and religious minorities, and it provided four examples that it deleted: “Native American culture,” “Passover,” “Evangelicalism” and “Buddhism.” It said the changes will be completed next month.

According to Facebook, these categories have not been widely used by advertisers to discriminate, and their removal is intended to be proactive. In some cases, advertisers legitimately use these categories to reach key audiences. According to targeting data from ads submitted to ProPublica’s Political Ad Collector project, Jewish groups used the “Passover” category to promote Jewish cultural events, and the Michael J. Fox Foundation used it to find people of Ashkenazi Jewish ancestry for medical research on Parkinson’s disease.

Facebook is not limiting advertisers’ options for narrowing audiences by age or sex. The company has defended age-based targeting in employment ads as beneficial for employers and job seekers. Advertisers may also still target or exclude by ZIP code — which critics have described as “digital red-lining” but Facebook says is standard industry practice.

A pending suit in federal court in San Francisco alleges that, by allowing employers to target audiences by age, Facebook is enabling employment discrimination against older job applicants. Peter Romer-Friedman, a lawyer representing the plaintiffs in that case, said that Facebook’s removal of the 5,000 options “is a modest step in the right direction.” But allowing employers to sift job seekers by age, he added, “shows what Facebook cares about: its bottom line. There is real money in age-restricted discrimination.”

Senators Bob Casey of Pennsylvania and Susan Collins of Maine have asked Facebook for more information on what steps it is taking to prevent age discrimination on the site.

The issue of discriminatory advertising on Facebook arose in October 2016 when ProPublica revealed that advertisers on the platform could narrow their audiences by excluding so-called “ethnic affinity” categories such as African-Americans and Spanish-speaking Hispanics. At the time, Facebook promised to build a system to flag and reject such ads. However, a year later, we bought dozens of rental housing ads that excluded protected categories. They were approved within seconds. So were ads that excluded older job seekers, as well as ads aimed at anti-Semitic categories such as “Jew hater.”

The removal of the 5,000 options isn’t Facebook’s first change to its advertising portal in response to such criticism. Last November, it added a self-certification option, which asks housing advertisers to check a box agreeing that their advertisement is not discriminatory. The company also plans to require advertisers to read educational material on the site about ethical practices.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


Whirlpool's Online Product Registration: Confidentiality and Privacy Concerns

Earlier this month, my wife and I relocated to a different city within the same state to live closer to our new, 14-month-old grandson. During the move, we bought new home appliances -- a clothes washer and dryer, both made by Whirlpool -- which prompted today's blog post.

The packaging and operation instructions included two registration postcards with the model and serial numbers printed on the forms. Nothing controversial about that. The registration cards also listed "Other Easy Ways To Register," including registration websites for the United States and Canada. I tried the online registration to see what improvements or benefits Whirlpool's United States registration site might offer over the old-school snail-mail method besides speed.

The landing page includes a form for the customer's contact information, product purchase information, and future purchase plans. Pretty standard stuff. Nothing alarming there. Near the bottom of the form and just above the "Complete Registration" button are links to Whirlpool's Terms & Conditions and Privacy policies. I read both and found some surprises.

First, the site uses inconsistent nomenclature: two different policy titles. The link says "Terms & Conditions" while the title of the actual policy page states, "Terms Of Use." Which is it? Inconsistent nomenclature can confuse users. Not good. Come on, Whirlpool! This is not hard. Good website usability includes the consistent use of the same page title, so users know where they are going when they select a link, and that they've arrived at the expected destination.

Second, the Terms Of Use (well, I had to pick a title so it would be clear for you) policy page lacks a date. This can be confusing, making it difficult, if not impossible, for consumers to know and reference the exact document read, plus determine what changes, if any, were posted since the prior version. Not good. Come on, Whirlpool! Add a publication date. It's not hard.

Third, the Terms Of Use policy contained this clause:

"Whirlpool Corporation welcomes your submissions; however, any information submitted, other than your personal information (for example, your name and e-mail address), to Whirlpool Corporation through this site is the exclusive property of Whirlpool Corporation and is considered NOT to be confidential. Whirlpool Corporation does not receive the submission in confidence or under any confidential or fiduciary relationship. Whirlpool Corporation may use the submission for any purpose without restriction or compensation."

So, the Terms of Use policy is both vague and clear at the same time. It is vague because it doesn't list the exact data elements considered "personal information." Not good. This leaves consumers to guess. The policy lists only two data elements. What about the rest? Are all confidential, or only some? And if some, which ones? Here's the list I consider confidential: name, street address, country, phone number, e-mail address, IP address, device type, device model, device operating system, payment card information, billing address, and online credentials (should I create a profile at the Whirlpool site). Come on, Whirlpool! Get it together and provide the complete list of data elements you consider "personal information." It's not hard.

Fourth, the Terms Of Use policy is also clear because the sentences quoted above make Whirlpool's intentions plain: submissions to the site other than "personal information" are not confidential, and Whirlpool can do with them whatever it wants. Since the policy doesn't list which data elements count as personal, consumers must assume that anything beyond a name and e-mail address is fair game. Not good.

Next, I read Whirlpool's Privacy policy, and hoped that it would clarify things. Thankfully, a little good news. First, the Privacy policy listed a date: May 31, 2018. Second, more inconsistent site nomenclature: the page-bottom links across the site say "Privacy Policy" while the policy page title says "Privacy Statement." I selected the "Expand All" button to view the entire policy. Third, Whirlpool's Privacy Statement listed the items considered personal information:

"- Your contact information, such as your name, email address, mailing address, and phone number
- Your billing information, such as your credit card number and billing address
- Your Whirlpool account information, including your user name, account number, and a password
- Your product and ownership information
- Your preferences, such as product wish lists, order history, and marketing preferences"

This list is a good start. A simple link to this section from the Terms Of Use policy would do wonders to clarify things. However, Whirlpool collects some key data that it trades far more freely than "personal information." The Privacy Statement contains this clause:

"Whirlpool and its business partners and service providers may use a variety of technologies that automatically or passively collect information about how you interact with our Websites ("Usage Information"). Usage Information may include: (i) your IP address, which is a unique set of numbers assigned to your computer by your Internet Service Provider (ISP) (which, depending on your ISP, may be a different number every time you connect to the Internet); (ii) the type of browser and operating system you use; and (iii) other information about your online session, such as the URL you came from to get to our Websites and the date and time you visited our Websites."

And, the Privacy Statement mentions the use of several online tracking technologies:

"We use Local Shared Objects (LSOs) such as HTML5 or Flash on our Websites to store content information and preferences. Third parties with whom we partner to provide certain features on our Websites or to display advertising based upon your web browsing activity use LSOs such as HTML5 or Flash to collect and store information... Web beacons are tiny electronic image files that can be embedded within a web page or included in an e-mail message, and are usually invisible to the human eye. When we use web beacons within our web pages, the web beacons (also known as “clear GIFs” or “tracking pixels”) may tell us such things as: how many people are coming to our Websites, whether they are one-time or repeat visitors, which pages they viewed and for how long, how well certain online advertising campaigns are converting, and other similar Website usage data. When used in our e-mail communications, web beacons can tell us the time an e-mail was opened, if and how many times it was forwarded, and what links users click on from within the e- mail message."

While the "EU-US Privacy Shield" section of the privacy policy lists Whirlpool's European subsidiaries and links to an external Privacy Shield site listing companies that are probably some of Whirlpool's service and advertising partners, the policy does not disclose all of the "third parties," "business partners," "service vendors," advertising partners, and affiliates with which Whirlpool shares data. Consumers are left in the dark.

Last, the "Your Rights: Choice & Access" section of the privacy policy mentions the opt-out mechanism for consumers. While consumers can opt out of receiving marketing (e.g., promotional) messaging from Whirlpool, they can't opt out of the data collection and archival. So, choice is limited.

Given this and the above concerns, I abandoned the product registration form. Yep. Didn't complete it. Maybe I will in the future after Whirlpool fixes things. Perhaps most importantly, today's blog post is a reminder for all consumers: always read companies' privacy and terms-of-use policies. Always. You never know what you'll find that is irksome. And, if you don't know how to read online policies, this blog has some tips and suggestions.


Experts Warn Biases Must Be Removed From Artificial Intelligence

CNN Tech reported:

"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception." These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products..."

Artificial intelligence (AI) technology is not only about autonomous ships and trucks, or preventing crashes involving self-driving cars. AI has global impacts. Researchers have already identified problems and limitations:

"A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans. Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring."

Reportedly, the concerns and limitations were discussed earlier this month at the "AI Summit - Designing A Future For All" conference. Back in 2016, TechCrunch listed five unexpected biases in artificial intelligence. So, there is much important work to be done to remove biases.

According to CNN Tech, a range of solutions are needed:

"Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking...This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used."

Given the history of the internet, there seems to be an important takeaway. Early on, many people mistakenly assumed that, "If it's in an e-mail, then it must be true." That mistaken assumption migrated to, "If it's in a website on the internet, then it must be true." And that mistaken assumption migrated to, "If it was posted on social media, then it must be true." Consumers, corporate executives, and technicians must educate themselves and avoid assuming, "If an AI system collected it, then it must be true." Veracity matters. What do you think?


Facial Recognition At Facebook: New Patents, New EU Privacy Laws, And Concerns For Offline Shoppers

Some Facebook users know that the social networking site tracks them both on and off (e.g., signed into, not signed into) the service. Many online users know that Facebook tracks both users and non-users around the internet. Recent developments indicate that the service intends to track people offline, too. The New York Times reported that Facebook:

"... has applied for various patents, many of them still under consideration... One patent application, published last November, described a system that could detect consumers within [brick-and-mortar retail] stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases... Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones."

Some important background. First, the usage of surveillance cameras in retail stores is not new. What is new is the scope and accuracy of the technology. In 2012, we first learned about smart mannequins in retail stores. In 2013, we learned about the five ways retail stores spy on shoppers. In 2015, we learned more about tracking of shoppers by retail stores using WiFi connections. In 2018, some smart mannequins are used in the healthcare industry.

Second, Facebook's facial recognition technology scans images uploaded by users, and then allows identified users to accept or decline name labels for each photo. Each Facebook user can adjust their privacy settings to enable or disable the adding of their name label to photos. However:

"Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people... The technology can be used to remotely identify people by name without their knowledge or consent. While proponents view it as a high-tech tool to catch criminals... critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off... Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data."

Simply stated: Facebook maintains a perpetual database of photos (and videos) with names attached, so it can perform the matching even for users who declined or disabled the display of name labels in photos and videos. To learn more about facial recognition at Facebook, visit the Electronic Privacy Information Center (EPIC) site.
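A minimal Python sketch of the general matching technique -- reduce a face to a numeric "faceprint," then compare it against a database of named templates -- may make this clearer. The vectors and threshold below are toy values, not Facebook's actual system:

```python
# A minimal sketch of the general technique -- toy numbers, not Facebook's
# actual system: reduce a face to a numeric "faceprint," then compare it
# against a database of named templates.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stored faceprints for users with facial recognition turned on.
named_templates = {
    "Alice": [0.11, 0.83, 0.42],
    "Bob":   [0.95, 0.10, 0.33],
}

def identify(unknown_faceprint, threshold=0.98):
    best_name, best_score = None, 0.0
    for name, template in named_templates.items():
        score = cosine_similarity(unknown_faceprint, template)
        if score > best_score:
            best_name, best_score = name, score
    # Per Facebook's stated policy, a non-match is deleted immediately --
    # but producing the match still required scanning the unknown face.
    return best_name if best_score >= threshold else None

print(identify([0.12, 0.80, 0.44]))  # Alice
```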

Third, other tech companies besides Facebook use facial recognition technology:

"... Amazon, Apple, Facebook, Google and Microsoft have filed facial recognition patent applications. In May, civil liberties groups criticized Amazon for marketing facial technology, called Rekognition, to police departments. The company has said the technology has also been used to find lost children at amusement parks and other purposes..."

You may remember, Apple launched its iPhone X -- with the Face ID feature for users to unlock their phones -- in late 2017. Fourth, since Facebook operates globally, it must respond to new laws in certain regions:

"In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool."

Perhaps you find the above issues troubling. I do. If my facial image will be captured, archived, and tracked by brick-and-mortar stores, and then matched and merged with my online usage, then I want some type of notice before entering a brick-and-mortar store -- just as websites present privacy and terms-of-use policies. Otherwise, there is neither notice nor informed consent by shoppers at brick-and-mortar stores.

So, is facial recognition a threat, a protection tool, or both? What are your opinions?


Researchers Find Mobile Apps Can Easily Record Screenshots And Videos of Users' Activities

New academic research highlights how easy it is for mobile apps to both spy upon consumers and violate their privacy. During a recent study to determine whether or not smartphones record users' conversations, researchers at Northeastern University (NU) found:

"... that some companies were sending screenshots and videos of user phone activities to third parties. Although these privacy breaches appeared to be benign, they emphasized how easily a phone’s privacy window could be exploited for profit."

The NU researchers tested 17,260 of the most popular mobile apps running on smartphones using the Android operating system. About 9,000 of the 17,260 apps had the ability to take screenshots. The vulnerability: screenshot and video captures could easily be used to record users' keystrokes, passwords, and related sensitive information:

"This opening will almost certainly be used for malicious purposes," said Christo Wilson, another computer science professor on the research team. "It’s simple to install and collect this information. And what’s most disturbing is that this occurs with no notification to or permission by users."

The NU researchers found one app already recording video of users' screen activity (links added):

"That app was GoPuff, a fast-food delivery service, which sent the screenshots to Appsee, a data analytics firm for mobile devices. All this was done without the awareness of app users. [The researchers] emphasized that neither company appeared to have any nefarious intent. They said that web developers commonly use this type of information to debug their apps... GoPuff has changed its terms of service agreement to alert users that the company may take screenshots of their use patterns. Google issued a statement emphasizing that its policy requires developers to disclose to users how their information will be collected."

May? A brief review of the Appsee site seems to confirm that video recording of the screens on app users' mobile devices is integral to the service:

"RECORDING: Watch every user action and understand exactly how they use your app, which problems they're experiencing, and how to fix them.​ See the app through your users' eyes to pinpoint usability, UX and performance issues... TOUCH HEAT MAPS: View aggregated touch heatmaps of all the gestures performed in each​ ​screen in your app.​ Discover user navigation and interaction preferences... REALTIME ANALYTICS & ALERTS:Get insightful analytics on user behavior without pre-defining any events. Obtain single-user and aggregate insights in real-time..."

Sounds like a version of "surveillance capitalism" to me. According to the Appsee site, a variety of companies use the service, including eBay, Samsung, Virgin airlines, The Weather Network, and several advertising networks. Plus, the Appsee Privacy Policy dated May 23, 2018 stated:

"The Appsee SDK allows Subscribers to record session replays of their end-users' use of Subscribers' mobile applications ("End User Data") and to upload such End User Data to Appsee’s secured cloud servers."

In this scenario, GoPuff is a subscriber, and consumers using the GoPuff mobile app are end users. The Appsee SDK is software code embedded within the GoPuff mobile app. The researchers said that this vulnerability "will not be closed until the phone companies redesign their operating systems..."
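The study's method was, at its core, traffic analysis: observe what an app transmits and flag payloads that look like images or video. Below is a minimal sketch of that idea; the actual study instrumented Android devices and inspected decrypted traffic, which this toy code does not attempt.

```python
# File-format "magic bytes" that mark common image and video payloads.
MEDIA_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"\x00\x00\x00\x18ftyp": "mp4",  # one common MP4 header variant
}

def classify_payload(payload: bytes):
    """Return a media type if the payload starts like an image or video."""
    for magic, kind in MEDIA_SIGNATURES.items():
        if payload.startswith(magic):
            return kind
    return None

def flag_flows(flows):
    """flows: iterable of (destination_host, payload_bytes) pairs."""
    for host, payload in flows:
        kind = classify_payload(payload)
        if kind:
            print(f"possible {kind} exfiltration to {host}")

# Example: a JPEG screenshot leaving the device for an analytics host.
flag_flows([("api.analytics.example", b"\xff\xd8\xff\xe0" + b"rest-of-image")])
```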

Data-analytics services like Appsee raise several issues. First, there seems to be little need for digital agencies to conduct traditional eye-tracking and usability test sessions, since companies can now record, upload, and archive what, when, where, and how often users swipe and select in-app content. Previously, users were invited to user-testing sessions and paid for their participation.

Second, this in-app tracking and data collection amounts to perpetual, unannounced user testing. Previously, companies have gotten into plenty of trouble with their customers by performing secret user testing; especially when the service varies from the standard, expected configuration and the policies (e.g., privacy, terms of service) don't disclose it. Nobody wants to be a lab rat or crash-test dummy.

Third, surveillance agencies within several governments must be thrilled to learn of these new in-app tracking and spy tools, if they aren't already using them. A reasonable assumption is that Appsee also provides data to law enforcement upon demand.

Fourth, two of the researchers at NU are undergraduate students. Another startling disclosure:

"Coming into this project, I didn’t think much about phone privacy and neither did my friends," said Elleen Pan, who is the first author on the paper. "This has definitely sparked my interest in research, and I will consider going back to graduate school."

Given the tsunami of data breaches, privacy legislation in Europe, and demands by law enforcement for tech firms to build "back door" hacks into their mobile devices and smartphones, it is alarming that some college students "don't think much about phone privacy." This means that Pan and her classmates probably haven't read the privacy and terms-of-service policies for the apps and sites they've used. Maybe they will now.

Let's hope so.

Consumers interested in GoPuff should closely read the service's privacy and Terms of Service policies, since the latter includes dispute resolution via binding arbitration and prevents class-action lawsuits.

Hopefully, future studies about privacy and mobile apps will explore further the findings by Pan and her co-researchers. Download the study titled, "Panoptispy: Characterizing Audio and Video Exfiltration from Android Applications" (Adobe PDF) by Elleen Pan, Jingjing Ren, Martina Lindorfer, Christo Wilson, and David Choffnes.


Facebook’s Screening for Political Ads Nabs News Sites Instead of Politicians

[Editor's note: today's post, by reporters at ProPublica, discusses new advertising rules at the Facebook.com social networking service. It is reprinted with permission.]

By Jeremy B. Merrill and Ariana Tobin, ProPublica

One ad couldn’t have been more obviously political. Targeted to people aged 18 and older, it urged them to “vote YES” on June 5 on a ballot proposition to issue bonds for schools in a district near San Francisco. Yet it showed up in users’ news feeds without the “paid for by” disclaimer required for political ads under Facebook’s new policy designed to prevent a repeat of Russian meddling in the 2016 presidential election. Nor does it appear, as it should, in Facebook’s new archive of political ads.

The other ad was from The Hechinger Report, a nonprofit news outlet, promoting one of its articles about financial aid for college students. Yet Facebook’s screening system flagged it as political. For the ad to run, The Hechinger Report would have to undergo the multi-step authorization and authentication process of submitting Social Security numbers and identification that Facebook now requires for anyone running “electoral ads” or “issue ads.”

When The Hechinger Report appealed, Facebook acknowledged that its system should have allowed the ad to run. But Facebook then blocked another ad from The Hechinger Report, about an article headlined, “DACA students persevere, enrolling at, remaining in, and graduating from college.” This time, Facebook rejected The Hechinger Report’s appeal, maintaining that the text or imagery was political.

As these examples suggest, Facebook’s new screening policies to deter manipulation of political ads are creating their own problems. The company’s human reviewers and software algorithms are catching paid posts from legitimate news organizations that mention issues or candidates, while overlooking straightforwardly political posts from candidates and advocacy groups. Participants in ProPublica’s Facebook Political Ad Collector project have submitted 40 ads that should have carried disclaimers under the social network’s policy, but didn’t. Facebook may have underestimated the difficulty of distinguishing between political messages and political news coverage — and the consternation that failing to do so would stir among news organizations.

The rules require anyone running ads that mention candidates for public office, are about elections, or discuss any of 20 “national issues of public importance” to verify their personal Facebook accounts and add a "paid for by" disclosure to their ads, which are to be preserved in a public archive for seven years. Advertisers who don’t comply will have their ads taken down until they undergo an "authorization" process: submitting a Social Security number, driver’s license photo, and home address, to which Facebook sends a letter with a code to confirm that anyone running ads about American political issues has an American home address. The complication is that the 20 hot-button issues — environment, guns, immigration, values, foreign policy, civil rights and the like — are likely to pop up in posts from news organizations as well.
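Facebook keeps its real classifier secret, but the failure mode publishers describe is easy to reproduce with a naive keyword filter over issue terms. The sketch below uses an invented keyword list, not Facebook's, to show why a news headline trips the same wire as an advocacy ad.

```python
# A few of the 20 "national issues of public importance," reduced to
# naive keywords purely for illustration.
ISSUE_KEYWORDS = {"environment", "guns", "immigration", "values",
                  "foreign policy", "civil rights", "health", "taxes"}

def looks_political(ad_text: str) -> bool:
    """Flag any ad whose text mentions an issue keyword (naive matching)."""
    text = ad_text.lower()
    return any(keyword in text for keyword in ISSUE_KEYWORDS)

# An advocacy ad and a news promotion both trip the filter:
print(looks_political("Vote YES to protect our environment!"))             # True
print(looks_political("Our investigation: how immigration courts work"))   # True
# To a keyword filter, news coverage of an issue is indistinguishable
# from advocacy on that issue.
```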

"This could be really confusing to consumers because it’s labeling news content as political ad content," said Stefanie Murray, director of the Center for Cooperative Media at Montclair State University.

The Hechinger Report joined trade organizations representing thousands of publishers earlier this month in protesting this policy, arguing that the filter lumps their stories in with the very organizations and issues they are covering, thus confusing readers already wary of "fake news." Some publishers — including larger outlets like New York Media, which owns New York Magazine — have stopped buying ads on political content they expect would be subject to Facebook’s ad archive disclosure requirement.

"When it comes to news, Facebook still doesn’t get it. In its efforts to clear up one bad mess, it seems set on joining those who want blur the line between reality-based journalism and propaganda," Mark Thompson, chief executive officer of The New York Times, said in prepared remarks at the Open Markets Institute on Tuesday, June 12th.

In a statement Wednesday June 13th, Campbell Brown, Facebook’s head of global news partnerships, said the company recognized "that news content was different from political and issue advertising," and promised to create a "differentiated space within our archive to separate news content from political and issue ads." But Brown rejected the publishers’ request for a "whitelist" of legitimate news organizations whose ads would not be considered political.

"Removing an entire group of advertisers, in this case publishers, would go against our transparency efforts and the work we’re doing to shore up election integrity on Facebook," she wrote."“We don’t want to be in a position where a bad actor obfuscates its identity by claiming to be a news publisher." Many of the foreign agents that bought ads to sway the 2016 presidential election, the company has said, posed as journalistic outlets.

Her response didn’t satisfy news organizations. Facebook "continues to characterize professional news and opinion as ‘advertising’ — which is both misguided and dangerous," said David Chavern, chief executive of the News Media Alliance — a trade association representing 2,000 news organizations in the U.S. and Canada — and co-author of an open letter to Facebook on June 11.

ProPublica asked Facebook to explain its decision to block 14 advertisements shared with us by news outlets. Of those, 12 were ultimately rejected as political content, one was overturned on appeal, and one Facebook could not locate in its records. Most of these publications, including The Hechinger Report, are affiliated with the Institute for Nonprofit News, a consortium of mostly small nonprofit newsrooms that produce primarily investigative journalism (ProPublica is a member).

Here are a few examples of news organization ads that were rejected as political:

  • Voice of Monterey Bay tried to boost an interview with labor leader Dolores Huerta headlined "She Still Can." After the ad ran for about a day, Facebook sent an alert that the ad had been turned off. The outlet is refusing to seek approval for political ads, “since we are a news organization,” said Julie Martinez, co-founder of the nonprofit news site.
  • Ensia tried to advertise an article headlined: "Opinion: We need to talk about how logging in the Southern U.S. is harming local residents." It was rejected as political. Ensia will not appeal or buy new ads until Facebook addresses the issue, said senior editor David Doody.
  • inewsource tried to promote a post about a local candidate, headlined: "Scott Peters’ Plea to Get San Diego Unified Homeless Funding Rejected." The ad was rejected as political. inewsource appealed successfully, but then Facebook changed its mind and rejected it again, a spokeswoman for the social network said.
  • BirminghamWatch tried to boost a post about a story headlined, "‘That is Crazy:’ 17 Steps to Cutting Checks for Birmingham Neighborhood Projects." The ad was rejected as political and rejected again on appeal. A little while later, BirminghamWatch’s advertiser on the account received a message from Facebook: "Finish boosting your post for $15, up to 15,000 people will see it in NewsFeed and it can get more likes, comments, and shares." The nonprofit news site appealed again, and the ad was rejected again.

For most of its history, Facebook treated political ads like any other ads. Last October, a month after disclosing that "inauthentic accounts… operated out of Russia" had spent $100,000 on 3,000 ads that "appeared to focus on amplifying divisive social and political messages," the company announced it would implement new rules for election ads. Then in April, it said the rules would also apply to issue-related ads.

The policy took effect last month, at a time when Facebook’s relationship with the news industry was already rocky. A recent algorithm change reduced the number of posts from news organizations that users see in their news feed, thus decreasing the amount of traffic many media outlets can bring in without paying for wider exposure, and frustrating publishers who had come to rely on Facebook as a way to reach a broader audience.

Facebook has pledged to assign 3,000-4,000 "content moderators" to monitor political ads, but hasn’t reached that staffing level yet. The company told ProPublica that it is committed to meeting the goal by the U.S. midterm elections this fall.

To ward off "bad actors who try to game our enforcement system," Facebook has kept secret its specific parameters and keywords for determining if an ad is political. It has published only the list of 20 national issues, which it says is based in part on a data-coding system developed by a network of political scientists called the Comparative Agendas Project. A director on that project, Frank Baumgartner, said the lack of transparency is problematic.

"I think [filtering for political speech] is a puzzle that can be solved by algorithms and big data, but it has to be done right and the code needs to be transparent and publicly available. You can’t have proprietary algorithms determining what we see," Baumgartner said.

However Facebook’s algorithms work, they are missing overtly political ads. Incumbent members of Congress, national advocacy groups and advocates of local ballot initiatives have all run ads on Facebook without the social network’s promised transparency measures, after they were supposed to be implemented.

Ads from Senator Jeff Merkley, Democrat-Oregon, Representative Don Norcross, Democrat-New Jersey, and Representative Pramila Jayapal, Democrat-Washington, all ran without disclaimers as recently as this past Monday. So did an ad from Alliance Defending Freedom, a right-wing group that represented a Christian baker whose refusal for religious reasons to make a wedding cake for a gay couple was upheld by the Supreme Court this month. And ads from NORML, the marijuana legalization advocacy group and MoveOn, the liberal organization, ran for weeks before being taken down.

ProPublica asked Facebook why these ads weren’t considered political. The company said it is reviewing them. "Enforcement is never perfect at launch," it said.

Clarification, June 15, 2018: This article has been updated to include more specific information about the kinds of advertising New York Media has stopped buying on Facebook’s platform.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


What Facebook’s New Political Ad System Misses

[Editor's Note: today's guest post is by the reporters at ProPublica. It is reprinted with permission.]

By Jeremy B. Merrill, Ariana Tobin, and Madeleine Varner, ProPublica

Facebook’s long-awaited change in how it handles political advertisements is only a first step toward addressing a problem intrinsic to a social network built on the viral sharing of user posts.

The company’s approach, a searchable database of political ads and their sponsors, depends on the company’s ability to sort through huge quantities of ads and identify which ones are political. Facebook is betting that a combination of voluntary disclosure and review by both people and automated systems will close a vulnerability that was famously exploited by Russian meddlers in the 2016 election.

The company is doubling down on tactics that so far have not prevented the proliferation of hate-filled posts or of ads that use Facebook’s capability to target ads to particular groups.

If the policy works as Facebook hopes, users will learn who has paid for the ads they see. But the company is not revealing details about a significant aspect of how political advertisers use its platform — the specific attributes the ad buyers used to target a particular person for an ad.

Facebook’s new system is the company’s most ambitious response thus far to the now-documented efforts by Russian agents to circulate items that would boost Donald Trump’s chances or suppress Democratic turnout. The new policies announced Thursday will make it harder for somebody trying to exploit the precise vulnerabilities in Facebook’s system exploited by the Russians in 2016 in several ways:

First, political ads that you see on Facebook will now include the name of the organization or person who paid for them, reminiscent of disclaimers required on political mailers and TV ads. (The ads Facebook identified as placed by Russians carried no such tags.)

The Federal Election Commission requires political ads to carry such clear disclosures, but, as we have reported, many candidates and groups on Facebook haven’t been following that rule.

Second, all political ads will be published in a searchable database.

Finally, the company will now require that anyone buying a political ad in their system confirm that they’re a U.S. resident. Facebook will even mail advertisers a postcard to make certain they’re in the U.S. Facebook says ads by advertisers whose identities aren’t verified under this process will be taken down starting in about a week, and they will be blocked from buying new ads until they have verified themselves.
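Facebook has not described the mechanics of the mailed-code check, but the flow it outlines reduces to a simple challenge-response keyed to a postal address. A hypothetical sketch, with all names and the code format invented:

```python
import secrets

PENDING = {}  # advertiser id -> one-time code mailed to a U.S. address

def start_verification(advertiser_id: str, us_address: str) -> str:
    """Generate a one-time code and (in reality) print it on a postcard."""
    code = secrets.token_hex(4)
    PENDING[advertiser_id] = code
    print(f"mailing postcard with code to {us_address}")
    return code

def confirm(advertiser_id: str, submitted_code: str) -> bool:
    """The advertiser proves physical receipt of mail at a U.S. address."""
    return PENDING.get(advertiser_id) == submitted_code

code = start_verification("acme-pac", "123 Main St, Springfield, USA")
print(confirm("acme-pac", code))  # True only if the mailed code is echoed back
```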

While the new system can still be gamed, the specific tactics used by the Russian Internet Research Agency, such as an overseas purchase of ads promoting a Black Lives Matter rally under the name “Blacktivist,” will become harder — or at least harder to do without getting caught.

The company has also pledged to devote more employees to the issue, including 3,000-4,000 more content moderators. But Facebook says these will not be additional hires — they will be included in the 20,000 already promised to tackle various moderation issues in the coming months.

What Is Facebook Missing?

The most obvious flaw in Facebook’s new system is that it misses ads it should catch. Right now, it’s easy to find political ads that are missing from their archive. Take this one, from the Washington State Democratic Party. Just minutes after Facebook finished announcing its launch of the tool, a participant in ProPublica’s Facebook Political Ad Collector project saw this ad, criticizing Republican congresswoman Cathy McMorris Rodgers… but it wasn’t in the database.

And there are others.

The company acknowledged that the process is still a work in progress, reiterating its request that users pitch in by reporting the political ads that lack disclosures.

Even as Facebook’s system gets better at identifying political ads, the company is withholding a critical piece of information in the ads it’s publishing. While we’ll see some demographic information about who saw a given ad, Facebook is not indicating which audiences the advertiser intended to target — categories that often include racial or political characteristics and which have been controversial in the past.

This information is critical to researchers and journalists trying to make sense of political advertising on Facebook. Take, for instance, this ad promoting the environmental benefits of nuclear power, from a group called Nuclear Matters: the group chose specifically to show it to people interested in veganism — a fact we wouldn’t know from looking at the demographics of the users who saw the ad.

Facebook said it considers the information about who saw an ad — age, gender and location — sufficient. Rob Leathern, Facebook’s Director of Product Management, said that the limited demographics-only breakdown “offers more transparency than the intent, in terms of showing the targeting.”

The company is also promising to launch an API, a technical tool which will allow outsiders to write software that would look for patterns in the new ad database. The company says it will launch an API “later this summer” but hasn’t said what data it will contain or who will have access to it.

ProPublica’s own Facebook Ad Collector tool, which also collects political ads spotted on Facebook, has an API that can be accessed by anyone. It also includes the targeting information — which users can also see on each ad that they view.
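For readers who want to explore the collected ads programmatically, a query against such an API might look like the sketch below. The endpoint path and field names here are illustrative guesses, not documented values; consult ProPublica's project page for the real interface.

```python
import requests

# NOTE: hypothetical endpoint and field names, for illustration only.
API = "https://projects.propublica.org/facebook-ads/api/ads"

def ads_targeting(term: str):
    """Fetch collected ads and keep those whose targeting mentions `term`."""
    ads = requests.get(API, params={"search": term}, timeout=30).json()["ads"]
    return [ad for ad in ads
            if any(term in str(t).lower() for t in ad.get("targeting", []))]

for ad in ads_targeting("veganism"):
    print(ad["id"], ad.get("paid_for_by"))
```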

Facebook said it would not release data about ads flagged by users as political and then rejected by the system. We’re curious about those, and we know firsthand that their software can be imperfect. We’ve attempted to buy ads specifically about our journalism that were flagged as problematic — because the ads “contained profanity,” or were misclassified as discriminatory ads for “employment, credit or housing opportunities” by mistake.

Facebook’s track record on initiatives aimed at improving the transparency of its massively profitable advertising system is spotty. The company has said it’s going to rely in part on artificial intelligence to review ads — the same sort of technology that the company said in the past it would use to block discriminatory ads for housing, employment and credit opportunities.

When we tested the system almost a year after a ProPublica story showed Facebook was allowing advertisers to target housing ads in a way that violated Fair Housing Act protections, we found that the company was still approving housing ads that excluded African-Americans and other “multicultural affinities” from seeing them. The company was pressured to implement several changes to its ad portal, and a Fair Housing group filed a lawsuit against the company.

Facebook also plans to rely in part on users to find and report political ads that get through the system without the required disclosures.

But its track record of moderating user-flagged content — when it comes to both hate speech and advertising — has been uneven. Last December, ProPublica brought 49 cases of user-flagged offensive speech to Facebook, and the company acknowledged that its moderators had made the wrong call in 22 of them.

The company admits it's playing a “cat and mouse game” with people trying to pass political ads through their system unnoticed. Just last month, Ohio Democratic gubernatorial candidate Richard Cordray’s campaign ran Facebook ads criticizing his opponent — but from a page called “Ohio Primary Info.”

The need for ad transparency goes way beyond Russian bad actors. Our tool has already caught scams and malware disguised as politics, which users raised as a problem years before Facebook made any meaningful change.

If you flag an ad to Facebook, please report it to us as well by sending an email to political.ads@propublica.org. We will be watching to see how well Facebook responds when users flag an ad.

How Will They Enforce the New Rules?

It’s one thing to create a set of rules, and another to enforce them consistently and on a large scale.

Facebook, which kept its content moderation and hate speech policies secret until they were revealed by ProPublica, won’t share the specific rules governing political ad content or details about the instructions moderators receive.

Leathern said the company is keeping the rules secret to frustrate the efforts of “bad actors who try to game our enforcement systems.”

Facebook has said it’s looking to flag both electoral ads and those that take a position on its list of twenty “national legislative issues of public importance”. These range from the concrete, like “abortion” and “taxes,” to broad topics like “health” and “values.”

Facebook acknowledges its system will make mistakes and says it will improve over time. Ads for specific candidates are relatively easy to detect. “We’ll likely miss ads when they aim to persuade,” said Katie Harbath, Facebook’s Global Politics and Government Outreach Director.

We plan to keep an eye out for ads that don’t make it into the archive. We’ll be looking for ads that our Political Ad Collector tool finds that aren’t in Facebook’s database.
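Operationally, that check is a set difference between the ad IDs our extension collects in the wild and the IDs present in Facebook's archive, as in this small sketch (the IDs are invented):

```python
def missing_from_archive(collected_ids, archive_ids):
    """Ads seen by the Ad Collector but absent from the public archive."""
    return sorted(set(collected_ids) - set(archive_ids))

print(missing_from_archive({"ad123", "ad456", "ad789"}, {"ad456"}))
# ['ad123', 'ad789'] -> candidate political ads that escaped disclosure
```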

Want to Help?

We need your help building out our independent database of political ads! If you’re still reading this article, we’re giving you permission to stop and install the Political Ad Collector extension. Here’s what you need to know about how it works.

You can also help us find other people who can install the tool. We are especially in need of people who aren’t ProPublica readers already. We need people from a diverse set of backgrounds, and with different perspectives and political beliefs. Please encourage your friends and relatives — especially the ones you avoid talking politics with — to install it.

Do You Work at a News Outlet and Want to Partner With Us on This?

Awesome. We’re already working with quite a few newsrooms all over the world, including the CBC in Canada, Bridge Magazine in Michigan, The Guardian in Australia and more.

In the U.S., we’re trying to get eyes and ears on the ground in as many local elections as possible. If your readers would be interested in joining our transparency effort, please reach out. We’re happy to send more information about this and our larger Electionland project.


ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.



New Commissioner Says FTC Should Get Tough on Companies Like Facebook and Google

[Editor's note: today's guest post, by reporters at ProPublica, explores enforcement policy by the U.S. Federal Trade Commission (FTC), which has become more important given the "light touch" enforcement approach by the Federal Communications Commission. Today's post is reprinted with permission.]

By Jesse Eisinger, ProPublica

Declaring that "the credibility of law enforcement and regulatory agencies has been undermined by the real or perceived lax treatment of repeat offenders," newly installed Democratic Federal Trade Commissioner Rohit Chopra is calling for much more serious penalties for repeat corporate offenders.

"FTC orders are not suggestions," he wrote in his first official statement, which was released on May 14.

Many giant companies, including Facebook and Google, are under FTC consent orders for various alleged transgressions (such as, in Facebook’s case, not keeping its promises to protect the privacy of its users’ data). Typically, a first FTC action essentially amounts to a warning not to do it again. The second carries potential penalties that are more serious.

Some critics charge that that approach has encouraged companies to treat FTC and other regulatory orders casually, often violating their terms. They also say the FTC and other regulators and law enforcers have gone easy on corporate recidivists.

In 2012, a Republican FTC commissioner, J. Thomas Rosch, dissented from an agency agreement with Google that fined the company $22.5 million for violations of a previous order even as it denied liability. Rosch wrote, “There is no question in my mind that there is ‘reason to believe’ that Google is in contempt of a prior Commission order.” He objected to allowing the company to deny its culpability while accepting a fine.

Chopra’s memo signals a tough stance from Democratic watchdogs — albeit a largely symbolic one, given that Republicans have a 3-2 majority on the FTC — as the Trump administration pursues a wide-ranging deregulatory agenda. Agencies such as the Environmental Protection Agency and the Department of the Interior are rolling back rules, while enforcement actions from the Securities and Exchange Commission and the Department of Justice are at multiyear lows.

Chopra, 36, is an ally of Elizabeth Warren and a former assistant director of the Consumer Financial Protection Bureau. President Donald Trump nominated him to his post in October, and he was confirmed last month. The FTC is led by a five-person commission, with a chairman from the president’s party.

The Chopra memo is also a tacit criticism of enforcement in the Obama years. Chopra cites the SEC’s practice of giving waivers to banks that have been sanctioned by the Department of Justice or regulators allowing them to continue to receive preferential access to capital markets. The habitual waivers drew criticism from a Democratic commissioner on the SEC, Kara Stein. Chopra contends in his memo that regulators treated both Wells Fargo and the giant British bank HSBC too lightly after repeated misconduct.

"When companies violate orders, this is usually the result of serious management dysfunction, a calculated risk that the payoff of skirting the law is worth the expected consequences, or both," he wrote. Both require more serious, structural remedies, rather than small fines.

The repeated bad behavior and soft penalties “undermine the rule of law,” he argued.

Chopra called for the FTC to use more aggressive tools: referring criminal matters to the Department of Justice; holding individual executives accountable, even if they weren’t named in the initial complaint; and “meaningful” civil penalties.

The FTC used such aggressive tactics in going after Kevin Trudeau, infomercial marketer of miracle treatments for bodily ailments. Chopra implied that the commission does not treat corporate recidivists with the same toughness. “Regardless of their size and clout, these offenders, too, should be stopped cold,” he writes.

Chopra also suggested other remedies. He called for the FTC to consider banning companies from engaging in certain business practices; requiring that they close or divest the offending business unit or subsidiary; requiring the dismissal of senior executives; and clawing back executive compensation, among other forceful measures.

ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for their newsletter.


News Media Alliance Challenges Tech Companies To 'Accept Accountability' And Responsibility For Filtering News In Their Platforms

Last week, David Chavern, the President and CEO of News Media Alliance (NMA), testified before the House Judiciary Committee. The NMA is a nonprofit trade association representing over 2,000 news organizations across the United States. Mr. Chavern's testimony focused upon the problem of fake news, often aided by social networking platforms.

His comments first described current conditions:

"... Quality journalism is essential to a healthy and functioning democracy -- and my members are united in their desire to fight for its future.

Too often in today’s information-driven environment, news is included in the broad term "digital content." It’s actually much more important than that. While some low-quality entertainment or posts by friends can be disappointing, inaccurate information about world events can be immediately destructive. Civil society depends upon the availability of real, accurate news.

The internet represents an extraordinary opportunity for broader understanding and education. We have never been more interconnected or had easier and quicker means of communication. However, as currently structured, the digital ecosystem gives tremendous viewpoint control and economic power to a very small number of companies – the tech platforms that distribute online content. That control and power must come with new responsibilities... Historically, newspapers controlled the distribution of their product; the news. They invested in the journalism required to deliver it, and then printed it in a form that could be handed directly to readers. No other party decided who got access to the information, or on what terms. The distribution of online news is now dominated by the major technology platforms. They decide what news is delivered and to whom – and they control the economics of digital news..."

Last month, a survey found that roughly two-thirds of U.S. adults (68%) use Facebook.com, and about three-quarters of those use the social networking site daily. In 2016, a survey found that 62 percent of adults in the United States get their news from social networking sites. The corresponding statistic in 2012 was 49 percent. That 2016 survey also found that fewer adults get their news from other platforms: local television (46 percent), cable TV (31 percent), nightly network TV (30 percent), news websites/apps (28 percent), radio (25 percent), and print newspapers (20 percent).

Mr. Chavern then described the problems with two specific tech companies:

"The First Amendment prohibits the government from regulating the press. But it doesn’t prevent Facebook and Google from acting as de facto regulators of the news business.

Neither Google nor Facebook are – or have ever been – "neutral pipes." To the contrary, their businesses depend upon their ability to make nuanced decisions through sophisticated algorithms about how and when content is delivered to users. The term “algorithm” makes these decisions seem scientific and neutral. The fact is that, while their decision processes may be highly-automated, both companies make extensive editorial judgments about accuracy, relevance, newsworthiness and many other criteria.

The business models of Facebook and Google are complex and varied. However, we do know that they are both immense advertising platforms that sell people’s time and attention. Their "secret algorithms" are used to cultivate that time and attention. We have seen many examples of the types of content favored by these systems – namely, click-bait and anything that can generate outrage, disgust and passion. Their systems also favor giving users information like that which they previously consumed, thereby generating intense filter bubbles and undermining common understandings of issues and challenges.

All of these things are antithetical to a healthy news business – and a healthy democracy..."

Earlier this month, Apple and Facebook executives exchanged criticisms about each other's business models and privacy practices. Mr. Chavern's testimony before Congress also described more problems and threats:

"Good journalism is factual, verified and takes into account multiple points of view. It can take a lot of time and investment. Most particularly, it requires someone to take responsibility for what is published. Whether or not one agrees with a particular piece of journalism, my members put their names on their product and stand behind it. Readers know where to send complaints. The same cannot be said of the sea of bad information that is delivered by the platforms in paid priority over my members’ quality information. The major platforms’ control over distribution also threatens the quality of news for another reason: it results in the “commoditization” of news. Many news publishers have spent decades – often more than a century – establishing their brands. Readers know the brands that they can trust — publishers whose reporting demonstrates the principles of verification, accuracy and fidelity to facts. The major platforms, however, work hard to erase these distinctions. Publishers are forced to squeeze their content into uniform, homogeneous formats. The result is that every digital publication starts to look the same. This is reinforced by things like the Google News Carousel, which encourages users to flick back and forth through articles on the same topic without ever noticing the publisher. This erosion of news publishers’ brands has played no small part in the rise of "fake news." When hard news sources and tabloids all look the same, how is a customer supposed to tell the difference? The bottom line is that while Facebook and Google claim that they do not want to be "arbiters of truth," they are continually making huge decisions on how and to whom news content is delivered. These decisions too often favor free and commoditized junk over quality journalism. The platforms created by both companies could be wonderful means for distributing important and high-quality information about the world. But, for that to happen, they must accept accountability for the power they have and the ultimate impacts their decisions have on our economic, social and political systems..."

Download Mr. Chavern's complete testimony. Industry watchers argue that recent changes by Facebook have hurt local news organizations. MediaPost reported:

"When Facebook changed its algorithm earlier this year to focus on “meaningful” interactions, publishers across the board were hit hard. However, local news seemed particularly vulnerable to the alterations. To assuage this issue, the company announced that it would prioritize news related to local towns and metro areas where a user resided... To determine how positively that tweak affected local news outlets, the Tow Center measured interactions for posts from publications coming from 13 metro areas... The survey found that 11 out of those 13 have consistently seen a drop in traffic between January 1 and April 1 of 2018, allowing the results to show how outlets are faring nine weeks after the algorithm change. According to the Tow Center study, three outlets saw interactions on their pages decrease by a dramatic 50%. These include The Dallas Morning News, The Denver Post, and The San Francisco Chronicle. The Atlanta Journal-Constitution saw interactions drop by 46%."

So, huge problems persist.

Early in my business career, I had the opportunity to develop and market an online service using content from Dow Jones News/Retrieval. That experience taught me that the news - hard news - includes who, what, when, and where something happened. Everything else is opinion, commentary, analysis, an advertisement, or fiction. It is critical to know the differences and to learn to spot each type. Otherwise, you are likely to be misled, misinformed, or fooled.