
26 posts from March 2009

Binghamton University Students Circulate Petition For Removal of CISO

This InformationWeek news article caught my attention:

"Students at Binghamton University in New York are circulating a petition to remove the university's chief information security officer following the discovery of boxes full of documents listing personal information of students and parents in an unlocked storage room. The existence of the unsecured documents was discovered March 6 by a reporter working for student radio station WHRW and disclosed on March 9."

First, kudos to the student reporter. Sloppy and poor data security should be reported. Second, the school's CISO should lose his/her job. This type of data breach happens far too often in higher education institutions:

"A recent report, "Breaches in the Academia Sector," by John Correlli of JMC Privacy Consulting Group, noted that from 2005 through 2007, there were 277 publicly reported breaches at colleges and universities in the United States. Eighty-nine of those incidents followed from unauthorized access, 45 came from accidental online exposure, and 37 were the result of a laptop theft. And of the 263 reported privacy data breaches in the United States in 2008, about one-third (76) occurred at colleges and universities."

The story was also covered in a news broadcast by the local FOX television affiliate.

The good news: the Binghamton students "get it." They understand the importance of good data security and the consequences of poor data security. They understand the importance of accountability... of holding the proper person responsible. That person is the CISO.

Too bad that the University's officials don't get it.


U.K. Parliament Debates ISP-Based Targeted Advertising

This ClickZ news story caught my attention:

"... in the House of Commons, Members of Parliament, Lords, and industry experts met to discuss the privacy implications concerning ISP-level behavioral targeting from companies such as Phorm and NebuAd. The intention of the session, which was hosted by Liberal Democrat Home Affairs spokesperson, Baroness Sue Miller, was to "inform parliamentarians" on the issues surrounding the controversial practice... the majority of conversation focused on ad-targeting technology firm Phorm, which is currently the most advanced U.K. player in the ISP-based behavioral ad space. The company announced in February 2008 that it would partner with three of the U.K.'s largest ISPs in order to sell and target ads based on user's online interaction data. Although a number of Phorm executives were present at the event, the company was refused a place on the panel, according to CEO Kent Ertugrul."

This has implications for the USA. If targeted advertising (a/k/a behavioral advertising) gains acceptance in the United Kingdom, it could make acceptance more likely here in the USA. I have written extensively about behavioral advertising, including the role of ISPs, abuses by ISPs of consumers' privacy, the class-action lawsuit against NebuAd, and AT&T's promise to do behavioral advertising "the right way."

Can ISPs be trusted to do behavioral advertising the right way? In my opinion, no. A few might, but the industry as a whole: no. In the USA, their collective historical actions speak far louder than their collective words and promises.

My position: ISPs should not be allowed to perform behavioral advertising. Period. Why? Behavioral advertising would allow ISPs to collect (and potentially share with vendors) massive amounts of the most sensitive consumer data: every site and web page a consumer visits on the Internet, the amount of time spent at that site and at each page, keywords entered at search engine sites, instant message contents, email contents, and so forth.

The monthly tsunami of data breaches proves that corporations, including ISPs, don't take data security seriously. It seems foolish to allow ISPs to collect and archive more personal data, only for them to later lose it or have it stolen, while consumers bear the credit monitoring and recovery costs. ISPs claim that they will anonymize the data to protect consumers' privacy, but don't offer any guarantees or methods for independent verification. There is no oversight planned, so there is no way for consumers to verify that ISPs would actually live up to their promises. And, there are no penalties for abuses.

ISP-based behavioral advertising would be a train wreck.

Sir Tim Berners-Lee, director of the World Wide Web Consortium, said this about ISPs and behavioral advertising:

"There should be no snooping on the Internet; it's the equivalent of wire tapping, or opening a person's mail... I'm here today to defend the integrity of the Internet."

The article covered the comments by other experts:

"Richard Clayton, treasurer for the Foundation for Information Policy Research, agreed that ISPs had no business in intercepting user communications, stating, "Providing better ads is not the role of the ISP. It's not lawful." Meanwhile, Jim Killock, executive director of the Open Rights Group, argued the practice would "undermine our confidence in governments to preserve our basic human rights..."

According to PCPro News, Berners-Lee's comments drew this response from Phorm's CEO:

"There have been a number of things said that patently misrepresent what we do," he argued. "We have the strongest privacy protection of everyone on the internet." He then went on to claim that the media wouldn't survive without the increased targeted-advertising revenue provided by services such as Phorm... Ertugrul then accused Berners-Lee of speaking from a position of ignorance, claiming the company had invited him to inspect its technology on several occasions, which he had declined."

A position of ignorance? Pleeze! Berners-Lee invented the World Wide Web, without which Ertugrul wouldn't have a company. Ertugrul's response to Berners-Lee was just plain rude. What's insulting is Ertugrul's claim that the media can't survive without behavioral advertising. Where's the proof? If anything, it is ISPs clamoring for behavioral advertising, not media companies.

The strongest privacy protection? Better than banks? I doubt it. If Phorm's privacy protection is so great, it should sell it to banks and financial services companies, both of which have already experienced numerous data breaches here in the USA.

If ISP-based behavioral advertising concerns you (and I surely hope that it does), I encourage you to write to your elected officials in the USA and tell them, "no behavioral advertising for ISPs." Plus, there are two global Facebook groups you should join:


Real Change Underway At Facebook Regarding Consumers' Privacy?

From the New York Times Bits technology blog:

"Facebook’s chief privacy officer, Chris Kelly, is widely expected to take a leave of absence to run for attorney general of California in 2010. A Facebook spokeswoman said on Tuesday that the company had hired Timothy D. Sparapani, a senior lawyer with the American Civil Liberties Union, to become its director of public policy, a new position. At the A.C.L.U., Mr. Sparapani worked on issues including national ID cards, data mining, open government and E-Verify, the Internet system employers use to check an employee’s immigration status. Perhaps most important for Facebook, Mr. Sparapani has deep ties to some of its loudest critics at organizations like the Center for Democracy and Technology, the Electronic Frontier Foundation and the Center for Digital Democracy, which have been raising alarms about Facebook’s increasingly precise ad-targeting technology and its controversially revised terms of service agreement."

The MediaPost Daily Online Examiner reported:

"Jeff Chester, who heads the privacy group Center for Digital Democracy, called Sparapani "an honorable and skillful lobbyist and privacy advocate... It's a smart move on their part... In some ways, it's like bringing a potential critic in-house." The Center for Digital Democracy also says Facebook needs to elaborate on how it will share information about users with outsiders. "Users need to know how third-party developers use the data accessed or collected, including how the data is used for advertising and marketing..."

Does this signal a sincere change at Facebook, or is Facebook trying to "buy goodwill?" The first clue will be how long Sparapani stays at Facebook.


Laid-off Workers Leave With Company Data

This definitely has been the week for reviewing and discussing research reports and surveys. From the BBC News:

"Six out of every 10 employees stole company data when they left their job last year, said a study of US workers. The survey, conducted by the Ponemon Institute, said that so-called malicious insiders use the information to get a new job, start their own business or for revenge... "Our study showed that 59% of people will say 'I'm going to take something of value with me when I go'." The Ponemon Institute, a privacy and management research firm, surveyed 945 adults in the United States who were laid-off, fired or changed jobs in the last 12 months..."


Perimeter Analyzes Retail Data Breaches In the USA (Part Two)

Yesterday's blog post discussed the analysis of retail data breaches by Perimeter eSecurity, including breach notification exemptions, the ambiguity of states' identity-theft laws, and PCI DSS data security standards. Today's post continues to explore the report's findings, since they describe a business system at risk due to its poor data security.

About the increasing number of data breaches reported since 2002, the report concluded:

"You can see a clear growth pattern from the year 2000 through 2006 with a slight dip for 2007... 11 additional states adopted similar [breach notification] legislation in 2005. 17 more states came on board in 2006. 5 additional states in 2007..."

Basically, the growth in the number of breach incidents followed directly from the increasing number of states requiring companies to notify consumers about data breaches. The report also stated:

"The average number of records compromised in a single incident between 2000 and 2007 is 431,077 (which encompasses more than 300 million records over 711 data security breach incidents in the U.S. where the number of records breached was known). 22 incidents include more than 1 million records compromised..."
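The report's average and its totals are consistent with each other, which a quick calculation confirms:

```python
# Sanity-check of the figures quoted above (numbers from the report).
incidents_with_known_counts = 711
average_records_per_incident = 431_077

total_records = incidents_with_known_counts * average_records_per_incident
print(f"{total_records:,}")  # 306,495,747 -- i.e., "more than 300 million"
```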

Retail companies accounted for over 98 million records lost/stolen:

"Credit card information was more than 99% of all information compromised in security breaches... This is different from some other verticals, for example health care, where Social Security numbers are primarily targeted."

Businesses had both more breach incidents and more consumer records lost/stolen than government, education, or health care organizations:

Sector        Breach Incidents   Records Compromised
Business            33%                 77%
Education           30%                  2%
Government          26%                 19%
Health Care         11%                  2%
Total              100%                100%

Perimeter correctly focused on the retail vertical, since the retail industry accounted for the most records compromised within the Business sector. Perimeter also analyzed the sources of retail data breaches:

Retail Source             Breach Incidents   Records Compromised
Hacking                         45%                98%
Careless/untrained user         27%             negligible
Theft                            6%             negligible
Third-party fault                6%             negligible
Malicious insider                4%             negligible
Total                          100%               100%

Most breaches included records in electronic format (99%) and stored within the company (98%). This probably has implications for the health care industry, since there is a big push underway to convert records to electronic formats. If the retail industry's experience is a guide, electronic record format doesn't seem to make consumers' data any safer. It's just as easy to lose or have stolen.

Interested individuals can download the Perimeter eSecurity study (PDF format).


Perimeter Analyzes Retail Data Breaches In the USA (Part One)

In January, Perimeter eSecurity released a research report where the company analyzed data breaches at retail companies in the United States. I am a curious person, so I took the time to wade through this 32-page report.

Perimeter's analysis covered data breaches that occurred from 2000 through 2007. It included both breaches where consumers' sensitive personal information was lost/stolen, and breaches where this information wasn't lost/stolen -- to provide a more complete view of the problem. First, a definition of "data breach:"

"Nearly all organizations maintain records for their customers and employees. When this information falls into the wrong hands, or has the opportunity to be extracted, viewed, captured, or used by an unauthorized individual, it constitutes a data breach."

Corporations are notorious for being tight-lipped about details of their data breaches:

"... nearly one quarter of the incidents did not or could not disclose the number of records that were part of their data security breach."

Here are some of the retail companies listed in the report that didn't disclose the number of records lost/stolen, either because they didn't know or didn't want to tell consumers:

  • April 27, 2001: Egghead.com
  • July 12, 2003: PetCo
  • June 21, 2005: CVS
  • October 8, 2005: Blockbuster
  • November 7, 2005: Papa John's
  • December 12, 2005: Sam's Club
  • February 19, 2007: Stop & Shop
  • March 29, 2007: Radio Shack
  • June 23, 2007: Winn-Dixie
  • July 11, 2007: Disney Movie Club
  • September 28, 2007: Gap, Inc.
  • October 17, 2007: Home Depot
  • October 23, 2007: Blockbuster

This means that all media reports citing statistics about the number of consumers affected by data breaches are low. The true number of lost or stolen records -- and hence affected consumers -- is much higher.

The research report also discussed "PCI DSS" requirements -- the Payment Card Industry (PCI) Data Security Standard (DSS) requirements that companies should follow when handling and storing consumer data. The Perimeter eSecurity report helped me understand what PCI DSS is and how it is used (or is supposed to be used) by companies. PCI DSS is something most consumers aren't aware of, and consumers have no way of verifying whether the companies they shop at or bank with comply with the PCI DSS standards.

The worldwide PCI DSS standards permit companies to store certain portions of consumers' sensitive personal data (e.g., credit card account number, cardholder name), and prohibit the storage of other portions of consumers' data (e.g., information on the magnetic stripe on credit/debit cards, PINs). The standards also specify which data must be protected and the type of protection the company should use (e.g., personal ID required for online access, encryption, etc.). The important points to know:

"PCI DSS requirements are applicable if a Primary Account Number (PAN) is stored, processed, or transmitted. If a PAN is not stored, processed, or transmitted, PCI DSS requirements do not apply. These security requirements apply to all "system components." System components are defined as any network component, server, or application that is included in or connected to the cardholder data environment..."
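To make the "which data can be stored, and how" idea concrete: one well-known PCI DSS rule (Requirement 3.3) says that when a stored PAN is displayed, at most the first six and last four digits may be shown. A minimal sketch of that masking, assuming a simple digits-only PAN format:

```python
# Hedged sketch of PCI DSS-style PAN masking (Requirement 3.3): display at
# most the first six and last four digits; mask everything in between.
def mask_pan(pan: str) -> str:
    # Strip common separators before masking (assumed input formats).
    digits = pan.replace(" ", "").replace("-", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

The point for consumers: even data a company is permitted to store must be masked or otherwise protected whenever it is shown or handled.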

While I am not a data security professional, the PCI DSS standards seem kind of leaky to me. If a company chooses not to store any consumer data, then it seems it doesn't have to abide by these standards. That seems like lax security to me. Maybe some of the security professionals who read this blog can clarify this point.

The report also discussed breach notification and the ambiguity of many states' breach notification laws:

"How quickly is notification required? Vaguely defined in most legislation, except Florida and Ohio (45 days after the security breach), many use the California definition of "the most expedient time possible and without unreasonable delay" and include provisions for the needs of law enforcement."

This may partially explain the delay by many organizations with notifying affected consumers after a data breach. In my experience, IBM notified me in May 2007 after its February 2007 data breach -- about two-and-a-half months later. That's plenty of time for identity thieves to do damage.

Regarding the loss/theft of consumers' sensitive personal data, you'd think that there would not be any exceptions allowing companies to avoid (the cost of) notifying consumers affected by a data breach. Sadly, there are exceptions:

"Among the various states, encryption of customer data generally provides an exception to disclosure requirements... Kansas, Colorado and Delaware are among 18 states that have provisions exempting companies from disclosure if, upon investigation, it is believed that the stolen data will likely not be misused..."

What?! It is prudent to assume the worst so consumers (and the company) can protect themselves in the future. How can company executives truly know the thieves' intent or motives, especially if they don't catch the thieves or recover the stolen data? Even if the criminals' intent was to steal the computer hardware, most criminals are smart and now recognize the value of consumers' sensitive personal data.

That "if you believe" clause in states' laws sounds plain stupid. It may help companies avoid breach notification costs, but it does nothing to protect consumers. If anything, it leaves consumers even more unprotected.

The problem with the exception for encrypted data:

"Security of the encryption keys themselves is also very important. If the keys are stolen along with the data, then the hacker can gain access to the information. These gaps were apparently being considered in Pennsylvania when they passed Senate Bill 712..."

In most breach notification letters I've read, few organizations (e.g., government agencies, corporations) mention whether or not the data was encrypted. Even fewer organizations mention whether or not the hackers also stole the encryption keys.
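The report's point about keys stolen alongside the data can be made concrete with a toy illustration (deliberately NOT real cryptography -- a simple keystream cipher built from a hash, with hypothetical data): encryption only protects breached data if the key itself was kept somewhere else.

```python
# Toy illustration (NOT production crypto): if the encryption key is stored
# on the same server as the data, a thief who steals both reads everything.
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # SHA-256 in counter mode as a keystream; the same call both
    # encrypts and decrypts, since XOR is its own inverse.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

record = b"SSN=078-05-1120"                 # hypothetical breached record
key = b"key-stored-on-the-same-server"      # the core mistake

ciphertext = xor_stream(key, record)
# A thief who steals the ciphertext AND the key recovers the plaintext:
print(xor_stream(key, ciphertext))  # b'SSN=078-05-1120'
```

This is why "the data was encrypted" in a breach notice is only reassuring if the organization also says where the keys were kept.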

And there are still even more exemptions:

"Half of the states with data breach laws specifically mention data redaction as offering an exemption to disclosure requirements (as is the case in Arizona's Senate Bill 1338)..."

What I conclude from the report is this: the true number of data breaches is far higher because we consumers aren't told about breaches that fall into the exemption categories. So, the number of affected consumers is also higher. And, uninformed consumers can't make good decisions about avoiding companies with poor data security habits and records.

All of this is enough to scare the daylights out of anyone. Interested individuals can download the Perimeter eSecurity study (PDF format).


Consumers' Report Card on Data Breach Notification (Part Two)

Yesterday's post covered some of the high-level results from the "Consumers' Report Card on Data Breach Notification" study by the Ponemon Institute and ID Experts. Today, I explore the detailed findings because they highlight just how poor organizations' performance has been with breach responses:

67% of survey respondents indicated that they were notified by a personal letter addressed to them

I found this interesting because of what is happening with the other 33% of respondents. Companies and government agencies are notifying consumers with their account statement or bill, a phone call, e-mail, a phone call from a third party, at the company's web site, or with a general disclosure in the newspaper or television.

Do these organizations not understand that a data breach is a major event for consumers? I guess they don't understand, or don't care. It may save the organization money to notify consumers via phone or e-mail, but consumers are often wary. Most consumers have already experienced plenty of e-mail spam. When a consumer's sensitive personal data is lost or stolen, it deserves a personal letter addressed to the consumer so they know the situation is real and must be addressed quickly.

The study results indicated the type of organization that had the data breach:

"Financial institution or credit card company, 30%; retailer, 23%; government organization, 19%; online merchant, 10%; education institution, 9%; health care provider, 6%"

The survey respondents also reported -- as best they could -- the cause of the data breach:

"Don't know the cause, 50%; lost laptop or other portable device by the company, 19%; lost laptop or other portable device by a third-party, 15%; mishaps in the movement of paper information, 7%"

It is difficult for consumers to trust an organization that refuses to disclose details about its data breach. If organizations want to maintain consumers' trust, then transparency is the way. Do a better job of informing consumers about both the breach cause and the results of post-breach investigations. In my experience, IBM never disclosed the number of records lost/stolen. Nor did IBM disclose the results of its post-breach investigation. This tight-lipped approach may help a company maintain its stock price, but it doesn't help consumers' confidence or customer loyalty.

More results from the study:

"77% of survey respondents were either "concerned" or "very concerned" about the loss or theft of their personal information"

This is good news in that consumers seem to understand the implications of a data breach. The bad news is that organizations' breach responses don't seem to resolve consumers' concerns. Survey respondents were concerned because:

"Possible theft of their identity, 52%; financial losses, 21%; need to transfer their business to another company, 12%; revelation of extremely confidential information, 6%; need to spend time correcting errors to their records, 6%"

Consumers were not satisfied with the organization's breach notification:

"17% of survey respondents either "Strongly Agree" or "Agree" that the organization's communication provided useful information on simple things they could do to protect themselves from identity theft"

This means that 83% -- most survey respondents -- found the organization's breach notification useless or unhelpful. Perhaps the most important finding of the study is what consumers did after receiving the breach notification:

"Contacted the company to learn more about the breach, 35%; discontinued their relationship with the company, 31%; did nothing, 26%; contacted the organization to purchase services to protect their personal information, 13%; made a formal complaint to the organization, 8%"

Consumers want to know details about the breach event, the post-breach investigation, and the free credit monitoring services arranged. Organizations must understand and respond to this. Otherwise, consumers will -- and should -- take their business elsewhere. And, perhaps we'll see some lawsuits when organizations' breach notifications don't meet states' breach notification laws.


Consumers' Report Card on Data Breach Notification (Part One)

Earlier this month, I attended a webinar by ID Experts and Javelin Research titled "Data Breach Defense 2009." The webinar was so popular that it was presented in both January and March. It targeted companies and their need to protect the sensitive personal data they archive.

One of the major points presented was the consumers' drop in confidence of the company after a data breach. ID Experts emphasized this in their news release:

"New Research Reveals 45% of Card Breach Victims Lose Confidence in Their Financial Accounts"

This finding definitely reflected my experience after I was affected by IBM's February 2007 data breach. I definitely lost confidence in the company, especially since computing and data security is their primary business.

ID Experts offered webinar participants a free copy of the "Consumers' Report Card on Data Breach Notification" study (registration required) conducted by the Ponemon Institute and sponsored by ID Experts. While the study is almost a year old, it contains useful information which still applies today, since about 40 states have laws requiring companies to notify affected consumers after a data breach:

"... that legal compliance is the primary goal of many companies' notification efforts. This approach to responding to a data breach does not serve the best interests of consumers and contributes to a breakdown in trust that can impact a company monetarily..."

The researchers also found that consumers affected by a company's data breach:

"... that took advantage of a free or subsidized offering, such as credit report monitoring, were two-and-a-half times more likely to feel that the company was helpful in responding to their concerns..."

IBM offered one year of free credit monitoring, which I took advantage of. That didn't increase my confidence in IBM since, a) one year of free credit monitoring wasn't long enough; and b) the free service focused more on identity recovery than credit report monitoring. This kept IBM's costs low, since it incurred significant expenses only for consumers who needed help fixing identity information and accounts corrupted by identity thieves. Most of IBM's data breach victims probably didn't experience identity fraud, so IBM's expenses for them were minimal.

These two findings imply that we consumers will continue to receive in the future the same types of post-data-breach offers we've seen so far. Why? Simply, it works for companies that suffer data breaches. Some other key findings from the study:

"55% of respondents had been notified of two or more data breaches in the previous 24 months, including 8% with four or more notifications... More than 55% of respondents state that the notification about the data breach occurred more than one month after the incident, and more than 50% of respondents rated the timeliness, clarity, and quality of the notification as either fair or poor... 2% of respondents who had been notified of a data breach experienced identity theft as a result of the breach, while 64% were unsure..."

This clearly indicates that data breaches have become so widespread and frequent that an increasing number of consumers received (or will receive) multiple data breach notifications. Not good! It also indicates that while the status quo works for companies, it doesn't work for consumers. Companies must change their habits about protecting the sensitive personal data stored about employees, former employees, contractors, and customers.

It also indicates that the FTC's tendency toward self-regulation by companies just won't work. It hasn't worked so far.

How accurate is the 2% statistic above? It is definitely on the low side, since the study covered only survey respondents. Surveys were sent to 27,998 consumers and the study included 1,798 participants -- a 6.4% response rate.
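The response-rate figure checks out against the study's numbers:

```python
# Verifying the study's response rate from the figures cited above.
surveys_sent = 27_998
participants = 1_798

response_rate = participants / surveys_sent
print(f"response rate:   {response_rate:.1%}")      # 6.4%
print(f"did not respond: {1 - response_rate:.1%}")  # 93.6%
```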

One wonders why 93%+ didn't respond to the survey. Perhaps they were too busy monitoring their identity information. Or maybe they felt burned once by the company's data breach and were reluctant to share any more personal data with anyone -- an opinion I've heard repeatedly on this blog. Either way, more than 2% of data breach victims experienced ID-theft as a result of the breach.

Previously, I had heard that the true number is closer to 30%. The true statistic can only be discovered over time, since identity thieves will continue to resell and use stolen identity information as long as they consider it usable. That can be months or years.


Consumers Should Care What Personal Data Search Engines Collect

Prior posts covered behavioral advertising: opt-in versus opt-out system structure and notification issues, and the role of Internet Service Providers (ISPs). If you are new to the topic (or new to the Internet), there is a good primer article by CNN Money which highlights the issues consumers should know about search engines:

"Alone with just your computer screen, these searches can feel very private. But Google & Co. gather a lot of information about you as you surf, including the date and time for your search, your search terms, and your IP address, which is an 11-digit number that identifies your computer and, more important as far as advertisers are concerned, your location."

Internet users as young as teenagers need to know that:

"There are many free online tools, but they're not really free," explained Greg Conti, a professor of computer science at West Point and the author of Googling Security: How Much Does Google Know About You? "We end up paying for them with micro-payments of personal information..."

This is important because data breaches happen. Companies participating in targeted advertising will have consumers' data stolen or lost. It's important because consumers' privacy needs to be adequately protected:

"... Yahoo said it would keep personally-identifiable information in its database for no more than 90 days, down from 13 months previously... Google said it would keep your data for nine months instead of 18 months. Microsoft retains your information for 18 months... There are big differences, however, in the way that companies make user information anonymous. Google and Yahoo, for instance, block the last three digits, known in tech circles as the "last octet," of your IP address. Microsoft says it deletes the entire address."

Who knows what is really going on? The problem is that there is no oversight and no independent audit. There is no way for consumers to know if the companies are honoring their data security promises. This is also critically important because so much of consumers' sensitive personal data is outsourced overseas.
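The "last octet" technique the article attributes to Google and Yahoo can be sketched in a few lines. (Note that an IPv4 address is four octets in dotted form, not an 11-digit number as the CNN Money quote suggests; the example address below is hypothetical.)

```python
# Sketch of "last octet" IP anonymization: zero the final 8 bits of an
# IPv4 address, keeping the first three octets for coarse geolocation.
import ipaddress

def drop_last_octet(ip: str) -> str:
    addr = ipaddress.IPv4Address(ip)
    masked = int(addr) & 0xFFFFFF00  # zero the low 8 bits (last octet)
    return str(ipaddress.IPv4Address(masked))

print(drop_last_octet("203.0.113.87"))  # 203.0.113.0
```

Even so, the first three octets still narrow a user down to a small block of addresses, which is exactly why critics argue this is weaker anonymization than deleting the address entirely, as Microsoft says it does.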

I am not saying that more government is the solution. Perhaps an audit group partially funded by the search engine industry is the answer. Perhaps state and federal ID-theft legislation should be amended to require proper anonymizing of consumer data by search engines and other companies participating in targeted advertising. Perhaps amended FTC guidelines are the answer.

Whatever the solution, companies must protect consumers' personal data like nuclear fuel. What do you think?


ISPs Customers Should Avoid Doing Business With

From the ZD Net Zero Day blog:

"... the researchers at FireEye have recently launched a “Bad Actors series” aiming to put the spotlight on some of the currently active badware actors online. The sampled ISPs represent safe havens for drop zones for banker malware, DNSChanger malware, rogue security software and live exploit URLs. From Starline Web Services, to ZlKon, Internet Path/Cernel, HostFresh and UralNet, the series draws a simple conclusion - that dysfunctional abuse departments can indeed act as a driving factor for the growth of cybercrime... Moreover, we cannot talk about cybercrime-friendly ISPs without mentioning the domain registrars of choice for the majority of cybercriminals, which KnujOn keeps profiling. Their February, 2009 Registrar Report states that 10 registrars are responsible for 83% of the fraudulent sites that they’ve analyzed, with the Chinese registrar XIN NET topping the chart for a second time."

Part of being an informed and smart consumer is to avoid doing business with ISPs (Internet Service Providers) that have poor data security -- by allowing malware, facilitating scammers, and/or by performing targeted advertising programs without informing their customers.


Need Some Cash? Report Software Piracy At Your Employer

This is capitalism at its finest. From the Los Angeles Times blog:

"Looking to make some extra cash during the recession? Turn in your bosses and co-workers for using pirated software! The Business Software Alliance said today that it paid out $136,100 last year through its "Know It, Report It, Reward It" program for "verifiable tips about software piracy."

Employees can report software piracy via www.nopiracy.com or a toll-free phone number. Does this program work? In 2008, 42 tipsters earned about $3,200 each, and:

"According to the BSA, several Southern California companies have settled software piracy cases after being reported. Among them were Acorn Engineering of City of Industry, Miller Automotive of Van Nuys, Western Power Products of Bakersfield and Z Gallerie of Gardena... the BSA's total payout increased sharply in 2008, from $23,000 in 2007 and $40,000 in 2006."

My question is this: why stop at offering rewards for tips about software piracy? I'd love to see a rewards program for tips about unreported data breaches at companies. The continual monthly incidents of data breaches indicate that too many companies fail to adequately protect the sensitive personal data of customers, employees, and former employees.

This tips program would cover company locations both in the USA and abroad. The tips could be paid from a pool of corporate fines and settlements through a collective of participating State Attorneys General's offices. Or the program could be administered through a co-operative of ID-theft nonprofits, like the ITRC and the PRC. Either way, it seems like a breach tips program would decrease or halt corporate data breaches.


ISPs Charge UK Child Abuse Detectives For Data

Last year, I started to follow the corporate responsibility and consumer-data-privacy habits of ISPs (Internet Service Providers) after several ISPs secretly performed targeted advertising programs without notifying their subscribers and without providing their subscribers with an opt-out method. During the last year or two, both US and UK ISPs have made arrangements with targeted advertising vendors.

Given this, an IT Pro Fit For Business news story caught my attention:

"The Child Exploitation and Online Protection Centre (CEOP) told the BBC following a freedom of information request that since April of 2006 it had made 9,400 requests for user information, at a total cost of £171,505.99... the CEOP centre’s chief executive Jim Gamble said the body expected to pay as much as £100,000 to ISPs to get the information they needed to find children who were being abused – and the criminals hurting them."

Do the math and that is about £18.25 per request or about US $36. You'd think that ISPs would not charge government child abuse detectives for data. The CEOP uses information from ISPs to track online predators:

"... essentially to put a name and location to IP addresses. Some ISPs charge to supply that data, while others do not."

I think that everyone will agree that protecting children from abuse, finding abused and abducted children, and prosecuting predators is a high-priority task. I wonder what distinguishes an ISP that charges abuse detectives for data from an ISP that provides that data for free. Is one ISP simply better managed than the other? Is one ISP more greedy than the other? Or is the decision to charge detectives determined by company size?

I wonder why government detectives haven't negotiated a volume-discount arrangement with ISPs for IP data, since they know they will make a certain number of requests each year. Do the math and the 9,400 requests since 2006 equal about 3,130 requests per year.
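For the curious, the arithmetic above is simple division. A quick back-of-the-envelope sketch in Python, using only the figures quoted in the news story (the US-dollar conversion is the article's own and is not recomputed here):

```python
# Figures quoted from the IT Pro / BBC story above.
total_cost_gbp = 171_505.99  # total CEOP has paid ISPs since April 2006
requests = 9_400             # data requests made in that period
years = 3                    # roughly April 2006 through early 2009

cost_per_request = total_cost_gbp / requests
requests_per_year = requests / years

print(round(cost_per_request, 2))  # about 18.25 (pounds per request)
print(round(requests_per_year))    # about 3133 (requests per year)
```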

I wonder what UK citizens think of this. Are they curious to know which ISPs charge for supplying IP data to abuse detectives? You would think that they would want to know where their tax money is spent. Would UK citizens shift their subscriptions to ISPs that don't charge abuse detectives for the data?

What do you think?


The Risks of Disclosing Your Birthday on Facebook And Other Social Networking Sites

Here in Boston, Channel 7 television news broadcast an interesting identity-theft report which I believe all consumers of popular social networking sites -- like Facebook, MySpace, Twitter -- should be aware of:

"When you're connecting online - and filling out your profile - experts say you could be leaving yourself vulnerable. The trouble can start with something as simple as your birthday... It seems harmless putting your birth date in your profile - but with that one bit of info we found identity thieves can actually get a copy of your birth certificate - a key document. And it's not hard to do."

The news broadcast showed how the reporters were able to get a valid copy of consumers' birth certificates. Some states, like Massachusetts, have lax procedures for distributing birth certificates. The bottom line: your complete birthday (e.g., March 12, 1984) is one of the valuable pieces of sensitive personal data which identity thieves can (and do) use. Here's how consumers can protect their identity information:

"Experts say, to keep safe - Never list your birthday publicly. Online - you should only become "friends" with people you know. And make sure your profile page is set to "private" or "friends only."

Should consumers display a partial birthday (e.g., March 12)? I've noticed that some Facebook users do. I don't disclose my birthday at all. Why? Even a partial birthday seems risky, because a determined identity thief can pretty easily deduce your birth year from your high school (or college) graduation year.

It's important for consumers to practice safe identity-protection habits when using social media sites. Interested consumers can view the video or read the transcript at the Channel 7 site. Want to learn more? Read this prior post about birth dates.


Check Scam Still Operating at Craigslist

Fraudsters are still running the check scam on Craigslist, so consumers should be aware. A coworker, Peter, shared with me his experience, which is instructive.

A person named "Phil" replied to Peter's ad about the furniture Peter was selling (for about $150). Peter came to me because Phil's e-mail reply seemed not quite right (typos and all):

"I really appreciate your response to my inquiry. Im interested in buying the Item from you. I would love to come and check it myself but I just got >married and im presently on my honeymoon trip to Hawaii with my wife.I would love a surprise change of furniture in our home on our return.Pls do withdraw the advert from Craigslist as i dont mind adding $20 for you to do that, so i can be rest assured that the item is held for me. I should believe it is in good condition as stated. I will be making the payment via a Check, which my secretary will mail to you. I'll be picking the item from you with the aid of my mover. My Mover will be coming to pick it from you once the Check has been cashed. Pls I will need both your full name and physical address along with your phone number to issue out the payment.
Thanks,
Phil"

Is Phil a scammer? It's a little hard to tell at first, but the above e-mail had some clues. A person really interested in the furniture would come by and look at it before offering payment, and perhaps negotiate the final price (if the furniture had some dings or tears). Phil was operating without seeing the furniture.

Peter was understandably uncomfortable with giving Phil his address information, but he wanted to sell the furniture. Phil and Peter traded a couple more e-mail messages (Peter's attempt to determine Phil's true level of interest), and Phil ultimately sent a check via FedEx -- for $2,600, made out by the American Math Society, a real organization.

Correctly, Peter started watching his credit reports for any irregularities, placed Fraud Alerts on his credit reports, and contacted both FedEx and the American Math Society (AMS). Both organizations know about the scam. So far, nobody can find Phil.

According to the About Scams page at the Craigslist site, the check scam works like this:

"Most scams involve one or more of the following: inquiry from someone far away, often in another country; Western Union, Money Gram, cashier's check, money order, shipping, escrow service, or a "guarantee"; inability or refusal to meet face-to-face before consummating transaction"

Here's the payoff part of the scam:

"A distant person offers a genuine-looking (but fake) cashier's check... value of cashier's check often far exceeds your item - scammer offers to "trust" you, and asks you to wire the balance via money transfer service; banks will often cash these fake checks AND THEN HOLD YOU RESPONSIBLE WHEN THE CHECK FAILS TO CLEAR, including criminal prosecution in some cases!"

Is the $2,600 check real? Of course not. At some point, Phil would have asked Peter via e-mail to wire him money covering the difference between the bogus check amount and the price of Peter's furniture; and Phil would have quickly disappeared. Peter would have been stuck with a bogus $2,600 check, bounced-check fees from his bank, and out roughly $2,450 (the difference between the bogus check amount and the value of Peter's used furniture, wired to Phil).

But, by being alert with some healthy skepticism, Peter avoided all of this and didn't get scammed. A word to the wise: fraudsters attempt this check scam using other organizations' names besides the AMS, on other online-ad sites besides Craigslist, on job search sites, and in fake parking-ticket scams. So be alert, and warn your family, friends, and neighbors.


Would You Sign A "Mutual Agreement to Maintain Privacy" With Your Doctor?

This blog is about empowering consumers. Recently, the MediaPost blog reported:

"In the five years since he co-founded RateMDs.com, a site where patients rate their doctors, John Swapceinski has been threatened with lawsuits at least once a week... But, starting six months ago, the nature of the threats changed. That's when Swapceinski began hearing from doctors who said that reviews on the site violated contracts with their patients. Apparently, some physicians are now asking patients to sign agreements in which they promise they won't review their doctors online."

The "Mutual Agreement to Maintain Privacy" contract (MAMP) is the legal ruse doctors use to stifle consumer discussion. Its name may sound harmless enough, but it isn't. It's an attempt to bully consumers and shut down sites like RateMDs.com. I visit the doctor to maintain my health, not to give up some of my online rights. And I suspect most consumers feel the same way.

"A company called Medical Justice has masterminded at least some of these agreements. The company, founded by doctor (and law school graduate) Jeffrey Segal, has signed up 2,000 physicians nationwide. Segal tells MediaPost that the majority of them ask patients to sign the privacy agreements."

The U.S. Federal Trade Commission solicits online complaints from consumers about a variety of topics -- including medical ID-theft, medical fraud, and data breaches. A MAMP seems to contradict that. It'd be easy to say that these doctors don't get the social aspects of the Internet, and are guided by fear. The bigger issue is: what would you do if confronted by a doctor asking you to sign a MAMP?

Almost everyone I know participates in one or several social media sites (e.g., Facebook, Twitter, FriendFeed, etc.) where they discuss things online. Many product sites allow consumers to submit comments online. Same for cooking sites with recipes. Why should doctors be any different?

I think it's fair to ask any doctor requesting a signed MAMP what the benefit is for the patient. Doctors and medical practices are already bound by medical privacy laws, including the Health Insurance Portability and Accountability Act (HIPAA) for patients' Protected Health Information (PHI). So, what's the benefit for patients to sign a MAMP? Are these MAMP-using doctors offering anything in return -- like enhanced data security or discounted services? I think not. That's what makes a MAMP bogus.

And while you are visiting your doctor, you might ask him or her what the practice is doing to protect the sensitive personal information in your medical records. A MAMP is worthless if the medical practice has poor or no data security methods in place.


Still Lopsided Terms And Conditions: Facebook

On Friday, I finally gave in to the Force and went over to the "dark side" and joined Facebook. Yeah, that's a surprise. Call me crazy, after all I've written about Facebook's privacy and data security issues. But, my move was practical. When your boss at work sends you a link to a Facebook page he expects you to read, it's wise to read it.

I also joined because, after last week's post about the Adzilla class-action, new I've Been Mugged readers were looking for me on Facebook. While signing up, I reviewed again the September 23, 2008 Terms And Conditions at the Facebook site [bold added for emphasis]:

"When you post User Content to the Site, you authorize and direct us to make such copies thereof as we deem necessary in order to facilitate the posting and storage of the User Content on the Site. By posting User Content to any part of the Site, you automatically grant, and you represent and warrant that you have the right to grant, to the Company an irrevocable, perpetual, non-exclusive, transferable, fully paid, worldwide license (with the right to sublicense) to use, copy, publicly perform, publicly display, reformat, translate, excerpt (in whole or in part) and distribute such User Content for any purpose, commercial, advertising, or otherwise, on or in connection with the Site or the promotion thereof, to prepare derivative works of, or incorporate into other works, such User Content, and to grant and authorize sublicenses of the foregoing. You may remove your User Content from the Site at any time. If you choose to remove your User Content, the license granted above will automatically expire, however you acknowledge that the Company may retain archived copies of your User Content. Facebook does not assert any ownership over your User Content; rather, as between us and you, subject to the rights granted to us in these Terms, you retain full ownership of all of your User Content and any intellectual property rights or other proprietary rights associated with your User Content."

Then there is this interesting section of the Facebook Privacy Policy:

"You post User Content (as defined in the Facebook Terms of Use) on the Site at your own risk. Although we allow you to set privacy options that limit access to your pages, please be aware that no security measures are perfect or impenetrable. We cannot control the actions of other Users with whom you may choose to share your pages and information. Therefore, we cannot and do not guarantee that User Content you post on the Site will not be viewed by unauthorized persons."

This data security statement is pretty similar to the terms I reviewed at Mint.com. These terms may be what Facebook, or social media sites, need to survive and operate profitably. They don't work for me.

While I am no lawyer, the above copy seems pretty clear. It pretty much guarantees that I will use Facebook minimally, posting the absolute minimum amount of personal information.


How Your Employer Guarantees It Will Experience Data Breaches In The Future

This SANS Internet Storm Center diary entry, "How To Suck At Information Security," should be required reading for the CIO, CSO, C-suite executives, and IT pros in every company and government agency -- not just to get their jobs, but also to stay in them. Author Lenny Zeltser does a good job of listing all of the bad habits companies and their executives use to perpetuate poor data security.

After reading about numerous corporate data breaches during the last two years, I've shortened Zeltser's list of poor data-security habits to those most closely related to data breaches. So, here's my list of ways senior management where you work can ensure that your employer will experience a data breach -- and lose, or have stolen, the sensitive personal data of employees, former employees, and customers:

  • Ignore regulatory compliance requirements.
  • Make sure none of the employees finds the [data security] policies.
  • Assume that if the policies worked for you last year, they'll be valid for the next year.
  • Assume that being compliant means you're secure.
  • Make someone responsible for managing risk, but don't give the person any power to make decisions.
  • Assume you don't have to worry about security, because your company is too small or insignificant.
  • Assume you're secure because you haven’t been compromised recently.
  • Impose security requirements without providing the necessary tools and training.
  • Expect SSL to address all security problems with your web application.
  • Stop learning about technologies and attacks.
  • Use the same password on systems that differ in risk exposure or data criticality.

Do you recognize any of these bad habits where you work? If so, you might share this list with the senior management at your employer.