FTC and EU Weigh in on Face Recognition Applications – Why Limiting the Use of This Technology Matters

August 1, 2012

Who should own and control data about your face? Should companies be able to collect and use your facial data at will?

Is it enough to let users opt out of facial recognition, or should companies be required to obtain your explicit opt-in before collecting your facial data? If a company has multiple services, is one opt-in enough, or should it be required to seek your permission for every new type of use? Under what conditions should a company be able to sell and monetize its ability to recognize you?[i]

There are a lot of cool uses for facial recognition tools, but how informed are you about the risks? How do you weigh the pros and cons to make an informed choice about who can identify you?

Governments are paying greater attention to potential privacy threats

The Federal Trade Commission (FTC) has just released a preliminary report identifying the latest facial recognition technologies and how companies are currently using them. The report also outlines the FTC’s plan for creating best-practice guidelines for the industry, which should come out later this year.

In Europe, concerns over facial recognition technologies’ potential to breach personal privacy have resulted in a similar review.

This is great news for consumers. It signals a shift in the timing of privacy reviews from a reactive approach, where guidelines come only after consumers have largely had their privacy trampled, to a far more proactive approach to protecting consumers’ online privacy, safety, and security.

In response, companies like Facebook and Google are dramatically increasing their lobbying budgets and campaign funding

It is no coincidence that as government bodies increase their focus on consumers’ online privacy, the companies making the biggest bucks from selling information about you – and access to you – are pouring money and human resources into influencing the government’s decisions.

According to disclosure forms obtained by The Hill, “Facebook increased its lobbying spending during the second quarter of 2012, allocating $960,000, or three times as much as during the same three-month period in 2011”.

And a report in the New York Times noted that “With Congress and privacy watchdogs breathing down its neck, Google is stepping up its lobbying presence inside the Beltway — spending more than Apple, Facebook, Amazon and Microsoft combined in the first three months of the year.” Google spent $5.03 million on lobbying from January through March of this year, a record for the Internet giant, and a 240 percent increase from the $1.48 million it spent on lobbyists in the same quarter a year ago, according to disclosures filed Friday with the clerk of the House.

In addition to lobbying spending, these companies, their political action committees (PACs), and the billionaires behind them have exorbitant amounts of money available for political contributions – chits to be called in when privacy decisions that could impact their bottom line hang in the balance.

Here’s what today’s facial recognition technologies can do – and are already doing:


It only takes a quick look for you to identify someone you know; yet facial recognition technologies are both faster and more accurate than people will ever be – and they are capable of identifying billions of individuals.

Although many companies are still using basic, and largely non-invasive, facial recognition tools to simply recognize whether there is a face in a photo, an increasing number of companies are leveraging advanced facial recognition tools that can have far-reaching ramifications for your privacy, safety, and even employability.
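To make the distinction concrete, here is a minimal, illustrative sketch of the “basic” end of that spectrum – simply detecting whether a photo contains a face at all – written in Python with OpenCV’s bundled Haar cascade (the file name is a placeholder, and no identity is involved):

    import cv2

    # Load OpenCV's bundled frontal-face detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")                      # placeholder path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detection only answers "is there a face, and where?"
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("Found %d face(s)" % len(faces))

The advanced tools discussed below go a step further: they compute a numeric signature of each detected face and match it against signatures from already-identified photos.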

Advanced facial recognition solutions include Google+’s Tag My Face, Facebook’s Photo Tag Suggest, Android apps like FaceLock and Visidon AppLock, and Apple apps like Klik, FaceLook, and Age Meter; then there are services like SceneTap, FACER Celebrity, FindYourFaceMate.com, and DoggelGanger.com. New services leveraging these features will become increasingly common – particularly if strict privacy regulations aren’t implemented.

Some companies use facial recognition services in their photo and video applications to help users recognize people in photos, or even automatically tag them for you. (You may not want to be tagged in a particular photo, but if you allow photo tagging you can only try to minimize the damage; you can’t proactively prevent it.)

Some services use facial recognition for security purposes; your face essentially becomes your unique password (but what do you do if it gets hacked? Change your face??).

What are the potential risks of facial recognition tools to individuals?

The Online Privacy Blog enumerates some of the risks in easily understood terms; here is an excerpt from their article The Top 6 FAQs about Facial Recognition:

Take the massive amount of information that Google, Facebook, ad networks, data miners, and people search websites are collecting on all of us; add the info that we voluntarily provide to dating sites, social networks, and blogs; combine that with facial recognition software; and you have a world with reduced security, privacy, anonymity, and freedom. Carnegie Mellon researchers predict that this is “a world where every stranger in the street could predict quite accurately sensitive information about you (such as your SSN, but also your credit score, or sexual orientation)” just by taking a picture.

Risk 1:  Identity theft and security

Think of your personal information—name, photos, birthdate, address, usernames, email addresses, family members, and more—as pieces of a puzzle. The more pieces a cybercriminal has, the closer he is to solving the puzzle. Maybe the puzzle is your credit card number. Maybe it’s the password you use everywhere. Maybe it’s your social security number.


Facial recognition software is a tool that can put all these pieces together. When you combine facial recognition software with the wealth of public data about us online, you have what’s called “augmented reality”: “the merging of online and offline data that new technologies make possible.” You also have a devastating blow to personal privacy and an increased risk of identity theft.

Once a cybercriminal figures out your private information, your money and your peace of mind are in danger.  Common identity theft techniques include opening new credit cards in your name and racking up charges, opening bank accounts under your name and writing bad checks, using your good credit history to take out a loan, and draining your bank account.  More personal attacks may include hijacking your social networks while pretending to be you, reading your private messages, and posting unwanted or embarrassing things “as” you.

The research:  how facial recognition can lead to identity theft

Carnegie Mellon researchers performed a 2011 facial recognition study using off-the-shelf face recognition software called PittPatt, which was purchased by Google. By cross-referencing two sets of photos—one taken of participating students walking around campus, and another taken from pseudonymous users of online dating sites—with public Facebook data (things you can see on a search engine without even logging into Facebook), they were able to identify a significant number of people in the photos. Based on the information they learned through facial recognition, the researchers were then able to predict the social security numbers of some of the participants.

They concluded this merging of our online and offline identities can be a gateway to identity theft:

If an individual’s face in the street can be identified using a face recognizer and identified images from social network sites such as Facebook or LinkedIn, then it becomes possible not just to identify that individual, but also to infer additional, and more sensitive, information about her, once her name has been (probabilistically) inferred.
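To make the mechanics of that cross-referencing concrete, here is a toy sketch in Python using the open-source face_recognition library – not the PittPatt software the researchers actually used – with placeholder file names; a real study would compare thousands of photos, not two:

    import face_recognition

    # An "identified" photo scraped from a public social network profile.
    known_img = face_recognition.load_image_file("public_profile_photo.jpg")
    known_encoding = face_recognition.face_encodings(known_img)[0]

    # An "anonymous" photo, e.g. a snapshot of a stranger on the street
    # or a pseudonymous dating-site picture.
    unknown_img = face_recognition.load_image_file("street_photo.jpg")

    for encoding in face_recognition.face_encodings(unknown_img):
        # compare_faces returns True when the two face signatures are close enough.
        if face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]:
            print("Probable match: the anonymous face can now be tied to a named profile.")

Once the anonymous face has a name attached, everything publicly associated with that name – employer, hometown, date of birth – comes along with it.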

Some statistics on identity theft from the Identity Theft Assistance Center (ITAC):

  • 8.1 million adults in the U.S. suffered identity theft in 2011
  • Each victim of identity theft loses an average of $4,607
  • Out-of-pocket losses (the amount you actually pay yourself, as opposed to what your credit card company covers) average $631 per victim
  • New account fraud, where thieves open new credit card accounts in their victims’ names, accounted for $17 billion in fraud
  • Existing account fraud accounted for $14 billion.

Risk 2:  Chilling effects on freedom of speech and action

Facial recognition software threatens to censor what we say and limit what we do, even offline. Imagine that you’re known in your community for being an animal rights activist, but you secretly love a good hamburger. You’re sneaking in a double cheeseburger at a local restaurant when, without your knowledge, someone snaps a picture of you. It’s perfectly legal for someone to photograph you in a public place, and aside from special rights of publicity for big-time celebrities, you don’t have any rights to control this photo. This person may not have any ill intentions; he may not even know who you are. If he uploads it to Facebook, and Facebook automatically tags you in it, you’re in trouble.

Anywhere there’s a camera, there’s the potential that facial recognition is right behind it.

The same goes for the staunch industrialist caught at the grassroots protest; the pro-life female politician caught leaving an abortion clinic; the CEO who has too much to drink at the bar; the straight-laced lawyer who likes to dance at goth clubs.  If anyone with a cell phone can take a picture, and any picture can be tied back to us even when the photographer doesn’t know who we are, we may stop going to these places altogether.  We may avoid doing anything that could be perceived as controversial.  And that would be a pity, because we shouldn’t have to.

Risk 3:  Physical safety and due process

Perhaps most importantly, facial recognition threatens our safety.  It’s yet another tool in stalkers’ and abusers’ arsenals.  See that pretty girl at the bar?  Take her picture; find out everything about her; pay her a visit at home.  It’s dangerous in its simplicity.

There’s a separate set of risks from facial recognition that doesn’t do a good job of identifying targets:  false identifications.  An inaccurate system runs the risk of identifying, and thus detaining or arresting, the wrong people.  Let’s say that an airport scans incoming travelers’ faces to search for known terrorists.  Their systems incorrectly recognize you as a terrorist, and you’re detained, searched, interrogated, and held for hours, maybe even arrested.  This is precisely why Boston’s Logan Airport abandoned its facial recognition trials in 2002:  its systems could only identify volunteers 61.4 percent of the time.

Learn more about facial recognition technologies, how they work and what the risks are in these resources:

Three steps to protecting your facial data:

  1. There are many positive uses for facial recognition technologies, but the lack of consumer protections makes them unnecessarily risky. Until the control and management of this data is firmly in the hands of consumers, proactively opt out of such features and avoid services where opting out is not an option.
  2. Voice your concerns to elected officials to offset the impact of corporate lobbying and campaign contributions intended to soften proposed consumer protections.
  3. Voice your frustration to the companies that are leveraging this technology without giving you full control over your facial data – including the ability to have it removed, to block it from being sold, traded, or shared, and to explicitly specify when and how this data can be used, whether on its own or combined with other data about you. If a company does not respect your wishes, stop using it. If you allow yourself to be exploited, plenty of companies will be happy to oblige.

Linda


[i] See The One-Way-Mirror Society – Privacy Implications of Surveillance Monitoring Networks to understand some implications of facial recognition tools’ use when companies sell this information.


Why would Facebook want to enroll children in a service built with little regard for adult safety/privacy/security? For the money.

June 7, 2012

A floundering Facebook is under increased pressure to shore up its revenue and flat per-user minutes. So it was no surprise to see the Wall Street Journal report that Facebook is developing technology to allow children younger than 13 years old to use the social-networking service under parental supervision, in spite of its abysmal track record in protecting older consumers.

Yet Facebook is already in deep trouble over its consistent encroachment on consumer privacy, the service is a hotbed for malware and scams, and its advertising is not suitable for younger users.

The issues around Facebook’s interest in onboarding the under 13’s fall into three categories:

  • Facebook’s predatory privacy practices
  • Facebook’s financial woes
  • There is a need for a responsible social network where children, adults and commercial content can mix, but Facebook isn’t it

Facebook’s Predatory Privacy Practices

From its inception, Facebook has shown a deliberate disregard for consumer privacy, trust, or safety. This attitude was evidenced by founder Mark Zuckerberg’s early IM comments, and has continued ever since through the company’s privacy policy choices and its blatant deception and exploitation of users’ information. Consider the following points:

  • Consumers’ feelings of betrayal run so high that 70% of Facebook users say they do not trust Facebook with their personal information.
  • The FTC found Facebook’s assault on consumer privacy so egregious that last fall (2011) it charged Facebook with deceiving consumers by failing to keep their privacy promises. The FTC complaint lists a number of instances in which Facebook allegedly made promises that it did not keep:
    • In December 2009, Facebook changed its website so certain information that users may have designated as private – such as their Friends List – was made public. They didn’t warn users that this change was coming, or get their approval in advance.
    • Facebook represented that third-party apps that users installed would have access only to user information that they needed to operate. In fact, the apps could access nearly all of users’ personal data – data the apps didn’t need.
    • Facebook told users they could restrict sharing of data to limited audiences – for example with “Friends Only.” In fact, selecting “Friends Only” did not prevent their information from being shared with third-party applications their friends used.
    • Facebook had a “Verified Apps” program and claimed it certified the security of participating apps. It didn’t.
    • Facebook promised users that it would not share their personal information with advertisers. It did.
    • Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
    • Facebook claimed that it complied with the U.S.- EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn’t.

The settlement bars Facebook from making any further deceptive privacy claims, requires that the company get consumers’ approval before it changes the way it shares their data, and requires that it obtain periodic assessments of its privacy practices by independent, third-party auditors for the next 20 years.

Facebook also failed from the beginning to build in strong security, monitoring, or abuse tracking technologies – issues it has attempted to patch with varying degrees of seriousness and success, as evidenced by the constant circulation of malware on the service, breaches of consumers’ information, consumers’ inability to reach a human for help when abuse occurs, and so on.

Facebook’s financial woes

It’s always about the money. With over 900 million users Facebook is still a force to be reckoned with, but its trajectory is looking a lot more like the climax before the cliff that MySpace faced.

Facebook’s financial problems have been building for some time, but the company’s IPO brought financial scrutiny to the forefront and highlighted its need to infuse new blood into the service – even if that means exposing children to a service that already poses clear risks to adults. Here’s a quick recap of the financial failings of Facebook:

  • Facebook stock is in a free fall, closing at $26.90 on June 4th, when this article was written. That’s down more than $11, or 29%, in the first 17 days of trading – and the stock continued to fall in after-hours trading.
  • The IPO valuation fiasco is far from over; Zuckerberg is now being sued for selling more than $1 billion in shares just before stock prices plummeted. The suit says Facebook “knew there was not enough advertising revenue to support a $38 stock valuation but hid that revenue information in order to push up the share price”.
  • In April, Facebook’s payments revenue went flat according to Business Insider. After growing a consistent 20% quarter over quarter, in the first quarter of this year “revenue from payments and other fees [from games and partners] actually fell slightly, according to its latest filing with the Securities and Exchange Commission.”

  • A new Reuters/Ipsos poll shows that 4 out of 5 Facebook users have never bought a product or service as a result of ads or comments on the site, highlighting Facebook’s inability to successfully market to users.

  • The amount of time users spend on Facebook has also gone flat according to ComScore, a fact also highlighted by the Reuters/Ipsos poll which found that 34% of Facebook users surveyed were spending less time on the website than six months ago, whereas only 20% were spending more.


  • Advertisers are bailing. A nasty reality check came from General Motors just days before the company’s IPO, as GM pulled its ad campaigns saying “their paid ads had little effect on customers,” according to the Wall Street Journal.

And an article on HuffingtonPost.com reports that “more than 50 percent of Facebook users say they never click on Facebook’s sponsored ads, according to a recent Associated Press-CNBC poll. In addition, only 12% of respondents say they feel comfortable making purchases over Facebook, begging the question of how the social network can be effectively monetized.”

It’s easy to see why investors are angry, lawsuits have been filed and the government is investigating the debacle.

When 54% of consumers distrust the safety of making purchases through Facebook, 70% of consumers say they do not trust Facebook with their personal information, and reports are surfacing that consumer distrust in Facebook deepened as a result of issues around the IPO, Facebook is looking more tarnished than ever.

As Nicholas Thompson wrote in The New Yorker, “Facebook, more than most other companies, needs to worry deeply about its public perception. It needs to be seen as trustworthy and, above all, cool. Mismanaging an I.P.O. isn’t cool, neither is misleading shareholders. Government investigations of you aren’t cool either.” And: “The reputation of Facebook’s management team has also been deeply tarnished, particularly by the accusations that it wasn’t entirely open to investors about declining growth in its advertising business.”

There is a need for a responsible social network where children, adults and commercial content can mix, but Facebook isn’t it

Facebook has identified a real gap. There is a legitimate need for a social networking platform where kids can interact with adult family members and other trusted older individuals as well as commercial entities.

This need is evidenced by the 5.6 million underage youth still using Facebook today, with or without their parents’ permission, for lack of a more appropriate solution.

This 5.6 million underage user number is noteworthy for two reasons:

A) It shows a significant number of children are using the site.

B) More importantly, it represents a 25.3% reduction in underage users over the past year, when Consumer Reports found 7.5 million underage users on the site. One could reasonably assume that the dramatic drop in underage use of Facebook is precisely because Consumer Reports published its data and alarmed parents stepped in to block their children’s use.

That 25.3% reduction strikes at the very heart of two of Facebook’s and its advocates’ key tenets: 1) since so many underage kids are already using Facebook, it would be safer for them if Facebook opened up the service to underage users and gave parents some access and controls to manage their use, and 2) parents want their children to be able to use Facebook.

To be clear, of the 5.6 million children still on the service, many have parents who helped them get onto Facebook. According to Dr. Danah Boyd, one of the researchers studying the issue of parents helping children get on the site, the reason parents gave for allowing their children on Facebook is that they “want their kids to have access to public life and, today, what public life means is participating even in commercial social media sites.”

Boyd added that the parents helping their kids with access “are not saying get on the sites and then walk away. These are parents who have their computers in the living room, are having conversations with their kids, and are often helping them create their accounts to talk to grandma.”

Note that Boyd’s findings don’t say parents want their children on Facebook. The findings say parents want their child to have access to public life. Given the dearth of alternative options, they allow their kids on Facebook with considerable supervision.  Why with considerable supervision? Because the site has inherent safety issues for users of all ages, and the safety issues would be greater for kids.

To date, Facebook has chosen not to cater to children under 13 because doing so requires complying with the Children’s Online Privacy Protection Act (COPPA), which Facebook advocate Larry Magid suggests “can be difficult and expensive” – yet hundreds of companies that take children’s privacy seriously comply with the law today.

It is more than a little suspicious that Facebook made public its consideration of opening the service to children just after it doubled its lobbying budget and just before the upcoming review of COPPA requirements – where the company will have the opportunity to press for weaker COPPA regulations.

Would it be safer for kids using the site to have additional protections such as those Facebook suggests implementing? Yes. But that’s the wrong question. It is also safer to give kids cigarettes that have filters than to let kids sneak cigarettes without filters.

The real question is how do we fill the existing gap that compels kids to get onto a service that was never designed to protect them?

We need a service that has all the safety, privacy, and security protections of the best children’s sites and also allows access to the broader public, news, events, and even appropriate advertising. That service could easily interface with aspects of Facebook, yet leverage reputations, content filtering and monitoring, and human moderators to provide an environment that Facebook does not have, nor has shown any interest in creating.

This service is eminently buildable with technologies available today, but Facebook’s track record shows they are not the company to entrust with building the online service that can and will protect our children’s online safety, privacy, and security.

Facebook’s proposal is all about their financial needs, not the needs of children.

Linda


Facebook Dominates Social Networking, Garnering 95% of Consumers’ Social Networking Time

December 26, 2011

Social networking is all but synonymous with Facebook, according to a new analysis of comScore data charted by web publisher Ben Elowitz of Wetpaint.

The service commands 95% of all social networking time, a remarkable feat essentially accomplished in just 4 ½ years.

Facebook’s fortunes took off when the disastrous mismanagement of MySpace, horrific lapses in privacy and safety features (think of the news stories of early 2009, when MySpace had to acknowledge removing 90,000 convicted sex offenders), and tawdry ads placed on users’ pages disgusted its user base and marketers alike.

How much has Facebook learned from MySpace’s foibles?

While Facebook has largely avoided the label of being a haven for sexual predators, it has been slow to provide consumers with customer support or assistance, and it has trampled consumer privacy so many times that last month’s FTC charges against the company for deceiving consumers by failing to keep its privacy promises are but one incident in a long line of penalties and fines Facebook has faced for its practices. Of note are the $9 million fine levied by the Canadian Privacy Commissioner’s office in 2009, the Facebook Buzz debacle, and the current demand by European countries for changes; see Europe calls on Facebook to adapt data-privacy changes to comply with local laws.

It is tempting to believe that Facebook is an unstoppable juggernaut, but that may change if another, more respectful alternative comes along.

Linda


80% of Americans Will Purchase a Gift Card this Holiday Season; Know the Risks

December 6, 2011

A record number of gift card purchases are expected this holiday season, according to an NRF survey conducted by BIGresearch, which estimates that 80.2% of Americans will purchase at least one gift card[i].

The research also indicates that holiday shoppers will spend an average of $155.43 on gift cards, a 6.7% rise from $145.61 last year. If these numbers hold true, total spending on gift cards this holiday season will reach $27.8 billion, a 12% increase over the $24.78 billion spent in 2010.

Unfortunately, the convenience of giving gift cards isn’t reflected in the actual use of the cards.

Why gift cards can be risky

Studies show that consumers lose billions of dollars on gift cards each year as cards are forgotten or misplaced, portions of their value are taken as user fees, or the stores behind the cards go bankrupt.

Last year (2010), the financial services research firm The Tower Group estimated that consumers lost about $2.5 billion on gift cards. This loss stems from a number of issues:

  • According to a Consumer Reports poll, 27% of people who received gift cards last holiday season have yet to use them (Oct. 2011 data). Respondents were most likely to say this was because they did not have time (51%) or because they forgot about the gift card (41%)[ii]. Lost or damaged cards are also responsible for a slice of the money lost from unused cards.
  • The Credit CARD Act put stiffer rules into effect in August of 2010, intended to protect consumers from high usage fees, short expiration dates, and other practices. However, gift card issuers can still charge hefty fees to buy the cards (expect a fee ranging from $3 to $7 per card).
  • Card issuers may also charge a fee for every month of inactivity; Visa gift cards, for example, lose $2.50 a month after 12 months of inactivity[iii].
  • Another loophole not covered in the Credit CARD Act is that card issuers do not have to reimburse the value of the cards if they go bankrupt[iv]. To make matters worse, stores do not have to inform you that they have filed for bankruptcy when selling their gift cards – allowing them to collect substantial sums they will never have to repay. And even after a company has gone bankrupt, gift card resale sites may still be selling its cards to unsuspecting consumers[v].
  • Thieves may have tampered with the gift card before you even purchased it. Using a handheld scanner, thieves read the card’s code; combined with the information on the front of the card, that gives them all they need to redeem the card before you do. On cards without a fixed value, they simply call the 800 number to see if the card has been loaded with a dollar amount, and if so, for how much. To reduce the chances that thieves will drain a card, Consumer Reports recommends that you don’t use gift cards hanging on a rack – ask for one that is behind the counter – and, if the card is preloaded, ask the cashier to scan the card to see that the value is intact[vi].
  • Some cards come with terms and fees designed to make redeeming the full value of gift cards difficult or impossible.

When giving a gift card, think safety

The Consumers Union has petitioned the Federal Trade Commission on behalf of consumers, asking the commission to go further in its protection of consumers holding gift cards – particularly when companies are facing bankruptcy – so that funds are set aside to cover the value of these cards. It also recommends that the FTC establish a registry of businesses that have filed for bankruptcy, so that consumers have an easier way to gauge the risk of a gift-card purchase. Until these proposals become law, you still largely have to take your own precautions:

  1. Consumer Reports says you can reduce the chances that thieves have compromised a gift card by following a couple of simple steps: 1) don’t use gift cards hanging on a rack – ask for one that is behind the counter, and 2) if the card is preloaded, ask the cashier to scan the card to see that the value is still intact[vii].
  2. If purchasing gift cards online, always look at the site’s refund policy and keep your receipts so you have documentation in case of trouble.
  3. Check the solvency of the card issuer; this is particularly important for restaurants and smaller businesses, but bankruptcies have hit companies of all sizes.
  4. Look for the gotchas – excessive fees, penalties for not using the card within a specified time period, etc.

Not all gift cards are equal when you try turning them into cash

There are now several websites, like Plasticjungle, Giftcardrescue, and Cardpool, that allow you to exchange your gift cards for cash for a percentage of their value – they also resell these cards at a discount. What you’ll find, however, is that just because the dollar amounts on two gift cards are equal doesn’t mean the cards have equal resale value. These websites usually pay more for cards from huge chains like Home Depot or Wal-Mart, where there is large consumer interest in the resold cards, and less for cards from more niche businesses. Because the value of a card can vary, be sure to look at several card exchange sites to get the best deal; you may get 95% of the card’s value, or you may only be offered 50%.

At the end of the day, a nice holiday card with cash inside is a far safer form of giving.

Linda



FTC Asked to Investigate Use of Supercookies

October 2, 2011

The House of Representatives’ bipartisan privacy caucus has asked the FTC to look into companies’ use of supercookies – which are like traditional tracking cookies on steroids.

A basic “cookie” is a small file that a website installs on consumers’ computers and other internet-connected devices, allowing the website or service to track the user’s online activities. These cookies can be deleted by the user, effectively wiping out a website’s ability to track that user.

Supercookies, on the other hand, are capable of re-creating users’ profiles even after people delete regular cookies, and these new tracking methods are almost impossible for computer users to detect, according to researchers at Stanford University and the University of California at Berkeley, as reported in the Wall Street Journal.
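The researchers described several respawning techniques, including Flash local storage, HTML5 storage, and the browser cache. Purely as an illustration (not any specific company’s implementation), here is a minimal Python/Flask sketch of how a tracker could abuse the cache’s ETag header to quietly restore a deleted cookie:

    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/tracker.gif")
    def tracker():
        # Use the ordinary tracking cookie if the browser still has it.
        uid = request.cookies.get("uid")
        if uid is None:
            # Cookie deleted? The cached ETag the browser echoes back
            # still carries the same identifier.
            etag = request.headers.get("If-None-Match")
            uid = etag.strip('"') if etag else uuid.uuid4().hex
        resp = make_response(b"GIF89a")   # stand-in for a 1x1 tracking pixel
        resp.set_cookie("uid", uid)       # the deleted cookie quietly "respawns"
        resp.set_etag(uid)                # and the ID is re-stashed in the cache
        return resp

Deleting cookies does nothing here; as long as the cached image survives, the identifier comes back on the next visit.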

In a letter, Reps. Ed Markey (D-Mass.) and Joe Barton (R-Tex.), the co-chairmen of the bipartisan privacy caucus, state that the use of supercookies invades user privacy and may violate the FTC’s guidelines on unfair and deceptive acts.

Speaking on the subject, Barton said, “I think supercookies should be outlawed because their existence eats away at consumer choice and privacy.”

Among the companies researchers identified as having employed supercookies are MSN.com and Hulu.com; both companies have said they have since taken action to change their tracking.

It will be interesting to see how the FTC rules on supercookie technology implementations; we can only hope that consumer privacy comes out on top.

Linda


FTC: Trample Children’s Privacy, and You’ll Pay the Price

August 31, 2011

The FTC has levied a $50k fine against the developer of such children’s apps as Zombie Duck Hunt, Truth or Dare and Cootie Catcher, Emily’s Girl World, Emily’s Dress Up and Emily’s Runway High Fashion, for collecting information from children without first gaining parental consent.

W3 Innovations was charged with collecting and storing children’s email addresses and allowing children to post personal information on public message boards, a violation of the Children’s Online Privacy Protection Act (COPPA) which requires parental consent for data collection about or from a child under the age of 13.

The apps found in violation of COPPA were marketed toward children in the Apple App Store, where more than 50,000 downloads were made before the FTC discovered the company was encouraging children to independently enter personal data through the games, according to an article on digitaltrends.com. Additionally, the Emily apps encouraged children to email comments to “Emily” on the Emily blog.

Jon Leibowitz, the chairman of the commission said in a statement, “The F.T.C.’s COPPA Rule requires parental notice and consent before collecting children’s personal information online, whether through a Web site or a mobile app. Companies must give parents the opportunity to make smart choices when it comes to their children’s sharing of information on smart phones.”

This marks the first mobile COPPA case the FTC has reviewed, and it is unlikely to be the last. While I wholly support the protection of children’s privacy and regulations around obtaining parental consent, the antiquated methods currently employed to gain this consent need some rethinking.

In an age of instant access, companies and parents must be able to exchange an information request and verifiable consent within moments or the experience for the child wanting to play a game is really poor.

Industry, it’s time to step up to better methods for authentication and approval; those who don’t will find their apps aren’t used.

Linda


Comments regarding the FTC Staff Report on “Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Businesses and Policymakers”

February 4, 2011

In response to increasing consumer demand, the Federal Trade Commission has recommended the creation of a Do-Not-Track service that would allow consumers to opt out of online data tracking by advertisers. We firmly believe that consumers have the right to decide who can track their actions online, when those actions  can be tracked and what information about those actions can be tracked.

However, the question of whether this should be a government-regulated function or an industry-driven function is worth evaluating. Consider how consumer demand has been met with market-driven solutions:

To test the appeal of a Do-Not-Track service, Gallup and USA Today conducted a poll in December of 2010 that gives some insights into consumer opinion on these tracking methods. Gallup found Internet users are for the most part aware that advertisers use their online browsing history to target ads to their interests, and they are largely opposed to such tactics, even if they help to keep websites free.

Though the poll found that 61% of respondents said they had noticed that ads had been targeted to them, 90% said they paid little to no attention to the ads, a finding that remained consistent across all age and economic groups. Most poll respondents said they would prefer to allow the advertisers of their choosing to target ads to them rather than allow all or no advertisers to do so.

In response, many online advertisers and companies have gone to great lengths to allow consumers to choose whether or not they will be tracked for advertising. Products such as Microsoft’s Internet Explorer, Google’s Chrome, Apple’s Safari, and Mozilla’s Firefox have already incorporated, or are in the process of incorporating, these consumer choices into their services, and more will do so in the months ahead without a federal mandate.
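Honoring one of those choices on the server side is straightforward. Mozilla’s proposed Do-Not-Track signal, for example, adds a “DNT: 1” header to each request; here is a hedged Python/Flask sketch (cookie name and lifetime are placeholders) of a site that checks it before setting any tracking cookie:

    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response("Hello")
        # Browsers with Do Not Track enabled send the header "DNT: 1".
        if request.headers.get("DNT") == "1":
            return resp  # respect the signal: no tracking cookie is set
        resp.set_cookie("tracking_id", uuid.uuid4().hex, max_age=60 * 60 * 24 * 365)
        return resp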

Industry organizations like the Network Advertising Initiative (NAI) have built tools to enable consumers to opt out of behavioral advertising, as has the consumer protection organization PrivacyChoice.org. These companies and organizations should be commended for building consumer tracking choices into their products and incorporating some form of anonymity options through preferences or third-party plug-ins.

The poll results suggest that alternative approaches providing consumers a more thoughtful, user-driven path to targeted advertising can be successful. When consumers have the opportunity to specifically choose the advertisers that can target them and manage the ads down to the ones they want to see, they are more likely to pay attention to the ads, and are much less likely to object to the data collection methods advertisers use to customize the content.

As companies continue to respond to consumer privacy concerns, solutions should be founded on three core consumer protection principles:

  1. Transparency: Consumers should have the ability to see what information is being collected about them by any party, and to understand how it’s being used – particularly if that information is shared or sold to other parties.
  2. Choice: Consumers should have the ability to easily find and modify information in their profiles, choose what types of information they will allow to be tracked, and choose who can or cannot track them.
  3. Control: Consumers should have the ability to effect a one-click opt-out (or a similarly easy opt-out method) of data collection – a sketch follows this list – or receive a clear notification that the use of a service precludes this choice.
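As a sketch of what the one-click opt-out in the third principle could look like in practice – modeled loosely on the opt-out cookies that industry tools already set, with a hypothetical endpoint, cookie name, and helper function – a single request records the consumer’s choice, and the tracking code consults it before collecting anything:

    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/opt-out")
    def opt_out():
        # One click: a long-lived cookie records the consumer's opt-out.
        resp = make_response("You are opted out of tracking on this site.")
        resp.set_cookie("optout", "1", max_age=60 * 60 * 24 * 365 * 5)
        return resp

    def may_track(req):
        # Tracking code calls this before logging or profiling anything.
        return req.cookies.get("optout") != "1" and req.headers.get("DNT") != "1"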

Ad tracking is an integral part of the Internet’s revenue model, and it helps finance the development of, and access to, rich content and services that consumers would otherwise have to pay for through subscription fees. Targeted advertising is not inherently wrong, and as discussions move forward the FTC and Congress should be mindful of whether legislation can do a better job than industry in meeting consumer demands in a fast-paced technology environment, or whether legislation could dampen a recovering economy.

We urge all parties in this debate to place the safety of consumers first and look forward to working with companies and regulators as they increase consumer safeguards to privacy in the future.

Linda