FTC and EU Weigh in on Face Recognition Applications – Why Limiting the Use of This Technology Matters

August 1, 2012

Who should own and control data about your face? Should companies be able to collect and use your facial data at will?

Is it enough to let users opt out of facial recognition, or should companies be required to obtain your explicit opt-in before collecting your facial data? If a company has multiple services, is one opt-in enough, or should they be required to seek your permission for every new type of use? Under what conditions should a company be able to sell and monetize their ability to recognize you?[i]

There are a lot of cool uses for facial recognition tools, but how informed are you about the risks? How do you weigh the pros and cons to make an informed choice about who can identify you?

Governments are paying greater attention to potential privacy threats

The Federal Trade Commission (FTC) has just released a preliminary report identifying the latest facial recognition technologies and how companies are currently using them. The report also outlines the FTC’s plan for creating best-practice guidelines for the industry, which should come out later this year.

In Europe, concerns over facial recognition technologies’ potential to breach personal privacy have prompted a similar review.

This is great news for consumers. It signals a shift in the timing of privacy reviews from a reactive approach, where guidelines arrive only after consumers’ privacy has largely been trampled, to a far more proactive approach to protecting consumers’ online privacy, safety, and security.

In response, companies like Facebook and Google are dramatically increasing their lobbying budgets and campaign funding

It is no coincidence that, as government bodies increase their focus on consumers’ online privacy, the companies making the biggest bucks from selling information about you – and access to you – are pouring money and human resources into influencing the government’s decisions.

According to disclosure forms obtained by The Hill, “Facebook increased its lobbying spending during the second quarter of 2012, allocating $960,000, or three times as much as during the same three-month period in 2011”.

And a report in the New York Times noted that “With Congress and privacy watchdogs breathing down its neck, Google is stepping up its lobbying presence inside the Beltway — spending more than Apple, Facebook, Amazon and Microsoft combined in the first three months of the year.” Google spent $5.03 million on lobbying from January through March of this year, a record for the Internet giant, and a 240 percent increase from the $1.48 million it spent on lobbyists in the same quarter a year ago, according to disclosures filed Friday with the clerk of the House.

In addition to lobbying spend, these companies, their political action committees (PACs), and the billionaire individuals behind them have exorbitant amounts of money for political contributions; chits to be called in when privacy decisions that could impact their bottom line hang in the balance.

Here’s what today’s facial recognition technologies can do – and are doing:


It only takes a quick look for you to identify someone you know; yet facial recognition technologies are both faster and more accurate than people will ever be – and they have the capability of identifying billions of individuals.

Although many companies are still using basic, and largely non-invasive, facial recognition tools to simply recognize if there is a face in a photo, an increasing number of companies are leveraging advanced facial recognition tools that can have far reaching ramifications for your privacy, safety, and even employability.

Advanced facial recognition solutions include Google+’s Tag My Face, Facebook’s Photo Tag Suggest, Android apps like FaceLock and Visidon AppLock, and Apple apps like Klik, FaceLook, and Age Meter; then there are apps like SceneTap, FACER Celebrity, FindYourFaceMate.com, and DoggelGanger.com. New services leveraging these features will become increasingly common – particularly if strict privacy regulations aren’t implemented.
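To make the distinction concrete, here is a minimal sketch in Python, using the open-source face_recognition library purely as an illustration – it is not one of the products named above, and the file names are placeholders – showing the difference between merely detecting that a photo contains a face and identifying whose face it is:

```python
# pip install face_recognition
# Illustrative sketch only: the library choice and file names are assumptions,
# not any particular vendor's product mentioned in this post.
import face_recognition

# Step 1 - detection: does this photo contain any faces at all?
photo = face_recognition.load_image_file("party_photo.jpg")
face_locations = face_recognition.face_locations(photo)
print(f"Found {len(face_locations)} face(s) in the photo")

# Step 2 - identification: is one of those faces a specific, known person?
known_image = face_recognition.load_image_file("profile_picture.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

for unknown_encoding in face_recognition.face_encodings(photo):
    if face_recognition.compare_faces([known_encoding], unknown_encoding)[0]:
        print("This photo appears to contain the known person")
```

The jump from the first step to the second is exactly where the privacy stakes change: detection says “a face is here,” identification says “this is you.”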

Some companies use facial recognition services in their photo and video applications to help users recognize people in photos, or even to tag them automatically. (You may not want to be tagged in a particular photo, but if you allow photo tagging you can only try to minimize the damage; you can’t proactively prevent it.)

Some services use facial recognition for security purposes; your face essentially becomes your unique password (but what do you do if it gets hacked? Change your face??).

What are the potential risks of facial recognition tools to individuals?

The Online Privacy Blog enumerates some of the risks in easily understood terms; here is an excerpt from their article The Top 6 FAQs about Facial Recognition:

Take the massive amount of information that Google, Facebook, ad networks, data miners, and people search websites are collecting on all of us; add the info that we voluntarily provide to dating sites, social networks, and blogs; combine that with facial recognition software; and you have a world with reduced security, privacy, anonymity, and freedom.  Carnegie Mellon researchers predict that this is “a world where every stranger in the street could predict quite accurately sensitive information about you (such as your SSN, but also your credit score, or sexual orientation)” just by taking a picture.

Risk 1:  Identity theft and security

Think of your personal information—name, photos, birthdate, address, usernames, email addresses, family members, and more—as pieces of a puzzle.  The more pieces a cybercriminal has, the closer he is to solving the puzzle.  Maybe the puzzle is your credit card number.  Maybe it’s the password you use everywhere.  Maybe it’s your social security number.


Facial recognition software is a tool that can put all these pieces together.  When you combine facial recognition software with the wealth of public data about us online, you have what’s called “augmented reality”: “the merging of online and offline data that new technologies make possible.” You also have a devastating blow to personal privacy and an increased risk of identity theft.

Once a cybercriminal figures out your private information, your money and your peace of mind are in danger.  Common identity theft techniques include opening new credit cards in your name and racking up charges, opening bank accounts under your name and writing bad checks, using your good credit history to take out a loan, and draining your bank account.  More personal attacks may include hijacking your social networks while pretending to be you, reading your private messages, and posting unwanted or embarrassing things “as” you.

The research:  how facial recognition can lead to identity theft

Carnegie Mellon researchers performed a 2011 facial recognition study using off-the-shelf face recognition software called PittPatt, which was purchased by Google.  By cross-referencing two sets of photos—one taken of participating students walking around campus, and another taken from pseudonymous users of online dating sites—with public Facebook data (things you can see on a search engine without even logging into Facebook), they were able to identify a significant number of people in the photos.  Based on the information they learned through facial recognition, the researchers were then able to predict the social security numbers of some of the participants.

They concluded this merging of our online and offline identities can be a gateway to identity theft:

If an individual’s face in the street can be identified using a face recognizer and identified images from social network sites such as Facebook or LinkedIn, then it becomes possible not just to identify that individual, but also to infer additional, and more sensitive, information about her, once her name has been (probabilistically) inferred.
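In spirit, the researchers’ cross-referencing step works like a nearest-neighbor search: encode the face in a street photo, compare it against a gallery of profile photos scraped from a social network, and treat the closest match as the probable name. Here is a hedged sketch of that idea, again using the illustrative face_recognition library, with hypothetical names, file paths, and a threshold chosen for illustration:

```python
import face_recognition
import numpy as np

# Hypothetical "gallery" of public profile photos, each labeled with a name.
gallery = {
    "alice_example": "profiles/alice.jpg",
    "bob_example": "profiles/bob.jpg",
}
names, encodings = [], []
for name, path in gallery.items():
    image = face_recognition.load_image_file(path)
    found = face_recognition.face_encodings(image)
    if found:
        names.append(name)
        encodings.append(found[0])

# A photo of an "anonymous" person, e.g. snapped on the street.
street = face_recognition.load_image_file("street_photo.jpg")
street_encodings = face_recognition.face_encodings(street)

if street_encodings and encodings:
    # Distance from the street face to every gallery face; smaller = more similar.
    distances = face_recognition.face_distance(encodings, street_encodings[0])
    best = int(np.argmin(distances))
    if distances[best] < 0.6:  # a commonly used, but arbitrary, cutoff
        print(f"Probable identity: {names[best]} (distance {distances[best]:.2f})")
```

Scale the gallery up from two profile photos to hundreds of millions and you have, in outline, the de-anonymization pipeline the study describes.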

Some statistics on identity theft from the Identity Theft Assistance Center (ITAC):

  • 8.1 million adults in the U.S. suffered identity theft in 2011
  • Each victim of identity theft loses an average of $4,607
  • Out-of-pocket losses (the amount you actually pay, as opposed to your credit card company) average $631 per victim
  • New account fraud, where thieves open new credit card accounts on behalf of their victims, accounted for $17 billion in fraud
  • Existing account fraud accounted for $14 billion.

Risk 2:  Chilling effects on freedom of speech and action

Facial recognition software threatens to censor what we say and limit what we do, even offline.  Imagine that you’re known in your community for being an animal rights activist, but you secretly love a good hamburger.  You’re sneaking in a double cheeseburger at a local restaurant when, without your knowledge, someone snaps a picture of you.  It’s perfectly legal for someone to photograph you in a public place, and aside from special rights of publicity for big-time celebrities, you don’t have any rights to control this photo.  This person may not have any ill intentions; he may not even know who you are.  If he uploads it to Facebook, and Facebook automatically tags you in it, you’re in trouble.

Anywhere there’s a camera, there’s the potential that facial recognition is right behind it.

The same goes for the staunch industrialist caught at the grassroots protest; the pro-life female politician caught leaving an abortion clinic; the CEO who has too much to drink at the bar; the straight-laced lawyer who likes to dance at goth clubs.  If anyone with a cell phone can take a picture, and any picture can be tied back to us even when the photographer doesn’t know who we are, we may stop going to these places altogether.  We may avoid doing anything that could be perceived as controversial.  And that would be a pity, because we shouldn’t have to.

Risk 3:  Physical safety and due process

Perhaps most importantly, facial recognition threatens our safety.  It’s yet another tool in stalkers’ and abusers’ arsenals.  See that pretty girl at the bar?  Take her picture; find out everything about her; pay her a visit at home.  It’s dangerous in its simplicity.

There’s a separate set of risks from facial recognition that doesn’t do a good job of identifying targets:  false identifications.  An inaccurate system runs the risk of identifying, and thus detaining or arresting, the wrong people.  Let’s say that an airport scans incoming travelers’ faces to search for known terrorists.  Their systems incorrectly recognize you as a terrorist, and you’re detained, searched, interrogated, and held for hours, maybe even arrested.  This is precisely why Boston’s Logan Airport abandoned its facial recognition trials in 2002:  its systems could only identify volunteers 61.4 percent of the time.
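A quick back-of-the-envelope calculation shows why a screening system with a seemingly decent hit rate still buries operators in false alarms when genuine targets are rare. Every number below is a hypothetical assumption except the 61.4 percent identification rate reported for the Logan trial:

```python
# Hypothetical screening scenario; only the 61.4% hit rate comes from the
# reported Logan Airport trial, everything else is an assumed illustration.
travelers_per_day = 100_000
genuine_targets = 5                # people actually on the watch list
true_positive_rate = 0.614         # reported identification rate
false_positive_rate = 0.01         # assume just 1% of innocent travelers misflagged

expected_hits = genuine_targets * true_positive_rate
expected_false_alarms = (travelers_per_day - genuine_targets) * false_positive_rate

print(f"Expected genuine hits per day:  {expected_hits:.1f}")       # ~3
print(f"Expected false alarms per day:  {expected_false_alarms:.0f}")  # ~1000
# Under these assumptions, innocent travelers are flagged roughly 300 times for
# every genuine match -- and nearly 40% of real targets still slip through.
```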

Learn more about facial recognition technologies, how they work and what the risks are in these resources:

Three steps to protecting your facial data:

  1. There are many positive uses for facial recognition technologies, but the lack of consumer protections makes them unnecessarily risky. Until the control and management of this data is firmly in the hands of consumers, proactively opt out of such features and avoid services where opt out is not an option.
  2. Voice your concerns to elected officials to offset the impact of corporate lobbying and campaign contributions intended to soften proposed consumer protections.
  3. Voice your frustration to the companies that are leveraging this technology without providing you full control over your facial data – including the ability to have it removed, to block it from being sold, traded, or shared, and to explicitly specify when and how this data can be used, either on its own or combined with other data about you. If a company does not respect your wishes, stop using it. If you allow yourself to be exploited, plenty of companies will be happy to do so.

Linda


[i] See The One-Way-Mirror Society – Privacy Implications of Surveillance Monitoring Networks to understand some implications of facial recognition tools’ use when companies sell this information.


Why would Facebook want to enroll children in a service built with little regard for adult safety/privacy/security? For the money.

June 7, 2012

A floundering Facebook is under increased pressure to shore up its revenue and flat per-user minutes. So it was no surprise to see the Wall Street Journal report that Facebook is developing technology to allow children younger than 13 years old to use the social-networking service under parental supervision, in spite of the company’s abysmal track record in protecting older consumers.

Yet Facebook is already in deep trouble over their consistent encroachment on consumer privacy; the service is a hotbed for malware and scams, and their advertising is not suitable for younger users.

The issues around Facebook’s interest in onboarding the under-13s fall into three categories:

  • Facebook’s predatory privacy practices
  • Facebook’s financial woes
  • There is a need for a responsible social network where children, adults and commercial content can mix, but Facebook isn’t it

Facebook’s Predatory Privacy Practices

From its inception, Facebook has shown a deliberate disregard for consumer privacy, trust, and safety. This attitude was evidenced by founder Mark Zuckerberg’s early IM comments, and it has continued ever since through the company’s privacy policy choices and its blatant deception and exploitation of users’ information.  Consider the following points:

  • Consumers’ feelings of betrayal run so high that 70% of Facebook users say they do not trust Facebook with their personal information.
  • The FTC found Facebook’s assault on consumer privacy so egregious that last fall (2011) they charged Facebook with deceiving consumers by failing to keep their privacy promises. The FTC complaint lists a number of instances in which Facebook allegedly made promises that it did not keep:
    • In December 2009, Facebook changed its website so certain information that users may have designated as private – such as their Friends List – was made public. They didn’t warn users that this change was coming, or get their approval in advance.
    • Facebook represented that third-party apps that users installed would have access only to user information that they needed to operate. In fact, the apps could access nearly all of users’ personal data – data the apps didn’t need.
    • Facebook told users they could restrict sharing of data to limited audiences – for example with “Friends Only.” In fact, selecting “Friends Only” did not prevent their information from being shared with third-party applications their friends used.
    • Facebook had a “Verified Apps” program & claimed it certified the security of participating apps. It didn’t.
    • Facebook promised users that it would not share their personal information with advertisers. It did.
    • Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
    • Facebook claimed that it complied with the U.S.- EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn’t.

The settlement bars Facebook from making any further deceptive privacy claims, requires that the company get consumers’ approval before it changes the way it shares their data, and requires that it obtain periodic assessments of its privacy practices by independent, third-party auditors for the next 20 years.

Facebook also failed from the beginning to build in strong security, monitoring, or abuse-tracking technologies – issues they’ve attempted to patch with varying degrees of seriousness and success, as evidenced by the constant circulation of malware on the service, the breaches of consumers’ information, consumers’ inability to reach a human for help when abuse occurs, and so on.

Facebook’s financial woes

It’s always about the money. With over 900 million users Facebook is still a force to be reckoned with, but their trajectory is looking a lot more like the climax before the cliff that MySpace faced.

Facebook’s financial problems have been building for some time, but the company’s IPO brought financial scrutiny to the forefront and highlighted their need to infuse new blood into the service – even if that means exposing children to a service that already poses clear risks to adults. Here’s a quick recap of the financial failings of Facebook:

  • Facebook stock is in a free fall, closing at $26.90 on June 4th when this article was written. That’s down more than $11, or 29%, in the first 17 days of trading – and the stock continued to fall in after-hours trading.
  • The IPO valuation fiasco is far from over; Zuckerberg is now being sued for selling more than $1 billion in shares just before stock prices plummeted. The suit says Facebook “knew there was not enough advertising revenue to support a $38 stock valuation but hid that revenue information in order to push up the share price”.
  • In April, Facebook’s payments revenue went flat, according to Business Insider. After growing a consistent 20% quarter over quarter, in the first quarter of this year “revenue from payments and other fees [from games and partners] actually fell slightly, according to its latest filing with the Securities and Exchange Commission.”

  • A new Reuters/Ipsos poll shows that 4 out of 5 Facebook users have never bought a product or service as a result of ads or comments on the site, highlighting Facebook’s inability to successfully market to users.

  • The amount of time users spend on Facebook has also gone flat according to ComScore, a fact also highlighted by the Reuters/Ipsos poll which found that 34% of Facebook users surveyed were spending less time on the website than six months ago, whereas only 20% were spending more.


  • Advertisers are bailing. A nasty reality check came from General Motors just days before the company’s IPO as GM pulled their ad campaigns saying “their paid ads had little effect on customers” according to the Wall Street Journal.  

And an article on HuffingtonPost.com  reports that “more than 50 percent of Facebook users say they never click on Facebook’s sponsored ads, according to a recent Associated Press-CNBC poll. In addition, only 12% of respondents say they feel comfortable making purchases over Facebook, begging the question of how the social network can be effectively monetized.”

It’s easy to see why investors are angry, lawsuits have been filed and the government is investigating the debacle.

When 54% of consumers distrust the safety of making purchases through Facebook, 70% of consumers say they do not trust Facebook with their personal information, and reports are surfacing that consumer distrust in Facebook deepened as a result of issues around the IPO, Facebook is looking more tarnished than ever.

As Nicholas Thompson wrote in The New Yorker, “Facebook, more than most other companies, needs to worry deeply about its public perception. It needs to be seen as trustworthy and, above all, cool. Mismanaging an I.P.O. isn’t cool, neither is misleading shareholders. Government investigations of you aren’t cool either.” And “The reputation of Facebook’s management team has also been deeply tarnished, particularly by the accusations that it wasn’t entirely open to investors about declining growth in its advertising business.”

There is a need for a responsible social network where children, adults and commercial content can mix, but Facebook isn’t it

Facebook has identified a real gap. There is a legitimate need for a social networking platform where kids can interact with adult family members and other trusted older individuals as well as commercial entities.

This need is evidenced by the 5.6 million underage youth still using Facebook today, with or without their parents’ permission, for lack of a more appropriate solution.

This 5.6 million underage user number is noteworthy for two reasons:

A) It shows a significant number of children are using the site.

B) More importantly, it represents a 25.3% reduction in underage users over the past year, when Consumer Reports found 7.5 million underage users on the site.  One could reasonably assume that the dramatic drop in underage use of Facebook came precisely because Consumer Reports published their data and alarmed parents stepped in to block their use.

That 25.3% reduction strikes at the very heart of two key tenets held by Facebook and its advocates: 1) since so many underage kids are already using Facebook, it would be safer for them if Facebook opened up the service to underage users and gave parents some access and controls to manage their use, and 2) parents want their children to be able to use Facebook.

To be clear, of the 5.6 million children still on the service, many have parents who helped them get onto Facebook. According to Dr. Danah Boyd, one of the researchers studying the issue of parents helping children get on the site, the reason parents told her they allowed their children on Facebook was because parents “want their kids to have access to public life and, today, what public life means is participating even in commercial social media sites.”

Boyd added that the parents helping their kids with access “are not saying get on the sites and then walk away. These are parents who have their computers in the living room, are having conversations with their kids, and they often are helping them create their accounts to talk to grandma.”

Note that Boyd’s findings don’t say parents want their children on Facebook. The findings say parents want their child to have access to public life. Given the dearth of alternative options, they allow their kids on Facebook with considerable supervision.  Why with considerable supervision? Because the site has inherent safety issues for users of all ages, and the safety issues would be greater for kids.

To date, Facebook has chosen not to cater to children under 13 because to do so requires complying with the Children’s Online Privacy Protection Act (COPPA) which Facebook advocate Larry Magid suggests “can be difficult and expensive” – yet hundreds of companies who take children’s privacy seriously comply with the policies today.

It is more than a little suspicious that Facebook made public their consideration to open their service to children just after they doubled their lobbying budget and just before the upcoming review of COPPA requirements – where the company will have the opportunity to press for weaker COPPA regulations.

Would it be safer for kids using the site to have additional protections such as those Facebook suggests implementing?  Yes. But that’s the wrong question.  It is also safer to give kids cigarettes that have filters than to let them sneak cigarettes without filters.

The real question is how do we fill the existing gap that compels kids to get onto a service that was never designed to protect them?

We need a service that has all the safety, privacy and security protections of the best children’s sites that also allows access to the broader public, news, events, and even appropriate advertising.  That service could easily interface with aspects of Facebook, yet leverage reputations, content filtering and monitoring, and human moderators to provide an environment that Facebook does not have, nor has shown any interest in creating.

This service is eminently buildable with technologies available today, but Facebook’s track record shows they are not the company to entrust with building an online service willing and able to protect our children’s online safety, privacy, and security.

Facebook’s proposal is all about their financial needs, not the needs of children.

Linda


Bill of Rights for Social Networkers [Infographic]

May 23, 2012

A new infographic by BackgroundCheck does an excellent job of highlighting the issues surrounding requests for access to personal social networking sites by employers, would-be employers, government agencies, law enforcement, colleges and other groups.   Check it out:

Social Networking Bill of Rights

Linda


Gang Members Charged with Recruiting Teens Online for Prostitution

April 15, 2012

A new criminal case brings back into focus the role of the internet in gang activities as 5 members of the gang set ‘Underground Gangster Crips’ have been charged with sex-trafficking underage girls. The Crips is a massive gang with over 35,000 members organized into an estimated 800 individual gangs or “sets” in more than 30 states and 120 cities.

Court documents say the gang members recruited teen girls through Facebook and DateHookUp.com, in schools, and on Metro buses. In the course of pimping and trafficking these girls, the court records charge, these gang members transported them across at least four states.  If convicted, each could serve life in prison.

Gangs have been leveraging the internet’s power to glamorize gang life, recruit, coordinate, commit crimes, and brag about those crimes for many years. To learn more about gangs’ internet use and prostitution/human trafficking, see my extensive coverage in these blogs: 2011 National Gang Threat Assessment – Emerging Trends and The Internet, Gangs use of the Internet and Cell Phones, and Human trafficking Top Initiative for Incoming NAAG President Rob McKenna

According to court documents in this most recent case, just outside of D.C., girls who tried to quit were subjected to frequent, violent beatings and threats. The ring leader, Justin Strom, 26, allegedly choked one girl when she said she no longer wanted to engage in prostitution.

The affidavit shows that in another occurrence, Strom made a 17-year-old girl use cocaine, cut her on the arm with a knife, and then forced her to have sex with him. The girl was then taken to an apartment, where she was forced to have sex with 14 other men. Strom allegedly collected $1,000 that night. Two gang members drove her home, and according to the affidavit she was told she “got what she had coming” and that if she told anyone they would “come back and kill her.”

The gang members charged allegedly recruited at least 10 girls – most of whom were 15 and 16 years old – over the last three years, often telling the girls they were pretty and could make money by having sex with men. Many girls were required to have sex with gang members as an “initiation”. The girls allegedly would share the proceeds with the gang – sometimes $50 or $100 per customer – and afterwards gang members would supply them with drugs and party with them, according to court records. Sometimes, girls were forced to go door-to-door soliciting men for sex.

These arrests add to a string of prostitution and trafficking charges against gangs in the D.C. area; members of the gang Mara Salvatrucha, also known as MS-13, have also been accused in federal court in Virginia of prostitution-related charges involving juvenile victims.

The Underground Gangster Crips have a long and violent history in Fairfax County, allegedly committing rapes, armed robbery and selling drugs, according to court documents.

Speaking about the case, U.S. Attorney Neil H. MacBride said, “The sex trafficking of young girls is an unconscionable crime involving unspeakable trauma. These gang members are alleged to have lured many area high school girls in the vile world of prostitution, and used violence and threats to keep them working as indentured sex slaves.”

Virginia Attorney General Kenneth T. Cuccinelli II described the case as “every parent’s worst nightmare,” adding that it showed human trafficking can happen anywhere and is “a very real danger here in Virginia. By working together with U.S. Attorney Neil MacBride and our law enforcement partners, we will send a swift and strong message that this criminal behavior will not be tolerated in the Commonwealth of Virginia.”

Learn more in my posts Child Trafficking and the Internet and ‘Women to Go’ Shop Drives Trafficking Awareness in Israel; What You Can do in Your Community.

The sexual exploitation of women and children by gangs and other criminals isn’t just a problem in D.C., or L.A. or Chicago. The sexual exploitation of women and children as a revenue stream is occurring in towns and cities across the country. And every single decent person has a role to play – you’re either part of the problem or part of the solution when it comes to identifying and reporting the exploitation of women and children.

To get involved in the campaign against human trafficking, you can find more information and organizations in your area by searching online for human trafficking and your region. If you suspect someone is a victim of human trafficking, or you yourself are a victim, get help by calling the Human Trafficking Hotline at 1-888-373-7888, or call the National Center for Missing and Exploited Children’s hotline at 1-800-THE-LOST.


Linda


New Online Safety Lesson: Online Hate Crimes: Are you part of the solution or part of the problem?

March 21, 2012

The 14th installment in the lesson series I’m writing on behalf of iKeepSafe looks at taking a stand against hate crimes and hate groups on the internet.

The vast majority of people in every country oppose hate, hate groups, and hate crimes. Unfortunately, however, the number of hate groups around the world is increasing. In the U.S., hate groups have surged since 2000, growing from 602 then to 1,018 official hate groups in 2011 – an increase of roughly 69%.

The rise in hate groups isn’t just an American problem; Germany, South Africa, France, Britain, and other countries also struggle with rapidly expanding numbers of hate groups.

To see and use this lesson, the companion presentation, professional development materials, and parent tips click here: Online Hate Crimes: Are you part of the solution or part of the problem?

Linda


New Online Safety Lesson: Connecting Technology Across Generations

February 17, 2012

The 11th installment in the lesson series I’m writing on behalf of iKeepSafe focuses on leveraging the internet to connect generations.

Who says technology is hurting interpersonal relationships? New research shows that the “computer generation” no longer encompasses just the teens who grew up with technology. Seniors are migrating online like never before, which offers new channels for communication between the generations.

Whether texting, Skyping, Facebooking or emailing, seniors and youth have much to gain from each other. Read further for some surprising statistics on how seniors are increasingly embracing current technologies and finding new ways to communicate with their grandchildren and other youth. And, don’t miss out on tips to help deepen interaction between younger and older generations.

To see and use this lesson, the companion presentation, professional development materials, and parent tips click here: Connecting Technology Across Generations 

Linda


What Criteria Do You Apply Before Adding or Deleting Someone On Facebook?

December 27, 2011

New survey data from NM Incite shows what motivates us to add – or drop – friends on Facebook.

Why add someone? The most common reason is that you already know them. The most common reason to dump someone is offensive comments.

Some recent news articles – How Facebook Can Hurt Your Credit Rating, Privacy Fades in Facebook Era, and the recent FTC ruling against Facebook, Facebook Settles FTC Charges That It Deceived Consumers By Failing To Keep Privacy Promises – plus a new 30 minute school curriculum piece I just finished for ikeepcurrent may give you even more reasons to be selective when evaluating potential friends – because what they post may not only be rude, irritating or depressing, it may also harm your future.

Linda