FTC and EU Weigh in on Face Recognition Applications – Why Limiting the Use of This Technology Matters

August 1, 2012

Who should own and control data about your face? Should companies be able to collect and use your facial data at will?

Is it enough to let users opt out of facial recognition, or should companies be required to obtain your specific opt-in before collecting your facial data? If a company has multiple services, is one opt-in enough, or should the company be required to seek your permission for every new type of use? Under what conditions should a company be able to sell and monetize its ability to recognize you?[i]

There are a lot of cool uses for facial recognition tools, but how informed are you about the risks? How do you weigh the pros and cons to make an informed choice about who can identify you?

Governments are paying greater attention to potential privacy threats

The Federal Trade Commission (FTC) has just released a preliminary report identifying the latest facial recognition technologies and how companies are currently using them. The report also outlines the FTC’s plan for creating best-practice guidelines for the industry, expected later this year.

In Europe, concerns over facial recognition technologies’ potential to breach personal privacy have resulted in a similar review.

This is great news for consumers. It signals a shift in the timing of privacy reviews from a reactive approach, where guidelines arrive only after consumers’ privacy has largely been trampled, to a far more proactive approach to protecting consumers’ online privacy, safety, and security.

In response, companies like Facebook and Google are dramatically increasing their lobbying budgets and campaign funding

It is no coincidence that as government bodies increase their focus on consumers’ online privacy, the companies making the biggest bucks from selling information about you – and access to you – are pouring money and human resources into influencing the government’s decisions.

According to disclosure forms obtained by The Hill, “Facebook increased its lobbying spending during the second quarter of 2012, allocating $960,000, or three times as much as during the same three-month period in 2011”.

And a report in the New York Times noted that “With Congress and privacy watchdogs breathing down its neck, Google is stepping up its lobbying presence inside the Beltway — spending more than Apple, Facebook, Amazon and Microsoft combined in the first three months of the year.” Google spent $5.03 million on lobbying from January through March of this year, a record for the Internet giant, and a 240 percent increase from the $1.48 million it spent on lobbyists in the same quarter a year ago, according to disclosures filed Friday with the clerk of the House.

In addition to lobbying spend, these companies, their political action committees (PACs), and the billionaire individuals behind the companies have exorbitant amounts of money for political contributions; chits to be called in when privacy decisions that could impact their bottom line hang in the balance.

Here’s what today’s facial recognition technologies can do – and are doing:

 

It only takes a quick look for you to identify someone you know; yet facial recognition technologies are both faster and more accurate than people will ever be – and they have the capability of identifying billions of individuals.

Although many companies are still using basic, and largely non-invasive, facial recognition tools simply to recognize whether there is a face in a photo, an increasing number of companies are leveraging advanced facial recognition tools that can have far-reaching ramifications for your privacy, safety, and even employability.
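
To see how low the bar is at the basic end of that spectrum, here is roughly what simple face detection looks like with the open-source OpenCV library – a minimal sketch that only finds that faces are present, without identifying anyone (the image file name is hypothetical):

```python
# Minimal sketch of basic face *detection* (no identification) using OpenCV.
# Illustrative only; the image path is a made-up example.
import cv2

# Load OpenCV's bundled, pre-trained frontal-face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns bounding boxes for each face found -- the program learns *that*
# faces are present, not *whose* they are.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")
```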

Advanced facial recognition solutions include Google+’s Tag My Face, Facebook’s Photo Tag Suggest, Android apps like FaceLock and Visidon AppLock, and Apple apps like Klik, FaceLook, and Age Meter; then there are apps like SceneTap, FACER Celebrity, FindYourFaceMate.com, and DoggelGanger.com. New services leveraging these features will become increasingly common – particularly if strict privacy regulations aren’t implemented.

Some companies use facial recognition services in their photo and video applications to help users recognize people in photos, or even automatically tag them for you. (You may not want to be tagged in a particular photo, but if you allow photo tagging you can only try to minimize the damage; you can’t proactively prevent it.)

Some services use facial recognition for security purposes; your face essentially becomes your unique password (but what do you do if it gets hacked? Change your face??).

What are the potential risks of facial recognition tools to individuals?

The Online Privacy Blog enumerates some of the risks in easily understood terms; here is an excerpt from their article The Top 6 FAQs about Facial Recognition:

Take the massive amount of information that Google, Facebook, ad networks, data miners, and people search websites are collecting on all of us; add the info that we voluntarily provide to dating sites, social networks, and blogs; combine that with facial recognition software; and you have a world with reduced security, privacy, anonymity, and freedom. Carnegie Mellon researchers predict that this is “a world where every stranger in the street could predict quite accurately sensitive information about you (such as your SSN, but also your credit score, or sexual orientation)” just by taking a picture.

Risk 1:  Identity theft and security

Think of your personal information—name, photos, birthdate, address, usernames, email addresses, family members, and more—as pieces of a puzzle. The more pieces a cybercriminal has, the closer he is to solving the puzzle. Maybe the puzzle is your credit card number. Maybe it’s the password you use everywhere. Maybe it’s your social security number.


Facial recognition software is a tool that can put all these pieces together. When you combine facial recognition software with the wealth of public data about us online, you have what’s called “augmented reality”: “the merging of online and offline data that new technologies make possible.” You also have a devastating blow to personal privacy and an increased risk of identity theft.

Once a cybercriminal figures out your private information, your money and your peace of mind are in danger.  Common identity theft techniques include opening new credit cards in your name and racking up charges, opening bank accounts under your name and writing bad checks, using your good credit history to take out a loan, and draining your bank account.  More personal attacks may include hijacking your social networks while pretending to be you, reading your private messages, and posting unwanted or embarrassing things “as” you.

The research:  how facial recognition can lead to identity theft

Carnegie Mellon researchers performed a 2011 facial recognition study using off-the-shelf face recognition software called PittPatt, which was purchased by Google. By cross-referencing two sets of photos—one taken of participating students walking around campus, and another taken from pseudonymous users of online dating sites—with public Facebook data (things you can see on a search engine without even logging into Facebook), they were able to identify a significant number of people in the photos. Based on the information they learned through facial recognition, the researchers were then able to predict the social security numbers of some of the participants.

They concluded that this merging of our online and offline identities can be a gateway to identity theft:

If an individual’s face in the street can be identified using a face recognizer and identified images from social network sites such as Facebook or LinkedIn, then it becomes possible not just to identify that individual, but also to infer additional, and more sensitive, information about her, once her name has been (probabilistically) inferred.
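
For the technically curious, here is a minimal sketch of the kind of cross-referencing the researchers describe, using the freely available Python face_recognition library. This is an illustration of the general technique only – not the study’s actual PittPatt pipeline – and the photo file names are hypothetical:

```python
# Minimal sketch: linking an unlabeled photo to a named profile photo.
# Illustrative only -- not the CMU study's PittPatt pipeline.
import face_recognition

# A "public" photo with a known name, e.g. from a social network profile.
known_image = face_recognition.load_image_file("profile_photo_alice.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# An anonymous photo, e.g. snapped of a stranger on the street.
unknown_image = face_recognition.load_image_file("street_photo.jpg")

for encoding in face_recognition.face_encodings(unknown_image):
    # Compare each face in the street photo against the named face.
    is_match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    if is_match:
        print(f"Probable match to 'Alice' (distance {distance:.2f})")
```

Scale that loop up to millions of scraped profile photos and you have the study’s scenario.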

Some statistics on identity theft from the Identity Theft Assistance Center (ITAC):

  • 8.1 million adults in the U.S. suffered identity theft in 2011
  • Each victim of identity theft loses an average of $4,607
  • Out-of-pocket losses (the amount you actually pay, as opposed to what your credit card company covers) average $631 per victim
  • New account fraud, where thieves open new credit card accounts on behalf of their victims, accounted for $17 billion in fraud
  • Existing account fraud accounted for $14 billion.

Risk 2:  Chilling effects on freedom of speech and action

Facial recognition software threatens to censor what we say and limit what we do, even offline. Imagine that you’re known in your community for being an animal rights activist, but you secretly love a good hamburger. You’re sneaking in a double cheeseburger at a local restaurant when, without your knowledge, someone snaps a picture of you. It’s perfectly legal for someone to photograph you in a public place, and aside from special rights of publicity for big-time celebrities, you don’t have any rights to control this photo. This person may not have any ill intentions; he may not even know who you are. But if he uploads it to Facebook, and Facebook automatically tags you in it, you’re in trouble.

Anywhere there’s a camera, there’s the potential that facial recognition is right behind it.

The same goes for the staunch industrialist caught at the grassroots protest; the pro-life female politician caught leaving an abortion clinic; the CEO who has too much to drink at the bar; the straight-laced lawyer who likes to dance at goth clubs.  If anyone with a cell phone can take a picture, and any picture can be tied back to us even when the photographer doesn’t know who we are, we may stop going to these places altogether.  We may avoid doing anything that could be perceived as controversial.  And that would be a pity, because we shouldn’t have to.

Risk 3:  Physical safety and due process

Perhaps most importantly, facial recognition threatens our safety.  It’s yet another tool in stalkers’ and abusers’ arsenals.  See that pretty girl at the bar?  Take her picture; find out everything about her; pay her a visit at home.  It’s dangerous in its simplicity.

There’s a separate set of risks from facial recognition that doesn’t do a good job of identifying targets: false identifications. An inaccurate system runs the risk of identifying, and thus detaining or arresting, the wrong people. Let’s say that an airport scans incoming travelers’ faces to search for known terrorists. The system incorrectly recognizes you as a terrorist, and you’re detained, searched, interrogated, and held for hours, maybe even arrested. This is precisely why Boston’s Logan Airport abandoned its facial recognition trials in 2002: its systems could only identify volunteers 61.4 percent of the time.

Learn more about facial recognition technologies, how they work and what the risks are in these resources:

Three steps to protecting your facial data:

  1. There are many positive uses for facial recognition technologies, but the lack of consumer protections makes them unnecessarily risky. Until the control and management of this data is firmly in the hands of consumers, proactively opt out of such features and avoid services where opting out is not an option.
  2. Voice your concerns to elected officials to offset the impact of corporate lobbying and campaign contributions intended to soften proposed consumer protections.
  3. Voice your frustration to the companies that are leveraging this technology without providing you full control over your facial data – including the ability to have it removed, to block it from being sold, traded, or shared, and to explicitly identify when and how this data can be used, whether on its own or combined with other data about you. If a company does not respect your wishes, stop using its services. If you allow yourself to be exploited, plenty of companies will be happy to do so.

Linda


[i] See The One-Way-Mirror Society – Privacy Implications of Surveillance Monitoring Networks to understand some implications of facial recognition tools’ use when companies sell this information.


Top 10 Takeaways from AT&T Study of Families’ Mobile Phone Perceptions

July 10, 2012

To better understand the landscape for families and mobile phones, AT&T commissioned GfK Roper Public Affairs to conduct a national study of the views of parents and children ages 8–17. Among the findings:

  1. On average, kids receive their first mobile phone at age 12, and 34% get a smartphone.
  2. 53% of kids report that they have ridden with someone who was texting and driving.
  3. 22% say they’ve been bullied via a text message by another kid.
  4. 46% of kids ages 11–17 say they have a friend who has received a message or picture that their parents would not have liked because it was too sexual.
  5. 90% of kids think it’s OK for parents to set rules on how kids use their phone; 66% of kids say they have rules, and 92% think the rules are fair (consistent across age groups and types of phone).
  6. If kids had to choose one technology device for the rest of their lives, the majority say they would choose a mobile phone above all else — computer, television, tablet.
  7. 75% of kids think their friends are addicted to phones.
  8. 62% of parents are concerned that they are not able to fully monitor everything their child is doing and seeing on the phone.
  9. 40% of kids with a mobile phone say their parents have not talked to them about staying safe and secure when using the mobile phone.
  10. 58% of parents say that their mobile phone provider offers tools or resources for parents to address issues like overages, safety, security and monitoring.

If you’re among the 38% of parents at a loss as to how to help your children be safer on their mobile phones, see my blog Using Mobile Phones Safely.

Linda


Want Increased Control Over Online Communications? Consider Wickr

July 9, 2012

If you’re tired of having your personal information, conversations, photos, texts, and video messages exploited by companies, used to embarrass you by frenemies, or pawed over by data collection services, Wickr’s an app worth considering.

The company’s founders have the credentials and the right motivation to build a tool that puts control of your communications squarely – and simply – in your own hands. Kara Lynn Coppa is a former defense contractor; Christopher Howell is a former forensics investigator for the State of New Jersey; Robert Statica is a director at the Center for Information Protection at the New Jersey Institute of Technology; and Nico Sell is a security expert and longtime organizer for Defcon, an annual hacker convention.

Responding to questions during an interview, Ms. Sell said, “Right now, everyone is being tracked and traced in ways they don’t understand by numerous governments and corporations. Our private communications, by default, should be untraceable. Right now, society functions the other way around.”

Continuing, Ms. Sell said, “If my daughter wants to post a picture of our dog, Max, on Instagram, she shouldn’t have to know to turn the geo-location off. People have always asked me ‘How do I communicate securely and anonymously?’ There was never an easy answer, until now.”

Mr. Statica added, “There is no reason your pictures, videos and communications should be available on some server, where it can easily be accessed by who-knows-who, or what service, without any control over what people do with it.”

Amen to these views.

So what does Wickr offer?

Encrypted messaging – all messages – text, photos, video, and audio – sent through the service are secured “by military-grade encryption… They can only be read by you and the recipients on the devices you authorize.” Wickr stores only the encoded result – and only for as long as needed for system continuity.
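
To picture what that means in practice, here is a minimal public-key encryption sketch in Python using the PyNaCl library. This is purely illustrative – Wickr has not published its code, and this is not its implementation – but it shows the core idea: only a holder of the recipient’s private key can read the message, while any server in the middle sees only ciphertext.

```python
# Minimal sketch of the end-to-end encryption idea, using PyNaCl.
# Illustrative only -- not Wickr's actual implementation.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; only public keys are ever shared.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")  # a server sees only this

# Only the recipient's private key can decrypt the message.
receiving_box = Box(recipient_key, sender_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```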

Self-destruct option – allows you to determine how long the people you communicate with can view the content – text, video, photos – before it is erased. (Recipients can, however, still capture a screenshot of the content, but the team behind Wickr is looking for ways to notify the sender if a screenshot is taken.)
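
Conceptually, the self-destruct feature boils down to the sender attaching an expiry time that the recipient’s app enforces. A toy sketch of the idea (my illustration, not Wickr’s design):

```python
import time

def make_message(body: str, ttl_seconds: float) -> dict:
    # The sender stamps the message with an expiry time.
    return {"body": body, "expires_at": time.time() + ttl_seconds}

def read_message(msg: dict) -> str:
    # The recipient's client erases the content once the TTL has passed.
    if time.time() >= msg["expires_at"]:
        msg["body"] = None
        return "[message expired]"
    return msg["body"]
```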

Total phone wipe – one of the risks of recycling cellphones is that you can’t easily erase the phone’s storage, which enables criminals (and forensic investigators) to recreate your content. Wickr addresses this issue with an anti-forensics mechanism that erases deleted content by overwriting the metadata and rendering it indecipherable.
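
The general anti-forensics idea is to overwrite deleted data in place before removing it, rather than just deleting the directory entry and leaving the bytes behind. A rough sketch of that idea for a single file (illustrative only – real secure deletion on phones is considerably harder, since flash storage’s wear-leveling can leave stray copies behind):

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's bytes with random data before deleting it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to physical storage
    os.remove(path)
```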

Anonymity on Wickr – the service takes your privacy so seriously that it doesn’t even know your username, and you aren’t forced to share your email address or any other personal information that could identify you to the service or to others. Instead, your information is “irreversibly encoded with multiple rounds of salted cryptographic hashing prior to being sent to our servers. Even we cannot determine the actual values based on the hashed values we store.”
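
In outline, that kind of irreversible encoding is a salted, iterated hash. Here is a minimal sketch using Python’s standard library – it illustrates the technique Wickr describes, but the function name, salt handling, and iteration count are my own assumptions, not Wickr’s actual parameters:

```python
import hashlib
import os

def obscure_identifier(username: str, salt: bytes) -> bytes:
    # Many rounds of salted hashing: cheap to recompute for lookups,
    # infeasible to reverse back to the original username.
    return hashlib.pbkdf2_hmac("sha256", username.encode(), salt, 100_000)

salt = os.urandom(16)  # a random salt (illustrative)
token = obscure_identifier("alice", salt)
print(token.hex())     # only this opaque value would leave the device
```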

Free to use – you might think a service like this would put a hefty price on your privacy; instead, the company has chosen the “freemium” business model, charging only for premium features like sending files to large groups or sending large files.

NOTE: I am not associated in any way with this app, nor do I know any of the individuals behind it. While it’s rare I endorse a product, the philosophy behind the service is fabulous, and the tools are something every consumer needs to protect themselves and their privacy.

The next step is for every consumer to demand this same level of respect and security of EVERY online service with whom they interact. 

Want to learn more? Read Wickr’s FAQ

 

Linda


Canadian Teens Behind Human Trafficking Ring

June 14, 2012

In a sickening new twist in teen-on-teen sexual exploitation, three Canadian teen girls (two 15-year-olds and one 17-year-old) are charged with human trafficking for allegedly luring other teen girls through social media services, arranging to meet up at a housing complex, and then forcing them into prostitution.

According to ABC News, “three separate incidents were identified in which three female victims, ranging from 13 to 17 years old, were lured to a housing complex in Ottawa and then forcibly driven to other locations for prostitution purposes.” …[Ottawa Police Staff Sgt. John] McGetrick said that “social media was a factor” in planning the initial meetings arranged between the suspects and the victims. He told ABCNews.com that the suspects and the victims were “vaguely known to each other,” but were not friends. “The meetings were intended to do an enjoyable activity, let’s say, hang out,” he said. “There was no ill-intention in the invite. Obviously things changed once that happened.”

Shockingly, police do not believe that adults were involved in the trafficking ring; it looks as if this was entirely the creation of the three girls.

In addition to human trafficking charges, these girls are charged with robbery, procuring, forcible confinement, sexual assault, assault, uttering threats and abduction, yet because they are minors, McGetrick believes the teens will be tried as juveniles and only face up to 3 years in prison.

This isn’t the first case of teens leveraging technology and online services to sexually exploit others: in 2010 a 17-year-old was arrested for pimping another girl through Craigslist, and there are reports that teen girls in the UK deliberately set up meetings between pedophiles they’d found online and girls they didn’t like, in the hopes those girls would be exploited.

But this case is, as far as we know, unique in the deliberation shown in befriending and grooming their victims, the logistical complexity of arranging these abductions, and the sophistication in finding men interested in raping young girls and arranging the meetings with these ‘johns’.

Takeaway for parents

Teens virtually meet other teens online every day, and it can seem particularly innocuous when the teens are remotely connected as they apparently were in this case. Most of the time, these friendships are harmless, in many cases they have very positive and lasting benefits that should be encouraged. So how do you spot and mitigate risks to prevent tragedies like those reported here from occurring?

  1. Talk and keep talking. You should know and understand the services your teens are using and who they’re interacting with. That doesn’t mean reading all their comments, but it does mean making sure the conversations are healthy, and it does mean discussing situations where, even when everything seems fine – as it surely did for the victims in this case – things may not be as they seem.
  2. Make sure your teen knows they can NEVER EVER meet someone or a group of people for the first time alone or in a private place. Meetings need to occur in public places when other people are around. Ideally you go with them to meet the other teen and that teen’s parents. At a bare minimum you need to know where and when they’re meeting and how to contact the person they’re meeting up with, AND they should be required to check in with you on a set schedule. This way, any missed call alerts you instantly to take action.

How you negotiate that your kids always tell you before meeting anyone is critical. With four young adult children, I’ve lived this compromise with each of them during their teens.

They thought I was paranoid, but we framed it this way: they wanted to do something, and I needed to know they were safe. Then we negotiated a solution that both of us could live with. This gave my teens a way to do what they wanted – meet up – and minimized my anxiety.

In each case my teens were fairly sarcastic when they called to check in, and it was clear whoever they met up with knew exactly what the calls were about: in the background of the “hi mom, just want you to know I’m ok, the friend really is my age and not an old pervert” I could hear laughing. But I was more than fine with that. I knew exactly where to find my teen, I knew they were ok, and, equally important, the person they met knew I was monitoring the meeting.

Fortunately the vast majority of people we, and our teens, meet online are wonderful, respectful people who have represented themselves and their intentions honestly.

For the fraction of cases that are malicious, the talks and the safety rules need to be in place.

To learn more about human trafficking and the internet see my blogs:

To learn more about meeting online ‘friends’ in person safely see my advice for online daters, and buyers and sellers:

Linda


Infographic – Mother, Can I Trust Google?

June 3, 2012

This infographic by BackgroundCheck.org provides a great timeline of Google feature rollouts and some of their largest privacy breaches. It also suggests ways for users to reduce tracking of their online actions. It’s definitely worth a scan.


Linda


Teens, Millennials, and Technology: How Well Do You Know What They’re Doing? [Infographic]

June 2, 2012

This infographic from OnlineSchools.com, titled “The Millennial Teenager,” has some great stats to help you understand the devices teens and millennials (18–34-year-olds) use, what they’re doing about their privacy, and how they split their time between multiple devices and technologies. It’s a fun and informative read.

The Millennial Teenager

 

Linda


Frustrated by CAPTCHAs with wavy, pale, weird, or unintelligible characters? Now, there’s hope!

May 14, 2012

You’ve seen CAPTCHAs – Completely Automated Public Turing tests to tell Computers and Humans Apart – on plenty of websites. The words are scrambled, twisted, wavy, or embellished with lines and wiggles (even overlaid with images of cats), and are designed to be decipherable by humans yet block automated programs from getting into websites.

The problem is that all too often they’re NOT decipherable. You’ve probably cursed the darn things on numerous occasions as you fail – repeatedly – to figure out the characters and are presented with a new set of largely indecipherable options.

If you’re among the millions frustrated with wiggles and dots, take heart. I’ve recently seen very cool security checks that allow you to pass the ‘human’ test by applying basic logic. AMEN!

In the example shown below, users are asked to identify a number in a sequence based on instructions, and it was such a pleasant experience that I’m recommending companies switch methods. The key is to have an infinite number of variables, some spelled out, some shown numerically, so an automated system can’t simply recognize a few options.
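
To make the approach concrete, here is a toy generator for that kind of logic challenge – a sketch of the general idea, not any particular vendor’s implementation. It mixes spelled-out and numeric digits so an automated system can’t simply memorize a fixed set of images:

```python
import random

WORDS = ["zero", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]

def make_challenge(length: int = 6) -> tuple[str, str]:
    """Build a logic CAPTCHA: show a sequence, ask for the Nth entry."""
    digits = [random.randrange(10) for _ in range(length)]
    # Render each digit randomly as a word or a numeral.
    shown = [WORDS[d] if random.random() < 0.5 else str(d) for d in digits]
    position = random.randrange(length)
    prompt = (f"Sequence: {', '.join(shown)}. "
              f"Type entry number {position + 1} as a digit.")
    return prompt, str(digits[position])

prompt, answer = make_challenge()
print(prompt)  # e.g. "Sequence: seven, 3, two, 9, five, 0. Type entry number 4 as a digit."
```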

Is it possible an advanced scripted program could figure out the logic? Probably. But there are additional tests that can be performed to identify non-humans by their interactions on a site, and that don’t put humans through visual contortions.

Whoever thought of this alternative is brilliant.

Linda