FTC and EU Weigh in on Face Recognition Applications – Why Limiting the Use of This Technology Matters

August 1, 2012

Who should own and control data about your face? Should companies be able to collect and use your facial data at will?

Is it enough to let users opt out of facial recognition, or should companies be required to obtain your specific opt-in before collecting your facial data? If a company offers multiple services, is one opt-in enough, or should it be required to seek your permission for every new type of use? Under what conditions should a company be able to sell and monetize its ability to recognize you?[i]

There are a lot of cool uses for facial recognition tools, but how informed are you about the risks? How do you weigh the pros and cons to make an informed choice about who can identify you?

Governments are paying greater attention to potential privacy threats

The Federal Trade Commission (FTC) has just released a preliminary report identifying the latest facial recognition technologies and how companies are currently using them. The report also outlines the FTC's plan for creating best-practice guidelines for the industry, expected later this year.

In Europe, concerns over facial recognition technologies' potential to breach personal privacy have prompted a similar review.

This is great news for consumers. It signals a shift in the timing of privacy reviews from a reactive approach, where guidelines have come only after consumers' privacy has largely been trampled, to a far more proactive approach to protecting consumers' online privacy, safety, and security.

In response, companies like Facebook and Google are dramatically increasing their lobbying budgets and campaign funding

It is no coincidence that as government bodies increase their focus on consumers' online privacy, the companies making the biggest bucks from selling information about you – and access to you – are pouring money and human resources into influencing the government's decisions.

According to disclosure forms obtained by The Hill, “Facebook increased its lobbying spending during the second quarter of 2012, allocating $960,000, or three times as much as during the same three-month period in 2011”.

And a report in the New York Times noted that “With Congress and privacy watchdogs breathing down its neck, Google is stepping up its lobbying presence inside the Beltway — spending more than Apple, Facebook, Amazon and Microsoft combined in the first three months of the year.” Google spent $5.03 million on lobbying from January through March of this year, a record for the Internet giant, and a 240 percent increase from the $1.48 million it spent on lobbyists in the same quarter a year ago, according to disclosures filed Friday with the clerk of the House.

In addition to lobbying spend, these companies, their political action committees (PACs), and the billionaires behind them have exorbitant amounts of money for political contributions – chits to be called in when privacy decisions that could impact their bottom line hang in the balance.

Here’s what today’s facial recognition technologies can do – and are doing:

It only takes a quick look for you to identify someone you know; yet facial recognition technologies are faster and more accurate than people will ever be – and they are capable of identifying billions of individuals.

Although many companies are still using basic, and largely non-invasive, facial recognition tools that simply detect whether there is a face in a photo, an increasing number of companies are leveraging advanced facial recognition tools that can have far-reaching ramifications for your privacy, safety, and even employability.

Advanced facial recognition solutions include Google+'s Tag My Face, Facebook's Photo Tag Suggest, Android apps like FaceLock and Visidon AppLock, and Apple apps like Klik, FaceLook, and Age Meter; then there are apps like SceneTap, FACER Celebrity, FindYourFaceMate.com, and DoggelGanger.com.  New services leveraging these features will become increasingly common – particularly if strict privacy regulations aren't implemented.

Some companies use facial recognition services in their photo and video applications to help users recognize people in photos, or even to tag them automatically. (You may not want to be tagged in a particular photo, but if you allow photo tagging you can only try to minimize the damage; you can't proactively prevent it.)

Some services use facial recognition for security purposes; your face essentially becomes your unique password (but what do you do if it gets hacked? Change your face??).

What are the potential risks of facial recognition tools to individuals?

The Online Privacy Blog enumerates some of the risks in easily understood terms; here is an excerpt from their article The Top 6 FAQs about Facial Recognition:

Take the massive amount of information that Google, Facebook, ad networks, data miners, and people search websites are collecting on all of us; add the info that we voluntarily provide to dating sites, social networks, and blogs; combine that with facial recognition software; and you have a world with reduced security, privacy, anonymity, and freedom.  Carnegie Mellon researchers predict that this is "a world where every stranger in the street could predict quite accurately sensitive information about you (such as your SSN, but also your credit score, or sexual orientation)" just by taking a picture.

Risk 1:  Identity theft and security

Think of your personal information—name, photos, birthdate, address, usernames, email addresses, family members, and more—as pieces of a puzzle.  The more pieces a cybercriminal has, the closer he is to solving the puzzle.  Maybe the puzzle is your credit card number.  Maybe it's the password you use everywhere.  Maybe it's your social security number.


Facial recognition software is a tool that can put all these pieces together.  When you combine facial recognition software with the wealth of public data about us online, you have what's called "augmented reality": "the merging of online and offline data that new technologies make possible."  You also have a devastating blow to personal privacy and an increased risk of identity theft.

Once a cybercriminal figures out your private information, your money and your peace of mind are in danger.  Common identity theft techniques include opening new credit cards in your name and racking up charges, opening bank accounts under your name and writing bad checks, using your good credit history to take out a loan, and draining your bank account.  More personal attacks may include hijacking your social networks while pretending to be you, reading your private messages, and posting unwanted or embarrassing things “as” you.

The research:  how facial recognition can lead to identity theft

Carnegie Mellon researchers performed a 2011 facial recognition study using off-the-shelf face recognition software called PittPatt, which was later purchased by Google.  By cross-referencing two sets of photos—one taken of participating students walking around campus, and another taken from pseudonymous users of online dating sites—with public Facebook data (things you can see on a search engine without even logging into Facebook), they were able to identify a significant number of people in the photos.  Based on the information they learned through facial recognition, the researchers were then able to predict the social security numbers of some of the participants.
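In rough outline, the matching step works like this: a face recognizer converts each photo into a numeric "embedding" vector, and two photos are declared the same person when their vectors point in nearly the same direction. The sketch below is a simplified illustration of that idea, not PittPatt's actual algorithm; the vectors, names, and threshold are all made up:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face-embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for the embeddings a real face recognizer would produce.
campus_photos = {"student_042": [0.91, 0.10, 0.40]}
facebook_profiles = {
    "Jane Doe": [0.90, 0.12, 0.39],   # public profile photo
    "John Roe": [0.05, 0.98, 0.11],
}

THRESHOLD = 0.99  # declare a match only when embeddings are nearly identical

for photo_id, emb in campus_photos.items():
    for name, profile_emb in facebook_profiles.items():
        if cosine_similarity(emb, profile_emb) > THRESHOLD:
            print(f"{photo_id} is probably {name}")
```

Once the anonymous campus photo is linked to a named public profile, everything attached to that profile (hometown, birthdate, friends) comes along with the name, which is what enabled the SSN predictions.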

They concluded this merging of our online and offline identities can be a gateway to identity theft:

If an individual’s face in the street can be identified using a face recognizer and identified images from social network sites such as Facebook or LinkedIn, then it becomes possible not just to identify that individual, but also to infer additional, and more sensitive, information about her, once her name has been (probabilistically) inferred.

Some statistics on identity theft from the Identity Theft Assistance Center (ITAC):

  • 8.1 million adults in the U.S. suffered identity theft in 2011
  • Each victim of identity theft loses an average of $4,607
  • Out-of-pocket losses (the amount you actually pay, as opposed to what your credit card company absorbs) average $631 per victim
  • New account fraud, where thieves open new credit card accounts on behalf of their victims, accounted for $17 billion in fraud
  • Existing account fraud accounted for $14 billion.

Risk 2:  Chilling effects on freedom of speech and action

Facial recognition software threatens to censor what we say and limit what we do, even offline. Imagine that you're known in your community for being an animal rights activist, but you secretly love a good hamburger.  You're sneaking in a double cheeseburger at a local restaurant when, without your knowledge, someone snaps a picture of you.  It's perfectly legal for someone to photograph you in a public place, and aside from special rights of publicity for big-time celebrities, you don't have any rights to control this photo.  This person may not have any ill intentions; he may not even know who you are.  But if he uploads it to Facebook, and Facebook automatically tags you in it, you're in trouble.

Anywhere there’s a camera, there’s the potential that facial recognition is right behind it.

The same goes for the staunch industrialist caught at the grassroots protest; the pro-life female politician caught leaving an abortion clinic; the CEO who has too much to drink at the bar; the straight-laced lawyer who likes to dance at goth clubs.  If anyone with a cell phone can take a picture, and any picture can be tied back to us even when the photographer doesn’t know who we are, we may stop going to these places altogether.  We may avoid doing anything that could be perceived as controversial.  And that would be a pity, because we shouldn’t have to.

Risk 3:  Physical safety and due process

Perhaps most importantly, facial recognition threatens our safety.  It’s yet another tool in stalkers’ and abusers’ arsenals.  See that pretty girl at the bar?  Take her picture; find out everything about her; pay her a visit at home.  It’s dangerous in its simplicity.

There’s a separate set of risks from facial recognition that doesn’t do a good job of identifying targets:  false identifications.  An inaccurate system runs the risk of identifying, and thus detaining or arresting, the wrong people.  Let’s say that an airport scans incoming travelers’ faces to search for known terrorists.  Their systems incorrectly recognize you as a terrorist, and you’re detained, searched, interrogated, and held for hours, maybe even arrested.  This is precisely why Boston’s Logan Airport abandoned its facial recognition trials in 2002:  its systems could only identify volunteers 61.4 percent of the time.
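A little arithmetic shows why an inaccurate screening system flags the wrong people far more often than the right ones: genuine targets are vanishingly rare, so even a small false-alarm rate swamps the true matches. In the sketch below, only the 61.4 percent detection figure comes from the Logan Airport trial cited above; the traveler volume, target count, and false-alarm rate are hypothetical assumptions for illustration:

```python
# Hypothetical airport screening numbers; only the 61.4% detection
# rate comes from the Logan Airport trial.
travelers_per_day = 100_000
actual_targets = 1                  # assume 1 genuine watch-list match per day
detection_rate = 0.614              # true-positive rate from the trial
false_alarm_rate = 0.01             # assumed: 1% of innocents flagged

true_alerts = actual_targets * detection_rate
false_alerts = (travelers_per_day - actual_targets) * false_alarm_rate

share_innocent = false_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:.0f} innocent travelers flagged per day")
print(f"{share_innocent:.1%} of all alerts point at the wrong person")
```

Under these assumptions, well over 99 percent of the people the system stops are innocent – which is exactly the due-process problem described above.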


Three steps to protecting your facial data:

  1. There are many positive uses for facial recognition technologies, but the lack of consumer protections makes them unnecessarily risky. Until the control and management of this data is firmly in the hands of consumers, proactively opt out of such features and avoid services where opting out is not an option.
  2. Voice your concerns to elected officials to offset the impact of corporate lobbying and campaign contributions intended to soften proposed consumer protections.
  3. Voice your frustration to the companies that are leveraging this technology without providing you full control over your facial data – including the ability to have it removed; to block it from being sold, traded, or shared; and to explicitly identify when and how this data can be used, whether on its own or combined with other data about you. If a company does not respect your wishes, stop using it. If you allow yourself to be exploited, plenty of companies will be happy to oblige.

Linda


[i] See The One-Way-Mirror Society – Privacy Implications of Surveillance Monitoring Networks to understand some of the implications of the use of facial recognition tools when companies sell this information.


Want Increased Control Over Online Communications? Consider Wickr

July 9, 2012

If you’re tired of having your personal information, conversations, photos, texts, and video messages exploited by companies, used to embarrass you by frenemies, or pawed over by data collection services, Wickr’s an app worth considering.

The company's founders have the credentials and the right motivation to build a tool that puts control of your communications squarely – and simply – in your own hands.  Kara Lynn Coppa is a former defense contractor; Christopher Howell is a former forensics investigator for the State of New Jersey; Robert Statica is a director at the Center for Information Protection at the New Jersey Institute of Technology; and Nico Sell is a security expert and longtime organizer for Defcon, an annual hacker convention.

Responding to questions during an interview, Ms. Sell said, "Right now, everyone is being tracked and traced in ways they don't understand by numerous governments and corporations. Our private communications, by default, should be untraceable. Right now, society functions the other way around."

Continuing, Ms. Sell said, "If my daughter wants to post a picture of our dog, Max, on Instagram, she shouldn't have to know to turn the geo-location off. People have always asked me, 'How do I communicate securely and anonymously?' There was never an easy answer, until now."

Mr. Statica added to this point, saying, "There is no reason your pictures, videos and communications should be available on some server, where it can easily be accessed by who-knows-who, or what service, without any control over what people do with it."

Amen to these views.

So what does Wickr offer?

Encrypted messaging – all messages – text, photos, video, and audio – sent through the service are secured "by military-grade encryption… They can only be read by you and the recipients on the devices you authorize." Wickr stores only the encoded result – and only for as long as needed for system continuity.

Self-destruct option – allows you to determine how long the people you communicate with can view the content – text, video, photos – before it is erased. (Recipients can, however, still capture a screenshot of the content, but the team behind Wickr is looking for ways to notify the sender if a screenshot is taken.)
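The expiry logic behind a self-destruct timer can be sketched in a few lines: each message carries an expiration time, and any read attempt after that moment erases the content instead of returning it. This is an illustration of the general technique, not Wickr's implementation (which enforces the timer inside its encrypted client apps):

```python
import time

class ExpiringMessage:
    """Sketch of a self-destructing message: readable only until its TTL lapses."""

    def __init__(self, body, ttl_seconds):
        self.body = body
        self.expires_at = time.time() + ttl_seconds

    def read(self):
        if time.time() >= self.expires_at:
            self.body = None          # erase the content on expiry
            return None
        return self.body

msg = ExpiringMessage("meet at noon", ttl_seconds=1)
print(msg.read())        # within the window: returns the text
time.sleep(1.1)
print(msg.read())        # after expiry: content has been erased
```

The screenshot caveat above is why this can only ever limit exposure, not guarantee it: once content is displayed, the recipient's device controls what happens next.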

Total phone wipe – one of the risks of recycling cellphones is that you can't easily erase the phone's storage, which enables criminals (and forensic investigators) to recreate your content. Wickr addresses this issue with an anti-forensics mechanism that erases deleted content, overwriting the data and metadata and rendering them indecipherable.
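The overwrite-before-delete idea can be illustrated with a short sketch: instead of merely unlinking a file (which leaves its bytes on the storage medium), the file is first overwritten with random data. This is a simplification of what anti-forensics tools do, not Wickr's actual mechanism; on flash storage and journaling filesystems, overwriting in place is not guaranteed to destroy every copy:

```python
import os

def secure_delete(path, passes=3):
    """Overwrite a file's bytes before unlinking it, so the raw data can't
    simply be read back off the disk. Simplified: real anti-forensics tools
    also contend with filesystem journals, flash wear leveling, and
    leftover filesystem metadata."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))   # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())          # force the overwrite to the device
    os.remove(path)

# Demo: create a scratch file, then wipe it.
with open("scratch.bin", "wb") as f:
    f.write(b"old text messages")
secure_delete("scratch.bin")
print(os.path.exists("scratch.bin"))   # prints False
```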

Anonymity on Wickr – the service takes your privacy so seriously that it doesn't even know your username; you aren't forced to share your email address or any other personal information that could identify you to the service or to others. Instead, your information is "irreversibly encoded with multiple rounds of salted cryptographic hashing prior to being sent to our servers. Even we cannot determine the actual values based on the hashed values we store."
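Salted, iterated hashing of an identifier can be sketched with Python's standard library: the server stores only a one-way token, which lets it recognize repeat input without ever learning the original value. This is illustrative only; Wickr hasn't published its exact scheme, and the function and parameter choices here are assumptions:

```python
import hashlib
import os

def anonymize(username, salt, rounds=100_000):
    """One-way encode an identifier with salted, iterated hashing.
    Illustrative sketch, not Wickr's actual scheme."""
    digest = hashlib.pbkdf2_hmac("sha256", username.encode(), salt, rounds)
    return digest.hex()

salt = os.urandom(16)            # random salt, generated once per deployment
stored = anonymize("alice", salt)
print(stored)                    # the server sees only this, never "alice"

# The same input always maps to the same token (so the server can route
# traffic), but the token can't be reversed to recover the username.
assert stored == anonymize("alice", salt)
assert stored != anonymize("bob", salt)
```

The salt and the many rounds are what make brute-forcing the token back to a username expensive, even for short identifiers.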

Free to use – you might think a service like this would put a hefty price on your privacy; instead, the company has chosen a "freemium" business model that charges only for premium features like sending files to large groups or sending large files.

NOTE: I am not associated in any way with this app, nor do I know any of the individuals behind it. While it’s rare I endorse a product, the philosophy behind the service is fabulous, and the tools are something every consumer needs to protect themselves and their privacy.

The next step is for every consumer to demand this same level of respect and security of EVERY online service with whom they interact. 

Want to learn more? Read Wickr’s FAQ

 

Linda


New Weekly Headlines-Inspired Internet Safety Content Available for Schools and Parents

December 20, 2011

In collaboration with the internet safety group iKeepSafe, I’m pleased to announce a new initiative for introducing digital literacy, safety, security and privacy topics to students and your children.

Each week, on behalf of iKeepSafe's iKeepCurrent project, I pick a current news story and use it as the genesis of a short safety, security, privacy, citizenship, or other internet-related lesson. By pulling from news of what's happening today, the lessons are extremely relevant and provide a natural way to put events into perspective as teachable moments, and as drivers for learning new and positive online skills.

Every lesson includes a list of key concepts, vocabulary words, equipment needed, the full news articles, the lesson plan, optional activities, additional resources, plus learning development resources for teachers, and specific material just for parents.

To check out the lessons and see how you can leverage this material, click on one of the thumbnails (below) or go to http://ikeepcurrent.org/ and register for the weekly email.

I will begin posting these lessons every week as they appear.

Linda


Credit Score on Steroids to Track Consumers' Every Financial Move

December 4, 2011

If you think lenders to date have poked their nose too far into your financial affairs, you’re in for a shock.  A new service by a company called CoreLogic is now leveraging massive amounts of data – from your past and current financial actions – to strip your financial profile to the bone.

This new service will delve into things like missed rental payments, applications for payday loans, repayment histories, child support payments, evictions, whether you're behind on property taxes or owe homeowners dues, whether you're underwater on your home, information about all your properties, and more. The company is also evaluating whether to include information on things like your cellphone and utility bill payment histories.

The goal of the service is to give lenders a better picture of the real economic state of potential borrowers, to help them better screen loan applicants.

But what does it do to you and your privacy?

You don't get to decide whether this information is collected about you. And when it is combined with everything else tracked and indexed about you – your location, your web patterns, posts you make or that others make about you, and government data like your birth, marriage, divorce, property, voter, and vehicle records, your addresses, etc. – the intrusion is alarming.

And this is just the tip of the iceberg.  As reported in the New York Times, Joanne Gaskin, director of product management for global scoring at FICO (which is partnering with CoreLogic), says the company's next step is to build something that will try to get even deeper inside your financial mind: a more sophisticated tool that will predict how you might behave under different loan terms.

Sound like a new twist on the 2002 film Minority Report? Unfortunately, this time it isn't a movie.

To learn more about CoreLogic and what this service may mean to you, I highly recommend an excellent New York Times article titled A Credit Score That Tracks You More Closely, written by Tara Siegel Bernard.  She summed it up with "while the credit bureaus may not yet know every last detail about your financial life, you should assume that they are watching".

Where will we as a society draw a line in the sand?

Of course businesses want this information – and more – about you. The question is whether we'll roll over and cede our privacy, including projections of our future actions, to corporations.

As the old Lynyrd Skynyrd song put it ‘if you know what I mean why don’t you stand up and scream, cause there’s things going on that you don’t know’.

Linda


FTC Asked to Investigate Use of Supercookies

October 2, 2011

The House of Representatives bipartisan privacy caucus has asked the FTC to look into companies’ use of supercookies – which are like traditional tracking cookies on steroids.

A basic "cookie" is a small file that a website installs on consumers' computers and other internet-connected devices, allowing the website or service to track the user's online activities.  These cookies can be deleted by the user, effectively wiping out a website's ability to track that user.

Supercookies, on the other hand, are capable of re-creating users' profiles even after people delete regular cookies, and these new tracking methods are almost impossible for computer users to detect, according to researchers at Stanford University and the University of California at Berkeley, as reported in the Wall Street Journal.
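The "respawning" behavior that makes supercookies so hard to kill can be modeled in a few lines: the tracking ID is written to several independent storage locations, and any surviving copy is used to repopulate the ones the user cleared. Real supercookies hide the ID in browser stores such as Flash local shared objects, HTML5 localStorage, and cache ETags; the plain dictionaries below are simplified stand-ins for those stores:

```python
# Simplified model of supercookie respawning. Each key stands in for an
# independent browser storage location where a copy of the ID is hidden.
stores = {"http_cookie": None, "flash_lso": None, "html5_storage": None}

def set_everywhere(user_id):
    """Write the tracking ID to every available store."""
    for name in stores:
        stores[name] = user_id

def get_user_id():
    """Find any surviving copy, then respawn it into the cleared stores."""
    survivors = [v for v in stores.values() if v is not None]
    if survivors:
        set_everywhere(survivors[0])
        return survivors[0]
    return None

set_everywhere("user-8675309")
stores["http_cookie"] = None       # the user deletes their cookies...
print(get_user_id())               # ...but the ID is recovered and respawned
```

Deleting ordinary cookies clears only one of the stores, so the tracker silently reappears on the next visit – which is why the researchers found these methods nearly impossible for users to defeat.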

In a letter, Reps. Ed Markey (D-Mass.) and Joe Barton (R-Tex.), co-chairmen of the bipartisan privacy caucus, state that the use of supercookies invades user privacy and may violate the FTC's unfair and deceptive acts guidelines.

Speaking on the subject, Barton said, “I think supercookies should be outlawed because their existence eats away at consumer choice and privacy.”

Among the companies researchers identified as having employed supercookies are MSN.com and Hulu.com; both companies have said they have since taken action to change their tracking.

It will be interesting to see how the FTC rules on supercookie technology implementations; we can only hope that consumer privacy comes out on top.

Linda


Evolution of Parents' Awareness of Internet Risks Infographic

June 26, 2011

A fun new infographic by McAfee titled What Do Pirates, Coffee Pots, and Smiley Faces Have to Do with Kids Online? provides a great overview of how far the internet has come in 40 years, and how each step has brought tremendous new potential along with a few real risks.  The biggest takeaway should be the question of how, knowing what we do, do we incorporate and evolve technologies in a way that integrates consumer safety, privacy, security, and education into the development process.

For all the quantum leaps forward in enabling technology, we have largely failed to keep pace in defending that same technology. The race to be first with cool features has meant deprioritizing safer tools. To see this tradeoff, consider the classic triangle of Quality, Time, and Features:

  1. You can cut quality to put in more features faster.
  2. You can improve quality by either taking more time or adding fewer features.
  3. What you can’t do is go faster with more features of high quality.

Given these constraints, and knowing that consumers rant about safety but go with whichever company delivers the cool new features first, it is quality that suffers. Companies skip safety, security, and privacy reviews; they cut the safety, security, and privacy features; they fail to dedicate employees to designing and testing for safety, security, and privacy; and they shortchange the UI and other education that could have helped overcome some of their safety gaps. Then, because support staff is expensive, they cut quality consumer support as well.

Add to this that, for the most part, we still have feature development in one part of a company and 'safety' in another (often safety, security, and privacy are three separate groups, just to make matters worse). So the development teams create cool features in a relative vacuum where features and speed always win; then, once the feature is released and the furor starts, the safety/security/privacy teams jump in to push requirements on the development team, attempting retroactively to solve the problems, minimize them, or craft a great PR pitch that doesn't actually resolve anything.

Now, compound this with an understanding that the development teams have already moved on to building the next set of cool features they're trying to bring to market faster (by cutting quality). They don't want to be hampered by security, safety, and privacy requirements that would make them go back and add sloppy patches to their previous features (the solutions have to be patches because the features weren't designed to work that way up front, and they're sloppy because no one wants to spend time on them). The result is:

Programs and services that are easily hacked and data that is easily stolen. (Oops, we cut test time for security issues, and those we found we either postponed or decided not to fix.) Policies for acceptable use that aren't enforced. (Oops, that would have required building abuse tracking, reporting, and moderation tools, but in the need for speed those were cut.) Safety, security, and privacy functions that are slapped together, building on the previous tier of slapped-together functions, because no one took the time to design for these in the first place. And so on.

Research repeatedly highlights that consumers and businesses must be able to trust the internet in order to sustain the continued growth of the internet economy, but that message largely falls on deaf ears.  Why? It's not that big businesses don't understand the need for consumer trust and confidence; it's that these are long-term needs for an enormous industry, and each company is delivering right-now technologies for its own competitive advantage. It's called passing the buck to earn a buck: grab market share now, even though averting the coming collapse in confidence would require every company to invest in safety, security, and privacy. It's a very poor way to do business, and it's a very poor way to treat consumers.

This infographic does a good job of showing how one aspect of consumer trust in the internet – that of parents – is eroding.

It’s Internet Safety Month; what are companies going to do to turn this around? What will the meter on this infographic show next year?

Linda


Your Information IS Displayed on Spokeo – Here’s How to Remove It

June 4, 2011

Spokeo is a search engine specifically designed to collect YOUR information and make it public. My frustration with Spokeo and other data aggregators is that they neither ask your permission to expose your information nor notify you of their actions. They just happily make money by placing you at risk.

Spokeo has been around for several years, so why am I writing this now? The short answer is that I got a Google alert pointing to my data, so it just recently pissed me off.  I tested the service when it was first getting underway, but back then the data was pretty sparse.  Now you'll be uncomfortable about what they've collected – and display – about you.

To start, go to Spokeo and enter your (or someone else's) name, phone number (cell or home), email alias, or user name.

The 'top level' information is free; you pay to see more, and/or they make money by advertising a company that will help you keep your information off sites like Spokeo. But within that top-level FREE information you learn quite a bit about the person you are looking up, including:

  • Age
  • Full name (with middle initials)
  • Marital status
  • Address (including Google Earth view)
  • Gender
  • Religious affiliation
  • Educational status
  • Who else lives in your home
  • Ethnic background
  • Horoscope sign
  • Phone number (even 'private' numbers and cell numbers)
  • Whether you own a home, and the home's worth
  • Economic and wealth levels
  • Lifestyle and interests (like: loves reading, has children, enjoys shopping, subscribes to magazines)
  • Neighborhood info (like cost of homes, average incomes, ethnic and age profile)
  • Social networks participated in
  • Publicly posted photos
  • Email address
  • Political affiliation
  • Occupation

In their own words, this is what the company says they’re about (italics added):

Spokeo is a search engine specialized in aggregating and organizing vast quantities of people-related information from a large variety of public sources. The public data is amassed with lightning speed, and presented almost instantly in an integrated, coherent, and easy-to-follow format.

While an individual could on their own, for example, potentially locate a person’s phone number or address by searching phone books, then redirect their search to a county tax assessor’s office to determine a home’s value, they would have to conduct literally hundreds of searches to discover all of the information available through only a single search on Spokeo.

Spokeo’s unique and powerful algorithms can swiftly navigate, sift through, and collect multitudes of scattered data that are spread across hundreds of locations, and synthesize that information in one convenient summary, delivering the most comprehensive snapshot of people-related, public data offered online to date. The search results represent an unparalleled mosaic of the vast stockpiles of public information accessible, and can offer invaluable insight into both the individual being searched, as well as the different types of information published.

When it comes to locating people-related information, Spokeo's powerful search and organization technology far surpasses that of conventional search engines. That is because Spokeo's specialized web crawlers can penetrate lesser-accessed, content-rich areas of the web, collectively known as the "deepnet," which many general-purpose search engines cannot. The "deepnet" is home to vast and largely untapped, dynamically-generated sites. And, since the majority of people-related public records are frequently stored on these types of sites rather than on web pages, Spokeo has a distinctive advantage over traditional search engines, to which these rich stockpiles of data remain out of reach.

In other words, Spokeo exposes far more information about you than even Google exposes. Without your consent. Without your knowledge.

If you want to know more about someone, you can pay a monthly fee to dig deeper.

To protect themselves in case this information is used for malicious purposes – like the wife beater trying to find where his ex-wife has moved to get away from the abuse – the site has a clause in its Terms of Use saying you may not use Spokeo.com, or any information acquired from Spokeo.com, to engage in activities that would violate applicable local, state, national or international law, or any regulations having the force of law, including the laws, regulations, and ordinances of any jurisdiction from which You access Spokeo.com. I'll just bet this is a huge deterrent to the stalker, and a real comfort for those at risk of harm.

So how do you remove your information from Spokeo’s search results?

You'll notice in the first Spokeo graphic in this article that at the bottom of the page there is a section called Protecting Your Online Identity, and that it contains their justification – which boils down to "everyone else has your info, all we do is collect it" – as well as a product pitch to pay to be protected: "All of the information that appears on Spokeo is publicly available and therefore may appear on other sites. To protect your online identity you can use a service like Reputation to manage your publicly available information." Think about it: they get to make money off exposing you, while you have to pay to protect yourself. That's just wrong.

If you’re still determined to have your information removed (and you should be) they have the following privacy statement and instructions for removing yourself:

While our search results show only publicly-accessible information gathered from hundreds of public sources, such as phone books, marketing surveys, business sites and more, we understand that you are concerned about the information shown in our search results, and allow all users to opt out. You can do so by clicking on the Privacy link located at the bottom of the page, which will take you here: http://www.spokeo.com/privacy

Removing Search Results

  1. Locate the search result you want removed. For name search results, click on the listing you want removed.
  2. Copy the URL from your browser’s address bar.
  3. Go to http://www.spokeo.com/privacy
  4. Paste the URL.
  5. Provide your email address (required to complete the verification process).
  6. Type in the Captcha Code exactly as you see it.
  7. Check your Inbox for the confirmation email, and click on the link to complete the removal process.
  8. Once you click on the link, be sure you see Spokeo's confirmation message.

Once you’ve removed your information: 

  • Put a reminder on your calendar for a month from now, and check again to see if your information remains off the service.
  • Tell your friends.
  • Contact your elected officials and demand better privacy regulation – including better privacy over your property records.
  • Start requesting additional sites take down your information. For example, have views of your home removed from Google’s Street View (see my blog How to Remove Images of Your Home from Google’s Street View), remove information from White Pages, and so on.

Linda