Men More Reckless with Personal Information Online

February 22, 2012

There is still widespread naiveté about the value of personal information and the way data is aggregated, according to a new survey by Usamp.

Men and women are quite willing to share personal information about relationships, education, employment, brand preferences and political and religious affiliations.

But when it comes to information like email or physical address, phone numbers, or their location, women put a higher premium on physical safety and are markedly more guarded than their male counterparts.

What users need to understand better are the very real risks all of this information sharing creates, and how, starting from the information women were willing to share, the rest of their information is fairly easily exposed.

Why all that information matters

When looking at the types of information both men and women were fairly willing to share, it is the unintended use of that information that places you at risk.

For example, it was through hard-fought battles in the 20th century that we gained a number of civil rights designed to protect every citizen from discrimination based on gender, religion, race, color, national origin, age, marital or family status, physical or mental disability, sexual orientation, political affiliation, financial status, and more.

These prejudices remain, and by sharing this information freely online, users enable the very types of discrimination that civil rights were established to prohibit. And users do it in a way that never places an employer or company at legal risk: a candidate will never know why they weren’t considered; they won’t even make it to the interview.

To understand how this works, consider research Microsoft conducted in January 2010 to expand understanding of the role of online information and reputation.

One aspect of the research looked specifically at how recruiters and HR professionals use online information in their candidate screening process.

As that research showed, would-be employers can now make decisions based on a number of factors long before ever inviting a candidate in for an interview, the stage at which some system of oversight could possibly identify discriminatory practices against selected candidates.

With this type of undetectable prescreening, employers can make decisions based on how people look in their photos – weight, age, skin color, health, prettiness factor, style, tattoos, and economic indicators. They can look at comments made by the candidate, friends or family members that they would never have had the right to access pre-internet. They can look at groups and organizations a person is associated with – and potentially make decisions based on political affiliations, faith, sexual preferences, even medical factors – if this information is indicated through the groups and organizations to which the candidate belongs.

Learn more about the erosion of civil rights in my blog Civil Rights Get Trampled in Internet Background Checks.

The damage doesn’t end there

It is not just would-be employers or college application review boards who can and do use this information. If someone posted a photo of you on a drinking binge five years ago, will it impact whether an auto insurance company accepts you, or quotes you a higher rate? Will it impact your medical insurance rate? How about your ability to get a car, school, or home loan? The answer is likely to be YES.

A reluctance to share your address, email, phone numbers, and other ‘locatable’ information doesn’t matter much if you’re willing to share your name, employer, and the like.

The study found that among the types of personal information shared, men and women are most likely to be happy to share their names (86% and 88%, respectively) and email addresses (55.2% and 42.4%, respectively). Yet unless you live off the grid, your name alone is probably enough to get your address and phone number – and sometimes your email address. It’s enough to discover whether you own or rent, whether you vote, whether you have a criminal record, and more. Compounding your risks, the facial recognition tools now in Facebook and Google+ mean that even your face in a photo may be enough to collect all this information.

Does this mean you should hop off the internet and hide? No. But it does mean that before sharing any information you should ask yourself: Who could see it? What could they do with it? Could it damage you, your child, or someone else in the future? If your information is already out there, you may want to work with websites to have any sensitive information removed.

Linda


New Online Safety Lesson: Using Twitter Wisely

February 3, 2012

The 10th installment in the lesson series I’m writing on behalf of iKeepSafe focuses on teens and Twitter use.

Teens are increasingly turning to Twitter as an alternative or addition to other social media platforms. Like any technology, it has its own language, culture… and risks. How are teens using Twitter, and how can they minimize privacy concerns? While you can make your Twitter account “private,” or even use a pseudonym, others may still be watching, including peers, school officials, parents, and even Homeland Security.

As we learn to integrate new technologies into our everyday lives, students and professionals alike grapple with the thorny questions of the boundaries surrounding freedom of speech, appropriate speech, and content censoring. Read on for a primer on Twitter-speak, and find out who’s Twittering… and who’s reading.

To see and use this lesson, the companion presentation, professional development materials, and parent tips, click here: Using Twitter Wisely

Linda


New Online Safety Lesson: What does data privacy mean to you?

January 30, 2012

The 9th installment in the lesson series I’m writing on behalf of iKeepSafe is timed to coincide with Data Privacy Day. This week, more than 40 countries will celebrate Data Privacy Day, a day designed to promote awareness of the many ways personal information is collected, stored, used, and shared, and to promote education about privacy practices that enable individuals to protect their personal information.

To view and use this lesson, the companion presentation, professional development materials, and parent tips, click here: What does data privacy mean to you?

Linda


When it Comes to Online Ad Tracking, You Can Opt out Any Time You’d Like – But Can You Ever Leave?

August 16, 2011

Even when users take steps to opt out of online tracking, many ad companies still track their activity, according to preliminary research findings by Stanford University’s Center for Internet and Society.

As Arvind Narayanan, postdoctoral fellow at the Center for Internet and Society, puts it: “A 1993 New Yorker cartoon famously proclaimed, ‘On the Internet, nobody knows you’re a dog.’ The Web is a very different place today; you now leave countless footprints online. You log into websites. You share stuff on social networks. You search for information about yourself and your friends, family, and colleagues. And yet, in the debate about online tracking, ad networks and tracking companies would have you believe we’re still in the early 90s — they regularly advance, and get away with, ‘anonymization’ or ‘we don’t collect Personally Identifiable Information’ as an answer to privacy concerns.

In the language of computer science, clickstreams — browsing histories that companies collect — are not anonymous at all; rather, they are pseudonymous. The latter term is not only more technically appropriate, it is much more reflective of the fact that at any point after the data has been collected, the tracking company might try to attach an identity to the pseudonym (unique ID) that your data is labeled with. Thus, identification of a user affects not only future tracking, but also retroactively affects the data that’s already been collected. Identification needs to happen only once, ever, per user.

Will tracking companies actually take steps to identify or deanonymize users? It’s hard to tell, but there are hints that this is already happening: for example, many companies claim to be able to link online and offline activity, which is impossible without identity.

Regardless, what I will show you is that if they’re not doing it, it’s not because there are any technical barriers. Essentially, then, the privacy assurance reduces to: ‘Trust us. We won’t misuse your browsing history.’”

I highly recommend you read his full article.
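To make Narayanan’s point concrete, here is a minimal sketch of why pseudonymous data is only one step away from identified data. This is my own illustration, not code from any real tracking company; the cookie ID, URLs, and email address are invented. Every page view is filed under the same cookie ID, so attaching a name to that ID just once labels the entire history, past and future.

from collections import defaultdict

clickstreams = defaultdict(list)   # cookie ID -> list of (timestamp, URL)
identities = {}                    # cookie ID -> real-world identity, once known

def record_pageview(cookie_id, timestamp, url):
    clickstreams[cookie_id].append((timestamp, url))

def identify(cookie_id, name):
    # Identification needs to happen only once, ever, per user.
    identities[cookie_id] = name

# A pseudonymous user browses for months (all values invented)...
record_pageview("ab12cd", "2011-03-01T09:14", "https://example.com/health/condition-x")
record_pageview("ab12cd", "2011-06-18T21:02", "https://example.com/jobs/openings")

# ...then logs in once somewhere the tracker can see the login.
identify("ab12cd", "jane.doe@example.com")

# Everything already collected is now attributable to a named person.
print(identities["ab12cd"], clickstreams["ab12cd"])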

Advertisers fund the internet – in exchange for personal information

Remember the dot-com bubble burst of 2000? It happened because internet companies built their content and services on one key assumption – that we, the consumers, would subscribe to use their services. There was just one fatal flaw – consumers wanted everything to be free. But free doesn’t pay the bills, let alone turn a profit, and internet companies either went bankrupt or changed their revenue model to ad-funded.

Reasonably, advertisers want a return on their investment for funding the internet, and their primary requirement – as with any advertising – is to be able to segment internet user demographics so they don’t waste money marketing shaving cream to toddlers.

Internet companies quickly learned that the more targeted the ads could be, the more advertisers were willing to pay for access to their users. From there it doesn’t take a leap to understand how we’ve come to a place where ads follow us, and behavioral advertising is the name of the game.

In theory you can opt out; in reality you’ll never know

A do-not-track feature has been added to both the Mozilla Firefox and Microsoft IE 9 browsers that supposedly allows users to check a box in their preferences indicating they do not wish to have their online purchases, browsing patterns, search strings, or personal information tracked. Once checked, any website the user visits receives notice of that preference.
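For the technically curious, here is a minimal sketch of what the do-not-track signal actually is. The tiny web application and the cookie name below are hypothetical, written only to illustrate the mechanism: the browser adds a “DNT: 1” header to each request, and whether the receiving site changes its behavior is entirely voluntary.

# The browser's only job is to send the header; honoring it is up to the site.
def app(environ, start_response):
    dnt = environ.get("HTTP_DNT") == "1"          # the user's stated preference
    headers = [("Content-Type", "text/plain")]
    if not dnt:
        # Nothing in the protocol stops a site from setting its tracking
        # cookie anyway; a non-compliant site simply ignores the header.
        headers.append(("Set-Cookie", "track_id=ab12cd; Max-Age=31536000"))
    start_response("200 OK", headers)
    return [b"Hello\n"]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, app).serve_forever()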

However, there is no law requiring companies to respect consumers’ do-not-track preferences, and according to Stanford’s research, few websites comply with users’ requests for privacy; they choose instead to continue tracking the user without their knowledge. They do so in at least five ways, as shown on Stanford’s website and paraphrased here:

1. The third party is sometimes a first party

Companies with the biggest reach in terms of third-party tracking, such as Google and Facebook, are often also companies that users have a first-party relationship with. When you visit these sites directly, you’re giving them your identity, and there is no technical barrier to them associating your identity with your clickstream collected in the third-party context.

2. Leakage of identifiers from first-party to third-party sites

In a paper published just a few months ago, Balachander Krishnamurthy, Konstantin Naryshkin and Craig Wills exposed the various ways in which users’ information can and does leak from first parties to third parties. Fully three-quarters of sites leaked sensitive information or user IDs. There are at least four mechanisms by which identity is leaked: an email address or user ID in the Referer header; potentially identifying demographic information (gender, ZIP, interests) in the Request-URI; identifiers in shared cookies resulting from “hidden third-party” servers; and a username or real name in the page title.
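A hedged sketch of the first of those mechanisms may help; the URLs and parameter names below are invented for illustration. When a first-party page puts the user’s email address or ID in its own URL, every third-party request fired from that page hands the same string to the tracker in the Referer header.

from urllib.parse import urlparse, parse_qs

# The page the user is viewing on the first-party site (invented URL):
first_party_url = "https://social.example.com/profile?user=jane.doe@example.com&zip=98052"

# The page embeds a tracker's pixel; by default the browser sends the page's
# URL as the Referer header of that third-party request.
third_party_request_headers = {"Referer": first_party_url}

# The tracker only has to parse a header it already receives:
leaked = parse_qs(urlparse(third_party_request_headers["Referer"]).query)
print(leaked)   # {'user': ['jane.doe@example.com'], 'zip': ['98052']}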

3. The third party buys your identity

Ever seen one of those “Win a free iPod!” surveys? The business model for many of these outfits, going by the euphemism “lead-generation sites,” is to collect and sell your personal information. Increasingly, these sites have ties with tracking companies.

When you reveal your identity to a survey site, there are two ways in which it could get associated with your browsing history. First, the survey site itself could have a significant third-party presence on other sites you visit. When you visit the survey site and sign up, they can simply associate that information with the clickstream they’ve already collected about you. Later on, they can also act as an identity provider to sites on which they have a third-party presence.

Alternatively, they could pass your identity on to trackers that are embedded in the survey site, allowing the tracker to link your identifying information with its cookie, and in turn associate it with your browsing history. In other words, the tracker has your browsing history, the survey site has your identity, and the two can be linked via the Referer header and other types of information leakage.

4. Hacks

A variety of browser and server-side bugs can be exploited to discover users’ social identities. The known bugs have all been fixed, but computer security is a never-ending process of finding and fixing bugs.

5. Deanonymization

So far I’ve talked about identifying a user when they interact with the third party directly or indirectly. However, if the mountain of deanonymization research that has accumulated in the last few years has shown us one thing, it is that the data itself can be deanonymized by correlating it with external information.

The logic is straightforward: in the course of a typical day, you might comment on a news article about your hometown, tweet a recipe from your favorite cooking site, and have a conversation on a friend’s blog. By these actions, you have established a public record of having visited these three specific URLs. How many other people do you expect will have visited all three, and at roughly the same times that you did? With a very high probability, no one else. This means that an algorithm combing through a database of anonymized clickstreams can easily match your clickstream to your identity. And that’s in a single day. Tracking logs usually stretch to months and years.
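A toy version of that matching logic, with invented data, shows how little it takes: if an “anonymized” clickstream and a person’s public footprint share even a handful of URLs at roughly the same times, the pseudonym and the person can be tied together.

def overlap(clickstream, public_footprint, max_skew_minutes=30):
    """Count publicly attributable visits that also appear in the clickstream."""
    hits = 0
    for url, minute in public_footprint:
        if any(u == url and abs(m - minute) <= max_skew_minutes
               for u, m in clickstream):
            hits += 1
    return hits

# Pseudonymous clickstreams keyed by tracking ID: (URL, minutes since midnight).
clickstreams = {
    "ab12cd": [("news.example.com/hometown-story", 540),
               ("cooking.example.com/recipe/42", 760),
               ("blog.example.net/friends-post", 1290)],
    "ef34gh": [("news.example.com/other-story", 600)],
}

# One person's public actions that day (a comment, a tweet, a blog reply).
public_footprint = [("news.example.com/hometown-story", 545),
                    ("cooking.example.com/recipe/42", 770),
                    ("blog.example.net/friends-post", 1295)]

best_match = max(clickstreams, key=lambda cid: overlap(clickstreams[cid], public_footprint))
print(best_match)   # "ab12cd" -- three co-occurring visits make the match near-certain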

Legislation pending

The unveiling of secret tracking has galvanized Congress, the FTC, and even the President. Bills have been proposed to create do-not-track lists, with industry compliance requirements, for all users and for minors. The European Union’s “right to be forgotten” model, which would give users the right to require companies to remove all of their information from websites, is also gaining favor.

If your data privacy matters to you – and it should – don’t remain silent. Let your elected officials know you support legislation that gives you the ultimate control over your information.

Linda


Largest Data Breaches – Seeing is Understanding

June 26, 2011

Check out FlowingData’s visual version of the largest data breaches of all time. The website and its content are the work of Nathan Yau, a Ph.D. candidate in statistics. In his own words: “I live and breathe data… I want to make data available and useful to those who aren’t necessarily data experts; I think visualization plays a major role in this.”

(To understand the visualization, it helps to know that each square = 1 million records, and that green = hacked, blue = lost, grey = stolen, and pink = fraud.)

Pulling the Sony breaches out separately, Yau illustrates an abysmal three-month mess.

Linda


If it’s Personal, or Controversial, Don’t Post it on Twitter

November 28, 2010

In an interview for Forbes.com, Linda Criddle, president of LOOKBOTHWAYS and the Safe Internet Alliance, outlined the most common mistakes consumers make when posting comments on Twitter.

To read the full article, click here. Read on for an excerpt of Linda’s comments.

Posting on Twitter requires even less time–and therefore, thought–than the average Facebook post. “People say, ‘I’m stuck in traffic,’ or ‘I’m at the mall, saw a great discount,'” says Criddle. “If I look at someone’s Twitter history over time, I’ll notice things like the fact that they tweet from the same Starbucks every morning. People give away their daily routines, and that allows me to impersonate you, or ‘coincidentally’ meet you.”

What to do when meeting people in person: Criddle advises Twitter friends meeting in person for the first time to insist upon doing so in a public place. Guarding emotions is important, too. In addition to allowing crooks to feign empathy and build trust with victims, letting people know how you feel can put you at risk for emotional abuse and cyber-bullying, she warns.

Poorly thought-out social media posts can also ruin careers and destroy reputations. “There was the Obama speechwriter who put up the picture of himself groping a cardboard cutout of Hillary Clinton,” says Criddle. Tweeters and other social network users may also encounter malicious situations where someone else is deliberately trying to tear another person down, as well as unwitting attacks, where a friend tweets something that gets you into hot water.

To read the full article, click here


Google’s WiFi Data Collection Larger than Previously Known

November 1, 2010

Google violated Canadian law when it collected personal information from unsecured WiFi networks while photographing buildings and homes as part of its Street View mapping service. “Our investigation shows that Google did capture personal information — and, in some cases, highly sensitive personal information such as complete e-mails. This incident was a serious violation of Canadians’ privacy rights,” said Canadian Privacy Commissioner Jennifer Stoddart in comments last week.

This story began to unfold last May, when Google admitted it had “collected only fragments of payload data” from unencrypted wireless networks. That news prompted a flurry of inquiries from privacy officials across the globe; external regulators have since inspected the data as part of their investigations, and in doing so discovered whole e-mails, URLs, and passwords.

According to Alan Eustace, senior vice president of engineering and research at Google, while most of the data collection was “fragmentary, in some instances entire e-mails and URLs were captured, as well as passwords.” He added that the company is “mortified” by what happened and wants “to delete this data as soon as possible.”

Commissioner Stoddart asked Google to do four things before she would consider the matter closed: put in place a governance model to ensure that privacy is protected when new products are launched; enhance privacy compliance training for all employees; designate an individual responsible for privacy issues; and delete the Canadian data.

In response to these concerns, Eustace announced that Google has put several changes in place since discovering the problem.

  1. Google has appointed Alma Whitten to serve as its director of privacy across privacy and engineering. “Her focus will be to ensure that we build effective privacy controls into our products and internal practices. Alma is an internationally recognized expert in the computer science field of privacy and security. She has been our engineering lead on privacy for the last two years, and we will significantly increase the number of engineers and product managers working with her in this new role.”
  2. Google will enhance its core privacy training for engineers and other groups, such as product management and its legal department, “with a particular focus on the responsible collection, use and handling of data.” Starting in December, all employees will also be required to undertake a new information security awareness program, which will include clear guidance on both security and privacy.
  3. Google said it will improve its existing review system. Going forward, “every engineering project leader will be required to maintain a privacy design document for each initiative they are working on” that will detail how user information is handled; this document will be reviewed regularly by managers and an independent audit team.

“We believe these changes will significantly improve our internal practices, and we look forward to seeing the innovative new security and privacy features that Alma and her team develop,” Eustace concluded.

The furor is directed at an add-on project run from Google’s Street View cars. In addition to taking photos for the Street View project – which itself has come under heavy international criticism for violating consumers’ privacy – the cars were collecting information on wireless networks, including their MAC addresses, to use in building a database of those networks in the future.
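To see why a database of home routers is valuable, here is a simplified sketch of how WiFi-based positioning works in general. This is an illustration of the technique, not Google’s actual system, and the MAC addresses and coordinates are made up: each observation pairs a network’s MAC address with the car’s GPS position when the beacon was heard, and a device later asks where it is by reporting the MAC addresses it can currently see.

from collections import defaultdict

observations = defaultdict(list)   # MAC address -> list of (lat, lon) sightings

def record_beacon(mac, lat, lon):
    observations[mac].append((lat, lon))

def estimate_location(visible_macs):
    """Average the known positions of whatever networks the device can see."""
    points = [p for mac in visible_macs for p in observations.get(mac, [])]
    if not points:
        return None
    return (sum(lat for lat, _ in points) / len(points),
            sum(lon for _, lon in points) / len(points))

# A mapping car drives past two home routers (coordinates are made up):
record_beacon("00:11:22:33:44:55", 45.4215, -75.6972)
record_beacon("66:77:88:99:AA:BB", 45.4216, -75.6970)

# Later, any device that reports seeing those networks gets a position back.
print(estimate_location(["00:11:22:33:44:55", "66:77:88:99:AA:BB"]))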

According to Google, an engineer’s experimental code was inadvertently included in the software used to gather the data. “He [the engineer] thought it might be useful to Google in the future and that this type of collection would be appropriate.” That resulted in the gathering of “payload data” from personal, unsecured wireless networks that included complete e-mails, e-mail addresses, user names and passwords, names and residential telephone numbers and addresses, health details, and other personal information.

Excerpts from the Commissioner’s Report:

The engineer involved included lines to the code that allowed for the collection of payload data. He thought it might be useful to Google in the future and that this type of collection would be appropriate.

This code was later used by Google when it decided to launch a particular location-based service. The service relies on a variety of signals (such as GPS, the location of cell towers and the location of WiFi access points) to provide the user with a location. Google installed antennas and appropriate software (including Kismet, an open-source application) on its Google Street View cars in order to collect publicly broadcast WiFi radio signals within the range of the cars while they travelled through an area. These signals are then processed to identify the WiFi networks (using their MAC address) and to map their approximate location (using the GPS co-ordinates of the car when the signal was received). This information on the identity of WiFi networks and their approximate location then populates the Google location-based services database.

Google’s future plans for its location-based services

Google still intends to offer location-based services, but does not intend to resume collection of WiFi data through its Street View cars. Collection is discontinued and Google has no plans to resume it.

Google does not intend to contract out to a third party the collection of WiFi data.

Google intends to rely on its users’ handsets to collect the information on the location of WiFi networks that it needs for its location-based services database.  The improvements in smart-phone technology in the past few years have allowed Google to obtain the data it needs for this purpose from the handsets themselves.

Although it has no tracking tool to keep records of a customer’s locations (and does not intend to create one), Google acknowledges that it does need to examine the potential privacy concerns of this method of collection.

Stoddart gave Google until Feb. 1, 2011 to comply with those requirements, but resolving Canada’s concerns may be just the tip of the iceberg. Investigations are still underway by privacy commissioners worldwide, and Spain’s Data Protection Agency has just announced plans to fine Google between $84,000 and $840,000 per offense for the WiFi data Google collected with its Street View cars. In the U.S. there are at least three lawsuits seeking class action status over the stealth collection of personal information from home networks.

Why this matters to you

If you have – or had – a wireless network that was not password protected, information from your computer(s) may have been collected.  Google has committed to destroying all the information, but it’s a serious breach of your privacy that information was collected without your knowledge or permission in the first place.

You may also feel that the collection and public display of images of your home is a breach of your privacy. If you want these removed see my blog How to Remove Images of Your Home from Google’s Street View. NOTE: you will have to check back periodically to be sure that any images you requested be deleted remain deleted, as I have found these can reappear.

You should also be concerned about Google’s future plans to collect information about WiFi networks from your smartphone(s). How this is done will be critical to your safety and privacy. In the report, Google acknowledges that it needs to examine the potential privacy concerns of this method of collection. It remains to be seen what that examination will entail, and whether Google will inform users in advance and allow you to opt out if this is not something you want collected from your phone.

Linda