Why would Facebook want to enroll children in a service built with little regard for adult safety/privacy/security? For the money.

June 7, 2012

A floundering Facebook is under increasing pressure to shore up its revenue and reverse flat per-user minutes. So it was no surprise to see the Wall Street Journal report that Facebook is developing technology to allow children younger than 13 to use the social-networking service under parental supervision, in spite of the company’s abysmal track record in protecting older consumers.

Yet Facebook is already in deep trouble: the company consistently encroaches on consumer privacy, the service is a hotbed for malware and scams, and its advertising is not suitable for younger users.

The issues around Facebook’s interest in onboarding users under 13 fall into three categories:

  • Facebook’s predatory privacy practices
  • Facebook’s financial woes
  • There is a need for a responsible social network where children, adults and commercial content can mix, but Facebook isn’t it

Facebook’s Predatory Privacy Practices

From its inception, Facebook has shown a deliberate disregard for consumer privacy, trust, and safety. This attitude was evidenced by founder Mark Zuckerberg’s early IM comments, and has continued ever since through the company’s privacy policy choices and its blatant deception and exploitation of users’ information. Consider the following points:

  • Consumers’ feelings of betrayal run so high that 70% of Facebook users say they do not trust Facebook with their personal information.
  • The FTC found Facebook’s assault on consumer privacy so egregious that last fall (2011) it charged Facebook with deceiving consumers by failing to keep its privacy promises. The FTC complaint lists a number of instances in which Facebook allegedly made promises that it did not keep:
    • In December 2009, Facebook changed its website so certain information that users may have designated as private – such as their Friends List – was made public. They didn’t warn users that this change was coming, or get their approval in advance.
    • Facebook represented that third-party apps that users installed would have access only to the user information they needed to operate. In fact, the apps could access nearly all of users’ personal data – data the apps didn’t need.
    • Facebook told users they could restrict sharing of data to limited audiences – for example with “Friends Only.” In fact, selecting “Friends Only” did not prevent their information from being shared with third-party applications their friends used.
    • Facebook had a “Verified Apps” program & claimed it certified the security of participating apps. It didn’t.
    • Facebook promised users that it would not share their personal information with advertisers. It did.
    • Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
    • Facebook claimed that it complied with the U.S.- EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn’t.

The settlement bars Facebook from making any further deceptive privacy claims, requires that the company get consumers’ approval before it changes the way it shares their data, and requires that it obtain periodic assessments of its privacy practices by independent, third-party auditors for the next 20 years.

Facebook also failed from the beginning to build in strong security, monitoring, or abuse-tracking technologies. The company has attempted to patch these issues with varying degrees of seriousness and success, as evidenced by the constant circulation of malware on the service, breaches of consumers’ information, consumers’ inability to reach a human for help when abuse occurs, and so on.

Facebook’s financial woes

It’s always about the money. With over 900 million users, Facebook is still a force to be reckoned with, but its trajectory is starting to look a lot like the peak before the cliff that MySpace faced.

Facebook’s financial problems have been building for some time, but the company’s IPO brought financial scrutiny to the forefront and highlighted its need to infuse new blood into the service – even if that means exposing children to a service that already poses clear risks to adults. Here’s a quick recap of Facebook’s financial failings:

  • Facebook stock is in free fall, closing at $26.90 on June 4th as this article was written. That’s down more than $11, or 29%, in the first 17 days of trading – and the stock continued to fall in after-hours trading (see the quick arithmetic check after this list).
  • The IPO valuation fiasco is far from over; Zuckerberg is now being sued for selling more than $1 billion worth of shares just before stock prices plummeted. The suit says Facebook “knew there was not enough advertising revenue to support a $38 stock valuation but hid that revenue information in order to push up the share price”.
  • In April, Facebook’s payments revenue went flat, according to Business Insider. After growing a consistent 20% quarter over quarter, in the first quarter of this year “revenue from payments and other fees [from games and partners] actually fell slightly, according to its latest filing with the Securities and Exchange Commission.”

  • A new Reuters/Ipsos poll shows that 4 out of 5 Facebook users have never bought a product or service as a result of ads or comments on the site, highlighting Facebook’s inability to successfully market to users.

  • The amount of time users spend on Facebook has also gone flat, according to comScore – a fact also highlighted by the Reuters/Ipsos poll, which found that 34% of Facebook users surveyed were spending less time on the website than six months ago, whereas only 20% were spending more.
  • Advertisers are bailing. A nasty reality check came from General Motors just days before the company’s IPO, as GM pulled its ad campaigns saying “their paid ads had little effect on customers”, according to the Wall Street Journal.

  • An article on HuffingtonPost.com reports that “more than 50 percent of Facebook users say they never click on Facebook’s sponsored ads, according to a recent Associated Press-CNBC poll. In addition, only 12% of respondents say they feel comfortable making purchases over Facebook, begging the question of how the social network can be effectively monetized.”
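As a quick sanity check on the stock-drop figure in the first bullet above, here is a minimal sketch of the arithmetic. It assumes only Facebook’s $38 IPO price and the $26.90 June 4th close cited above; the variable names are illustrative.

    # Rough check of the drop from the $38 IPO price to the June 4th close.
    ipo_price = 38.00
    close_june_4 = 26.90
    drop = ipo_price - close_june_4
    print(f"Down ${drop:.2f}, or {drop / ipo_price:.0%}")  # Down $11.10, or 29%

That works out to the “more than $11, or 29%” decline described above.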

It’s easy to see why investors are angry, why lawsuits have been filed, and why the government is investigating the debacle.

When 54% of consumers distrust the safety of making purchases through Facebook, 70% of consumers say they do not trust Facebook with their personal information, and reports are surfacing that consumer distrust in Facebook deepened as a result of the issues around the IPO, the company is looking more tarnished than ever.

As Nicholas Thompson wrote in The New Yorker, “Facebook, more than most other companies, needs to worry deeply about its public perception. It needs to be seen as trustworthy and, above all, cool. Mismanaging an I.P.O. isn’t cool, neither is misleading shareholders. Government investigations of you aren’t cool either.” He added, “The reputation of Facebook’s management team has also been deeply tarnished, particularly by the accusations that it wasn’t entirely open to investors about declining growth in its advertising business.”

There is a need for a responsible social network where children, adults and commercial content can mix, but Facebook isn’t it

Facebook has identified a real gap. There is a legitimate need for a social networking platform where kids can interact with adult family members and other trusted older individuals as well as commercial entities.

This need is evidenced by the 5.6 million underage youth still using Facebook today, with or without their parents’ permission, for lack of a more appropriate alternative.

This 5.6 million underage user number is noteworthy for two reasons:

A) It shows that a significant number of children are using the site.

B) More importantly, it represents a 25.3% reduction in underage users over the past year, when Consumer Reports found 7.5 million underage users on the site. One could reasonably assume that the dramatic drop in underage use of Facebook came about precisely because Consumer Reports published its data and alarmed parents stepped in to block their children’s use.
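Here is a minimal back-of-the-envelope sketch of that 25.3% figure, assuming only the two counts cited above (7.5 million underage users found by Consumer Reports last year and 5.6 million today); the variable names are illustrative.

    # Rough check of the year-over-year drop in underage Facebook users,
    # using the figures cited above (in millions of users).
    users_2011 = 7.5
    users_2012 = 5.6
    reduction = (users_2011 - users_2012) / users_2011
    print(f"Reduction: {reduction:.1%}")  # prints "Reduction: 25.3%"

In other words, roughly one in four underage users disappeared from the site in a single year.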

That 25.3% reduction strikes at the very heart of two of the key tenets held by Facebook and its advocates: 1) since so many underage kids are already using Facebook, it would be safer for them if Facebook opened up the service to underage users and gave parents some access and controls to manage their use, and 2) parents want their children to be able to use Facebook.

To be clear, of the 5.6 million children still on the service, many have parents who helped them get onto Facebook. According to Dr. Danah Boyd, one of the researchers studying the issue of parents helping children get on the site, the reason parents told her they allowed their children on Facebook was because parents “want their kids to have access to public life and, today, what public life means is participating even in commercial social media sites.”

Boyd added that the parents helping their kids with access “are not saying get on the sites and then walk away. These are parents who have their computers in the living room, are having conversations with their kids, [and are] often helping them create their accounts to talk to grandma.”

Note that Boyd’s findings don’t say parents want their children on Facebook. The findings say parents want their children to have access to public life. Given the dearth of alternative options, they allow their kids on Facebook with considerable supervision. Why with considerable supervision? Because the site has inherent safety issues for users of all ages, and those safety issues would be greater for kids.

To date, Facebook has chosen not to cater to children under 13 because doing so requires complying with the Children’s Online Privacy Protection Act (COPPA), which Facebook advocate Larry Magid suggests “can be difficult and expensive” – yet hundreds of companies that take children’s privacy seriously comply with those requirements today.

It is more than a little suspicious that Facebook made public its consideration of opening the service to children just after it doubled its lobbying budget and just before the upcoming review of COPPA requirements – where the company will have the opportunity to press for weaker COPPA regulations.

Would it be safer for kids using the site to have additional protections such as Facebook suggests implementing?  Yes. But that’s the wrong question.  It is also safer to give kids cigarettes that have filters than to let kids sneak cigarettes without filters.

The real question is how do we fill the existing gap that compels kids to get onto a service that was never designed to protect them?

We need a service that has all the safety, privacy and security protections of the best children’s sites that also allows access to the broader public, news, events, and even appropriate advertising.  That service could easily interface with aspects of Facebook, yet leverage reputations, content filtering and monitoring, and human moderators to provide an environment that Facebook does not have, nor has shown any interest in creating.

This service is eminently buildable with technologies available today, but Facebook’s track record shows it is not the company to entrust with building an online service both capable of and committed to protecting our children’s online safety, privacy, and security.

Facebook’s proposal is all about their financial needs, not the needs of children.

Linda


The Digital Marketing Mess – And Insight into Who’s Looking to Profit from Your Information

June 2, 2012

A new infographic showing the social media marketing landscape was unveiled by Buddy Media this month and though the intent was to show how complicated the marketing world has become, it is also an excellent representation of some of the companies fighting to make money off of your information. It’s staggering.

While not every company listed here is collecting or sharing your information – like the URL shorteners – most of these companies would not exist if it were not for the personal information shared online.

Make no mistake, it’s all about the money. You and your information are commodities driving a multi-billion-dollar ecosystem. There is absolutely nothing wrong with this business model – you gain tremendous benefits from the internet and all the tools and services it provides – but it is critical that every user understands just how far their information may spread, and why managing what they share, and with whom they share it, matters.

Linda


STOP THE TEXTS. STOP THE WRECKS. An Important New Campaign

May 1, 2012

Today the National Highway Traffic Safety Administration and the Ad Council have launched a new campaign to discourage teens – and all drivers – from texting while driving. This campaign, and those like it, are vital elements in reducing the number of tragic deaths and injuries caused by distracted drivers.

However, campaigns alone will not solve the problem. Stiffer fines, laws, and penalties will not alone solve the problem. What we need is a cultural shift making texting while driving an unacceptable behavior, and for that to happen every single person has a clear role to play. Please play your role.

Here are some of the resources made available to consumers through this STOP THE TEXTS. STOP THE WRECKS. campaign:

  • Fact sheet – with 30 sobering facts, here’s a sample
  • Survey results
  • Videos – 4 videos that help illustrate how quickly distraction leads to disaster
  • Infographic – see below

This campaign has partnered with the U.S. Department of Transportation, which created the excellent Distraction.gov materials.

Also check out the following blogs:

Linda


Men More Reckless with Personal Information Online

February 22, 2012

There is still widespread naiveté about the value of personal information and the way data is aggregated, according to a new survey by Usamp.

Men and women are quite willing to share personal information about relationships, education, employment, brand preferences and political and religious affiliations.

But when it comes to information like email or physical address, phone numbers, or their location, women put a higher premium on physical safety and are markedly more guarded than their male counterparts.

What users need to gain a better understanding of is the very clear risk all of this information sharing represents, and how, starting from the information women were willing to share, the rest of their information can be fairly easily exposed.

Why all that information matters

Looking at the types of information both men and women were fairly willing to share, it is the unintended use of that information that places you at risk.

For example, it was through hard fought battles in the 20th century that we gained a number of civil rights designed to protect every citizen from discrimination based on gender, religion, race, color, national origin, age, marital or family status, physical or mental disability, sexual orientation, political affiliation, financial status, and more.

These prejudices remain, and by sharing this information freely online, users enable the very types of discrimination that civil rights were established to prohibit. And they do it in a way that never places an employer or company at legal risk: a candidate will never know why they weren’t considered; they won’t even make it to the interview.

To understand how this works, consider research Microsoft conducted in January 2010 to expand understanding of the role of online information and reputation.

One aspect of the research looked specifically at how recruiters and HR professionals use online information in their candidate screening process.

As that research shows, would-be employers can now make decisions based on a number of factors long before ever inviting a candidate in for an interview – the stage at which some system of oversight could possibly identify discriminatory practices against selected candidates.

With this type of undetectable prescreening, employers can make decisions based on how people look in their photos – weight, age, skin color, health, prettiness factor, style, tattoos, and economic indicators. They can look at comments made by the candidate, friends or family members that they would never have had the right to access pre-internet. They can look at groups and organizations a person is associated with – and potentially make decisions based on political affiliations, faith, sexual preferences, even medical factors – if this information is indicated through the groups and organizations to which the candidate belongs.

Learn more about the erosion of civil rights in my blog Civil Rights Get Trampled in Internet Background Checks.

The damage doesn’t end there

It is not just would-be employers or college application review boards who can and do use this information. If 5 years ago someone posted a photo of you on a drinking binge, will it impact whether an auto insurance company accepts you, or quotes you a higher rate? Will it impact your medical insurance rate? How about your ability to get a car, school, or home loan? The answer is likely to be YES.

A reluctance to share your address, email, phone number, and other ‘locatable’ information doesn’t matter much if you’re willing to share your name, employer, and similar details.

The study found that among the types of personal information shared, men and women are most likely to be happy to share their names (86% and 88%, respectively) and email addresses (55.2% and 42.4%, respectively). Yet unless you live off the grid, your name alone is probably enough to get your address and phone number – and sometimes your email address. It’s enough to discover whether you own or rent, whether you vote, whether you have a criminal record, and more. Compounding the risk, the facial recognition tools now in Facebook and Google+ mean that even your face in a photo may be enough to collect all this information.

Does this mean you should hop off the internet and hide? No. But it does mean that before sharing any information you should ask yourself: Who could see it? What could they do with it? Will it damage you, your child, or someone else in the future? If your information is already out there, you may want to work with websites to have any sensitive information removed.

Linda


Giving Technology This Season? Use McAfee’s 10 Tips to Keeping Devices Safe

December 21, 2011

Tech items are wish-list toppers again this year, and if you’re among the millions planning on giving devices, don’t forget to include the safety, privacy and security tools and education that are needed to ensure the recipient is protected. This festive tip sheet from McAfee helps identify areas to think about.

Linda


Who has Primary Responsibility for Internet Safety, Security & Privacy?

October 18, 2011

If we could only figure out the answer to this question we could sue the irresponsible company, government entity, person, or standards body and get on with things – or not.

Unfortunately, the ugly truth is that we all share in the responsibility of protecting ourselves and others online – and like any project undertaken by committee things can get messed up.

There are five key stakeholder groups when it comes to protecting the internet: Industry companies & organizations; Governments & regulators; Law enforcement & oversight boards; Individuals & families; and Schools & other educational resources. Here is an overview of who should be responsible for which safety elements:

Government & regulators have primary responsibility to ensure internet services aren’t built without proper safety, security, and privacy impact evaluations. Government is responsible for ensuring clear regulations are in place and for tightly monitoring products that impact consumers’ daily lives. It is the role of government to ensure these products comply with baseline safety requirements, and this responsibility must extend to internet products and services, particularly since so many internet companies have demonstrated a failure to design, test, and implement for safety, security, and privacy.

Society has also tasked government with ensuring the dissemination of public service messages yet much of the current internet safety, security and privacy messaging fails to provide useful, actionable information. The result is that a high percentage of the population remains unaware of the safeguards they need to have in place to be safer online.

Government & regulators also have the primary responsibility of protecting consumer data. For most consumers, information posted and exposed by federal, state, county, and local government agencies represents their greatest risk of becoming a victim of identity theft.

There is a world of difference between requiring governments to be transparent in their actions – often called sunshine laws or freedom-of-information legislation – to guarantee access to data held by the government, and the wholesale exploitation of consumers’ information by posting birth, marriage, death, property, power of attorney, voter, and criminal records online, where individual criminals, would-be stalkers, and wholesale criminal organizations can leverage the data in ways that threaten the safety, security, privacy, and financial stability of every man, woman, and child in the country.

While I support “right-to-know” laws, these need to focus on government actions and stop at the door of private individuals.

Companies have primary responsibility when they provide consumers access to products or services that can hurtle them through cyberspace at warp speed while collecting, trading, and selling consumer data. Unfortunately, feeding the bottom line wins out over protecting consumer interests in most cases, and companies simply provide consumers access to services and urge them to go have fun, while making it nearly impossible to really understand the safety, privacy, and security tradeoffs they’ve just made.

Companies have the primary responsibility to enforce their codes of conduct and ensure users have a reasonable level of safety and control over their data destiny. We hold amusement parks responsible for negligent conditions that allow injuries to occur; it is reasonable to apply the same standard to ‘virtual’ amusement parks. In their own online environments, companies must be the first line of consumer defense.

Companies have primary responsibility to post notices when a product is about to be expanded and to inform consumers about changes that will affect their safety and privacy. As companies rush to add great new features, they too often cut corners. Being first with a feature, or a fast follower, requires tradeoffs, and all too often the first thing cut and the last piece reluctantly added are the safety, security, and privacy elements that specifically help users manage their exposure.

Service providers will continue to innovate, and this is good for everyone. However, consumers have the right to be informed about each new feature that affects their exposure to risk, and to be able to determine whether the risk potential is appropriate for themselves and their families. Automatic ‘upgrades’ without notification can bear a strong resemblance to ‘bait and switch’. The Internet industry has for years promoted self-regulation of online tools and services, but it has largely failed to deliver adequate safeguards for consumers.

Here is a standard by which companies should be measured when considering whether they have stepped up to their responsibilities:

Consumer Internet Safety and Privacy Rights – A Standard for Respectful Companies

ALL Internet users have the expectation of a safe Internet experience, and respectful companies strive to provide quality safety and privacy options that are easily discovered and used by consumers.  Your safety and privacy, as well as the safety and privacy of your family on the Internet should be core elements of online product and service design.

In a nutshell, online consumers should demand these rights:

  1. Establishing safety and privacy settings should be an element of the registration process, or of the process of activating a specific feature. This includes informing you, in easily understood language, about the potential consequences of your choices. This allows, and requires, you to make your own choices, rather than being pushed into hidden default settings.
  2. During the registration or activation process, articles of the terms and conditions, and privacy policy, that might affect your privacy or safety, or that of a minor in your care, should be presented to you in easy to understand language, not in a long, complicated legal document in small font.
  3. You should expect complete, easily understood information and age appropriate recommendations about every safety and privacy feature in a product or service.
  4. You should expect to easily report abuse of the products or abuse through the products of you or someone in your care.
  5. You should expect a notice or alert if a significant safety or privacy risk is discovered in an online product or service you or someone in your care is using.
  6. The provider needs to publish, on a regular basis, statistics demonstrating how well the company enforces its policies. Such statistics should include: the number and types of abuse reports, the number of investigations conducted, and the number and type of corrective actions taken by the provider.
  7. When services or products are upgraded, you have the right to be informed of new features or changes to existing features and their impact on your – or your child’s – safety or privacy in advance of the rollout.
  8. When the terms of use or privacy policy of any provider are about to change, you have the right to be informed in advance of the changes and their impact on your – or your child’s – safety and privacy.
  9. When a provider informs you of changes to their features, privacy policy, or terms and conditions, they should provide you with a clearly discoverable way to either opt out of or block the change, or to terminate your account.
  10. When terminating an account, your provider should enable you to remove permanently and completely all of your personal information, posts, photos, and any other personal content you may have provided or uploaded, or that has been collected by the provider about you.

Law enforcement has primary responsibility to monitor society’s safety, prevent crime, and bring to justice those who break the law. Yet this is a tall order when adequate laws and regulations to facilitate enforcement are missing, when adequate safety features weren’t built into products to minimize the potential for exploitation, and when there has been a critical failure to allocate the funding, training, and resources law enforcement needs in order to provide the level of safety we expect.

Crime has always enjoyed better funding than law enforcement, but without assurances of basic safety the public will not be able to fully realize the tremendous opportunities the Internet has to offer – and criminals will run rampant.

Schools have primary responsibility for teaching youth and adults the tools and skills they need to be successful members of society. Mastering the Internet – and the safety, security, and privacy skills needed to use it successfully – is a critical life skill. But no one has taught teachers how to teach Internet safety, or provided a solid curriculum for classrooms. While on the one hand we seem flooded with ‘safety information’, there is a shortage of factual, practical, flexible, and free information consumers can act on. To address this issue, the LOOKBOTHWAYS Foundation has created the NetSkills4Life curriculum. The first 4 lessons of the full K-12 interactive, online, and FREE curriculum are available to the public now, and more lessons are being developed as quickly as possible.

Families have the primary responsibility of teaching their children how to become honest, ethical and capable adults. In today’s world that includes teaching our children to be honest, ethical and capable online. While this is a unique challenge that parents of previous generations have not had to master, it’s time to suck it up and learn how to pass these skills on to our children.

Technology advances, and a parent’s job is to keep up. Did parents whine when cars were created and they had to teach their kids to drive and understand traffic safety? What about when phones were invented? Did parents throw up their hands and give up? The internet has been a critical part of society for at least 10 years now, so step up and learn; you don’t have to be a techspert (technical expert) to successfully help your children master the tools and responsibilities they need to be successful.

Parents have the responsibility to say YES to their children’s online activities. Far too many parents (and schools) take the kneejerk ‘no’ route, and this is perhaps the worst possible choice. Failing to allow youth to learn to use the internet sets them up to fail when they finally get out from under their parents’ reach. Or it forces youth to sneak behind their parents’ backs and use the internet without the support and guidance of a parent.

Instead, parents need to teach the skills and social responsibilities required to use new online tools, and when youth have demonstrated they have mastered both the skills and the responsibilities, they should be allowed to use the services that are appropriate for them. This also means that parents have the responsibility to respect the age restrictions placed on sites, and to teach their children to respect these age boundaries.

Individuals have the primary responsibility for their own safety and ethical use – certainly from the time they reach adulthood. Childhood is a transitional phase in which children gain more responsibility as they show they can master situations. For example, while a 16-year-old may not be ready to take full responsibility for their own online security or privacy, they are ready to be held fully responsible for their online behavior towards others.

In spite of being able to identify the responsibilities of all these stakeholder groups, the internet has not become a safer place.

What’s missing? Commitment. Each stakeholder group must become more committed and invest more in Internet safety, security, and privacy, and in creating a positive online environment. Beyond that commitment, each stakeholder group must deliver on three key action areas: providing education, creating safer products, services, and online infrastructure, and enforcing the safety, security, privacy, and respect of everyone online. This must happen in a far more coordinated way than is being employed today.

Integration of initiatives is complicated, but the level of collaboration required is not new. We’ve done it in other areas like road safety, drug safety, and public health. It is past time that we put the same level of collaboration in place online.

Without synchronized efforts by all stakeholder groups the web of safety will continue to have gaps that far too many consumers of all ages will fall through.

Seen as a table, responsibilities look like this:

Linda


Part 4: McAfee Threat Predictions for 2011 – Apple: No longer flying under the radar

January 16, 2011

This is the fourth installment of my series covering McAfee’s Threat Predictions for 2011. To make the predictions for 2011 more digestible, I’ve broken each area out to show McAfee’s drilldown on the risk, and what the risk means to you. Click here to read the first, second, and third segments.

From McAfee Threat Report – Apple: No longer flying under the radar

Historically, the Mac OS platform has remained relatively unscathed by malicious attackers, but McAfee Labs warns that Mac-targeted malware will continue to increase in sophistication in 2011. The popularity of iPads and iPhones in business environments, combined with the lack of user understanding of proper security for these devices, will increase the risk for data and identity exposure, and will make Apple botnets and Trojans a common occurrence.

What this means to you

For Apple lovers, the Mac OS and Apple devices’ underdog status relative to PCs and the Windows OS long served as a hardy defense against criminal exploits – criminals target the largest possible segment for the largest possible return.

But with the Mac OS making stronger inroads, and the advent and mass adoption of iPhones and iPads, Apple is facing new threats – much like those the general mobile market is now facing. (See Part 3: McAfee Threat Predictions for 2011 – Mobile: Usage is rising in the workplace, and so will attacks.) So it now appears that assuming you’re safe from malware on Apple devices is no longer a safe bet.

To gain some insight into why criminals are taking an interest in Apple, consider the company’s sales results for fiscal year 2010 (ended September 25, 2010). In just the past three years, Apple has sold 33.7 million computers and 72.5 million iPhones, and iPad sales are soaring. Add to that the over 300,000 applications in the Apple App Store, and the potential for exploitation becomes even more interesting. (To learn more about threats to the iPhone, see Researcher warns of risks from rogue iPhone apps.)

Apple users are likely to need the same advice PC users have been given for years: protect your devices, only download apps from trusted and tested sources, and leverage Safari’s built-in fraudulent-site and malware warnings to help avoid and block malware.

Linda