Internet anonymity is a critical element in allowing people to retain their privacy for a wide spectrum of legitimate and honorable reasons.
Unfortunately, anonymity is also used as a blanket covering for cowards who want to lash out while hiding from their actions: the racists, sexists, ageists, the religiously intolerant, the homophobes, the cyberbullies, cyberstalkers, cyber trolls, and every other miserable crank who wants to be mean without facing the consequences.
The Washington Post just posted an excellent article, written by Jesse Washington of the Associated Press and titled Racist messages pose quandary for mainstream sites, that tackles the issues online news sites and other web properties face when allowing readers to post comments on their sites.
In the article, Washington dives straight into the issues:
Do these [racist] comments reflect a reversal of racial progress? Is that progress an illusion while racism thrives underground? What kind of harm are these statements doing? Could there be any value in such venting? And what, if anything, should a free society do about it?
“We’ve seen comments that people would not make in the public square or any type of civic discussion, maybe even within their own families,” said Dennis Ryerson, editor of The Indianapolis Star. “There is no question in my mind that the process, because it’s largely anonymous, enables people who would never speak up on Main Street to communicate their thoughts.”
Hateful sentiments have unfortunately always been present in our society; what the Internet has done is provide a platform and a megaphone for these views to be expressed publicly.
Linda Chavez, chairman of the conservative Center for Equal Opportunity, says racist comments come from a “very small but often vocal minority of people”…. But she does see a destructive aspect: “It may actually increase the percentage who will feel comfortable expressing these views. Social pressure is important.”
Providing comments sections for readers is intended to increase engagement with the websites, build a sense of community, expand discussions, and bring in higher advertising dollars. But the cost in terms of civility can be steep, and the cost of hiring moderators to filter out hate comments can be high.
“It astonishes me that they [companies with websites] allow such blatant expressions,” said Robert Steele, a journalism scholar at DePauw University and The Poynter Institute. “Even if it’s legitimate to try and draw viewers to sites, is it legitimate to allow individuals who are swinging a sharp ax, and often doing so with a hood over their heads in anonymous fashion, to have this forum that can not only create harm but breed hatred? I recognize the value of citizen dialogue, but when the comments are poisonous … you have to go back to the issue of why you would allow the dialogue.”
Echoing this sentiment, Herb Strentz, a retired journalism professor and dean at Drake University in Des Moines said, “For me, all the problems of online anonymity and comments outweigh any imagined benefits. If people want to contribute thoughtful things, they should be willing to stand up for them and be quoted.”
According to the Southern Poverty Law Center, the number of hate groups in America has more than doubled in the last 10 years, totaling 932 identified hate groups in 2009. And Anti-Defamation League civil rights director Deborah Lauter said there are thousands of hate websites, some with tens of thousands of viewers each month.
Tackling Hatred Online
The most common ‘solution’ proposed to stop hate content on news sites and other web properties is to require users to authenticate themselves before posting content. This is actually a fairly cumbersome and difficult task. Exactly how would a website do this? Using a credit card? A driver’s license? While ‘good’ people can show their credentials, they may be reluctant to relinquish personally sensitive information – I would be. For ‘bad’ people, getting a fake ID, or using someone else’s ID, isn’t hard to do.
This solution also places the websites and companies in the position of needing to validate and store this sensitive information, a job they should be reluctant to take on.
This leaves us with three primary methods for tackling expressions of hatred online:
- Teach tolerance and acceptance to reduce the audience for hate sites and hate content – an ongoing effort that to date has yielded mixed results, but an effort that must continue nevertheless.
- Filter consumer content, ideally before it is posted to news sites and other web properties, but at a minimum shortly after posting. This is a very expensive proposition, but critical to maintaining the reputation of websites.
- Deploy a system that rates users by reputation, similar to eBay’s feedback system or to how credit bureaus work. This allows users to remain anonymous because it doesn’t care who the user is; it cares how the user behaves.
This is a scenario where I’ll make a rare exception and talk about our own product, because our ReputationShare service is designed for just this type of situation.
ReputationShare places privacy first. At no time does our service know who a user is; we simply know – and share with participating companies – how that user behaves online.
This service enables web services to instantly know the reputation of someone coming to their site – without that individual identifying themselves by name or other personally identifiable information. With this information, websites can determine how they want to interact with that user. They may choose to welcome a user with a sterling online reputation with open arms and extra bonuses; welcome a user with an OK reputation with no reservations (but no extras); or welcome a user with a history of disrespectful, racist, bullying, or other problem behavior but restrict their ability to do things like post comments until they have kept their nose clean for a while.
This service is totally transparent to users: they can see their reputation score at any time, and they can challenge any negative (or positive, but we’ve never seen that happen!) score by taking the issue up with the company that gave it to them. Essentially, ReputationShare works like a credit bureau with one key difference – we don’t know who you are, just how you behave.
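ReputationShare’s internal workings aren’t public, so the following is only a minimal Python sketch of the credit-bureau-without-identity idea described above: scores are keyed to an opaque random token rather than to a real-world identity, participating sites report behavior against that token, and the full event history stays visible so a user can challenge any entry. All class and method names here are hypothetical, not an actual ReputationShare API.

```python
import secrets

class ReputationBureau:
    """Hypothetical sketch of a credit-bureau-style reputation store.

    Scores are keyed to an opaque random token, never to a name or
    other personal information: the bureau knows how a user behaves,
    but not who the user is.
    """

    def __init__(self, starting_score=50):
        self.starting_score = starting_score
        self.scores = {}    # token -> current score
        self.history = {}   # token -> list of (site, delta, reason)

    def issue_token(self):
        # An opaque, random identifier: carries no personal information.
        token = secrets.token_hex(16)
        self.scores[token] = self.starting_score
        self.history[token] = []
        return token

    def report(self, token, site, delta, reason):
        # A participating site reports behavior, good or bad.
        self.scores[token] += delta
        self.history[token].append((site, delta, reason))

    def get_score(self, token):
        return self.scores[token]

    def get_history(self, token):
        # Transparency: users can see every score event and challenge
        # it with the site that reported it.
        return list(self.history[token])


bureau = ReputationBureau()
alice = bureau.issue_token()
bureau.report(alice, "news-site.example", -10, "abusive comment removed")
print(bureau.get_score(alice))   # 40
print(bureau.get_history(alice))
```

Note that nothing in the store ties `alice` back to a person; the token is the only handle a participating site ever sees.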
For companies, this enables strong moderation of consumer content on their websites at a very low cost, in two ways:
- Once the company establishes the guidelines for how users with different types of reputations are handled, these settings are reflected in the user’s registration process. Problem users, who aren’t allowed to post content, aren’t going to leave a mess for moderators to clean up.
- Consumers with a good reputation want to maintain their good reputation and are likely to act appropriately. This means that websites don’t have to monitor 100% of comments; they only need to monitor the comments left by the 5-15% of users who are known to behave poorly, or who for whatever reason do not have an established reputation.
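The two points above – gating posting rights at registration time and focusing moderation on users with poor or unestablished reputations – might look something like this sketch. The tier thresholds and field names are hypothetical, chosen only to mirror the “open arms / no extras / restricted” tiers described earlier.

```python
def posting_policy(score):
    """Map a reputation score to site-defined posting rules (hypothetical tiers)."""
    if score is None:   # no established reputation yet: allow, but review
        return {"can_post": True, "needs_review": True, "perks": False}
    if score >= 75:     # sterling reputation: open arms and extras
        return {"can_post": True, "needs_review": False, "perks": True}
    if score >= 40:     # OK reputation: no reservations, no extras
        return {"can_post": True, "needs_review": False, "perks": False}
    # history of bad behavior: restricted until they keep their nose clean
    return {"can_post": False, "needs_review": True, "perks": False}


comments = [
    ("great point!", 90),      # trusted user
    ("first-time poster", None),  # unknown reputation
    ("spammy rant", 10),       # known bad actor: never gets posted
]

# Only posted comments from users needing review hit the moderation queue.
queue = [text for text, score in comments
         if posting_policy(score)["can_post"]
         and posting_policy(score)["needs_review"]]
print(queue)   # ['first-time poster']
```

This is what keeps moderation cheap: the known bad actor is stopped at the door, the trusted user skips review entirely, and moderators only see the small slice of unestablished users.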
This service also helps consumers directly. First, I don’t have to be subjected to offensive content. Second, I can set my options to receive interactions only from people with a reputation rating above ‘x’, choosing for myself, or for minors in my care, the types of people who can interact with us. For example, on an online dating site, I might say that I don’t want to be paired with anyone who has been flagged for cyberstalking or financial scams; anyone with those negative reputation flags would then be eliminated from my search results.
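The dating-site example reduces to a simple filter. This sketch assumes hypothetical per-user records carrying a score and a set of negative behavior flags; none of these names come from an actual ReputationShare API.

```python
def filter_matches(candidates, min_score=0, blocked_flags=frozenset()):
    """Drop candidates below a score threshold or carrying any blocked flag.

    candidates: list of dicts like
        {"token": "...", "score": 72, "flags": {"cyberstalking"}}
    """
    return [
        c for c in candidates
        if c["score"] >= min_score and not (c["flags"] & blocked_flags)
    ]


candidates = [
    {"token": "a1", "score": 85, "flags": set()},
    {"token": "b2", "score": 60, "flags": {"cyberstalking"}},
    {"token": "c3", "score": 20, "flags": {"financial-scam"}},
]

# "Don't pair me with anyone flagged for cyberstalking or financial
# scams" -- and require at least a middling score.
safe = filter_matches(candidates, min_score=40,
                      blocked_flags={"cyberstalking", "financial-scam"})
print([c["token"] for c in safe])   # ['a1']
```

The filter never needs to know who `b2` or `c3` are, only how they have behaved – the same privacy property the service promises everywhere else.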
We can tackle online civility, and it is critical that we do so. The Internet is in its infancy, but the choices we make now about acceptable online behavior will cast very long shadows.