“Web 2.0 harnesses the stupidity of crowds as well as its wisdom. Some of the comments on YouTube make you weep for the future of humanity just for the spelling alone, never mind the obscenity and the naked hatred.”
Lev Grossman, Time Magazine
Photo credit: Stephen McLeod
Originally, when this blog started and had readers numbering only in the tens rather than the tens of thousands, some of the regular comment makers were very witty and brought gossip. In the last four years 200,000 comments have been made, and the signal-to-noise ratio and average quality of the comments have declined.
[…] There will be incentives for those who produce better quality commentary based on a new element of co-conspirator community rating. Good comments will be more prominently displayed, disliked comments will be less prominent. The biggest innovation is that it will be possible for readers to set their own tolerance thresholds. Poorly rated comments will be invisible to those who set their preferences accordingly. If you only want to see comments judged by co-conspirators to be witty, amusing or illuminating, set your threshold to “Recommended”. Don’t want to read foul language? Set your threshold to “U”. Want to see all and any comments no matter how foul? Set your threshold to “XXX”.
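The threshold scheme described above is simple enough to sketch. The rating labels ("XXX", "U", "Recommended") come from the quote; the numeric ordering, the `Comment` structure and the sample data are my own illustrative assumptions:

```python
from dataclasses import dataclass

# Higher number = cleaner / better-rated comment (assumed ordering).
LEVEL = {"XXX": 0, "U": 1, "Recommended": 2}

@dataclass
class Comment:
    author: str
    text: str
    rating: str  # community-assigned: "XXX", "U" or "Recommended"

def visible_comments(comments, threshold):
    """Return only the comments at or above the reader's threshold."""
    floor = LEVEL[threshold]
    return [c for c in comments if LEVEL[c.rating] >= floor]

comments = [
    Comment("a", "witty remark", "Recommended"),
    Comment("b", "ordinary comment", "U"),
    Comment("c", "foul-mouthed rant", "XXX"),
]

print(len(visible_comments(comments, "XXX")))          # sees all three
print(len(visible_comments(comments, "Recommended")))  # sees only the best
```

The key design point is that the rating is set once by the community, but the filtering decision is made per reader, so nothing is ever actually censored.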
It’s intriguing that bloggers like Guido are now looking at how to improve the signal-to-noise ratio in their user feedback. It’s the only way to go – I don’t even bother to read the vitriolic or simply tedious nonsense that is inevitably posted on virtually any commentable story on BBC News Online or popular YouTube video. And as more people start to see the web as a place for self-expression, in their own spaces and other people’s, hearing the interesting voices in the crowd is only going to become more important.
In ‘A Group Is Its Own Worst Enemy’ (available as part of Joel Spolsky’s collection The Best Software Writing), Clay Shirky describes the phenomenon of group behaviour online:
The core group has rights that trump individual rights in some situations. This pulls against the libertarian view that’s quite common on the network, and it absolutely pulls against the one person/one vote notion. But you can see examples of how bad an idea voting is when citizenship is the same as ability to log in.
eBay has done us all an enormous disservice, because eBay works in non-iterated atomic transactions, which are the opposite of social situations. eBay’s reputation system works incredibly well, because it starts with a linearizable transaction — “How much money for how many Smurfs?” — and turns that into a metric that’s equally linear. That doesn’t work well in social situations. If you want a good reputation system, just let me remember who you are. And if you do me a favor, I’ll remember it.
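Shirky’s alternative – “just let me remember who you are” – amounts to keeping relational, per-person history rather than collapsing behaviour into one eBay-style global number. A toy sketch of that idea (the `Memory` class and names are purely illustrative, not from any real system):

```python
from collections import defaultdict

class Memory:
    """Reputation as remembered, pairwise history rather than a global score."""

    def __init__(self):
        self._favors = defaultdict(int)  # counterparty -> favors remembered

    def remember_favor(self, who):
        self._favors[who] += 1

    def trusts(self, who):
        # Trust here is relational: it depends on *our* shared history,
        # not on a single linear metric visible to everyone.
        return self._favors[who] > 0

me = Memory()
me.remember_favor("alice")
```

Each participant carries their own view of everyone else, which is exactly what a single linearized score throws away.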
He concludes with four design principles for successful, large-scale online interactions:
- handles the user can invest in: strong online identities which people use and remember and associate with helpful and trollish interactions
- a way for there to be members in good standing: whether it’s reputation points, publicly-displayed length of membership or endorsement by other members
- barriers to participation: prioritising the functioning of the group, putting practical barriers in the way to dissuade casual contributions (like a login)
- a way to spare the group from scale: breaking the group into more manageable interactions (like Twitter @replies filters)
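The four principles above can be pulled together in a minimal sketch: persistent handles, visible standing, a small barrier to posting, and sharding the group into smaller rooms. The field names, thresholds and room size are illustrative assumptions, not from Shirky’s essay:

```python
from dataclasses import dataclass

@dataclass
class Member:
    handle: str              # 1. an identity the user can invest in
    reputation: int = 0      # 2. a visible measure of good standing
    logged_in: bool = False  # 3. a (small) barrier to participation

def may_post(member, min_reputation=10):
    """Only logged-in members in good standing may post."""
    return member.logged_in and member.reputation >= min_reputation

def shard_into_rooms(members, room_size=50):
    """4. Spare the group from scale: break it into smaller rooms."""
    return [members[i:i + room_size] for i in range(0, len(members), room_size)]
```

A quick check of how the pieces behave: a logged-in veteran can post, an anonymous newcomer cannot, and 120 members split into three rooms of at most 50.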
Tech sites have been here before, and had to solve these problems first. In ‘Social Software and the Politics of Groups‘, Shirky describes the evolution of the Slashdot approach:
Slashdot’s core principle, for example, is “No censorship”; anyone should be able to comment in any way on any article. Slashdot’s constitution (though it is not called that) specifies only three mechanisms for handling the tension between individual freedom to post irrelevant or offensive material, and the group’s desire to be able to find the interesting comments. The first is moderation, a way of convening a jury pool of members in good standing, whose function is to rank those posts by quality. The second is meta-moderation, a way of checking those moderators for bias, as a solution to the “Who will watch the watchers?” problem. And the third is karma, a way of defining who is a member in good standing. These three political concepts, lightweight as they are, allow Slashdot to grow without becoming unusable.
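The three mechanisms Shirky names – moderation, meta-moderation and karma – interlock in a way that is easy to sketch. The numbers and data structures below are illustrative assumptions, not Slashdot’s actual implementation:

```python
karma = {"vet": 40, "newbie": 1}  # karma defines who is in good standing
post_scores = {"post-1": 1}

def can_moderate(user, threshold=25):
    # Only members in good standing join the jury pool.
    return karma.get(user, 0) >= threshold

def moderate(user, post, delta):
    # Moderation: members in good standing rank posts by quality.
    if can_moderate(user):
        post_scores[post] += delta

def meta_moderate(moderator, fair):
    # Meta-moderation: reward fair moderators, penalise biased ones,
    # answering "who will watch the watchers?" with the group itself.
    karma[moderator] += 1 if fair else -5

moderate("vet", "post-1", +1)     # allowed: high karma
moderate("newbie", "post-1", +1)  # silently ignored: not in good standing
meta_moderate("vet", fair=True)
```

Note that nothing here deletes a post: the worst that happens is a low score, consistent with the “no censorship” principle.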
The same applies to answers to questions in technical forums – how do you sift the knowledgeable, valuable answers from the flames, errors and meanderings of newbies? Again, through reputation. StackOverflow takes the importance of reputation to a new level:
Here’s how it works: if you post a good question or helpful answer, it will be voted up by your peers. If you post something that’s off topic or incorrect, it will be voted down. Each up vote adds 10 reputation points; each down vote removes 2. You can earn up to 200 reputation per day, but no more. Amass enough reputation points and Stack Overflow will allow you to do more things on the site, beyond simply asking and answering questions […] At the high end of this reputation spectrum there is little difference between users with high reputation and moderators. That is very much intentional. We don’t run Stack Overflow. The community does.
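The arithmetic in that quote (+10 per up vote, -2 per down vote, capped at 200 a day) is simple enough to write down directly. One detail is an assumption on my part: the quote doesn’t say whether down votes count against the daily cap, so here the cap is applied to the net total:

```python
UP, DOWN, DAILY_CAP = 10, -2, 200

def daily_reputation(up_votes, down_votes):
    """Reputation earned in one day under the quoted rules, capped at 200."""
    earned = up_votes * UP + down_votes * DOWN
    return min(earned, DAILY_CAP)

print(daily_reputation(5, 2))   # 5*10 - 2*2 = 46
print(daily_reputation(30, 0))  # 300, capped to 200
```

The cap is the interesting design choice: it rewards sustained participation over a single lucky viral answer.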
Social media thrives on feedback. For sites like this one, the volumes are manageable and the topics niche enough for readers to identify valuable comments. But at larger scales, sites are going to have to adopt, and readers are going to have to use, the paradigms of reputation and peer moderation to ensure that feedback itself continues to have any value at all.