Moderatrix to gain even more sinister powers
Cyberbullies quail as online reputation software launches
A new system to improve the behaviour of visitors to internet sites, by granting more draconian exclusion powers to moderators, is launching this week in the UK.
The ReputationShare technology was launched in the US in April of this year by LOOKBOTHWAYS Inc, a US-based company that has already developed a number of systems designed to combat cyberbullying and encourage more courteous online behaviour.
The principle behind ReputationShare is simple: individuals who behave badly on one site are likely to behave badly on other sites too – so anything that can help site administrators spot those individuals before they flame others (or otherwise “act up”) has to be a good thing.
ReputationShare is based on a points system. When individuals join a site, their score is low, and they may gain or lose points according to how they behave: at present, the system recognises some 45 categories of behaviours, ranging from highly positive to highly negative.
Negative behaviours would include flaming, online bullying, and a range of behaviours for which moderators might feel it necessary to issue a warning or suspension. Positive behaviour may be harder to define, but in essence, individuals who contribute positively to a site may be noticed and gain points.
Whilst the system's currency is the unique email address, users with positive scores can opt to associate additional email addresses, so that they are recognised wherever they register.
The judgement call when it comes to specific behaviours is entirely in the hands of system moderators – unlike the reputation systems run by sites such as eBay and Amazon, where reputation is based on the views of other users.
Scores ought to be dished out in line with site Terms and Conditions. According to Linda Criddle, President and Co-Founder of LOOKBOTHWAYS, the company does not allow clients to run the system unless it is clearly flagged to individuals that they are being monitored and scored in this way.
She added: "We also require companies to have an appeals procedure in place, so that individuals who feel that they have been unfairly picked on can challenge moderator decisions." Users can view their score and see companies that have contributed to it. If they feel a score is unjust, they can go back to the reporting companies and ask for arbitration.
The reason that this system takes reputation monitoring to the next level is the fact that it combines reputation scores across sites. Anyone registering with a site will be required to provide an email address. The ReputationShare software then applies a "one-way crypted hash algorithm", which converts the email address to anonymised format, and then stores it on the LOOKBOTHWAYS server.
Thereafter, anyone registering on a new site and using that email address will bring their reputation score with them, allowing site owners to spot potential troublemakers in advance.
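The hashing step can be sketched as follows. The article does not name the algorithm beyond a "one-way crypted hash algorithm", so SHA-256 and the `anonymise_email` helper here are illustrative assumptions, not LOOKBOTHWAYS's actual implementation:

```python
import hashlib

def anonymise_email(email: str) -> str:
    """Reduce an email address to a one-way token.

    SHA-256 is an assumption, chosen as a typical one-way hash.
    Normalising whitespace and case first means the same address
    always maps to the same token, so a returning user can be
    matched without the server ever storing the address itself.
    """
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# The same address always yields the same token:
assert anonymise_email("Troll@example.com") == anonymise_email("troll@example.com")
```

Because the hash is one-way, a site querying the central server only needs to send the token, which cannot be reversed into the original address.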
Of course, it is possible that the real trolls will just keep changing email address. This will only get them so far – new email addresses will start off with a low score and, if the system takes off, the downweighting applied to new addresses is likely to become even more severe.
In the same way, bad behaviour does not stick forever, with negative scores gradually reducing over time.
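The article gives no formula for this decay. As a hypothetical illustration only, a negative score might decay exponentially toward zero with a fixed half-life (the `decayed_score` helper and the 90-day half-life below are assumptions, not LOOKBOTHWAYS's actual scheme):

```python
def decayed_score(score: float, days_elapsed: float,
                  half_life_days: float = 90.0) -> float:
    """Decay a negative reputation score toward zero over time.

    The article says only that bad behaviour "does not stick
    forever"; exponential decay with a 90-day half-life is a
    hypothetical model. Positive scores are left untouched.
    """
    if score >= 0:
        return score
    return score * 0.5 ** (days_elapsed / half_life_days)

# A -40 penalty halves after one 90-day half-life:
print(decayed_score(-40.0, 90.0))  # -20.0
```

Under a model like this, a one-off offence eventually becomes negligible, while a troll who keeps reoffending keeps topping the penalty back up.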
According to Linda Criddle, there are a multitude of ways in which this approach can work. She said: “Parents can get involved, and set filters so that their children can only interact with people who have never been flagged for bullying. Dating sites might bias partner selection against cyber-stalkers.”
There is no software licence involved: organisations are charged according to how often they query the service, and may incur additional costs if they integrate it fully with in-house systems. However, for a site with a million registered users, under a typical usage scenario, the monthly cost of ReputationShare is anticipated to be below US$5,000.
It is being brought to the UK by e-Moderation, which has moderators ready to use the system and operations staff capable of integrating it with existing UGC systems. e-Moderation will offer ReputationShare to all its clients as part of its standard service.
According to LOOKBOTHWAYS, the system has no Data Protection implications. We spoke to the Office of the Information Commissioner, who agreed that so long as personal information was not passing "hand to hand", this was the correct interpretation of the law - although if matters changed, they would look into it. ®
Re: Re: But who will Moderatrix the Moderatrix?
Oh - I stand corrected. It was just a feeling I had that way, way back it had happened - apparently my paranoia is really taking over. Bummer.
I get that some people really ARE trolls, and I have no problem with a site saying "nope, you don't get to say that here" - it's the online equivalent of "management reserves the right to refuse entry". But if people genuinely are getting bounced all the time from a site for minor faux pas' (however you pluralise that), then they are just going to quit the site, and rightly so. There are several sites I very rarely even bother visiting any more, as I know from experience the comments are so heavily moderated that what you end up reading is no longer a representative cross-section of opinions; newspapers seem especially bad at this.
That's bad enough, but the idea that loads of sites are going to club together to moderate will just put people off (it certainly puts me off). Before ever posting anything you'll be sat wondering "Is this site likely to mark me as a troll for this, based on their own agenda?", "Is this going to cause problems for me on other sites?", "Ooh, I'd better not...", and before you know it you are no longer contributing to a discussion for fear of wider repercussions.
Reduce people's ability or desire to be involved and you lose a major reason to visit some sites, leading to a potential downward trend in visitors. I'd say implementing this system would be a risk rather than a benefit to most sites: if you are going to implement it, you are probably already moderating anyway - why would you want to effectively outsource your moderation to other people who potentially have their own agendas?
Right enough of this - I'm off to another site to call someone a c**t
Mike Powers makes a good point
Forums aren't like CIX, The Well or CompuServe, where there was a large number of paid users with a definite identity. Their income typically comes from adverts and charging for classified ads. There's one particular site that invited various fuckwits to comment on one very popular thread just so they had ad viewings...
There are too many ways round automatic moderation: it will usually require a decent moderator. That may involve playing by the moderator's rules, and I don't have a problem with that. Don't like it? Set up your own group and see if people prefer your style.
A few forums are like a decent debating society. Most, however, are like a pub after a few beers... Being 100% correct may result in your being branded an arsehole if the people aren't ready, or don't want to face reality. Are you going to ban/mark down the 90% of the user community who refuse to confront their own prejudices, instead of the one or two people who are quoting real research, fact and informed opinion - because that's (theoretically) what should be done? Who is going to do this marking down, when it's most of the community that is actually in the wrong? Is it fair to be unfairly affected on other sites, when you were responding to ill-informed or blatantly stupid points on a particular site? That then leads down the hideously slippery slope of defining morals.
Slashdot is not a good example of decent moderation. I never liked it much, and a cursory examination shows the same pattern: trendier viewpoints being marked up, and unusual but better thought-out opinions barely being marked at all. It's also difficult to navigate.
I'm not sure of the solution. I want a modern CIX: lots of varied users; the ability to ban people; verified identities that people ideally have to pay for (reduces some of the idiots); and the ability to follow certain people, like on LiveJournal (good for blogging, crap for proper discussion). CIX still exists, but it no longer has the critical mass to make it viable.
Perhaps the seriousness level of the discussion could be found by grepping the original message/response ratio of 'TL;DR' and 'LOL'....
To all the naysayers
Obviously you haven't moderated more than one discussion board if you don't realise that the majority of trolls are morons in the technical sense as well as the other senses.
Most of them *do* seem to use the same email address in multiple locations, and "real"-appearing ones, not throwaway mailinator ones.
As for the argument that trolls don't disrupt discussion that much: you obviously haven't been on a site that addresses controversial topics, like *gasp*, feminism. While manual intervention is the primary way to deal with these idiots, most blog/discussion sites of that nature don't have 24/7 mods, shocking as that may be to some. Blog engines like WordPress help by having a setting that sends first-time comments to the moderation queue, and Akismet is a useful thing. But something that works without your having to intervene so much would be useful.
Even if it helps cut down the volume of f#ckwittedtude by 30%, I think it's worthwhile.