HBGary 'puppets' FAIL to convince
Leaked doc outlines dumb rep management strategy
It looks like we should all share Homer Simpson’s sock-puppet phobia.
If this blog post is accurate, then corporates aren’t just briefing social media teams to “manage” their reputation on services like Twitter. They’re creating armies of software-driven sock-puppets to gang up on bloggers and commenters to swamp negative comment.
The Daily Kos poster is particularly offended that HBGary, the company that embarrassed itself by taking on “hacktivist” group Anonymous and being hacked in return, would be deploying such tactics against its critics.
The technique is based on creating a kind of meta-manager of online personae, to make sure (as the HBGary document puts it) that the person hired to massage an employer’s online reputation doesn’t “accidentally cross-contaminate personas during use”.
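In code terms, that “meta-manager” idea might look something like the sketch below: each fake persona keeps its own isolated state, and exactly one can be “live” at a time, so an operator can’t post as the wrong one. Every name and class here is invented for illustration; the leaked documents describe the concept, not an implementation.

```python
# Hypothetical sketch of a persona "meta-manager". All identifiers are
# made up; nothing here comes from the HBGary documents themselves.

class Persona:
    def __init__(self, handle, writing_style):
        self.handle = handle
        self.writing_style = writing_style
        self.session_state = {}    # per-persona cookies etc., never shared
        self.post_history = []

class PersonaManager:
    def __init__(self):
        self._personas = {}
        self._active = None

    def add(self, persona):
        self._personas[persona.handle] = persona

    def switch_to(self, handle):
        # Hard switch: one persona is live at a time -- this is the
        # guard against "cross-contaminating personas during use".
        self._active = self._personas[handle]
        return self._active

    def post(self, text):
        if self._active is None:
            raise RuntimeError("no active persona selected")
        self._active.post_history.append(text)
        return (self._active.handle, text)

mgr = PersonaManager()
mgr.add(Persona("@happy_customer_01", "chirpy"))
mgr.add(Persona("@concerned_dad_77", "earnest"))
mgr.switch_to("@happy_customer_01")
print(mgr.post("Loving the new service!"))
```

The design point is simply that state isolation, not cleverness, is what stops the operator mixing up mouths and voices.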
“Get over it” is one reasonable response. The only thing revealed by HBGary is that the business of sock-puppet management is more sophisticated than, perhaps, “real” people might expect. But it should not be surprising: people have been prepared to pay for competitive advantage in the world of “reputation management”, and where there’s money, there will always be someone offering innovations of their own to grab a slice.
So in this iteration of the online arms race that I’m tempted to call “The King’s Shilling” (except I suppose that’s too awful a pun), someone’s realized that instead of a social media team sharing one account so as to keep the up-vibe posts flowing, they can have one social media sow in a stall suckling lots of Facebook and Twitter piglets all at once.
It’s hard to work up a good imitation of surprise at this. It’s also hard to see such a strategy working.
No matter the advances in artificial intelligence over the years, “real” people remain good at identifying fakes. If you watch even a couple of contentious hashtags – in Australia, #nbn (the hashtag Aussies use to discuss the National Broadband Network) will do as an example – the auto-Tweets stand out as if lit by neon.
For a start, telling a machine to throw a couple of links, RSS feeds or pre-canned responses in the direction of any given hashtag results in giveaway howlers: “watch this!” messages turning up with links to American news programming.
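Those giveaways are so formulaic that even a toy heuristic catches them: pre-canned hook phrases paired with a bare link, fired at a hashtag regardless of context. The patterns and sample tweets below are invented for illustration, not drawn from any real detection tool.

```python
import re

# Toy heuristic for the "giveaway howlers" described above: a generic
# canned hook ("watch this!") plus a link, dropped into a hashtag.
# Hook phrases and samples are made up for this sketch.

CANNED_HOOKS = re.compile(r"^(watch this|check this out|must see)\b",
                          re.IGNORECASE)
BARE_LINK = re.compile(r"https?://\S+")

def looks_canned(tweet: str) -> bool:
    """Flag tweets that read like pre-canned, machine-fired link drops."""
    return bool(CANNED_HOOKS.search(tweet)) and bool(BARE_LINK.search(tweet))

samples = [
    "Watch this! http://example.com/us-news-clip #nbn",
    "Rollout in my street finally finished, 90Mbps down #nbn",
]
print([looks_canned(t) for t in samples])  # → [True, False]
```

A real filter would need far more than two regexes, of course; the point is how little it takes to separate a canned bot post from a human one.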
“There are a variety of social media tricks we can use to add a level of realness to fictitious personas,” says one of the HBGary documents. It may be so: but I don’t see any evidence that anybody’s Twitter-bots have passed the Turing test yet.
While it looks a little like the corporate threat to democracy and free discussion that the Daily Kos believes it to be, it’s also a completely self-destructive strategy. The personas will invade any and every conversation they’re instructed to, acting like over-indulged toddlers and yelling “want #banana NOW!” across grown-up conversations.
Instead of creating an illusion of consensus, they’ll either be blocked by people who want to talk like adults, or where they can’t be blocked, they’ll drive users away from the medium they seek to dominate.
And they’ll be deploying their bots and “social media experts” into a world in which an army of amateur – but frequently effective – sleuths will be ready to unmask and pounce upon their inept attempts to manage conversations in their direction. ®