
Google-backed research fights review spam

Seeing through the sockpuppet


University of Illinois at Chicago researchers are taking aim at fake reviews, which they say can seriously damage online businesses.

In particular, the Google-backed study is designed to seek out organized groups of comment fraudsters, and automate the process of identifying and shutting them down.

Fake reviewers can have devastating effects on a variety of Internet-dependent businesses, and with the emergence of user-review-driven sites like Yelp and TripAdvisor, both positive frauds (to promote a business) and negative frauds (to damage a competitor) are becoming endemic.

For the affected business, the researchers say, weeding out the fakes is expensive: while it’s not hard for a human to identify a fraud, the process is labour-intensive.

In their paper, authored by the university’s Bing Liu and Arjun Mukherjee, along with Google’s Natalie Glance, the researchers present an algorithm called GSRank which they hope can be deployed against review fraud.

The key to identifying groups engaged in organized review fraud is their behavior, the paper states, with the key fingerprints comprising the following (a rough code sketch of these signals appears after the list):

* Time window – members of a group working together to promote or demote a product or service are likely to post reviews within a few days of each other;

* Deviation – naturally, since they’re hired to push a product’s ratings in a particular direction, an organized group will all post similar ratings. The degree to which the group’s reviews deviate from “genuine” reviews is a hint that someone’s trying to game the system;

* Content similarity – not only will a group give their target the same rating, they’ll also often copy content among themselves. In addition, individuals trying to eke out a living in the cents-per-review business of fraud will have stock phrases that they re-use in different reviews;

* Get in first – the researchers also note that fake reviews tend to be posted early in the life of a product or service. “Spammers usually review early to make the biggest impact,” they write, because “when group members are among the very first people to review a product, they can totally hijack the sentiments”. That behavior can, however, also help identify the fakes;

* Group size – the size of the group, and its size relative to the number of genuine reviews, can both indicate the presence of spammers; and

* “Group support count” – as the researchers note, it’s unlikely that the same (say) 10 random individuals would repeatedly find themselves reviewing many different products; so to have the same group turning up across many different products also helps indicate spammers.
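
The paper formalises these signals mathematically before feeding them to GSRank; purely as an illustration of how a few of them might be computed over a set of (reviewer, product, rating, date) records, here is a minimal Python sketch. The function names, thresholds and 0-to-1 scoring scale below are assumptions made for the example, not taken from the paper.

```python
from collections import defaultdict
from datetime import date

def time_window_score(dates, max_days=28):
    """Tighter posting windows score closer to 1 (more suspicious)."""
    spread = (max(dates) - min(dates)).days
    return max(0.0, 1.0 - spread / max_days)

def deviation_score(group_ratings, other_ratings, max_dev=4.0):
    """How far the group's average rating sits from everyone else's."""
    if not other_ratings:
        return 0.0
    dev = abs(sum(group_ratings) / len(group_ratings)
              - sum(other_ratings) / len(other_ratings))
    return min(1.0, dev / max_dev)

def group_support_count(reviews, group):
    """Number of distinct products that every member of the group reviewed."""
    products_by_reviewer = defaultdict(set)
    for reviewer, product, _rating, _posted in reviews:
        products_by_reviewer[reviewer].add(product)
    common = set.intersection(*(products_by_reviewer[r] for r in group))
    return len(common)

# Example: three accounts rate the same product 5 stars within a week,
# while genuine reviewers average around 1.5 stars.
reviews = [
    ("a", "p1", 5, date(2012, 3, 1)),
    ("b", "p1", 5, date(2012, 3, 3)),
    ("c", "p1", 5, date(2012, 3, 6)),
    ("x", "p1", 2, date(2012, 2, 1)),
    ("y", "p1", 1, date(2012, 4, 20)),
]
group = {"a", "b", "c"}
group_reviews = [r for r in reviews if r[0] in group]
others = [r for r in reviews if r[0] not in group]

print(time_window_score([r[3] for r in group_reviews]))   # ~0.82: tight window
print(deviation_score([r[2] for r in group_reviews],
                      [r[2] for r in others]))            # ~0.88: large deviation
print(group_support_count(reviews, group))                # 1 shared product
```

In a real system each signal would be combined with the others (and with the group-size measures) into an overall suspicion ranking, which is the role GSRank plays in the paper.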

The researchers note that they can’t tell the difference between multiple individuals working together and a single “sockpuppet” user operating multiple user IDs. However, since their algorithm looks at behavior rather than identity, that shouldn’t matter. ®
