It's hard for a platform to have meaningful, useful ratings/reviews without
substantially Knowing Your Customer, engineering to detect manipulated reviews,
and responding in a nuanced way -- one that increases a fraudster's costs
rather than just training them to hide better. There are lots of examples of diverse
platforms not doing a very good job of this. (I'll also talk about how this
knowledge sometimes leads platforms to manipulate their own
customers to maximize sales.)
Ratings and reviews, although almost universally relied on by consumers, are, like
much other online info, often manipulated to increase sales and pump up merchant reputation
(and are sometimes used maliciously to slam a competitor).
Even sites that only allow reviews from purchasers can be manipulated, particularly on platforms
where low-cost products are sold. Ebay harbors fraudulent sellers by combining buyer and
seller reputation and not weighting by sale price. (So a 5-star rating on a trivial purchase accrues as much reputation as one on a large-value sale.)
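To make the mechanics concrete, here's a minimal sketch -- a toy model, not Ebay's actual
algorithm -- of why an unweighted feedback score is cheap to farm, and how weighting by
sale price changes the picture:

```python
# Toy illustration only: a fraudster farms 5-star feedback on many trivial
# transactions, then lands one large fraudulent sale. An unweighted average
# still looks excellent; a price-weighted average reflects the rip-off.

def unweighted_score(feedback):
    """Mean rating where every transaction counts equally (the problematic scheme)."""
    return sum(rating for rating, _ in feedback) / len(feedback)

def price_weighted_score(feedback):
    """Mean rating weighted by transaction value (one possible mitigation)."""
    total_value = sum(price for _, price in feedback)
    return sum(rating * price for rating, price in feedback) / total_value

# (rating, sale price in EUR): 50 farmed 1-euro sales, then one 500-euro fraud
history = [(5, 1.00)] * 50 + [(1, 500.00)]

print(f"unweighted:     {unweighted_score(history):.2f}")     # ~4.92, looks trustworthy
print(f"price-weighted: {price_weighted_score(history):.2f}") # ~1.36, exposes the fraud
```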
Many manipulations should be easily detectable by looking for clear behavioral signatures -- and then, rather than simply deactivating accounts (which just trains the adversaries to hide better), responding with adversary engineering. (I'll show you how to spot a lot of the red flags; a sketch of the kind of signatures I mean follows.)
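A hedged sketch of such behavioral signatures -- the field names and thresholds here are
invented for illustration, not taken from any real platform's detection pipeline:

```python
from collections import Counter
from datetime import timedelta

def red_flags(reviews):
    """Return simple warning signs for one product's review history.
    Each review is a dict with 'user', 'rating', 'timestamp', 'purchase_value'."""
    flags = []

    # 1. Burstiness: many reviews landing within a short window.
    times = sorted(r["timestamp"] for r in reviews)
    for a, b in zip(times, times[10:]):
        if b - a < timedelta(hours=24):
            flags.append("10+ reviews within a 24-hour window")
            break

    # 2. Polarization: almost nothing but 5-star (plus retaliatory 1-star) ratings.
    ratings = Counter(r["rating"] for r in reviews)
    if ratings[5] + ratings[1] > 0.9 * len(reviews):
        flags.append("ratings are >90% 5-star/1-star")

    # 3. Throwaway reviewers: most feedback backed by near-zero-value purchases.
    cheap = [r for r in reviews if r["purchase_value"] < 2.00]
    if len(cheap) > 0.5 * len(reviews):
        flags.append("majority of reviews come from near-zero-value purchases")

    return flags
```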
Examples range from pumped-up restaurant listings (one climbed to #1 in London) and Amazon's
and Ebay's problems to a puppy sales site with a rating system so bad by design
that an animal rights org sued it for facilitating fraud by puppy mills.
(There are a lot of sick puppies out there...)
Licensed to the public under https://creativecommons.org/licenses/by/4.0/
about this event: https://program.why2025.org/why2025/talk/CJQD7U/