For beauty shoppers, it was already hard enough to trust reviews online.
Brands such as Sunday Riley and Kylie Skin are among those to have been caught up in scandals over fake reviews, with Sunday Riley admitting in a 2018 incident that it had tasked employees with writing five-star reviews of its products on Sephora. It downplayed the misstep at the time, arguing that its employees could only ever have written a tiny fraction of the hundreds of thousands of Sunday Riley reviews on platforms around the globe.
Today, however, generative artificial intelligence makes faking reviews at that scale increasingly plausible.
Text-generating tools like ChatGPT, which hit the mainstream just over a year ago, make it easier than ever to mimic real reviews faster, better and at greater scale, increasing the risk of shoppers being taken in by bogus testimonials. Sometimes there are dead giveaways. “As an AI language model, I don’t have a body, but I understand the importance of comfortable clothing during pregnancy,” began one Amazon review of maternity shorts spotted by CNBC. But often there’s no way to know.
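The obvious cases are trivial to catch with a keyword filter. The minimal sketch below flags reviews containing boilerplate chatbot phrases; the phrase list is invented for illustration and is not how Fakespot or any particular detector works, and, as Khalifah notes below, most AI-written reviews carry no such marker.

```python
# A naive filter for the "dead giveaway" case: reviews that quote
# boilerplate chatbot language. The phrase list is illustrative; most
# AI-written reviews contain no such marker and would pass unflagged.

GIVEAWAY_PHRASES = (
    "as an ai language model",
    "as a language model",
    "i don't have a body",
)

def has_giveaway(review: str) -> bool:
    """Return True if the review contains a known chatbot phrase."""
    text = review.lower()
    return any(phrase in text for phrase in GIVEAWAY_PHRASES)

review = ("As an AI language model, I don't have a body, but I understand "
          "the importance of comfortable clothing during pregnancy.")
print(has_giveaway(review))  # True
```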
“Back in the day, you would see broken grammar and you’d think, ‘That doesn’t look right. That doesn’t sound human,’” said Saoud Khalifah, a former hacker and founder of Fakespot, an AI-powered tool to identify fake reviews. “But over the years we’ve seen that drop off. These fake reviews are getting much, much better.”
Fake reviews have become an industry in themselves, driven by fraud farms that act as syndicates, according to Khalifah. A 2021 report by Fakespot found roughly 31 percent of reviews across Amazon, Sephora, Walmart, eBay, Best Buy and sites powered by Shopify — which altogether accounted for more than half of US online retail sales that year — to be unreliable.
Undue Influence
It isn’t just bots that are compromising trust in beauty reviews. The beauty industry already leans heavily on incentivised human reviewers, who receive a free product or a discount in exchange for posting their opinion. It can be a valuable way for brands to get new products into the hands of their target audience and boost their volume of reviews, but consumers are increasingly suspicious of incentivised reviews, so brands should deploy them strategically and always declare them explicitly.
Sampling and review syndicators such as Influenster are keen to point out that receiving a free product does not oblige the reviewer to give positive feedback, but it’s clear from the exchanges in online communities that many users of these programmes believe they will receive more freebies if they write good reviews. As one commenter wrote in a post in Sephora’s online Beauty Insider community, “People don’t want to stop getting free stuff if they say honest or negative things about the products they receive for free.”
That practice alone can skew a product’s customer rating. On Sephora, for example, the new Ouai Hair Gloss In-Shower Shine Treatment has 1,182 reviews and a star rating of 4.3, but filtering out incentivised reviews leaves just 89. Sephora doesn’t recalculate the star rating once those reviews are removed; averaged across the non-incentivised reviews alone, the product’s rating drops to 2.6 stars. The issue has sparked some frustration among members of its online community. Sephora declined to comment.
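To see how much weight the incentivised reviews carry, consider a back-of-the-envelope recalculation. The sketch below uses hypothetical group averages roughly back-solved from the Ouai figures above; it is not Sephora data or Sephora’s methodology.

```python
# How a large block of incentivised reviews can dominate a star rating.
# Group averages are stand-ins loosely back-solved from the Ouai
# example above, not actual Sephora numbers.

def weighted_mean(groups):
    """groups: list of (review_count, average_stars) pairs."""
    total = sum(count for count, _ in groups)
    return sum(count * stars for count, stars in groups) / total

incentivised = (1093, 4.4)  # 1,182 total reviews minus the 89 organic ones
organic = (89, 2.6)         # the non-incentivised average cited above

print(f"Displayed rating: {weighted_mean([incentivised, organic]):.1f}")  # 4.3
print(f"Organic-only rating: {organic[1]:.1f}")                           # 2.6
```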
But the situation gets even murkier when factoring in the rise in reviews partially created by a human and partially by AI. Khalifah describes these kinds of reviews as “a hybrid monstrosity, where it’s half legit and half not, because AI is being used to fill the gaps within the review and make it look better.”
Adding AI to the Mix
The line between authentic reviews and AI-generated content is itself beginning to blur as review platforms roll out new AI-powered tools to assist their communities in writing reviews. Bazaarvoice, a user-generated content platform that owns Influenster and works with beauty brands including L’Oréal, Pacifica, Clarins and Sephora, recently launched three new AI-powered features, including a tool called “Content Coach.” The company developed the tool based on research showing that 68 percent of its community had trouble getting started when writing a review, according to Marissa Jones, Bazaarvoice senior vice president of product.
Content Coach gives users prompts of key topics to include in their review, based on common themes in other reviews. The prompts for a review of a Chanel eyeliner might include “pigmentation,” “precision” and “ease of removal,” for instance. As users type their review, the topic prompts light up as they are addressed, gamifying the process.
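A feature like this can be approximated with simple keyword matching. The sketch below is a guess at the general mechanism, not Bazaarvoice’s implementation; the topics and synonym lists are invented for the example.

```python
# A hedged sketch of topic prompts "lighting up" as a reviewer types.
# Illustrative only; topic names and synonym sets are invented and do
# not reflect how Content Coach actually works.

TOPICS = {
    "pigmentation": {"pigmentation", "pigmented", "colour", "color"},
    "precision": {"precision", "precise", "fine", "tip"},
    "ease of removal": {"removal", "remove", "removes", "washes"},
}

def addressed_topics(draft: str) -> set[str]:
    """Return which topic prompts the draft review already covers."""
    words = set(draft.lower().split())
    return {topic for topic, terms in TOPICS.items() if words & terms}

draft = "Super pigmented and the fine tip makes lining easy"
print(addressed_topics(draft))  # {'pigmentation', 'precision'}
```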
Jones stressed that the prompts are meant to be neutral. “We wanted to provide an unbiased way to give [users] some ideas,” she said. “We don’t want to influence their opinion or do anything that pushes them one direction or the other.”
But even seemingly innocuous AI “nudges” like those from Content Coach can still influence what a consumer writes, shifting a review from a spontaneous, considered appraisal of a product to something more programmed that requires less thought.
Ramping Up Regulation
Fakespot’s Khalifah points out that governments and regulators around the globe have been slow to act relative to the speed at which fake reviews are evolving alongside advances in generative AI.
But change is finally on the horizon. In July 2023, the US Federal Trade Commission proposed the Trade Regulation Rule on the Use of Consumer Reviews and Testimonials, which would punish marketers who feature fake reviews, suppress negative reviews or offer incentives for positive ones.
“Our proposed rule on fake reviews shows that we’re using all available means to attack deceptive advertising in the digital age,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a release at the time. “The rule would trigger civil penalties for violators and should help level the playing field for honest companies.”
In its notice of proposed rule-making, the FTC shared comments from industry players and public interest groups on the damage to consumers caused by fake reviews. Amongst these, the National Consumers League cited an estimate that, in 2021, fraudulent reviews cost US consumers $28 billion. The text also noted that “the widespread emergence of AI chatbots is likely to make it easier for bad actors to write fake reviews.”
In beauty, of course, the stakes are potentially higher, as fake reviews can also mislead consumers into buying counterfeit products, which represent a risk to a shopper’s health and wellbeing as well as their wallet.
If the FTC’s proposed rule gets the green light, as expected, it will impose civil penalties of up to $51,744 per violation. The FTC could take the position that each individual fake review constitutes a separate violation every time it is viewed by a consumer, establishing a considerable financial deterrent to brands and retailers alike.
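Under that per-view theory, the exposure compounds quickly. The figures in the sketch below are invented for illustration; only the $51,744 penalty cap comes from the proposed rule.

```python
# Back-of-the-envelope exposure under the proposed rule, assuming the
# FTC counts each consumer view of a fake review as a separate
# violation. The review and view counts are hypothetical.

PENALTY_PER_VIOLATION = 51_744  # US dollars, per the proposed rule

fake_reviews = 50           # hypothetical number of planted reviews
views_per_review = 10_000   # hypothetical consumer views of each

exposure = fake_reviews * views_per_review * PENALTY_PER_VIOLATION
print(f"Potential exposure: ${exposure:,}")  # $25,872,000,000
```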
With this tougher regulatory stance approaching, beauty brands should get their houses in order now, and see it as an opportunity rather than an imposition. There is huge potential for brands and retailers to take the lead on transparency and build an online shopping experience consumers can believe in.