Billionaire Elon Musk’s $44 billion near-buy of Twitter was one of the most gripping business sagas of our time. The case is currently in court, after Musk refused to go through with the deal, citing the social media company’s failure to respond to his multiple requests for data on fake or spam accounts. Twitter says spam accounts make up less than 5% of its user base; independent researchers say the number could be three times higher.
The row between Musk and Twitter over spambots is a subset of a larger question plaguing marketers: How much of the targeted audience is real?
The bot business
Twitter said it typically deletes more than a million spam accounts every day — during or shortly after creation — and locks millions of suspicious spam accounts if they don’t pass the verification process. In 2020, Instagram announced a new spambot policy involving rigorous vetting and stricter monitoring of “suspicious activity”. And yet, bots remain a concern on social media platforms.
“Bots are a common problem [in the industry]. That’s why it’s important for brands to seek out influencers with good levels of engagement,” says Lakshmi Balasubramanian, co-founder of influencer marketing company Greenroom, who averages 15-20 influencer campaigns per month. “The benchmark varies from brand to brand, but 4-5% engagement is a good average to look for.”
Viraj Sheth, co-founder of influencer marketing agency Monk Entertainment, disagrees, however. “Engagement may not be an ideal metric because you can buy views, likes, and comments. The best way to evaluate bots is to look at the quality of comments. If the comments are spammy or one-word answers, there’s a good chance they’re spambot comments,” he says.
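As a rough illustration of the benchmark Balasubramanian cites (this sketch is not from the original report; the formula is the commonly used engagement-rate calculation, and the figures are invented):

```python
def engagement_rate(likes, comments, shares, followers):
    """Engagement rate = total interactions / follower count, as a percentage."""
    if followers <= 0:
        raise ValueError("follower count must be positive")
    return 100 * (likes + comments + shares) / followers

# A hypothetical profile with 50,000 followers averaging 2,000 likes,
# 150 comments and 100 shares per post sits at 4.5% -- within the
# 4-5% band cited as a good average.
rate = engagement_rate(2000, 150, 100, 50000)
```

Sheth’s caveat applies here: because every input to this formula can be bought, the raw rate says nothing about whether the interactions came from real people.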
This is where influencer agencies come in, Balasubramanian points out. When pitching influencers for a particular campaign, they screen the content for brand relevance and suitability for the campaign brief. “For example, a mass brand will want an influencer with a following or audience that reflects that. Based on the brief, there could be different filters – geographic breakdown, age breakdown, and follower count, among others. We will also review their content to ensure that they are capable of delivering the desired content [for the campaign] with good engagement,” she explains.
Conversations around bots and fake accounts have punctuated the influencer marketing narrative, said Shruti Deora, head of partnerships at digital marketing agency White Rivers Media. The crux of the matter, she says, is the importance people place on follower numbers and engagement.
“Currently there is a lot of emphasis on vanity metrics like views, likes and followers. We need to focus on the depth of interactions to improve the quality of the campaign,” says Deora. “The true value of an influencer stems from the genuine relationship they have with the audience, which comes down to the content they create and the reaction it gets.”
While no app can determine how many bot accounts make up an influencer’s follower base, tools like Upfluence, Neontools, Social Blade, and Not Just Analytics can help brands understand the relationship between an influencer and their audience. Or at least the data behind this relationship.
The Good Glamm Group, which owns a portfolio of D2C beauty brands, works closely with its creator arm, The Good Creator Co., to do this. “We have SOPs (standard operating procedures) in place for reporting influencers who have added bots to their profiles,” said co-founder Priyanka Gill. This helps the company select the right set of creators for a brand and weed out those using bots, she explains.
The Good Creator Co. uses a “creator score” to weed out bots and fake accounts at the discovery stage. Based on metadata captured from the creator’s profile, the score factors in 26 different attributes, including engagement. “This score allows us to make decisions based on data, leaving very little room for subjectivity. The higher the creator score, the more authentic the creator’s followers are,” Gill shares, adding that the tool has had a success rate of over 95% so far.
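The article does not disclose how the score is computed; one plausible shape for such a tool is a weighted composite of normalized attributes. The attribute names and weights below are invented for illustration and are not The Good Creator Co.’s actual methodology:

```python
# Hypothetical sketch of a composite "creator score". The real tool reportedly
# uses 26 attributes; these four names and their weights are made up.
ATTRIBUTE_WEIGHTS = {
    "engagement_rate": 0.4,      # interactions per follower, normalized
    "comment_quality": 0.3,      # share of non-spam, multi-word comments
    "follower_growth": 0.2,      # organic growth trend, normalized
    "posting_consistency": 0.1,  # regularity of posting
}

def creator_score(attributes):
    """Combine normalized attribute values (each in [0, 1]) into a 0-100 score."""
    total = sum(ATTRIBUTE_WEIGHTS[name] * attributes.get(name, 0.0)
                for name in ATTRIBUTE_WEIGHTS)
    return round(100 * total, 1)

profile = {"engagement_rate": 0.8, "comment_quality": 0.9,
           "follower_growth": 0.5, "posting_consistency": 1.0}
score = creator_score(profile)  # higher score -> more authentic following
```

The appeal of such a composite, as Gill notes, is that it turns a subjective vetting call into a data-driven threshold.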
Wakefit.co, a D2C home and sleep solutions brand, says it uses secondary reports to track bot engagement. To minimize wasted ad spend, the brand has tightened its targeting to strongly reflect its buyer audience by leveraging factors such as purchase behavior, device usage, demographics, and interests. “Through this, we are able to cut through the deluge of accounts and users and target those we care about,” said director and co-founder Chaitanya Ramalingegowda. “We don’t measure the success of our campaigns just with metrics like likes, but continue to optimize for second-tier engagements like shares, profile visits, and comment reads.”
Impersonator accounts are also a huge problem for brands. Several personal finance influencers on Instagram recently called on the platform to crack down on fake accounts impersonating them and scamming their followers. Because these accounts closely mimic the real influencers, they are much harder for the platform and users to identify. Unless a copycat account is reported, it often stays under the radar.
One way to tell the experts from the scammers is to look for a blue tick. “Blue verification can help brands avoid fake accounts impersonating an influencer or creator. Additionally, influencers should ensure that they mention their official credentials on all their social networks,” says Sheth of Monk Entertainment.
Another concerning issue for brands is the possibility of influencers being used against them. Last month, online marketplace Meesho sent a legal notice to some social media influencers who took part in an alleged smear campaign against it. “Following the notice, some influencers acknowledged the tweets were paid promotions, while others deleted their posts,” Meesho said in a statement. The company has not commented on which influencers it collaborates with.
What are spambots?
Sometimes called “bots”, “spam”, or “fake accounts”, these are inauthentic accounts that mimic the way real people use Twitter. Some spam accounts are automated, but others are managed by people, which makes them difficult to detect.
While Twitter spring-cleans bots daily and locks out millions every week, the company does allow “good” spambots that perform a useful service. Twitter encourages many of these accounts to label themselves as bots for greater transparency. It defines these accounts as automated accounts that “help people find useful, entertaining, and relevant information.” For example, @mrstockbot automatically replies when people ask for a stock quote, while @earthquakebot tweets in real time about earthquakes of magnitude 5.0 or greater around the world.
But other spambots are used by governments or companies for a range of nefarious purposes. During the 2016 US presidential election, Russian spambot accounts impersonated Americans and sowed division among voters. Spammers also frequently try to persuade people to send cryptocurrency to online wallets for prizes that don’t exist. Sometimes spambots are also used to attack celebrities or politicians and create a hostile environment for them online.
– The New York Times