Since the world learned of state-sponsored campaigns to spread disinformation on social media and sway the 2016 election, Twitter has scrambled to rein in the bots and trolls polluting its platform. But when it comes to the larger problem of automated accounts on Twitter designed to spread spam and scams, inflate follower counts, and game trending topics, a new study finds that the company still isn’t keeping up with the deluge of garbage and abuse.
In fact, the paper’s two researchers write that with a machine-learning approach they developed themselves, they can identify abusive accounts in far greater volume, and far earlier, than Twitter does—often flagging accounts months before Twitter spotted and banned them.
Flooding the Zone
In a 16-month study of 1.5 billion tweets, Zubair Shafiq, a computer science professor at the University of Iowa, and his graduate student Shehroze Farooqi identified more than 167,000 apps using Twitter’s API to automate bot accounts that spread tens of millions of tweets pushing spam, links to malware, and astroturfing campaigns. They write that more than 60 percent of the time, Twitter waited for those apps to send more than 100 tweets before identifying them as abusive; the researchers’ own detection method had flagged the vast majority of the malicious apps after just a handful of tweets. For about 40 percent of the apps the pair checked, Twitter seemed to take more than a month longer than the study’s method to spot an app’s abusive tweeting.
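To give a flavor of how this kind of early detection can work, here is a toy sketch—not the researchers’ actual model—that scores an app as likely abusive using only features of its first few tweets (the feature set, training data, and a hand-rolled logistic-regression classifier here are all illustrative assumptions):

```python
import math

def features(tweets):
    """Per-app features from a list of tweet dicts (illustrative choices):
    fraction of tweets with URLs, fraction of duplicated text, and a
    capped tweet-count signal."""
    n = len(tweets)
    url_frac = sum(t["has_url"] for t in tweets) / n
    dupe_frac = (n - len({t["text"] for t in tweets})) / n
    return [url_frac, dupe_frac, min(n, 10) / 10.0]

def train(samples, labels, epochs=500, lr=0.5):
    """Plain logistic regression fit by gradient descent.
    The last weight is the bias term."""
    w = [0.0] * (len(samples[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            xb = x + [1.0]
            z = sum(wi * xi for wi, xi in zip(w, xb))
            p = 1 / (1 + math.exp(-z))       # predicted probability
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, xb)]
    return w

def score(w, tweets):
    """Probability that an app is abusive, from its first few tweets."""
    xb = features(tweets) + [1.0]
    z = sum(wi * xi for wi, xi in zip(w, xb))
    return 1 / (1 + math.exp(-z))

# Synthetic training data: a spammy app repeats identical link tweets,
# a benign app posts varied text without links.
spam = [{"text": "buy now", "has_url": True}] * 5
ham = [{"text": f"hello {i}", "has_url": False} for i in range(5)]
w = train([features(spam), features(ham)], [1, 0])

assert score(w, spam) > 0.5
assert score(w, ham) < 0.5
```

The point of the sketch is the shape of the approach—cheap per-app features computed over an app’s earliest tweets, fed to a simple classifier—rather than any specific feature set, which in the actual study would be far richer.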
Read more here: https://www.wired.com/story/twitter-abusive-apps-machine-learning