Is it real news or fake news? A team from the University of Texas at Arlington and the University of Texas at Dallas is working to build computer tools to detect those pesky social bots on the web that create and spread fake news.
The project is being led by Chengkai Li, an associate professor in the Department of Computer Science and Engineering at UT Arlington.
It is funded through the Texas National Security Network Excellence Fund from the University of Texas at Austin’s Clements Center for National Security and The Robert S. Strauss Center for International Security and Law.
“This is a seed grant that we hope will lead to a much larger grant that will identify these bots for social media users,” Li said in a release. “Right now, you don’t know what is coming from a real person and what’s coming from a computer, sometimes for malicious, or at least, misleading reasons.”
The project, Bot vs. Bot: Automated Detection of Fake News Bots, will focus on Twitter accounts.
On Twitter, bots are accounts run by computer programs that automatically publish and forward content, follow other accounts, leave comments, and conduct what appears to be “real” activity, according to the university.
While many experts say that fake news played a big role in the recent presidential election, the project will focus on national security threats rather than domestic politics, Li said.
“These bots often are sponsored by nation states that are hostile to U.S. interests,” Li said. “This project needs to have a worldwide reach.”
The UTA team will develop sophisticated algorithms to combat the bots and the spread of fake news.
Christoph Csallner, UTA associate professor in the Department of Computer Science and Engineering, and co-principal investigator, said that the project intends to create computer programs that can distinguish bot from human.
FAKE NEWS COMES IN DIFFERENT LEVELS
“For example, even if a bot uses high-end artificial intelligence and massive processing power, an extremely simple detection technique may be enough if the bot always posts at the same time of day or has some other trait that makes it easy to distinguish the bot from humans,” Csallner said.
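The trait Csallner describes, an account that always posts at the same time of day, could be checked with a very simple heuristic. The sketch below is a hypothetical illustration, not the project's actual detector: the function name, the 80% threshold, and the sample accounts are all assumptions made here for demonstration.

```python
from collections import Counter
from datetime import datetime

def looks_like_bot(timestamps, threshold=0.8):
    """Crude trait-based heuristic (illustrative only): flag an account
    as bot-like if most of its posts fall in the same hour of day."""
    hours = [t.hour for t in timestamps]
    if not hours:
        return False
    # Share of posts in the account's single most common posting hour
    _, top_count = Counter(hours).most_common(1)[0]
    return top_count / len(hours) >= threshold

# Hypothetical account that posts at 09:00 every day
bot_times = [datetime(2017, 3, day, 9, 0) for day in range(1, 11)]
# Hypothetical human account posting at varied hours
human_times = [datetime(2017, 3, day, hour, 0)
               for day, hour in zip(range(1, 11),
                                    [7, 9, 12, 14, 18, 20, 22, 23, 11, 16])]

print(looks_like_bot(bot_times))    # True
print(looks_like_bot(human_times))  # False
```

A real detector would combine many such signals, since a sophisticated bot can easily randomize its posting schedule, but the example shows why a single obvious trait can be enough to expose a simple one.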
It’s not as easy as it might seem, said another co-principal investigator, Mark Tremayne, UTA assistant professor in the Department of Communication.
“You might find that a bot takes a piece of real and true information, then adds an element that isn’t true. So, in the end, you have different levels of fake news,” Tremayne said.
Other co-principal investigators include Zhiqiang Lin, UT Dallas associate professor of computer science, and Angela Lee, UT Dallas assistant professor of emerging media and communication.