Social bots play a major role in spreading fake news. Researchers at Indiana University conducted a study of 14 million tweets that revealed Twitter to be a major source for the spread of fake news.
Several manipulation strategies explain why Twitter bots are so effective. First, they amplify fake news in its early stages, long before it goes viral. Then they target individual users through replies and mentions, rather than writing broad posts or retweeting. This targeting increases the chances that a post goes viral because it injects fake news directly into a closely connected human network. Finally, bots disguise themselves as human by changing their geographic location. These manipulations are largely why people spread false news from bots just as readily as from other humans, according to the study.
Building on this research, my goal is to combat this social automation of fake news by creating bots that contradict themselves. In the same manner that Amalia Ulman used contradictory images to fool people into believing she was going through a certain journey, I wish to create a bot that has several personalities. A bot that agrees and disagrees with liberals and conservatives. A bot that argues with itself about a topic on a certain thread. A bot that essentially embodies the hypocrisy and contradictory nature of the media. By using the above-mentioned manipulation strategies, I wish to help people question the information provided to them on the Internet. People often choose things based on opinions they already agree with. Hence, if they follow my bot and agree with a certain statement, the bot can then contradict that statement and present them with a point of view they were not expecting.
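The core mechanic described above, a bot that takes one stance on a topic and later contradicts itself on the same thread, could be sketched in plain Python. Everything here is an assumption for illustration: the `ContraryBot` class, the `STANCES` table, and the sample topic statements are hypothetical names, not part of any existing library.

```python
import random

# Hypothetical stance table: each topic maps to a pair of opposing
# statements the bot can alternate between. The topics and wording
# are illustrative placeholders.
STANCES = {
    "tax policy": (
        "Lower taxes spur growth.",
        "Higher taxes fund vital services.",
    ),
    "regulation": (
        "Regulation protects consumers.",
        "Regulation stifles innovation.",
    ),
}

class ContraryBot:
    """A persona that deliberately contradicts its own last statement."""

    def __init__(self, topics=STANCES):
        self.topics = topics
        self.history = {}  # topic -> index of the last stance taken

    def reply(self, topic):
        last = self.history.get(topic)
        if last is None:
            # First reply on this topic: pick a stance at random.
            choice = random.randint(0, 1)
        else:
            # Subsequent replies flip to the opposite stance.
            choice = 1 - last
        self.history[topic] = choice
        return self.topics[topic][choice]
```

Calling `reply` twice on the same topic is guaranteed to produce two opposing statements, which is the self-contradiction the project is after.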
The medium I am thinking of using is Twitter, since a lot of discussion happens on this platform. I am also going to outline certain personalities I want my bot(s) to have, which I will then diffuse across the Internet to test people's reactions.
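If Twitter is the medium, the reply-and-mention targeting strategy described earlier could be wired up roughly as follows. This is a sketch under assumptions: it presumes the third-party tweepy library (whose `Client.create_tweet` supports replying to an existing tweet), placeholder credentials, and a hypothetical `compose_reply` helper that I am inventing here for illustration.

```python
def compose_reply(username, statement):
    """Target an individual user via a mention, per the strategy above."""
    return f"@{username} {statement}"

def post_reply(username, statement, in_reply_to_tweet_id):
    # tweepy is an assumed third-party dependency; the credential
    # values below are placeholders, not real keys.
    import tweepy

    client = tweepy.Client(
        consumer_key="YOUR_KEY",
        consumer_secret="YOUR_SECRET",
        access_token="YOUR_TOKEN",
        access_token_secret="YOUR_TOKEN_SECRET",
    )
    # Reply directly on the target user's thread instead of broadcasting.
    return client.create_tweet(
        text=compose_reply(username, statement),
        in_reply_to_tweet_id=in_reply_to_tweet_id,
    )
```

Replying on a user's own thread, rather than posting broadly, mirrors the bot tactic from the Indiana University study: injecting the message directly into a connected human network.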