Introduction to my study
Little work has been done to connect the current disinformation climate to COVID-19. One of the challenges of this research has been the fact that it is an ongoing crisis: each day, the disinformation and conspiracy theories in circulation seemed to change.
An article by New York Times opinion columnist Farhad Manjoo discusses the complex, ongoing nature of pandemic-related disinformation and conspiracy theories. “Combating the deception that has overrun public discourse should be a primary goal of our society,” he writes.
As an early attempt to combat this deception, the primary purpose of this study was to understand it. What many researchers have failed to uncover is the environment in which disinformation and conspiracy theories thrive: mainly, how society’s quest for answers has propelled them into an infodemic.
My first research question looked at the topical features of disinformation and conspiracy theories during COVID-19. I was aiming to discover:
What topics are being discussed on Twitter.
Who is spreading disinformation and conspiracy theories.
What these posts, links, and URLs look like.
My second research question looked at the societal and political implications that might result from disinformation and conspiracy theories during the pandemic, including:
The role of fact-checking.
User behaviours in spreading false information.
The presence of bot accounts (and their role in spreading disinformation/conspiracy theories).
How I conducted my research
Over the course of seven months, I worked diligently to understand how disinformation and conspiracy theories operate in the current COVID-19 climate.
To gain a better understanding of the current disinformation landscape, I gathered data through Twitter between September 26 and October 2, 2020. Again, the ongoing nature of COVID-19 means that many of these conversations have changed, and will continue to change.
The first step in understanding current Twitter conversations about COVID-19 was to collect a cluster of keywords using hashtagify.me. This application helped find the most popular hashtags related to a certain topic.
First, I entered the word ‘COVID-19,’ which surfaced the popular related hashtags ‘China’ and ‘vaccine.’ Second, I entered the word ‘vaccine,’ which surfaced ‘hydroxychloroquine’ and ‘QAnon.’ The five keywords chosen to represent current Twitter conversations during the COVID-19 period were: COVID-19, China, vaccine, hydroxychloroquine, and QAnon.
After gathering these five keywords, I used a Microsoft Excel add-in called NodeXL Pro to conduct a network and content analysis. NodeXL Pro is a publicly available software tool that allows researchers to visualise network relationships. Put simply, NodeXL displays a range of information in a Microsoft Excel spreadsheet. This information includes, but is not limited to, relationships between Twitter users (how they are connected to each other), what groups have formed around the keywords, and how positive, negative, or hostile/violent the conversations are online.
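In the study, this step was handled entirely by NodeXL Pro, but the underlying idea of a network relationship can be sketched in a few lines of Python. Everything below (usernames and mention pairs) is invented purely for illustration:

```python
from collections import Counter

# Hypothetical (author, mentioned_user) pairs extracted from tweets.
# NodeXL treats each pair like this as an edge in a directed network.
mentions = [
    ("alice", "bob"),
    ("alice", "carol"),
    ("dave", "bob"),
    ("erin", "bob"),
]

# Count in-degree (how often each user is mentioned) -- one basic
# measure of a user's reach within the network.
in_degree = Counter(target for _, target in mentions)
print(in_degree.most_common(2))  # [('bob', 3), ('carol', 1)]
```

An edge list like this is essentially what a NodeXL workbook stores; the visualisation and group detection are built on top of it.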
How I measured the data
My study yielded interesting and unexpected results. Once the five unique datasets were collected, I was able to analyse the influence (reach) and characteristics of each NodeXL workbook to identify any volumetric and topical changes to disinformation over a one-week period. Overall, I looked at 5,000 individual Twitter users.
I aimed to measure the number of individual users and the words they used. To do this, I used NodeXL Pro to run a sentiment analysis for each keyword, measuring whether conversations were positive, negative, or hostile/violent.
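NodeXL Pro performed the sentiment analysis itself. Purely to illustrate the general idea of word-list-based sentiment classification, here is a minimal Python sketch; the word lists and example tweets are invented, and real tools use far larger lexicons:

```python
# Tiny illustrative word lists -- invented for this sketch.
POSITIVE = {"safe", "effective", "hope"}
NEGATIVE = {"hoax", "dangerous", "lie"}
HOSTILE = {"destroy", "attack"}

def classify(tweet: str) -> str:
    """Label a tweet positive, negative, hostile, or neutral by word matches."""
    words = set(tweet.lower().split())
    if words & HOSTILE:
        return "hostile"
    if len(words & NEGATIVE) > len(words & POSITIVE):
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

print(classify("the vaccine is safe and effective"))  # positive
print(classify("covid is a hoax and a lie"))          # negative
```

Applied across thousands of tweets per keyword, counts of these labels give the overall tone of each conversation.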
To measure disinformation specifically, I ranked the top 50 users within the datasets by their output (the number of posts they produced). Once I had this information, I analysed the top uniform resource locators (URLs) contained in their posts.
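The ranking-and-tallying step can be illustrated with a short Python sketch. The usernames, URLs, and counts below are invented; in the study, these fields came from the NodeXL workbooks:

```python
from collections import Counter

# Invented (username, url) rows standing in for collected posts.
posts = [
    ("user_a", "http://example.com/claim"),
    ("user_a", "http://example.com/claim"),
    ("user_a", "http://example.com/other"),
    ("user_b", "http://example.org/factcheck"),
    ("user_b", "http://example.com/claim"),
    ("user_c", "http://example.com/claim"),
]

# Rank users by output (number of posts), then tally the URLs
# shared by the most prolific users.
post_counts = Counter(user for user, _ in posts)
top_users = {user for user, _ in post_counts.most_common(2)}
top_urls = Counter(url for user, url in posts if user in top_users)

print(post_counts.most_common(2))  # [('user_a', 3), ('user_b', 2)]
print(top_urls.most_common(1))     # [('http://example.com/claim', 3)]
```

The same logic scales from two top users to fifty: sort by output, then count the links those accounts push most often.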
The final step of my study was to run the top 50 users through a bot-detection tool to see whether they were real people. As I would come to discover, many of them were not.
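The study relied on an external bot-detection tool for this step. As a stand-in, the sketch below flags accounts using a single crude heuristic, an implausibly high posting rate, purely to illustrate one kind of signal such tools combine with many others. All account names, figures, and the threshold are invented:

```python
# Invented account statistics -- not data from the study.
accounts = {
    "user_a": {"tweets": 90_000, "account_age_days": 400},
    "user_b": {"tweets": 3_000, "account_age_days": 1_500},
}

BOT_THRESHOLD = 100  # tweets per day; an assumed cutoff for this sketch

def looks_like_bot(stats: dict) -> bool:
    """Flag an account whose average posting rate exceeds the threshold."""
    rate = stats["tweets"] / stats["account_age_days"]
    return rate > BOT_THRESHOLD

flagged = [name for name, stats in accounts.items() if looks_like_bot(stats)]
print(flagged)  # ['user_a']
```

Real bot detectors weigh dozens of features (profile metadata, follower patterns, timing of posts); posting rate alone would misclassify many prolific human users.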