The use of artificial intelligence in social media monitoring

Different types of AI-led monitoring methods are utilised by governments for social media monitoring (SMM). These methods can be categorised into three groups: (1) the use of in-house applications, (2) the use of third-party applications and (3) the use of services provided by platform companies. In-house applications are AI monitoring tools developed and utilised internally by government agencies, e.g. the Swedish Defence Research Agency (Schmuziger Goldzweig et al., 2019, p. 41). Many of the tools utilised to monitor political content were initially created for business purposes, such as strategic marketing or online reputation management, and have been repurposed for political use (Croll & Power, 2009; Schmuziger Goldzweig et al., 2019). There are, therefore, numerous commercial products and tools that can aid in collecting, protecting and presenting data obtained through monitoring (Twetman et al., 2020, p. 47). Examples of such applications include CrowdTangle (Schmuziger Goldzweig et al., 2019, p. 26), Brandwatch (Schmuziger Goldzweig et al., 2019, p. 15; Twetman et al., 2020, p. 51) and Knowlesys (Shahbaz & Funk, 2019).

Platform companies offer application programming interfaces (APIs) and various other services that can be used for SMM. By accessing their APIs, it is possible to obtain detailed data on publicly shared posts, including information on who shared the post, who liked it and how many interactions it received. This data can be helpful for various purposes, including tracking sentiment and identifying potential issues or threats. In addition, companies provide services that can assist in maintaining electoral integrity, such as requiring advertisers to disclose information about who is funding the ads and to whom they are being targeted (Clark-Schiff, 2021), as well as identifying suspicious accounts and content and blocking them. These measures can help prevent the spread of disinformation and ensure that users have access to accurate information during election periods (Brattberg & Maurer, 2018).
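As a hypothetical illustration of what API-based monitoring involves, the sketch below aggregates engagement counts from post records in the JSON-like shape many platform APIs return. The field names (`author`, `likes`, `shares`, `comments`) are assumptions for illustration, not the schema of any specific platform's API.

```python
from collections import Counter

def summarise_engagement(posts):
    """Aggregate total interactions per author from API-style post records.

    `posts` is a list of dicts with hypothetical fields ('author',
    'likes', 'shares', 'comments'); real platform APIs use their own
    field names and pagination, omitted here for brevity.
    """
    totals = Counter()
    for post in posts:
        interactions = (
            post.get("likes", 0)
            + post.get("shares", 0)
            + post.get("comments", 0)
        )
        totals[post["author"]] += interactions
    return dict(totals)

# Example: publicly shared posts as a platform API might return them.
sample = [
    {"author": "party_a", "likes": 120, "shares": 30, "comments": 15},
    {"author": "party_b", "likes": 80, "shares": 10, "comments": 5},
    {"author": "party_a", "likes": 40, "shares": 5, "comments": 0},
]
print(summarise_engagement(sample))  # {'party_a': 210, 'party_b': 95}
```

Per-author aggregates of this kind are the raw material for the sentiment tracking and threat identification described above.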

Collaboration between platform companies and policymakers has increased over recent years in many European countries, such as Germany (Miguel, 2021), Sweden and the Netherlands (Brattberg & Maurer, 2018). As Zuboff (2015, p. 86) points out, the boundaries between governments and platform companies in relation to surveillance have become blurred. Some, such as the European Commission during the 2019 European elections, have opted for the self-regulation of platforms. Others, such as the Netherlands during the 2021 election, have focused on national regulation (Wolfs & Veldhuis, 2023, p. 2). However, there remains much uncertainty about the relationship between national regulations and platform self-regulation (Meyer & Siebert, 2021).

Whether governments develop in-house applications or use ready-made products or services, monitoring social media presents new challenges for election observation. The vast volume of data can make it difficult to identify the most relevant information, and the diverse landscape of social media platforms adds further complexity to this task. Nevertheless, with the help of AI systems, the filtering and analysis of large volumes of data has become much more manageable.
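A minimal sketch of such automated filtering, assuming a simple keyword relevance score as a stand-in for a trained AI classifier, which real monitoring systems would use:

```python
def filter_relevant(posts, keywords, threshold=1):
    """Rank posts by how many monitored keywords they contain.

    A deliberately simple stand-in for AI-based relevance filtering:
    production systems use trained classifiers, but the principle --
    reducing a large stream to the most relevant items -- is the same.
    """
    scored = []
    for text in posts:
        lowered = text.lower()
        score = sum(lowered.count(k.lower()) for k in keywords)
        if score >= threshold:
            scored.append((score, text))
    # Highest-scoring (most relevant) posts first.
    return [text for score, text in sorted(scored, reverse=True)]

# Illustrative stream: only election-related posts survive the filter.
stream = [
    "New polling station opening hours announced",
    "Recipe for sourdough bread",
    "Claims of ballot fraud circulating before the election",
]
hits = filter_relevant(stream, ["election", "ballot", "polling"])
print(hits)
```

The same pattern scales from three posts to millions: automated scoring narrows the stream so that human analysts only review the most relevant material.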

References

Brattberg, E., & Maurer, T. (2018). Russian election interference: Europe’s counter to fake news and cyber attacks. Carnegie Endowment for International Peace. https://carnegieendowment.org/2018/05/23/russian-election-interference-europe-s-counter-to-fake-news-and-cyber-attacks-pub-76435

Clark-Schiff, S. (2021, January 25). Increasing transparency around US 2020 elections ads. Meta. https://about.fb.com/news/2021/01/increasing-transparency-around-us-2020-elections-ads/

Croll, A., & Power, S. (2009). Complete web monitoring: Watching your visitors, performance, communities, and competitors (1st ed.). O’Reilly Media, Inc.

Meyer, V., & Siebert, Z. (2021, September 30). Reducing disinformation and hate in election campaigns: How can we detox the debating culture? The Heinrich-Böll-Stiftung. https://eu.boell.org/en/2021/09/30/reducing-disinformation-and-hate-election-campaigns-how-can-we-detox-debating-culture

Miguel, R. (2021, September 24). The battle against disinformation in the upcoming federal election in Germany: Actors, initiatives and tools. EU DisinfoLab.

Schmuziger Goldzweig, R., Kirova, I., Lupion, B., Meyer-Resende, M., & Morgan, S. (2019). Social media monitoring during elections: Cases and best practice to inform electoral observation missions. Open Society Foundations. https://www.opensocietyfoundations.org/publications/social-media-monitoring-during-elections-cases-and-best-practice-to-inform-electoral-observation-missions

Shahbaz, A., & Funk, A. (2019). Freedom on the net 2019. Freedom House. https://www.freedomonthenet.org/report/freedom-on-the-net/2019/the-crisis-of-social-media

Twetman, H., Paramonova, M., & Hanley, M. (2020). Social media monitoring: A primer. Methods, tools, and applications for monitoring the social media space. NATO Strategic Communications Centre of Excellence. https://stratcomcoe.org/cuploads/pfiles/social_media_monitoring_a_primer_12-02-2020.pdf

Wolfs, W., & Veldhuis, J. J. (2023). Regulating social media through self-regulation: A process-tracing case study of the European Commission and Facebook. Political Research Exchange, 5(1), 1–23. https://doi.org/10.1080/2474736X.2023.2182696

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5