Electoral manipulation using amplification and de-amplification of content on social media
One of the main concerns of this blog until at least the 29th of May, when South Africa’s next national elections take place, will be the possibility [or maybe that should be ‘probability’] of electoral manipulation. One of the most famous recent cases of such manipulation is the alleged influence of Russian intelligence agencies on the outcomes of the 2016 and 2020 United States elections. That interference purportedly took place through the manipulation of social media content and algorithms.
All major studies agree that the United States is the world’s leading cyberpower. If the USA could not anticipate or defend against such influence, much smaller countries with almost non-existent cyber capability and very weak national intelligence capacity are likely to have little chance of doing so. And that is true of such efforts by any major cyberpower, not only Russia. In an article in 2023, I wrote the following:
Much reporting focuses on potential meddling by Russia, which is plausible though unproven. But it would be naïve and ahistorical to ignore the potential for interference by Western countries – not least when many important South African civil society actors currently rely on funding from institutions in Germany, the United States, the United Kingdom, and so on.
Sovereignty means being fundamentally independent of foreign influence from any country, not only from the countries you may dislike. I do not have a great deal of faith in any of our institutions, whether the State Security Agency, the Electoral Commission of South Africa or indeed the largest media groups, including the SABC, to protect South African democracy from such interference.
Here is a short excerpt from a comment I made at an event organised by the SABC (South Africa’s public broadcaster) and Media Monitoring Africa (which has partnered with the Independent Electoral Commission to combat misinformation). It was sponsored, somewhat awkwardly, by the German Embassy. The point I emphasise is that manipulation of election outcomes via social media is not only, or even primarily, about crude strategies like bot farms and fake content. The more sophisticated approach is to use control of such platforms (Twitter, Facebook, Instagram, TikTok, etc.) to amplify content that serves a particular agenda and ‘de-amplify’ content that does not.
For example, if you want to favour the Red Party, you amplify the tweets of people saying positive things about that party: someone with a few thousand followers might get a million views. If you want to suppress votes for the Green Party, you ‘de-amplify’ any positive tweets about it, making sure hardly anyone actually sees them. This can of course extend to other things, like efforts to get people to register to vote or to turn out on election day. If data from these platforms were publicly available, researchers could, in theory, check whether this was happening. Unfortunately such data is not public, and one of the first things Elon Musk did after taking over Twitter was to shut down the data feeds to independent research centres looking at such things.
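To make that mechanism concrete, here is a minimal sketch in Python of how a ranking step inside a feed algorithm could quietly apply such multipliers. Everything in it (the Post structure, the BIAS_MULTIPLIERS table, the rank_feed function and the numbers) is invented for illustration; it does not describe any real platform’s code.

```python
# Hypothetical sketch: a hidden per-topic multiplier inside a feed-ranking step.
# A multiplier above 1 amplifies a post's reach; below 1 de-amplifies it.

from dataclasses import dataclass


@dataclass
class Post:
    author_followers: int
    engagement_score: float  # likes, reshares, etc., however the platform measures them
    topic: str               # e.g. which party the post speaks positively about


# Invented multipliers: the bias lives entirely in this table.
BIAS_MULTIPLIERS = {
    "red_party_positive": 25.0,    # pushed far beyond organic reach
    "green_party_positive": 0.05,  # quietly buried
}


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by an adjusted score; users only ever see the final ordering."""
    def adjusted(post: Post) -> float:
        base = post.engagement_score * (1 + post.author_followers) ** 0.5
        return base * BIAS_MULTIPLIERS.get(post.topic, 1.0)
    return sorted(posts, key=adjusted, reverse=True)


if __name__ == "__main__":
    feed = [
        Post(author_followers=3_000, engagement_score=40.0, topic="red_party_positive"),
        Post(author_followers=500_000, engagement_score=90.0, topic="green_party_positive"),
        Post(author_followers=100_000, engagement_score=60.0, topic="neutral"),
    ]
    for post in rank_feed(feed):
        print(f"{post.topic}: followers={post.author_followers}")
```

The point of the sketch is simply that the bias sits in a single hidden multiplier: a small account pushed by it can outrank a far larger one, and from the outside nothing looks obviously wrong.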
What that means is that whatever manipulation does take place effectively happens in the dark. Countries like South Africa, and even some much more cyber-capable nations, have little hope of combating it. And there are reasons to believe that the platforms themselves cannot be trusted to help counter such interference by all foreign states (though they may help counter it by some).
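To return to the point about what researchers could do if reach data were public: here, purely as an illustration with entirely invented records and numbers, is the sort of elementary comparison one might run, looking at how far posts travel relative to the size of their authors’ audiences.

```python
# Hypothetical sketch: comparing reach across content categories,
# using invented (topic, views, author_followers) records.

from statistics import median

posts = [
    ("red_party_positive", 900_000, 3_000),
    ("red_party_positive", 450_000, 8_000),
    ("green_party_positive", 1_200, 250_000),
    ("green_party_positive", 3_500, 400_000),
    ("neutral", 40_000, 100_000),
    ("neutral", 15_000, 60_000),
]


def reach_ratio(views: int, followers: int) -> float:
    # Views per follower: a crude proxy for how far the algorithm spread a post.
    return views / max(followers, 1)


by_topic: dict[str, list[float]] = {}
for topic, views, followers in posts:
    by_topic.setdefault(topic, []).append(reach_ratio(views, followers))

for topic, ratios in by_topic.items():
    print(f"{topic}: median views per follower = {median(ratios):.3f}")
```

A large, persistent gap in reach between categories, after controlling for timing, content quality and the like, would be a red flag worth investigating. Without the data, of course, no such check is possible.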