The idea that social media bots and misinformation could influence our political conversation isn’t new. The phrase “election hacking” has been thrown around, and often misused, by news outlets and pundits to describe the use of bots and junk news on social media and their potential effects upon elections in the US, UK, Germany, France, and beyond. Throughout the US election campaign there were countless complaints of bots taking over Reddit, upvoting anti-Clinton articles and pro-Trump posts, and dominating some comment sections. This is unnerving in itself, but it becomes truly frightening with the suggestion that such an attack upon political conversation could not only take place, but could be organised by a state-sponsored group of actors looking to influence the results of a national election.
In the UK, a Parliamentary probe into ‘fake news’ asked both Facebook and Twitter to hand over information on Russian-linked advertising that may have been targeted at influencing voters in the Brexit referendum. Cambridge Analytica, a data analysis firm that worked on the Trump, Cruz, and Leave.eu campaigns (to whom it even offered free services), is just one of the many links between bots, targeted advertising, and fake news that purportedly sought to manipulate voters over the past two years. Damian Collins, a Conservative Party MP, wrote to Mark Zuckerberg “politely requesting” details on the Russian-linked adverts and accounts, including how much money was spent on ads, how many times they were viewed, and which Facebook users were targeted.
Scout.ai have written extensively about the impact that weaponised propaganda can have upon elections,
“By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion.”
On November 1st, Facebook, Twitter, and Google will all appear to testify before the Senate Intelligence Committee about attempts by allegedly Russian-linked accounts to use their platforms to influence the 2016 election. Ahead of the hearing, we decided to take a look at how Facebook, Twitter, and other tech giants are combating the manipulation of their platforms by bots and propaganda.
Having already concluded that ‘fake news’ was a serious problem on their network, Facebook recently enlisted PolitiFact, FactCheck.org, Snopes.com, the AP, and ABC News to help fact-check articles going out across the site. Users can flag stories as untrue, and a machine-learning algorithm trawls the site for stories that seem false and adds them to the queue; if two fact-checkers label a story as false, Facebook labels it as disputed. During and after the 2016 presidential election campaign, numerous pundits suggested that fake news was part of what fuelled the Trump charge. However, new information revealed by Facebook suggests that a far more subtle digital campaign was taking place.
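The dispute rule described above is simple enough to sketch in a few lines. This is purely an illustrative reconstruction, not Facebook’s actual system: the `Story` class, the checker names, and the threshold of two are assumptions drawn from the description above.

```python
# Hypothetical sketch of the dispute rule: a story is marked "disputed"
# once two independent fact-checkers rate it false. All names and the
# threshold are illustrative assumptions, not Facebook's real code.
from dataclasses import dataclass, field

DISPUTE_THRESHOLD = 2  # two fact-checkers must agree before flagging

@dataclass
class Story:
    url: str
    false_ratings: set = field(default_factory=set)  # checkers who rated it false
    disputed: bool = False

def record_rating(story: Story, fact_checker: str, rated_false: bool) -> None:
    """Record one fact-checker's verdict and update the disputed flag."""
    if rated_false:
        story.false_ratings.add(fact_checker)
    if len(story.false_ratings) >= DISPUTE_THRESHOLD:
        story.disputed = True

story = Story(url="https://example.com/article")
record_rating(story, "Snopes.com", rated_false=True)   # one verdict: not yet disputed
record_rating(story, "PolitiFact", rated_false=True)   # second verdict: now disputed
print(story.disputed)  # True
```

Requiring two independent verdicts, rather than one, is presumably what keeps a single mistaken (or malicious) flag from suppressing a legitimate story.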
Facebook admitted to congressional investigators that it had sold ads to a Russian company attempting to influence voters. Facebook accounts with alleged Russian ties bought around $150,000 worth of political ads aimed at American voters during key periods of the 2016 presidential campaign. Around 470 connected accounts were associated with roughly 3,000 ads, and according to Alex Stamos, Facebook’s chief security officer, the accounts were “likely operated out of Russia”. The vast majority of these ads did not specifically reference any party, candidate, or even the election itself; rather, they were designed to amplify “hot-button social and political issues, such as LGBT rights, race, immigration and gun rights.” Nearly 10 million Americans saw the ads on Facebook: roughly 1 out of every 12 people who voted in the presidential election.
Post-election, Facebook limited the News Feed reach of pages that consistently share stories with clickbait headlines, and blocked pages that repeatedly share fake news stories from buying ads. Many smaller or independent outlets have seen their traffic suffer as a result of the algorithmic changes, and there have been accusations of bias against alternative sources who perhaps don’t fit the mainstream narrative. Facebook also removed 30,000 fake accounts before the French elections in April and tens of thousands more before the United Kingdom’s snap election in June.
We discussed the use of social media bots in our podcast with Lisa-Maria Neudert of the Computational Propaganda Project below.
Rather than take the proactive approach that Facebook adopted to combat misinformation and fake accounts, Twitter were initially very reluctant to interfere with the content on their platform. In a blog post in June 2017, they reasoned that the very nature of Twitter gave users the opportunity to fact-check in real time, providing a constantly updating roster of facts and information that would outweigh any deliberate propagandising or ‘junk news’.
“Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information. This is important because we cannot distinguish whether every single Tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth. Journalists, experts and engaged citizens Tweet side-by-side correcting and challenging public discourse in seconds. These vital interactions happen on Twitter every day, and we’re working to ensure we are surfacing the highest quality and most relevant content and context first.”
Twitter claimed to have been expanding their teams and resources dedicated to monitoring and dealing with the use of bots,
“We’ve been doubling down on our efforts here, expanding our team and resources, and building new tools and processes. We’ll continue to iterate, learn, and make improvements on a rolling basis to ensure our tech is effective in the face of new challenges.”
They are attempting to tackle spam at its source, identifying the mass distribution of Tweets and the hashtag manipulation used to push certain topics to the top of the trending agenda. Twitter reduce the visibility of any “potentially spammy Tweets or accounts” whilst they conduct their investigations, and will take action against accounts that abuse Twitter’s public API to automate activity.
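The two signals mentioned above, mass distribution of identical Tweets and hashtag stuffing, lend themselves to simple heuristics. The sketch below is an illustrative toy, not Twitter’s actual detection system; the thresholds and function names are assumptions made for the example.

```python
# Toy heuristic (not Twitter's real system) for the two spam signals
# described above: repeated identical tweets and hashtag stuffing.
# The thresholds are illustrative assumptions.
from collections import Counter

DUPLICATE_LIMIT = 3  # assumed: identical texts posted this often look automated
HASHTAG_LIMIT = 5    # assumed: one hashtag repeated this often looks like manipulation

def looks_spammy(tweets: list[str]) -> bool:
    """Return True if a batch of tweets from one account looks automated."""
    # Signal 1: the same text posted over and over.
    texts = Counter(t.lower().strip() for t in tweets)
    if any(count >= DUPLICATE_LIMIT for count in texts.values()):
        return True
    # Signal 2: a single hashtag hammered across the batch.
    hashtags = Counter(
        word for t in tweets for word in t.split() if word.startswith("#")
    )
    return any(count >= HASHTAG_LIMIT for count in hashtags.values())

print(looks_spammy(["Vote now! #rigged"] * 4))  # True (mass-distributed text)
```

A real system would of course weigh many more signals (posting cadence, account age, network structure) rather than hard thresholds like these.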
In a more recent blog post, Twitter affirmed their continued desire to “strengthen Twitter against attempted manipulation, including malicious automated accounts and spam”. They claim to be continually improving their internal systems to detect and prevent spam and malicious automation (although the open API is used to encourage posting from other apps and games) and expanding their efforts “to educate the public on how to identify and use quality content on Twitter.”
After a closed Senate Intelligence Committee briefing earlier this month, Senator Mark Warner (D) described the information shared by Twitter as “inadequate” and “deeply disappointing”. He felt that their testimony “showed an enormous lack of understanding from the Twitter team of how serious this issue is”.
There are a number of ways that Twitter could crack down on bots more effectively. David Carroll, an assistant professor at the New School in New York, suggested that Twitter could deploy a bot-detection tool to help users identify automated accounts; scholars at Indiana University proposed that Twitter could require certain users to prove they’re human by passing a “captcha” test before posting; or Twitter could enable users to directly flag suspected bot accounts.
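One signal such a bot-detection tool might use is posting regularity: automated accounts often post at suspiciously even intervals, whereas humans are bursty. The sketch below is a toy illustration of that single signal, with an assumed threshold; it is not the tool Carroll or the Indiana researchers describe.

```python
# Toy bot-detection signal: machine-like regularity in posting intervals.
# The 5-second threshold is an assumption for illustration only.
import statistics

def regularity_score(post_timestamps: list[float]) -> float:
    """Standard deviation of gaps between posts (seconds); lower = more regular."""
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    return statistics.pstdev(gaps)

def probably_bot(post_timestamps: list[float], threshold: float = 5.0) -> bool:
    """Flag an account whose posting cadence is near-perfectly regular."""
    if len(post_timestamps) < 3:
        return False  # too little data to judge
    return regularity_score(post_timestamps) < threshold

# An account posting every 60 seconds on the dot looks automated:
print(probably_bot([0, 60, 120, 180, 240]))  # True
```

Real detectors such as those built by the Indiana group combine many features (content, network, timing) in a trained classifier rather than a single hand-set cutoff.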
Illusion of Truth Principle
The real-time corrective nature of Twitter and their recently introduced counter-spam measures sadly do not deal with the most chilling part of misinformation and ‘junk news’, nor do Facebook’s attempts to retroactively flag and fact-check information. Simply hearing a false statement can, after a few days, leave us thinking that the statement is in fact true, altering our very perception of reality. This is known as the ‘Illusion of Truth’, and the principle has shown us that people are far more likely to remember false statements as true than true statements as false.
A 1997 study by psychologist Graham Davey aimed to examine the negative effects of sensationalism in news programming. He found that negative (often exaggerated) stories increased anxiety and worry amongst test subjects. Furthermore, in the book Applied Social Psychology, studies were cited indicating that “heavier viewers of the local news are more likely to experience fear and be concerned about crime rates in their community than lighter viewers”. Whilst these theories were initially developed in studies of traditional media, it is difficult to discount their applicability to social media.
The investigation and re-evaluation of how culpable social networks actually are for the content they allow is ongoing, and sites like YouTube and Reddit are also attempting to deal with ‘undesirable’ content in their own ways. YouTube are controversially demonetising videos with content they deem to be controversial, whilst Reddit have changed their front-page algorithm to prevent manipulation by specific subreddits like r/The_Donald and even went as far as to ban discussion of Pizzagate.
Bad-actors, both independent and state-sponsored, have recognised the enormous power of the social platforms that consume so much of our daily lives and are now successfully using them to manipulate public opinion. The extent to which they influenced elections, the exact methods that they used, and the states or funders behind these campaigns remain obscured and may never be fully understood. Until we can effectively combat this modern form of information warfare and big data driven propaganda, the burden lies on each of us to understand where, what, and how we consume news, media, and information, lest we fall victim to those who seek to subvert our democracy.
If you enjoyed what you read here you can follow us on Facebook, Twitter, and Instagram to keep up to date with everything we are covering, or sign up to our mailing list here! If you want to hear more from us you can check out our podcast, Chatter, or subscribe to us on iTunes here.