Russian Bots, The US Election, And Trevor’s Axiom

South Park may be a cartoonish, poorly drawn parody of society, but more often than not it manages to cut through the noise and offer genuine insight into the way the world works. The clip above, from the season 20 finale, depicts an idea called “Trevor’s Axiom” (we can’t find a source for the concept, so we assume that Trey Parker and Matt Stone coined the phrase themselves). It is essentially a theory of online trolling, suggesting that extreme abuse is never meant simply to harm the individual at whom it is aimed; rather, the troll is trying to elicit a controversial reaction from someone jumping to the defence of the victim. The goal is to start a chain reaction of responses that stokes the stereotypes each side holds of the other in any given argument, enhancing the echo chamber and perpetuating the myth that “the other side” is completely crazy, delusional, and out of touch with reality.

The idea is partly based on concepts of bullying, whereby one individual can successfully divide societies and groups into defenders or attackers of a victim. This can happen in prisons, schoolyards, and offices, and it is now happening more and more in the digital space. David Graeber of The Baffler explains it like this,

“Just as a woman, confronted by an abusive man who may well be twice her size, cannot afford to engage in a “fair fight,” but must seize the opportune moment to inflict as much damage as possible on the man who’s been abusing her—since she cannot leave him in a position to retaliate—so too must the schoolyard bullying victim respond with disproportionate force, not to disable the opponent, in this case, but to deliver a blow so decisive that it makes the antagonist hesitate to engage again.”

The reactions have become so strong that any attack on the ideas or morals of one side is automatically interpreted as an attack on the very roots and identity of that group or individual, and anyone not with them must be against them. In other words, people get very easily ‘triggered’ and start to make strawman arguments on a massive scale. Every issue becomes a battle between left and right, Democrat or Republican, Labour or Conservative, and it is destroying our political conversation. Graeber argues that by being “triggered” we could actually become the perfect victims for bullies (or, in this case, for trolls),

“The ideal victim is not absolutely passive. No, the ideal victim is one who fights back in some way but does so ineffectively, by flailing about, say, or screaming or crying, threatening to tell their mother, pretending they’re going to fight and then trying to run away. Doing so is precisely what makes it possible to create a moral drama in which the audience can tell itself the bully must be, in some sense, in the right.”

This moral drama is played out a million times a day across Twitter, Facebook, and Reddit. Stoked daily by the self-perpetuating, gaslighting, clickbait, outrage culture that seems to increasingly consume our news cycle, we as a society are becoming ever more prone to this sort of trolling.

The idea that online trolls could influence our political conversation isn’t new; throughout the US election campaign there were many complaints of bots taking over Reddit, upvoting anti-Clinton articles and pro-Trump posts and dominating some comment sections. Where it takes a step into the truly scary is the suggestion that not only could this sort of attack on political conversation take place in the real world, but that it could be organised by a state-sponsored group of actors looking to influence the result of a national election.

This is exactly what has been uncovered by Facebook. Having already concluded that ‘fake news’ was a serious problem on its network, Facebook recently hired PolitiFact, FactCheck.org, Snopes.com, the AP and ABC News to help fact-check articles going out across the site. Users can flag stories as untrue, and a machine learning algorithm trawls the site for stories that seem false and adds them to the queue – if two fact-checkers label a story as false, then Facebook labels it as disputed. During and after the 2016 presidential election campaign, numerous pundits suggested that fake news was part of what drove the Trump charge. However, new information revealed by Facebook suggests that there was a far more subtle digital campaign taking place.
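The review flow described above can be sketched as a simple decision rule. To be clear, Facebook’s actual pipeline is not public; the function, thresholds, and field names below are purely illustrative assumptions.

```python
# Illustrative sketch of the dispute-labelling flow described above:
# user flags or a classifier put a story in the review queue, and two
# independent "false" verdicts from fact-checkers mark it as disputed.
# All names and thresholds here are hypothetical.

def review_story(user_flags, model_score, fact_checker_verdicts,
                 flag_threshold=10, score_threshold=0.8):
    """Return 'ok', 'queued', or 'disputed' for a story."""
    # A story enters the review queue via user flags or the classifier.
    queued = (len(user_flags) >= flag_threshold
              or model_score >= score_threshold)
    if not queued:
        return "ok"
    # Two independent fact-checkers labelling it false => disputed.
    false_votes = sum(1 for v in fact_checker_verdicts if v == "false")
    return "disputed" if false_votes >= 2 else "queued"
```

For example, a story flagged by a dozen users and marked false by two of three fact-checkers would come back as "disputed", while a queued story with only one "false" verdict stays in the queue.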

Facebook admitted to congressional investigators that it had sold ads to a Russian company attempting to influence voters. Facebook accounts with alleged Russian ties bought around $150,000 in political ads aimed at American voters during key periods of the 2016 presidential campaign. Around 470 accounts were connected with roughly 3,000 ads, and according to Alex Stamos, Facebook’s chief security officer, they were “likely operated out of Russia”. The vast majority of these ads did not specifically reference any party, candidate, or even the election itself; rather, they were designed to amplify “hot-button social and political issues, such as LGBT rights, race, immigration and gun rights.”

Many of the ads in question ran before the primaries had concluded, and many experts have dismissed the $150,000 spend as minimal. It pales in comparison to the $55 million spent by one of Hillary Clinton’s primary digital firms and the $90 million spent by Donald Trump through his main digital adviser, Brad Parscale.

Post-election, Facebook implemented limits on news-feed distribution for pages that consistently share clickbait headlines and blocked pages that repeatedly share fake news stories from advertising. Many smaller or independent outlets have seen traffic suffer as a result of the algorithmic changes, and there have been accusations of bias against alternative sources that perhaps don’t fit the mainstream narrative. Facebook also removed 30,000 fake accounts before the French elections in April and tens of thousands of accounts before the United Kingdom’s snap election in June.

The New York Times (in conjunction with FireEye) has recently revealed that on Twitter (and on Facebook) thousands of suspected Russian-linked accounts used the platforms to spread anti-Clinton messages and promote leaked material. Many of these were bots which, according to FireEye researchers, put out identical messages seconds apart, in the exact alphabetical order of their made-up names. For example, on Election Day they found that one group of Twitter bots sent out the hashtag #WarAgainstDemocrats more than 1,700 times.
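The pattern FireEye describes – identical messages, seconds apart, senders in alphabetical order – is distinctive enough that a crude heuristic can flag it. The sketch below is our own illustration, not FireEye’s method; the data shape and thresholds are assumptions.

```python
# Hedged sketch of a heuristic for the pattern described above:
# groups of accounts posting identical text seconds apart, with the
# senders appearing in alphabetical order of their made-up names.
# Input format and thresholds are illustrative assumptions.
from collections import defaultdict

def suspected_bot_clusters(posts, max_gap_seconds=10, min_size=3):
    """posts: list of (username, unix_timestamp, text) tuples.
    Returns lists of usernames that posted identical text in quick
    succession AND in alphabetical order of their names."""
    by_text = defaultdict(list)
    for user, ts, text in posts:
        by_text[text].append((ts, user))
    clusters = []
    for entries in by_text.values():
        entries.sort()  # chronological order
        users = [u for _, u in entries]
        gaps_ok = all(b[0] - a[0] <= max_gap_seconds
                      for a, b in zip(entries, entries[1:]))
        if len(users) >= min_size and gaps_ok and users == sorted(users):
            clusters.append(users)
    return clusters
```

Real detection systems would of course use far richer signals (account age, posting cadence, network structure), but even this toy rule shows why such mechanical behaviour is traceable.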

Post-election, Facebook has been stepping up its attempts to combat bots and fake news, whilst Twitter has become increasingly corrupted by them. Former F.B.I. agent Clinton Watts told the New York Times that this “bot cancer” is “eroding trust on their platforms”, although Twitter maintains that the platform’s “open and real-time nature is a powerful antidote” to fake news and misinformation.

Bots can be hard to trace and are becoming ever more realistic as technology improves and machine learning allows them to better mimic human behaviour online and evade detection. Ben Nimmo, a senior fellow in information defense at the Digital Forensic Research Lab at the Atlantic Council in Washington, believes that bots are incredibly effective at bringing ideas from the fringes of society into the mainstream through a combination of hashtags, memes, spamming, fake news, and blog links. He believes these bots are becoming more influential as more and more people realise their power,

“People have woken up to the idea that bots equal influence and lots of people will be wanting to be influencing the midterms.”

Whether Russian or otherwise, these bots and trolls are having a massive influence on our political conversation. By provoking and gaslighting, they can magnify their message a thousand times over, and since the new Facebook algorithm seems to be based on comments and reactions, it is the posts and articles that trigger and provoke outrage that will increasingly find their way to the top of our news feeds and, unfortunately, into our collective psyche.

If you enjoyed what you read here you can follow us on Facebook, Twitter, and Instagram to keep up to date with everything we are covering, or sign up to our mailing list here! If you want to hear more from us you can check out our podcast, Chatter, or subscribe to us on iTunes here.
