By Francisco Santos
Though it was not their sole cause, the influence social media had on the summer riots that took place across more than 20 U.K. cities is undeniable, hence the importance of identifying and avoiding misinformation, and of teaching others to do the same.
After the shocking Southport knife attack, the general public went online for news and updates but were instead met with misinformation fuelled by anti-immigration, racist and Islamophobic rhetoric, peddled by countless accounts including that of English Defence League (EDL) founder Stephen Yaxley-Lennon, who also goes by Tommy Robinson.
From X (formerly Twitter) to Telegram, threats turned into action, and within a week several riots had been organised and were taking place across the U.K., leaving many to wonder how things had escalated to this point in the first place.
When asked to what extent he believed social media actually influenced the actions of the rioters, Dr Geoff Walton, a senior lecturer at Manchester Metropolitan University (MMU) with a PhD in Information Science, said: “Social media tends to amplify the spread of misinformation, both in terms of speed and reach. My feeling is that there were those already intent on some form of action – perhaps quite a small number – they would have turned up anyway.”
“What social media looks like it is influencing is those that might not have ‘normally’ become involved but were drawn in because of the misinformation and accompanying information about where and when to assemble for the ‘protest’. I suspect there was a spectrum here of those who are clearly sympathetic to the EDL, ‘casual’ racists ‘fired up’ by the misinformation, those just intent on mischief and those ready for violence.”
Dr Walton refers to these social media spaces where misinformation is spread as echo chambers: places where people constantly validate their biases and reinforce their false beliefs.
He adds: “Confirmation bias is a strong aspect of our cognitive capabilities and not something of which we are always aware. Add to that our default position of truth and a toxic mix can arise. So, the problem is that some people will have developed a racist worldview, their confirmation bias fuels it and misinformation adds even more perceived ‘legitimacy’ to their beliefs. Others, through a default position of trust might develop racist views by trusting misinformation such as the various anti-immigration themes.”
Taken together, these points make combating this type of misinformation seem an insurmountable challenge. The senior lecturer, however, has a brighter outlook.
Explaining how to avoid misinformation, he says: “The good news is you can shift the way people engage with misinformation to begin to recognise when they are encountering it. The problem is that, to think ‘critically’ about the information and misinformation one is encountering, it takes more cognitive effort than to simply trust.
“Our research has shown that raising people’s ability to make well-calibrated judgements can be enhanced, arguably, relatively easily achieved in an academic setting because good quality information is well-defined and findable in this context, plus it is moderated by the community of academics – the issue is how to translate that into the completely different complex unmoderated everyday contexts. Our research indicates what is needed is something different.”
Dr Walton goes on to list the necessary ‘ingredients’ that he and his team developed in their British Academy-funded project on training people to avoid misinformation. The training, he says, should be: personalised to a person’s context; conversational and informal; collaborative rather than patronising; practical rather than theoretical; and delivered so that facilitators give learners an incentive to think critically.
He also mentions inoculation theory, developed in 1961 by psychologist William J. McGuire, its name a deliberate vaccine metaphor because the theory works in much the same way. It proposes that, just as you can give someone a weakened version of a virus to boost their immune system against it, you can give people a weak version of an argument, not only strengthening their resistance to that argument but also counteracting stronger, more persuasive versions they may meet later.
When the theory is applied to misinformation, research shows that interactive methods can train people to better identify it.
He adds: “These techniques don’t work on everyone because their worldview is too fixed. There are also some pitfalls that are quite tricky to avoid. The major one is that sometimes you can cause people to become cynics rather than sceptics – where they refuse to believe anything.
“Another difficult one is that people get the idea in one context but are unable to transfer their knowledge to a new one – for example, from education to shopping, when someone sees a bargain online which is clearly too good to be true but falls for the scam anyway because it is seductive.”
Learning how to avoid misinformation is, in a way, a tool everyone has access to but one that often goes unused. And as new technologies develop, in particular AI powered by large language models (LLMs), new fears about false information arise.
Discussing this, Dr Walton said: “Generative AI brings some real tangible benefits but has some very clear and problematic limitations. I’ve already seen in students’ assignments the presentation of misinformation based on the use of ChatGPT. The software created non-existent citations for ‘evidence’ to support a claim.”
He gives a final statement on what to expect in the future: “We cannot simply turn the tide back, as the story of King Cnut demonstrated many years ago; we have to embrace, adapt and work out how to adopt it so that it is a tool we can use positively. The development of AI in the health field is looking especially promising, especially around the issue of diagnosis.”