Disinformation

I gave a talk about disinformation at Conversation, the Eastercon held in Birmingham in April 2023. Inevitably, I didn’t quite cover everything that I’d prepared, despite practising and timing it! Anyway, here are my prepared notes covering what I spoke about, plus a little extra.

Slide 1: How to spot disinformation and help stop its spread

Disinformation is rife and threatening democracies. This presentation looks at practical methods everyone can use to identify it and help to curb its spread.

I define what disinformation is and isn’t, and talk about some of the similar problems confronting modern democracies.

I’ll talk about a few easy ways everyone can use to spot disinformation and take steps to stop it spreading.

There will be time for questions and possibly even comments at the end. I am autistic and recovering from burnout, which is why I’m asking for questions and comments to be held until the end. Please do feel free to write down questions, if preferred, or to add them to the online chat. If I could please have a volunteer to read them…

Before we begin, a quick advisory: I will refer to quite a few examples, many of which involve morally reprehensible behaviour. I will try to avoid details that could be distressing, and I apologise in advance if I fail. A few topics I’ll mention are complex and nuanced, and I won’t have the time to go into every detail.

First, though – why should you trust what I’ve got to say on the topic?

Slide 2: About me

I was an intelligence analyst in Australia (1999-2004) with the now-defunct National Crime Authority and the still-operating NSW Police; and in the UK (2005-2019) with the National Crime Agency (and its precursor agencies: the National Crime Squad of England and Wales, and the Serious Organised Crime Agency). As part of my job, I taught intel analysts and officers in a range of agencies about disinformation, misinformation, deception, etc. I completed two overseas secondments for separate projects, one in the Netherlands with the Dutch police, and one in the USA with the FBI.

In a moment, I’ll introduce you to the basic concepts of intel analysis that are relevant to this talk. Note that I left the UK civil service in September 2019 to pursue a writing career. Nothing I say in this talk is from secret or privileged information.

Finally, I wrote a novel about disinformation and its role in destroying democracies like that in the UK. It’s called The Disinformation War and will be out on 13 June this year from GoldSF. Pre-order links are in the online chat, or will be soon after this talk. I’ll also put other works I refer to in the online chat.

So, what is intelligence analysis?

There are tonnes of books and articles that try to define just what intel analysis is, and none of them fully agree. That’s a symptom of a relatively new academic field, and the nuances depend on what type of intelligence analysis we’re talking about.

For most of my career, I worked in the criminal justice arena, and was predominantly focussed on organised crime. I worked occasionally in counter-terrorism and national security, and worked alongside military intel a few times.

For the purposes of this talk, I’m drawing from Michael Herman’s definition. He wrote extensively about the topic from a national security/warfare point of view. One of his major publications on the topic was Intelligence Power in Peace and War, Cambridge: Cambridge University Press, 1996. I like his definition in that work because it’s one of the few that can be applied across the wide range of uses to which intel analysis is put.

Basically, intelligence is assessed information about adversaries of some kind (e.g. enemy countries, rival companies, criminal enterprises, terrorists…) for the purpose of aiding leaders in making decisions about what to do in the face of a threat. Most often, those leaders will be military officers, company directors, senior police officers, politicians in government, etc.

Intelligence assessment is about anticipating events: is the adversary about to attack? If yes, how? How much of a threat are they? And how well do we know all this?

That last bit is vitally important. Twenty years ago, the USA, UK, Australia, and other countries invaded Iraq, using the claim that Iraq had weapons of mass destruction as justification. The invasion has been the subject of several enquiries in various countries; the gist is that the decision to invade was made on the strength of “single strand intel” that was considered weak at the time. At the time I believed the military action to be morally reprehensible, and I still do; for this talk, the key point is that certain politicians obfuscated their reasons for invading Iraq by blaming the intel assessment process, and some of them still do.

One thing that marks intel as different from other types of research and analysis is that the information informing the analysis is often covertly gathered – that’s the job of spies, informants, and technological spyware. It’s kept secret to protect the people who provide the information, as well as the methods. Because the stakes are often high – and we are talking about life and death here – people who want to prevent that information reaching others will kill and destroy to stop it.

The sourcing of information is where knowledge and understanding of disinformation and deception are vital for intelligence analysts. Uniquely, I think, intelligence analysis is most at risk from deliberately dodgy information – counter-espionage is the main example. Intelligence analysis is also at risk from incomplete information and contradictory information, especially in rapidly developing situations that are highly pressurised – again, we are talking about life and death decisions.

To note, intel work does not rely exclusively on secret sources – the picture would be unnecessarily incomplete if it did.

Like I said, my career dealt mostly with organised crime, so the deceptive practices I considered involved efforts to hide criminal activity, including money laundering schemes.

Fraud was another area that I needed to know a lot about. There does seem to be a bit of an overlap between fraudulent activity – especially grifting – and the types of disinformation used in global politics over the last few years. Handily, a few of the techniques to identify and deal with disinformation are the same as those for fraud.

So, what is disinformation? Is it just deception?

Slide 3: What is “disinformation”?

The USA’s National Endowment for Democracy describes disinformation as the purposeful use of misleading or manipulative information to subvert political discourse and confuse, divide, or otherwise negatively influence the public. That accords with many other definitions in use. So, no, it’s not just deception…

Disinformation can include “fake news”, “post-truth”, “conspiracy theories”, propaganda, deception, “gaslighting”, fraud, etc., etc., but the key thing to know is that it is deliberately using deception to divide, subvert and confuse political discourse – it is fundamentally anti-democracy. Our system of government relies on a level of honesty for people to make informed decisions about who governs. It’s not perfect by a long shot, even when it works, but it’s much better than autocracies and totalitarian regimes.

Distrust creates chaos to hide power grabs. One recent example from the USA involved US Senator John Fetterman of Pennsylvania. US journalists Dan Rather and Elliot Kirschner wrote that “during the last election season, concerns about Fetterman’s physical health were widespread as he publicly navigated the difficult rehabilitation of a stroke victim. His Republican opponent, the TV doctor Mehmet Oz, made questions about Fetterman’s health a major line of attack.” At the time, at least some of the claims made by Mehmet Oz about his opponent’s health were identified as “misinformation”. This strategy failed: Fetterman won that election. But at what cost? How much might Oz’s incorrect statements “stick” because of his status as a “doctor”? (This was cited in Dan Rather and Elliot Kirschner’s Substack feed, called Steady.)

Distrust and confusion enables and plays on “conspiracy” mindsets. If you want to learn more about that aspect of this topic, then do consider attending the panel on “Cults and conspiracy theories”.

Certainly, politicians can and do use some of the themes that turn up in conspiracy “theories” to sow doubt and distrust. Recent US examples abound, such as the interplay between the Republican party and Alex Jones and his Infowars operation, which peddled notable conspiracy theories (Jones has been found liable for defamation over his claims concerning the victims of the Sandy Hook mass shooting). Another conspiracy theory with overtly political connotations, spread by Jones (and others) during 2016, claimed that US presidential candidate Hillary Clinton was involved in running a child sexual abuse ring out of a basement in a Washington DC pizza place. The pizza place concerned does not have a basement, which should have been the easiest way to disprove the claim. The lies that proliferated through various social and other media and rumour-mongering led to a guy going to the pizza place, armed to the teeth… Rolling Stone magazine researched the whole story and pieced together how an extensive ecosystem operated, with both Russia and the Trump campaign exploiting it for their own political ends.

By the way, it’s not only the “right wing” of politics that uses these tactics; it’s just that they’ve provided some graphic examples of it recently.

It’s also important to understand that everyone is susceptible to being hoodwinked. If something is said that we agree with on an emotional level, it can sweep us into believing what we might not usually believe.

Remember – if something is too good to be true (like you’ve won a million pounds in a lottery you never entered) then it’s not likely to be true.

Disinformation is also used to dehumanise and demonise groups of people (related to propaganda), which enables genocide, mass murder, and violent attacks. It’s often emotional, appealing to fear, and it usually plays on old prejudices.

The techniques are similar to those used by abusers – DARVO (Deny, Attack, and Reverse Victim and Offender).

Key thing is that it’s deliberate and has a purpose… even if that purpose isn’t always clear.

Misinformation is often used as a synonym. The USA’s National Endowment for Democracy usefully splits out the definition: misinformation is the inadvertent or accidental spreading of mistruths.

Why care about whether it’s deliberate or a mistake? The damage is the same…

It’s about identifying where it’s come from and how to tackle it best.

Most of us are likely to encounter disinformation and misinformation online in social media spaces. If we attempt to confront an account spreading disinformation they are unlikely to stop – they are too invested in what they are doing. They might be employed to do it, they may be soldiers under orders. They might even be a “bot”. If, however, we confront someone who has accidentally boosted a mistruth, there is more of a chance for them to correct the mistake. No guarantees, though.

Oh, and do be aware of how social media sites work in terms of algorithms, etc., and be aware that highlighting something bad also spreads it.

Note: proving someone has told a lie is incredibly hard, especially to legal or semi-legal standards. In the UK we’ve seen a lot of this in recent times in terms of parliamentary standards. One example is former PM Johnson and whether or not he knew he broke Covid-19 restrictions issued by his own government… I’m not going to get distracted by the details of that, but you may have noted that journalists and fact checkers are careful about what words they use to describe the former PM and his statements. In many cases, that’s not because they’re defending him, but because of potential legal ramifications.

So, how do you work out whether something is disinformation or misinformation? Note that this matters if you want to do something about a piece of wrong information. There is always the option to block and move on if it’s on social media, or to delete and ignore if it’s an email/text message, or whatever suits the medium where you’ve encountered it.

Slide 4: Deception Detection

There are a few ways to do so but this method is one of the easiest and doesn’t involve tech. It’s basically four ways to look at a piece of information with questions to prompt whether or not someone is spreading disinformation or misinformation.

Full credit to Richards J. Heuer, Jr. and Randolph H. Pherson, Structured Analytic Techniques for Intelligence Analysis, CQ Press, 2011. I’ve adapted a technique in that book for this.

First up: Motive, Opportunity, Means

Ask yourself what the goal or goals of a potential deceiver might be. What’s their motive for spreading disinformation? If it’s to destabilise or dehumanise, then you’re probably right in assuming it’s disinformation. If it’s more likely to be a genuinely held belief, even though it’s factually wrong, then maybe it’s misinformation.

Is the information coming from someone who might be seizing an opportunity to sow discord? Are they leaping onto a social media trend? Is their timeline obsessed with the topic, and how have they engaged with other people who may have challenged them? Skipping over or ignoring factual challenges indicates that they’re more likely to be intent on destabilising and/or dehumanising.

What means are available to the potential deceiver to feed information to us? Most of us are on social media platforms, but few of us – any? – are on all of them. Most of us would have one account on each, possibly two, depending on our own life circumstances. I’d question the motives of people who have set up multitudes of accounts ready to go at any moment to replace “banned” accounts… They are fairly easy to spot, but the details depend on each platform.

Past Operational Practices

This is about looking at the history of the person sharing the information. On social media, take a quick look at the timelines – those that seem obsessed and repetitive are more likely to be set up for spreading disinformation. Those that show interest in a range of topics and engage with others are more likely to be acting with different motives.

Disinformation doesn’t just use social media, even if that’s where most of us would encounter much of it. Mainstream media – aka commercial or professional news organisations – are a potential source of disinformation or misinformation.

Three things to remember:

1. Disinformation is deliberate and the intent is to destabilise and/or dehumanise. There are news media organisations that engage in propaganda that’s indistinguishable from disinformation, and it’s not just one side of politics that does it. Not all of their content is dubious, but there is enough to cause problems…

2. A tactic used by autocratic politicians is to accuse all mainstream media of engaging in disinformation. The intention is to destabilise genuine news media. Genuine news media are essential for liberal democracies to function.

3. What’s the difference between genuine news media and propaganda outlets? Genuine news media do make mistakes but they’ll fix it when it’s called out. They won’t be afraid of speaking truth to power. Truth is the key word there.

Manipulation of Sources

Things to consider here include whether or not the source of information is vulnerable to control or manipulation by those who desire to destabilise or dehumanise – that is, engage in disinformation.

Think about who owns mainstream and social media organisations, their stated aims for their platform, how the platforms are run, and what sort of checks and balances exist.

What is the basis for judging the information broadcast to be reliable? Are they repeating verbatim a political line known to be untrue?

How good is their track record of reporting? Are they constantly challenged through checks and balances and having to make corrections?

Has anything changed recently that affects their output? A different owner, for example.

Evaluation of Evidence

How accurate is the reporting? Think about this phenomenon: you are familiar with a topic, you read a report about it in a newspaper – even a reputable one – and they get basic facts wrong. Are the errors within or outwith tolerable levels?

What’s their reputation for fact checking? Have translations and transliterations been checked?

What is the balance of opinion and factual reporting in the news media outlet? If it carries more opinion than factual reporting, then it’s unlikely to be a trustworthy source of information.

A question for social media posts – how is the message being pushed? As a rule of thumb, be careful of messages not from established news services claiming that something is “BREAKING”… and that demand urgent pushing… There’s an anti-fraud message that’s great for this, too: Take Five. Also think about whether the information is confirming a bias you might have.

Do other sources of information provide corroborating evidence? Or does it conflict with most other information sources? It’s not a foregone conclusion that more sources saying the same thing makes it right… Of more interest are the people who warn about particular information sources – but, again, be careful and think about their agendas, too. Yes. This is hard.

Is any evidence one would expect to see noteworthy by its absence? Think of the Holmesian dog that didn’t bark in the night.

Slide 5: Images

“A picture tells a thousand stories…” and pictures can be and are manipulated to tell all sorts of stories. Disasters and wars generate loads of these, and their purpose is often to confuse the situation and provoke emotional responses.

If in doubt then don’t spread them further. A clue is that social media posters will push them as urgent, much like fraudsters will claim that action has to be taken NOW.

There are some neat tech solutions freely available online. I’m not going to go through a list, but do be wary about their odd quirks. If you want to check a picture then perhaps use a few of these tools.

Tech isn’t the solution to everything. There are often clues in an image that might point to it not being what it is purported to be. If there’s been image manipulation, watch for shadows in the wrong places and odd matching of elements. It depends on how good the manipulation is, of course, but a lot of it isn’t great. Also check details – signs in the background might be in the wrong language or colours, or might help date the image. Flags have been a giveaway, as have various landmarks. And check what people are wearing – I’ve seen pictures purporting to be taken in winter that looked like summer scenes from what people in the background were wearing.

Tech can help identify certain “tells”, but we can’t rely on tech to solve all the problems. It’s easy, too, to think of the problem as a purely tech one. Mike DeVito, a graduate researcher at Northwestern University, said this in a 2017 Pew Research Center report:

“These are not technical problems; they are human problems that technology has simply helped scale, yet we keep attempting purely technological solutions. We can’t machine-learn our way out of this disaster, which is a perfect storm of poor civics knowledge and poor information literacy.”

Slide 6: Remember…

All people can be misled or deceived – if something is “too good to be true” it usually is a fraud or scam, or untrue.

Pay attention to cautions raised by others who know the source of information.

Never rely on one source of information over all others, especially if it always confirms your views.

Seek out other ways to corroborate or verify information. “Take five”, especially if being rushed to act.

Check if there is a pattern or history of a source being initially credible but then found to be giving wrong information – has that pattern emerged elsewhere?

Slide 7: Questions first, then comments

Q: I believe that much of the disinformation in the political sphere contains "information" that targets emotions, rather than people's intellects. For that reason, trying to counteract disinformation with logical arguments or accurate information can be ineffective or even backfire. Do you have any tips for how to reach people who've had their buttons pushed, emotionally, by disinfo?

A: This is not my area of expertise, but I have seen a few studies supporting what you've said about logical arguments and counter-providing factual information not working. A lot of it is because it's "heat of the moment" and emotional, not to mention "clickbait". I understand that techniques used by agencies to "deprogram" people caught up in cults can help – this all needs to be considered carefully and on a case-by-case basis. For the most part, and when we don't know the person concerned, on social media it's best to block/mute/report if posts breach terms and conditions rather than engage. (A later comment in the online chat suggested to boost factual information on social media rather than try to put down the disinformation through responding to posts, which I agree with.)

Slide 8: A few final words

In my novel, The Disinformation War, one of the characters keeps a journal. In one entry, the character writes:

“We are fighting a war, which is about ideas. Healthy ideas vs toxic ideas. We can be strong with our ideas, because they’re healthy and vital. As in, ideas that will make life flourish. We don’t have to lie like the liars do because of all that. Our enemies are scared of that, and they want us to die along with our ideas. Disinformation, lies and propaganda erode trust. Trust underpins liberal representative democracies, and any worthwhile human relationship. Disinformation, lies and propaganda are used in war/ conflict to destabilise and dehumanise the enemy. Lies linger, long after the fight.”

Thank you.

© 8 April 2023 and 16 May 2023