05 Jul

On 17 May, a workshop on "Big Disinfo" took place at the University of Manchester. Reflecting on Joseph Bernstein's 2021 article in Harper's magazine, the workshop participants applied their own research perspectives and findings to current debates about "Big Disinfo". The workshop brought together invited experts and researchers from two current AHRC-funded research projects: Everything is Connected and our own (Mis)Translating Deceit. The workshop was organised and funded by the Democracy & Trust cluster of the Centre for Digital Trust and Society, part of the Digital Futures research platform at the University of Manchester.

After the event, the workshop participants were asked to provide short summaries of their presentations and/or share their post-workshop observations. We provide them below, keeping the original style and format chosen by the authors.


Vera Tolz, from the (Mis)Translating Deceit project, focused first on the problems that arise when scholars (who are central to the ‘industry’ that Bernstein identifies) shape their conceptual apparatuses around the uncritical use of terms coined by practitioners and political actors (e.g. disinformation, misinformation, fake news, information war). Particularly when these terms are developed as loosely defined, maximally capacious tools with which to assail political foes (disinformation, fake news), they become distorted by accusatory, ‘othering’ functions which render them non-conducive to the precision, self-reflexivity and appreciation of complexity essential to academic study. Endorsing Bernstein’s concern that the phenomenon of disinformation as an accusatory discourse has generated something resembling a moral panic, Vera also linked this to the fact that traditional knowledge gatekeepers (mainstream media; elected politicians) are losing the power to define that knowledge to publics newly empowered by high levels of education and unprecedented access to information. She went on to stress the importance of placing such panics in historical perspective, arguing that fears of the threat to human behaviours posed by new technologies, and of the effects of hyper-partisan propaganda on democracies, have surfaced throughout the 20th century. This historical legacy, Vera suggested, offers these lessons for contemporary disinformation scholars:

  • Societal views on the nature of truth and knowledge and on who is best equipped to define them change historically.
  • From the advent of the printing press onwards, new technologies have spawned panics around the dangers posed to publics by incorrect or manipulated information.
  • Researchers have long disagreed over whether publics can make rational judgements about news and information. Media literacy courses were proposed in the US in the 1930s and their effectiveness has been debated ever since.
  • The availability of significant funding for disinformation/propaganda research (e.g. at the start of the Cold War) has not always benefited academic work. In the 1950s, government-funded scholars tended to exaggerate their ability to develop methods for ‘protecting’ publics from ‘hostile narratives’.


Steve Hutchings, the PI of (Mis)Translating Deceit, identified further corollaries to Bernstein’s diagnosis:

  • Noting that the unprecedented new deception techniques facilitated by digital innovation have been mirrored by the emergence of equally powerful forensic monitoring tools, he suggested that disinformation has thus served as a testing ground for proving the efficacy of big data processing programmes. The problem with such programmes is that they depend largely on the reification of their object of analysis as vast quantities of non-facts to be counted, or whose toxic spread is to be tracked. This overlooks the reliance of disinformation on integration within complex meaning-making structures and systems, rather than on merely negating facts. It also ignores how the meanings around facts, and their status as facts, or falsehoods, are contested and mutable across time, space, language, platform, media genre and other borders.
  • He continued by pointing out that we should pay attention not just to counter-disinformation’s incestuous relationship with the neoliberal economy as indicated by such quantitative biases, and as Bernstein argues, but to its contradictory articulation with liberal democracy. This affects how it is carried out, explaining why there are so few examples of fully and openly state-run units, and why the preferred model is the semi-autonomous NGO whose multiple funding sources (including some governmental), and emphasis on participatory citizen involvement and crowdfunding, signal an adherence to the democratic ethos. This model, however, leads to unstable resourcing; a related tendency to inflate disinformation counts to ensure continued support; inconsistent access to the very professional expertise that liberal democracy values; the outsourcing of monitoring to actors with their own political agendas; and the covert masking of state support within a performance of democratic ‘transparency’. Thus, disinformation tracking, a valuable, necessary activity, must seek to resolve its tense relationship with both neoliberalism and liberal democracy.
  • Further interrogating Bernstein’s assertion that ‘Big Disinfo’ has left key epistemological issues unresolved, Steve acknowledged that positivist, fact-centred approaches to disinformation are now widely questioned within the ‘industry.’ He argued, however, that Cold War-inflected true/false absolutes remain influential. An example is the recent redefinition of disinformation (i.e. negated truth) in terms of ‘adversarial narratives’ ascribed to extra-democratic actors (as though adversarial narratives can be designated ‘false’, and as though they are not also inherent to democracies). Steve advocated greater recognition of truth’s complexity and plurality, and of its tense relationship to politics in all environments, including democratic ones.


Focusing specifically on scholarly approaches to disinformation, Sabina Mihelj, a core member of the (Mis)Translating Deceit project from Loughborough University, discussed the prevailing trends and gaps in research on the effects of disinformation and misinformation. She noted that such research constitutes a relatively limited part of the overall body of work on the topic, and typically focuses on the direct effects of exposure to disinformation on individuals, and specifically on factors that can aggravate or mitigate those effects, without necessarily providing a clear sense of the nature, magnitude, or gravity of the problem. She also highlighted the growing awareness of the importance of studying second-order effects of disinformation, including the effects of accusations of disinformation and the potential negative societal effects of widespread concern about, and media coverage of, disinformation activity on public trust in authoritative sources of knowledge and in democratic institutions.


Maxim Alyukov, an associate member of (Mis)Translating Deceit from the University of Manchester, approached the issue of current practical approaches to disinformation, highlighting the need for a more balanced approach. While effective counter-disinformation practice is likely to encompass a wide variety of measures, one specific issue which consistently emerges in research is the overemphasis on debunking false information, which works against the need to ensure a balance between refutation and confirmation. On the one hand, Maxim argued, there is a growing body of evidence suggesting that debunking and emphasising the threat of disinformation and misinformation can have unintended adverse effects, increasing overall scepticism, undermining the credibility of reliable sources, and bolstering support for repressive legislation. Research in this area has not yet reached the stage where reliable generalizations can be drawn based on meta-analyses of numerous studies, as is possible in other fields, such as research on the effects of fact-checking. However, scholars are likely to reach this point soon. On the other hand, authoritarian governments in Russia, China, and East Asia increasingly adopt counter-disinformation strategies from democracies for propaganda purposes. State-sponsored media use fact-checking elements to discredit criticism of the government as fake. These authoritarian counter-counter-disinformation tactics reduce the credibility of reliable information and polarize audiences, making people more defensive and resistant to political information they dislike.

Maxim went on to emphasise a paradox. Over the past decade, scholars have stressed critical thinking in their approach to state media influence in autocracies. The initial idea was that propaganda suppresses critical thinking, so citizens should become more critical to undermine trust in propaganda. However, today this notion has been appropriated by propagandists themselves. For instance, regime supporters who consume propaganda in Russia often stress the importance of critically analysing information, while regime critics admit they rely on independent sources they trust, delegating the task of verifying information to these sources. Maxim concluded that refutation is unavoidable, but it is equally important to build trust in reliable sources. Focusing solely on countering disinformation could lead to a self-reinforcing cycle in which democracies debunk disinformation disseminated by autocracies, which in turn debunk information endorsed by democracies, fostering polarization and leaving publics questioning reliable sources.


Neil Sadler, a core member of (Mis)Translating Deceit from the University of Leeds, highlighted areas of conceptual ambiguity in relation to the key concepts of ‘disinformation’ and ‘information’. In relation to the first, he emphasised that the term is at times used very narrowly, resulting in a perhaps excessive emphasis on claims that are outright false at the expense of examining wider approaches to strategic communication. Deception may play a meaningful role within these strategies without necessarily being the primary objective. At other times it is used too broadly and treated as either partially or wholly synonymous with related terms such as ‘propaganda’ and ‘information war’, causing the specificity of disinformation as a distinct phenomenon to be lost. Neil emphasised, furthermore, that ‘information’ itself receives little direct attention within the existing literature, despite serving as the implicit yardstick against which ‘disinformation’ is to be identified. Working negatively from common approaches to disinformation, he suggested that accurate information is implicitly:

  • Good both morally in and of itself and in terms of its instrumental value in sustaining healthy societies;
  • Self-contained, autonomous, and thing-like, with the consequence that it can be passed around as if it were a physical object, remaining unchanged throughout;
  • Comprised primarily of individual facts which can be readily verified in terms of their correspondence to material realities.

He suggested that each of these ideas is problematic in different ways. The first inadequately recognises that information can be both accurate and potentially harmful for wider society. The second does not recognise the active role of knowers and constitutive role of the discourses and narratives through which information is mediated and remediated. The third fails to adequately recognise the central role of contextual embedding in defining what individual claims mean as they circulate and are recontextualised. At times it also falls into naïve realism by assuming that language can allow transparent and unmediated access to empirical realities, free of distortion or bias.


Sacha Altay (University of Zurich) 

  • There are certainly bad actors trying to influence public opinion; they are a problem, and having a healthy information ecosystem with information that people can trust is extremely important. However, to know if we have a ‘Big Disinfo’ problem we need to understand whether these bad actors are indeed successful at fostering polarisation or influencing elections, and that’s unclear to me.
  • I think public discourse tends to exaggerate the prevalence of disinformation and its impact. For instance, a lot of evidence has shown that in western democracies misinformation consumption is small and mostly driven by a small group of very active and vocal internet users. I also think people tend to confuse misperceptions with misinformation: people do not necessarily hold misperceptions because they have been exposed to false information and accepted it. Given what we know about growing negative perceptions of the news, I think that people more commonly hold misperceptions because they avoid or distrust reliable news. I’m worried that some narratives about the effect and prevalence of disinformation may exacerbate these trends, confuse people, and reduce trust in the news or institutions even more.
  • Now let’s assume for a second that half of the UK population believes Russian disinformation. We would clearly have a Big Disinfo problem, but we’d need to understand why one half is accepting it and the other half is rejecting it. And we would probably come to realize that the problem is not ‘information’, but some pre-existing attitudes or values that lead some people to accept disinformation and others to reject it.


Clare Birchall (King’s College London)

Is there a “Big Disinfo” problem? Recent conversations in critical disinformation studies have inverted the “problem” of disinformation. Rather than taking “the problem” as self-evident (a problem with the scale of disinformation), the critique asks us to consider the definers of that “problem” as themselves problematic (Bernstein 2021). It has become a meta-conversation about the discursive power of liberal institutions to performatively create an issue upon which their existence depends. 

In The 7th Function of Language, Laurent Binet situates knowledge of performative speech acts as a secret weapon sought after by competing political factions. In the book, it is the liberal left that commands control of the power of rhetoric to sway listeners. It’s set in the 1980s, but one of the characters talks about passing the secret seventh function of language, an imagined extra to Roman Jakobson’s six, to a “black man from Hawaii” (Binet 2018, 372). It is true that Obama had the gift of the gab. And his 2008 slogan, “Yes we can”, seems to capture the incantatory power of the seventh function.

However, it was the alt-right and Trump that weaponised language, not necessarily by performatively making things happen with words (although he did incite followers to “fight like hell” on January 6th), but by unmooring terms from their origins, emptying language of its orientational force, and encouraging what has been called “epistemic murk” (Masco and Wedeen 2024, 4). The appropriation of “fake news” came first, but we have seen a similar trick being performed on “disinformation”. Binet’s novel depicts language as a secret weapon and in doing so provides a wry commentary on the not-so-secret weaponisation of language.

The “problem”, therefore, is not only with a liberal establishment hellbent on preserving a particular form of rationality that situates its representatives as central beneficiaries, but also with the very language available to understand and discuss what is happening. We should be wary of a leftist critique of “Big Disinfo” (so-called after “Big Pharma” etc.) becoming complicit with that coming from the right. We only need to think of Elon Musk’s attacks on the Global Disinformation Index for an example of the latter. 

Others at this event and I have been thinking through these issues concerning right-wing conspiracy theories in particular and their resonance with forms of leftist theoretical critique – including the kind of Foucauldian reading that enables us to think about the conditions of power-knowledge that I gestured towards a moment ago (Birchall 2006; Birchall and Knight 2023).

But are we really saying that we have to decide whether the real problem is with bad-faith disinfo agents who know exactly what they are doing, or with good-faith counter-disinfo agents who either also know what they are doing in some kind of grand liberal conspiracy or unwittingly engage in ideological reproduction? Is this really a choice we should make?

If it is the case that certain liberal institutions are invested in continuing the monopoly they hold on reason and knowing, leaving a lot of people feeling unrepresented and unacknowledged, we would then need to outline a settlement that would allow for forms of alternative, popular knowledge to circulate in ways that don’t pose a threat to already minoritised or otherwise vulnerable people – those who often bear the brunt of conspiracy theories. 

Organisations tasked with tackling disinformation are surely not invested in self-preservation any more than any other kind of organisation, although it’s true that funding structures introduce an unhelpful competitiveness. Moreover, many of these organisations are dedicated to weeding out forms of prejudice and hate speech. I think most of us would not want to live in a world without some gatekeeping around extreme forms of misogyny, racism, homophobia etc. And I’m not sure I, as a white middle class woman, am in a position to discredit all gatekeeping as self-interested power-brokering because I’m not disinfo’s most vulnerable target. 

So, there clearly is some kind of problem with disinformation (although that might not be the best term for what we are talking about and the “problem” might not align with the account offered by the counter-disinfo organisations). And yet, acknowledging this doesn’t mean we should be complicit in any undue alarmism that might arise, wittingly or unwittingly, from the work conducted by counter-disinfo organisations. Nor does it mean that we should refrain from questioning the ways that liberal institutions are inveigled in forms of discipline and control when it comes to the production and policing of knowledge. (On the flip side, we should recognise that weaponised versions of “free speech evangelism” and “academic freedom” are also policing knowledge – this is why proponents only ever seem to want to protect the rights of regressive thinkers to speak on campuses but are perfectly happy when critics of Israel are dismissed from academic posts (Srinivasan 2024).) 

Just as we can insist that ethics accompanies the development of new technology from the very beginning (rather than being turned to as an afterthought), perhaps we should insist that counter-disinfo organisations think through the politics of knowledge before getting down to the business of naming and shaming. (A Foucauldian on every Advisory Board perhaps!) 

I want to finish with one last comment that arises from my comparative project on counter-disinformation organisations across Europe (REDACT): the idea of an all-powerful liberal rational power bloc falls flat in some contexts. It doesn’t make sense in Estonia, for example, where there’s really only one fact-checking agency to speak of despite the country’s geopolitical vulnerabilities. Even our co-researchers in Germany refute the diagnosis for their region. They see it as particular to the Anglo-American context: one with no recent historical experience of totalitarianism invested in forms of regressive world-building. I think it might have even less relevance, or might at least need to be read through a post-colonial lens, in the global south. I therefore think that in the same way we must accept that “disinformation” will look different depending on geographical context, we must be mindful of regional differences when it comes to the diagnosis and critique of so-called “Big Disinfo”.

Bernstein, Joseph. 2021. ‘Bad News: Selling the Story of Disinformation’. Harper’s Magazine, 9 August 2021. https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation/. 

Binet, Laurent. 2018. The 7th Function of Language. Translated by Sam Taylor. London: Vintage.

Birchall, Clare. 2006. Knowledge Goes Pop: From Conspiracy Theory to Gossip. Oxford: Berg. 

Birchall, Clare, and Peter Knight. 2023. ‘Has Conspiracy Theory Run out of Steam?’ In Theory Conspiracy, edited by Frida Beckman and Jeffrey R. Di Leo, 149–67. London: Routledge. 

Masco, Joseph, and Lisa Wedeen. 2024. Conspiracy/Theory. Durham: Duke University Press.

Srinivasan, Amia. 2024. ‘If We Say Yes’. The London Review of Books 46 (10). https://www.lrb.co.uk/the-paper/v46/n10/amia-srinivasan/if-we-say-yes.


Kenzie Burchell (University of Toronto)

Beyond Disinformation[1]  

Journalists, fact-checkers, policy-makers, and scholars alike often think of disinformation in limited scales of news making and audience practices: context defined by content alone – real or fake news in terms of editorial and production decisions; an event-based orientation to representation – fact or fiction as determined by the indexical where and when of witnessing; news cycles and narrative fidelity forcing a rapid turnover of history-in-the-making – verified or debunked just in time for trust to be gained or lost among audiences.

But what lies beyond this orientation to disinformation? What domains of activity and lived realities speak to the concurrent renegotiation of the transnational information order? Evolving Narratives, Systemic Edges, Authoritarian Techniques, and Best Practices for Democratic Communication each represent a field of intervention – ripe for exploitation – to undermine certainty in news making institutions by teasing apart our shared epistemological landscape into fragmented self-sustaining referential systems.

Evolving narratives of diaspora identity, community, and culture are today forged, lived, and co-constituted locally and globally in relation to neo-colonial revisionist histories – entire geographies of the past, present and future are reshaped through these narratives – not seeking to win the hearts and minds of everyone, only to sow uncertainty among some while consolidating the real and imagined transnational identities among other already-sympathetic audiences.

Repressive digital governance techniques and transnational networks of malevolent non-state actors are coercively reshaping and spoiling the global informational order. Disinformation is a cover – for corruption, for kleptocracy, for authoritarianism, and for military adventurism. To focus on the content of truth and fact is to miss the function of disinformation as a tool of authoritarian political economies, which translate into the geopolitical realities of disenfranchised populations cum online vigilantes, gig economy trolls, and private armies of mercenaries for hire.

Systemic edges imposed by data colonialism,[2] marketcraft,[3] and lawfare are the fields through which the scope of hybrid war coheres into the on-going realities of hybrid empire. The full invasion of Ukraine in 2022, the sanctions that followed, and the geopolitical competition to secure data sovereignty and digital supply chains are only the most legible of contours across a forcibly reconstituted world order. By confronting the inefficacy of international institutions, a unipolar world led by the West has been teased apart, limiting Western political protection to the borders of NATO and remapping the global economy into a limited sanctions-minded neoliberal free market and authoritarian-led visions of neocolonial development, while the unaligned global majority population states grow weary of the former and wary of the latter. Fact or fiction across the information order is increasingly determined by the reach of newly balkanized islands of platform governance and legal jurisdiction, reemphasizing the material realities of power through the geographical specificity of natural resources and high tech manufacturing. What does this shifting reality mean for today’s greatest challenges, which until recently many thought were still on the distant horizon – such as AI and climate change?

Best practices for media literacy, inclusive communicative engagement, and responsible public discourse represent the most immediate field through which policy makers must respond to the strategic spoiling of the global information order. Entertainment and engagement in our media are not enough without education; news framed only by single events and on-going emergencies creates the sense of unending global crises without a grounding in lived and local experiences; individualistic society and autocratic states both sap our sense of mutual responsibility. In the face of revanchist neocolonialism, authoritarianism, and populism, policy makers must go beyond disinformation to reckon first and foremost with the global inequities and exclusions that undermine the participation of diverse audiences in worldmaking practices that align with their experience and understanding – the past, present, and future imaginaries of our collective identities must find a place alongside and in relation to one another – despite disinformation.


Kamile Grusauskaite (KU Leuven)

The World Economic Forum has identified "disinformation" as the primary threat facing the world today. In the past decades, research on disinformation has often had the normative goal of combatting it. At the same time, a new pool of research is emerging that studies the unintended consequences of these new initiatives and policies to fight disinformation online, such as their effects on freedom of expression and on delimiting the “sayable.” As the field becomes more and more saturated with technological solutions to the disinformation problem, the questions of 1) how we should tackle disinformation, 2) whose responsibility it is, and 3) what the relationship between research and policy should be are becoming ever more salient. In the following, I will outline two key points to start off this conversation and hopefully contribute to finding the answers.

The Cultural, Historical and Social: Technological Solutions to Social Problems? Much of the (psychological and political science) literature on disinformation to date sees ‘disinformation’ as a social problem that requires solutions. More troubling still, existing EU policy on disinformation treats it as a social-technological problem that drives polarization and democratic backsliding, without ever discussing its contents. In analyzing (and, in turn, constructing) disinformation as a problem in its own right, both the existing literature and policy miss its historical, cultural, and political dimensions. Specifically, we overlook how disinformation is used as a media strategy to reproduce and reinforce power hierarchies (Kuo & Marwick, 2021) rooted in histories of nationalism, patriarchy, colonialism, and white supremacism, to name a few (in the West). We also fail to recognize the role of identity, especially race, in the messages and strategies of disinformation producers and in determining who is most affected by disinformation and misinformation.

Considering this, many proposals to curb disinformation that typically rely on technological solutions like de-platforming and de-platformization seem inadequate at best. Instead of addressing the problem, they distract from it. To quote David Graeber: “Bureaucracies are ways of organizing stupidity - of managing relationships that are already characterized by extremely unequal structures of imagination, which exist because of the existence of structural violence.”

Looking at disinformation as a result of cultural, historical, and social processes opens up questions about how diverse forms of “knowledge” must be treated. At the moment, academia and policy conflate types of disinformation: far-right nationalist narratives about the “great replacement” are governed under the same label as those of people questioning their marginalized status in society using conspiratorial logic. The role of academia here is to criticize the normative and un-nuanced analyses and policies surrounding disinformation and bring questions of power and culture back into the study of disinformation.   

What kind of academic expertise is seen as legitimate to “fight” disinformation? The current state of disinformation studies is largely influenced by psychological, communications, and political science approaches. Existing EU and US policy toward disinformation also follows these approaches almost exclusively (often relying on in-house expertise). These fields often focus on psychological biases, "threat perception" mechanisms, the spread of disinformation, and its impacts on public life, such as elections. In turn, these are the issues both platforms and governments currently address.

While these studies are undeniably valuable and important, they tend to overlook the cultural and social dimensions of disinformation. Disinformation is deeply intertwined with culture, social interactions, and political contexts. Therefore, to fully understand and address how disinformation is produced and spread, we need expertise from cultural studies, sociology, and socio-technical disciplines. Technological solutions alone cannot address the underlying cultural issues. 

Cultural and social approaches to disinformation provide contextual understanding and targeted and culturally sensitive interventions. Cultural and sociological approaches provide insights into the contexts in which disinformation thrives. This includes understanding local narratives, historical grievances, and social norms that may influence the reception and spread of disinformation. For instance, the communities (e.g., alt-right conspiracy theorists) that these policies target often see these measures as authoritarian and “censoring.” Policies that consider cultural nuances are less likely to be perceived as external or invasive and more likely to gain public support. This is particularly important in diverse societies where a one-size-fits-all approach may not be suitable. 

The question remains: Why have cultural and sociological academic perspectives been overlooked in shaping disinformation policy? Why have psychological and communication approaches taken precedence in informing policy decisions? Is it for the sake of simplicity and finding a one-size-fits-all solution? And if so, what are the potential consequences of this approach?


Daniël de Zeeuw (University of Amsterdam/KU Leuven) 

Misinformed about information? (or how all information is disinformation) 

Based on a recent paper about post-truth conspiracism and the pseudo-public sphere – where I interrogated the problematic assumptions that figure into imagining social media platforms as a public sphere – I wanted to connect this to some more historical and conceptual questions about the term ‘information’ in misinformation studies (aka Big Disinfo). As Rob Topinka rightly commented in response to my remarks, there seems to be something ‘ironic’ about the reference to ‘information’ in misinformation studies. So where does this irony reside, and why would it matter for (a critique of) Big Disinfo?

In his book What is Information?, Peter Janich discusses the different notions of information in circulation. He writes:

'It seems clear, [therefore] that everyday language is bound up with two different traditions [of thinking about information, D.]: one the tradition of linguistic communication (which requires an attention to the functionality of information), the other the tradition of information technology focused on processes of transforming, coding, and decoding. These telecommunicative processes have a structure, but that structure does not have to be linguistic, meaningful, or capable of being true or false.' (2018: 9, my emphasis) 

So on the one hand, there is the conventional, everyday sense of information as a carrier of meaning, including true or false statements, e.g. the newspaper contains information about the (typically terrible) events of the day. On the other hand, there is the technical notion of information from information science as a measure of uncertainty, one based in a statistical universe of engineering and data processing machines. In his introduction to Claude Shannon’s theory of information, Warren Weaver defines the corresponding notion of communication as the process by which one mind affects another, quickly adding that ‘mind’ may also be substituted with ‘machine’.

Crucially, the example he gives to illustrate his definition does not stem from a human speech situation. Instead, he uses the founder of cybernetics Norbert Wiener’s case of an enemy airplane that communicates with the anti-aircraft system based on feedback (an exchange of information). The question of true or false never comes up in this little scene: what counts is the successful extrapolation of (and subsequent acting upon) the information received. From the perspective of the anti-aircraft station, the goal is not to represent the external world correctly but to shoot down the enemy plane. Information can only be ‘false’ in the sense that it is wrong or prone to errors that can be statistically corrected (the location data was corrupted by some static interference or noise).

Now, it is the first, functional tradition of information to which misinformation studies seems to adhere: what distinguishes misinformation from ‘regular’ information is its truth-content (in the case of misinformation), or the misleading intentions or extraneous strategic purposes of its disseminator (in the case of disinformation). But how does – or should – Big Disinfo account for this other, more immanently techno-logical tradition of information that its notion of information also implicitly evokes? 

One possible example: in the context of Russia’s current invasion of Ukraine, apparent footage showing Russians gaining the upper hand on the battlefield was widely circulated on social media. However, the footage was clearly fake, as it consisted of images from an older PC game called Arma III. Here the question arises: is this information problematic or dangerous because it is false yet circulated as true, and thus potentially believed (and acted upon) by those who watch it? Or could actual battlefield footage have an equally damaging effect, precisely in its capacity as a circulating image with an ability to affectively rally users around it, recruiting them into taking a pro-Russian stance on the war? Each actual piece of decontextualized footage could probably serve this function. Moreover, the obvious fakeness of the videogame footage may have a performative advantage. As Tuters and Noordenbos show in their paper on ambient propaganda (forthcoming), it is in the subject’s response to the call of ideology that subjectification occurs, not in the holding of an actual belief.

Perhaps Big Disinfo needs to overcome or reconsider its ‘epistemic bias’, i.e. its privileging of cognitive and representational dimensions of communicative interaction in platformed environments, and shift focus to its operational (informational) dimensions, i.e. its power to affect and shape conduct (as is the case with the “affective turn” in media studies). This would also imply shifting away from an understanding of the internet as a public sphere, towards a theater of operations, in line with the Cold War rationality that inflected the cybernetic theories of information and communication and that still underlies current social media platforms.

Then again, perhaps Big Disinfo already had this orientation, as its pragmatic goal is to prevent certain types of information from circulating and causing certain negative societal effects, e.g. polarization, radicalization, etc. In that case it is still not clear where exactly questions of truth come into play, and we could follow Jack Bratich when he interprets current alarmism around post-truth and misinformation as part of a ‘war of restoration’ on the part of the liberal status quo. Then we would have already entered the realm of information warfare without realizing it… 

In any case, we need to learn to pose different questions; we need a better criterion than true/false if we want to formulate new ethical orientations to deal with current information environments.


Boris Noordenbos (University of Amsterdam) 

Vera Tolz and Rob Topinka already touched on this: concepts assist us in imagining complex phenomena, but they may also blind us to certain aspects or “displace” the discussion about them. In my view, the primary blind spots of “disinformation” ensue from the reification inherent in the concept. In its reified form, disinformation is perceived not only as a dangerous “thing” or “substance” – capable of polluting the views of those exposed to it – but also as a foreign one. Tellingly, on its website the European External Action Service uses “disinformation” interchangeably with “Foreign Information Manipulation and Interference” (FIMI). It regards FIMI as a significant security challenge with the potential to “contribute to increasing polarization and division within the EU.” This externalization of disinformation risks reinforcing stereotypical civilizational binaries in which otherwise healthy Western democracies are under threat from authoritarian adversaries, often from the East. The foreign emphasis, furthermore, diverts attention from the potential domestic causes of “polarization and division,” such as the lack of governmental transparency or the structural inequalities engendered by neoliberal policies.

Looming over such notions of disinformation as foreign interference are the metaphors of “injection” (the “hypodermic needle model” of communication) and “infection.” The latter is vividly exemplified by the WHO’s Covid-era campaign against a global “infodemic” (cf. Peter Knight’s discussion of the “infodemic” metaphor and the notes of intervention that ensue from it). These “viral” imaginaries continue a long tradition, explored already by Susan Sontag, of using “illness” metaphors to create morality tales, which in the disinformation case typically feature foreign manipulators, irresponsible (super)spreaders, and advocates of “information hygiene”. The rhetoric around disinformation combines these viral metaphors with a seemingly incompatible notion of intentionality that is central to most of its definitions. If disinformation is a “thing” with a penetrating force and dangerous effects, the question arises almost naturally: who are “its” creators, and how do they benefit? Even if not leading to conspiratorial speculation, the – often unattainable – goal of pinpointing origins and purposes behind “wrong” content further muddies the already loose definitions of disinformation.

With its clear-cut, actionable presentation of things, the reification paradigm caters to the vested interests of Big Disinfo as described by Bernstein. Meanwhile, it overlooks the participatory and processual nature of meaning-making. Especially in the highly ephemeral environments of new media platforms, “content,” like the subjects engaging with it, is not static and requires continuous “updating to remain the same.” These “updates” are not driven primarily by chance mutations (as per the metaphor of viral spread) or by foreign actors (the FIMI frame of nefarious intentionality). They are propelled by common users, who actively rehash, translate, combine, and reinterpret material within participatory online environments. Form and medium matter in this process: a pro-Kremlin TikTok craze holds a different status, and carries different truth claims and implications, than a YouTube video by RT, or a Tweet by Donald Trump. While this might seem obvious, the reification of disinformation does not offer much space, nor tools, to account for such differences.

Highlighting the complexities of online meaning-making does not deny the existence of coordinated foreign manipulation in the digital sphere. However, the notion of information as a “thing” – framed in civilizational, moral, and epistemic binaries – fails to enhance understanding of current information threats. The paper on “ambient propaganda,” co-authored by Marc Tuters and me, seeks to explore an alternative to the reified paradigm. Building on recent trends in propaganda studies, we conceptualize Kremlin-aligned, pro-war sections of TikTok in “ambient” terms. We show how users’ engagements with this “environment,” as well as with the platform’s specific affordances, are deeply participatory and multidirectional. Users “tune in” to the ambience of pro-Russian TikTok, replicating its trending memes, performing the associated gestures, and lip-synching to its tunes, while the platform’s “For You” page is progressively tailored to their personal preferences. Even if some of the circulating signs and symbols originate in a state-backed influence campaign, these users’ behavior cannot be simply ascribed to the wit of Russian information strategists. Certainly, TikTok’s logic is unique, but the case hopefully helps to demonstrate that, in other contexts too, what is often labelled “disinformation” is not a static thing, but a dynamic practice.


Robert Topinka (Birkbeck, University of London)

It’s becoming a common critique to say that the alarm around disinformation now amounts to a moral panic. This might be true, but it’s worth remembering that moral panics are not just imaginary: they respond to a crisis of some kind, but they tend to displace the crisis and then target the displacement rather than the crisis itself. In Policing the Crisis, a classic text on the subject, a moral panic around Black youth violence displaced a real crisis of political hegemony in 1970s Britain and created a new crime called ‘mugging,’ which became the target of policing. The moral panic around ‘disinformation’ displaces the contemporary poly-crisis (of democracy, public health, climate change, and war) and creates online disinformation as a target of policy intervention. The trouble is that social media does not just passively record social processes, it also acts on them and transforms them. Likewise, the problem is not only that disinformation circulates on social media, but that disinformation and responses to it actively transform public discourse: debunking and fact-checking become sites of contestation and political manoeuvre alongside ‘alternative facts’, ‘fake news’ and conspiracy theories. Instead of setting out to identify individual instances of disinformation and measuring their influence, it might be more productive to think of disinformation as its own active milieu, and to identify which facet of the contemporary poly-crisis that milieu displaces.


Marc Tuters (University of Amsterdam) 

I have become increasingly interested in the concept of propaganda, which has a history that goes back to the beginnings of the field of communication studies. A lot of recent important research has sought to conceptualise online disinformation through this lens (Benkler, Woolley, Starbird, DiResta, etc.), but it all arguably suffers from the shortcoming that propaganda requires intent—like disinformation or murder. This, I think, speaks to a very contemporary problem that Foucault understood best: "How to talk about intentionality without a subject, a strategy without a strategist?" and he thought that the answer lay in studying "the practices themselves." As a new media researcher, I believe that we can operationalise Foucault’s insight through empirical methods that study how media environments’ affordances incentivise and constrain different types of actions differently. Today it is becoming harder and harder to disentangle disinformation from other sorts of online subcultures that both seem to depend on the same platform affordances—especially of 'imitation' and of 'influence'. My example here would be the Russian 'Z' phenomenon, referred to by some as ‘the new swastika’. Early on in the Russian invasion of Ukraine, there was a viral trend with TikTokers making Z hand gestures, which was later picked up by the Kremlin as part of its own messaging. What is Z? What is the content of its message? Who is behind it? In my work with Boris Noordenbos we consider it as exemplary of a new form that we call ‘ambient propaganda’, in which different actors ‘tune in’ to and ‘vibe’ with one another, with their audience, with the algorithm and even with the totalitarian state. If the model that underpins the concepts of both disinformation and of propaganda can be understood as being the linear transmission of messages from sender to receiver, then the point here is to try to think of communication in more ‘multi-directional’ terms.




[1] The “Beyond Disinformation” project is funded by the Social Sciences and Humanities Research Council in partnership with Genome Canada, as well as by a Joint Institutional Partnership between the Universities of Manchester, Melbourne and Toronto; “Beyond Disinformation” est financé par le Conseil de recherches en sciences humaines en partenariat avec Génome Canada, ainsi que par un partenariat institutionnel conjoint entre les Universités de Manchester, Melbourne et Toronto.

[2] Nick Couldry and Ulises A. Mejias, “Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject,” Television & New Media 20, no. 4 (2019).

[3] Steven Kent Vogel, Marketcraft: How Governments Make Markets Work (Oxford: Oxford University Press, 2018).   
