Essay

After Free Speech

The Algorithm Dispositif and the Limits of Liberalism

How to think critically about the ubiquity of algorithmic governance.

  • Davide Panagia

This essay is part of the Special Issue “After Life: Identity and Indifference in the Time of Planetary Peril.” It features papers presented at the inaugural symposium of The Democracy Institute at the Ahimsa Center at Cal Poly Pomona, which was co-sponsored by the Institute for New Global Politics, in March 2023.

 

The title of my reflections promises to address what I take to be one of the central challenges for life in democratic societies today: how do we think critically about the ubiquity of algorithmic governance?

The question became pressing for me, as someone trained in the study of political theory, when I first came across a 2013 YouTube video that shows ten seconds of BlackBerry stock trading (slowed down to three and a half minutes). For reasons I can’t fully explain, the video struck a chord. It was clear to me at the time that what compelled me about this video wasn’t that it betrays the truism that we live in a neoliberal regime of finance capital that has intensified feelings of alienation and conditions of exploitation that are all too familiar. No; we’ve known this for some time. What was striking to me, in retrospect, was that I could not explain the nature of the activity I was witnessing, and this wasn’t because I lack the expertise to do so (although I do lack the technical expertise involved in financial analysis). What I was witnessing, as if for the first time, was the depiction of a type of activity that I could not account for in the traditional ways in which I am trained to account for and describe action.

It is by now a commonplace – as the recent TikTok US congressional hearings painfully put on display – to express uninformed anxieties and, at times, even more uninformed moral panics about the forms of algorithmic agency on display in the BlackBerry YouTube video, and at work in our individual and collective lives. Such concerns are not unwarranted, to be sure, nor are they unique. There has hardly been a time in human history when the emergence of a new media technology hasn’t caused fears and alarms. In the tradition of the West, Plato and the Ancient Greeks worried about the medium of stage and theater and its political (dis)advantages, so much so that Plato uses a story about spectatorship – the famous parable of the cave in his Republic – to assert the importance of clear and unobstructed thinking when reflecting upon the just society.

Plato’s concerns and admonitions may have adapted and morphed during the millennia since his writing of the Republic, but they have also remained remarkably familiar. One only has to think of how prescient and persuasive the 1999 film The Matrix was to those old enough to remember its sensational arrival on the big screen. (For those of you reading this who are too young to remember it, or have not yet seen it, I invite you to do so; it’s a marvelous film that still holds up.) Rather than shadowy images on a cave wall representing political life in ancient Athens, the Wachowski siblings offered a dystopian world governed by self-aware code representing our (then) emerging reality. In The Matrix, as in Plato’s Republic, the moral of the story is that technology is a threat to human development and self-fulfillment, indeed to human freedom and justice. We are enchained by the world of appearances, and to wish to remain in such a condition is a type of moral failure, exemplified by The Matrix’s “Cypher” character (played by Joe Pantoliano) who famously quips – while planning his Judas-like betrayal of the Christ-like Neo – “You know, I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? Ignorance is bliss.”

Unfortunately, Plato’s moral universe, and that of The Matrix, are a little too black and white (or, better, red pill – blue pill) given the challenges we face. It is comforting to believe we can rely on the simplicity of such a dualistic moral universe to tell us what is right and wrong, but such a universe is also devastatingly idealistic and, indeed, unreal. None of us, I suspect, would wish to board a commercial airliner knowing that the algorithms that helm the autopilot are not working and that we must trust exclusively the pilot’s skill. Nor would many of us, I suspect, wish to do away with either spelling or grammar checks on our word processing programs, despite the occasional frustration we may have with their intrusiveness. Hence our dilemma, and our challenge: the moral conventions and paradigms of critical thinking we have inherited in the Western tradition of political and aesthetic thought rely on a system of beliefs about which ways of representing the world are better or worse; and our ideals of justice, morality, freedom, governance and selfhood are rooted in these ideals of representation.

Just think how important our beliefs about political and aesthetic representation are when voting. It is unquestionably the case that we are willing to trust our various democratic systems of governance because we believe they are designed to more or less accurately represent the votes of a constituency through a fair system of elections. When we suspect malfeasance, or when we express distrust in such a system, we appeal to errors in the mechanisms of representation, or of the system of counting, or of the elected officials, and thus question the legitimacy and validity of representation. Like a poem or a painting, electoral systems are technologies of representation that more or less accurately figure the values and beliefs – that is, the identities – of a constituency. But what if the most pervasive and ubiquitous form of governance in the world today has nothing to do with the accuracy of representation or the effectiveness of representatives? What if – to extend this question further – the technologies that manage our lives are not human, and thus cannot be understood in terms of will, or intention, or even deception? Finally, what if we don’t even have a vocabulary that equips us to reflect on the political agency of such forms of governance?

These questions envelop our thesis, which is the following: Algorithms generate outputs, they do not represent realities.

An algorithm’s political agency – how it works among and upon us – has nothing to do with its ability to represent truths or falsities about the world through speech or image but, instead, regards the recursive, dispositional ordering of bodies and energies in space and time. This is what I call the dispositional powers of the algorithm dispositif. I will thus begin my reflections on the above thesis with the following provocation:

The liberal democratic paradigm of surveillance and free speech is wholly inadequate to account for the dynamics of power in the age of the algorithm dispositif.

By this I don’t mean to suggest that there aren’t abuses of privacy in our world, far from it. Nor, for that matter, do I wish to suggest that we shouldn’t concern ourselves with the relationship between surveillance and political coercion – whether in the form of corporate violence, the Ideological State Apparatus, or even the Repressive State Apparatus. Indeed, I think that one of the biggest issues we have to contend with is the fact that the ever-growing F.A.N.G. corporations are not merely media conglomerates but now operate as imperialist NGOs whose interventions proliferate at the everyday level of granular particularity.

But the algorithm dispositif is not a surveillant assemblage because algorithms are neither representational nor visual media. Nothing is visible to algorithms, not in the way things are visible to animals and humans. Instead, I wish to propose that we consider them as cynegetic systems that track and render data. “Cynegetics,” from the ancient Greek kunēgetikos, refers to the art “of or for hunting,” most notably with dogs. It is a term I borrow from Grégoire Chamayou’s study Manhunts, which offers a “history of the technologies of predation indispensable for the establishment and reproduction of relationships of domination.” (Chamayou) As a technology of cynegetic rendering, the algorithm dispositif is coincident with the emergence of scientific policing in the nineteenth century and the related innovations of anthropometrics and the police dog. I identify the algorithm dispositif as a cynegetic assemblage instead of a surveillance assemblage because the enterprise of tracking and rendering does not require the visual identification of an individual or the censorship of their speech but, rather, a wrangling of data points. The difference – from both a political and an aesthetic point of view – is one that matters to how we might reflect on the power of algorithms.

 

Beyond Ideological State Apparatuses

 

Let me elaborate on some characteristics that I associate most strongly with the surveillance and censorship paradigm and that make me suspicious of its effectiveness as a critical tool. My cursory exploration focuses on the ideas of Louis Althusser and Michel Foucault, two of the most influential thinkers on the topic. Despite their profound differences, both the Althusserian and Foucauldian accounts of surveillance imagine an embodied and willful subject as a site of political agency and resistance. In the Althusserian case, we are given the interpellation scenario as the exemplary instance of ideology’s policing function. Recognition in interpellation is a sign of ideological subjection that functions through the deployment of bodily sensation (in Althusser’s scenario the primary sensations are viewing, hearing, and halting). The policing function operates through recognition to reproduce humans as always-already subjects of ideology. Foucault’s account of surveillance is, of course, less concerned with recognition and much more concerned with bodily habituation, or discipline. Again, rehearsing what is well known, modern surveillance for Foucault operates not via a command and response scenario (like Althusser’s hailing) but through micro-habits of bodily training. Discipline doesn’t require Foucault’s prisoners to recognize themselves as prisoners; that is, one is hard pressed to find a scenario of interpellation in Foucault’s Discipline and Punish because subjectification does not operate at the level of recognition (as it does in Althusser’s account) but on bodily dispositions. That’s the whole point of Foucault’s admiration of the Panopticon as a disciplinary dispositif: it doesn’t require a sovereign voice or a sovereign gaze; it simply requires a distribution of visibilities and trajectories of valuation that are perpetually re-assignable because they are not assigned to any specific identity.

Today, however, the matter has shifted completely because we no longer live in a surveillance economy that relies on perpetual observation or identification, but in a world of Bayesian probability with massive, dispersed data pools that receive continuous updates from ubiquitous entries. The insufficiency of the surveillance paradigm, in other words, is not simply due to the fact that the algorithm dispositif is not a visual technology; it’s also, crucially, the case that identity formation/recognition (and thus privacy) is not the result of the operations of representation. It’s in this sense that we must rethink our inclinations to consider algorithmic governance in either representational or surveillant terms because the task of the algorithm dispositif is not to identify and represent identities but to track, render, and correlate data. The proof of this lies in the fact that in the analog instances of the surveillance paradigm rehearsed above, there is a strong commitment to identifying the operation of subjectification as a limit on movement. Whether we think of free speech and censorship as a limit on words and their circulation, or we think of surveillance and imprisonment as a kind of physical constraint – we are still operating within a classical liberal conception of freedom that understands domination as a limit on movement. But here’s the big difference: the algorithm dispositif’s power is effective if and only if there is perpetual movement and constant circulation – that is, total freedom. This is key to its effectivity as a system of governance.
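To make the contrast concrete, consider how a Bayesian update works in miniature. The sketch below is a toy illustration in Python, not a description of any actual platform’s system: it assumes an invented stream of binary signals (say, click / no click) and a simple Beta-Binomial model, and it shows how each new entry revises a probability that never settles into a final truth.

```python
# Toy Beta-Binomial updating: each new observation revises a probability
# estimate; there is no final "true" value, only a perpetually updated posterior.

alpha, beta = 1.0, 1.0          # uninformative prior
stream = [1, 0, 1, 1, 0, 1, 1]  # hypothetical stream of binary signals (click / no click)

for signal in stream:
    alpha += signal             # count of "successes" so far
    beta += 1 - signal          # count of "failures" so far
    estimate = alpha / (alpha + beta)   # posterior mean after this entry
    print(f"after signal {signal}: estimated rate = {estimate:.3f}")
```

The point of the toy is only this: the estimate is always provisional, revised with every entry in the stream, which is why constant circulation of data is a condition of the system’s operation rather than an obstacle to it.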

Perhaps an illustration might help focus my speculations. It is from the first few pages of Daniel Kahneman, Olivier Sibony and Cass Sunstein’s recent collaboration, Noise: A Flaw in Human Judgment. (Kahneman et al.)

 

Here are four teams of five shooters, each of which hits its target with different collective results. Team A is, of course, the most successful; Team B is “biased” according to our authors because all the shots “miss” the target in exactly the same way; Team C is not biased but noisy, as there is no discernible pattern that would allow us to determine the trajectory of any bias; and Team D is both noisy and biased. The premise of the book is not to provide instruction on how to create a system whose outcome is always A, but how to think about judgment as the coordination of randomness such that we might ultimately discern patterns and, from that, adjust our practices so as to champion both bias and noise to produce relevant outputs. In short, our authors’ main concern is the nature of predictive judgment as a tool for making decisions about future outcomes.
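For readers who prefer a worked example, here is a minimal Python sketch of the distinction, with invented coordinates for the four teams’ shots: bias is measured as the distance of a team’s average shot from the bullseye, and noise as the scatter of its shots around that average (a simplified rendering of the book’s definitions, not the authors’ own code).

```python
import numpy as np

def bias_and_noise(shots, target=(0.0, 0.0)):
    """Bias: distance of the mean shot from the target.
    Noise: average scatter of the shots around their own mean."""
    shots = np.asarray(shots, dtype=float)
    mean_shot = shots.mean(axis=0)
    bias = np.linalg.norm(mean_shot - np.asarray(target))
    noise = np.linalg.norm(shots - mean_shot, axis=1).mean()
    return bias, noise

# Hypothetical teams of five shots each, as (x, y) offsets from the bullseye
teams = {
    "A (accurate)":       [(0.1, 0.0), (-0.1, 0.1), (0.0, -0.1), (0.1, 0.1), (-0.1, 0.0)],
    "B (biased)":         [(2.0, 1.9), (2.1, 2.0), (1.9, 2.1), (2.0, 2.0), (2.1, 1.9)],
    "C (noisy)":          [(1.5, -2.0), (-2.0, 1.0), (0.5, 2.5), (-1.5, -1.0), (2.0, 0.5)],
    "D (biased + noisy)": [(3.5, 0.5), (1.0, 3.0), (4.0, 2.5), (2.0, 4.0), (3.0, 1.0)],
}

for name, shots in teams.items():
    b, n = bias_and_noise(shots)
    print(f"Team {name}: bias = {b:.2f}, noise = {n:.2f}")
```

Run on these made-up numbers, Team A scores low on both measures, Team B high on bias but low on noise, Team C the reverse, and Team D high on both, which is all the figure is meant to convey.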

Key to the motivation of our authors is a mid-twentieth-century cybernetic discovery called “negative feedback,” which is a central feature of the algorithm dispositif. The term is an important one because it describes the mechanisms in and through which the dispositional powers of cynegetic rendering operate. What negative feedback does is eradicate any plausible and useful distinction between true and false data such that all movement in the universe is available as data.

In its heyday, the science of cybernetics devised a signal/response reflex calculation that made it possible to receive a correction on the trajectory of missiles once launched. This became the basis of Allied radar technology that made it possible to track, adjust, and capture targets in real time. Here is how the cyberneticists Norbert Wiener, Arturo Rosenblueth, and Julian Bigelow define the term negative feedback in their classic 1943 paper, “Behavior, Purpose and Teleology”: “The term feed-back,” they explain, “is also employed in a more restricted sense to signify that the behavior of an object is controlled by the margin of error at which the object stands at a given time with reference to a specific goal. The feed-back is then negative … the signals from the goal are used to restrict outputs which would otherwise go beyond the goal.” (Rosenblueth et al.) We can appreciate from this definition how little truth and falsity – that is, representational accuracy – matter to the scenario. The orientation is not one of confirming or denying the truth of a statement, of a belief, of values, or even of the identity of the target. Output is neither a teleological truth nor a logical conclusion; it is a perpetually revisable outcome that offers a probability, not a truth.
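A toy illustration may help fix the idea. The Python sketch below is a simplified proportional correction loop, not the cyberneticists’ actual predictor: the target path and the gain are invented, and at each step the margin of error with reference to the goal is fed back to restrict the next output, so the output remains a revisable approximation rather than a conclusion.

```python
# A toy negative-feedback loop: the signal fed back is the margin of error
# between the current output and the goal, and that error is used to correct
# (restrict) the next output. The goal itself drifts, so the output perpetually
# adjusts rather than arriving at a final value.

def track(goal_path, position=0.0, gain=0.5):
    outputs = []
    for goal in goal_path:
        error = goal - position      # margin of error with reference to the goal
        position += gain * error     # correction proportional to that error
        outputs.append(position)
    return outputs

# Hypothetical drifting target
goal_path = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]

for step, (goal, out) in enumerate(zip(goal_path, track(goal_path)), start=1):
    print(f"step {step}: goal = {goal:.1f}, output = {out:.3f}")
```

Nothing in the loop asks whether any reading is true or false; every signal, accurate or not, is simply more material for the next correction.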

I’m not here to discuss the merits or lack thereof of the cybernetic paradigm or the moral validity of the use of predictive algorithms. What I do wish to note is the extent to which perpetual movement (as we see in the BlackBerry YouTube video) is central to the cybernetic operation, and how this is the source of data. More to the point, the extraction of data is not the result of human judgment and control (as we tend to assume, relying on our fears of the pervasiveness of Orwell’s Big Brother) but of calculations of cynegetic rendering that include all of the elements I’ve discussed thus far. In short, cynegetic rendering operates as the power of the algorithm dispositif, and this has nothing to do either with surveillance or with controls on free speech. The algorithm’s powers are not the same as the powers of domination; they are dispositional.

Dispositional powers differ from the controlling powers of domination in that they do not express will or intention. They are not intentional because they are tendentious. (Mumford and Anjum) By this I mean that dispositional powers are those types of powers that lean us toward something, in the way that we tend towards a musical preference or an affection. If an ideological apparatus has, as its correlate, the controlling effects of a sovereign Big Brother, then the dispositional powers of the algorithm have, as their correlate, the relational powers of the sentiments. Unlike the causal necessity that characterizes control in the surveillance assemblage, dispositional powers are multifaceted and undetermined forces of attachment and/or detachment. They are multifaceted and undetermined, I would add, because there is no single or necessary cause for their manifestation.

The dispositional powers of cynegetic rendering correlate patterns of data that include animal urges, habits, and actions; but also weather conditions, time of year, road conditions, keystrokes, previous likes, and so on and so forth. None of these, I would suggest, provide the kinds of violations of individual privacy we typically associate with the coercion of free speech in the surveillance assemblage, because we typically don’t account for or define the human individual in terms of these granular and stochastic data points. Relatedly, unlike the surveillance assemblage, the dispositional powers of the algorithm dispositif are not teleological in the way we imagine George Orwell’s Big Brother to be: they are not oriented toward the production of a specific type of person or set of beliefs. This is because an output rendered by an algorithm is a probability that exists in a condition of perpetual recursivity and thus perpetual update. The result is that when living in the world of the algorithm dispositif we do not barter in representations (whether true or false) but in rendered outputs.

I began my reflections with the provocation that the languages and theories of liberal democratic political life are at once misplaced and misdirected in relationship to the dispositional powers of the algorithm dispositif. I hope that I have been able to persuade you, my reader, that the steadfast concepts on which we so willingly rely – free speech, privacy, and even sovereign selfhood – are precisely the conceptual tools, psychological ideals, and political values that facilitate the maximum functioning of the algorithm dispositif. Thus, once again, our challenge: what is the difference between representation and rendering, and how can we think critically about technologies of output rendering that don’t make any representational claims whatsoever?

Davide Panagia is Professor and Chair, Department of Political Science, UCLA.

 

 

 

References

 

Chamayou, Grégoire. Manhunts: A Philosophical History. Princeton University Press, 2012.

Kahneman, Daniel, et al. Noise: A Flaw in Human Judgment. Little, Brown, 2021.

Mumford, Stephen, and Rani Lill Anjum. “Dispositional Modality.” Deutsches Jahrbuch Philosophie 02: Lebenswelt und Wissenschaft, edited by Carl Friedrich Gethmann, Felix Meiner Verlag, 2011, pp. 380–94.

Rosenblueth, Arturo, et al. “Behavior, Purpose and Teleology.” Philosophy of Science, vol. 10, no. 1, 1943, pp. 18–24.

 
