
What’s Democratic About ‘Democratizing AI’?

By invoking the language of democracy without considering how best to ensure collective governance, tech companies fail to offer a truly democratic AI.

Image: Visitors to Selfridge’s store in London admiring an electric robot that answers questions and tells fortunes, installed for the store’s twenty-fifth anniversary, 1934. (Photo by Imagno/Getty Images)
Diana Acosta-Navas, Henrik Kugelberg, Ting-An Lin, Lorenzo Manuali, Rhiannon Neilsen, and Rob Reich


There was a time when everyone could get their hands on some uranium.

Back in the 1950s, parents could buy their kids the Gilbert U-238 Atomic Energy Lab — a real lab kit for measuring the radioactivity of certain elements, including uranium.

Today, we’re grateful for rules preventing such a democratization of uranium. Not because of its destructive power — the science kits were never going to enable the production of nuclear weapons — but because of the danger that widely disseminated radioactive materials pose to us all.

Similarly, 3-D printing has democratized guns — giving anybody with a printer and the necessary raw materials the ability to produce plastic firearms that might pass metal detectors at security checkpoints (for example, at airports).

In other words, democratic access to something is not straightforwardly a good thing.

That’s something the tech industry seems to ignore.

The tech industry has made “democratizing AI” a mantra for doing good. Most recently, Facebook’s parent company Meta announced that it was making its large-scale language model accessible to a broad audience of AI researchers across the world. Its announcement trumpeted the decision to release the pretrained models, code, and logbook as “democratizing AI.”

Like the Atomic Energy Lab and 3-D printers, this release seeks to make something accessible to a broader public. The underlying idea of democratization is (apparently) that everyone should have access to and be able to use AI. In practice, this can mean a variety of things — from reducing the computing power needed to run AI models, to providing tools and interfaces that make AI easier to use, to educating the public about AI.

When Meta says it is “democratizing AI,” this is what — and, crucially, all — it is saying.

However, like access to uranium and guns, broad access to large AI models may have serious negative consequences. Among other things, these models may be used to generate and propagate disinformation campaigns, powered by image and video generation and tailored to create social divisions or, in some instances, to undermine the democratic process; they may provide accurate instructions for cooking methamphetamine; and they may produce nonsense under the guise of scientific statements.

Democratizing access to AI is not necessarily a bad thing. The risks and benefits of open-source AI models are of interest to developers and academics alike, and frameworks for the responsible release of models are currently being developed. But if democratizing access to uranium and guns teaches us anything, it’s that broader access might not be a good thing in all situations. What’s more, tech companies have not exactly been great at avoiding unintended consequences in the past.

We therefore have reason to carefully scrutinize calls from the tech industry to democratize AI. As others have noted, “democratizing AI” has multiple conflicting meanings. Reducing the idea of democratization to increased access misconstrues what is valuable about democracy. At best, what it offers is only a partial realization of democratic ideals. After all, democracy is about much more than access. At the heart of democracy is the idea that there is great value in having a fair, free, and equal collective decision-making procedure for issues of public concern.

But hold on, you might say. Yes, thinking of democratization only as access might leave open the possibility of bad outcomes. But what about democratizing the benefits of AI? If that is what democratization achieves, maybe we don’t need to worry about anything more.

In line with this, “democratizing AI” has been used in the tech sector as a call for “AI for social good.” The idea is that the benefits of AI should accrue to a wider swath of society. How could increasing access to AI, while ensuring that its benefits are widely distributed, be a bad thing?

The problem is that what the “social good” refers to is always underspecified. Who decides what is “socially good”? What exactly does the distribution of social benefits look like? By what means are social benefits distributed, and by whom? To make such decisions, we need democratization in the form of collective decision-making, or governance, to weigh, sort, and prioritize various people’s ideas of the social good.

By invoking the language of democracy without considering how best to ensure a fair and equal institutional design for the collective governance of AI, tech companies fail to even approximate a plausible vision for a truly democratic AI.

We — including tech corporations — therefore need to understand what it means to “democratize AI” in a way that does not merely pay lip service to democratization but, as others have noted, takes the concept of democratization seriously. We need an understanding of “democratizing AI” that actually guides us toward long-lasting, institutionalized, democratic structures for AI governance. On this understanding, “democratizing AI” means that the design and development of AI take the perspectives of multiple stakeholders into account in a fair and equitable way. Such democratic governance has value in and of itself. But it may also help us achieve the goal of increasing the number of beneficiaries. Or, at the very least, it may help us avoid the worst harms of AI, since democratic governance is particularly good at averting very bad outcomes.

There have been promising initiatives seeking to pivot “democratizing AI” toward collective governance, with projects such as Collective Constitutional AI, Democratic Fine-Tuning, and Recursive Public receiving funding from OpenAI, Anthropic, and other sources. But to assess how each proposal measures up to democratic ideals, we need to be clear that, when we talk about “democratizing AI,” we’re talking about collective governance.

Let’s not wait for an AI Chernobyl. We can put democracy back into democratizing AI.


Author bios (in alphabetical order):


Diana Acosta-Navas is Assistant Professor of Business Ethics at Loyola University Chicago.

Henrik Kugelberg is a British Academy Postdoctoral Fellow at the London School of Economics.

Ting-An Lin is Assistant Professor in the Department of Philosophy at the University of Connecticut.

Lorenzo Manuali is a Graduate Student in the Department of Philosophy at the University of Michigan.

Rhiannon Neilsen is the Cyber Security Fellow at the Center for International Security and Cooperation (CISAC) at Stanford University.

Rob Reich is Professor of Political Science and, by courtesy, Professor of Philosophy and at the Graduate School of Education at Stanford University, where he is co-director of the Center on Philanthropy and Civil Society and associate director of the Institute for Human-Centered Artificial Intelligence.
