Democracy and AI Collision: Navigating the Crossroads of Power and Technology

Adrian Zidaritz
4 min read · Jan 3, 2024

Two trends have intensified in the US during the past few years. Democracy has been trending downward: a significant percentage of Americans are willing to risk it altogether and support Donald Trump, the former president whose authoritarian impulses are by now clear. Even if the threat to democracy does not materialize in the 2024 presidential election, Americans have shown a surprising tolerance for authoritarian demagoguery. Artificial Intelligence (AI), meanwhile, has been trending upward, as exemplified by the spectacular success of Large Language Models (LLMs) like ChatGPT. The two trends are poised to collide if the democratic one continues downward. Can this collision be avoided?

AI is used daily on social media to distort facts and inflame opinions. But that usage is small compared with more potent, more institutionalized ways AI could be used to wreck our democracy. We will take LLMs as an example; to understand how they can be turned to non-democratic agendas, we need to look at the two distinct phases of building one.

In the first phase, a raw model is trained on an enormous dataset, practically all the text that exists in digital form on the Internet. This training makes such a raw model extraordinarily good at language tasks, like answering questions and generating very persuasive content.

It is the second phase, though, that matters most for this article. Because a raw model would give its users effective instructions for achieving any goal (including very destructive ones), human annotators fine-tune it during this second phase to produce text that is better aligned with human values. This fine-tuning is what led to the wide acceptance of LLMs. However, nothing prevents a group of people from doing the opposite: aligning a powerful raw model with a strongly anti-democratic agenda. Now imagine that group of people inside our governing institutions.
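To make the two-phase idea concrete, here is a deliberately tiny sketch in Python. It is not a real LLM: the "raw model" is just a table of hypothetical scores standing in for what pretraining learned, and "fine-tuning" nudges those scores with human preference labels. The point it illustrates is the one made above: the same mechanism that aligns a model with human values could, with the labels flipped, align it with any other agenda.

```python
def pretrain():
    # Phase 1 stand-in: scores a raw model might assign to candidate
    # completions after training on web text (hypothetical numbers).
    return {"helpful answer": 0.6, "harmful instructions": 0.7}

def fine_tune(model, preferences, step=0.5):
    # Phase 2 stand-in: shift each completion's score toward its human
    # label (+1 = endorsed by annotators, -1 = rejected).
    return {resp: score + step * preferences[resp]
            for resp, score in model.items()}

def best_response(model):
    # The model "says" whatever it currently scores highest.
    return max(model, key=model.get)

raw = pretrain()
aligned = fine_tune(raw, {"helpful answer": +1, "harmful instructions": -1})

print(best_response(raw))      # the raw model favors the harmful output
print(best_response(aligned))  # the fine-tuned model favors the helpful one
```

Flipping the two preference labels would produce a model that favors the harmful output even more strongly, which is exactly the risk the article describes.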

At least two actions can be taken to avoid a collision of democracy and AI in the longer term. First, a stronger Congress would be needed to proactively drive AI regulatory legislation (including constitutional amendments) rather than relying on self-regulatory initiatives from high tech. Second, our education system would have to change in order to promote stronger citizenship values. Regulatory action would set the rules that any deployed AI system has to satisfy, and citizenship values would ensure that the people developing and using AI systems do so within constitutionally democratic bounds.

Unfortunately, governmental power has gradually shifted away from the legislative branch (Congress) toward the executive (the President) and, to a lesser extent, toward the judicial (the Supreme Court). The result of this shift is that guardrails for AI are now being set by executive order instead of congressional legislation (see President Biden's executive order of October 30, 2023).

To shift some power back toward the legislature and ensure the longer-term viability of both our democracy and our AI systems, constitutional amendments would be needed, including clearer limits on executive power and an AI Bill of Rights. With AI rising rapidly and touching every aspect of our lives, we may want to revisit Thomas Jefferson's design for a more frequently amendable Constitution; even the nineteen or twenty years he proposed between constitutional revisions may not be frequent enough to keep pace with the changes AI will bring.

Now consider the second action. The current polarization in America, and the resulting danger to our democracy, is no longer based on income levels or economic classes as it was in the past; it is based on a growing gap in education levels. The 2016 presidential election demonstrated this clearly, with less educated counties voting for Trump and more educated counties voting for Hillary Clinton, and the split persisted through the 2020 election.

One might think that, with the accelerating march of AI, it is STEM education that needs more attention, and given escalating global competition in AI, that thought is understandable. But strengthening STEM alone would be insufficient if the collision between democracy and AI is to be avoided.

As shown above with the second phase of training an LLM, it is humans who align AI with their values, and that includes democratic values. Strengthening the humanities in order to fortify the character of younger citizens would therefore have to be given higher priority in school curricula. Just as the neural networks inside a raw LLM need alignment with human values, the neural networks inside young brains need instruction in good citizenship and in the participatory nature of both democracy and AI.

It is almost certain that a new kind of citizen will emerge in the near future: a citizen endowed with a personal AI assistant running on personal devices, in cyberspace, or, more probably, both. Much work is needed to ensure that this new citizen strengthens the Union. Despite the alarmist tendencies of the past year, for a long time it will still be human intelligence driving AI technology, not the other way around. Better educated citizens, making positive use of their AI assistants and staying in the driver's seat, should allow our democracy to survive and avoid a collision with AI.



Adrian Zidaritz

Computer Scientist. PhD Mathematics UC Berkeley, 1992. Writes about the impact of artificial intelligence on democracy, human values, and your own well-being.