Why the Struggle for Safe AI Must Be Political

This post was written by Cansu Kutay and is cross-posted from our Substack. Kindly read the description of this sequence to understand the context in which this was written.

When I first set out to write this piece, I struggled with where to begin because the topic could be approached from so many angles. It could be framed through the lens of Big Tech, data privacy and surveillance, labour and energy costs, ethics and economics, or through concrete outcomes of AI already being deployed in society. The topic is far too wide to tackle in a short blog post, but that very difficulty convinced me of the need to at least introduce why this struggle must be politicized. When it comes to AI safety, we must take our security into our own hands.

The growing influence of AI presents us with a fundamental challenge that extends beyond technical solutions. The struggle for safe AI is inherently a political struggle that demands societal transformation. AI systems are sociotechnical in nature (Kudina & van de Poel, 2024): they influence, and are influenced by, societal dynamics and human behaviour. AI safety cannot be understood in isolation; it must be viewed within its broader societal, ethical and governmental context. Right now, safety is an afterthought, when it needs to be integrated into the whole development process. The technical solutions we have today—red-teaming, robustness testing, value alignment—are impactful, but they are short-term measures. AI safety requires significant transformation in communities and governments. Technical safeguards can mitigate risks, but without broader shifts in values, governance, and culture, we cannot address the deeper societal consequences posed by AI.

To tackle this subject, I will begin by exploring what politics itself is, how AI will reshape many aspects of human existence, and why democracy demands that citizens have a voice in governing technologies that will profoundly affect their lives.

What is Politics?

According to Wikipedia, an overall definition of politics could be summed up as “the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.” Many political philosophers offer their own definitions, but they largely converge on the idea that politics is the sphere where collective decisions are made to resolve problems that concern everyone living together in a community. This is the broad definition I will use throughout this post.

AI’s Impact on Human Life

AI is the technology upon which our future is being built. It intersects with so many aspects of our lives: healthcare, housing, education, agriculture, transportation, love and death (Webb, 2019). These are not marginal issues; they are fundamental human needs, and AI is increasingly entangled in how they are met.

AI is reshaping labour markets across many sectors, in ways that differ from past technological revolutions. The AI revolution risks displacing knowledge workers and professionals, much as the Industrial Revolution cost many skilled artisans their livelihoods. However, the Industrial Revolution unfolded over an extended period of time, whereas the AI revolution will happen at digital speeds (Brady, 2025). This shift forces us to confront political and ethical questions: How should welfare be distributed when machines perform tasks once reserved for human expertise? What will work and economic distribution look like in the future?

Additionally, this means that decision-making is delegated to AI even in situations concerning people’s lives and livelihoods. Screening and hiring for jobs, health insurance eligibility, and loan and credit approvals are decisions that can make or break people’s lives, and they are increasingly delegated to AI systems. More directly, militaries are experimenting with AI advisors that recommend battlefield strategies, including where and whom to target. As described by The Guardian, companies like Palantir train tools known as ISTAR systems (intelligence, surveillance, target acquisition and reconnaissance) that aid in stripping people of their human rights: mass surveillance, forced migration and urban warfare. The use of these tools in actual war zones and against migrants in the US shows that minorities will suffer first and most from the consequences of AI. Yet accountability for these machine-driven decisions is still in a gray zone. The speed of AI’s integration into decision-making outpaces the speed at which we govern it.

Meanwhile, our social lives and interactions online are themselves filtered through AI algorithms. Recommendation systems on social media decide what information we encounter and shape how we debate and discuss public issues. In doing so, they influence not only individual perspectives but the very structure of democratic deliberation. These opaque algorithms raise concerns about who controls information flows, privacy and polarization.

AI development also affects the environment. Training and running large AI systems consume enormous amounts of energy and water, which are diverted from ecosystems and local communities. As Webb (2019) describes, this “nowist” approach seeks short-term innovation and solutions rather than making the deep investments we need in a sustainable future.

The ownership of AI systems adds another dimension to the issue. A few corporations hold disproportionate power over systems that touch the most fundamental aspects of life. If control over food, housing, health, and social interaction flows through AI, and control of AI is in private hands, then the question of ownership becomes a central question of political power.

AI’s pervasiveness across so many domains that concern basic human rights means that decisions about AI development, deployment and regulation are not merely technical choices but political decisions that will shape the conditions of human existence. AI safety is therefore a collective action problem, one that calls for all of us to be involved in decision-making about AI systems, to make sure they serve our collective interests and represent our shared values.

Political Nature of AI Safety

Who gets to make choices about AI development? Who benefits from AI systems? Who bears their risks? These are political questions about the distribution of resources, opportunities and inevitable burdens in society. Current AI governance concentrates decision-making power in the hands of big technology companies and state actors, while the risks of these systems are distributed across society.

Deciding what constitutes “safe” AI is a political choice that determines how society will be organized, because it raises many sub-questions: whether AI systems should prioritize efficiency over privacy, and whether they should be sustainable or optimized for economic growth. These are not technical questions with objective answers, but political decisions that require democratic participation.

Democratic Participation

We can analyse AI safety through the work of the democratic theorist Robert Dahl. He argued that democracy’s justification rests on what he called the “all-affected principle”: the idea that all of those affected by the consequences of decisions should have equal opportunities to take part in making those decisions. His theory of “polyarchy” emphasized contestation (the right to oppose decisions) and inclusiveness (participation in decision-making) (Dahl, 2008).

This principle becomes particularly important for AI safety since, as analysed above, AI systems will affect virtually everyone in the world while being developed and controlled by a few actors. Dahl’s theory of polyarchy suggests that democracy is only legitimate if all relevant groups are included and taken into account. Thus, safeguarding AI safety means ensuring input from all concerned parties, preventing the dominance of the few actors who own AI systems, and ensuring marginalized voices are heard and represented.

Dahl also argued that for citizens to be effective democratic agents, they must understand the issues at stake. This means algorithmic transparency needs to be prioritized, AI and technical literacy need to be integrated into education, and accessible explanations of algorithmic decisions need to be produced. Without such understanding, citizens cannot challenge or influence decisions, and therefore cannot legitimately take part in governing AI.
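To make the idea of an “accessible explanation” concrete, here is a minimal sketch of what one could look like for an automated loan decision. Everything in it is hypothetical: the features, weights, and threshold are invented for illustration and do not describe any real scoring system. The point is that a decision system can report not just its output but which factors pushed the outcome, and by how much.

```python
# A minimal, hypothetical sketch of an "accessible explanation" for an
# automated decision. The features, weights, and threshold are invented
# purely for illustration; no real credit-scoring system is described here.

FEATURE_WEIGHTS = {
    "income_to_debt_ratio": 2.0,   # a higher ratio helps the application
    "years_employed": 0.5,         # longer employment helps
    "missed_payments": -1.5,       # each missed payment hurts
}
APPROVAL_THRESHOLD = 3.0

def decide_and_explain(applicant: dict) -> str:
    """Score an applicant and return a plain-language explanation."""
    # Each feature's contribution is its weight times the applicant's value.
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    verdict = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    lines = [f"Decision: {verdict} (score {score:.1f}, threshold {APPROVAL_THRESHOLD})"]
    for name, value in ranked:
        direction = "helped" if value > 0 else "hurt"
        lines.append(f"- {name.replace('_', ' ')} {direction} your application ({value:+.1f})")
    return "\n".join(lines)

print(decide_and_explain({
    "income_to_debt_ratio": 1.2,
    "years_employed": 4,
    "missed_payments": 2,
}))
```

Even this toy example makes a decision contestable: an applicant who can see that missed payments dominated the outcome knows exactly what to dispute or correct, which is the kind of understanding Dahl’s effective democratic agent requires.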

Lastly, as mentioned, Dahl’s model of polyarchy views contestability as essential. AI safety therefore needs to include direct accountability mechanisms. We need to be able to hold AI systems, alongside the people and companies responsible for them, accountable for their decisions, so that we can oppose those decisions or have a say in them.

AI is especially hard to govern because it is a revolutionary technology evolving at very high speed. Its outcomes are hard to predict because of its novelty, and private actors can deploy new features before traditional representative mechanisms have time to provide regulatory guidelines. This is why broad participation in policymaking can be beneficial. Because so many people are affected by the outcomes, their input can help policymakers craft more thorough and comprehensive laws that address possible harms earlier, instead of waiting for negative outcomes to prompt revisions. Giving a voice to the majority also fosters democratic legitimacy, social understanding of the issues and trust in government, strengthening the long-term impact of regulations (Ter-Minassian, 2025).

A common counter-argument to this principle is that laypersons do not understand the technical side of AI and therefore cannot make sound judgements about governing it.

In response, I would draw on Barnett et al. (2025), who maintain that lay stakeholders must play a greater role in governing AI. Their paper demonstrates an inclusive method for gathering lay stakeholder insights to inform AI policymaking. They note that relying solely on expert knowledge creates several problems: bias from profit-driven incentives, developers overlooking the real impacts on end users, and a focus on foreseeable risks alone, which makes governance slow to change and reactive to consequences rather than preventative. Without lived experience, assessments of AI systems lack both breadth and contextual depth. In contrast, including stakeholder perspectives brings a wider range of insights into potential consequences, making governance more responsive, grounded in social reality and preventative. This is not to diminish the importance of technical expertise in AI safety, but to argue for governance systems that combine technical competence with democratic participation.

What Can We Do?

When the Industrial Revolution displaced many labourers, policy shifts ensured that these individuals could get training and find industrial jobs. Labour unions emerged to fight for workers’ rights, including better wages and safer conditions. Today, we face a similar situation with AI. We need policy changes and AI literacy training from our governments, but we also need to unionize to fight for our own rights when governments are not responsive enough.

As individuals, we can take powerful steps to stay informed, raise awareness, educate one another, and express how we want AI to serve society. Several studies have proposed ways to involve citizens in the governance of AI. Some of these ideas are:

  • Citizen assemblies made up of randomly selected citizens who receive AI education and deliberate AI policies.

  • Multi-stakeholder governance bodies that bring together both technical experts and civil society groups that include affected communities.

  • Greater transparency in AI decision-making through algorithmic auditing (see the sketch after this list) and open democratic debate.

  • Participatory technology assessment that gives citizens a role in evaluating emerging technologies and their wider societal impact.
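To give a flavour of what algorithmic auditing can mean in practice, here is a minimal sketch of one common type of audit: comparing a system’s approval rates across groups. The records, group labels, and disparity threshold are all hypothetical, invented for illustration; real audits are far more involved, but the principle of inspecting outcomes rather than trusting intentions is the same.

```python
# A minimal, hypothetical sketch of one kind of algorithmic audit:
# comparing approval rates across groups. The records, group labels, and
# disparity threshold below are invented purely for illustration.

from collections import defaultdict

def approval_rates_by_group(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions is a list of (group_label, was_approved) pairs."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # group -> [approved, seen]
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {group: approved / seen for group, (approved, seen) in totals.items()}

def flag_disparities(rates: dict[str, float], max_gap: float = 0.2) -> list[str]:
    """Flag any pair of groups whose approval rates differ by more than max_gap."""
    groups = sorted(rates)
    return [
        f"{a} vs {b}: approval gap of {abs(rates[a] - rates[b]):.0%}"
        for i, a in enumerate(groups)
        for b in groups[i + 1:]
        if abs(rates[a] - rates[b]) > max_gap
    ]

# A toy log of decisions: 80% approval for group_a, 40% for group_b.
sample = [("group_a", True)] * 8 + [("group_a", False)] * 2 \
       + [("group_b", True)] * 4 + [("group_b", False)] * 6

rates = approval_rates_by_group(sample)
print(rates)                    # {'group_a': 0.8, 'group_b': 0.4}
print(flag_disparities(rates))  # ['group_a vs group_b: approval gap of 40%']
```

Audits like this only work if outside parties can actually run them, which is why transparency and data-access obligations are a precondition for the contestability Dahl describes.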

I want to conclude by saying that, although the risks are great, AI holds considerable promise. We have already witnessed its transformative benefits across medicine, education, science and countless other domains, and there is no doubt that it can radically enhance our quality of life. But this trajectory will not shape itself. The rise of extremist political leaders and the growing concentration of power make it especially dangerous to leave this technology unchecked in their hands. A technology moving at such speed cannot be left to private interests and reactive policies. As we citizens consistently lose power over the decisions that shape our lives, we have to reclaim democratic control over how AI is developed, deployed and governed. We need a societal shift to make technology for humanity rather than for profit. This means mobilizing effectively, educating ourselves and one another, and demanding transparent and inclusive mechanisms of accountability. The challenge is political, but so is the promise. If we treat AI not just as a technical tool but as a matter of collective agency and shared responsibility, we can turn it into a force that strengthens democracy and society rather than weakening them.

References

Barnett, J., Kieslich, K., Helberger, N., & Diakopoulos, N. (2025, June). Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (pp. 1424-1449).

Brady, K. (2025, July 16). Council Post: Steam to Silicon: Comparing the Industrial Revolution and the AI age. Forbes. https://www.forbes.com/councils/forbesbusinesscouncil/2025/07/16/steam-to-silicon-comparing-the-industrial-revolution-and-the-ai-age/

Dahl, R. A. (2008). Polyarchy: Participation and opposition. Yale University Press.

Fung, A., & Gray, S. W. D. (2024). The All-Affected Principle: A Pathway to Democracy for the Twenty-First Century? In A. Fung & S. W. D. Gray (Eds.), Empowering Affected Interests: Democratic Inclusion in a Globalized World (pp. 1–18). Cambridge: Cambridge University Press.

Kudina, O., & van de Poel, I. (2024). A sociotechnical system perspective on AI. Minds and Machines, 34(3), 21.

Ter-Minassian, L. (2025). Democratizing AI Governance: Balancing Expertise and Public Participation. arXiv preprint arXiv:2502.08651.

Webb, A. (2019). The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. PublicAffairs.