
Why The AI Race Has No Place In Public Policy


Though AI safety is often perceived as an overly nihilistic field of study, it has become an increasingly prevalent feature of international policy debates, ranging from AI's use in courts and parliaments to its role in the defence industry. AI safety has entered public discourse through concerns over unemployment, AI misuse, and the potential for critical systems failures or bias. In a European context, the EU is explicitly seeking a uniquely European approach to developing and investing in AI tools, one that prioritises digital sovereignty and data privacy.


How valid are concerns surrounding AI safety and sovereignty in a world where the rise of Artificial Intelligence (AI) has been characterised as a bubble or boom, and where sceptics of its ubiquity point to its limitations? It is important to note that advocates for AI safety acknowledge its potential benefits for humanity, and in fact seek to maximise these benefits by mitigating the risks that rapid development presents. These benefits, especially when applied to governance and social infrastructure, could speed up data processing tasks and increase administrative efficiency, particularly given the notoriously slow pace of EU bureaucracy as a whole. Yet concerns about using algorithms to evaluate social issues such as housing, debts, and general risk were raised well before the rise of AI in late 2022. Automating these processes, and the reasoning behind them, has severe personal and human consequences, especially in determining zoning or eligibility for welfare benefits. Thus, some argue that the unquestioning and near-ubiquitous use of AI in making social and financial decisions will exacerbate these issues.


The fear of AI doing the work of humans is a salient one, given the well-worn debates on the ultimate purpose of AI tools: are they here to help us or to replace us? AI safety discourse often suffers from ascribing overly anthropocentric characteristics to Large Language Models (LLMs) and other AI tools, predicated on somewhat real fears over AI "sleeper agents". The biases implicit in these models reflect common human or societal perspectives, as demonstrated by the inability of algorithms to transcend ingrained biases (such as racial ones) when making social decisions, for example on housing. Research has demonstrated that if AI tools gained agentic capabilities (which is in fact a goal of various developers), they would also gain more advanced capacities to respond to, react to, or reject the instructions they are given. This is why EU policy advocates for values being instilled into AI tools, so that even if they become agentic, they remain under the control and for the use of human actors. However, the explicit addition of social and political values into AI training opens the possibility that AI tools reject these values outright, insofar as an AI might reject human control and the values that come with it.


Does treating AI as hostile or a potential risk make it more likely to become one? Some scholars argue that the more humans attempt to control AI amidst rapid developmental goals and fierce international competition for advancement, the more compliance issues agentic AI could run into. In a philosophical sense, AI is not an existential risk to humans unless we let it become one. For example, the EU has determined that it is falling behind in the "AI Race", that it is struggling to produce LLMs that reflect its values, and that it is therefore forced to use American tools like ChatGPT or Claude for daily operations. To this, I pose a simple question: who is forcing European governments and publics to use under-researched AI tools in their operations? No one. Moreover, what are the ethics of developing technology explicitly designed to promote a certain set of nationally oriented values through the speech acts or risk assessments produced by LLMs? None of this means that AI should be blindly adopted in corporate or governmental settings; rather, if scholars focused on AI safety treat AI as unpredictable, what place does it have in critical decision-making?


What does this mean for public policy surrounding developing AI? Firstly, ubiquitous AI use is not inevitable. In fact, if we see AI as an existential risk, we should be sceptical of it, rather than view ourselves as regressive for refusing to let it permeate our lives. This individual level of analysis is critical at this stage in AI development: policymakers can choose whether or not to use AI to synthesise information and assess risk, lawyers can regulate how much they rely on tools like Harvey, and governments can choose how they use language to securitise the existing "AI race". We do not yet know what agentic AI might look like, which makes it dangerous to incorporate into social welfare frameworks or to feed freely with sensitive data, no matter how closed the datasets behind specific LLMs might be, or what values any individual chatbot might reflect.




Image: Flickr/DSIT (Alecsandra Dragoi)
