Weighing the Great AI Pause
“We agree. That point is now.”
In March, more than 1,000 tech experts, including Elon Musk and Apple co-founder Steve Wozniak, published a call for a six-month moratorium on further development of AI systems more powerful than GPT-4, citing what they perceived to be a potential runaway risk without sufficient guardrails. In May, Geoffrey Hinton, known as the “Godfather of AI,” resigned his post at Google so he could speak freely about the technology he helped create.
In the open letter, the signatories wrote:
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. … At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We agree. That point is now.”
In 2022, the University of Florida signed on to the Rome Call for AI Ethics, a commitment to “digital innovation and technological progress that serves human genius and creativity and not their gradual replacement.”
We asked a panel of faculty AI experts at the Herbert Wertheim College of Engineering (HWCOE) to discuss the implications of this proposed pause.
Can you explain why 1,000 tech leaders are calling for a six-month pause on AI development? What are their concerns and who would be impacted by such a pause if it were put into place?
My T. Thai, Ph.D., professor in the Department of Computer and Information Science and Engineering (CISE) and associate director of the Nelms Institute for the Connected World: There appears to be a race in this development in which no one fully understands how these powerful systems work, how they will be used or misused, and how to assess their risks. These powerful AI systems should therefore be developed only once we are confident that their effects on society and humanity will be positive and their safety can be verified.
Lisa Anthony, Ph.D., associate professor and director of Intelligent Natural Interaction Technology in the Department of Computer and Information Science and Engineering: The tech experts are concerned about what we would call ‘emergent behavior,’ actions or output by AI systems that weren’t predicted by the original developers. They are particularly concerned that, without a global agreement on ethical regulations around AI, the technology could be exploited by bad actors to harm others. The tech experts signing on to the recent call for an AI pause argue that the potential impacts of AI on humanity are worth a global conversation before the technology becomes too fragmented and ubiquitous, and its negative broader impacts can no longer be contained.
As AI experts and ethics scholars, how do you propose we go about addressing these concerns, whether through research, curriculum or other means?
Barbara Evans, Ph.D., J.D., holds a dual appointment as a professor in both the Levin College of Law and the Herbert Wertheim College of Engineering: Apart from the Rome Call for AI Ethics, UF scholars have contributed to the White House Office of Science & Technology Policy’s development of the Blueprint for an AI Bill of Rights, published in October 2022, and faculty are providing input into specific policy solutions for the particular types of AI systems they work with. I work with AI/machine learning medical software, and my work focuses on (1) data privacy protections for people whose data is used to train these tools, and (2) the federal and state regulatory frameworks for ensuring that these systems are safe and effective in clinical use and as free of bias as possible.
(MT): At UF, we have been addressing these concerns through our research. For example, my lab has been working on trustworthy and explainable AI, which aims to make AI systems more interpretable and responsible. UF Professor Duncan Purves recently hosted an AI ethics panel where we discussed our response in light of the Rome Call.
(LA): UF has been positioning itself as the top institution for embedded AI curriculum, and the faculty here are on the ground floor both of developing the fundamental technologies underlying AI and of studying the equally important question of how AI interacts with people and their data. Faculty all across campus, not just in computing departments, are engaging with the broader implications of AI for society. We are also continuing to grow our team to bring in some of the world’s top minds in this area. In particular, the CISE department is currently hiring for the Glenn and Deborah Renwick Leadership Professor in AI and Ethics, seeking a senior thought leader in this space who will help us shape the conversation.
What do you think a six-month moratorium might accomplish in terms of addressing the concerns? What do you surmise will be taking place in the tech/research sector during this time, and what might be the basis for a green light to proceed?
(MT): I think the call is more or less an attempt to refocus our full attention on responsible AI: to tell us to slow down the race to build AI systems more powerful than GPT-4, and to concentrate instead on making modern AI systems more understandable, safe, self-assessable and trustworthy. The call is not really about the pause itself; six months is obviously not long enough to accomplish such a complex goal.
(LA): Frankly, I don’t think six months would be long enough to come to a global agreement on how to handle the ethics of AI and data. I also worry that the tech industry is already racing behind the scenes toward the next ‘killer app’ for AI that will put them ahead in this revolution, and it may already be too late to get them to rein this in. However, if the top tech CEOs around the globe could gather for a summit, along with governments and academic institutions, a set of ethical guidelines and regulations around data use and computational model power might be possible to create. Such guidelines would have to carry consequences for violations; for example, governments might agree to ban any AI system found to be noncompliant.
(BE): First, for context, there were similar calls for moratoria back in the late 1970s and early 1980s with recombinant DNA technology, and more recently in 2014-15 after CRISPR gene-editing technology emerged. As with the AI-enabled large language models (LLMs) of today, these new technologies seemed to pose large, maybe even catastrophic, risks that were unknown and unquantifiable. The question always is whether the best stance is to be precautionary or venturesome. There is a whole spectrum of ways to take a precautionary approach.
A moratorium is the most extreme precautionary stance. It presumes two things: that the unquantifiable risks are very serious, and that we can somehow gain insight into how to deal with those risks during the period the moratorium is in effect.
There are several concerns with the proposed six-month moratorium on AI/LLM research. First, many of the risks are “unknown unknowns.” We cannot fully know what the risks are going to be unless we continue developing and researching the technology; it’s a chicken-and-egg situation. Second, it’s not clear whether six months is really long enough to discover what those risks are, to quantify their seriousness, and to implement regulations and social welfare systems aimed at addressing them. In the United States, passage of new legislation by Congress to create a new regulatory framework can take five years or more, and then another five to seven years to get regulations in place.
Third, how do you ensure compliance with a moratorium? Who will comply with it? Will it only halt research by the good actors who are trying to be ethically responsible, while leaving the bad actors to continue making progress with the technology?
When risks are not fully foreseen, the only way to understand them is to proceed cautiously without a “full-stop” moratorium that could halt the research needed to develop appropriate regulatory solutions. We see this approach used in medical technology and environmental regulations in many countries: Governmental agencies grant licenses or permits to conduct research. They require various layers of ethical and safety oversight for the research as well as commitments from researchers to proceed safely and responsibly, subject to defined study protocols that aim to generate better knowledge about the risks the technology poses.
Another approach is to presume a new technology is safe until it proves itself unsafe, but that is a riskier approach because it implies harms may already have occurred by the time you first become aware of them.
If such a pause were put into place, what would it mean for institutions like UF in terms of advancing AI research?
(LA): I would hope industry leaders would recognize the importance of bringing the academic sector into the conversations about AI pauses, regulations, ethics and future development. Unlike industry, which is beholden to stakeholders and profits, academic research has the benefit of exploring such fundamental questions without having a financial stake in the results. We also have access to the next generation’s brightest software developers, business leaders, philosophers and ethicists. Industry should want to bring UF and similar institutions into the room.
(MT): As outlined in the pause letter, the pause would be used to develop and implement a set of shared safety protocols for advanced AI design and development. It does not mean a pause on AI development in general. Indeed, it creates several new opportunities for advancing AI research, including work on standards, regulations and those safety protocols.
(BE): If this moratorium proposal is limited to LLM research that aims to produce systems more powerful than GPT-4, I assume it would not affect much, or even most, of the research that goes on here, e.g., the development of clinical decision support tools for use in health care settings, environmental systems and various other applications.
Given your AI expertise, what do you see as the biggest concerns and challenges in AI, and how should we, as a society, go about addressing them?
(MT): We have witnessed massive adoption of very large AI systems, and as of now there is no provable formula for evaluating their impact and risk, nor for rigorously verifying their safety for the public before deployment. This is a very complex problem that requires us to work together, internationally and across disciplines.
(LA): My current biggest concerns include (1) the misuse of AI ‘deepfake’ technology for spreading misinformation and worse, (2) the use of data-driven AI technology to continue to marginalize the most vulnerable members of society, as has been done in the past through, e.g., facial recognition analysis and statistical redlining, and (3) unbridled popular trust in AI output as accurate, without fact-checking or critical analysis. As with most prior technology revolutions, I think taking a human-centered approach, one that considers the needs, expectations, abilities and impacts of a wide variety of representative users, is essential to creating technology that can improve our lives rather than continue to burden them. For example, in my research, we have focused on understanding how systems can be designed for the unique needs of elementary-aged children, who are often overlooked as ‘first-class’ users of emerging technologies like touchscreens and VR. Similar broad studies will be necessary to ensure AI can benefit all, rather than only a privileged few.
(BE): It concerns me that policymakers sometimes discuss AI as if it were a single, unified phenomenon. In reality, it encompasses a very broad array of computational tools deployed in vastly different contexts, each posing different risks, offering different societal benefits and calling for different levels of precaution. For example, the AI algorithm that drives a robotic vacuum cleaner can probably get by with lighter safety oversight than we would want for an AI tool that will have life-and-death consequences in a health care setting. The best way to advance policy may be for everybody to stop trying to announce one-size-fits-all grand policy solutions that promise to address the risks of all AI in all contexts. Instead, we need to roll up our sleeves and get down to the arduous work of designing oversight frameworks tailored to the specific risks AI poses in each of the many contexts where it will be deployed.