Herndon, VA, May 29, 2025 – Many policy discussions on AI safety regulation have focused on the need to establish regulatory “guardrails” to protect the public from the risks of AI technology. In a new paper published in the journal Risk Analysis, two experts argue that, instead of imposing guardrails, policymakers should demand “leashes.”
Cary Coglianese, director of the Penn Program on Regulation and a professor at the University of Pennsylvania Carey Law School, and Colton R. Crum, a computer science doctoral candidate at the University of Notre Dame, explain that management-based regulation (a flexible “leash” strategy) will work better than a prescriptive guardrail approach because AI is too heterogeneous and dynamic to operate within fixed lanes. Leashes “are flexible and adaptable – just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration,” the authors write. Leashes “permit AI tools to explore new domains without regulatory barriers getting in the way.”
The various applications of AI include social media, chatbots, autonomous vehicles, precision medicine, fintech investment advisors, and many more. While AI offers benefits for society, such as the ability to find evidence of cancerous tumors that well-trained radiologists can miss, it can also pose risks.
In their paper, Coglianese and Crum offer three examples of AI risks: autonomous vehicle (AV) collisions, suicide associated with social media, and bias and discrimination brought about by AI through a variety of applications and digital formats, such as AI-generated text, images, and videos.
With flexible management-based regulation, firms using AI tools that pose risks in each of these settings—and others—would be expected to put their AI tools on a leash by creating internal systems to anticipate and reduce the range of possible harms from the use of their tools.
Management-based regulation can flexibly respond to “AI’s novel uses and problems and better allows for technological exploration, discovery, and change,” write Coglianese and Crum. At the same time, it provides “a tethered structure that, like a leash, can help prevent AI from ‘running away.’”
About SRA
The Society for Risk Analysis is a multidisciplinary, interdisciplinary, scholarly, international society that provides an open forum for all those interested in risk analysis. SRA was established in 1980. Since 1982, it has continuously published Risk Analysis: An International Journal, the leading scholarly journal in the field. For more information, visit www.sra.org.
###
Journal: Risk Analysis
Article Title: Leashes, not guardrails: A management-based approach to artificial intelligence risk regulation
Article Publication Date: 29-May-2025