New risk assessment framework offers clarity for open AI models

There is a debate within the AI community about the risks of widely releasing foundation models with their weights and the societal impact of that decision. Some argue that the wide availability of models like Llama 2 or Stable Diffusion XL is a net negative for society. A position paper released today shows that there is insufficient evidence to effectively characterize the marginal risk of these models relative to other technologies.

The paper was authored by Sayash Kapoor of Princeton University, Rishi Bommasani of Stanford University, me, and others. It is directed at AI developers, researchers investigating the risks of AI, competition regulators, and policymakers who are grappling with how to govern open foundation models.

The paper introduces a risk assessment framework for open foundation models. The framework helps explain why the marginal risk is low in some cases where we already have evidence from past waves of digital technology. It also shows that past work has focused on different subsets of the framework under different assumptions, which helps clarify disagreements about misuse risks. By outlining the necessary components of a complete analysis of the misuse risk of open foundation models, it lays out a path to a more constructive debate moving forward.

I hope this work will support a constructive debate in which the risks of AI are grounded in science and today's reality rather than hypothetical future scenarios. The paper offers a position that balances the case against open foundation models with substantiated analysis and a useful framework on which to build. Please read the paper and leave your comments on Mastodon or LinkedIn.

Original source: opensource.org
