OSI participates in Columbia Convening on openness and AI; first readouts available
I was invited to join Mozilla and the Columbia Institute of Global Politics in an effort that explores what “open” should mean in the AI era. A cohort of 40 leading scholars and practitioners from Open Source AI startups and companies, non-profit AI labs, and civil society organizations came together on February 29 at the Columbia Convening to collaborate on ways to strengthen and leverage openness for the good of all. We believe openness can and must play a key role in the future of AI. The Columbia Convening took an important step toward developing a framework for openness in AI with the hope that open approaches can have a significant impact on AI, just as Open Source software did in the early days of the internet and World Wide Web.
This effort aligns with, and contributes valuable knowledge to, the ongoing process to find the Open Source AI Definition.
As a result of this first meeting of the Columbia Convening, two readouts have been published: a technical memorandum for technical leaders and practitioners who are shaping the future of AI, and a policy memorandum for policymakers with a focus on openness in AI.
Technical readout
The Columbia Convening on Openness and AI Technical Readout was edited by Nik Marda, with review contributions from me, Deval Pandya, Irene Solaiman, and Victor Storchan.
The technical readout highlighted the challenges of understanding openness in AI. Approaches to openness fall into three categories: gradient/spectrum, criteria scoring, and binary. The OSI is championing the binary approach, where AI systems are either “open” or “closed” based on whether they meet a defined set of criteria.
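To make the distinction between the three approaches concrete, here is a minimal, purely illustrative Python sketch. The criteria names, weights, and thresholds are invented for this example; they are not drawn from the readout or from the Open Source AI Definition.

```python
# Hypothetical openness criteria for an AI system. The names below are
# placeholders for illustration only, not criteria from the readout.
CRITERIA = {
    "training_code_released": True,
    "model_weights_released": True,
    "training_data_documented": False,
    "license_allows_modification": True,
}

def binary_openness(criteria: dict) -> str:
    """Binary approach: a system is 'open' only if every criterion is met."""
    return "open" if all(criteria.values()) else "closed"

def criteria_score(criteria: dict) -> float:
    """Criteria-scoring approach: the fraction of criteria that are satisfied."""
    return sum(criteria.values()) / len(criteria)

def spectrum_label(score: float) -> str:
    """Gradient/spectrum approach: place the system on a scale, not a yes/no."""
    if score >= 0.75:
        return "mostly open"
    if score >= 0.25:
        return "partially open"
    return "mostly closed"

print(binary_openness(CRITERIA))                  # -> "closed"
print(criteria_score(CRITERIA))                   # -> 0.75
print(spectrum_label(criteria_score(CRITERIA)))   # -> "mostly open"
```

The same system can look quite different under each approach, which is why settling on one framing matters for any shared definition.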
The technical readout also provided a diagram showing how the AI stack can be described along three dimensions (AI artifacts, documentation, and distribution) across its various components and subcomponents.
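As a rough sketch of that framing, the stack can be thought of as a mapping from components to the three dimensions. The component and subcomponent names below are illustrative placeholders, not a reproduction of the readout’s diagram.

```python
# Illustrative only: example components of an AI stack, each described along
# the three dimensions named in the technical readout.
ai_stack = {
    "model": {
        "AI artifacts":  ["model weights", "training code"],
        "documentation": ["model card"],
        "distribution":  ["license", "hosted API vs. downloadable weights"],
    },
    "data": {
        "AI artifacts":  ["training dataset", "evaluation dataset"],
        "documentation": ["datasheet"],
        "distribution":  ["dataset license", "access terms"],
    },
}

# Print each component/dimension pair with its items.
for component, dimensions in ai_stack.items():
    for dimension, items in dimensions.items():
        print(f"{component} / {dimension}: {', '.join(items)}")
```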
Policy readout
The Columbia Convening on Openness and AI Policy Readout was edited by Udbhav Tiwari with review contributions from Kevin Klyman, Madhulika Srikumar, and myself.
The policy readout highlighted the benefits of openness, including:
- Enhancing reproducible research and promoting innovation
- Creating an open ecosystem of developers and makers
- Promoting inclusion through open development culture and models
- Facilitating accountability and supporting bias research
- Fostering security through widespread scrutiny
- Reducing costs and avoiding vendor lock-in
- Equipping supervisory authorities with necessary tools
- Making training and inference more resource-efficient, reducing environmental harm
- Ensuring competition and dynamism
- Providing recourse in decision-making
The policy readout also showcased a table with the potential benefits and drawbacks of each component of the AI stack, including the code, datasets, model weights, documentation, distribution, and guardrails.
Finally, the policy readout provided some policy recommendations:
- Include standardized definitions of openness as part of AI standards
- Promote agency, transparency, and accountability
- Facilitate innovation and mitigate monopolistic practices
- Expand access to computational resources
- Mandate risk assessment and management for certain AI applications
- Conduct independent audits and red teaming
- Update privacy legislation to specifically address AI challenges
- Update legal frameworks to distinguish the responsibilities of different actors
- Nurture AI research and development grounded in openness
- Invest in education and specialized training programs
- Adapt IP laws to support open licensing models
- Engage the general public and stakeholders
You can follow along with the work of the Columbia Convening at mozilla.org/research/cc and the work from the Open Source Initiative on the definition of Open Source AI at opensource.org/deepdive.