Compelling responses to NTIA’s AI Open Model Weights RFC

The National Telecommunications and Information Administration (NTIA) posted a request for comments on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights, and it has received 362 comments.

In addition to the joint letter drafted by Mozilla and the Center for Democracy and Technology (CDT) and signed by the Open Source Initiative (OSI), the OSI has also sent a letter of its own, highlighting our multi-stakeholder process to create a unified, recognized definition of Open Source AI.

The following is a list of some comments from nonprofit organizations and companies.

Comments from additional nonprofit organizations

  • Researchers from Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) and Princeton University recommend that the federal government prioritize understanding the marginal risk of open foundation models compared to proprietary ones, and create policies based on this marginal risk. Their response also highlighted several unique benefits of open foundation models, including greater innovation, transparency, diversification, and competitiveness.
  • Wikimedia Foundation recommends that regulatory approaches should support and encourage the development of beneficial uses of open technologies rather than depending on more closed systems to mitigate risks. Wikimedia believes open and widely available AI models, along with the necessary infrastructure to deploy them, could be an equalizing force for many jurisdictions around the world by mitigating historical disadvantages in the ability to access, learn from, and use knowledge.
  • The EleutherAI Institute recommends supporting Open Source AI and warns that restrictions on open-weight models are a costly intervention with comparatively little benefit. EleutherAI believes that open models enable people close to the deployment context to exercise greater control over the capabilities and usage restrictions of their models, study the internal behavior of models during deployment, and examine the training process, and especially the training data, for signs that a model is unsafe to deploy in a specific use case. Open models also lower barriers to entry by making models cheaper to run, and enable adoption by users whose use cases require strict privacy protections (e.g., medicine, government benefits, personal financial information).
  • MLCommons recommends the use of standardized benchmarks, which will be a critical component for mitigating the risk of models both with and without widely available open weights. MLCommons believes models with widely available open weights allow the entire AI safety community – including auditors, regulators, civil society, users of AI systems, and developers of AI systems – to engage with the benchmark development process. Together with open data and model code, open weights enable the community to clearly and completely understand what a given safety benchmark is measuring, eliminating any confounding opacity around how a model was trained or optimized.
  • The AI Alliance recommends regulation shaped by independent, evidence-based research on reliable methods of assessing the marginal risks posed by open foundation models; effective risk management frameworks for the responsible development of open foundation models; and balancing regulation with the benefits that open foundation models offer for expanding access to the technology and catalyzing economic growth.
  • The Alliance for Trust in AI recommends that regulation protect the many benefits of increasing access to AI models and tools. The Alliance for Trust in AI believes that openness should not be artificially restricted based on a misplaced belief that doing so will decrease risk.
  • Access Now recommends that NTIA think broadly about how developments in AI are reshaping or consolidating corporate power, especially with regard to ‘Big Tech.’ Access Now believes in the development and use of AI systems in a sustainable, resource-friendly way that considers the impact of models on marginalized communities and how those communities intersect with the Global South.
  • Partnership on AI (PAI) recommends NTIA’s work should be informed by the following principles: all foundation models need risk mitigations; appropriate risk mitigations will vary depending on model characteristics; risk mitigation measures, for either open or closed models, should be proportionate to risk; and voluntary frameworks are part of the solution.
  • R Street recommends pragmatic steps towards AI safety, relying on multistakeholder processes to address problems in a more flexible, agile, and iterative fashion. The government should not impose arbitrary limitations on the power of Open Source AI systems, which could result in a net loss of competitive advantage.
  • The Computer and Communications Industry Association (CCIA) recommends assessment based on the risks, highlighting that open models provide the potential for better security, less bias, and lower costs to AI developers and users alike. The CCIA acknowledged that the vast majority of Americans already use systems based on Open Source software (knowingly or unknowingly) on a daily basis.
  • The Information Technology Industry Council (ITI) recommends adopting a risk-based approach with respect to open foundation models, since not all models pose an equivalent degree of risk, and that risk management is a shared responsibility across the AI value chain.
  • The Center for Data Innovation recommends that U.S. policymakers defend open AI models at the international level as part of the country’s continued embrace of the global free flow of data. It also encourages them to learn lessons from past debates about dual-use technologies, such as encryption, and refrain from imposing restrictions on foundation models, because such policies would not only be ultimately ineffective at addressing risk, but would also slow innovation, reduce competition, and decrease U.S. competitiveness.
  • The International Center for Law & Economics recommends that AI regulation must be grounded in empirical evidence and data-driven decision making. Demanding a solid evidentiary basis as a threshold for intervention would help policymakers to avoid the pitfalls of reacting to sensationalized or unfounded AI fears.
  • New America’s Open Technology Institute (OTI) recommends a coordinated interagency approach designed to ensure that the vast potential benefits of a flourishing open model ecosystem serve American interests, in order to counter or at least offset the trend toward dominant closed AI systems and continued concentrations of power in the hands of a few companies.
  • The Electronic Privacy Information Center (EPIC) recommends that NTIA grapple with the nuanced advantages, disadvantages, and regulatory hurdles that emerge within AI models along the entire gradient of openness, highlighting that AI models with widely available weights may foster more independent evaluation of AI systems and greater competition compared to closed systems.
  • The Software & Information Industry Association (SIIA) recommends a risk-based approach to foundation models that considers the degree and type of openness. SIIA believes openness has already proved to be a catalyst for research and innovation by essentially democratizing access to models that are cost-prohibitive for many actors in the AI ecosystem to develop on their own.
  • The Future Society recommends that the government establish risk categories (i.e., designations of “high-risk” or “unacceptable-risk”), thresholds, and risk-mitigation measures that correspond to evaluation outcomes. The Future Society is concerned that overly restrictive policies could lead to market concentration, hindering competition and innovation in both industry and academia. A lack of competition in the AI market can have far-reaching knock-on consequences, including potentially stifling efforts to improve transparency, safety, and accountability in the industry. This, in turn, can impair the ability to monitor and mitigate the risks associated with dual-use foundation models and to develop evidence-based policymaking.
  • The Software Alliance (BSA) recommends that NTIA avoid restricting the availability of open foundation models; ground policies that address the risks of open foundation models in empirical evidence; and encourage the implementation of safeguards to enhance the safety of open foundation models. BSA recognizes the substantial benefits that open foundation models provide to both consumers and businesses.
  • The US Chamber of Commerce recommends that NTIA make decisions based on sound science, not unsubstantiated concerns that open models pose an increased risk to society. The US Chamber of Commerce believes that open source technology allows developers to build, create, and innovate in various areas that will drive future economic growth.

Comments from companies

  • Meta recommends that NTIA establish common standards for risk assessments, benchmarks, and evaluations informed by science, noting that the U.S. national interest is served by the broad availability of U.S.-developed open foundation models. Meta highlighted that open source democratizes access to the benefits of AI, and that these benefits are potentially profound for the U.S. and for societies around the world.
  • Google recommends a rigorous and holistic assessment of the technology to evaluate benefits and risks. Google believes that open models allow users across the world, including in emerging markets, to experiment and develop new applications, lowering barriers to entry and making it easier for organizations of all sizes to compete and innovate.
  • IBM recommends preserving and prioritizing the critical benefits of open innovation ecosystems for AI for increasing AI safety, advancing national competitiveness, and promoting democratization and transparency of this technology. 
  • Intel recommends accountability for responsible design and implementation to help mitigate potential individual and societal harm. This includes establishing robust security protocols and standards to identify, address, and report potential vulnerabilities. Intel believes openness not only allows for faster advancement of technology and innovation, but also faster, more transparent discovery of potential harms and community remediation. Intel also believes that open AI development is essential to facilitating innovation and equitable access to AI, as open innovation, open platforms, and horizontal competition help offer choice and build trust.
  • Stability AI recommends that regulation support a diverse AI ecosystem – from the large firms building closed products to the everyday developers using, refining, and sharing open technology. Stability AI recognizes that open models promote transparency, security, privacy, accessibility, competition, and grassroots innovation in AI.
  • Hugging Face recommends establishing standards for best practices building on existing work and prioritizing requirements of safety by design across both the AI development chain and its deployment environments. Hugging Face believes that open-weight models contribute to competition, innovation, and broad understanding of AI systems to support effective and reliable development.
  • GitHub recommends that regulatory risk assessment weigh empirical evidence of possible harm against the benefits of widely available model weights. GitHub believes open source and widely available AI models support research on AI development and safety, as well as the use of AI tools in research across disciplines. To date, researchers have credited these models with supporting work to advance the interpretability, safety, and security of AI models; to improve the efficiency of AI models, enabling them to use fewer resources and run on more accessible hardware; and to advance participatory, community-based ways of building and governing AI.
  • Microsoft recommends cultivating a healthy and responsible open source AI ecosystem and ensuring that policies foster innovation and research. This will be achieved through direct engagement with open source communities to understand the impact of policy interventions on them and, as needed, calibrations to address risks of concern while also minimizing negative impacts on innovation and research.
  • Y Combinator encourages NTIA and all stakeholders to realize the immense promise of open-weight AI models while ensuring the technology develops in alignment with our values. Y Combinator believes the degree of openness of AI models is a crucial factor shaping the trajectory of this transformative technology. Highly open models, with weights accessible to a broad range of developers, offer unparalleled opportunities to democratize AI capabilities and promote innovation across domains. Y Combinator has seen firsthand the incredible progress driven by open models, with a growing number of startups harnessing these powerful tools to pioneer groundbreaking applications.
  • AH Capital Management, L.L.C. (a16z) recommends that NTIA be wary of generalized claims about the risks of open models and calls to treat them differently from closed models, especially those made by AI companies seeking to insulate themselves from market competition. a16z believes open models promote innovation, reduce barriers to entry, protect against bias, and allow such models to leverage and benefit from the collective expertise of the broader AI community.
  • Uber recommends promoting widely available model weights to spur innovation in the field of AI. Uber believes that, by democratizing access to foundational AI models, innovators from diverse backgrounds can build upon existing frameworks, accelerating the pace of technological advancement and increasing competition in the space. Uber also believes widely available model weights, source code, and data are necessary to foster accountability, facilitate collaboration in risk mitigation, and promote ethical and responsible AI development.
  • Databricks recommends regulation of highly capable AI models should focus on consumer-facing deployments and high risk deployments, with the obligations focused on the deployer. Databricks believes that the benefits of open models substantially outweigh the marginal risks, so open weights should be allowed, even at the frontier level.

Original source: opensource.org


Shared by: voicesofopensource
