FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Many researchers regard AGI as a potential existential threat to humanity. Do the proposals here offer a path to lasting human alignment and control of AGI? You decide…

Microsoft, Google, OpenAI Respond to Biden’s Call for AI Accountability. 23 JUNE 2023

Dive into the insights from major tech players as they weigh in on the future of AI accountability and regulation in response to NTIA’s nationwide call.

The Gist

  • Industry engagement. High-profile tech firms contribute to AI policy discussions.
  • Regulatory balance. Need for nuanced, risk-based AI regulations underscored.
  • Transparency advocacy. Increased AI accountability and transparency widely advocated.

Microsoft’s 6-Point Plan for AI Oversight

In its comprehensive 16-page commentary, Microsoft distilled six recommendations for AI regulation:

  • Leverage government-led AI safety frameworks: Microsoft highlighted the AI Risk Management Framework (RMF) by NIST as a particularly promising foundation, offering a robust template for AI governance that can be immediately utilized to manage AI risks.
  • Formulate legal and regulatory frameworks based on AI’s tech architecture: Microsoft stressed the necessity of a regulatory approach that considers all layers of the AI tech stack and the potential risks inherent at each level. The company advocates for the enforcement of existing laws and regulations, particularly at the application layer.
  • Prioritize transparency for AI accountability: To enhance accountability, Microsoft suggests creating a public, inspectable registry of high-risk AI systems and implementing “know your content” requirements that let users discern when content has been generated by AI (a minimal sketch of what such a registry entry and content check might look like follows this list).
  • Invest in capacity building for lawmakers and regulators: Microsoft recognizes the importance of equipping lawmakers and regulators with the knowledge and resources needed to manage AI systems effectively; this includes bolstering agency budgets, providing education on new technologies, and fostering inter-agency coordination for a consistent regulatory regime.
  • Promote research to address open socio-technical questions: Microsoft encourages further technical and human-centered research to refine AI governance. The main research priorities should include the development of real-world evaluation benchmarks, improvement of AI model explainability and emphasis on human-computer interactions that respect users’ values and goals.
  • Develop and align with international standards and best practices: Microsoft stresses the need for ongoing development of foundational international AI standards and for accelerating the adoption of government-led AI safety frameworks, citing the ISO/IEC 27000 series as an existing model of best practices for information security.
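
To make the transparency recommendation above concrete, here is a minimal Python sketch of what a public registry entry for a high-risk AI system and a “know your content” check might look like. Every name, field, and the label-inspection logic are illustrative assumptions; no real registry schema or provenance API is implied.

```python
from dataclasses import dataclass

# Hypothetical sketch of a public registry entry for a high-risk AI system,
# loosely inspired by Microsoft's "inspectable registry" suggestion. The
# field names and the label check below are illustrative assumptions, not
# a real schema or API.

@dataclass
class RegistryEntry:
    system_name: str            # e.g. "Acme Resume Screener"
    operator: str               # legal entity accountable for the system
    risk_category: str          # e.g. "employment", "credit", "infrastructure"
    intended_use: str           # plain-language statement of purpose
    impact_assessment_url: str  # link to a published risk assessment

def is_ai_generated(content_metadata: dict) -> bool:
    """Toy "know your content" check: treat content as AI-generated when its
    provenance metadata carries an explicit generator declaration."""
    return content_metadata.get("generator", {}).get("type") == "ai-model"

# Illustrative usage:
entry = RegistryEntry(
    system_name="Acme Resume Screener",
    operator="Acme Corp",
    risk_category="employment",
    intended_use="Rank job applications for human review",
    impact_assessment_url="https://example.com/impact-assessment",
)
print(is_ai_generated({"generator": {"type": "ai-model", "name": "some-llm"}}))  # True
```

A production provenance scheme (for example, C2PA manifests) would rely on cryptographically signed metadata rather than a plain dictionary; the point here is only the shape of the transparency mechanism.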

Excerpts from Microsoft’s Response:

Page 7

Recommendation 2: Develop a legal and regulatory framework based on the technology architecture for AI 

A framework for AI accountability must be grounded in an understanding of the AI “tech stack” and the way in which AI risk emerges from decisions taken by the actors involved at the different layers of this stack, including a licensing regime for highly capable models and providers of AI datacenters for these models. Increasingly, AI applications are built on top of foundation models, highly capable AI models trained on large amounts of data (such as billions of pages of publicly available text) like OpenAI’s GPT-4. These types of models demonstrate a wide range of capabilities, such as summarizing text, moderating content, finding relevant passages across thousands of legal files or even generating code, and can be adapted to a broad range of tasks and integrated into several different types of AI applications. Users typically interact with these models through an application sitting at the top of the tech stack, such as ChatGPT, Bing Chat, or GitHub Copilot. Organizations across society are building new types of applications atop foundation models, including new consumer applications that are household names and other in-house applications built to serve customers or improve internal processes. Sitting at the bottom of this tech stack is the massive supercomputing infrastructure needed to train and run foundation models, the likes of which Microsoft has built out as part of our partnership with OpenAI to enable OpenAI to train their cutting-edge foundation models. 
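
The three-layer stack this excerpt describes lends itself to a simple data model. The Python sketch below pairs each layer with its actors and the regulatory touchpoint Microsoft associates with it; the structure and obligation strings are assumptions paraphrased from the surrounding text, not a formal taxonomy.

```python
# A minimal sketch of the three-layer AI "tech stack" the excerpt describes.
# Layer names, actors, and obligations are paraphrased assumptions drawn
# from the surrounding text, not an official classification.

AI_TECH_STACK = [
    {
        "layer": "application",        # e.g. ChatGPT, Bing Chat, GitHub Copilot
        "actors": ["application developers", "deploying organizations"],
        "obligation": "enforce existing laws and regulations at the point of use",
    },
    {
        "layer": "foundation model",   # e.g. GPT-4
        "actors": ["model developers"],
        "obligation": "licensing regime for highly capable models",
    },
    {
        "layer": "infrastructure",     # AI supercomputing datacenters
        "actors": ["AI datacenter operators"],
        "obligation": "licensing regime for datacenter operators",
    },
]

for tier in AI_TECH_STACK:
    print(f"{tier['layer']:>16}: {tier['obligation']}")
```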

Page 8

Microsoft also believes that an important part of this licensing regime will be the development of licensing requirements for operators of AI datacenters to be used for testing and deploying highly capable models. Similar to the regulatory model for telecommunications network operators and critical infrastructure providers, we see a role for licensing the providers of AI datacenters to help advance the safe and secure development of highly capable models. To obtain a license, an AI datacenter operator would need to satisfy certain technical capabilities around cybersecurity, physical security, safety architecture, and potentially export control compliance. This would likely include an exchange of threat intelligence between the operator of the datacenter, the model developer and a regulator.
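
As a rough illustration of how such a license check might be encoded, the sketch below validates an operator attestation against the four capability areas the excerpt names. The class, field names, and pass/fail rule are assumptions for illustration only, not anything proposed in Microsoft’s filing.

```python
from dataclasses import dataclass

# Hypothetical checklist for the datacenter-operator licensing regime the
# excerpt outlines. The four capability areas come from the quoted text;
# everything else here is an illustrative assumption.

@dataclass
class OperatorAttestation:
    cybersecurity: bool              # e.g. certified against a security baseline
    physical_security: bool          # controlled access to training hardware
    safety_architecture: bool        # ability to suspend or throttle workloads
    export_control_compliance: bool  # screening of restricted customers

def license_eligible(a: OperatorAttestation) -> bool:
    """An operator qualifies only when every required capability is attested."""
    return all((a.cybersecurity, a.physical_security,
                a.safety_architecture, a.export_control_compliance))

print(license_eligible(OperatorAttestation(True, True, True, False)))  # False
```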

Page 9

We believe this [the “know your customer” principle from financial services regulation] could be adapted and extended to the AI context, creating obligations to know one’s cloud, one’s customers, and one’s content. The developer of a highly capable AI model, for example, would need to know that the provider of their cloud was appropriately licensed. The operators of AI datacenters would, in certain instances such as scenarios that involve sensitive uses, need to know the customers that are accessing the model. They would also be accountable for helping regulators ascertain that all appropriate licenses for model deployment had been obtained, including potentially by monitoring for and reporting substantial uses of compute that are consistent with large training runs to a regulator for further investigation. As export controls evolve, operators of AI datacenters could also assist with the enforcement of those measures, including those that attach to the infrastructure and model layers. Infrastructure providers could also help ensure that customers deploying AI to manage critical infrastructure have implemented effective safety brakes.
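
The monitoring duty described here is essentially a threshold test over customers’ compute usage. The toy sketch below flags workloads whose estimated compute is consistent with a large training run; the 1e25 FLOP threshold, the job-record format, and the reporting step are illustrative assumptions, not figures from Microsoft’s filing.

```python
# Toy sketch of the compute-monitoring duty the excerpt describes: flag
# customer workloads whose aggregate compute looks like a large training
# run and queue them for regulator review. The threshold is purely
# illustrative; nothing here reflects a real reporting rule.

LARGE_TRAINING_RUN_FLOPS = 1e25  # illustrative reporting threshold

def jobs_to_report(jobs: list[dict]) -> list[dict]:
    """Return workloads whose estimated total compute exceeds the threshold."""
    return [j for j in jobs if j["estimated_flops"] >= LARGE_TRAINING_RUN_FLOPS]

workloads = [
    {"customer": "lab-a", "estimated_flops": 3e25},  # consistent with a frontier-scale run
    {"customer": "lab-b", "estimated_flops": 8e21},  # routine fine-tuning scale
]
for job in jobs_to_report(workloads):
    print(f"Report to regulator: {job['customer']} ({job['estimated_flops']:.1e} FLOPs)")
```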
