FORBES. Congress Can Shape AI Governance Without Stifling Innovation. Here’s How.
Divyansh Kaushik.
17 APRIL 2023.
As we move deeper into the artificial intelligence (AI) era, it is critical to rethink how we address the growing impact of AI technologies on society. Just days ago, Senator Chuck Schumer (D-NY) announced that he is launching “a major effort to get ahead of artificial intelligence.” His call for a framework that prioritizes transparency and comprehension is consistent with recommendations from numerous AI experts and could provide a solid basis for policy development.
While Senator Schumer has yet to release legislation outlining concrete actions, a press release from his office indicates that his framework would comprise four guardrails to ensure responsible AI development: identifying algorithms’ creators and target audiences, disclosing data sources, clarifying decision-making processes, and implementing explicit and robust ethical guidelines.
This is a good start.
The prospect of ungovernable, insufficiently explainable, and sometimes unreliable AI tools presents numerous challenges. The stakes are hard to overstate: When AI is used to manage power grids or even pilot airplanes, mistakes can literally be matters of life and death. That makes both predictability and accountability in AI paramount. Private-sector investment alone may not produce responsible AI development, because the economic incentives are too weak: Meeting minimum standards typically suffices for reaping economic benefits. To steer corporations toward committed engagement and ethical practice in this arena, public-sector investment will prove indispensable.
What Could A Regulatory Strategy Involve?
Although results are far from guaranteed, Schumer’s push for bipartisan, comprehensive legislation could bear fruit in the form of judicious regulations that manage technological diffusion without curtailing innovation. Starting from a broad vision and collaborating closely with experts to refine it is a laudable example of open-minded engagement.
While much has been said (and more will be said) about the important questions of civil liberties, civil rights, consumer rights, data privacy, and liability, I would like to focus on an area that has received far less attention: national security. How do we govern AI in dual-use scenarios? The Department of Defense released its Responsible AI Principles, along with an implementation strategy, long before NIST released its AI Risk Management Framework, and if you haven’t read either document, you should.
In my view, one critical component of a regulatory strategy should be a licensing framework for developing, deploying, and productizing certain AI technologies with dual-use implications. Under such a system, regulators would be informed before the training of certain AI models begins, whether large, general-purpose models or smaller, targeted ones; compliance with risk management practices and other safeguards would be mandatory; and developing larger models like GPT-4 or Stable Diffusion could invite stricter safeguards and regulations. Additionally, closed-beta or sandbox testing could occur before AI models are released to the general public. Compliance with risk management practices would be meticulously audited by regulators with deep technical expertise in AI, ideally prioritizing powerful AI models that pose significant unintended risks to national security or public welfare.
Furthermore, to keep intellectual property out of the hands of rogue actors and to mitigate potential threats to national security, export controls should be implemented on specific AI technologies whose intentional or unintentional release may pose serious risks. My colleague Matt Korda recently showed how immediate and dangerous the problem could be: He was able to get ChatGPT to give him instructions for building explosives, from devices as basic as those using only fertilizer and diesel fuel to ones as sophisticated as nuclear bombs. Threat detection and mitigation in this emerging arena is urgent, but it will also require precise delineation of which AI technologies are subject to development and dissemination licensing, targeting only those products where ample justification exists.
And until Congress establishes a purpose-built agency, the United States Department of Energy (DOE) is best suited to lead this charge. The agency has deep expertise in artificial intelligence and high-performance computing, a track record of regulating industries and setting standards (through its work on energy efficiency and cybersecurity), experience navigating the intricate implications of dual-use technologies, and capability as a grant-making research agency. While it should lead the effort, its actions should complement, not replace, enforcement by other agencies.
Public Sector Investments Will Also Be Critical
In parallel with any regulatory approach, public sector investment in an international initiative analogous to the European Organization for Nuclear Research (CERN), which cultivated worldwide cooperation in particle physics research and produced critical advancements with appropriate safeguards, could facilitate a secure, governable, and innovative future for AI.
Direct public investment in research and development focused on, among other topics, robustness to distribution shifts, causal AI, uncertainty quantification, hybrid neural-symbolic learning, and the creation of high-quality training datasets would lay a foundation upon which a global “AI Research Consortium” could thrive. Over time, this consortium would form an ecosystem that balances ethical development with substantial growth.
Crucial to this consortium’s viability is encouraging collaboration among government policymakers, researchers, and leading technology companies. A substantial commitment, on the order of billions of dollars annually, would enable multilateral cooperation across nations while fortifying public-private partnerships that ensure transparency across industries: Global private investment in AI has hovered around $100 billion annually for the last two years, whereas public sector investment is merely a fraction of that figure.
In this context, the proposed National AI Research Resource, a national research cloud examined in a recent report by a task force established under the National AI Initiative Act of 2020 to study its feasibility, offers promising prospects as a catalyst for broader international collaboration. However, such an approach should serve as merely one component within a larger framework. Ultimately, engaging like-minded democratic nations in a multilateral effort will be key to success, a point emphasized by the National Security Commission on Artificial Intelligence (NSCAI) in its final report to Congress.
Ultimately, The Global War for AI Talent Will Define the Geopolitics of AI
The NSCAI also emphasized the central role talent will play in the ongoing rivalry surrounding artificial intelligence technology, stating, “The winner of the AI competition will not be determined solely by superior technology but also by the side with access to a diverse and highly skilled pool of tech-savvy talents.” The United States is currently engaged in an intense global contest to attract and retain scarce AI experts.
Lessons from past technological races, in fields such as space exploration, semiconductors, and quantum computing, teach us that victory can be short-lived. What matters is maintaining a lead and striving to widen that advantage whenever possible.
China is already graduating twice as many students from STEM master’s programs as the United States and, by 2025, will graduate twice as many STEM Ph.D. candidates. This presents a challenging scenario for American leadership in the face of competition from the CCP.
However, the United States holds a significant edge over China in its attractiveness to innovative talent worldwide; individuals are drawn to study, work, and reside in this country rather than relocate to China. It’s worth noting that nearly 60% of computer science Ph.D.s and over half of all STEM Ph.D.s awarded in America go to international students. Yet more than half of the AI Ph.D. graduates who leave the United States cite immigration challenges as their primary reason for departing.
The current American practice of educating world-class AI talent at prestigious universities only to send those graduates back to their home countries must be revised. Keeping these bright minds here would immensely benefit both our economy and our national security, as several national security leaders have said time and again.
Prudent regulation, combined with significant public sector investments and a strategy to attract and retain global AI talent, could ensure that the United States leads and shapes “the rules governing such a transformative technology” and does not, in Schumer’s words, “permit China to lead on innovation or write the rules of the road.”