C-SPAN. Senate Hearing on Regulating Artificial Intelligence Technology. JULY 25, 2023.
The Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing to examine ways to regulate but also innovate artificial intelligence (AI) technology. The panel of witnesses included Dario Amodei, co-founder & CEO of Anthropic, an AI developer. All believed the technology should be subjected to government regulation and oversight to varying degrees. Subcommittee members raised concerns and asked numerous questions on topics ranging from national security to election manipulation, labor exploitation, and data privacy.
On Tuesday, July 25, the U.S. Senate Subcommittee on Privacy, Technology and the Law held a hearing titled “Oversight of A.I.: Principles for Regulation.”
Witnesses included:
- Stuart Russell, Professor of Computer Science, The University of California, Berkeley (testimony)
- “Now, regulation is often said to stifle innovation, but there is no real trade-off between safety and innovation. An AI system that harms human beings is simply not good AI. And I believe analytic predictability is as essential for safe AI as it is for the autopilot on an airplane. This committee has discussed ideas such as third party testing, licensing, national agency and international coordinating body, all of which I support. Here are some more ways to, as it said, move fast and fix things. First, an absolute right to know if one is interacting with a person or a machine. Second, no algorithms that can decide to kill human beings, particularly when attached to nuclear weapons. Third, a kill switch that must be activated if systems break into other computers or replicate themselves. Fourth, go beyond the voluntary steps announced last Friday. Systems that break the rules must be recalled from the market for anything from defaming real individuals to helping terrorists bill biological weapons. Now, developers may argue that preventing these behaviors is too hard because LLMs have no notion of truth and are just trying to help. This is no excuse eventually, and the sooner the better. I would say we will develop forms of AI that are provably safe and beneficial, which can then be mandated. Until then, we need real regulation and a pervasive culture of safety.”
- Yoshua Bengio, Founder and Scientific Director of Mila – Québec AI Institute, Professor in the Department of Computer Science and Operations Research at Université de Montréal (testimony)
- “These severe risks could arise either intentionally because of malicious actors using AI systems to achieve harmful goals, or unintentionally. If an AI system develops strategies that are misaligned with our values and norms, I would like to emphasize four factors that governments can focus on in their regulatory efforts to mitigate all AI harms and risks. First, access limiting who has access to powerful AI systems, structuring the proper protocols, duties, oversight, and incentives for them to act safely. Second, alignment, ensuring that AI systems will act as intended in agreement with our values and norms. Third, raw intellectual power, which depends on the level of sophistication of the algorithms and the scale of computing resources and of data sets. And fourth, scope of actions. The potential for harm an AI system can affect indirectly, for example, through human actions or directly, for example, through the internet. So looking at risks through the lens of each of these four factors, access, alignment, intellectual power, and scope of actions is critical to designing appropriate government interventions.”
- Dario Amodei, Chief Executive Officer, Anthropic (testimony)
- “Anthropic in collaboration with world-class biosecurity experts has conducted an intensive study of the potential for AI to contribute to the misuse of biology today. Certain steps in bio weapons production involve knowledge that can’t be found on Google or in textbooks, and requires a high level of specialized expertise. This being one of the things that currently keeps us safe from attacks, we found that today’s AI tools can fill in some of these steps, albeit incompletely and unreliably. In other words, they’re showing the first nascent signs of danger. However, a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large scale biological attacks. We believe this represents a grave threat to US national security. We have instituted mitigations against these risks in our own deployed models, briefed a number of US government officials, all of whom found the results disquieting and are piloting a responsible disclosure process with other AI companies to share information on this in similar risks.”
What follows is a lightly edited transcript of the hearing. Refer to the video to check quotes.
Transcript. Senate Hearing on Principles for AI Regulation PUBLISHED JULY 29, 2023
Sen. Richard Blumenthal (D-CT):
Privacy and Technology subcommittee will come to order. Thank you to our three witnesses for being here. I know you’ve come a long distance and to the ranking member, Senator Hawley for being here as well on a day when many of us are flying back. I got off a plane about, I think less than an hour ago, so forgive me for being a little bit late. I know many of you have flown in as well. And thank you to all of our audience and many are outside the hearing room. Some of you may recall at the last hearing, I began with a voice, not my voice, although it sounded exactly like mine because it was taken from floor speeches and an introduction, not my words, but concocted by ChatGPT that actually mesmerized and deeply frightened a lot of people who saw and heard it.
The opening today, my opening at least, is not gonna be as dramatic, but the fears that I heard as I went back to Connecticut and also heard from people around the country were supported by that kind of voice impersonation and content creation. And what I have heard again and again and again, and the word that has been used so repeatedly is scary, scary when it comes to artificial intelligence. And as much as I may tell people, you know, there’s enormous good here, potential for benefits in curing diseases, helping to solve climate change, workplace efficiency, what rivets their attention is the science fiction image of an intelligence device out of control, autonomous self-replicating, potentially creating diseases, pandemic grade viruses or other kinds of evils, purposely engineered by people or simply the result of mistakes not malign intention.
And frankly, the nightmares are reinforced in a way by the testimony that I’ve read from each of you. In no way disparagingly do I say that those fears are reinforced because I think you have provided objective fact-based views on what the dangers are and the risks, and potentially even human extension and existential threat, which has been mentioned by many more than just the three of you experts who know firsthand the potential for harm. But these fears need to be addressed and I think can be addressed through many of the suggestions that you are making to us and others as well. I’ve come to the conclusion that we need some kind of regulatory agency, but not just a reactive body.
Not just a passive rules of the road maker edicts on what guardrails should be, but actually investing proactively in research so that we develop countermeasures against the kind of autonomous, out of control scenarios that are potential dangers. An artificial intelligence device that is in effect programmed to resist any turning off a decision by AI to begin a nuclear reaction to a non-existent attack. The White House certainly has recognized the urgency with a historic meeting of the seven major companies, which made eight profoundly significant commitments. And I commend and thank the President of the United States for recognizing the need to act. But we all know, and you have pointed out in your testimony that these commitments are unspecific and unenforceable. A number of them on the most serious issues say that they will give attention to the problem. All good. But it’s only a start. And I know the doubters about and about our ability to act, but the urgency here demands action. The future is not science fiction or fantasy. It’s not even the future. It’s here and now. And a number of you have put the timeline at two years before we see some of the biological most severe dangers. It may be shorter because the kinds of pace of development is not only stunningly fast, it is also accelerated at a stunning pace because of the quantity of chips, the speed of chips, the effectiveness of algorithms. It is an inexorable flow of development. We can condemn it, we can regret it, but it is real. And the White House’s principles actually align with a lot of what we have said among us in Congress, and notably in the last hearing that we held. We’re here now because AI is already having a significant impact on our economy, safety and democracy. The dangers are not just extinction, but loss of jobs, one of potentially the worst nightmares that we have each day. These issues are more common, more serious, and more difficult to solve. And we can’t repeat the mistakes that we made on social media, which was to delay and disregard the dangers. So the goal for this hearing is to lay the ground for legislation to go from general principles to specific recommendations to use this hearing to write real laws, enforceable laws.
In our past two hearings, we heard from panelists that Section 230, the legal shield that protects social media, should not apply to AI. Based on that feedback, Senator Hawley and I introduced the no Section 230 Immunity for AI Act building on our previous hearing. I think there are core standards that we are building bipartisan consensus around, and I welcome hearing from many others on these potential rules, establishing a licensing regime for companies that are engaged in high risk AI development, a testing and auditing regimen by objective third parties, or by preferably, the new entity that we will establish imposing legal limits on certain uses related to elections. Senator Klobuchar has raised this danger directly related to nuclear warfare. China apparently agrees that AI should not govern the use of nuclear warfare, requiring transparency about the limits and use of AI models. This includes watermarking labeling disclosure when AI is being used and data access, data access for researchers.
So I appreciate the commitments that have been made by anthropic open AI and others and the White House related to security testing and transparency last week. It shows these goals are achievable and that they will not stifle innovation, which has to be an objective, avoid stifling innovation, and we need to be creative about the kind of agency or entity, the body or administration. It can be called administration, an office. I think the language is less important than its real enforcement power and the resources invested in it. We are really lucky, very, very fortunate to be joined by three true experts today. One of the most distinguished panels I have seen in my time in the United States Congress, which is only about 12 years. One of the leading AI companies, which was founded with the goal of developing AI that is helpful, honest, and harmless. A researcher who’s groundbreaking work led him to be recognized as one of the godfathers of AI and a computer science professor whose publications and testimony on the ethics of AI have shaped regulatory efforts like the EU AI Act. So welcome to all of you, and thank you so much for being here. I turn to the ranking member Senator Hawley.
Sen. Josh Hawley (R-MO):
Thank you very much, Mr. Chairman. Thanks to all of our witnesses for being here. I wanna start by thanking the chairman, Senator Blumenthal for his terrific work on these hearings. It’s been a privilege to get to work with him. These have been incredibly substantive hearings. I’m really looking forward to hearing from each of you today. I wanna thank his staff for their terrific work. It takes a lot of effort to put together a hearing of this substance, and I wanna thank Senator Blumenthal for being willing to do something about this problem. As he alluded to a moment ago, he and I a few weeks ago introduced the first bipartisan bill to put safeguards around AI development, the first bill to be introduced to the United States Senate, which will protect the right of Americans to vindicate their privacy, their personal safety, and their interests in court against any company that would develop or deploy ai.
This is an absolutely critical foundation, right? You can give Americans paper rights, parchment rights, as our founder said, all you want, if they can’t get into court to enforce ’em, they don’t mean anything. And so I think it’s significant that our first bipartisan effort is to guarantee that every American will have the right to vindicate their rights, their interests, their privacy, their data protection, their kids’ safety in court. And I look forward to more to come with Senator Blumenthal and with other members who I know are interested in this. I think that for my part, I have expressed my own sense of what our priorities ought to be when it comes to legislation. It’s very simple, workers, kids, consumers, and national security. As AI develops, we’ve got to make sure that we have safeguards in place that will ensure this new technology is actually good for the American people.
I’m confident it’ll be good for the companies. I have no doubt about that. The biggest companies in the world who currently make money hand over fist in this country and benefit from our laws, I know they’ll, they’ll be great. Google, Microsoft Meta, many of whom have invested in the companies we’re gonna talk to today, and we’ll get into that a little bit more in just a minute, but I’m confident they’re gonna do great. What I’m less confident of is that the American people are gonna do, all right. So I’m less interested in the corporation’s profitability. In fact, I’m not interested in that at all. I’m interested in protecting the rights of American workers and American families and American consumers against these massive companies that threaten to become a total law unto themselves. Imagine you wanna talk about a dystopia. Imagine a world in which AI is controlled by one or two or three corporations that are basically governments under themselves.
And then the United States government and foreign entities talk about a massive accretion of power from the people to the powerful that is the true nightmare. And for my money, that is what this body has got to prevent. We wanna see technology developed in a way that actually benefits the people, the workers, the kids, and the families of this country. And I think the real question before Congress is, will Congress actually do anything? As Senator Blumenthal, I think put his finger on it precisely. I mean, look at what this Congress did or did not do with regard to these very same companies, these same behemoth companies when it came to social media. It’s all the same players. Let’s be honest. We’re talking about the same people and AI as we were in social media. It’s Google again, it’s Microsoft, it’s meta, all the same people. And what I notice is in my short time in the Senate, there’s a lot of talk about doing something about big tech and absolutely zero movement to actually put meaningful legislation on the floor of the United States Senate and do something about it. So I think the real question is, will the Senate actually act? Will the leadership in both parties, both parties, will it actually be willing to act? We’ve had a lot of talk, but now is the time for action. And I think if the urgency of the new generative AI technology does not make that clear to folks, then you’ll never be convinced. And to me, that really defines the urgent needs of this moment. Thank you, Mr. Chairman.
Sen. Richard Blumenthal (D-CT):
I’m gonna turn to Senator Klobuchar in case she has some remarks.
Sen. Amy Klobuchar (D-MN):
Thank you. A woman of action, I hope Senator Hawley.
Sen. Richard Blumenthal (D-CT):
Definitely a woman of action and someone who has invested a lot of time and yes.
Sen. Amy Klobuchar (D-MN):
Well, I just wanna thank both of you for doing this. I mostly just want to hear from the witnesses. I do agree with both Senator Blumenthal and Senator Hawley. This is the moment and the fact that this has been bipartisan so far in the work that Senator Schumer, Senator Young are doing. The work that is going on in this subcommittee with the two of you and the work, Senator Hawley and I are also engaged in on some of the other issues related to this. I I actually think that if we don’t act soon we could decay into not just partisanship, but in, in action. And the point that Senator Hawley just made is right, we didn’t get ahead of the Congress didn’t get ahead of with Section 230 and the like, and some of the things that were done for maybe good reasons at the time and then didn’t do anything.
And now you’ve got kids getting addicted to fentanyl and you’ve got off that they get online. You’ve got privacy issues. You’ve got kids being exposed to content they shouldn’t see. You’ve got small businesses that have been pushed down search engines and the like, and I still think we can fix some of that. But this is certainly a moment to engage. And I’m actually really excited about what we can get done, the potential for good here, but what we can do to put in guardrails and have an American way of putting things in place and not just defer to the rest of the world, which is what’s starting to happen on some of the other topics I raised. So I’m particularly interested, which is not as much our focus today on the election side and democracy and making sure that we do not have these ads that aren’t the real people. I don’t care what political party people are with that we are give voters the information they need to make a decision and that we are able to protect our democracy. And there’s some good work being done on that front. So thank you.
Sen. Richard Blumenthal (D-CT):
Let me introduce the witnesses and seize this moment to let you have the floor. We wanna be joined by Dario Amodei who is the CEO of Anthropic and AI safety and Research Company. It’s a public benefit corporation dedicated to building steerable AI systems that people can rely on and generating research about the opportunities and risks of ai. Anthropic’s AI assistant Claude is based on its research into training helpful, honest, and harmless AI systems.
Yoshua Bengio is a recognized, worldwide recognized leading expert in artificial intelligence. He is known for his conceptual and engineering breakthroughs in artificial neural networks and deep learning. He pioneered many of the discoveries and advances that have led us to this point today. And he’s a full professor in the Department of Computer Science and Operations Research at the University of Montreal, and the founder and scientific director of Mila Quebec Artificial Intelligence Institute, one of the largest academic institutes in deep learning and one of the three federally funded centers of excellence in AI research and innovation in Canada. With apologies, I’m not going to repeat all the awards and recognitions that you’ve received because it would probably take the rest of the afternoon.
We’re also honored to be joined by Stuart Russell. He received his BA with first class honors in physics from Oxford University in 1982, and his PhD in computer science from Stanford, 1986. He then joined the faculty of the University of California at Berkeley, where he is professor and formerly chair of Electrical Engineering and Computer Sciences, and the holder of the Smiths data chair in engineering Director of the Center for Human Compatible AI and Director of the CLI Center for Ethics Science and the Public. He’s also served as an adjunct professor of neurological surgery at UC San Francisco. Again, many honors and recognitions. All of you have received in accordance with the custom of our committee. I’m gonna ask you to stand and take an oath. Do you solemnly swear that the testimony you’re about to give is the truth, the whole truth, and nothing but the truth, so help you God? Thank you Mr. Amodei we’ll begin with you.
Dario Amodei:
Thank you, chairman Blumenthal, Excuse me. Chairman Blumenthal, ranking member Hawley and members of the committee, thank you for the opportunity to discuss the risks and oversight of AI with you. Anthropic is a public benefit corporation that aims to lead by example in developing and publishing techniques to make AI systems safer and more controllable. And by deploying these safety techniques in state-of-the-art models, research conducted by Anthropic includes constitutional ai, a method for training AI systems to behave according to an explicit set of principles, early work on red teaming or adversarial testing of AI systems to uncover bad behavior and foundational work in AI interpretability the science of trying to understand why AI systems behave the way they do this month. After extensive testing, we were proud to launch our AI model, Claude two. For US users. Claude two puts many of these safety improvements into practice. While we’re the first to admit that our measures are still far from perfect, we believe they’re an important step forward in a race to the top on safety, we hope we can inspire other researchers and companies to do even better.
AI will help our country accelerate progress in medical research, education, and many other areas. As you said in your open remarks, the benefits are great. I would not have found it philanthropic if I did not believe AI’s benefits could outweigh its risks. However, it is very critical that we address the risks. My written testimony covers three categories of risks, short term risks that we face right now, such as bias, privacy, misinformation, medium term risks related to misuse of AI systems as they become better at science and engineering tasks, and long-term risks related to whether models might threaten humanity as they become truly autonomous, which you also mentioned in your open testimony. In these short remarks, I wanna focus on the medium term risks, which present an alarming combination of imminence and severity. Specifically, philanthropic is concerned that AI could empower a much larger set of actors to misused biology over the last six months.
Andro in collaboration with world-class biosecurity experts has conducted an intensive study of the potential for AI to contribute to the misuse of biology today. Certain steps in bio weapons production involve knowledge that can’t be found on Google or in textbooks, and requires a high level of specialized expertise. This being one of the things that currently keeps us safe from attacks, we found that today’s AI tools can fill in some of these steps, albeit incompletely and unreliably. In other words, they’re showing the first nascent signs of danger. However, a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large scale biological attacks. We believe this represents a grave threat to US national security. We have instituted mitigations against these risks in our own deployed models, briefed a number of US government officials, all of whom found the results disquieting and are piloting a responsible disclosure process with other AI companies to share information on this in similar risks.
However, private action is not enough. This risk and many others, like it requires a systemic policy response. We recommend three broad classes of actions. First, the US must secure the AI supply chain in order to maintain its lead while keeping these technologies out of the hands of bad actors. This supply chain runs from semiconductor manufacturing equipment to chips, and even the security of AI models stored on the servers of companies like ours. Second, we recommend a testing and auditing regime for new and more powerful models similar to cars or airplanes. AI models of the near future will be powerful machines that possess great utility, but can be lethal if designed incorrectly or misused. New AI models should have to pass a rigorous battery of safety tests before they can be released to the public at all, including tests by third parties and national security experts in government.
Third, we should recognize the science of testing and auditing for AI systems is in its infancy. It is not currently easy to detect all the bad behaviors in the AI system is capable of without first broadly deploying it to users, which is what create the risk. Thus, it is important to fund both measurement and research on measurement to ensure a testing and auditing regime is actually effective funding. NIST and the National AI Research Resource are two examples of ways to ensure America leads here. The three directions above our synergistic responsible supply chain policies help give America enough breathing room to impose rigorous standards on our own companies without seeding our national lead to adversaries. And funding measurement, in turn, makes these rigorous standards meaningful. The balance between mitigating AI’s risks and maximizing its benefits will be a difficult one, but I’m confident that our country can rise to the challenge. Thank you.
Sen. Richard Blumenthal (D-CT):
Thank you very much. Why don’t we go to Ms. Bengio.
Yoshua Bengio:
Chairman Blumenthal, ranking member Hawley, members of the Judiciary Committee. Thank you for the invitation to speak today. The capabilities of AI systems have steadily increased over the last two decades, thanks to advances in deep learning that I and others introduced. While this revolution has the potential to enable tremendous progress and innovation, it also entails a wide range of risks from immediate ones like discrimination to growing ones like disinformation, and even more concerning ones in the future, like loss of control of superhuman ais. Recently, I and many others have been surprised by the giant leap realized by systems like ChatGPT to the point where it becomes difficult to discern whether one is interacting with another human or a machine. These advancements have led many top AI researchers, including myself, to revise our estimates of when human level intelligence could be achieved previously thought to be decades or even centuries away. We now believe it could be within a few years or decades. The shorter timeframe, say five years, is really worrisome because we’ll need more time to effectively mitigate the potentially significant threats to democracy, national security, and our collective future. As Sam Altman said here, if this technology goes wrong, it could go terribly wrong.
These severe risks could arise either intentionally because of malicious actors using AI systems to achieve harmful goals, or unintentionally. If an AI system develops strategies that are misaligned with our values and norms, I would like to emphasize four factors that governments can focus on in their regulatory efforts to mitigate all AI harms and risks. First, access limiting who has access to powerful AI systems, structuring the proper protocols, duties, oversight, and incentives for them to act safely. Second, alignment, ensuring that AI systems will act as intended in agreement with our values and norms. Third, raw intellectual power, which depends on the level of sophistication of the algorithms and the scale of computing resources and of data sets. And fourth, scope of actions. The potential for harm an AI system can affect indirectly, for example, through human actions or directly, for example, through the internet. So looking at risks through the lens of each of these four factors, access, alignment, intellectual power, and scope of actions is critical to designing appropriate government interventions.
I firmly believe that urgent efforts, preferably in the coming months, are required in the following three areas. First, the coordination of highly agile, national and international regulatory frameworks and liability incentives that bolster safety. This would require licenses for people and organizations with standardized duties to evaluate and mitigate potential harm. Allow independent audits and restrict AI systems with unacceptable levels of risk. Second, because the current methodologies are not demonstrably safe, significantly accelerate global research endeavors focused on AI safety, enabling the informed creation of essential regulations, protocols, safe AI methodologies and governance structures. And third, research on countermeasures to protect society from potential rogue ais. ’cause No regulation is gonna be perfect. This research in AI and international security should be conducted with several highly secure and decentralized labs operating under multilateral oversight to mitigate an AI arms race, given the significant potential for detrimental consequences, we must therefore allocate substantial additional resources to safeguard our future. At least as much as we are collectively globally investing in increasing the capabilities of ai, I believe we have a moral responsibility to mobilize our greatest minds and make major investments in a bold and internationally coordinated effort to fully reap the economic and social benefits of AI while protecting society and our shared future against its potential perils. Thank you for your attention to this pressing matter. I look forward to your questions.
Sen. Richard Blumenthal (D-CT):
Thank you very much, professor. Professor Russell.
Stuart Russell:
Thank you, chair Blumenthal and ranking member Hawley and members of the subcommittee for the invitation to speak today and for your excellent work on this vital issue. Ai, as we all know, is the study of how to make machines intelligent. Its stated goal is general purpose artificial intelligence, sometimes called a g i or artificial general intelligence machines that match or exceed human capabilities in every relevant dimension. The last 80 years have seen a lot of progress towards that goal. For most of that time, we created systems whose internal operations we understood drawing on centuries of work in mathematics, statistics, philosophy, and operations research. Over the last decade, that has changed, beginning with vision and speech recognition, and now with language, the dominant approach has been end-to-end training of circuits with billions or trillions of adjustable parameters. The success of these systems is undeniable, but their internal principles of operation remain a mystery.
This is particularly true for the large language models or l lms such as chat, g p T. Many researchers now see a g i on the horizon. In my view, LLMs do not constitute a g i, but they are a piece of the puzzle. We’re not sure what shape the piece is yet or how it fits into the puzzle, but the field is working hard on those questions and progress is rapid. If we succeed, the upside could be enormous. I have estimated a cash value of at least 14 quadrillion dollars for this technology, a huge magnet in the future pulling us forward. On the other hand, Alan Turing, the founder of computer science, warned in 1951, that once AI outstrips our feeble powers, we should have to expect the machines to take control. We have pretty much completely ignored this warning. It’s as if an alien civilization warned us by email of its impending arrival, and we replied, humanity is currently outta the office. Fortunately, humanity is now back in the office and has read the email from the aliens.
Of course, many of the risks from AI are well-recognized already, including bias, disinformation, manipulation, and impacts on employment. I’m happy to discuss any of these, but most of my work over the last decade has been on the problem of control. How do we maintain power forever over entities more powerful than ourselves? The core problem we have studied comes from AI systems pursuing fixed objectives that are miss specified, the so-called King Midas problem. For example, social media algorithms were trained to maximize clicks and learn to do so by manipulating human users and polarizing societies. But with LLMs, we don’t even know what their objectives are. They learn to imitate humans and probably absorb all two human goals in the process.
Now, regulation is often said to stifle innovation, but there is no real trade-off between safety and innovation. An AI system that harms human beings is simply not good ai. And I believe analytic predictability is as essential for safe AI as it is for the autopilot on an airplane. This committee has discussed ideas such as third party testing, licensing, national agency and international coordinating body, all of which I support. Here are some more ways to, as it said, move fast and fix things. First, an absolute right to know if one is interacting with a person or a machine. Second, no algorithms that can decide to kill human beings, particularly when attached to nuclear weapons. Third, a kill switch that must be activated if systems break into other computers or replicate themselves. Fourth, go beyond the voluntary steps announced last Friday. Systems that break the rules must be recalled from the market for anything from defaming real individuals to helping terrorists bill biological weapons. Now, developers may argue that preventing these behaviors is too hard because LLMs have no notion of truth and are just trying to help. This is no excuse eventually, and the sooner the better. I would say we will develop forms of AI that are provably safe and beneficial, which can then be mandated. Until then, we need real regulation and a pervasive culture of safety. Thank you.
Sen. Richard Blumenthal (D-CT):
Thank you very much. I’ll begin the questioning. We’re gonna have seven minute rounds. I expect we’ll have many more than one. Given the challenges and complexity that you all have raised so eloquently. I have to say professor Russell, you also in your testimony, the written testimony recount a remark of Lord Rutherford September 11th, 1933 at a conference when he was asked about atomic energy. And he said, quote, anyone who looks for a source of power in the transformation of the Adams is talking moonshine. End quote. The ideas about the limits of human ingenuity have been proven wrong again and again and again. And we’ve managed to do things that people thought unthinkable, whether it’s the Manhattan Project under the guidance of Robert Oppenheimer, who now has become a bold face term in popular print or putting man on the moon, which many thought was impossible to do.
So we know how to do big things. This is a big thing that we must do, and we have to be back in the office to answer that email that is, in fact, a siren blaring for everyone to hear and see. AI is here and beware of what it will do if we don’t do something to control it. And not just in some distant point in the future, but as all of you have said with a time horizon that would’ve been thought unimaginable just a few years ago. Unimaginably quick, let me ask each of you, because part of that time horizon is our next election in 2024, and if there’s nothing that focuses the attention of Congress, it is an election, nothing better than an election to focus the attention of Congress. Let me ask each of you what you see as the immediate threats to the integrity of our election system, whether it’s the result of misinformation or manipulation of electoral counts, or any of the possible areas where you see an immediate danger as we go into this next election. I’ll begin with you, Mr. Ameda.
Dario Amodei:
Yes. So I am thankful for this. Thanks for the question, Senator. You know, I think this is obviously a very timely thing to worry about. You know, when I think of the risks here, my mind goes to misinformation generation of deep fakes use of AI systems to manipulate people or produce propaganda or just, just do anything deceptive. I, you know, I can speak a little bit about some of the things we’re doing. You know, we train our model with, you know, this method called constitutional AI, where you can lay out explicit principles. It doesn’t mean the model will follow the principles, but there are terms in our constitution, which is publicly available, that tells the model not to generate misinformation. The same is true in our, in our business terms of use. One of the commitments with the White House was to start to watermark content, particularly in the auto audio, in the audio and visual domain. I think that’s very helpful, but would also benefit from watermarking, gives you the technical capability to, you know, to detect that something is AI generated, but requiring it on the side of the law to be labeled, I think would be something that would be very helpful and timely.
Sen. Richard Blumenthal (D-CT):
Thank you, Mr. Bengio.
Yoshua Bengio:
I agree with all of that. I will add a few things. One, one concern I have is that even if companies use watermarking and especially because there’s now several open source versions to train LLMs or use them, including model weights that have been made available to the global community. We also need to understand how things can go wrong on that front. In other words, people are not all going to obey that law. And one important thing I’m concerned about is one can take a pre-trained model, say by, by a company that made it public, and then without huge computing resources. So not the hundred million cost that it takes to train them, but, but something very cheap can tune these systems to a particular task, which could be to play the game of being a troll, for example. There’s plenty of examples of that to train them on or other examples in, in generating deep fakes in a way that might be more powerful than what we’ve seen up to now. So I don’t know how to fix this, but I wanna bring that to the attention of this committee.
Sen. Richard Blumenthal (D-CT):
Thank you. Well, I, on that point, and on both of the excellent points that you both have raised, I would invite fixes and
Yoshua Bengio:
Well, I mean, one immediate fix is to avoid releasing more of these pre-trained large models. That’s the thing that governments can do, because right now, very few companies, including, you know, the seven you, you brought last week can do that. And so that’s a place where government can act.
Sen. Richard Blumenthal (D-CT):
Professor Russell.
Stuart Russell:
I certainly like to support the remarks of the other two witnesses. And I would say my major concern with respect to elections would be disinformation and particularly external influence campaigns. Because with these systems, we can present to the system a great deal of information about an individual everything they’ve ever written or published on Twitter or Facebook, their social media presence there, floor speeches and train the system and ask it to generate a disinformation campaign, particularly for that person. And then we can do that for a million people before lunch. And that has a far greater effect than, you know, the sort of spamming and broadcasting of false information that isn’t tailored to the individual. I think labeling is important for text. It’s gonna be very difficult to tell whether a short piece of text is machine generated if someone doesn’t want you to know that it’s machine generated.
I think an important proposal from the global partnership on AI is actually for a kind of an escrow, an encrypted storage where every output from a model is stored in an encrypted form, enabling, for example, a platform to check whether a piece of text that’s uploaded is actually machine generated by testing it against the, the escrow storage without revealing private information et cetera. So that can be done. Another problem we face is that there are many, many extremely well-intended efforts to create standards around labeling and how platforms should respond to labels in terms of what should be posted. And media organizations like the B B C the New York Times wall Street Journal, et cetera, et cetera, there are dozens of these coalitions. The effort is very fragmented. And, you know, there are as many standards as there are coalitions.
I think it really needs national and probably international leadership to bring these together to have a pretty much a unified approach and standards that all organizations can sign up to. And thirdly, I think there’s a lot of experience in other spheres, such as in the equity markets, in real estate, in insurance business, where truth is absolutely essential. If you take the equity markets, if companies can make up their quarterly figures then the equity markets collapse. And so we developed this whole regulated third party structure of accountants audits so that the information is reasonably trustworthy in real estate. We have title registries, we have notaries, all kinds of stuff to make it work. We don’t really have that structure in the public information sphere. And we see, you know, again, it’s very fragmented. There’s factcheck.org, there’s Snopes, there’s I suppose Elon Musk is gonna have his truth, G p T and so on. Again, this is something that I think governments can help in terms of licensing and standards for how those organizations should function. And again, what platforms do with the information that the third party institutions supply to enable users to have access to high quality information streams. So there’s, I think there’s quite a lot we can do, but it’s pretty urgent.
Sen. Richard Blumenthal (D-CT):
Thank you. I, I think all of these points argue very, very powerfully against fragmentation for some kind of single entity that would establish oversight standards, enforcement of rules. Because as you say malign actors can not only eliminate quarterly reporting, they can also make up numbers for corporations that can disastrously impact the stock of the corporation. I’m gonna,
Stuart Russell:
If I just might add one, one point, we’re, we’re absolutely not talking about a ministry of truth. In some sense, it’s similar to what happens in the courts. The courts had standards for finding out what the truth is, but they don’t say what the truth is. And that’s what we need,
Sen. Richard Blumenthal (D-CT):
But protecting our election system has to be a priority. I think all of you are very, very emphatically and cogently making that point. Professor .
Yoshua Bengio:
Yeah, I would like to add one, one suggestion which may sound drastic, but isn’t, if you look at other fields like banking in order to reduce the chances that AI systems will massively influence voters through social media, one thing that should have been done a long time ago is that social media accounts should be restricted to actual human beings that have identified themselves ideally in person, right? And right now social media companies are spending a lot of money to figure out whether an account is legitimate or not. They will not by themselves force these kinds of regulations because it’s going to create friction to recruit more users. But if the government says everyone needs to do it, they’ll be happy. Well, I’m not them, but that’s what I would do if I were them. Thank you, Senator Hawley.
Sen. Josh Hawley (R-MO):
Let’s start, if we could, by talking about who controls this technology currently and who’s developing it. Mr. Amide, if I could just start with you just help me understand some of the structure of your company, of Anthropic. Google owns a significant stake in your company, doesn’t it?
Dario Amodei:
Yes. Google was an investor in Anthropic. They don’t control any board seats, but yes, Google is an investor in philanthropic.
Sen. Josh Hawley (R-MO):
Give us a sense of what we are talking about? What, what kind of stake are we talking about?
Dario Amodei:
I don’t remember exactly. Couldn’t give it to you exactly. I suspect it’s low double digits, but would need to follow up on this.
Sen. Josh Hawley (R-MO):
Well, the press has reported it at $300 million in investment with at least a 10% stake in the company. Does that sound broadly correct?
Dario Amodei:
That sounds broadly correct.
Sen. Josh Hawley (R-MO):
That’s a pretty big stake. Let’s talk about OpenAI where you used to work, right? Yes. OpenAI, it’s been reported, has a very significant chunk of funding that comes from another massive technology company, Microsoft. It’s been reported in the press that this was one of the reasons that you left the company. You were concerned about this. I’ll let you, you can speak to that if you want to. I don’t wanna put words in your mouth, but the stake that I believe Microsoft is reported to have an open AI approaches 49%. So it’s not controlling, but it’s, it’s awfully, awfully close. Tell me this, when Google’s stake in your company occurred, the Financial Times broke the story on this, but reported that the transaction wasn’t publicized when it actually happened. Why was that? Do you know?
Dario Amodei:
I couldn’t speak to the, yeah, I couldn’t speak to the decisions made by Google here. I do wanna make one point, which is our relationship with Google at the present time. It’s primarily focused on hardware. So in order to train these models, you need to purchase chips. And you know, this, this investment came with you know, commitment to spend on the cloud. And our relationship with Google has been primarily focused on hardware, hasn’t primarily been you know, commercial or involved with, with governance.
Sen. Josh Hawley (R-MO):
So there’s no plans to integrate your Claude, your equivalent of ChatGPT. There’s no plans to integrate that with Google search, for example.
Dario Amodei:
That’s not occurring at the present time.
Sen. Josh Hawley (R-MO):
Well, I know it’s not occurring, but, but are there plans to do it, I guess is my, is my question.
Dario Amodei:
I can’t, can’t speak to what the possibilities are for the future, but that’s not something that’s occurring at the present.
Sen. Josh Hawley (R-MO):
Don’t you think that that would be frightening? I mean, just to come back to something, professor Russell said a moment ago, he talked about the ability in the election context of AI to fed in the information from, let’s say, one political figure. Everything about that person, it the ability to come up with a very convincing misinformation campaign. Now imagine if that technology also, if, if the same large language model, for example, also had the information, the voter files, millions of voters, and what knew exactly what would capture those voters’ attention, what would hold it, what arguments they found most persuasive, the ability to weaponize misinformation and to target it toward particular voters would be exceptionally powerful. Right? Now, search is all about getting and keeping users’ attention. That’s how Google makes money. I’m just imagining your technology, a generative AI aligned and integrated and folded into search, the power that would give Google to get users’ attention, keep their attention, push information to them, would be extraordinary, wouldn’t it?
Dario Amodei:
Yes. So, I mean, I think Senator, I think these are very important issues. And, you know, I want to wanna raise a few points on here. One is some of the things I said in response to Senator Blumenthal’s questioning, which is you know, on, on misinformation. So we put terms in Claude’s constitution that tell it not to generate misinformation or political bias in any direction. I, again, want to emphasize over and over again that these methods are, are not yet perfect, and the science of producing this is, is, is not, is not exact yet, but this is something we work on. I, you know, I think you’re also getting at some important privacy issues here about personal information. And this is an area where also in our constitution, we, we discourage our models from from producing personal information. We don’t train on, you know, publicly, publicly available, publicly available information. So, you know, it’s, it’s, it’s very corridor mission you know, to produce models that, that don’t, that, that, that at least try not to have these problems.
Sen. Josh Hawley (R-MO):
Well, you say that you tell the model not to produce misinformation. I’m not sure exactly what that means, but do you tell it not to help massive companies make a profit? Well, this would be Google’s, this would be Google’s interest, right? Above all profits. The whole reason they want to get users’ attention and then keep users’ attention and keep us searching and scrolling is so that they can push products to us and make lots and lots of money, which they do. It seems to me that your technology melded with theirs could make them an enormous sum of money. That would be great for them. Would it be so good for the American consumer?
Dario Amodei:
Again, I can’t speak to decisions made by a different company like Google. But you know, we’re, we are, we are, you know, we are doing the best we can to make our systems ethical. When, you know, in terms of, you know, how do we tell our model not to do things? There’s a training process where, you know, we train the model in a loop to tell it for some for some given output. You know, is your, is your response in line with these principles. And, you know, over, over the last six months since we’ve developed this method of constitutional ai, we’ve gotten better and better at getting the model to be in line with what the Constitution says. Again, I would still say it’s not perfect, but, you know, we very much focus on the, the, the safety of the model so that it, it, it doesn’t do the things that you’re, that you’re concerned about Senator.
Sen. Josh Hawley (R-MO):
Well listen, this, I think this surfaced an important point, and I just want to, I just wanna underscore this ’cause I think it’s important. All, all this, I appreciate that you want your models to be ethical and so forth. That’s great. But I would just suggest that that is in the eye of the beholder and the talk of what is ethical or what is appropriate is going to really d vary significantly determined by, or depending on who controls the technology. So I’m sure that Google or Microsoft using these generative models, linking it up with their ad-based models would say, oh, it’s perfectly ethical for us to try and get the attention of as many consumers as possible by any means possible, and to hold it as long as possible. And they would say, there’s no problem with that, that’s not misinformation, that’s business.
Now, would that be good for American consumers? I doubt it. Would that be respectful of American consumers’ privacy and their integrity? Would it prevent them or would it protect them rather from manipulation? I doubt it. I mean, so I think we’ve gotta give some serious thought here to who controls this technology and how they are using it. And this is, I, I appreciate all that you’re doing. I appreciate your commitments. I think that’s great. I just wanna say, I just wanna underline, this is a very serious structural issue here that we’re gonna have to think hard about. And the control of this technology by just a handful of companies and governments is a huge, huge problem. Hopefully we can come back to this. Thanks, Mr. Chairman.
Sen. Richard Blumenthal (D-CT):
Thanks, Senator Hawley. Senator Klobuchar.
Sen. Amy Klobuchar (D-MN):
Thank you very much. So I chair the Rules committee and we’re working on a number of pieces of legislation and I’ve really appreciated working with Senator Hawley on some of this. But one bill is, you know, watermarks and making sure that at the election materials say produced by ai, but I don’t think that’s enough when you look at the fact that someone’s going to watch a fake Joe Biden or fake Donald Trump, or a fake Elizabeth Warren. All of this has really happened. And then not know who the person is and not know if it’s really them. And it’s not gonna help just at the very end. It might for some things, but to just say at the end, oh, by the way, that was produced by ai, I hope you saw a little mark that end that says that. So could, could you address that Professor Russell? How within the clear confines of the Constitution for things like satire, we’re gonna have to do more than just watermarks.
Stuart Russell:
I do wanna be careful not to veer into, once again, the sort of ministry of truth idea. But I think clear labeling, I mean, if, if you, if you look at what happened with credit cards, for example, used to be that credit cards came with 14 pages of tiny, tiny print. And that allowed companies to rip off the consumer all the time. And eventually Congress said, no, there’s gotta be disclosure. You’ve gotta say, this is the interest rate, this is the grace period, this is the late fee. And a couple of other things that has to be in big print on the front of the envelope or on the front page. There are very strict rules now about how you direct market credit cards and other lending products. And that’s been enormously beneficial because it actually allows competition on those primary features of the product
Sen. Amy Klobuchar (D-MN):
As opposed, yeah, you can’t really compare a credit card to someone who’s telling the United States of America that there’s some kind of a nuclear explosion when there isn’t. Right.
Stuart Russell:
But we, but the, the point being, we can mandate much clearer labeling than just a little thing in the corner at the end of the 92nd piece. Right. We could say, for example, there’s gotta be a big red frame around the outside of the image when it’s a machine generated image.
Sen. Amy Klobuchar (D-MN):
Okay. I’m just gonna, Professor Bengio, what do you think?
Yoshua Bengio:
Well my view on this is we should be very careful with anything, any kind of use of AI for political purposes, political advertising, whether it’s done officially through some agency that does advertising or in a more direct way, but
Sen. Amy Klobuchar (D-MN):
It might not be actual advertising, it’s just put out for circulation. That’s always what we’ve confronted because the Federal Election Commission, while deadlocking on this, has asked for authority, including the Republican appointed members to do more. But go ahead.
Yoshua Bengio:
In many countries, any kind of advertising which would include disseminating such videos is not allowed for some period before the election to try to minimize the mm-hmm. , you know, the potential effect of these things.
Sen. Amy Klobuchar (D-MN):
Right. could I just Mr. Amodei, the one significant concern, just switching gears here, ’cause I talked to some people in the banking community about this small banks, is that they are really worried they’re gonna see AI used to scam people you know, pretending to be your mom’s voice or your more likely granddaughter’s voice actually getting that voice, right, making a call for money. How can Congress ensure that companies that create AI platforms can not be used for those deceptive platforms? What kind of rules should we put in place so that doesn’t happen?
Dario Amodei:
Yes, Senator. So, so I think these questions about deception and scams are probably closely related to these questions about misinformation, right? Yeah. They’re, they’re a little bit two sides of the same coin. So I think on the misinformation, I wanted to kind of clarify, you know, there’s, there’s technical measures and there’s policy measures. So, you know, watermarking is the technical measure. Watermarking makes it possible to take an AI to take the output of an AI system, run it through some automated process, that will then return an answer that it was generated by AI or not generated by ai. That’s important. And, you know, we’re working on that and others are working on that. But I think we also need policy measures. So going back to what the other two witnesses said focusing on, you know, a requirement to label AI systems is not the same as a requirement to watermark them. One is for the designer of the AI system to embed something. The other is for wherever the AI system ends up Yep. In the end for someone to be required to, to label it. So I think we need both and, and probably, you know, this, this, this, this, this Congress can do more on the second thing and the companies and researchers can do more on the first thing. Mm-Hmm.
Sen. Amy Klobuchar (D-MN):
. Okay. You, and so what are you talking about the scams where the granddaughter calls in, the grandma goes out and takes all her money out, we’re just gonna, yeah. I mean, let that happen,
Dario Amodei:
Or, well, I mean, certainly certainly it’s, it’s, it’s already, it’s already illegal to do that. I can think of a number of authorities that Uhhuh would use to strengthen that for AI in particular. I think, you know, that’s, that’s kind of up to the Senate and the Congress to figure out what the best, the best measure is. But, you know, certainly I’d be in favor of strength and protections there.
Sen. Amy Klobuchar (D-MN):
Well, I hope so. About half of the states have laws that give individuals control over the use of their name, image, and voice. But in the other half of the country someone who was harmed by a fake recording purporting to be them has little recourse. Senator Coons and Tillis just did a hearing on this. Would you support a federal law, Mr. Bengio, that gives individuals control over the use of their name, image, and voice?
Yoshua Bengio:
Certainly, but I would go further. If you think about counterfeiting money, the criminal penalties are very high, and that deters a lot of people. And when it comes to counter feeding humans, it should be at least at the same level.
Sen. Amy Klobuchar (D-MN):
Mm-Hmm. Okay. one last thing I wanted to ask about here is just the ability of researchers to be able to figure out what is going on. And we have, there’s a bill that a number of us are supporting, including center Blumenthal that allows for researchers the transparency that we need. And including Center Cassidy, corny and Koons and Romney. It’s called the Platform Accountability. And Trent Parents Act to require social media companies to share data with researchers. So we can try to figure out what’s happening with the algorithms and the like. Dr. Russell, why is researcher access to social media platform data so important for regulating ai?
Stuart Russell:
So our experience actually involved three years of negotiating an agreement with one of the large platforms only to be told at the end that actually they didn’t wanna pursue this collaborative agreement after all.
Sen. Amy Klobuchar (D-MN):
We don’t really have three years to despair on AI, it sounds like. So no, we don’t continue on.
Stuart Russell:
Yes. and, and you know, I then discussed this with the director of the digital division of OECD, and he said I was about the 10th person who had told him the same story. So it seems there’s a modus operandi of appearing to be open to collaborations with researchers only to terminate that collaboration right before it actually begins. There have been claims that they have provided open data sets to researchers to allow this type of research, but I’ve talked to those researchers and it hasn’t happened.
Sen. Amy Klobuchar (D-MN):
And why is it important to have it to put in place these regulations? We know we’ll be, we can’t wait for you to get all the data, obviously, and it, we can’t let it take three years, but putting in place a clear mandates that that data be shared, why does that helpful?
Stuart Russell:
Because the effects of, for example, the social media recommender systems can, are, are, they’re correlated across hundreds of millions of people. So those systems can shift public opinion in ways that are not even necessarily deliberate. They’re probably not deliberate, but they can be massive and polarizing unless we have access to the data, which the companies internally certainly do. And I think the Facebook revelations from a few years ago suggested that they are totally aware of what’s happening, but that information is not available to governments and researchers. And I think, you know, in a democracy, we have a right to know if our democracy is being subverted by an algorithm. And that seems absolutely crucial.
Sen. Amy Klobuchar (D-MN):
Alright. You wanna add one more thing?
Yoshua Bengio:
Yes. trying to respond to your question from another angle. Why researchers, I would say academic researchers, not all of them, but many of them don’t have any commercial ties. They have a reputation to keep in order to continue their career. So they’re not perfect, but I think it’s a very good yardstick to judge that
Sen. Amy Klobuchar (D-MN):
Something except for Professor Russell . Okay. Very good. Do you agree with it too, Ben?
Dario Amodei:
Yes. I just wanted to say, I think transparency is important even as a, even as a broader issue, you know, a number of our research efforts go into looking inside to see what happens inside AI systems, why they make the decisions that they make.
Sen. Amy Klobuchar (D-MN):
Okay. And yeah, I’ve gotta turn over to my colleagues who’ve been patiently waiting. Thank you.
Sen. Richard Blumenthal (D-CT):
Thank you. We’ll, we’ll circle back to the black box algorithms, which is a major topic of interest. Senator Blackburn.
Sen. Marsha Blackburn (R-TN):
Thank you, Mr. Chairman. And thank you all for being here. Mr. Amadei, I think you got a little aggravated trying to answer Senator Hawley’s question about something you may create that you think I’ve as an ethical use. But let me tell you why this bothers us. The unethical use Senator Blumenthal have worked, and I have worked together for nearly four years on looking at social media and the harms that have happened to our nation’s youth. And hopefully this week our kids Online Safety Act comes out of committee. It wasn’t intended, social media wasn’t intended. The intent was not to harm children, cause mental health crises, put children in touch with drug dealers and pedophiles. But we have heard story after story and have uncovered instance after instance where the technology was used in a way that nobody ever thought it was. And now we’re trying to clean it up because we’ve not put the right guardrails in place.
So as we look at ai, the guardrails are very important. And Professor Russell, I wanna come to you because the US is behind the, we’re, we’re really behind our colleagues in the eu, the uk New Zealand, Australia, Canada, when it comes to online consumer privacy and having a way for consumers to protect that name, image, voice, having a way for them to protect their data, their writings, so that AI is not trained on their data. So talk for just a minute about how we keep our position as a global leader in generative ai and at the same time, protect consumer privacy. Would a federal privacy standard help? What are your recommendations there?
Stuart Russell:
I think there needs to be absolutely a requirement to disclose if the system is harvesting the data from individual conversations. And my guess is that immediately people would stop using a system that says, I am taking your conversation I am folding it into the next version, the model. And anyone in the country can basically listen in on this conversation because they’re gonna be asking questions about what I do.
Sen. Marsha Blackburn (R-TN):
Lemme ask this. Do you think the industry is mature enough to self-regulate?
Stuart Russell:
No.
Sen. Marsha Blackburn (R-TN):
Not. So therefore, it is going to be necessary for us to mandate a structure.
Stuart Russell:
Yes. I think there is certainly a change of heart at OpenAI. Initially, they were harvesting the data produced by individual conversations. And then more recently they said, we’re gonna stop doing that. And, and clearly if, if you’re in a company and even even not considering personal conversation, but just in a company, and you want the system to help you with some internal operation, you’re going to be divulging company proprietary information to the chat bot to get it, to give you the answers you want. And if that information is then available to your competitors by simply asking ChatGPT what’s going on over in that company? This would, this would be terrible. So having a clear definition of what it is, there’s a technical term oblivious, right? Which basically says, whatever we talk about, I am gonna forget completely, right? That’s a guarantee that systems should offer. I actually believe that browsers and any other device that interacts with individuals should offer that as a, as a formal guarantee. Let me also make the point about enforcement which I think Senator Hawley mentioned at the beginning, a right of action. But for example, we have a federal do not call list. So, as I understand it is a federal crime for a company to robocall people who are on the federal do not call list. My estimate is that there are hundreds of billions or possibly a trillion federal crimes happening every year. Every day.
Sen. Marsha Blackburn (R-TN):
Yes.
Stuart Russell:
So say we’re not really enforcing anything.
Sen. Marsha Blackburn (R-TN):
Right. So you would say existing law is not sufficient for AI.
Stuart Russell:
Correct. And okay. And existing, all right. Let enforcement patents as well.
Sen. Marsha Blackburn (R-TN):
Yes. Let me move on. In Tennessee, AI is important. Our auto industry uses so many AI applications, you know, and we followed this issue for quite a, quite a period of time because of the auto industry, because of the healthcare industry and the healthcare technology industry that is headquartered in Nashville. And of course, predictive diagnosis, disease analysis, research, pharmaceutical research benefits tremendously from ai. And then you look at the entertainment industry and the voice cloning, and you look at what our entertainers, our songwriters, our artists, our authors, our publishers, our TV actors, our TV producers are facing with ai. And to them, it is an absolute way that they’re robbing them of their ability to make a living off of their creative work. So our creative community has a different set of issues. Martina McBride, who is no stranger to country music, went into Spotify and the playlist, or a big thing, building your own playlist.
So she was going to build a country music playlist outta Spotify. She had to refresh that 13 times before a song by a female artist came up 13 times. So you look at the power of AI to shape what people are hearing. And in Nashville we like to say, you can go on lower broad, you can go to one of the honky tonks. Your band can have a great night. You can be discovered in YouTube, could end up with a record deal. But if you’ve got these algorithmically AI generated playlist that cut out new artists or females or certain sounds, then you are limiting someone’s potential. Just as if you allow AI generated content like on jukebox, which OpenAI is experimenting with, then you and you train it on that artist sound and their their songs to imitate them, then you are robbing them of the ability to be compensated. So how do we ensure that that creative community is still going to have a way to make a living without having AI become a way to still their creative talents and works?
Stuart Russell:
I think this is a very important issue. I think it also applies to book authors, some of whom are suing OpenAI. And I believe I’m not really an expert on copyright at all but some of my colleagues are like Pam Samuelsson, for example. And I think she would be a great witness for a future hearing. And I think the view is that the law as it’s written simply wasn’t ready for this kind of thing to be possible. So if by accident the system produces a song that has the same melody, then it’s gonna fall under existing law that you are basically plagiarizing. And there are, there have been cases of human plagiarism that have, that have succeeded.
Sen. Marsha Blackburn (R-TN):
Well we’ve explored the fair use issue in this committee, and we’ll continue to do so. And my time has expired. Okay. Thank you, Mr. Chairman.
Sen. Richard Blumenthal (D-CT):
Thanks, Senator Blackburn. We’ll begin a second round of questions. And I want to begin with one of the points that Senator Blackburn was making about private rights of action, which I think Senator Hawley and I have discussed incorporating in legislation. In many instances, let’s be very blunt, agencies become captive of the industries they’re supposed to regulate. And this one is too important to allow it to become captive. And one very good check on the captivity of federal entities, agencies, or offices is in fact private rights of action. So I would hope that you would endorse that idea. I recognize you’re not lawyers, you don’t you’re not in the business of litigating, but I’m hoping that you would support that idea. I see nodding heads for the record. Let me turn to also to recap the very important comments that you all have made about elections to take action against deep fakes against impersonation, whether it’s by labeling or watermarks, some kind of disclosure without censorship, we don’t want a ministry of truth.
We wanna preserve civil rights and liberties. The free speech rights are fundamental to our democracy. But the kinds of manipulation that can take place in an election, including interfering with vote counts and misdirection to election officials about what’s happening presents a very dangerous specter. Superhuman ai, superhuman ai. I think all of you agree, we’re not decades away. We’re perhaps just a couple of years away, and you describe it. Well, all of you do. In terms of the biologic effects, the development of viruses, pandemics, toxic chemicals but superhuman AI evokes for me, artificial intelligence that could on its own, develop a pandemic virus on its own, decide Joe Biden shouldn’t be our next president on its own, decide that the water supply of Washington DC should be contaminated with some kind of chemical and have the knowledge to do it through public utility system.
And I think that argues for the urgency. And these are not sort of science fiction anymore. You described them in your testimony, others have done it as well. So I think your warning to us has really graphic content, and it ought to give us impetus with that kind of urgency to develop an entity that can not only establish standards and rules, but also, but research on countermeasures that detect those misdirections, whether they’re the result of mal actors or mistakes by AI or malign operation of AI itself. Do you think those countermeasures are within our reach as human beings? And is that a function for an entity like this one to develop?
Dario Amodei:
Yes. I mean, I think this is, yeah, this is, this is one of the core things that, you know, whether it’s the, the bio risks from models that, you know, I’ve kind of stated in testimony, you know, are likely to come in two to three years, or the risks from truly autonomous models, which I think are more than that, but might not be a whole, whole lot more than that. I think this idea of being able to even measure that the risk is there, is really the critical thing. If we can’t measure, then, you know, we can put in place all of these regulatory apparatus, but it, you know, it’ll all, it’ll all be a, a rubber stamp. And so funding for the measurement apparatus and the enforcement apparatus working in concert is, is, is really gonna be central here. I mean, our suggestion was, you know, NIST and the National AI Research Cloud you know, which can, can help kind of allow a wider range of researchers to study these risks and develop countermeasures. So I think that that, that seems like a very, very important, that seems like a very important measure. I’m worried about our ability to do this in time, but, you know, we have to try and we have to put in all the effort that we can.
Sen. Richard Blumenthal (D-CT):
Mr. Bengio.
Yoshua Bengio:
Yes, I completely agree about the timeline. There’s a lot of uncertainty. So as I wrote in my testimony, it could be a few years, but it could also be a couple of decades because there, you know, research is impossible to predict. But if we follow the trend, it’s very concerning. And regulation liability, they will help a lot. My calculations is, you know, we could reduce the probability of a rogue AI showing up by maybe a factor of a hundred if we do the right things in terms of regulation. So it’s really worth it, but it’s not gonna bring those risks to zero, and especially for bad actors that don’t follow the rules anyways. So we need that investment in countermeasures and AI is gonna help us with that, but we have to do it carefully so that we don’t create the problem that we’re trying to solve in the first place.
Another aspect to this is it’s not just ai. You know, it needs to bring expertise in national security, in bio weapons, chemical weapons and AI people together. The organizations that are gonna do that, in my opinion, shouldn’t be for-profit. We shouldn’t mix the objective of making money, which, you know, makes a lot of sense in our economic system with the objective, which should be single-minded of defending humanity against a potential rogue AI. Also, I think we should be very careful to do this with our allies in the world and not do it alone.
There is first we, we can have a diverse set of approaches. Because we don’t know how to really do this. We are hoping that as we move forward and we try to solve the problem, we’ll find solutions. But we need a diversity of approaches. And we also need some kind of robustness against the possibility that one of the governments involved in this kind of research isn’t democratic anymore for some reason, right? This can happen. We don’t want a country that was democratic and has, and has power over a superhuman AI to be the only country working on this. We need a resilient system of partners so that if one of them ends up being a bad actor the others are there.
Sen. Richard Blumenthal (D-CT):
Thank you very much. I’ll turn to Professor Russell if you have a comment.
Stuart Russell:
Yeah. So I, I completely agree that if there is a body that’s set up that it should be enabled to fund and coordinate this type of research, and I completely agree with the other witnesses that we haven’t solved the problem yet. I think there are a number of approaches that are promising. I tend towards approaches that provide mathematical guarantees rather than just sort of best effort guarantees. And, you know, we’ve seen that in the nuclear area where originally the standard, I believe was, you know, you could have a core, a major core accident every 10,000 years. And you had to demonstrate that your system design met that requirement, then it was a million years, and now it’s 10 million years. And so that’s progress. And it comes from actually having a real scientific understanding of the materials, the designs, redundancy et cetera.
And we are just in the infant stages of a corresponding understanding of the AI systems that we’re building. I would also say that no government agency is going to be able to match the resources that are going into the creation of these AI systems. The numbers I’ve seen are roughly $10 billion a month going into a g i startups. And just for comparison that’s about 10 times the amount of the entire National Science Foundation of the United States, which has to cover physics, chemistry, basic biology, et cetera, et cetera, et cetera. So how do we get that resource flow directed towards safety? I actually believe that the involuntary recall provisions that I mentioned would have that effect, because if a company puts out a system that violates one of the rules and then is recalled until the company can demonstrate that it will never do that again then the company can go outta business.
So they have a very strong incentive to actually understand how their systems work, and if they can’t, to redesign their systems so that they do understand how they work. That just seems like basic common sense to me. I also wanna mention on, on rogue ai, right? The bad actors professor Bengio has mentioned an approach based on AI systems that are developed to try to counteract that possibility. But I also feel that we may end up needing a very different kind of digital ecosystem in general. And what do I mean by that? Right now to a first approximation, a computer runs any piece of binary code that you load into it we put layers on top of that that say, okay, that looks like a virus. I’m not running that. We actually need to go the other way around. The system should not run any piece of binary code unless it can prove to itself that this is a safe piece of code to run. So it’s sort of flipping the notion of permission. And with that approach I think we could actually have a chance of preventing bad actors from being able to circumvent these controls, because for them to develop their own hardware resources is into the tens or hundreds of billions of dollars. And so that’s an approach I would recommend.
Sen. Richard Blumenthal (D-CT):
I have more questions, but I’m gonna turn to Senator Hawley.
Sen. Josh Hawley (R-MO):
Let’s talk a little bit about national security and, and AI, if we could, Mr. Amodei, to come back to you. You mentioned in your written testimony, in your policy recommendations, your first recommendation in fact is the United States must secure the AI supply chain. And then you mentioned immediately as an example of these chips used for training AI systems. Where are most of the chips made now?
Dario Amodei:
So I think you, what I, what I had in mind….
Sen. Josh Hawley (R-MO):
Your microphone, I think maybe, sorry. That’s okay. Everyone’s eager to hear what you have to say. Go ahead.
Dario Amodei:
Yes. What I had in mind here yes, is that, you know, there are, there are certain bottlenecks in the production of AI systems you know, that ranges from semiconductor manufacturing equipment to chips to the actual, you know, to the actual produced systems, which then have to be stored on a server somewhere, and in theory could be stolen or, or released in an uncontrolled way. So I think, you know, compared to some of the more software elements, those are areas where there are substantially more bottlenecks.
Sen. Josh Hawley (R-MO):
Well, so, okay, understood. But we’ve heard a lot about chips, GPUs about the shortage of them. My question is, and and maybe, maybe you don’t know the answer to this, maybe somebody else does, but, but do you know where most of them are currently manufactured?
Dario Amodei:
Yeah. There are a number of steps in the production process, okay, for chips, right? You produce the raw chip or the actual GPU you know, those happen in a number of, number of places.
Sen. Josh Hawley (R-MO):
For example.
Dario Amodei:
So, you know, an important, important player on the, you know, kind of like making the, the base fabrication side would be TSMC, which is in Taiwan. And then within, you know companies like Nvidia with the United States, you know, then, then then you know, produce those into GPUs. And I don’t know exactly where that process happens. It could be in a large number of places.
Sen. Josh Hawley (R-MO):
As part of securing our supply chain here in this area, should we consider limitations, if not outright prohibitions on components that are manufactured in China?
Dario Amodei:
I, you know, I think on that, on that particular issue, you know, that’s, that’s, that’s not one where I have a huge amount of where I have a huge amount of knowledge. I mean, I think we should think a little bit in the other direction of our things that are produced by our supply chain. Do they end up in places that we don’t want them to be? So we’ve worried a lot about that in the context of models. We just had a blog post out today about AI models saying, Hey, you, you might’ve spent a large number of millions of dollars, maybe someday it’s gonna be billions of dollars to train an AI system. And then, you know, you, you don’t want some state actor or criminal or rogue organization to then steal that and, you know, use it in some, use it in some irresponsible way that you, that you don’t endorse.
Sen. Josh Hawley (R-MO):
Let me, let me get at this problem from a slightly different angle, which is, let’s imagine a hypothetical in which the communist government of Beijing decides to launch an invasion of Taiwan. And let’s imagine, and sadly, it doesn’t take very much imagination. Let’s imagine that they’re successful in doing so. Just give me a back of the envelope forecast, what might that do to AI production?
Dario Amodei:
Yeah, so, I mean, you know, I’m not an economist and it’s hard to, to forecast, but a very large fraction of the chips are indeed, you know, some somewhere go through the supply chain in Taiwan. So I think there’s, you know, there’s no doubt that that is a, there’s no doubt that that is a hotspot and, you know, something that we should be concerned about for sure.
Sen. Josh Hawley (R-MO):
Do either of the other panelists wanna say anything about this? Professor Russell, perhaps?
Stuart Russell:
Yeah, I mean, there are, there are studies my colleague Orville Shell, who’s a China expert, has been working on a a study of these issues. There are already plans to diversify away from Taiwan. TSMC is trying to create a plant in the us. Intel is now building some very large plants in the US and in Germany, I believe. So but it’s, it’s taking time. I think if the invasion that you mentioned happened tomorrow we would be in a huge amount of trouble. As far as I understand it, there are plans to sabotage all the TSMC operations in Taiwan, if an invasion were to take place. So it’s not that all that capacity would then be taken over by Chin.,
Sen. Josh Hawley (R-MO):
That’s what’s sad about that scenario is that would be the best case scenario, right? I mean, if there’s an invasion of Taiwan, the best we could hope for is maybe all of their capacity, or most of it gets sabotaged, and maybe the whole world has to be in the dark for however long. That’s the best case scenario. The point I’m trying to make is, I think your point, Mr. Amodei about securing our supply chains is absolutely critical, and thinking very seriously about decoupling efforts, strategic decoupling efforts, is absolutely vital at every point of the supply chain that we can. And I think we don’t do that with China soon. Frankly, we should have done it a long time ago. If we don’t do it very, very quickly, I think we’re really in trouble. And I think we’ve gotta think seriously about what may happen in the event of a Taiwan invasion. Yeah, go ahead.
Dario Amodei:
Yes, I just wanted to emphasize professor Russell’s point even more strongly that we are trying to move some of the chip fab production capabilities to the US, but that needs to be faster, right? We’re talking about, you know, two to three years for some of these very scary applications, and maybe not much longer than that for truly autonomous AI. Correct me if I’m wrong, but I think the timelines for moving these, these, these production facilities look more like, you know, five years, seven years, and we’ve only started on a small, small component of them. So, so just to, just to emphasize this, I think this is absolutely essential.
Sen. Josh Hawley (R-MO):
Yeah. good. Let me ask you about a different issue related to labor overseas and labor exploitation. The Wall Street Journal published a piece today entitled “Cleaning Up ChatGPT Takes Heavy Toll on Human Workers” Contractors in Kenya say they were traumatized by the effort to screen out description to violence and sexual abuse during the runup to open AI’s hit chatbot, namely ChatGPT, the article details, the widespread use of labor in Kenya to do this training work on the ChattGPT model. I encourage everyone to read it, and I’d like to ask the chairman to be able to enter this into the record without objection. One of the disturbing, it’s a couple of disturbing things. I mean, one is, if we’re talking about a thousand or more workers outsourced overseas.
We’re talking about exploitation of those workers. They work round the clock, the material they’re exposed to is incredible and I’m sure extremely damaging, and that constitutes the issue of lawsuits that they’re now bringing. Here’s another interesting tidbit. The workers on the project were paid an average of between $1 and 46 cents an hour, and $3 and 74 cents an hour. Lemme say that again. The workers on the project were paid on average between $1 and 46 cents an hour and 3.74 an hour. Now, OpenAI says, oh, we thought that they were being paid over $12 an hour. And so we have the classic, classic corporate outsource maneuver where a company outsources jobs that couldn’t be done in the United States, outsources jobs, exploits foreign workers to do it, and then says, oh, we don’t know anything about it. We’re, we’re asking them to engage in this psychologically harmful activity. We’re probably overworking them doing it, and we’re not paying them. But you know, oops, I guess my question is, how widespread is this in the AI industry? Because it strikes me that we’re told AI is new, and it’s a whole new kind of industry, and it’s glittering and it’s almost magical, and yet it looks like it depends in critical respects on very old fashioned disgusting, immoral labor exploitation. So, go ahead, Mr. Amodei. Yes.
Dario Amodei:
So this is actually one area where Anthropic has a substantially different approach from the one that you’ve described. Yeah, but I can’t speak for what other, other companies are doing. But a couple points. One is this, this constitutional AI method, which I mentioned is a way for one copy of the AI system to moderate or help to train another copy of the AI system. This is something that reduces, it does not eliminate, but it substantially reduces the need for the kind of human labor that you’re describing. Second, in our own contracting practices and, you know, I would have to talk to you directly for exact numbers, but I believe that the companies we contract out to are something like northwards of, you know, 75% workers from the US and Canada, and all paid above the, all paid above the California minimum wage. So, I share your concern about these issues, and, you know, we’re committed to both developing research that kind of obviates the, you know, the, the need for some of this, this, this kind of moderation and, you know, not, not exploiting these workers.
Sen. Josh Hawley (R-MO):
Well, it’s good because here’s, I think what would be terrible to see is this new technology that is built by foreign workers, not American workers. That seems like the same old story we’ve heard for 30, 40 years in this country, where we’re told, oh, no, American workers, they cost too much. They’re American workers, they’re just too demanding American workers. They don’t have the skills, so we’re gonna outsource it. We’re gonna give it to other foreign workers. Then you mistreat the foreign workers, then you don’t pay the foreign workers. And then who benefits from it? At the end of the day, these few companies that we talked about earlier who make all the profit and control of it, that seems like an old, old story that I frankly don’t wanna see replicated again, that seems like a dystopia, not like a, a new future. So I think it’s critical that we find out what the labor practices are of these companies.
I’m glad that you’re charting a different course Mr. Amide, and certainly we wanna hold that, hold you to that. But I, I think it’s vital that as we continue to look at, at how this technology’s developing, that we actually push for what, I mean, what’s wrong with having a, a technology that actually employs people in the United States of America and pays them well, I mean, why should an American workers and American families protected by our labor laws benefit from this technology? I, I don’t think that’s too much to ask. And frankly, I think that we ought to expect that of companies in this country who are with access to our markets who are working on this technology. Mr. Chairman.
Sen. Richard Blumenthal (D-CT):
Thank you. I don’t think you’ll find much disagreement with that proposition. But to have American workers do those jobs, we need to train them. And you all in some sense, because you’re all teachers, you’re all professors are engaged in that enterprise. Mr. Amodei, I don’t know whether you can be called still a professor, but probably not. I was never a Professor , but we need to train workers to do these jobs, and for those who want to pause, and some of the experts have written that we should pause AI development I don’t think it’s gonna happen. We right now have a gold rush, literally much like the gold rush that we had in the Wild West, where in fact, there are no rules, and everybody’s trying to get to the gold without very many law enforcers out there preventing the kinds of crimes that can occur. So I am totally in, in agreement with Senator Hawley in focusing on keeping it in America, made in America when we’re talking about ai. And I think he is absolutely right that we need to build those kinds of structures, provide the training and incentives that enable it and enforce it. Let me, though, come back to this issue of national security. Who are our competitors among our adversaries and our allies who are closest to the United States in terms of developing AI? Is it China? Are there other adversaries out there that could be rogue nations, not just rogue actors, but rogue nations and whom we need to bring into some international body of cooperation?
Stuart Russell:
So I think the closest competitor we have is probably the UK in terms of making advances in base research, both in academia and in DeepMind in particular, which is based in London now being merged more forcefully into the larger Google organization. But they have a very distinct approach and they’ve created an ecosystem in the UK that’s really quite productive. I’ve spent a fair amount of time in China. I was there a month ago talking to the major institutions that are working on a g I and my sense is that we have slightly overstated the level of threat that they currently present. They’ve mostly been building copycat systems that turn out not to be nearly as good as the systems that are coming out from anthropic and open AI and Google.
So but the intent is definitely there. I mean, they’ve publicly stated their goal to be the world leader and they are investing probably larger sums of public money than we are in the US, smaller sums in the private sector, the areas where they are actually most effective. And I was actually on a panel in Tianjin for the top 50 Chinese AI startups, and they were giving out awards. But I think about 40 of those 50, their primary customer was state security. So they’re extremely good at voice recognition, face recognition tracking and recognition of humans based on gait and, and similar capabilities. They’re useful for state security. Other areas like reasoning and so on, planning, they’re just not in, they’re not really that close. So they have a pretty good academic sector that they are in the process of ruining by forcing them to meet numerical publication targets and things like that. They don’t give people the freedom to think hard about the most important problems. And, and they are not producing the basic research breakthroughs that we’ve seen both in the academic and the private sector in the us I’m also hard to
Sen. Richard Blumenthal (D-CT):
Produce a superhuman thinking machine if you don’t allow humans to think.
Stuart Russell:
Yep. you know, I’ve also looked a lot of European countries. I’m working with the French government quite a bit, and, and I don’t think anywhere else is in the same league as those three. Russia in particular has been completely denuded of its experts and was already well behind
Sen. Richard Blumenthal (D-CT):
Mr. Bengio. Professor.
Yoshua Bengio:
On the allied side there are a few countries including Canada from which I come from that have really important concentration of talent in, in ai and in, you know, in Canada, we’ve contributed a lot of the principles behind what we’re seeing today. There is also a lot of really good European researchers in the UK and outside the uk. So I think that we would all gain by making sure we work with these countries to develop these countermeasures, as well as the improved understanding of the potentially dangerous scenarios and what methodologies in terms of safety can protect us.
Sen. Richard Blumenthal (D-CT):
You’ve advocated decentralized labs.
Yoshua Bengio:
Yes. But under a common umbrella that would be multilateral, maybe this could be a good starting place could be five eyes or G seven, and that would capture pretty much bulk of the expertise in these very strong AI systems that, that could be important here.
Sen. Richard Blumenthal (D-CT):
And there would probably be some way for our entity, our national oversight body doing licensing and registration to still cooperate. In fact, I would guess, oh, yeah, that’s one of the reasons to have a single entity to be able to work and collaborate. Yes, with other countries,
Yoshua Bengio:
There’s no doubt that individual countries have their own national security organizations and are gonna do their own laws. But the more we can coordinate on this the better. Of course, I think some of that research should be classified and, and, and not shared with anyone, only trusted parties. So there are aspects of what we have to do that have to be really broad at the international level. And I think the guidelines or the maybe mandatory rules for safety should be something we do internationally, like with the un, like we want every country to follow some basic rules, because even if they don’t have the technology, some rogue actor even here in the US might just go and do it somewhere else. And then you know, viruses, computer or biological viruses don’t see any border. So we need to make sure there’s an international effort in terms of these safety measures. We need to agree with China on these safety measures, is the first interlocutor, and we need to work with our allies on these countermeasures.
Sen. Richard Blumenthal (D-CT):
I think that all those observations are extremely timely and important. And on the issue of safety, I know that Anthropic has developed a model card for Claude that essentially involves evaluation capabilities. Your red teaming considered the risk of self replication or a similar kind of danger. Open eye, engaged, the same kind of testing. We’ve been talking a lot about testing and auditing. So apparently you share the concern that these systems may get outta control. Professor Russell recommended an obligation to be able to terminate and AI system Microsoft called this requirement safety breaks. When we talk about legislation, would you recommend that we impose that kind of requirement as a condition for testing and auditing the evaluation that goes on when deploying certain AI systems? Obviously, again, focusing on risk. I think everybody has talked about systems that are vulnerable risk systems and AI models spreading like a virus seems a bit like science fiction, but these safety breaks could be very, very important to stop that kind of danger. Would you agree?
Dario Amodei:
Yes. So I think… I, for one, think that makes a lot of sense. I mean, the, the way I would think about it is, you know, in the, in the testing and auditing regime that, that we’ve all, we’ve all discussed, you know, the, the best case is if all of these dangers that we’re talking about don’t happen in the first place, because we run tests that detect the, the dangers, and there’s, there’s basically, there’s basically prior restraint, right? If these things are a concern for public safety and national security, we never want the bad things to happen in the first place, but precisely because we’re still getting good at the science of measurement probably it will happen at least once, and unfortunately, perhaps repeatedly that we run these tests, we think things are safe, and then they turn out not to be safe. And so I agree. We also need a mechanism for recalling things if the tower modifying things, if the tests ended up being wrong. So that, that, that seems like common sense to me, for sure.
Sen. Richard Blumenthal (D-CT):
And, and I think there’s been some talk about AutoGPT, maybe you can talk a little bit about how that relates to safety breaks.
Dario Amodei:
Yes. So AutoGPT refers to use of, you know, currently deployed AI systems, which are, which are not designed to be agents, right, which are just chatbots, but kind of commandeering such systems for taking actions on the internet. You know, to be honest, such systems are not particularly effective at that yet, but they may be a taste of the future and the kinds of things we’re worried about in the future, the long-term risks that I described in the short, medium and, and long-term risks. So I don’t as of yet see a particularly high amount of danger from things like the system you described, but it tells us where we’re going and where we’re going is quite concerning to me.
Sen. Richard Blumenthal (D-CT):
You know in some of the areas that have been mentioned, like medicines and transportation there are public reporting requirements. For example, when there’s a failure, the fa a’s system has an accident, an incident report, they collect data about failures in those kinds of machinery. And it serves as a warning to consumers. It creates a deterrence for putting unsafe products on the market. And it adds to oversight of public safety issues. We’ve discussed this afternoon, both short term and long term kinds of risks that can cause very significant public harm. It doesn’t seem like AI companies have an obligation to report issues right now. In other words, there’s no place to report it. They have no obligation to make it known. If they discover the, oh my God, how did that happen? It can be entirely undisclosed. Would you all favor some kind of requirement for that kind of reporting? Absolutely. And it may be obvious, but let me ask all of you, I, I see, again, your head’s nodding for the record. Would that inhibit creativity or innovation to have that kind of requirement?
Dario Amodei:
I don’t think, I mean, there are many areas where there’s important trade offs. I don’t think this is one of them. I think such requirements make sense. I mean, to give a little of our experience in, you know, red teaming for these biological harms, you know, we’ve had to work on, you know, piloting a responsible disclosure process. I think that’s less about reporting to the public, more about making the other companies aware. But, you know, the two things are similar to each other. So, you know, a lot of this is being done on voluntary terms, and you see some of it coming up in the, you know, the commitments that the seven companies make. But yeah, I think there’s, there’s a lot of legal and process infrastructure that’s missing here and should be filled in.
Stuart Russell:
Yeah. I think to go along with a notion of an involuntary recall there has to be that reporting step happening first.
Sen. Richard Blumenthal (D-CT):
You know you mentioned recalls. Both Senator Hawley and I were state attorneys general before we got this job. And both of us are familiar with consumer issues. One of the frustrations for me always was that even with a recall, a lot of consumers didn’t do anything about it. And so I think the recall as a concept is a good one, but there have to be teeth to it. There has to be a cop on the beat, a cop on the AI beat. And I think the enforcement powers here are tremendously important. And the point that you made about the tremendous amount of money is very important. You know, right now it’s all private funding, or mostly private funding, but the government has an obligation to invest. I think all, all of you would agree, invest in safety just as it has in other technology and innovation because we can’t rely on private companies to police themselves. That cop on the beat in the AI context has to be not only enforcing rules, but as I said at the very beginning, incentivizing innovation and sometimes funding it to provide the airbags and the seat belts and the crash proof kinds of safety measures that we have in the automobile industry. I recognize that the analogy is imperfect, but I think the concept is, is there, Senator Hawley,
Sen. Josh Hawley (R-MO):
This has been a tremendously helpful hearing. I just wanna thank each of you again for taking the time to be here. Can I just ask you if you could give us your one or at most two recommendations for what you think Congress ought to do right now, what, what should, what should we do right now based on your expertise, what we’ve talked about today? I would, I would be very, very curious to hear. So maybe we’ll start with you, professor Russell, and, and go that way.
Stuart Russell:
So I, I gave some you know, move fast and fix things, recommendations in my opening remarks. And I, I, I think there’s no doubt that we’re going to have to have an agency you know, if, if things go as expected AI is gonna end up being responsible for the majority of economic output in the United States. So it cannot be the case that there’s no overall regulatory agency for this technology. And the second thing I think would, would be just to focus again on systems that violate a certain set of unacceptable behaviors are removed from the market. And I think that will have not only a benefit in terms of protecting the American people and our national security, but also stimulating a great deal of research on ensuring that the AI systems are well understood, predictable, controllable and that’s it.
Sen. Josh Hawley (R-MO):
Very good. Professor Bengio.
Yoshua Bengio:
What I would suggest in addition to what Professor Russell said is to make sure, either through incentives to companies, but also direct investment in non-profit that we invest heavily. So totally as much as we spend on you know, making more capable ais that we invest heavily in, in safety whether it’s at the level of the hardware or it’s the level of cybersecurity and, and national security to, to protect the public.
Sen. Josh Hawley (R-MO):
Very good. Mr. Ramee,
Dario Amodei:
I would, again, emphasize the testing and auditing regime for all the risks ranging from, you know, those, those we’ve, we face today, like misinformation came up to the biological risks that I’m worried about in two or three years to the you know, risks of autonomous replication that are some unspecified period after that. You know, all, all, all of those can be tied to different kind of tests that we can, we can, that we can run in our model. And so that strikes me as a, you know, as a scaffolding on which we can build lots of different concerns about AI systems, right? If we start by testing for only one thing, we can, in the end, test for a much, much wider range of concerns. And I think without such testing, we’re blind. Like I give you an AI system, another company gives you an AI system, you, you, you talk to it, it’s not straightforward to determine whether this is a safe system or a dangerous system.
So I would, again, make the analogy to, you know, it’s, it’s like we’re, we’re making these machines, you know, cars, airplanes, these are complex machines. We need an enforcement mechanism in people who are able to look at these machines and say, what are the, what, what are the benefits of these? And what, what is the danger of this particular machine as well as, as well as machines in, in general, once we measure that, I, I feel it’s all gonna work out well. But, but you know, before we’ve identified and have a process for this, we’re, we’re from a regulatory perspective, shooting in the dark. And the final thing I would emphasize is, you know, I, I don’t think we have a lot of time, you know, I personally am open to whatever administrative mechanism puts those kinds of tests in place. You know, very agnostic to whether it’s, you know, a new agency or extending the authorities of existing agencies. But whatever we do, it has to happen fast. And I think to focus people’s minds on the bio risks, I would really target 20 25, 20 26, maybe even some chance of 2024. If, if we don’t have things in place that are restraining what can be done with AI systems, we’re gonna have a really bad time.
Sen. Josh Hawley (R-MO):
Let me just thank each of you. That’s really helpful. Let me just throw an idea out to you while I have you here, so to speak which is, when we think about protecting individuals and their personal data and making sure that it doesn’t end up being used to train one of these generative AI systems without the individual’s consent, we know that there’s just an enormous amount of, of, of pers of our own personal information out there in public, kind of, you know, I mean, it’s really without our permission, but it’s out there on, on the web and everything from our, our credit histories to social media posts, et cetera, et cetera. Should we, should we, in addition to assigning property rights in individual data, you know, explicitly give every American a property, right? And their data, should we also require monetary compensation if AI companies want to use individual data in their model in some way? Professor Gio, go ahead.
Yoshua Bengio:
There, it’s not always gonna be possible to know, to attribute the output of a system to a particular piece of data. ’cause These systems are not just copying, they’re integrating information from many, many sources. And so we need other mechanisms to share to the people who are losing something, for example, artists. But in some cases it could be identified if an output, if close enough to something that has been, you know, as, as copyright or something. I think in that case, yes we should do it.
Sen. Josh Hawley (R-MO):
Any other thoughts? That’s all of my questions, Mr. Chairman.
Sen. Richard Blumenthal (D-CT):
I just, remarkably, I just have a couple of more questions. I promise they will be brief. You’ve been very patient, but this panel is such a great resource that I want to impose on your patience and your wisdom. The the point that you were making earlier about the red teaming and the importance of testing and auditing reminded me about your testimony, your prepared testimony, but also a conversation that you and I had about how anthropic went about testing its large language model, particularly as related to the biological dangers where you worked with world-class biosecurity experts, I think was your quote over many months in order to be able to identify and mitigate the risks that Claude two might raise. On the other hand, I think you may have mentioned a company that basically used graduate students to do the same task. There’s an enormous difference in those two testing regimens. Now, right now, there’s no requirement, there’s no legal duty, but would you recommend that when we write legislation, that we impose some kind of qualifications on the testers and the evaluators so as to have that expertise?
Dario Amodei:
Yes. So Sally, I’m very aligned with that. I mean, I wanna say clearly, like all of us, all the companies, all the researchers are trying our best to figure this out. So, you know, I don’t wanna, don’t want to, don’t want to call out you know, any, any, any, any, any companies here. I think we’re all trying to figure it out together, but I think it is an object lesson in that in testing these models, you know, you can do something that you might think is a very reliable way of soliciting bad, bad behavior from the models or you know, a test that you think is truthful. And you know, you can, you can find out later that that really wasn’t the case. Even if you had all the, all the good intent in the world, in the case of bio, the key was, you know, to have world experts and to zero in on a few things in other areas, the key, the key might be different.
And so I think the most important thing, maybe not so much the static requirements, although I, although I, you know, I would certainly endorse, you know, the level of expertise has to be very high. But, making the process have some, some living element to it so that it can be adjusted. We used to think that this test was okay, this test was not okay. You know, just, just imagine we’re a few years after, you know, the invention of flying, and we’re like, we’re looking at these big machines, and we’re like, well, how do we know if this thing’s gonna crash? Right now we know very little, somehow we need to design the regulatory architecture so that we can, we can get to the point where if we learn new things about what makes planes safe and what makes planes crash, they get kind of automatically hooked into whatever architecture we’ve built. I don’t know the best way to do that, but I think that should be the goal.
Sen. Richard Blumenthal (D-CT):
Well, you know, that’s a very timely analogy because a lot of the, the military aircraft we’re building now basically fly on computers and the pilot is in the planes right now, but we’re moving toward such sophisticated and complicated aircraft, which I know a little bit about because I’m on the Armed Service committee, that you know, they’re a lot smarter than pilots in some of the flying they can do. But at the same time, they are certainly red team to avoid misdirection and, and mistakes. And the kinds of specifics that you just mentioned are where the rubber hits the road. These kinds of specifics are where the legislation will be very important. President Biden has enlisted or elicited commitments to security, safety, transparency announced on Friday as an important step forward. But this red teaming is an example of how voluntary non-specific commitments are insufficient.
The advantages are in the details, not just the devil. The details are tremendously important. And when it comes to economic pressures, companies can cut corners. Again, the gold rush. These decisions have real economic consequences. I wanna just in the last, maybe the last question I have on the issue of open source you each raise the security and safety risk of AI models that are open source or are a leak to the public, the danger, there are some advantages to having open source as well. It’s a complicated issue. I appreciate that open source can be an extraordinary resource, but even in the short time that we’ve had some AI tools and they’ve been available, they have been abused. For example, I’m aware that a group of people took stable diffusion and created a version for the express purpose of creating non-consensual sexual material.
So on the one hand, access to AI data is a good thing for research, but on the other hand the same open models can create risks just because they are open. Senator Hawley and I as an example of our cooperation road to meta about an AI model that they released to the public, you are familiar with it, I’m sure Lama they put the first version of LAMA out there with not much consideration of risk. And it was leaked, or it was somehow made known. The second version had more documentation of its safety work. But it seems like meta or Facebook’s business decisions may have been driving its agenda. So let me ask you about that phenomenon. I think you have commented on it Dr. Bengio. So let me talk to you first.
Yoshua Bengio:
Yes. I think it’s really important because when we put open source out there for something that could be dangerous, which is a tiny minority of all the code that’s open source, essentially we’re opening the door to all the bad actors. And as these systems become more capable, bad actors don’t need to have very strong expertise, whether it’s in bio weapons or cybersecurity in order to take advantage of systems like this. And they don’t even need to have huge amounts of compute either to take advantage of systems like this. Now, I believe that the different companies that committed to these measures last week probably have a different interpretation of what is a dangerous system. And I think it’s really important that the government comes up with some definition, which is gonna keep moving, but make sure that future releases are gonna be very carefully evaluated for that potential before they’re released. I’ve been a staunch advocate of open source for all my scientific career. Open source is great for scientific progress, but as Jeff Hint and my colleague was saying, if nuclear bombs were software, would you, you know, allow open source of nuclear bombs? Right?
Sen. Richard Blumenthal (D-CT):
And I think the comparison is apt. You know, I’ve been reading the most recent biography of Robert Oppenheimer, and every time I think about AI, the specter of quantum physics, nuclear bombs, but also atomic energy, both peaceful and military purposes is inescapable.
Yoshua Bengio:
So I have another thing to add on, on open source. Some of it is coming from companies like Meta, but there’s also a lot of open source coming out of universities. Now, usually these universities don’t have the means of training the kind of large systems that we’re saying in industry, but the code could be then, you know, used by a rich bad actor and turned into something dangerous. So I believe that we need ethics review boards in universities for AI, just like we have for biology and medicine right now. There’s no such thing. Yeah, I mean, there are ethics in principle. They could do that, but they’re not set up for that. They don’t have the expertise, they don’t have the kind of protocols we need to move into a culture where universities across the world, but, you know, in, in, in developed nations in particular, adopt these ethics reviews with the same principles, what we’re doing for other sciences where there is dangerous output. But in the case of AI.,
Dario Amodei:
Yeah, I strongly share Professor Bengio’s view here. I wanna make sure I’m kind of precise in my views. ’cause I think there’s, you know, there’s, there is nuance to it. You know, in line with Professor Bengio, I think in most scientific fields, open source is a good thing. It accelerates progress. And I think even within ai, there’s room for models on the smaller and medium side. I don’t think anyone thinks those models are seriously dangerous. They have, they have some risks, but the benefits may outweigh the costs. And, and I think to be fair, even up to the level of open source models that have been released so far, the risks are relatively limited. So construed very narrowly, I’m not sure I have an objection, but I’m very concerned about where things are going. If we talk about two to three years for the frontier models, for the bio risks, and probably less than that for things like misinformation, we’re, we’re there now.
I think the path that things are going in terms of the scaling of, of open source models, I think it’s going down a very dangerous path. And if the, if again, if the path continues, I think we could get to a very dangerous place. I think it’s worth saying some things on open source models that are clear to all the experts, but I want to make sure is, is understood by, by this committee, which is when, when you control a model and you’re deploying it, you have the ability to moderate usage. It might be misused at one point, but then you can alter the model. You can revoke a user’s access, you can change what the model is willing to do when a model is released in an uncontrolled manner, there’s no ability to do that. It’s entirely out of your hands.
And so I think that should be attended to carefully. There may be ways to release models open source so that it’s harder to circumvent the guardrails, but that’s a much harder problem and we should, we should confront the advocates of this with, with, with that problem and challenge them to solve it. Finally, I’d say open source is a little bit of a misnomer here, right? Open source normally refers to, you know, smaller developers who are iterating quickly. And I think that’s a good thing. But I think here we’re talking about something a little bit different, which is a more uncontrolled release of larger models by, you know, again, your point, Senate Senator Hawley, like much larger entities that pay tens or even hundreds of million of dollars to, to train them. I think we should think of that in a little bit of a different category and their obligations in a little bit of a different category.
Stuart Russell:
So I just like to add a couple of points. I agree with everything the other witness had said. So one issue is being able to trace the provenance of from the, the output that is problematic through to which model was used to create it through to where did that model come from. And a second point is about liability. And it’s not completely clear where exactly the liability should lie, but if, to continue the nuclear analogy, if, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets and someone decided to take that in rich uranium and buy several pounds of it and make a bomb, wouldn’t we say that some liability should reside with the company that decided to sell the enrich uranium? They could put a vice on it saying, do not use more than, you know, three ounces of this in one place or something. But no one’s gonna say that that absolves them from liability. So so, so I think those two are really important and the open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse.
Sen. Richard Blumenthal (D-CT):
I I want to invite any of you who have closing comments or thoughts that you haven’t had an opportunity to express. So
Yoshua Bengio:
I would like to add a point about international or multilateral collaboration on these things and how it’s related to having maybe a single agency here in the United States. If, if there are 10 different agencies trying to regulate AI in its various forms that could be useful. But as Stuart Russell was saying, this is, this is gonna be very big in terms of this, what it, you know, the space it takes in the economy, but also we need to have a single voice that, that coordinates with, with the other countries. And having one agency that does that is going to be very important. Also, we need an agency in the first place because we can’t predict, we can’t put in a law every protection that is needed. Every regulation that is needed. We don’t know yet what the regulation should be in one year, two years, three years from now. So we need to build something that’s gonna be very agile. And I know it’s difficult for governments to do that. Maybe we can do research to improve on that front agility and, and, and during the right thing. But having an agency is at least a tool towards that goal.
Sen. Richard Blumenthal (D-CT):
I would just close by saying that is exactly why we’re here today to develop an entity or a body that will be agile, nimble, and fast, because we have no time to waste. I don’t know who the Prometheus is on AIi, but I know we have a lot of work to make sure that the fire here is used productively and there are enormously productive uses. We haven’t really talked about them much whether it is curing cancer, treating diseases, some of them mundane by screening x-rays, by developing new technology that can help stop climate change. There are a vast variety of potentially productive uses and it should be done with American workers. I think very much in agreement here. And the last point I would make on, on agreement, what you’ve seen here is not all that common, which is bipartisan unanimity, that we need guidance from the federal government.
We can’t depend on private industry, we can’t depend on academia. The federal government has a role that is not only reactive and regulatory, it is also proactive in investing in research and development of the tools that are needed to make this fire work for all of us. So I wanna thank every one of you for being here today. We look forward to continuing this conversation with you. Our record is gonna remain open for two weeks in case any of my colleagues have written questions for you. I may have some too. If you have additional thoughts, feel free to submit them. I’ve read a number of your writings and I’m sure I will continue reading them and look forward to talking again. With that, this hearing is adjourned.