BBC: “It’s a question of enforcement.”
Countries around the world are working to implement legislation regulating the development and use of artificial intelligence (AI). This week at the AI Seoul Summit, AI technology companies from across the globe, including the US, China, the Middle East and Europe, agreed a set of commitments on thresholds for severe AI risks, including the risk of AI being used to build biological and chemical weapons. The European Union's AI Act, a more comprehensive standard than any US regulation to date, will come into effect next month. Subscribe here: http://bit.ly/1rbfUog For more news, analysis and features visit: www.bbc.com/news
TRANSCRIPT (YouTube)
You are watching The Context. It is time for our regular weekly segment, AI Decoded.

Welcome to AI Decoded. If you were with us last week, and I must encourage you to look at our previous episodes on YouTube, then you will have heard us talk about the huge advances at OpenAI and the launch of ChatGPT-4o. Well, tonight we're going to focus on the one issue that worries us all: who is in control? Where is the balance between innovation and regulation? And it has been quite the week. On Tuesday the European Union got serious, setting out the most comprehensive legislation on AI anywhere in the world. Tonight we'll hear from the woman who played a key role in developing the EU's AI bill, Margrethe Vestager, the European Commissioner for Competition: "You know, this idea that you should not regulate technology, because that develops really, really fast, but you should regulate the use of it, that idea is gaining a lot of traction."

At the AI summit in Seoul this week, 16 of the biggest AI developers signed an international agreement that builds on the commitments secured last year at Bletchley Park. The companies, from China, America and the Middle East, have agreed not to develop or deploy AI models that cannot keep risk below a certain threshold. EU legislation, a new international agreement: but where is the overlap with China and the United States, two of the biggest developers, and where do the big powers diverge? We'll get the thoughts tonight of Miles Taylor, who is advising the US Congress. With me here in the studio, our regular AI contributor and author on technology, Stephanie Hare. Hello. Hello.

Can we talk about the UK and the South Korea summit, the British government setting out this new agreement this week? Rishi Sunak said it's a world first to have so many leading AI companies all agreeing to the same commitments on AI safety. How significant do you think it is?

Well, I guess it just depends. Again, it's better than nothing, which is where we were back when we had the
first AI Safety Summit here in the United Kingdom in November. That said, it's a question of enforcement. Big tech companies love voluntary self-regulation. Why? Because they can't be sued if they don't keep their commitments. There's also the question of who's watching the watchers, who's checking on what they're doing. They constantly like to say that things are intellectual property, that they can't reveal training data. So really, what's happening here? It sounds great, but it might just be a pinky promise.

If these 16 companies are the market leaders, do their safety commitments control the entire industry? I mean, the smaller developers who use the technology on their platforms, are they hemmed in by the protocols that have been agreed this week?

Only in so much as it's enforced. So yes, in terms of tone, you would think that would certainly set the direction of travel. But again, who is putting actual resources, manpower and money, into checking all of this? And we all saw very recently, with OpenAI and the American actress Scarlett Johansson, her voice being used without her consent. Who was checking that? Why did she find out after the AI assistant was revealed, even though she'd already said no? She had to get lawyers to come in to do a cease and desist, and now they're doing discovery to find out what training data was used. Who was checking that beforehand? No one.

Well, listen, among the 16 signatories are Zhipu AI from China and the Technology Innovation Institute from the United Arab Emirates, two signatures from countries that so far have been less than willing to bind their biggest companies to safety regulation. Rishi Sunak says that is the result of the lighter touch the British government has taken. The EU is approaching it in a different way, and this week the leaders of all 27 countries endorsed the new AI legislation. It's much more comprehensive than the light-touch voluntary approach that Stephanie's just been talking about, that the UK and the US have taken,
and one of the key architects of that bill is the EU's competition commissioner, Margrethe Vestager. Yesterday I went to see her in Brussels.

Commissioner, the EU's new rules on artificial intelligence were endorsed by the 27 countries on Tuesday. You say it's the global benchmark for AI regulation. Why do you think it will have an impact beyond the EU's borders?

I think because of the choice of regulating the use of technology, and only having a tight regulatory grip when it's something existential for the individual. That approach is something that we shared very early on with the Americans within the Trade and Technology Council, so they have the same approach. It's also the approach that we see with the Canadians, who are passing AI legislation, hopefully as we speak. When I talk with colleagues from other jurisdictions, you know, this idea that you should not regulate technology, because that develops really, really fast, but you should regulate the use of it, that idea is gaining a lot of traction.

What might surprise people is that this new legislation will not be fully in place until 2026, and we know that AI is growing exponentially. Aren't people bound to ask, what are you doing to address the risk today? And isn't there a danger that Pandora's box is already open by the time this comes into play?

Oh, but Pandora's box is open right now. I don't think one should be fooled about that. So, for instance, we use our Digital Services legislation, the thing that is going to keep digital services safe for us: that they are not addictive, that they do not promote mental health problems, that they cannot be used to undermine elections. Here we also say to the companies providing these services: well, you need to be extra aware if you fuel this with artificial intelligence. You need to be extra aware about fake products, fake videos. Now that we have an election coming up, and we know about the abuse that is going on out there, you need to pay
really close attention, because due to this legislation you already have obligations to protect people from fraud, from things that are illegal in member states. And then of course we have the G7 code of conduct; we have a lot of commitment from businesses. But of course, commitments are a very different thing from having legislation that obliges you to be careful and put safety first in critical situations.

Can you regulate something that people don't fully understand? I get the impression, even from the AI companies themselves, that this thing is producing breakthroughs, developments that they didn't expect themselves.

But I think this is actually quite a modest ask: that, for instance, if you use AI to decide who you would call for a job interview, who can get a mortgage, what kind of treatment patients should have, well, you should have a pretty good idea that it's not because of your postal code or your gender or your political views that you get this treatment, or you get the mortgage, or you get the interview for the job. When you get into that situation, then you would need a human in the loop, and you would ask for explainability as to why you have this outcome instead of the other one.

There's been a lot written in the last week or so about the so-called doomers at OpenAI, the people who wanted to go more slowly, wanted to see how things operated. They've left the board. Are you at all concerned that profit is being put before safety?

Well, I have been working with big tech for 10 years, and it is experience-based that I have a concern that profit is put before other concerns. That could be safety; that can be mental health; that can be, you know, normal competition, that even as a big player you can be challenged. So, knowing the sector, I think it is really important that the public, that the common good, also have a very stern presence in these companies and are facing these
companies with very specific asks in order for this to be safe for us to use. And of course there are the sort of existential risks for humanity when it comes to AI. They are in the future, but not in the too-far future. And I think you can better relate to those if you deal with the existential risks for the individual, because it is an existential risk if you can afford a mortgage but you cannot have it; if you need treatment but you're not given the right treatment, because the algorithm doesn't know that you're a woman, and because of that your symptoms are different from what they were for a man. So I think if we are careful now, when something is at stake for the individual, we'll be much better at preventing things that can undermine how humanity is working.

Margrethe Vestager, talking to me yesterday in Brussels. Two things about that interview that I'd like you to clear up for me. First of all, this idea that you can regulate the technology even though you don't know what's coming. How can you regulate something when even science doesn't know where it's going?

I think what you're doing is setting out the guardrails, if you will, for uses. So let's take something like the fork, which we all know and love. We don't regulate forks. But I could use a fork to eat a salad, or I could potentially use a fork to reach across this table and stab you. We legislate and regulate stabbing you; we don't regulate the fork. Right. Right. So we want to leave all of the innovation routes open, but what we want to do is make sure that you and I both walk around knowing that I can't harm you.

Which brings me to my second question. She's suggesting that there needs to be a symbiotic relationship between legislators and the developers. Who sits in these companies and says, right, bear in mind there is legislation in Europe now? And they've done it pretty fast; I mean, they've been developing this, like you say, since the pandemic, so quicker than anyone else. But who's saying, you
need to look at this law because it is relevant to us?

Yeah, so in most companies it's probably going to be their chief legal counsel, but you will see that some of the most innovative ones will actually have a chief AI officer, just like some have a chief privacy officer. So for any company that is looking to really embed AI into their operations, I think we will start to see a trend emerging: they will have a board-level person, and they will welcome this, because they have something to work to. Regulation actually makes things very clear: what's expected and what is out of scope.

Right. So where does that leave the United States? Coming up after the break, we'll speak to Miles Taylor. He's a US national security expert who previously worked in the Trump administration as chief of staff at Homeland Security. He is now heavily focused on artificial intelligence, advising all the parties in Congress. We'll see if there's any overlap, where these regulatory frameworks diverge, and what that means for our safety.

Well, Stephanie's just been telling us how important it is for these AI officers that sit on boards to work to rules. But there are no comprehensive federal laws that regulate AI in the United States, no federal obligation on developers, users, operators or deployers of AI systems. But late last year, President Biden did sign an executive order that proposed legislation to address safety, responsible development, civil rights, privacy, all the pertinent areas. Now, Commissioner Vestager, who we heard from in the first half of the programme, says there has been discussion between the EU and the United States. But how much, and how aligned are they on the regulation that is coming? Let's bring in Miles Taylor, former chief of staff at the US Department of Homeland Security; he's now a tech and security expert who advises lawmakers on AI. Give us the broad picture, Miles, would you? Where do you think you are in the United States in respect of legislation and regulation?

Well,
you know, Christian, I actually think the United States and Europe are still quite far apart on this subject, and I'm going to give you an example. Right now I'm in Dallas, Texas, at Capital Factory's Health Supernova. This is a big event of AI CEOs and top officials and others talking about the applications of AI to healthcare. Now, there are folks here from European companies who are used to the conversation about this technology being more heavily regulated, and who in conversations today have made the case that that regulation is important for protecting life and safety. But you have a lot of American entrepreneurs here who are talking to government officials and saying the contrary: they feel that overly regulating AI will, in their view, keep them from moving forward with life-saving AI breakthroughs that could impact healthcare. So, very different perceptions. Now, how does that impact the legislative discussion? Well, you see it right now in Congress. The US Congress has been very, very slow to do any global, broad regulation on artificial intelligence, and instead senators have only recently released a framework for how to regulate AI, a framework that's very decentralized when compared to the European Union, a framework that allows individual regulators that already oversee different sectors of society to regulate AI, rather than doing it as a federal government. So big, big differences remain. However, I do think that there will be alignment between the EU and the US when it comes to the most sophisticated AI models, those that some might claim could end up being sentient in the coming years.

Miles, I'm really curious to hear what you think about whether or not we're going to see states taking a different approach versus the federal level, because states can often move faster than the US federal government.

Yeah, it's a great question. In fact, we are already seeing that. So even though the federal government of the United States is moving slowly, a number
of states are moving very, very quickly with big AI regulatory schemes. In fact, right now, at this very moment, there's a lot of controversy in the United States about what's happening in California. There is legislation proposed in California that would create a very broad regulatory regime for artificial intelligence, one that some of the bigger companies might be comfortable with. But you're seeing smaller companies, what we call little tech, come forward and say they're very concerned about California's regulations, because they might not be able to comply with them. They don't have the man-hours, the time, the staff to be able to comply with the very, very detailed and prescriptive bill that came out. But the last note there would be: when you have 50 states, and all 50 of them have slightly different requirements, it gets incredibly complicated. So as that starts to happen, you will see calls from tech companies for the federal government to step in in a bigger way, to try to override those laws so they don't have to deal with a conflicting patchwork of AI regulation.

Well, patchwork is the word. I mean, I was just going to say, you know, we're talking about an agreement in Seoul, we're talking about EU regulation, state by state, sector by sector, as you just explained. It's very fragmented. And we've sat on a panel before, Miles, where someone has talked about an IAEA-style agreement among all countries, where we sort of have an international agreement that brings all this regulation together, and someone that sits on the board and can sort of regulate around the world. Why are we not going in that direction?

Well, you know, I think, Christian, right now it's because we're still in the Wild West of understanding what this technology will and will not do, and in the meantime builders are moving forward with building. I mean, I'll tell you, just a floor below me, where a lot of these startup founders are meeting and talking about their AI technologies, I would suspect that
most of them have no idea what's happening in the regulatory conversations in the US or the EU. They're just head down, building products. Now, the problem is going to come when some of those amazing founders go to try to introduce those products into the marketplace, to go benefit people. Well, they may be able to sell in one state and not another; then they'll run into this regulation and that regulation. And as these consumer AI products start to come to market, those builders, those investors, they'll start to feel the friction.

And we're starting to see that, Stephanie, in social media, with Apple and Google, who are running into regulations here in the EU.

Yes, and I guess my question for Miles, since we've got you here, is to what extent is the US reluctant to regulate AI because they're worried about China really taking the lead?

It has been the most important part of the conversation behind the scenes. Whether it's a Democrat or a Republican member of Congress right now, the fear that the United States could fall behind China is what's keeping legislators who normally favour regulation away from grabbing the regulatory lever. They are concerned that if the United States puts a complicated regulatory architecture in place, Beijing will speed ahead of Washington when it comes to this topic of innovation. It's been a very dominant part of the conversation behind the scenes, and I would suspect those fears of China keep the United States from doing anything too dramatic when it comes to AI rules.

Really fascinating. Keep your comments coming in on some of the things that we're discussing. I'm going to lighten the mood a little. Do you like Pink Floyd, Miles? I love Pink Floyd. Right, well, why wouldn't you? Anyone who knows their music will know that it's 50 years ago this March that Pink Floyd released The Dark Side of the Moon, further evidence that all good things were born in 1973. And to mark that anniversary, the band has invited a new generation of
animators to create music videos for any of the album's 10 songs. One of those who responded to that challenge is the Finnish musician and AI creator Antti Karaja, who has been combining the music with some of the most dystopian ideas that we're facing. Take a look at this. It is set to Shine On You Crazy Diamond.

It's quite psychedelic, isn't it? I sat and watched this for some time this afternoon. It's a bit trippy. Let's bring in Antti, who developed it. It's fantastic, honestly, and it's had nearly a million views on YouTube, which tells you how good it is. I'm interested in what you were feeding into this. What sort of prompts did you give it? How much of this is you, and how much of this is the AI's vision of what our future looks like?

You know what, I usually try to let the AI kind of take the lead. I mean, I start prompting very simply first. That song, Shine On You Crazy Diamond: because it's a song about Syd Barrett and his mental issues, my idea was to first get a big diamond in a kind of gallery setting, and I start moulding with very simple prompts, one word, a couple of words, and then start building on those. And then I start printing things that I like, and I start feeding them back to the machine.

It's amazing. What really strikes me about this is, for you as a musician first and foremost, if you'd tried to make something like this to promote one of your songs, it would have cost you a fortune, but it looks five-star. It's fantastic, isn't it? Does it open up a whole new world for you?

Oh, this technology is amazing. I always wanted to make videos for my own music, and this is great. And, you know, I think the visual thing with the music just works: first of all I hear something, and then I see something, and it kind of joins together perfectly. I don't know if
you can see it, Miles. It's interesting, what you were saying earlier in the programme. You know, Scarlett Johansson, I think Stephanie mentioned it, is suing OpenAI because a voice has appeared that is a little bit like hers, and she says she didn't want them using it. And yet on the flip side you've got Pink Floyd, who were really happy for their music to be used to encourage this kind of creativity. And I just wonder, in the realm of what we've just been talking about, regulation, whether that complicates things.

Well, it does. Look, the intellectual property component of this is what's going to cause a lot of these different issues to come to the fore. And intellectual property, you know, that's a legal challenge as old as legal challenges themselves, as old as the law itself: people claiming someone else stole their work. And so you're going to see that happen a lot when these models are used for creative purposes. But I will say, this is a very impressive product. And there was another one just the other week: the band Washed Out came out with a song called The Hardest Part, and they used Sora, OpenAI's video algorithm, to make the first AI-powered music video. And I'll have to confess, it was quite remarkable and affecting. So I think you're going to see people feel uneasy, but also pretty excited, about this moment.

Yeah, I don't think I want to live in this dystopian world. There's some pretty scary stuff in there, Antti, but it is a fantastic piece. Listen, we're up against the break; we're out of time. Thank you to Stephanie, thank you to Miles and Antti. We'll be here same time next Thursday. Hope you'll join us for that.