
AI Safety Summit

Ronan Murphy
So good afternoon, everybody. Thank you very much for joining this call. This is a press briefing hosted by the Center for European Policy Analysis, and we’re going to be discussing issues in and around the upcoming AI Safety Summit: what might come from it and what might not. And we’re taking views from both sides of the Atlantic today. We’re joined by senior fellows from CEPA. In Brussels, Bill Echikson, who has many years of experience in technology, both on the inside and reporting from the outside, and who is also the editor of Bandwidth, our online journal on technology policy. And in Washington, DC, we’ve got CEPA Senior Fellow Heather West, who likewise has many years of experience working within the technology business, in consulting, and in the legal field, and who is an expert in AI policy and cybersecurity. We’re very glad to have both of you with us here today. Just to speak quickly to basic housekeeping: we are on the record, and we will be providing a recording and a transcript after the call. My colleagues from CEPA, including Michael, who you can probably see in the top corner, are on the call and will fill in any of the detail regarding that at the end. So please, you can ask questions at any stage. In terms of the format, I’ll throw a couple of opening questions to our guest speakers, but I actively encourage you to add questions via the chat. If you do, I’ll probably ask you to read them out yourself, for the record, but if you’re not in a position to do so, I’ll read them out on your behalf. That’s no problem. And with that in mind, I think we can get going. AI is a topic you can’t avoid, and for lots of people in this business it’s brought up every day, everywhere. Why we’re having this call is that there is a summit next week in San Francisco, the AI Safety Summit, stemming from efforts started by the UK to try and put some international framework around an approach to AI safety specifically, and to establish a base that everyone can agree on. So I might ask Heather to set the scene as to what we might expect from the gathering and what might be good, since those aren’t necessarily the same thing.

Heather West
Sure. So this is, I believe, the first meeting of this international network of AI Safety Institutes, and the convening is really intended to create an environment for global collaboration, which is wonderful. They’re very specifically bringing together technical experts, folks who really know how to do the work and understand the technology, which gives us an opportunity to move the work forward. Looking towards that very technical, measurement- and science-based set of rules of the road for AI safety, the way it was laid out in the Seoul Statement, should be really interesting. They’ve got a number of countries joining, and I know they’re thinking about other stakeholders as well. But what I’m expecting to see is that collaborative framework moving forward: looking at which areas of work they want to undertake first, where they can already find agreement, where there looks to be tractable work, and what things they might save for later.

Ronan Murphy
And Bill, just from the European perspective, this is maybe not the hot topic in Brussels right now, there is a lot happening elsewhere, but maybe set the scene on what we feel might come from Brussels, what they might contribute or not at this point in these discussions.

Bill Echikson
Yeah, I mean, fewer people here in Brussels are talking about the AI Safety Summit in San Francisco, which is half a world away. I think Europeans have long felt that they were, and are, the leaders in pursuing regulation to ensure AI safety. They passed an AI Act, the first binding legal restrictions on AI, and at this point I think they’re having second thoughts about it, because the talk here is mostly focused on competitiveness. Already, people are saying that the Europeans have missed the AI revolution and that they’re too far behind to catch up with the US or Chinese lead. So in the recent hearings that we had for the new commissioners, there was really a sense that they were going to roll back, or at least reduce, some of the restrictions, particularly for startups, and that they’ll be talking a little less about safety and restrictions on AI and a little more about innovation and encouraging AI in the coming years.

Ronan Murphy
Yeah, and I think maybe the Terminator vision of what’s going to happen with AI has receded somewhat. You mentioned that Europe might be perceived as falling behind, and you mentioned China. Maybe you can speak to that for a second, Heather. China’s engagement here is limited, and invitations are probably limited. Is that going to undermine the entire process, or is a smaller group a good starting point because China just can’t be trusted? Is that the thinking from the US and EU perspective? How do you think that’s going to shake out?

Heather West
I think there’s some hesitancy. But I actually think that starting with a smaller group of aligned countries makes a lot of sense. It’s really hard to come to broad consensus when everyone’s starting from drastically different places and drastically different understandings of what AI security or AI safety might mean. Starting small and then growing that group makes a lot of sense to me. Finding alignment before you go out into the bigger, broader world, and keeping that invitation list a little smaller, is probably a pretty productive strategy.

Ronan Murphy
Okay, so, yeah, I mean, we’ve seen another [unintelligible] and so on. Sorry, Bill, I don’t know if you have anything you want to add there? I heard a voice, it might have been yours.

Bill Echikson
No, no, I’m fine about it. Just to follow up, I think the fears have really been reduced a little bit. This became a very political issue in Europe when OpenAI released the chatbot and people said, wow, this could really do something. And now some of the fears have been reduced, and some of the worries are more about, hey, we need this too, we really have to catch up.

Ronan Murphy
Yeah.

Heather West
And there’s value in having a group of countries who are really looking to ensure innovation within their framework and their context here. They may be thinking about competition with China, and I don’t want to say that I know exactly what they’re thinking, but I am sure that is part of the conversation, and they’re trying to make sure that we have a shared framework to move AI innovation forward quickly.

Ronan Murphy
Okay, so I’d encourage anyone on the call who has a question to raise a hand, please do so. I know we’ve allocated an hour of time, but that’s not a target, so we’re happy to move things quicker than that. There was political momentum, perhaps, at the start, there was an interest, because ChatGPT seemed to come out of the blue for a lot of people, like you mentioned, Bill. Has the political appetite dissipated? Are they happy to hand over? Like you’ve mentioned, Heather, this is a technical meeting, it’s at a technical level. Do you think we have moved to that point now, and is that probably good news? Quickly, and I know we [unintelligible].

Bill Echikson
I can answer that. You know, originally, when the AI Act was proposed here in Europe, it was a technical discussion, it was pretty benign. It was really a little more transparency and very narrow restrictions, and then that changed as it became such a front-page issue and AI began to dominate everyone’s discussions. Maybe as it recedes and the emphasis changes, the law is there, and how it will be implemented is what’s preoccupying European policymakers. How they put in rules, for example, on copyright is going to be a big issue, because they have very restrictive rules here in Europe. But again, they’re going to try, I think, at least to weaken or dilute the law enough so that it allows innovation.

Ronan Murphy
Okay. Matt, you’ve got your hand up, if you want to ask your question, please.

Matt O’Brien
Thanks. I think Rick has a similar question, but the Biden administration is kind of hosting this event, and President-elect Donald Trump has said he wants to repeal Biden’s signature AI policy. What does that mean for all of the other countries that are attending? Do you think they’re going to be looking for some signal as to whether there are going to be changes? Do you think that will change how countries are collaborating on this stuff?

Heather West
I think this is a good demonstration of why I’m excited that it’s a very technical group coming together. The technical pieces aren’t going to change that much with a change of administration. But it is also important to remember that the Biden AI EO is one step in a long process. We shouldn’t forget that the 2019 Trump administration AI EO focused on safety, security, and, I believe, resiliency. So the themes are very similar, and there’s no reason to believe that we’ll be doing a 180 when it comes to the work of the AI Safety Institute. Although, I don’t think anyone knows exactly what the change in administration means for the AI Safety Institute and this work.

Bill Echikson
Yeah, and from Europe, this is not the front-line issue that Europeans are worried about right now. It’s more about the threat of tariffs, particularly tariffs on German cars, for example, and whether they have to retaliate or whether they can defuse that looming trade war; that’s at the top of the economic agenda. On AI, the Europeans and Americans share more than we would have expected, though there was more of a political consensus here that regulation was required. It’s done and dusted. And I don’t think this will be the focus of any sort of transatlantic, Trump-infused debate.

Ronan Murphy
Okay. Rick, I think we were speaking to your question there. I don’t know if there’s anything you want to add. Do you hear us, Rick?

Heather West
I think I can add one additional piece here. When we’re thinking very specifically about the future of the AI Safety Institute, the US AI Safety Institute, it’s useful to remember that there’s very broad industry support for its work. I believe there are several hundred members of the AI Safety Institute Consortium. In my work, I talk with some of those companies on a regular basis, and I hear a lot of support. So I think any new administration, as they evaluate what they want to do on AI policy, is going to have to think about that.

Ronan Murphy
Okay, Rick, I think you muted yourself instead of unmuting.

Rick Weber
Can you hear me?

Ronan Murphy
I can now, yeah.

Rick Weber
Okay, yes. So I guess most of my question was answered, but I’m still wondering: if the EO is repealed, does everybody expect that the AI Institute in the United States and in other countries will continue to have the support that it has now?

Ronan Murphy
Go on, Heather.

Heather West
Yes. So certainly, changing or repealing the Biden AI EO wouldn’t impact the work of other countries. And the AI Safety Institute isn’t mentioned in the Biden AI EO; it is an outgrowth of the direction they were taking. We don’t know, no one knows, exactly what this stated repeal of the Biden AI EO might look like, but what I’m hearing is that it’s much more likely that there will be pieces of it that are drawn back, and potentially some replacement language, but that the overall direction may not change. The emphasis is likely to change.

Rick Weber
Okay, can everybody still hear me?

Ronan Murphy
We can. Yeah, go ahead.

Rick Weber
Okay, so then a follow-up question in terms of the outcome of the meeting next week. It’s focused on the technical, so what are you expecting the product that comes out of the meeting next week will be?

Heather West
I hope that there’s a work plan.

Ronan Murphy
So we had a declaration from [unintelligible], I think, at the start, but that was at a political level. So yeah, a work plan would be interesting. If I might, and it’s related to what Rick just asked: is open source versus not open source going to be discussed? Is that the level they’ll get to, or is that too commercially sensitive for this group, do you think?

Heather West
I don’t know how explicitly it will be mentioned, but it will certainly be part of discussions, given the predominance of various kinds of delivery mechanisms and business models behind AI systems. And there’s a lot of discussion about what the definition of open source is. I don’t think that will necessarily be a topic of conversation, but it would be a mistake for the AI safety institutes to ignore that there is a diversity of models behind these AI systems to pay attention to as they do their work.

Ronan Murphy
Yeah. And I think, Bill, we’ve heard from people in Brussels and elsewhere in Europe that the regulation, the AI Act as it stands, cannot get in the way of doing business. And as you said, they’ve rolled back a little, and it seems unlikely the European Union is going to be advocating for anything that’s going to contradict that view.

Bill Echikson
Yeah, I think this might be one of the divergences between the US and Europe. On the one hand, the US has managed to restrict the sale of hardware required for AI to China, forcing the Netherlands’ ASML not to sell its most advanced machines to China. But I haven’t heard any discussion here, like I have in the US, about restricting the export or diffusion of AI software to China. I think that’s a step too far for most Europeans and wouldn’t get much support, especially if they felt they were being pressured to prevent collaboration with the Chinese over some of this AI technology.

Ronan Murphy
Yeah, and for those interested, we do have a couple of good pieces on cepa.org, which Bill alluded to, on export controls and their impact on LLMs and AI software. And I’ll ask the group: is there anyone else who would like to put their hand up to ask anything specific? Craig, I think you’re a recent addition, so please go ahead. In the meantime, are there any concrete projects this group could focus on, things they could deliver that make sense to the rest of the world? Maybe, Heather, that’s the best way of putting it.

Heather West
Sure. One of the things that I hope the AI Safety Institutes pay attention to is where they can find large consensus already. One thing that I think most of these countries, if not all, generally agree on is the importance of cybersecurity, how to secure software systems, and some of those best practices. They have the opportunity to really translate that into the AI context, into that ecosystem, and to talk about where this might be different or where there might be gaps. That’s something that’s really achievable and concrete and measurable, such that they could make some really immediate progress over the next six months.

Ronan Murphy
Okay, so something concrete could come from this, hopefully. Okay. Ruth.


Ruth Reader
Hi, I’m curious, and maybe I’m getting ahead of reasonable progress, but AI is such a broad technology that, in the context of safety, it’s really going to require some industry-specific safeguards, I guess. Is that a part of discussions already? Do we expect it to be a part of discussions? Any insight there?

Heather West
I think that’s already a part of discussions, and there’s discussion about whether you can create high-level frameworks and high-level best practices that apply across the board. But I think there’s also broad agreement that applying them contextually, within a particular sector or particular use case, will always require additional translation, just because the use of these AI systems in healthcare and finance is different than, you know, me playing around with a chatbot. Those have very different safety implications. But I do think there’s already discussion about how you measure those safety implications and how you measure those risks, and I’m really excited to see what comes of it.

Bill Echikson
Yeah, and in Europe, definitely, that was one of the key debates in the AI Act: what should be seen as dangerous and therefore “high risk,” I think they called it in the act. It started out as a very narrow list, basically about job hunting and maybe health, and then, as they got scared and frightened, it was expanded, and now I think they’re trying to narrow it again a little. So it definitely was at the heart of the debate: how do you define which parts of AI or which machine learning are in scope, and in scope in what way, with more or fewer restrictions?

Ruth Reader
Just to follow up really quickly, can you both talk a little bit about the metrics? What kind of metrics are we talking about so far in terms of how we assess safety?

Heather West
Bill do you want to go first? Or should I?

Bill Echikson
I think you’ll have to take that one; I’m not sure I have the detailed knowledge of the AI Act on metrics. I can say, in general, one of the criticisms I’m hearing more and more in Europe is that they weren’t doing impact assessments, that they didn’t have metrics when they went ahead with certain regulations here, especially a lot of the digital regulations. And I think one of the things that has maybe changed is that there’s a demand among Europeans themselves, the governments, that the European Commission just can’t move forward without much more study of the potential impact. These impact assessments have been sort of box-checked, you know, not really reviewed independently, just reviewed by the Commission itself. And I’ve heard a lot of talk about changing that going forward. Heather?

Heather West
Metrics are hard, right? But in the US, that’s what NIST is really, really good at, and they’re, of course, running the AI Safety Institute. I think they’re going to be looking at some of the things that you can measure, starting with the things you can measure easily. Take accuracy: in a particular use case that requires accuracy, how do you measure it when you’re talking about potentially generative AI? Whether that’s summarization or transcription, there are a lot of things that might be in scope there. I think we’re going to start to see better metrics around resiliency and robustness. Those are complicated measures, and I’m not sure we’re ever going to have a really simple algorithm for them, but I think we’re going to have much better measures for that evaluation. And to Bill’s point, I think we’re going to see a much broader and better understood process around things like AI impact assessments: here are the things you should think through, here is the process to do that risk evaluation. Even if we can’t come up with the numbers, and we don’t necessarily have a concrete, numerical scoring metric, we probably will have more flexible ways to measure and understand and communicate the risk of a given system.

Bill Echikson
And just to add that Europe’s AI Act insists on certain transparency requirements, handing over data to be independently reviewed by this AI Office. The AI Office was just started, like, a month ago, and I think they have 100 employees; they’re going to go up to 200, according to the new commissioner. But it’s too early: they don’t have the data to make any of those detailed impact assessments right now, or even to understand the metrics that you’re talking about. Maybe they’ll get there, maybe Europe will get it right. Now, the other thing I should say is that a lot of the AI products aren’t being rolled out in Europe because of uncertainty about this. We wrote an article about it. If you look, all the major US AI companies have either not rolled out at all or slowed down their rollout. It’s not just the AI Act, it’s also the privacy rules and the antitrust rules, a combination that has made regulatory uncertainty their watchword. Eventually, I think they will roll out, it’s too big a market, and I think the risks aren’t as high as they’re maybe saying, there could be some political points in that, but we are seeing a slower rollout of AI here. I once did the same Google search in Europe and in Washington at the same time, and the Washington one came back with an AI result, while here in Europe it was just ten links. It was really much worse.

Ronan Murphy
Yeah, so it’s already having an impact. And, like you say, Bill, it isn’t necessarily even the AI Act; the GDPR has definitely had an impact on

Bill Echikson
I mean, I’ve been asking questions left and right about which parts of the regulations are causing this refusal to launch the products in Europe. And, you know, the answer I get is: it’s a combination.

Ronan Murphy
Yeah. And Michael has helpfully shared links to several of those articles in the chat, for those who are interested and would like to read a bit more. Ruth, is there anything else from your perspective? What metrics did you have in mind?

Ruth Reader
Sure. Well, I’m a health and technology reporter, so I’m really looking a lot at how the US is handling AI in the healthcare space. And it’s interesting, you’re seeing a lot of industry-led efforts to try and handle AI. Obviously, HHS has put out various guidance around AI’s use in medicine, depending on the context. But we still don’t have a lot of metrics; metrics are a big conversation, about how you even understand these technologies, and also about the context in which they’re tested. Medicine is unique in a certain way, because a big piece of the conversation people are having is, where do we even test it, right? And a lot of the feeling is that you kind of have to test the AI situationally, and that’s complicated. So anyway, that’s why I was asking.

Bill Echikson
Very sensitive. I mean, health data was always considered sort of the high-risk type of thing, I think even in the early drafts. But it was interesting, I was talking to, apparently there are Swedish startups that are working with Swedish health data, building on top of OpenAI’s ChatGPT technology, so you could see this localized as well. And as startups, they would not be subject to many of the restrictions.

Ronan Murphy
Yes, and it will help that it’s an industry that’s accustomed to regulation and expects regulation. So maybe there’s a natural link: well, tell us the rules, we won’t do anything without the rules. I don’t know, maybe there’s a culture of that, and that will play a role. Sorry, Heather, I don’t know if you were coming in.

Heather West
No, no. I was just going to say I think there are ways they can kind of up-level the situational metrics a little bit. Healthcare is a fantastic example, actually. You can talk about levels of accuracy and precision; in some contexts you’re going to need a low level of accuracy and precision, and in many contexts in the healthcare world you’re going to need very high ones. You can probably put some initial scores against a given model or tool, but you’re still going to have to evaluate them in context for that final assessment.

Ronan Murphy
Yeah, and a question that comes up sometimes: you mentioned, Bill, Swedish health data, which is probably in a very good state, if you will, and probably well maintained. It’s a heavily digitalized society, and people use the health service in that manner. That’s not necessarily the case elsewhere, and that in and of itself is a risk. If the data isn’t ready for use, then you can’t plug it into said model and expect results that matter, or that are accurate or helpful or safe, potentially.

Bill Echikson
I think in the case of Sweden, or here, there is a centralized data system, the national health services, but local companies have more access. So they can train their models on the local data, where they have more of it than the international, usually American, companies.

Ronan Murphy
Which sounds an awful lot like the dreaded sovereignty word: a local provider can access the local data.

Bill Echikson
Yes, but I think this is more like they have a competitive advantage because they are local.

Ronan Murphy
Yeah, okay, yeah.

Bill Echikson
I don’t think it’s the government requiring or restricting or necessarily, from what I’ve heard, even investing in this, although we are hearing that there will be a new EU act, something like the CHIPS Act, where they will put public funds into AI catch-up. That’s probably one of the first things that will come out of Europe under the new administration there.

Ronan Murphy
Yeah, and you mentioned HHS, Ruth, and we’ve covered the Trump administration coming in. There are a lot of “under new management” signs hanging on doors around the place, and we cannot honestly answer what the impact is going to be yet. And I raised sovereignty, Heather, because it’s a very big topic in Europe: being able to stand on our own two feet, whatever definition you want to put on it. Are sovereignty and safety compatible, even within a small group like this? Can you have both at the same time?

Heather West
Absolutely.

Ronan Murphy
Yeah, okay.

Heather West
I think that you can have, well, let me back up a little bit. There’s been some really great work bringing together the common themes of AI safety across borders and across frameworks and across cultures. And mostly, we agree. We might, at a national level, at a cultural level, put emphasis on different things, but generally it’s the same categories. There’s no reason that everyone needs to do the exact same thing. It should fit their country, their companies, their culture, but, like, 95% of it is probably going to be the same. So there is no reason that we all have to be identical. However, I think it is really important that we are all compatible.

Ronan Murphy
Yeah, okay. So yeah, interoperability at some level.

Heather West
It’s a very connected world we’re in.

Ronan Murphy
Yeah, okay. So again, I’d encourage anyone else with any specific questions to raise them now. If not, I’m happy to wrap up. If there are any closing remarks Bill or Heather would like to add, that’s great, but if not, Michael, maybe there’s something you want to say on the logistics here, or the dissemination of today’s call. Before you do, I’ll just say thanks to everyone for joining, and particularly thanks to Heather and Bill for your contributions. Very much appreciated.

Michael Newton
Thanks very much, Ronan, and thank you, Heather, thank you, Bill, and thank you, everyone, for joining the conversation. We will be emailing out a link to the recording of this video, and we’ll also have a rough, auto-generated transcript. This will be followed by an official transcript that will go on our website in the coming days. If you have any other inquiries, please feel free to email press@cepa.org, and we’ll be able to connect you with Heather or Bill if you’ve got follow-up questions, or with any of our other experts. Thanks very much, and have a great day. Bye.

