How Should We Regulate Rapidly Changing AI Technologies?
Since ChatGPT was released in 2022, significant uncertainty has accompanied the fast-emerging field of artificial intelligence (AI). Maximizing the benefits and avoiding the pitfalls requires global coordination and regulation—but how should this be managed, who is responsible, and can regulation keep pace with technological change? In a new episode of Talking Policy, host Lindsay Shingler talks with Robert Trager, the co-director of the Oxford Martin AI Governance Initiative, International Governance Lead at the Centre for the Governance of AI, and Senior Research Fellow at the Blavatnik School of Government at the University of Oxford, about the risks and potential solutions. This interview was conducted on June 4, 2024. The transcript has been edited for length and clarity.
Subscribe to Talking Policy on Spotify, Apple Podcasts, Soundcloud, or wherever you get your podcasts.
Lindsay: Since ChatGPT was released in 2022, a lot of uncertainty has accompanied the fast-emerging field of artificial intelligence. Although AI has the potential to benefit our world in many ways, it also raises questions about its potential to cause harm. Maximizing the benefits and avoiding the pitfalls requires coordination and regulation, and so the looming question for policymakers is: how can we govern the use of AI?
Here today to provide some insight into the challenges of regulating AI, and the potential impacts on global security, I’m glad to welcome back to the podcast, Robert Trager. Robert is the co-director of the Oxford Martin AI Governance Initiative, International Governance Lead at the Centre for the Governance of AI, and Senior Research Fellow at the Blavatnik School of Government at the University of Oxford. He regularly advises government and industry leaders on the governance of emerging technologies. He previously taught at the University of California, Los Angeles, and has been affiliated with IGCC for many years. Robert, it’s great to be with you. Welcome to Talking Policy.
Rob: Oh, I’m so glad to be here and glad to see a hint of California sunshine there.
Lindsay: Yeah, just barely coming in the window. Yeah, that’s right. That’s right. So we hear that AI will help cure disease, that it will raise living standards, that it will do all manner of good things potentially in the world. We also hear that it will decimate jobs, that it will spread disinformation, that it will potentially cause the extinction of human beings.
So it’s this thing that has the potential for massive impact, both negative and positive. And I’m still trying to get my head around what exactly AI is. Can you start by doing a little scene-setting for us? Help us get a handle on what this emerging technology is. What are we talking about when we talk about AI?
Rob: Yeah, what are we talking about when we talk about AI? I mean, people debate about how to define AI and how to think about it. I like to think about it in terms of decision-making. For all of human history, we’ve had maybe some aids to decision-making, but really, most of the decision-making was something that we had to do. And now, we’re getting more and more assistance with that, and we’re getting systems that are really broad in their ability to handle lots of decisions and to interact with us almost the way that other humans interact with us.
We don’t really have that now, but we’re starting to have things that fool us a little bit into thinking that we do have that. And, you know, more and more, we’re probably going to approach that. Nobody really knows what the timeframes are. You know, nobody knows if we’re going to have some agent [An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals] that passes the Turing test for artistic creativity across different domains. Are we going to have that in a year? Maybe. Maybe we won’t have that in ten years. So there’s a lot that we don’t know, but these technologies… we’re able to interact with them in unstructured ways that, just a few years ago, we really weren’t.
If you were to ask lots of people in the field of developing AI, a few years ago, where we would be today, most of those people would have been wrong. And they were surveyed, so we sort of know that on average people thought that we’d be quite a bit behind where we are today in terms of the capabilities of AI systems. So the people who were in the field, a few years ago, they were off in their predictions, and we’ve moved more quickly in terms of the ability of language models to interact with us and to do a whole range of tasks, including mathematics.
I think it’s probably true that in the next, I don’t know, maybe nine months we’ll have a bunch of agents out there, and probably we’ll be able to say to an agent, could you go do this thing for me? And maybe we’ll have an agent that is a bit competent at doing things. But how competent will it be and how much of an impact will it have? I don’t think we know that yet.
If they do turn out to be really competent, then maybe we’re talking about huge energy demands. Right now, we think about how hard it is to train AI systems [AI model training is the process of feeding curated data to selected algorithms to help the system refine itself to produce accurate responses to queries]. It takes huge amounts of compute to train AI systems, and that has a huge cost in energy and has implications for sustainable energy and the potential for sustainable energy. That’s the scale we’re talking about as these systems are scaled up. But if we have lots of agents out there, then it’s possible that those agents, with everybody using them, are going to take huge amounts of energy also, so it won’t just be the training of systems.
So we don’t know where all this is going, but it seems like these things are having large impacts on society already. But I think one of the things that a lot of people keep in mind is that we’re not just talking about what systems can do today. We’re thinking about what they are going to be able to do nine months from today. Well, we’re not sure, but we have some ideas. What are they going to be able to do a year, two years, three years, five years, ten years down the road?
Lindsay: Yeah, nine months. Boy, that’s such a fast timeframe and just speaks volumes to how quickly these technologies are accelerating.
Well, again, in terms of getting a handle on what AI is, there’s a difference between narrow AI and general AI, right? Can you help us understand what’s the difference, and what are some examples of narrow AI that we would all recognize from our own lives?
Rob: Narrow AI is like particular use cases. So, you might think about AI that is designed to help with farming, let’s say. Maybe you can train a system to optimize some of the inputs that are going into farming and people can interact with that. It can help people potentially all over the world. Any sort of narrow use case, that’s what we think of as narrow AI.
Whereas general-purpose AI would be a system that is general in the sense that you can use it to do many different things. You might say to the system, help me with some farming that I’m doing, and it’d have ideas and give you some advice about how to farm better and crop yields and all sorts of things. And then you might say to the same system, tell me how to create, I don’t know, a cyber weapon. So you have a system that can do all kinds of good things, in terms of improving crop yields, potentially, or giving some good advice. I don’t want to exaggerate what these systems can do now. It’s not that they’ll necessarily give you such great advice. A lot of what they do today is kind of take an average of what’s said on the Internet about things. But if you don’t have access to the Internet, it can be helpful to have this thing that’s kind of summarizing what’s said out there. So it can be helpful to people to do things that are very positive for society and also to do things that can be negative, and that’s the kind of challenge of the general-purpose side of things.
Lindsay: Yeah. So it seems like when we think about the benefits of AI that we can think of in our everyday lives, things like its use in health and medicine. So like image processors that can outperform radiologists in detecting tiny malignancies or even, you know, online shopping. We’re all used to being recommended different things based on our browsing history that kind of point us to, you might like these shoes, or you might want to watch this movie. There’s sort of those things that we can think of as tangible everyday benefits that we get from AI.
But when we think about the more kind of nefarious uses, risks, unintended consequences, it seems like we’re talking more about this general AI. These, you know, large language models that absorb all of this information and then generate new material seemingly out of thin air. But is that the right way to characterize it or am I mischaracterizing it?
Rob: I think a lot of people think of it that way. I think that you’re absolutely right. I think the general-purpose side involves some particular challenges, and some people are really worried about the trajectory of general-purpose systems, and they want to start figuring out how to govern them now.
I think that we probably wouldn’t want to say that all the dangerous stuff is associated with general-purpose AI, as opposed to narrow AI. There’s one example that a lot of people like to talk about in terms of narrow-purpose systems that was quite concerning, and that was a few years ago. This little company called Collaborations Pharmaceuticals was looking for drugs to help people, and they were using AI basically to try to predict whether molecules would be good for clinical purposes. And in order to do that, they were penalizing potential molecules for toxicity.
So, this is all narrow-purpose, and somebody came along to them and said, just as an experiment, since you’re doing all this, where you’re penalizing molecules for toxicity, could you just flip the minus sign and look for toxic molecules using the same technique? And so they tried that, they turned on their systems, and six hours later, they’d found 40,000 proposed toxic molecules, including the most toxic molecule known to man, which is VX.
A tiny little dot of VX on your skin, too small to see, will probably kill you. It persists in the environment for a long time, it’s really horrible stuff, and it can be a gas and do horrible things that way. This system found VX, even though VX hadn’t been put into it, and it found thousands of molecules that it predicted would be orders of magnitude more toxic than VX.
That’s also narrow AI. So there are things like that that people are worried about. Even in that space of narrow AI, there is this kind of dual use. And maybe that’s a really interesting, extreme example of it, because you can see that with a system which is literally designed just to help people, all you have to do is flip one thing. And in some ways, there’s this really interesting feature of it, where the better the system is at doing the useful thing—that is, avoiding toxicity—the better it is at doing the really harmful thing, which is finding toxic molecules. So those two things tend to go together, which is one of the really challenging aspects of governing the technology.
Lindsay: Well, so the issue of control becomes so important. Is one of the concerns, when we talk about protections and regulation of general AI, that there’s some level of autonomy to general AI? Is that what we’re projecting? Is that true now? Does the technology have the capability to act on its own, or is it still wholly dependent on human instruction, input, control… on/off switch?
Rob: Yeah, yeah, that’s a good question. I mean, I guess you would say that the things that we call general-purpose right now don’t have that much in the way of autonomy.
ChatGPT is general, let’s say, but it can’t really do things on its own. On the other hand, you can build things on top of something like ChatGPT that can use this world model that’s embedded in ChatGPT, and yet have some higher degree of autonomy. So I think that’s one of the things that a lot of folks are thinking about now. Because with something like ChatGPT, there’s a huge investment in making it, and it takes this enormous amount of compute and data and lots of clever things in terms of algorithms to make it, and not that many actors in the world can do something like it, or even close to it. Not that many actors in the world have the resources to do that.
But if something like that were available, if what are called the model weights, which are just a piece of software with some numbers in it that are the essence of what the model is, are widely available, then things can be built on top of that. And it’s really hard to know what the abilities of those things are that are built on top of it. So it might be that systems with a world model, which is getting better and better but is highly imperfect at the moment, can be imbued with autonomy.
Lindsay: You’re blowing my mind.
(music)
Okay, so governments, international organizations, and the tech companies have started to try to answer some of these questions about regulation and protections over the past couple of years. Two noteworthy strides forward have been the European Union’s AI Act and President Biden’s executive order, released last fall, which sets guidelines on the safe development and use of AI.
Can you give us a sense of what these instruments do? And at a broad level, how they’re different, how they interact?
Rob: So, the EU AI Act is this attempt to have risk classifications and to say that we need different sorts of mitigations for different levels of risk. The original version of the EU AI Act said that all these different use cases are associated with different sorts of risks. And so there are a few levels of risk, and that was the way that it was all set out. And then there were these debates about, well, you know, which risk classification should different use cases have?
And then along came more general-purpose models, things like ChatGPT. There was really a huge ruckus, if you will, to try to figure out where on the risk classification scale those general-purpose models should go. And many people were arguing for a high risk classification, and other people were saying, well, if you have a high risk classification, then it’s going to involve regulatory measures that are going to really slow down the development of the technology, and they didn’t want to do that. So they made a separate classification that really wasn’t on the scale at all, just for general-purpose models. And then they gave the ability to impose fairly significant fines on companies that violated standards, and they set up an EU AI Office in the European Commission to develop standards and to develop the whole infrastructure of governing these models.
But I would say that a lot of those details, particularly when it comes to general-purpose models, are sort of unknown and in the future. We don’t know yet exactly how they’re going to get transparency into what companies are doing. We don’t know exactly what sort of standards they’re going to have, what sort of requirements they’re going to have. I think all these things are a bit to be determined. So a lot has been done and there’s a lot of ability to do things, but exactly what will be done and how these laws are really going to work in practice, I think is unknown.
And then you have the executive order, which, of course, is not a law. I think the U.S. administration, to its credit, saw all these different concerns that different communities had, and it sort of put them all into this executive order. It talks about the sorts of principles that an AI system should adhere to, and what would constitute fairness, and things the administration had already talked about in terms of a bill of rights find their way into the executive order as requirements at a general level that, a bit like in the European context, are still to be cashed out. And a lot of the work now is in cashing those things out.
But they also were thinking about general-purpose models, and in particular, really large models, models that use a lot of compute to train them, and at least thinking about the sorts of requirements that we would like to put on companies that are training those models and also companies that are providing the compute to train those really large models. So, you know, this idea of a really large model, and sort of using that as a kind of measure of risk, is, I think, something that a lot of people turned to because they didn’t have anything better, frankly. It’s not that only the large models can be associated with risk. It’s that it seemed like, at the moment, the capabilities of systems were reliably associated with being a large model. But the hope is that in the future, we have more targeted ways of saying, well, where are the risks really? Not just saying, well, any model that’s larger than X or something like that.
Lindsay: I think the EU has been considered kind of the front runner of AI regulation, and the U.S. has been a bit more laissez-faire in its approach. I’m curious if you agree with that characterization, and if you think that is reflected in the executive order and the EU AI Act?
Rob: I think that used to be definitely true. And I think the picture is a little bit more muddied now, I would say. When the EU AI Act was being negotiated, over a period of years, it really did look like the only game in town for a while, and there wasn’t any equivalent in the United States.
And that was, of course, even before the executive order that this law was being negotiated in the EU, so I think that characterization was reasonable then. I think maybe I’m enough of a cynic to feel like the European firms, at that point, were a bit behind where the U.S. firms were, and so in some ways, I think, especially when it comes to general-purpose technologies, it was easier to regulate when you’re really talking about firms somewhere else, rather than local firms. And then when some of the European firms suddenly started to look like they were doing pretty well actually, and there were what the Europeans referred to as “national champions”, you know, then there was a little bit of uncertainty. Did they really want to regulate them in such a strict way that might harm innovation in Europe? And so there was a bit of an attempt to water down some of the things that they were doing.
And then around the same time, also in the U.S., there was a growing concern in some corners of the administration. So some of the things that they’re doing with the executive order, and recently proposed rules at the Commerce Department and things like that, I think are also significant and are things that maybe don’t find a correlate in the European context. So it’s a little bit muddier now, but you’ve seen it kind of shift as interests have shifted.
Lindsay: Yeah. There have been many summits, including this past May, when governments and tech companies met in Seoul. And I believe you were in Seoul at those meetings where some companies agreed to make voluntary safety commitments.
What do you think AI governance initiatives have gotten right so far? Like, what are some of the elements that you think have been appropriate, that have good potential? And then what are some of the more difficult areas to address?
Rob: I think, you know, some of the initiatives now have been pretty useful, if we’re thinking about it in the safety frame. Some of these things started out as the AI safety summits, with the Bletchley summit in the UK, and then recently there was the AI summit in Korea—no longer called the safety summit, but it’s part of that same process. I think there’s going to be a French summit now in February of the coming year.
I think they’ve often done a pretty good job at listing the sorts of things that we would want. And at the level of principle, they’ve really been good. You know, they’ve thought about things like the right to be forgotten, which I think is a really interesting thing that maybe we’d all like. We do something online, we’d like it if that could not exist forever. Or content provenance, the ability to know if you’re interacting with an AI, or, if some decision is made that affects your life, the right to have an explanation for that decision. These kinds of things at the level of principle, I think there’s been a lot of thinking on, both at the international level and domestically. For instance, UNESCO was early in talking about a series of principles that was very influential. And I think that’s been very helpful in terms of giving us a direction to head. They’ve thought about things like, should we have auditing for models? Should there be outside auditors that come in and check them in certain ways? And there have been commitments to do that.
There are also the safety institutes, which from a safety point of view are useful. And the companies have committed to make their models available for scrutiny by the safety institutes. They haven’t always done that, however, which maybe gets us to the issue that these commitments are voluntary.
A key thing that they did in Korea is that, basically, all the companies committed, including at least one Chinese company, Zhipu AI, and I think other companies are thinking about it. They committed to have what are called responsible scaling protocols, basically defining what would be kind of red lines that they wouldn’t want to cross without some mitigations. And so all these companies have committed to having this sort of policy, which I think is useful. It’s useful to be thinking about it, it’s useful to realize that all companies should be thinking about it and have a policy like that. That’s well and good, but I think it’s important to note that this is entirely voluntary. So I think it’s a useful process, but I’m not one who thinks that voluntary regulation should be the end of the road.
Lindsay: What are governments most worried about when you’re sitting in meetings in various global capitals? What are people talking about? What are the biggest concerns?
Rob: Well, I mean, different governments are worried about different things. You know, I think a lot of governments across the global majority are worried about being left behind with respect to the technology, and they’re worried about a lack of voice in the development of the technology. They hear that this technology is affecting everyone, that the effects cross borders, and they think, well, how much voice do we have in all these decisions that are being made about the development of the technology?
I think some countries at the frontier of the technology are concerned about, sort of, safety implications. They’re concerned that maybe we’ll have a model in the near future that will help anyone who wants to make some kind of cyber weapon or a bioweapon. So that kind of safety frame is, for a set of countries, particularly important.
I think many countries are worried about being able to just sort of compete, if you will. They’re worried about, for instance, when the U.S. Commerce Department recently suggested that maybe any cloud providers, U.S. cloud providers, should report when their customers are training a large model, even if it’s a foreign customer training it on foreign compute somewhere else. A lot of countries around the world were really upset by this. The French, for instance, released quite a bracing statement. I don’t think it used the word neocolonial, but it wasn’t far off from that kind of sentiment, because I think they feel like it’s almost like spying on what their citizens are doing. Or I guess we should really say what their very large corporate interests are doing.
Lindsay: And do you mean using people’s data to train the models? Is that what you mean?
Rob: Not just using people’s data to train the model. In this case, the reporting requirement that the Commerce Department was talking about was really just reporting when some AI developer was using a ton of compute to train a large model. So they just had to report it. You know, if Mistral in France is training a really large model on whatever data, and that model is being trained on Amazon systems, Amazon would have to report that to the U.S. government.
So actually it’s interesting, because when Amazon was asked to comment on this proposed rule, they said that it sounds like a terrible idea, because the last thing that they would like is to have to report on what their customers are doing, because their customers don’t like it. On the other hand, they also said, well, if you’re going to require us to do that sort of reporting, like it’s done in the finance area, then maybe we should have an international regime, like they have in the finance area. Rather than just everybody reporting to the U.S. government, maybe there can be reporting to some kind of international body, and we can internationalize this, so it doesn’t seem so threatening to others around the world.
Lindsay: When you advise governments and industry leaders on the governance of emerging technologies, I imagine there’s a pretty significant technical knowledge gap between policymakers and technical experts in industry. Is that part of the dynamic that you observe in this, and how do you address something like that?
Rob: I mean, I would say two things on two sides of that. One is that, yes, there is a sense in which this technology, more so than probably other transformative technologies like the Internet and nuclear technologies, is being developed in the private sector, and the public sector isn’t where the expertise is. And so there’s this real need to talk to the private sector and to get information from the private sector, which is difficult because governments are obviously thinking about regulating and things like that, which makes it hard to feel like the private sector is giving them the unvarnished truth. So, on the one hand, I think it definitely is true that we really need to build up capacity within governments. That’s a huge challenge that we need to undertake.
On the other hand, I would say that I also have seen governments engaged in upskilling, in terms of knowledge of the technology. You know, I saw that when Rishi Sunak, the prime minister of the U.K., announced that he was going to have the AI Safety Summit in the U.K. last fall, the U.K. government had to learn so much so quickly in coming to terms with governance in this space. I think they were incredible at that, frankly. And I’ve seen even some of the hearings, like the Senate hearing that Sam Altman appeared at some months ago. It was, I think, heartening to see the sorts of informed questions that senators were asking at that time.
So I do think that it’s worth saying that we really need to upskill. And sometimes it is kind of fun to bash governments for not knowing things and like having no idea about how TikTok works and that kind of thing. But on the other hand, I do think also that they have a lot that they have to be aware of, and in some cases I think they’ve been really impressive in terms of how quickly they can upskill.
Lindsay: What are the risks of AI that worry you most?
Rob: What are the risks that worry me most? I mean, I don’t know. I just, I have so many worries, you know, and I—
Lindsay: (laughing) Oh, God!
Rob: I mean, in the sense that, you know, we want a future of human flourishing, and there are lots of ways that aspects of that future could be taken away from us. And I sort of feel like, rather than debate which risks are more important, we should just be addressing the whole range of risks to the extent that we can, and I think we have more capacity than we’re using right now to address all these different risks.
And so, things like really high levels of inequality, that worries me. Lack of access and of a voice in decision-making for huge parts of the globe, that worries me a ton. I think we should really be thinking about those things. I think some of these safety risks are significant, not so much for the current systems, although it’s always hard to know what gets built on top of the current systems that gives them a higher level of capacity. But for future systems, I think the safety risks are significant.
I think people talk about loss of control risks, and it really sounds like The Terminator or something, but I don’t know. If you just make a computer virus or something, you don’t have control of it anymore. It’s out there in the world and maybe it does some horrible things that you didn’t intend for it to do, but it does those things. So, loss of control is super easy. And as we build more and more capable systems, they may well go out into the world like a virus and do things that the creators wouldn’t have wanted them to do.
So I think we should take those seriously. Obviously, the bias that’s inherent in the development of many of these systems is really important. Those things are definitely real. All the studies show that. You know, it might be an autonomous car that hasn’t been trained on enough data from people who look a certain way and so doesn’t recognize them as a person, or something like that. I mean, things as dumb as that are real and are happening today. I don’t think we should fight about which of these risks is most important. I just think we have to address all those things.
Lindsay: So given that, what would your ideal regulatory regime look like?
Rob: Well, I mean, you know, there’s just, there’s so much complexity here. I think it would be hard in a podcast to give one regulatory regime that would cover all areas of AI and all that stuff. So maybe I’ll just say, something that I focus on a lot is international governance of AI. And so what would that look like? And I would just say that I think three pillars would be really useful for that sort of regime.
I think the first thing we need to do is develop some standards, some international standards. And there’s lots of work going on now to develop those. Places like the International Organization for Standardization and the IEEE and the ITU and other places are thinking about standards. I do think that safety standards in particular are an area that doesn’t always fit so well into the other standards processes, partly because what standards processes usually do is distill a body of knowledge into a set of standards. But in the case of safety standards, we don’t have that body of knowledge. We need to develop it. So it’s really cutting-edge research, which is a different thing. And I think that the safety institutes, which exist in some countries and are being developed in other countries, are a place where some of that standards development probably can happen. But then we need a process to go from the things that are developed in different safety institutes in different countries around the world to international standards, and to have more voices in the room giving input into those developing international standards. So developing the standards, that’s a key thing. We don’t have those right now.
Second pillar, I think it’d be really good to have an international reporting regime. First of all, if an AI developer is training a large model, it reports that; if a compute provider is having a large model trained on its systems, it reports that. A nice thing about something as simple as that is that you start to have the ability to detect if somebody is trying to get around a regulatory regime that says there are certain things you have to do to mitigate risks if you’re training a really large model. If they’re doing it in one jurisdiction and then another jurisdiction so that it doesn’t look like they’re training such a large model in any one jurisdiction, well, you can put those bits of information together and figure it out. It’s a lot like money laundering actually, and we have similar rules around money laundering. So we can figure out if somebody is trying to do that sort of thing and get around a regulatory regime. And compute providers can play this really interesting role, not just in collecting information and providing transparency for governance, which is really important, but also, in the future, I think, in playing a bit of an enforcement role.
You know, it can be sort of like when you try to use your credit card, but there are some red flags and you get denied, and you can’t use your card until you’ve resolved those red flags. There may be some things that we want AI developers and compute providers to do to be sure that the systems they’re developing meet governance guidelines. And if a compute provider reports up to, let’s say, their home government, who then reports up to some international body, and if at some point it gets reported back that this AI developer that’s developing an AI system on the compute provider’s systems doesn’t have the right permissions, then the compute provider can deny computing resources until those red flags are cleared, or something like that, right? You can envision different ways that enforcement can happen, but you have some ability there to do some enforcement.
And then the third pillar. In other industries, like civil aviation and maritime, and in organizations like the Financial Action Task Force and others, they all do a particular thing: there’s an international organization that isn’t auditing firms. The ICAO, the International Civil Aviation Organization, isn’t auditing airlines; it’s auditing regulatory jurisdictions to be sure that they have the right regulation and, in some cases, that they’re actually enforcing the right regulation. And so I think a body like that, in AI, would be a great idea. And we can give a regime like that some teeth, too. So, for instance, the FAA in the United States has an authority granted by Congress to restrict access to U.S. airspace for any flight originating from a jurisdiction that’s violating ICAO rules. So you could do something like that in AI. You can have ties, for instance, to the trade regime, and you can say that if a jurisdiction doesn’t have certification, if it’s violating international standards, then other countries can say, well, we’re not going to import any AI whose supply chain has something coming from that jurisdiction, and we’re not going to export the inputs into AI technologies to jurisdictions that are violating international standards.
So that’s the kind of thing that we do in aviation and maritime and finance and elsewhere. And if we had those three sorts of institutions, I think, at least in terms of governing the civilian sector, that would be really good.
Lindsay: That’s very interesting. You think that we’re up to the task, especially given how quickly the technology evolves? Can we do it?
Rob: Yeah, I think that there is a lot of focus and energy and a lot of people who are thinking about it and are trying to do it. And so I think that we have a chance. I do think that we have a lot of very significant challenges. Somebody said it’s like trying to develop standards for a car from inside the car while the car is in motion. Yeah, that’s hard to do. But we have to try!
Lindsay: Yeah. Robert Trager, thanks so much for providing some insight into this interesting, interesting space. And it’s good to be with you on Talking Policy. Thanks for joining us.
Rob: Oh, thanks for having me. It’s good to chat about. Thank you so much.
Lindsay: Thank you for listening to this episode of Talking Policy. Talking Policy is a production of the University of California Institute on Global Conflict and Cooperation. This episode was produced and edited by Tyler Ellison. To ensure you never miss an episode, subscribe to Talking Policy wherever you get your podcasts.
Thumbnail credit: Wikimedia Commons