Special Online Briefing with Nathaniel C. Fick Ambassador-at-Large for Cyberspace and Digital Policy and Dr. Seth Center Deputy Envoy for Critical and Emerging Technology


The following Special Briefing was published by the U.S. Department of State, Bureau of Near Eastern Affairs on July 26. It is reproduced in full below.

MODERATOR: Greetings to everyone from the U.S. Department of State’s London International Media Hub. I would like to welcome our participants dialing in from the Middle East, Europe, and around the world for this on-the-record briefing with Ambassador-at-Large for Cyberspace and Digital Policy Nathaniel C. Fick and Deputy Envoy for Critical and Emerging Technology Dr. Seth Center.

The speakers will discuss the Biden administration’s recent announcement of voluntary commitments from leading tech companies to ensure safe, secure, and trustworthy AI systems, and on leveraging AI tools to empower solutions to global problems. We will have some opening remarks from our speakers and then they will take questions from participating journalists.

I will now turn it over to Ambassador Fick for his opening remarks. Sir, the floor is yours.

AMBASSADOR FICK: Thank you, Liz, and thank you all for joining us. Dr. Center and I are pleased to have this opportunity to speak with everyone about artificial intelligence and what the United States is doing diplomatically on this front. I will spare you the truisms about how AI is changing everything - or has the potential to change everything - and start with our north star. And I think the north star, the orienting principle for the United States on this topic, is preserving the power of innovation and really continuing to enable this new technology area to have positive impact in so many of the areas of common interest across the world.

It is no secret that only something like 15 - one-five - percent of the UN’s sustainable development goals are currently on track. And so we can imagine the application of emerging technologies, including AI, for advances in things like climate science and agricultural productivity and medical diagnostics and therapeutics. And so I do want to emphasize right at the outset that preserving this innovative capacity and enabling the upside advantages brought by AI are and should remain our north star.

That said, of course, there are risks. And it’s essential that governments put in place responsible guardrails to mitigate these risks and to safeguard our citizens. I will say here in the U.S. we have learned the lessons of the recent past. We do not intend to take a passive approach on the governance of artificial intelligence. We have been working with seven leading companies in order to develop a set of voluntary commitments - that those companies voluntarily agree to - to begin putting in place the governance structure for artificial intelligence.

And I think you’ve seen the press release and the news articles. The Secretary of State and Secretary of Commerce had a piece in the Financial Times yesterday. I won’t go through, in detail, the specifics of this except to say kind of in the aggregate that there are three buckets of commitments that we believe are fundamental to the future of AI, and they are safety, security, and trust.

So first, the companies have a duty to ensure that their products are safe. This means rigorous testing in order to ensure that the models are not returning or enabling the worst, most dangerous outcomes in things like biological science or cybersecurity. Second, the companies have a duty to build systems that are secure. So these are commitments related to safeguarding the models themselves from external cyber attack and from insider threats.

And then third, the companies have a duty to do right by individuals and by society. And it’s important that users around the world are able to tell whether audio content or visual content is in its original form or whether it’s been altered. It’s important that consumers be able to tell whether something is AI-generated. And so those are the - that is the initial instantiation of these voluntary commitments.

We started with voluntary for two reasons: First, because by their very nature, the fact that they’re voluntary on the part of the companies, they’re not inhibiting the ability to innovate in this important new technology area; and second, voluntary means fast. We don’t have a decade to put in place a governance structure here given the pace of technological change.

So these commitments are a first step toward a robust governance structure, a dynamic governance structure, a flexible governance structure. They are not the last step. And our role here in the U.S. Department of State now is to multilateralize these commitments. We’ve been in dialogue with about 20 of our closest partners. We intend to work fully through the G7 Hiroshima AI process under Japan’s leadership. We intend to support fully the UK’s global summit on AI safety, to be held this fall. And going back to where I began, we look forward to working broadly within the United Nations to harness the up-side, the advantages of AI, in support of the sustainable development goals.

So I’ll pause there. I look forward to a dialogue with all of you and turn the floor over to my colleague, Dr. Center. Thank you.

MR CENTER: And thank you, Ambassador Fick. I think that’s a thorough lay-down of the domestic piece and how we’d like to think about internationalizing this. I think, just to reinforce a couple of points to situate where we are: one, this is the beginning of the beginning of a new AI era. And what these voluntary commitments constitute is the first effort by the United States Government - working with some of the leading developers of foundation models - to put in place a framework that captures some of the concerns we have and how we propose to deal with them.

What is, I think, important to stress is if one goes into the commitments in some detail, you see a level of technical rigor that we think at least represents the start of a conversation around the kinds of benchmark topics that any country will need to engage in as they think about how to benefit from AI while minimizing the risks.

We see this, obviously, not only within our domestic context, but also within our diplomacy, as a bridge to a larger conversation. And we think these commitments represent an important step in our AI diplomacy because they’ll allow all of us, as partners, to move from the form of AI diplomacy to the content that we want to fill that form with within the different forums where AI is being discussed. I think in this case, when one talks about leadership of AI governance, leadership in this case means moving quickly with our best possible effort as early as possible because, as Ambassador Fick noted, we don’t have a decade to confront this issue. And in moving quickly, we recognize that not every answer is satisfactory at this point, that we don’t have a full picture of all of the potential downsides that may have to be governed for. But this has to be a point of departure, and we’re eager to move forward with our partners in doing that. Over.

MODERATOR: Thank you, Dr. Center. We will now begin the question-and-answer portion of today’s call. Our first question is a pre-submitted question and it comes from Aya Sayed from Roayah News Network. Aya asks: “How would the U.S. coordinate with other governments to establish responsible guardrails for AI?”

AMBASSADOR FICK: Thank you for the question, and I think we alluded to some of that in our opening comments. I would just reinforce that we view the AI era as fundamentally global. The benefits are global; the risks are global. The advantages and the challenges are all global; the participants are global. And so no single country, no small group of countries, no individual company or small group of companies alone, can dictate our future here. So it’s imperative from the beginning that we take a broad multilateral approach. And we identified a few of the vectors there that we view as most essential here at the outset.

Just to recap them, the UK has stepped up and announced an intention to lead via this global summit on AI safety in the fall. We applaud that effort. The G7, under Japan’s leadership, intends to run a robust Hiroshima AI process, which we look forward to being a part of. And then, of course, more broadly, at the United Nations we think that there are broadly inclusive opportunities to harness AI for good, for the achievement of the SDGs or application toward achieving the SDGs. But this is in no way an exhaustive list. This is - as Dr. Center said, we’re at the beginning of the beginning, but these are a few early efforts to make sure that we take as broadly coordinated an approach as possible. Thank you.

MODERATOR: Next we’ll go to a question from the chat. Cristina Criddle of FT asks: “How do you view regulation or licensing of foundation models? Would you consider a tiered approach to enable smaller entrants to the market?”

AMBASSADOR FICK: Yes. I think that fundamentally I’ll go back to the point that we’ve made, that we’re at the beginning of the beginning, that one lesson I have drawn from 15 years working in the technology sector as an entrepreneur and as an investor is that the future is fundamentally impossible to predict. Our innate human desire to predict the future is outweighed by our inability to do so. So how will the licensing landscape evolve? How will the competitive landscape evolve? The number of companies, the diversity of the models, these all remain to be seen. And so the approach that we take early on is quite deliberately based on principles and based on establishing norms and guidelines that we expect responsible players to sign up to and to adhere to.

So we’re focused on evaluating and addressing the risks and the benefits of the new technology, including the risks and the benefits posed by open-source models that could perhaps be powering these smaller entrants to the market. And so under the voluntary commitments that we’ve announced, all models, including those that would be released in the future under open-source commercial licenses, would need to meet the standards that have been set out. So that means that these baseline open-source models - and again, I’m assuming that those would be the models in all likelihood powering smaller, newer market entrants - will adhere to these commitments.

So again, it’s dynamic. We’re in the early innings of this game, and we continue to look closely at how to address the changes that independent developers could make to the models. Thank you.

MODERATOR: Next, we’ll go to a pre-submitted question from Dima Wannous of Al-Arabi Al-Jadid. Dima asks: “How will the United States deal with potential control of China and its allies over new artificial intelligence programs?”

AMBASSADOR FICK: I’m happy to have Dr. Center offer an initial answer there, and then I may elaborate.

MR CENTER: Sure. So I think in the context of today’s conversation about how to manage the risks, particularly of these foundation models that are going to increasingly, I think, define the AI landscape, if one looks at the categories of concern that we have - safety, security, in particular - I think those are global issues and global challenges. Really, many of the categories of concern and many of the technical best practices that you can imagine coming out need to be applied by any responsible company and any responsible state - whether it’s ensuring that a powerful model that could reveal capabilities around bio or chem or radiological risks does not proliferate.

I think these are the places where, when Ambassador Fick talked about a global conversation, one could imagine that happening, whether at the UN or someplace else, because these do transcend political systems. And so I think it is in our interest to build a global consensus around what secure and safe deployment of these new powerful systems looks like.

Ambassador Fick, you want to add anything to that?

AMBASSADOR FICK: No, I think you covered it. Thank you.

MODERATOR: Thank you both. We will move to another question from the Q&A chat. It comes from Jannis Brühl of Süddeutsche Zeitung: “How do you expect the companies to realize watermarking? There does not seem to be a sensible way to do it so far.”

AMBASSADOR FICK: Yeah, I appreciate the challenge in the question. It’s certainly been a topic of paramount interest to us. We think that it’s essential that consumers, that users, have the greatest possible ability to identify and distinguish AI-generated content in this world where all of us in democratic societies must anticipate the proliferation of AI-generated content, including things like deepfakes that really, without proper governance, can challenge the fundamental underpinnings of our democratic societies. I think that the technical question then of how to make watermarking feasible is exactly the right question. It’s something that we’re in dialogue with the companies on right now.

I would point you to OpenAI, which, among others, has a blog post on this topic identifying how they intend to approach the problem. I don’t want to speak for the technologists except to say that we agree that together we need to develop robust mechanisms to verify provenance - to somehow allow viewers who, let’s face it, are usually scrolling quickly through this content, to identify whether audio content or visual content has been altered. We have a shared understanding of the challenge, and I think we have a shared commitment to finding a technical way to address it.

And I think it would be - I think it’s essential that we at the same time talk about kind of user and consumer education. It’s going to be important for consumers to be - to some extent to be discerning consumers, right. One can imagine a public education campaign about the world of AI-generated content in the same way that many societies have done public education campaigns on things like seatbelt usage or smoking. So there’s a policy answer here. There’s a technology answer here. But there’s also a public education answer. And we think it’s going to require all of those pieces. Thank you.

MODERATOR: Next, we’ll go to a live question from Corinna Visser of Table.Media. Corinna, please go ahead.

QUESTION: Hi. Thank you for taking my question. I just put it also in the chat. Secretary of State, Mr. Blinken, and EU Commissioner Vestager had agreed to establish common guardrails for AI within the framework of the EU-U.S. TTC. But you didn’t mention this process. Has this been - has this been settled by the voluntary commitment or are you still working on the process? Thank you for taking the question.

AMBASSADOR FICK: Sure. Thank you for the question. And that was not an intentional omission on my part. I’ve attended the last two TTC ministerials - TTC3 in the United States and TTC4 in Sweden. We look forward, of course, to TTC5 to be held later this year back in the United States. And I’m certain that AI and U.S.-EU collaboration on AI governance will be a major topic of discussion at the next TTC ministerial.

I think that it is laudable that the EU has been focused on governance of AI for some time now. And we - the United States perspective - again, back to the beginning of my opening comments - is that it really is imperative that we put in place a governance and regulatory structure that addresses the downside risks without constraining the companies’ ability to innovate. And I look to history for a guide here in the context of the U.S. and the EU. And if you’ll indulge me for 30 seconds, I’ll share that perspective on the history here.

Thirty years ago a handful of companies in the United States, France, Sweden, Finland, and a couple other places - Japan and South Korea come to mind - had what felt like a shared and unassailable global advantage in telecom technology. We lost that advantage to some extent. Many of those companies no longer exist. The ones that do are not as robust as they once were, and that is - that is an important example of why it matters that we are deliberate and that we are coordinated in our approaches on these technologies across trustworthy suppliers.

If you fast-forward to the era of cloud computing, it’s no accident that the five global leaders in cloud computing are all American companies, because of immigration policy, because of tax and regulatory policy, because of a legal structure that allows - and frankly, a culture - that allows for failure and entrepreneurship, because of the ecosystem benefits in places like Silicon Valley. And looking ahead now to the era of AI, I think that we - the world is better off if we have leading AI companies being built in many parts of the world, including in the EU. And so it’s imperative that the regulatory approach that the EU takes doesn’t stifle that innovation. And so that is - thank you for identifying the TTC as another vector of collaboration here on AI. I certainly agree with you and I’m sure it’ll be a major part of the agenda at the next ministerial. Thank you.

MR CENTER: Ambassador Fick, if I could add one piece. I think both the United States and the EU have a shared belief that any type of code or rules or guardrails could not simply be a joint U.S.-EU project. And in part that’s the reason why both the United States and the EU have endorsed Japan’s Hiroshima process as a way, really, to rapidly internationalize an effort to establish these guardrails - and in part, also, as a way to find a mechanism to even further expand those guardrails beyond just a G7 context, which is the most important part of this in our mind. And so I think that’s the goal now, is to think about how we can use these truly international mechanisms to inform what a code could look like.

MODERATOR: Thank you. We will now go to a pre-submitted question from Hal Hodson at The Economist. Hal asks: “Does the U.S. Government believe that it is feasible to regulate the development and deployment of artificial intelligence in a way that goes beyond these voluntary commitments?”

AMBASSADOR FICK: Yeah, thanks for the question. I think that - I think that the fundamental answer to that question is that - is the old adage that politics is the art of the possible, and that each of us in our domestic political systems is going to be - to some extent the outcomes will be constrained or dictated by the will of our citizens and by our elected representatives. We think these voluntary commitments are an important first step for industry. We are looking forward to an executive order issued by the President of the United States in the coming months that will set some guidelines for U.S. Government use of AI. And I do anticipate different regulatory approaches, legislative approaches in our legislature on Capitol Hill. I - we already are seeing that obviously in the European Union, we’re seeing it in some other places around the world. And again, we need to strike a balance here, obviously, between national approaches and international harmonization. I think that’s always a tension point in these global technologies.

And - but the short answer to the question is I - we look at these voluntary commitments as a first step towards a more robust and multifaceted governance regime that may well include legislation and regulation. Thank you.

MODERATOR: Thank you. And we’ll now go to another pre-submitted question from Alaa Abood of Al-Qahera News. Alaa asks: “What is your strategy to ensure the safe usage of artificial intelligence now and in the future while we listen every day about the risk of it on human jobs and safety?”

AMBASSADOR FICK: Sure. I think that the - Dr. Center, I see you unmuting there. Do you have something you want to say here? I’m happy to offer a perspective, but don’t want to -

MR CENTER: Go ahead, go ahead.

AMBASSADOR FICK: I think that all of us obviously are focused on the effects of these technologies on our labor markets, on our workers, on our companies. And here, too, the future is to some degree unknowable. And so it’s important from the beginning - again - that we focus on the incredible benefits and opportunities unlocked by these technological revolutions. We heard, of course, that the internet was going to make whole industries obsolete. That may have been true on the margins in some places, but that job destruction was far outweighed by job creation and opportunity creation and economic growth enabled by these technologies. And if I had to make a bet about the AI era, it would be similar - that yes, there will be disruption; yes, people will need to learn new skills; that yes, all of us in developed economies and developing economies alike will face the challenge of some of our people, some of our workers being left behind, and we have a duty, I think, to try to anticipate those challenges and to focus on training and skilling.

And let’s not forget as well, we’re talking about artificial intelligence, we’re talking about these emerging frontiers of new technology, but we’re doing it in a context where one-third of the human beings on this planet are still not connected to the internet, where a third of humanity still does not have the opportunity to benefit from the upsides of these technologies. And so we do, I would argue, have both an interest and an obligation as developed economies to do what we can to connect the unconnected all around the world.

I’ll pause there, but thank you for the question.

MODERATOR: I think we have time for one more question, and we’ll go to Rishi Iyengar of Foreign Policy, who asks: “How do you see China’s role in global AI development? How much scope is there for the U.S. and China to find common ground on AI guardrails?”

AMBASSADOR FICK: I’ll look to Dr. Center to lead on this.

MR CENTER: So this is a great question, and it starts with one reality, which is that in - by many metrics, China is a leading AI state, whether it’s in the publication of research, commercialization at home, leading talent. And so, I think rightly, one can’t imagine a truly global consensus on the kinds of criteria one would want to govern the future of AI unless all countries, including all of the major AI powers, had a basic agreement.

I think the challenge and the reality the United States has is that the foundation for our entire approach to AI governance is anchored in the AI Bill of Rights and a rights-based approach to thinking about the application of AI. And that is in many ways, at a fundamental level, incompatible with how the PRC would think about the utilization and application of these technologies within its own context. And so we’ll have to work through the areas where one can find common guardrails, and again, that might be around managing the kinds of risks outlined in the principles and commitments around concern about the proliferation of these models to the kinds of actors that are unresponsive to international norms and rules. And there are going to be areas where it’s going to be very hard for the United States to have agreement with any country that is an authoritarian government in the context of how one can responsibly employ, deploy, and manage these powerful AI systems in the future.

AMBASSADOR FICK: I have nothing to add. Thanks very much for the opportunity to speak with you. We appreciated the discussion.

MODERATOR: And that concludes today’s call. I am sorry that we could not get to all questions today. I would like to thank Ambassador Fick and Dr. Center for joining us, and I would like to thank all of our journalists for participating. If you have any questions about today’s call, you may contact the London International Media Hub at MediaHubLondon@state.gov.

Source: U.S. Department of State, Bureau of Near Eastern Affairs
