Outcome of the Responsible AI in the Military Domain (REAIM) Summit 2024
2024-09-10
The REAIM Summit 2024, co-organized by the Republic of Korea Ministry of Foreign Affairs (MOFA) and Ministry of National Defense (MND), concluded successfully after its two-day run from September 9 to 10, 2024.
The ministerial roundtable on Tuesday, September 10, chaired by Minister of Foreign Affairs Cho Tae-yul and featuring a keynote speech by Minister of National Defense Kim Yong Hyun, brought together government representatives from over 90 states. The representatives discussed and shared their national views on (1) general approaches and priorities, (2) concerns and challenges, and (3) prospects for international cooperation.
The REAIM Plenary, consisting of three keynote panel discussions and 47 corresponding breakout sessions, was held on September 9 and 10. In the first plenary session, on “understanding the implications of AI on international peace and security,” participants assessed the potential impact of AI on the international security environment — particularly on conflict dynamics and the proliferation of weapons of mass destruction (WMDs) — and the benefits and risks associated with the military application of AI. The second plenary session covered the agenda of “implementing responsible applications of AI in the military domain”: Participants identified key principles and measures necessary for the responsible application of AI in the military domain and explored ways to translate general principles into concrete actions. The third plenary session addressed the agenda of “envisioning the future governance of AI in the military domain”: Participants explored key considerations in the way forward for AI governance and measures to enhance international cooperation on the responsible use of AI in the military domain.
In addition, two rounds of REAIM Talks were held in the form of special sessions. The first session of REAIM Talks, featuring experts from relevant industries and academia, focused on the essential aspects of AI that policymakers should understand. In the second special session, technical experts explored engineering approaches to implementing policies for responsible application of AI.
Moreover, an AI exhibition and demonstrations, organized by Korean companies, were held in the lobby of the Lotte Hotel Seoul throughout the two-day Summit. Visitors were able to experience firsthand how AI can be applied in the real-world military domain. The participating Korean companies took the opportunity to deepen their understanding of the responsible development, deployment, and use of AI in the military domain and to promote their products and technologies to visitors from around the world.
At the closing session that followed the ministerial roundtable, the “Blueprint for Action” was officially endorsed as the outcome document of the REAIM Summit 2024. This declarative document, supported by 61 states, lays out a roadmap for establishing norms for AI in the military domain: It suggests principles and a framework for future governance, emphasizing that being responsible entails complying with international law, holding humans responsible and accountable, ensuring the reliability and trustworthiness of AI, maintaining appropriate human involvement, and improving AI explainability.
ROK Minister of Foreign Affairs Cho Tae-yul praised the Blueprint for Action as a collective achievement reflecting the collaboration and insight of the international community on AI in the military domain. He further emphasized the importance of advancing efforts to translate the principles suggested in the document into concrete actions and of developing measures for their implementation.
The Korean government, as a global pivotal state, remains committed to continuing its leadership in setting norms and governance for AI. Building on the global AI leadership established through the REAIM Summit 2024, as well as the AI Seoul Summit and the AI Global Forum held in May this year, Korea is determined to further advance these efforts and lead the international community’s discourse on AI.
From Principles to Action: Charting a Path for Military AI Governance
As governments around the world prepare to deploy AI at scale, the second Summit on Responsible AI in the Military Domain (REAIM) convened on September 9-10 in Seoul to discuss the future of governance efforts. The REAIM Summit is a unique multistakeholder dialogue gathering representatives from governments, industry, academia, and civil society organizations. It is also one of the few venues where Chinese and American government officials, alongside delegates from more than 90 countries, meet to discuss the global governance of military AI. But heightened geopolitical competition, as well as contested notions of what constitutes “responsible AI,” pose immediate challenges to the development and adoption of shared normative, legal, and policy frameworks.
A Blueprint for Action?
The 2024 REAIM Summit culminated in a “Blueprint for Action,” a non-binding document building on the “Call to Action” that came out of last year’s summit in The Hague. The Blueprint outlines a series of principles surrounding the impact of AI on international peace and security, the implementation of responsible AI in the military domain, and the future of AI governance. Recognizing that the debate on military AI has moved beyond lethal autonomous weapons systems, the Blueprint affirms that the principles apply to all “AI applications in the military domain,” including those used in logistics, intelligence operations, and decision-making.
The Blueprint calls for policymakers to pay particular attention to AI-enabled weapons, AI decision-support systems, and the use of AI in cyber operations, electronic warfare, information operations, and the nuclear domain. While still heavily focused on the tactical level, the Blueprint acknowledges the potential for AI to undermine international peace and security by complicating escalation dynamics and lowering the threshold for the use of force. Finally, the Blueprint urges stakeholders to recommit to knowledge exchanges, capacity-building, and forging a common understanding of the risks and opportunities associated with military AI.
The Blueprint for Action was endorsed by 61 countries, reportedly including France, Japan, Pakistan, South Korea, Sweden, Switzerland, Ukraine, the United Kingdom, and the United States. China was not among them, despite endorsing the Call to Action last year. Beijing also has not endorsed the U.S.-led “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” which offers parameters for the development, deployment, and use of military AI. Neither has Russia. And Israel, a key U.S. ally that is already deploying AI-enabled technologies on the battlefield in Gaza, has not endorsed any of these initiatives.
The lack of international consensus on the Blueprint for Action is disappointing, if not surprising. The principles contained in the document reflect the lowest common denominator of agreement among stakeholders on what constitutes “responsible” use of military AI, such as the assertion that “AI applications should be ethical and human centric,” where humans remain responsible and accountable for AI development and use. Similarly, the document uncontroversially affirms that AI-enabled technologies should be subject to legal reviews and used in accordance with all applicable international laws, namely international humanitarian law (IHL) and international human rights law (IHRL). Still, roughly 30 government representatives at REAIM did not endorse these statements.
The failure of states to agree on such fundamental principles suggests that the framing around “responsible AI” remains contested at the global level, with democratic countries adopting a more values-driven approach than authoritarian regimes to mitigating potential harms from AI to individuals and society. That lack of consensus, however, should not be interpreted as a lack of concern. In one of the breakout sessions at the REAIM 2024 Summit, Chinese representatives privately expressed concern about Israel’s use of AI in Gaza, underscoring a shared sense of urgency that the inability of AI governance frameworks to keep pace with technology is already resulting in real-world harm.
REAIM, of course, is only one forum where discussions on military AI governance are unfolding. The United Nations Summit of the Future, which takes place in New York just over a week after the REAIM Summit, will provide another opportunity to address these challenges and foster global consensus.
Moving from Principles to Practice
While the Blueprint may not be groundbreaking, there are signs we are moving in the right direction. There was an emerging consensus at REAIM that policy debates have for too long revolved around lethal autonomous weapons systems (LAWS) as opposed to broader AI applications, while narrowly focusing on tactical issues at the expense of strategic considerations. Moreover, participants agreed that these debates have been held at a level that is too general and abstract, thereby impeding effective governance.
But that is gradually changing. Several sessions at REAIM emphasized the need to move away from the LAWS debate, focus on strategic security, and embrace rigorous testing and analysis of AI use in concrete case studies. Throughout, there was broad agreement on the need to be more forensic about research questions and more precise in identifying knowledge gaps. As RAND Europe’s James Black put it, “We are still in the problem-finding, not problem-solving, phase.”
How can we move toward the problem-solving phase? That was the focus of the REAIM panel, “Responsible Military Use of AI: Bridging the Gap Between Principles and Practice,” co-hosted by Carnegie Council for Ethics in International Affairs and the Oxford Institute for Ethics, Law and Armed Conflict (ELAC). Drawing together leading ethicists, international lawyers, and military strategists, I asked panelists how decision-makers can more effectively translate principles for AI governance into concrete global action. The panel was the beginning of a series of planned workshops that aim to address this urgent question.
Professor Kenneth Payne of King’s College London captured the sentiment in the room: “Talk about norms can easily pick off the low-hanging fruit, like noting concern and the need for greater trust—but it’s hard to progress far beyond that.” In an era of increased geopolitical competition, AI poses significant risks at the strategic level, including by rendering conflict escalation dynamics more opaque and unpredictable. “The ‘grammar’ of war, to borrow from Clausewitz, is uncertain,” Payne said.
Amidst that uncertainty, being a first mover in shaping the rules of the road offers distinct strategic and normative advantages. Dr. Paul Lyons, senior director for defense at the Special Competitive Studies Project and a former U.S. Navy captain, stressed that “while governance is important, while responsible speed is important, let’s not forget that there is an innovation race, and the first mover will assign those values.” Responsible AI, according to Lyons, requires rapid experimentation, where operators test different governance models in a controlled environment.
As states race to develop and deploy new technologies, transparency will play a critical role in ensuring responsible AI use. “Implementing responsible AI principles can create efficiency, effectiveness, and legitimacy,” emphasized Dr. Tess Bridgeman, co-editor-in-chief at Just Security and former U.S. National Security Council deputy legal adviser. “Transparency is an enabler of trust in the AI systems that we’re producing and operating, and trust in turn can foster faster adoption of those capabilities.” Lyons agreed, observing that “trust is inherent in who we are, what we do, the visions we follow, and the values that serve us, as we prescribe how AI should be used.”
There are several steps states could take today to improve transparency on military AI. First, states could publish their domestic national strategies, policy documents, frameworks, and legislation for the development and use of AI in the military domain. Crucially, this should involve redacting and declassifying policies and procedures for the use of AI in national security contexts, including in decision-making pertaining to the use of force and intelligence operations that support combat functions. Second, states could create compendiums of existing knowledge on military AI. Such compendiums might usefully include case studies on the responsible use of AI-enabled technologies in conventional and unconventional conflicts. Finally, states should develop collective interpretations of how international law applies in specific AI use cases in ways that are accessible to public debate, while sharing best practices for legal reviews.
Alongside transparency, knowledge transfers and capacity-building will be essential for making military AI governance more robust. While policy debates largely have focused on regulation, lessons learned from drone warfare suggest that non-binding instruments—including policy guidance, rules of engagement, and codes of conduct—may be more effective in practice in ensuring that ethical guardrails are in place. In short, military AI governance may lie more at the level of policy than law, with training being a fundamental component of implementation.
Above all, effective AI governance requires political will. Professor Toni Erskine of the Australian National University stressed the need to develop “supplementary moral responsibilities of restraint,” that is, responsibilities to create the conditions within which we can discharge the duties of restraint that are already broadly endorsed in international relations, including in the UN Charter and international humanitarian law. “We face the dual task of establishing a consensus on new applications of existing hard-won international norms, and also generating agreement on new principles that would contribute to emerging norms.” Given the risks to humanity, Erskine concluded that a “coalition of the obligated,” rather than of the willing, is needed now. Delivering on this task will require strengthening transparency and confidence-building measures, drawing on lessons from the cyber and nuclear domains.
Everything, All at Once
REAIM underscored two imperatives for military AI governance: forging greater international consensus on what the principles should be and taking concrete steps to implement those principles. But what should take precedence? How should policymakers prioritize AI risks? And are there any “quick wins,” given that the pace of technological advancement has already outpaced nascent AI governance frameworks?
“The blueprint is an incremental step forward,” observed Dr. Giacomo Persi Paoli, head of the Security and Technology Programme at the United Nations Institute for Disarmament Research. “By going too fast, too soon, there is a very high risk that many countries do not want to engage.” Experts are measuring governance progress in years, not months. Meanwhile, tech companies are moving ahead rapidly. “Just outside the doors [of REAIM] where this timeframe is espoused as inevitable…companies are actively selling the very systems” that require regulation, commented Dr. Alexi Drew, technology policy adviser at the International Committee of the Red Cross.
Policymakers can triage and prioritize AI risks to a certain extent. Focusing on high-impact concerns, such as AI’s potential to destabilize strategic deterrence and lead to nuclear warfare, is one place to start. Another approach is to tackle risks that have already manifested, such as the use of AI decision-support systems in targeting. Governance frameworks could also prioritize the regulation of AI tools that are highly destabilizing and could easily fall into the hands of terrorists.
That list of priorities highlights the need to address near-term and existential threats simultaneously. As a result, as Payne suggested at the REAIM panel, discussions on military AI governance and existential risk could soon converge. When it comes to AI governance, policymakers need to do everything, all at once.
There are no quick wins here. Still, relatively low-hanging fruit includes developing and promoting national policies for AI governance, as well as mandating regular legal reviews of AI-enabled systems in military contexts. The transparency measures raised in the Carnegie Council-ELAC panel at REAIM are other steps that policymakers can take right away, in the absence of legislation or international consensus. And while it is neither a quick nor an easy win, U.S. diplomats are signing up more countries to the Political Declaration and should continue to do so—particularly allies who may be among the worst abusers of this technology. At the same time, governments must build a “coalition of the obligated” to tackle the enormous challenges that still lie ahead.
None of these steps will be possible without greater collaboration between technical and policy experts in government, industry, academia, and civil society. More robust public-private partnerships are essential. REAIM provides an important model for partnerships that bring together a diverse, interdisciplinary group of experts to tackle these types of complex issues.
As this dialogue unfolds, it is essential to address deeper shifts in how AI is reshaping the ethical and strategic dimensions of war. Unlike conventional weapons, AI changes not just how we use force but also how we conceive of the use of force. The character of warfare is not the only thing that is changing; human perceptions of war are changing too. To meet the challenges of the present moment, AI governance must be grounded in the broader social, institutional, and uniquely human environments in which the technologies are deployed.
Dr. Brianna Rosen (@rosen_br) is a senior fellow at Just Security and a strategy and policy fellow at Oxford University’s Blavatnik School of Government. She previously served for a decade in the U.S. government, including at the White House National Security Council and Office of the Vice President.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the authors and do not necessarily reflect the position of Carnegie Council.
Learn more:
Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM)
REAIM Blueprint for Action
GC REAIM at the REAIM 2024 Summit. September 17, 2024
Sixty countries endorse ‘blueprint’ for AI use in military; China opts out – Reuters
UN advisory body makes seven recommendations for governing AI – Reuters