How to Cite
Cotino Hueso, L., & Gómez de Ágreda, Ángel. (2024). Ethical and international humanitarian law criteria in the use of artificial intelligence-powered military system. Novum Jus, 18(1), 249–283. https://doi.org/10.14718/NovumJus.2024.18.1.9
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Authors who publish with this journal agree to the following terms:

    1. Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution Non-Commercial License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
    2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
    3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).

Abstract

The uses of artificial intelligence (AI) and lethal autonomous weapon systems (LAWS) have no specific international regulation, and proposals for general AI regulation usually decline to address them. Nevertheless, military AI systems must comply with, and integrate into their design, the applicable International Humanitarian Law (IHL) and the principles of AI ethics. This should take into account distinctive military elements such as effectiveness, the criticality of results, the protection and quality of information and data, complexity and dynamism, the dual nature of the technologies, their potential use by terrorist groups or organizations, and the scalability of their use. Based on comparative experience, the authors formulate ethical principles for artificial intelligence in defense, in line with general ones but with due regard for the particularities of the military context. Several topics are particularly emphasized, such as the need for human control (limited transfer of autonomy, meaningful human control, accountability throughout the life cycle), the absence of bias, and robustness, especially against unintended engagements, as well as the principles of reliability, transparency, traceability, and security of military AI systems. Sector-specific aspects are also discussed, such as the individual responsibility that governs IHL, the difficulty of projecting the "doctrine of double effect" onto autonomous systems, and the unpredictability of these systems.

Keywords:

