Kaja Kowalczewska: Podcast recording transcript in English

source: www.youtube.com/watch?v=X9dtEOMcglI&ab_channel=InstytutSprawObywatelskich

0:06

Are you aware?

0:19

There is an existential question awaiting an answer: is our civilization, in the spirit of humanitarianism, ready to accept the robotization of the process of depriving people of life? From Kaja Kowalczewska’s book “Lethal systems with advancing autonomy – international law analysis”.

0:44

The targeting of humans by autonomous weapon systems is an example of brutal digital dehumanization. It strips people of dignity, degrades human nature, and eliminates or replaces human involvement and responsibility in the use of force through automated decision-making processes. – Stop Killer Robots.

1:09

Israel today carries out the first truly industrial technological annihilation of the twenty-first century. Jarosław Pietrzak, “Palestine Israel: genocide with artificial intelligence”.

Good morning, this is Rafał Górski with another episode of the podcast “Czy Masz Świadomość” (Are You Aware), part of the Civic Affairs Weekly. Today your guest and mine is Dr. Kaja Kowalczewska, assistant professor at the Incubator of Scientific Excellence at the University of Wrocław, doctor of legal sciences, and member of the Commission for the Dissemination of International Humanitarian Law operating at the Polish Red Cross (PCK). She is currently a co-author of the project “Common defense policy – the legal framework for the development of the European defense industry”, funded by the Central European Academy in Budapest, where she works on the subject of military robots. Good morning, Doctor. What are the main ethical and legal challenges of integrating artificial intelligence into weapon systems?

2:36

I think these are challenges we are also familiar with outside of military discussions, from how artificial intelligence affects our lives in general. It’s just that in the context of armed conflicts, in my opinion at least, they have far-reaching consequences, because the use of AI in decisions that end in death or serious destruction is much more severe than, say, being deceived or targeted by ads on social networks. So the discussion about this application of AI is, first of all, about the decision itself: do we want to transfer some decisions in war to AI? And if such a decision is made and it is positive, are we aware of all the challenges related to so-called bias, that is, the conglomerate of biases and simplifications that comes with using artificial intelligence, which is supposedly going to replace our human brain and our cognitive skill of qualitatively assessing what is going on around us?

On the other hand, we know that artificial intelligence, at least at the stage we are at today, is not very predictable and is subject to a large number of errors as to whether it performs the task we would like it to perform. And how do we deal with those errors? In the legal aspect, there is the question of who will bear responsibility for an error, and with all the complexity of artificial intelligence and the way it is created, this is what poses the biggest challenge for lawyers. Who will be held responsible? And in war, who, if anyone, will bear responsibility for war crimes, for crimes against humanity, or for the genocide mentioned in the introduction?

4:46

It is scary to listen to you. You wrote the book “Artificial Intelligence in War”, in which you discuss various aspects of its use. In the military context, which one do you consider the most problematic, and why?

5:04

I have just tried to hint at it already. You also cited such examples, at least from Israel, where in fact AI is still used to support decision-making processes, much like the decision-support systems we know from everyday life, for example in recruiting an employee. These are of course very important elements in war, but in my opinion they are not the most critical ones, because they are just processes that support human decisions. In my opinion, the problem begins when we decide that the human factor can be eliminated and artificial intelligence will simply make certain decisions for us – especially if these are decisions, as in war, about killing and destruction.

6:05

In this international discussion and public debate, which unfortunately is still lacking in Poland, two terms appear; in Polish usage they are the abbreviations AWS – not to be confused with Solidarity Electoral Action (Akcja Wyborcza Solidarność) – and LAWS. Could you explain to our listeners what these are about?

7:09

Well, yes, the terminology here is of course complicated, because a large part of the participants in these debates are lawyers, and it is known that we love very precise definitions, especially when such definitions are not actually in the regulations. The terms you cited are a kind of mental shortcut. AWS stands for autonomous weapon systems, that is, systems which by implication are based on artificial intelligence.

And this is a very broad category. It can include systems that help analyze data, systems that do predictive analysis, systems that help us in transport and logistics, or in adjusting the amount of ammunition to what we need in order to carry out a certain attack effectively and yet economically. A kind of subgroup of these, I would say, are LAWS, which are also autonomous weapon systems, only with the word “lethal” added at the beginning – that is, exactly those systems whose main task is to deliver the physical force that will cause death.
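To make the subset relation between the two terms concrete, here is a minimal illustrative sketch; the class and field names are hypothetical and follow only the definitions given above, not any official taxonomy:

```python
from dataclasses import dataclass

@dataclass
class AutonomousWeaponSystem:
    """AWS: any military system whose critical functions rely on AI."""
    name: str
    function: str          # e.g. "data analysis", "logistics", "targeting"
    lethal: bool = False   # True marks the LAWS subset

def is_laws(system: AutonomousWeaponSystem) -> bool:
    """LAWS: the subset of AWS whose main task is to apply lethal force."""
    return system.lethal

# Hypothetical examples of the two categories:
supply_router = AutonomousWeaponSystem("supply-router", "logistics")
strike_drone = AutonomousWeaponSystem("strike-drone", "targeting", lethal=True)
print(is_laws(supply_router), is_laws(strike_drone))  # False True
```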


8:01

Regarding warfare and these systems: in these definitions, as I mentioned, the emphasis is on the entire process being carried out on the basis of artificial intelligence. These systems can have various applications, and lethal ones, for example, can be equipped with different types of weapons. Additionally, as I mentioned, there is a lack of definition: we don’t really know where automated systems end and autonomous ones begin, where a predictable algorithm ends and full artificial intelligence begins. Consequently, we also don’t really know which existing systems can be counted among them, and whether we are still only discussing the distant future when it comes to this artificial intelligence, especially the lethal kind.

8:59

And how does the development of these systems relate to the conduct of warfare, here in the context of international law?


9:16
Generally, in law we have specific treaties that address particular weapons: treaties prohibiting the use of chemical, biological, and nuclear weapons, as well as anti-personnel mines and cluster munitions. However, there is currently no treaty on autonomous systems. There has been discussion of one for the past decade, and last year the UN Secretary-General stated that one of his goals is to have such a treaty signed by 2026 at the latest. However, these are merely his assumptions and promises. Personally, I am rather pessimistic about the creation of such a treaty, so we currently lack dedicated regulation for these systems. In its absence, we rely on the general principles that have accompanied warfare from the beginning.

Following the Second World War, the Geneva Conventions and their Additional Protocols were developed, which outline the protection of certain groups and facilities, regardless of whether force is used against them by autonomous systems or by more conventional means. These conventions also specify the obligations of warring parties when conducting attacks, particularly regarding targeting. In my opinion, targeting – the identification, selection, and engagement of targets – is the most controversial and potentially most damaging aspect of these lethal systems. I apologize if my translation of these terms doesn’t sound as fluent as the original English.

Firstly, there’s the principle of distinction: whenever we launch an attack, we must distinguish between what is a military objective – a legitimate target for whose destruction we bear no responsibility – and what is, in a sense, the rest of our universe, primarily civilian objects and civilians. This is a general principle to which AI will also have to adapt, so here we have several question marks as to how capable AI is of identifying objects and individuals in a dynamic environment and correctly classifying them into these groups. I’ll add that this requirement is also a significant challenge for human soldiers.

Distinction is just the beginning. We also have the principle of proportionality, which states that every attack must be proportionate: the losses incurred, the so-called collateral damage, must be justified by the concrete and direct military advantage anticipated from that attack. This is something we often forget, although it seems to me that now, with armed conflicts so close by, our society is beginning to understand a bit better that not all destruction of civilian property and not all civilian casualties are war crimes – only those where the principle of proportionality is violated, and violated consciously, with the intention of causing disproportionate harm. And here again we see that this is not a simple mathematical formula that we can input into AI so that it generates a proportional result for us. It is a challenge for humans too, and it arises from many smaller elements that military personnel, especially commanders, must take into account when deciding on a particular attack.
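As an illustration of why the proportionality test resists automation, here is a deliberately naive sketch; the numeric scales are entirely hypothetical, and the point, as the speaker stresses, is precisely what such a single comparison cannot capture:

```python
def naive_proportionality(expected_civilian_harm: float,
                          anticipated_military_advantage: float) -> bool:
    """Toy version of the proportionality test (Additional Protocol I,
    Art. 51(5)(b)): an attack is unlawful if expected civilian harm is
    excessive relative to the concrete and direct military advantage
    anticipated. The scales here are hypothetical; the real test is a
    qualitative judgment over incommensurable values, context, and
    uncertainty, none of which a single comparison captures."""
    return expected_civilian_harm <= anticipated_military_advantage

# Toy results only -- no real assessment reduces to two numbers:
print(naive_proportionality(2.0, 10.0))   # True
print(naive_proportionality(10.0, 2.0))   # False
```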

And there is yet another such important principle among these main ones, namely the principle of precaution, where during the planning and execution of an attack, the parties involved must consider the broader context in which the attack is to be carried out. Again, this is a situation where a qualitative assessment must be made, and we know that currently, AI may be better than us in terms of quantitative analysis of the amount of big data it can process. However, when it comes to qualitatively assessing certain information it receives, which we in our human world define in legal terms, it is still a significant challenge.

So, to summarize this lengthy discourse a bit: if these systems are already being developed or will be developed, they will have to operate in accordance with these principles. In my opinion, for now we can sleep a little more peacefully, because we know the technology is not sufficiently advanced. However, it is perhaps a matter of years, or more likely decades, before this technology proves to be better at qualitative assessment. Then the question remains: if such a machine is able to perform tasks better not only from a military standpoint but also from a legal one, we are left with only an ethical dilemma. Do we want decisions of this kind to be handed over to and executed by artificial intelligence? That is the crux of the legal debate about these systems.

15:35

Exactly – the first quote, which is the last sentence of your doctoral thesis, was about that. Out of curiosity, while browsing through your work, I searched for the name Asimov and his famous book “I, Robot”, with the Three Laws of Robotics. I understand you didn’t write about these laws in your work. I’ll remind the readers, or listeners, of the Three Laws of Robotics. The First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law: a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And the Third Law: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Later, after discussions and various debates surrounding the book and these laws, Asimov added the Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.


16:34
I must admit, as you rightly noticed, my doctoral thesis was dedicated to law, and these ethical aspects were only mentioned there. I think ethics is a completely different field – connected in a way, but still distinct from legal science. And while these laws may sound warm and comforting to us, placing humans first and casting robots as guardians, the truth is a bit different.

I start from a different premise, because if we were to put these laws of robotics in the context of armed conflicts, I am very curious what Asimov would have said. I think the realities of war contradict some of these laws, yet wars happen and, unfortunately, I don’t think they will stop happening. So what can we do with such laws? Their benefits, I believe, remain purely theoretical.


18:35

What are the potential consequences for global security if control over combat decisions is handed over to machines?

18:45

Here we also remain in a rather catastrophic perspective. If we were to entrust decisions to this type of system, it seems to me there could be certain uncontrollable consequences – a bit like with not entirely balanced decision-makers who have access to the nuclear button. At the current stage of technological development there is still too much uncertainty; the outcome would be quite random and unpredictable. But if we could encode a large set of ethical rules and principles by which we want the international order to be governed, it might turn out that artificial intelligence is more morally bound than national leaders. Since the 1940s there has been a prohibition on the use of force in international relations, yet we see that it doesn’t necessarily work. So perhaps artificial intelligence, which would have a significantly stronger imperative to adhere to these principles without autonomously changing them, would be more ethical and bring us more security. But I think it’s a difficult discussion to have, because we have too little data on how artificial intelligence works.


20:21
Sure, can you provide more information about this? You mentioned it a bit, but in which conflicts taking place now are these technologies potentially being used? Which ones are you referring to?

Yes, that’s the thing with armed conflicts – only a certain layer of information reaches us. We have the conflict in Ukraine, and we know there is also the conflict in Gaza, as well as numerous regional conflicts on practically every continent. In reality, it is only after their conclusion, if at all, that we will learn the details of what was actually used, because it is not in the interest of the warring parties to show all their cards to the world.

However, we do hear that these systems are being used in Ukraine and in Israel, or in Gaza, and from what I know they are currently used to support human commanders’ decisions. So we are not dealing with lethal autonomous systems yet, at least not in Israel, although there was quite a bit of noise about the Gospel system, which was somewhat demonized in the media and portrayed as more powerful than it actually is. Nevertheless, the Israeli forces are undeniably among the most technologically advanced armies in the world and do use artificial intelligence. For years – not just in this new episode of the conflict with Palestine – they have been designating military objectives based on data aggregated from earlier episodes of the conflict, probably locations critical to armed conflicts. They certainly use it to analyze collected intelligence data. Regarding Ukraine, I have heard that, apart from analyzing intelligence data, they also use it to predict the adversary’s next steps and thus to conduct military decision-making processes more quickly and effectively.

We also heard, probably a few years ago, reports that a drone used in the conflict in Libya was supposed to be an autonomous drone of Turkish manufacture. However, the publicly available information, which I tried to investigate, did not give a clear answer as to whether it was indeed an autonomous drone or a drone like many others nowadays, with certain automated functions a bit below autonomy – able to fly to specific coordinates while adjusting its path to atmospheric conditions, or to follow certain limited scenarios programmed into it in advance. So at the moment, it seems to me that the systems we are discussing today are not yet actively on the battlefield. We are, however, hearing more and more about the non-lethal, decision-supporting systems in combat zones. As I said, just as in every aspect of our lives where there is ever more data, the military also uses these systems, and as long as the human factor is preserved, I think this can bring more benefits.
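Since the line between “automated” and “autonomous” comes up repeatedly, here is a hedged illustrative sketch of the automated end of the spectrum; the controller below is a toy, not real flight software:

```python
def automated_waypoint_step(position, waypoint, wind, gain=0.1):
    """One step of a fixed, pre-programmed guidance rule: head toward a
    given waypoint while compensating for wind. Every behavior traces
    back to code a human wrote and can inspect, which is what makes the
    system 'automated' and predictable."""
    dx = waypoint[0] - position[0] - wind[0]
    dy = waypoint[1] - position[1] - wind[1]
    return (position[0] + gain * dx, position[1] + gain * dy)

# Deterministic, reproducible trajectory toward fixed coordinates:
pos = (0.0, 0.0)
for _ in range(3):
    pos = automated_waypoint_step(pos, waypoint=(10.0, 5.0), wind=(0.5, 0.0))
print(pos)

# An "autonomous" system would instead select its own course of action
# from a learned model (e.g. a neural-network policy), so its behavior
# in novel situations is not fully predictable even to its designers --
# which is exactly where the definitional and legal difficulty begins.
```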


24:31
Is it even possible to maintain some balance between the benefits of using such technologies for military purposes and the need to preserve human control and ensure compliance with ethics? And if it is possible in your opinion, how can it be achieved?


24:54
It must be possible, because if not, then we simply have to put a firm stop to the development of such systems. I think it’s something we can’t pre-determine – we can’t firmly establish how we think it will look – because it will turn out differently as these technologies develop; we learn about them as they develop, as they reach further into the intimate spheres of our lives.

From a legal perspective, I think maintaining the human factor is a necessary element that will provide us with some semblance of security, because perhaps every military commander who decides to use this type of weaponry will think twice if they know that they, or their subordinate, or anyone in the entire state system, may be held accountable for it. It’s a fairly traditional deterrent role of the law, and the human factor is needed to anchor that responsibility somewhere. Of course, the problem lies in deciding at which stage, and that is incredibly difficult. First, the state makes a political decision to acquire a certain type of weaponry. Then it is usually developed by a private entity or a private military consortium – certainly not by one person – and with AI the responsibility suddenly becomes blurred, because we don’t know who wrote a specific piece of code or who created the neural networks that will then operate. Should it be the military commander, who may not fully understand how the system he uses works? From a legal point of view, the human factor must be anchored somewhere, and the decision about where belongs to the states that create the law.

From an ethical point of view, it seems to me that the human factor will also be a barrier against total dehumanization, because this dehumanization consists precisely in surrendering the last bastion of human agency to machines. It is a major ethical challenge to decide what kinds of actions we want to delegate to AI besides warfare – do we want human judges in courts to be replaced by artificial intelligence, for example? It seems to me that if we want to continue living in a civilization where human dignity is the ultimate value, we have to make that decision. But where exactly to draw that ethical and legal boundary? That is beyond my knowledge.


28:11
Here I will mention that the Institute of Civic Affairs – as listeners who regularly read the Civic Affairs Weekly know – has joined the Stop Killer Robots coalition. We are members, and this coalition aims to achieve the adoption of a treaty introducing a ban on these technologies. A question to you, Doctor: what should the international community actually do to prevent this from ending in a black scenario? At the beginning I also mentioned that you work on an international project; perhaps you could say a few words about it here as well.


29:15
It seems to me that the ten years of discussion, initiated in fact by the coalition you just mentioned and which the Institute has joined, have been sufficient time to familiarize ourselves with the topic. This discussion, which began even before the pandemic, has matured to the point where we should move from substantive debates on all these ethical, technological, military, and legal aspects to political decision-making. This political decision should be made at a forum representing, above all, the military powers. For example, when we talk about the treaty banning nuclear weapons, which has many signatories but which the states actually possessing such weapons have not joined, we can legitimately question the effectiveness of such a solution. So, on the one hand, we must consider that there may be a treaty that sounds beautiful on paper but has zero impact in practice, because it will be signed only by states far behind the military powers that are developing the technologies enabling the creation of AWS and LAWS.

On the other hand, states should decide whether to go for a total ban or for some regulation – a more nuanced approach, where I believe the factor of human control would be very relevant and would show that certain AI-based systems may be permissible up to a certain point, but beyond that point, especially where there is no human factor, we will not use them. Of course, we need to consider the international dynamics: major powers leading in AI technologies, such as the United States, Israel, Russia, China, and India, may not be willing to sign such a treaty. We must also remember that EU Member States, including Poland, are also members of NATO, so they have to deal with a partner who may be reluctant. But I would add right away that this would not be the first time all EU Member States signed a treaty while the United States did not – it is actually a traditional situation, so I don’t think this argument can be used here. In the European Union, and in fact throughout the Council of Europe, we have committed at the legal level to higher standards of human rights protection than the rest of the world. I think this is a very strong grounding for all the moral commitments of states to address the challenges we discussed today, both ethical and legal.

The project you mentioned is indeed being run at the Hungarian University of Miskolc, in view of Hungary assuming the presidency of the Council of the European Union in July. One of their priorities, given the geopolitical situation, is to consider how states should regulate new military technologies; robots, whether remotely controlled or endowed with artificial intelligence, are just one element of this project. I think it is in some respects a good initiative that a state is taking on the role of a leader, even regionally, because without such leaders no treaties would be signed.

In summary, it seems to me that the whole initiative regarding the treaty is a good direction. However, we need to balance it delicately, to avoid overhyping and creating theoretical constructs, and to aim instead for a solution that has the greatest chance of actually influencing the armies that will use artificial intelligence, while not compromising all the humanitarian achievements we have made since the Second World War. I know this is not an ideal answer, but international affairs, and especially the regulation of armed conflict, are very complex and dynamic topics in which many aspects need to be considered. Hence one last sentence about my project: I think it is a great initiative that we are trying to take a multidisciplinary approach, in which lawyers, ethicists, and military experts who also know military technologies discuss together, because only with such a group of experts can we develop effective and implementable recommendations. So I also strongly support your efforts to speak to our Polish government and try to make this issue one of the priorities of our foreign policy.


35:00

Well, it’s a very interesting issue altogether. I don’t know – it’s rather hair-raising when you look around the political scene and the various institutions of our country, asking about the issues we’re discussing today, and find there’s basically no one to talk to.

35:34

Yes, and it’s simply frightening. All the more, then, I am positively amazed that the Hungarians thought of this in advance, knowing that Poland will take over the presidency right after them. The time is now. Exactly – you can see that their thinking is much more forward-looking than ours.

36:13

Alright, alright, I think we’ll come back to the topic.


36:17

So I think the topic, as they say colloquially, is ongoing. Today our guest was Dr. Kaja Kowalczewska, assistant professor at the Incubator of Scientific Excellence at the University of Wrocław and doctor of legal sciences. Dr. Kowalczewska, thank you very much for the conversation, and goodbye. This is Rafał Górski signing off. Join us for the next episode of the podcast “Czy Masz Świadomość” within the Civic Affairs Weekly (Tygodnik Spraw Obywatelskich). See you next Tuesday.

36:53

Thank you for your attention, and see you in the next episode. Are you aware of all our podcasts? You can listen to them on the Institute of Civic Affairs website, as well as on the Institute of Civic Affairs channels on YouTube and Spotify. Listen, think, act.
