The round table discussion, organized in a hybrid format, commenced on 22 June 2024 at 17:00 with a short presentation by Dr. János Székely, who welcomed participants and noted that the event formed part of the dissemination of research conducted by the event’s host at the Central European Academy (CEA). The presenters and participants in the researchers’ round table were then introduced to the audience: Barnabás Székely, certified data protection expert; Dr. János Székely, researcher and senior university lecturer at the Sapientia Hungarian University of Transylvania; Tamás Szendrei, junior lecturer at the Sapientia Hungarian University of Transylvania and doctoral student at the Géza Marton Doctoral School of Legal Studies, Faculty of Law, University of Debrecen; and Dr. Csongor Veress, senior university lecturer at the Károli Gáspár University of the Reformed Church in Hungary and a graduate of the National University of Public Service ‘Ludovika’ School of Doctoral Studies.
In his presentation, Barnabás Székely discusses the European Union’s AI Act. Adopted by the European Parliament in March 2024 and given final approval by the Council on 21 May 2024, the AI Act is set to become the world’s first comprehensive regulatory framework for AI, following a path similar to the GDPR in exporting EU standards globally. This extraterritorial approach means that even non-EU companies marketing AI products affecting EU residents must comply with the Act.
Székely highlights the AI Act’s hybrid nature, combining product safety regulation with human rights considerations. The legislation adopts a risk-based approach, categorizing AI systems into four levels: prohibited, high risk, limited risk, and minimal risk. Prohibited uses include social scoring, mass manipulation, and certain types of surveillance, notably emotion recognition in educational institutions and workplaces. High-risk AI systems, such as those used in healthcare and employment, face stringent obligations such as conformity assessments. Limited-risk systems are subject to transparency measures, ensuring users understand that they are interacting with an AI system and how it operates. Minimal-risk systems are encouraged to adopt voluntary codes of conduct, although Székely notes that Eastern European companies may lag in this area.
He also discusses exceptions to the AI Act’s scope, notably military, defense, and national security applications, as well as personal use and research and development. This creates a “hole in the sieve”, allowing significant AI applications to operate outside the regulatory framework. Székely raises concerns about consulting firms that help implement AI systems yet bear no obligations under the Act, questioning professional accountability in AI deployment.
Addressing the challenges ahead, Székely emphasizes the complexity of the AI Act, noting that its technical jargon makes it difficult to interpret. He predicts it may take at least a decade to fully understand and apply the legislation, given the lack of existing case law and the need for new guidelines.
Székely delves into the inherent risks of AI technologies, such as data privacy concerns stemming from AI’s reliance on vast amounts of personal data. He warns of biases and distortions in AI systems that can amplify societal inequalities, citing examples like discriminatory recruitment algorithms that disadvantage women or minorities. Copyright infringement is another issue, exemplified by AI models trained on online content used without authorization, which has led to legal disputes with entities such as The New York Times.
He highlights the dangers of deepfakes and misinformation, explaining how AI-generated content can manipulate public opinion and threaten democratic processes. The difficulty in distinguishing between reality and fabricated content poses a significant challenge, as illustrated by a deepfake incident in South Korea’s elections.
Concluding his presentation, Székely references István Örkény’s insight on the blurred lines between reality and fiction, emphasizing the responsibility to ensure AI does not perpetuate discrimination or amplify inequalities. He encourages proactive engagement with AI ethics to safeguard human rights and democratic values in an increasingly AI-driven world.
In his presentation, Dr. János Székely explores the intricate issue of dual-use technologies – tools and systems that can be utilized for both civilian and military purposes – and their profound impact on modern technology and international relations. He begins by contrasting military-specific technologies with dual-use items. For instance, the MQ-9 Reaper drone is a weapon explicitly designed for combat, leaving no ambiguity about its purpose. In contrast, commercially available drones like DJI quadcopters, originally intended for civilian uses such as photography, have been repurposed in conflicts like the war in Ukraine. Fitted with hand grenades instead of cameras, these modified drones have significantly altered warfare dynamics, diminishing the effectiveness of traditional armored units and slowing troop movements. This exemplifies how easily accessible technology can be transformed into potent weaponry, blurring the lines between civilian and military applications.
Dr. Székely emphasizes the ubiquity of dual-use technologies and the challenges they pose for regulation and non-proliferation. Technologies like microchips are integral to both everyday consumer electronics and advanced military systems, making it difficult to control their spread without hindering technological progress. Historical attempts to regulate such technologies include the Coordinating Committee for Multilateral Export Controls (COCOM) during the Cold War, which aimed to prevent the Soviet bloc from acquiring advanced technologies. After the Cold War, the Wassenaar Arrangement was established with similar intentions; however, its effectiveness is limited by its voluntary nature and by a membership that includes Russia while excluding China.
He discusses the inadequacies of current international agreements in addressing the complexities of modern dual-use technologies, especially in the realm of artificial intelligence (AI) and advanced computing. The rapid advancement of AI, exemplified by tools like ChatGPT, poses new challenges. While such AI systems can provide beneficial services, they can also be misused – for example, generating instructions for harmful activities if not properly restricted. Dr. Székely points out that AI systems do not inherently prevent misuse; safeguards must be explicitly programmed, highlighting the ethical responsibilities of developers.
The presentation delves into the challenges of preventing the proliferation of potentially dangerous technologies without stifling innovation or violating international trade agreements governed by bodies like the World Trade Organization (WTO). Dr. Székely highlights unilateral actions taken by states, such as the United States imposing export bans on companies like Huawei to limit their access to critical technologies like AI-related advanced microchips. While these actions are justified on national security grounds, they raise legal dilemmas regarding the balance between self-defence and the principles of free trade.
He brings attention to China’s social credit system as a real-world example of AI’s dual-use nature. This system assigns scores to citizens based on their behaviour, affecting their ability to travel, obtain loans, or access certain jobs. During the COVID-19 pandemic, it was used to enforce compliance with health measures and other regulations, demonstrating how AI can facilitate extensive social control. This raises significant humanitarian, legal, and privacy concerns, especially since such practices are not constrained by regulations like the European Union’s AI Act, which does not fully apply outside its jurisdiction.
Dr. Székely references popular culture, such as the Marvel films, to illustrate the potential future risks of AI in predicting and pre-emptively acting against individuals deemed a threat by analyzing data from their entire lives. While fictional, these scenarios underscore real ethical and legal questions about surveillance, pre-emptive justice, and the erosion of personal freedoms.
In conclusion, Dr. Székely calls for a critical examination of how dual-use technologies are regulated on both national and international levels. He emphasizes the need for coherent policies that address the civilian and military applications of advanced technologies without impeding legitimate trade and innovation. By inviting other participants to contribute to the discussion, he highlights the importance of collaborative efforts in developing legal frameworks that protect security interests while upholding humanitarian values, individual liberties, and privacy rights in an increasingly interconnected and technologically advanced world.
The presentation by Drd. Tamás Szendrei focuses on the growing role of artificial intelligence (AI) in corporate governance and its significant impact on various managerial functions. The speaker, a legal professional and PhD candidate specializing in the digitalization of corporate law, emphasizes that AI technologies are enhancing company efficiency, improving decision-making processes, and optimizing overall operations.
Beginning with decision-making, the speaker highlights AI systems’ ability to analyze vast amounts of data swiftly and accurately, aiding business leaders in making well-informed choices. The presentation distinguishes between fully automated decision-making – where AI systems make decisions without human intervention – and assisted decision-making, where AI supports but does not replace the human manager. Currently, the latter is more prevalent, with AI serving as a tool that complements human judgment.
Risk management is identified as another critical area where AI proves invaluable. Through predictive analytics and machine learning, AI systems can forecast potential risks, detect fraudulent activities by identifying unusual transactions, and assess credit applications to mitigate default risks. This proactive approach enables companies to manage risks before they escalate into serious problems.
The speaker also discusses AI’s role in ensuring regulatory compliance. AI systems can monitor operations, automatically flagging any deviations from compliance standards, which is particularly useful in manufacturing processes where products must meet specific regulations. In supply chain and logistics, AI optimizes inventory management and supply routes, reducing costs and improving delivery and stock replenishment times.
Financial analysis and reporting are undergoing a revolution due to AI’s capabilities. Automated accounting processes reduce human error and increase efficiency by generating real-time financial reports, allowing managers to respond swiftly to financial developments. Surveys indicate that nearly three-quarters of companies already utilize AI in some capacity for financial reporting, with this trend expected to grow.
In human resource management, AI assists in automating recruitment processes, evaluating employee performance, and identifying training needs. While AI can significantly reduce the time and cost associated with recruitment and improve candidate selection accuracy, it also poses risks such as digital discrimination. The presentation references a case where an AI recruitment tool unintentionally discriminated against female applicants due to biases in the training data, underscoring the importance of addressing ethical considerations.

The presentation delves into the benefits of AI, including increased efficiency, enhanced decision-making, and proactive risk management. However, it also addresses challenges like accountability and transparency. One of the significant concerns is the “black box” phenomenon, where the inner workings of AI algorithms are not transparent, making it difficult to understand how decisions are made. This lack of transparency complicates the detection and proof of violations, particularly in cases of discrimination.
Liability issues are complex and depend on the legal environment, corporate structure, and the specific AI system used. Questions arise regarding who is responsible for decisions made by AI – the developers, the service providers, or the employees utilizing the AI systems. Ethical issues, such as bias and data distortion, can lead to unfair decisions, highlighting the need for embedded ethics and the implementation of moral values within AI technologies.
The speaker explores the legal assessment of AI, questioning whether AI should be considered merely as software, as an incorporeal entity, or even granted legal personality. While some theories suggest that recognizing AI as a legal person could address liability issues, the speaker concludes that applying existing legal frameworks may suffice without extending legal personality to AI technologies.
In summary, the presentation emphasizes that while AI offers substantial advantages in corporate governance by streamlining processes and enhancing decision-making, it also presents significant challenges. Issues of accountability, transparency, ethical considerations, and legal implications require careful examination. Despite these challenges, the economic potential of AI is immense, with studies forecasting that AI-based technologies could generate substantial global economic growth by 2030.
In his lecture, Dr. Csongor Veress examines the evolution of warfare with a focus on the fourth and fifth generations and their relationship with artificial intelligence (AI). As a PhD holder in security policy and a lecturer at Károli Gáspár Reformed University and Pázmány Péter Catholic University, Dr. Veress brings a nuanced understanding of how modern warfare has been transformed by the integration of AI and digital technologies.
He begins by outlining the generational shifts in warfare according to William Lind’s classification. The first generation of warfare commenced in 1648 with the Peace of Westphalia, marking the point when states assumed the monopoly over warfare from feudal lords. This era was characterized by formation-based combat, clear distinctions between soldiers and civilians, and a primary focus on the concentration of manpower.
The second generation emerged at the end of the 19th century and extended through World War I. It incorporated technological advancements from the Industrial Revolution, such as the telegraph, steam locomotives, automatic rifles, airplanes, submarines, and tanks. The focus shifted to the concentration of firepower, and warfare became more mechanized.
The third generation culminated during World War II and introduced the concept of total war. This generation saw the strategic bombing of civilian areas as a means of demoralization and the advent of nuclear weapons, fundamentally altering the nature of conflict.
Moving into the fourth generation, Dr. Veress notes that scholars debate its exact starting point – some citing the Six-Day War of 1967, others pointing to Russia’s annexation of Crimea in 2014. Regardless of its inception, this generation is marked by hybrid warfare, where the lines between peace and war, and between civilian and soldier, become blurred. Non-state actors play significant roles, and warfare incorporates minimal conventional military force, relying more on political, economic, legal, and technological means to achieve strategic objectives. The distinction between combatants and non-combatants erodes, and warfare is no longer exclusively the domain of nation-states.
He explains that major powers have developed their own doctrines around hybrid warfare. China introduced the concept of “Unrestricted Warfare” in a book of the same name published in 1999 by two colonels of the Chinese People’s Liberation Army. This theory advocates the use of all available means – military and non-military – to achieve strategic goals. Russia’s Gerasimov Doctrine, developed by Colonel General Valery Gerasimov, emphasizes “strategic deterrence” through the coordinated use of military and non-military tools, including political, diplomatic, legal, economic, ideological, and technological methods. The United States refers to this approach as hybrid warfare in its strategic framework.
Dr. Veress then explores the emerging concept of fifth-generation warfare, which is characterized by a largely digitalized battlefield and the significant role of AI as both a tool and a weapon. Biotechnology also plays an important role, although it was not the focus of his discussion. He acknowledges that there is ongoing debate among scholars about whether we are currently in the fourth or fifth generation of warfare, but the critical point is the transformative impact of AI and digital technologies on modern conflict.
Delving into the security policy implications of AI, he identifies several key areas of concern:
1. Military Applications: AI enables the development of autonomous drones, robots, and weapon systems. While the use of autonomous weapons that can make life-and-death decisions without human intervention remains heavily contested under international law, AI can still enhance military effectiveness and potentially reduce human casualties. However, accountability becomes a significant issue: if autonomous weapons malfunction or make erroneous decisions resulting in civilian casualties, it is unclear who would be held responsible – the developers, operators, or commanders.
2. Cyber Operations: AI can process vast amounts of data rapidly, aiding in intelligence and reconnaissance efforts. It enhances defensive capabilities through automatic intrusion detection and risk analysis, but it also poses risks by enabling offensive cyber operations like automated phishing attacks. The dual-use nature of AI in cyberspace complicates efforts to secure networks and protect sensitive information.
3. Information Warfare: AI facilitates the creation and dissemination of disinformation and propaganda. Through the use of deepfake videos and targeted advertising, adversaries can manipulate public perception and undermine trust in institutions. This aspect of hybrid warfare exploits the difficulty in distinguishing between authentic and fabricated content, posing a significant threat to national security and societal stability.
4. Surveillance and Monitoring: AI-powered facial recognition and crowd surveillance technologies raise concerns about privacy and civil liberties. The capability of AI to analyze patterns and predict behaviours from various data sources, such as Wi-Fi signal fluctuations, demonstrates its potential for intrusive surveillance. While these technologies can enhance security, they also risk being misused for unwarranted monitoring and control of populations.
5. Economic Instruments: AI’s impact on the economy is multifaceted. On one hand, it drives innovation and efficiency within corporate environments, as highlighted by previous speakers. On the other hand, it can be weaponized in economic warfare to disrupt financial systems, manipulate markets, or create dependencies. The integration of AI into economic strategies is a component of hybrid warfare, where economic tools are used to achieve geopolitical objectives without direct military confrontation.
In conclusion, Dr. Veress emphasizes the need for international cooperation and regulation to mitigate the risks associated with AI in warfare and security contexts. He underscores that as AI continues to evolve and permeate various aspects of warfare and security, both international law and domestic legal frameworks must adapt to address these new realities.
In the panel discussion moderated by Dr. János Székely, the participants examined the feasibility of establishing an international liability regime for artificial intelligence (AI) to mitigate cross-border harms. Dr. Székely initiated the conversation by highlighting that while civil liability structures are well-established globally, AI introduces complexities in international torts, particularly in military actions, corporate governance, and personal data misuse. He questioned whether a global consensus could be achieved to limit AI-induced harm and what legal philosophies might underpin such a regime, noting that the European Union’s AI Act adopts a risk-based approach that may not be universally applicable.
Dr. Csongor Veress proposed that updating international humanitarian law, including the Geneva Conventions, could serve as a foundation for regulating AI, especially in military and cyber contexts. He acknowledged that while this falls outside his primary expertise, adapting existing frameworks to modern realities is necessary.
Barnabás Székely expressed skepticism about the effectiveness of forthcoming regulations like the AI Liability Directive, pointing out delays and the likelihood of fragmented implementation among EU Member States. He observed that the AI Act excludes military applications, leaving a regulatory gap in high-stakes areas. Székely also doubted the practicality of international agreements on AI, given the difficulty of enforcement and verification, especially concerning military uses, where states can easily obscure their activities.
Dr. Előd Pál highlighted the lack of political will as a significant barrier. He contrasted the pro-innovation regulatory stance of the United States with the European Union’s more restrictive approach, suggesting that these divergent paradigms make a uniform international liability regime unlikely. He noted that even within the EU, the preference for directives over regulations indicates insufficient consensus for stronger, more cohesive policies.
The panel delved into enforcement challenges, with Székely and Pál agreeing that without effective international review mechanisms or sanctions, compliance would be minimal. They discussed how the securitization of economic relations – where economic advantages are framed within security policy – further complicates cooperation. The economic benefits of AI technology create incentives for nations to prioritize competitive advantages over collaborative regulation.
In conclusion, the panelists acknowledged the critical need to address AI’s potential for international harm but recognized substantial obstacles. These include a lack of political will, enforcement difficulties, and conflicting national interests driven by economic and security considerations. The discussion underscored the complexity of forming a multilateral regime to regulate AI effectively on the global stage, given the current geopolitical and economic climate.