
Artificial Intelligence in Military Targeting: Ethical Considerations and the Changing Nature of Warfare

Artificial Intelligence (AI) has been making significant advancements in various aspects of society, including military applications. The recent reports about the Israel Defense Forces (IDF) using an AI system named Habsora (Hebrew for “Gospel”) for targeting purposes in the conflict with Hamas in Gaza have sparked discussions about the implications of AI in warfare.

Military forces are adopting AI systems as force multipliers: tools that extend the capabilities of their troops while reducing the risk to their own personnel. These systems have the potential to make military operations more efficient, to increase the speed and lethality of warfare, and to shift the emphasis from soldiers on the ground to intelligence gathering and remote targeting.

However, the increasing use of AI in warfare raises ethical concerns. When military forces have the ability to kill with little risk to their personnel, will ethical considerations about the nature of war still prevail? Will the growing use of AI further dehumanize the adversaries and create a disconnect between the wars fought and the societies on whose behalf they are waged?

AI’s Influence in Warfare

AI has an impact at all levels of warfare, ranging from intelligence, surveillance, and reconnaissance systems like the IDF’s Habsora, to autonomous weapon systems capable of selecting and engaging targets without human intervention.

These AI systems have the potential to reshape the character of war and to lower the threshold for entering a conflict. As complex and distributed systems, they can also make it harder to signal one's own intentions, or to interpret those of an adversary, as a conflict escalates.

In this context, AI can contribute to misinformation, creating and amplifying dangerous misunderstandings during wartime.

AI systems can amplify the human tendency to over-trust machine recommendations (a tendency hinted at in the very name Habsora, which evokes an unchanging word of God), raising questions about how far autonomous systems should be trusted. The boundaries between AI systems, the other technologies they interact with, and the humans involved are not always clear, and we may not know who or what is the “author” of their outcomes, however objective and rational those outcomes may appear.

The Acceleration of Warfare

One of the fundamental changes that AI is likely to bring about is the acceleration of warfare. This can alter our understanding of deterrence, which assumes that humans are the main actors and sources of intelligence and interactions in warfare.

Military forces and soldiers structure their decision-making through what is known as the “OODA loop” (observe, orient, decide, act). Completing the OODA loop faster than the adversary confers an advantage. The goal is to avoid paralysis through excessive deliberation and instead keep pace with the accelerating tempo of warfare.
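
To make the structure concrete, here is a minimal, purely illustrative sketch of an OODA-style loop in Python. All of the function names and the toy “sensors” are hypothetical assumptions of this example; nothing here reflects any real military system. It only shows why shortening each cycle is where automated data processing is claimed to help.

```python
import time
from dataclasses import dataclass


@dataclass
class Observation:
    """Raw inputs gathered during the 'observe' step (sensor feeds, reports)."""
    data: dict


def observe(sensors) -> Observation:
    # Collect the latest reading from each available source.
    return Observation(data={name: read() for name, read in sensors.items()})


def orient(obs: Observation) -> dict:
    # Fuse the observations with context into a working picture.
    return {"picture": obs.data, "timestamp": time.time()}


def decide(picture: dict) -> str:
    # Choose a course of action from the current picture (trivial toy rule).
    return "hold" if not picture["picture"] else "act"


def act(decision: str) -> None:
    # Execute (here, merely print) the chosen course of action.
    print(f"executing: {decision}")


def ooda_loop(sensors, cycles: int = 3) -> None:
    """Run a fixed number of observe-orient-decide-act cycles.

    The shorter each cycle, the faster the loop turns; speeding this up
    is exactly where automated data processing is claimed to matter.
    """
    for _ in range(cycles):
        act(decide(orient(observe(sensors))))


if __name__ == "__main__":
    # Toy 'sensors' that just return canned values.
    ooda_loop({"radar": lambda: 0, "report": lambda: "clear"})
```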

Thus, the use of AI is often justified by its ability to interpret and synthesize vast amounts of data and to deliver results faster than human cognition allows.

But where is the room for ethical considerations in an increasingly fast-paced, data-centric OODA loop that operates from a safe distance from the battlefield?

The Israeli targeting system serves as an example of this acceleration. A former IDF chief said that human intelligence analysts might identify 50 bombing targets in Gaza per year, whereas Habsora can generate 100 targets a day, along with real-time recommendations on which ones to strike; annualized, that is roughly 700 times the stated human rate.

How does the system generate these targets? It does so through probabilistic reasoning offered by machine learning algorithms.

Machine learning algorithms learn from data by searching for patterns in vast amounts of information, and their success depends on the quality and quantity of the data. They provide recommendations based on probabilities.

Probabilities are based on pattern matching. If a person bears enough similarities to other individuals marked as enemy combatants, they may also be labeled as combatants.
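
As a purely illustrative sketch of this kind of reasoning, the Python below scores a new, unlabelled person by similarity to hand-made, entirely fictional feature vectors, then turns that similarity into a crude pseudo-probability and a label. This is not the IDF's system or any real targeting pipeline; it only shows how resemblance to previously labelled examples becomes a classification.

```python
from math import exp


def similarity(a, b):
    """Gaussian (RBF) similarity: 1.0 for identical vectors, falling toward 0 with distance."""
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return exp(-sq_dist)


# Hypothetical, made-up feature vectors for people already labelled in the training data
# (standing in for things like movement patterns or communications metadata).
labelled_examples = [
    ([0.9, 0.8, 0.7], "combatant"),
    ([0.8, 0.9, 0.6], "combatant"),
    ([0.1, 0.2, 0.1], "civilian"),
    ([0.2, 0.1, 0.3], "civilian"),
]


def combatant_probability(person, examples):
    """Average similarity to each labelled class, crudely normalised into a pseudo-probability."""
    by_class = {"combatant": [], "civilian": []}
    for features, label in examples:
        by_class[label].append(similarity(person, features))
    mean = {label: sum(sims) / len(sims) for label, sims in by_class.items()}
    return mean["combatant"] / (mean["combatant"] + mean["civilian"])


# A new person who merely *resembles* the previously labelled examples.
new_person = [0.7, 0.6, 0.8]
p = combatant_probability(new_person, labelled_examples)
print(f"p(combatant) = {p:.2f} -> labelled {'combatant' if p > 0.5 else 'civilian'}")
```

In this toy example the label rests entirely on resemblance to past examples and on an arbitrary 0.5 threshold; how the training data were labelled, and where that threshold sits, are assumptions of the sketch, not facts about any deployed system.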

The Problem with AI-Enabled Remote Targeting

Some argue that machine learning enables more precise targeting, making it easier to avoid harming innocent people and to use proportionate force. Yet the promise of more precise aerial targeting has failed before, as the high number of declared and undeclared civilian casualties in the global war on terror shows.

Moreover, the distinction between combatants and civilians is rarely clear-cut. Even humans often struggle to distinguish who is a combatant and who is a civilian.

Technology does not change this fundamental truth. Social categories and concepts are often not objective but contested, and specific to a particular time and place. Computer vision combined with classification algorithms, by contrast, works best in predictable environments where concepts are objective, relatively stable, and internally consistent. The battlefield rarely offers such conditions.
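
A minimal sketch of why this matters, assuming NumPy and scikit-learn are available: a classifier trained while a category boundary is stable performs well on data drawn the same way, and degrades once the boundary moves, as contested, time-and-place-specific categories do. The data and the "shift" below are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)


def make_data(n, boundary):
    """Two-feature synthetic points; label 1 if the first feature exceeds `boundary`."""
    X = rng.uniform(0, 1, size=(n, 2))
    y = (X[:, 0] > boundary).astype(int)
    return X, y


# Train while the concept boundary sits at 0.5 and stays put.
X_train, y_train = make_data(500, boundary=0.5)
clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on data drawn the same way (stable concept) ...
X_same, y_same = make_data(500, boundary=0.5)
# ... and on data where the concept itself has moved (a contested, shifted category).
X_shift, y_shift = make_data(500, boundary=0.7)

print("accuracy, stable concept: ", clf.score(X_same, y_same))
print("accuracy, shifted concept:", clf.score(X_shift, y_shift))
```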

Will AI Make War Worse?

We are living in a time of unjust wars, military occupations, severe violations of engagement rules, and an incipient arms race fueled by U.S.-China rivalry. In this context, incorporating AI into warfare may add new complexities that exacerbate rather than prevent harm.

AI systems make it easier for the actors in a war to remain anonymous, and they can render invisible the source of violence or the decision-making process that led to it. At the same time, we may see a growing distance between militaries and their soldiers on one side and the civilians caught in the crossfire on the other, as well as a widening disconnect between the wars being fought and the societies in whose name they are waged.

As AI becomes more prevalent in warfare, military forces will need to develop robust ethical frameworks and guidelines to ensure that AI systems are used responsibly, in accordance with international humanitarian law and the principles of proportionality and distinction.

FAQ:

Q: How is AI being used in military targeting?

A: AI systems are being used at various levels of warfare, from intelligence gathering and surveillance to autonomous weapons that can select and engage targets without human intervention.

Q: What are the ethical considerations surrounding the use of AI in warfare?

A: The use of AI in warfare raises concerns about the dehumanization of adversaries, the potential for misinformation, the accuracy of targeting, and the increasing disconnect between military forces and the societies on whose behalf they are fighting.

Q: Can AI make warfare more efficient?

A: Yes, AI has the potential to increase the speed and efficiency of warfare through its ability to process and analyze vast amounts of data faster than human cognition.

Q: What challenges arise from AI’s role in warfare?

A: The challenges include determining the boundaries of AI systems, grappling with the rapid pace of decision-making, and ensuring ethical considerations are upheld in a data-driven battlefield.
