Regulating Artificial Intelligence: Lessons from the Military

In a bustling restaurant in the heart of Anytown, USA, an overwhelmed manager turns to artificial intelligence (AI) to help him cope with staff shortages and keep up with customer service. In another part of town, a weary newspaper editor uses AI to generate news articles. Both are part of the growing number of people relying on AI for their everyday business needs. But what happens when the technology goes wrong or creates risks we haven’t fully considered?

The current conversation about AI regulation focuses primarily on the handful of powerful companies producing AI. An impressive new executive order on AI also targets developers and government users. It’s time to shift our focus to how to regulate (and, honestly, assist) the millions of smaller players and individuals who will increasingly use this technology. As we navigate this uncharted territory, we can find guidance from an unexpected source: the US military.

Every day, the US military entrusts the most powerful weapons in the world to hundreds of thousands of young service members stationed worldwide, most of them under the age of 30. The military mitigates the risks of placing these powerful technologies in the hands of young and often inexperienced users through a three-tiered approach: technology regulation, user qualifications, and unit qualifications. The government has the opportunity to do the same with AI.

Depending on the task, military personnel must successfully complete courses, apprenticeships, and oral exams before they are authorized to operate a ship, fire a weapon, or perform maintenance tasks. Each qualification reflects how technologically complex the system is, how deadly it can be, and how much decision-making authority the user will have. Moreover, the military backs these qualifications with standard operating procedures and checklists that ensure consistent and safe behavior, a practice that other fields, such as surgery, have emulated.

Risk mitigation in the military involves not only individuals but also units. “Carrier qualifications,” for instance, are not awarded to individual pilots alone; they must be earned jointly by the aircraft carrier and its assigned air wing. Unit qualifications emphasize teamwork, collective responsibility, and the integrated functioning of different roles in a specific context. This ensures that each team member is not only skilled at their own tasks but also fully understands their duties in the broader context.

Finally, in addition to qualifications and checklists, the military separates and delegates authority to different individuals based on their tasks, level of responsibility, and seniority. For example, a surface warfare officer with the authority to fire weapons must still seek permission from the ship’s captain to fire certain types of weapons. This control ensures that specific categories of risk, such as those that could escalate a conflict or deplete the stockpile of particularly important weapons, are addressed by individuals with the proper authorization and awareness.

These military risk management strategies should inspire conversations about AI regulation because similar approaches have worked in other, non-military communities. Qualifications, standard operating procedures (SOPs), and delineated authorities already complement technical and engineering regulations in sectors such as healthcare, finance, and policing. While the military has a unique ability to implement such qualification regimes, these structures can also be applied effectively in civilian sectors. Their adoption could be encouraged by demonstrating the business value of these tools, through government regulation, or through economic incentives.

The main advantage of a qualification system would be limiting access to potentially dangerous AI systems to verified and trained users. Verification helps reduce the risk posed by malicious individuals, such as those who would use AI to create texts or videos impersonating public figures, or to harass or stalk private individuals. Training helps reduce the risk that well-intentioned individuals who don’t yet fully understand these technologies will misuse them, like the lawyer who used ChatGPT to prepare a legal memorandum.

To further enhance the accountability of individual users, certain qualifications, such as those for designing biological agents, could require users to hold a unique identifier, similar to a national identification number or driver’s license number. This would enable professional organizations, courts, and law enforcement agencies to effectively track and address cases of AI misuse, adding a mechanism of accountability that our legal system understands well.

Supplementing individual qualifications with organizational qualifications can create even stronger, multi-layered oversight for high-performing AI systems that carry out mission-critical functions. This underscores that AI safety is not just an individual responsibility but also an organizational one. A qualification approach would also support the development of delineated authorities that restrict particularly important decisions to those who are not only qualified but also specifically authorized, similar to how the Securities and Exchange Commission (SEC) regulates who can participate in high-frequency trading. In other words, it won’t be enough for a user simply to know how to use AI; they must also know when it is appropriate and under whose authority.

Qualifications and checklists can have secondary benefits as well. Designing, implementing, and monitoring them would create jobs. State and federal governments could become qualification-granting agencies, and professional associations could lead safety research and the development of accompanying standards. Even AI companies could benefit economically by supporting qualification training programs for their systems.

The idea of implementing a qualification or licensing system for AI use presents an appealing but complex set of possibilities and challenges. Such a framework could significantly improve safety and accountability, but it also comes with potential downsides, the first being barriers to accessing these tools and a less diverse field of practitioners. Qualification systems also carry bureaucratic burdens, and there is a risk that different jurisdictions will create divergent qualifications that unnecessarily hinder innovation and an efficient global AI market. And, of course, qualifications may only hinder, rather than prevent, malicious actors intent on causing harm.

These drawbacks must be thoroughly considered and addressed in the design of any AI regulation framework. Ongoing research, industry collaboration, and public engagement will also be crucial in refining and implementing such a system. By drawing inspiration from the military’s risk management strategies and adapting them to a civilian context, we can move toward a balanced and effective regulatory approach for the growing use of AI across sectors of society.