Artificial Intelligence (AI) is rapidly becoming a crucial tool in many sectors, including government. The recent Executive Order signed by President Biden on the Safe, Secure, and Trustworthy Development and Use of AI provides comprehensive guidelines for the use of AI in areas such as privacy, content verification, and the immigration of tech workers. It introduces key frameworks for deploying AI and takes important steps toward safeguarding people's rights.
However, it is important to acknowledge the inherent limitations of executive actions. Unlike congressional legislation, executive orders cannot establish new agencies or grant new regulatory authority over private companies. Furthermore, they can be reversed by subsequent presidents. A draft memorandum from the Office of Management and Budget (OMB), released two days after the Executive Order, provides additional guidance to federal agencies on managing risks and meeting accountability requirements in AI innovation. Taken together, these two directives offer one of the most detailed pictures to date of how governments should establish rules and guidelines for AI.
It is worth noting that these actions primarily address current harms rather than existential risks, which makes them a useful guide for policymakers focused on the everyday concerns of their constituents. Given the inherent limitations of the Executive Order, the next step will be for other policymakers – from Congress to state governments – to use these documents as a blueprint for future measures that demand accountability in the use of AI.
When analyzing the Executive Order and the OMB memorandum together in terms of accountability guidelines, the following key points emerge:
Both documents prescribe binding accountability measures. Rather than relying on voluntary standards and company commitments, the Executive Order directs federal agencies to enforce protections against algorithmic discrimination. It also requires companies developing next-generation AI models to report regularly to the government to ensure compliance with specific security, evaluation, and reporting procedures. The draft OMB memorandum, for its part, establishes a minimum level of safety and rights protection that agencies themselves must meet before using such technology. It provides clear definitions of AI that impacts safety and rights and includes lists of specific systems that should be presumed safety- or rights-impacting. The focus is on building impact-based protective frameworks to guard against potential harm from systems such as hiring algorithms, criminal risk assessments, and AI medical devices.