While artificial intelligence (AI) has been a catalyst for change across industries, a recent study by Metrigy reveals that only 25% of business leaders and 13% of consumers have complete trust in generative AI. The executive order issued by the Biden administration to regulate the development and use of AI may seem like a positive step toward addressing this issue. However, Metrigy's research shows that most business leaders and consumers do not believe government oversight alone can solve the trust problem in AI. Only 30% of business leaders and 22% of consumers say that government regulations would make them more confident in generative AI.
The Metrigy study further shows that IT leaders, customer experience (CX) professionals, and business units view other approaches as more effective for building trust in AI: limiting the data AI can use to generate content (56%) and requiring human oversight of content creation (51%). Consumers likewise favor human control (41%) and limits on AI capabilities (39%). Yet consumers are four times less likely than IT leaders, CX professionals, or business units to have complete trust in AI, underscoring how much work remains to reassure them.
In interviews, most research participants said they would prefer AI governance to be managed by a non-governmental body composed of technology vendors, academic leaders, researchers, IT experts, and government officials working collaboratively. They argue that industry-driven standardization would be more effective than regulation carried out by a dedicated government task force or agency.
Challenges of the Executive Order
Does the executive order provide value? It may offer some assistance, but it will not eliminate risks. In fact, it could potentially do more harm than good by stifling innovation in the United States. Falling behind other countries in AI development due to government bureaucracy would put the country at a disadvantage in various sectors, including defense, healthcare, energy, and technological innovation.
The government is expected to provide more detailed guidance on the order and its implementation through agencies such as the National Institute of Standards and Technology, the Department of Homeland Security, and the Departments of Energy and Commerce. However, that guidance will not arrive quickly: the regulations tied to the order are not due to be adopted for nine months. Given that AI drives new products, services, and applications every minute, the landscape will change faster than the regulations can keep up.
Even once detailed guidelines are established, how enforceable will they be, given that they are not backed by legislation? In the meantime, technology vendors have agreed to take voluntary precautionary measures in AI model development. The real challenge, however, lies with "bad actors" who will not abide by the rules.
Frequently Asked Questions (FAQ)
1. What is generative AI?
Generative AI refers to the ability of an artificial intelligence system to generate new content, such as text, images, or audio, based on training data.
2. How can trust in AI be enhanced?
According to the Metrigy study, trust in AI can be increased by limiting the data AI can use for content creation and by implementing human oversight of the process.
3. Who should be responsible for AI governance?
The research participants expressed a preference for a non-governmental organization consisting of various stakeholders, including technology vendors, academic leaders, researchers, IT experts, and government officials, to collectively manage AI governance.
4. What challenges does the executive order face?
The executive order may not provide a comprehensive solution to trust issues in AI, and its regulations may lag behind the rapidly evolving AI landscape. Additionally, enforcing guidelines without legislative backing and dealing with non-compliant entities pose challenges.