The Future of Artificial Intelligence: Potential, Risks, and Regulations

Artificial Intelligence (AI) is not just limited to being a conversational bot. Experts from Michigan explain how AI has been actively present in our lives for years, offering their perspectives on its potential and expressing hopes and concerns for the future.

Maggie Makar, an assistant professor of computer science and engineering, focuses on building predictive models that encode causal relationships rather than mere associations. Joyce Chai, a professor of computer science and engineering, develops robotic systems that can understand and act on natural language, much as humans do. Rada Mihalcea, a professor of computer science and engineering, focuses on designing AI that assists human workers; one of her current projects provides feedback to advisors.

These experts share insights about the potential of AI in both physical and cognitive tasks, as well as in detecting irregularities in corporate processes.

However, AI also brings certain risks. Shobita Parthasarathy, the director of the program on Science, Technology, and Public Policy, explains how AI absorbs societal biases and what it might mean if AI continues to repeat them under the guise of objectivity. She emphasizes the need for regulation to ensure that biased AI does not create barriers for marginalized groups, including people of color and LGBTQ+ individuals.

According to Makar, the problem we face today is not an impending catastrophe of conscious computers and killer robots, but real-world violence stemming from radicalization and civil unrest driven by AI algorithms on social media platforms.

Nikola Banovic, an assistant professor of computer science and engineering researching how to build reliable AI, states that it will become increasingly difficult to avoid the negative consequences of AI over time. He highlights that AI is becoming omnipresent in our lives, similar to fossil fuels, and avoiding its negative effects may be as challenging as halting carbon emissions.

Finally, Michael Wellman, chair and professor of the Department of Computer Science and Engineering, explains that our current laws are primarily tailored to human actions and intentions. He recently testified before the Senate Committee on Banking, Housing, and Urban Affairs regarding the regulation of algorithmic financial trading. Who is responsible when an AI chooses harmful or illegal paths to achieve its goals?

These are the challenges faced by the field of AI, regulators, and society as a whole, as AI continues to advance in its capabilities and pervasiveness.

FAQ:

1. What is Artificial Intelligence (AI)?
Artificial Intelligence is a branch of computer science that focuses on developing computer systems capable of performing tasks that typically require human intelligence.

2. What are the advantages of AI?
Artificial Intelligence can help with performing physical and cognitive tasks, detecting irregularities in corporate operations, and providing feedback to human workers.

3. What are the potential risks of AI?
Artificial Intelligence can absorb societal biases and perpetuate them, creating barriers for marginalized groups. Additionally, AI algorithms on social media can contribute to radicalization and violence.

4. How should the use of AI be regulated?
Regulation will be necessary to ensure that AI does not cause harm or discrimination. It is important to develop laws that account for the specific characteristics of AI rather than assuming purely human actors.

5. Who is responsible for the harmful actions of AI?
The question of responsibility for the harmful actions of AI remains open. Adequate regulation and protection will require working out how accountability is shared among individuals, companies, and the algorithms themselves.

Source: www.example.com