Artificial Intelligence (AI) is revolutionizing the way we live. From personalized playlists to chatbots and even rubbish trucks, AI is already present in our homes and on our streets.
A report published this week by the Australian Academy of Technological Sciences and Engineering (ATSE) and the Australian Institute of Machine Learning (AIML) argues that Australia must decide whether to embrace or reject the future of responsible AI. Speaking to the Australian Science Media Centre (AusSMC) earlier this month, AI experts presented the report's findings, which include a compilation of previously unpublished insights from 13 Australian leaders in the field.

"Just as the steam engine fundamentally changed the way people lived and worked, AI is the steam engine of today, if you will," says ATSE Executive Director Kylie Walker. According to Walker, Australia not only has the expertise, industry, and stability to lead AI development, but also the good governance to ensure it is done responsibly and inclusively.

But what does that mean? According to the report, there is growing awareness that AI systems can carry the biases of their creators, as well as of the data used to train them.
One study found that AI image-generation systems depict surgeons as white men 98% of the time, while another revealed that AI-generated content consistently portrays men as strong and competent leaders, while women are often depicted as emotional and ineffective.

As AI is increasingly used to support everyday processes, from automated hiring to medical care, the development of responsible AI should help address our greatest societal challenges, such as inequality, rather than contribute to them, says Professor Shazia Sadiq FTSE from the University of Queensland.

Issues of consent and data ownership are also crucial. "Many current AI systems are trained using data from publicly available sources like Wikipedia," says Sadiq. "This data is often collected without the explicit consent of the individuals who created the content. The creative industries are particularly vulnerable in this regard."
Frequently Asked Questions (FAQ)
1. What is generative artificial intelligence?