The Federal and State Governments have recently unveiled a national framework for the routine use of generative artificial intelligence (AI) in schools. This framework aims to provide schools with guidance on incorporating generative AI algorithms, which have the capability to create new content, into classrooms across the country.
This long-awaited framework comes a year after the launch of ChatGPT, and schools have had diverse reactions to this technology, ranging from outright bans to attempts to integrate it into the teaching process.
What does the framework entail and what are its limitations?
The framework, approved by education ministers at the federal and state levels in October, was released to the public last week.
Designed to assist schools in the safe and effective use of generative AI, it acknowledges the “great potential for aiding teaching and reducing administrative burden in Australian schools.” However, it also highlights potential risks and consequences, including:
– Errors and biases in the content produced by generative AI algorithms.
– Misuse of personal or confidential information.
– Inappropriate applications of generative AI, such as discrimination against individuals or groups, or compromising the integrity of student assessment.
Federal Education Minister Jason Clare also emphasized that “schools should not use generative AI products that sell student data.”
What does the framework include?
The framework itself is concise, spanning only two pages, and comprises six key principles and 25 guidelines. The six principles are:
1. Teaching and learning, including educating students about how these tools function, along with their potential limitations and biases.
2. Human and social wellbeing, encompassing the use of these tools in ways that avoid exacerbating biases.
3. Transparency, including the obligation to inform stakeholders about the use and impact of generative AI tools.
4. Equity, ensuring access for diverse groups and individuals from vulnerable backgrounds.
5. Accountability, including the testing of tools prior to implementation.
6. Privacy, security, and safety, involving the use of robust cybersecurity measures.
The framework will undergo a review every 12 months.
The need for caution
The framework plays a crucial role in recognizing the opportunities presented by this technology while emphasizing the importance of wellbeing, privacy, security, and safety.
However, some of these concepts are far less clear-cut than the framework suggests. As experts in generative AI, we have shifted from optimism to a more cautious stance on this technology over the past year. As UNESCO recently cautioned:
“The speed at which generative AI technologies are being integrated into educational systems without scrutiny, rules, or regulations is astounding.”
The framework places a considerable burden on schools and teachers, requiring them to fulfill responsibilities for which they may lack the qualifications, the time, or the financial support.
For instance, the framework demands “explainability” – yet even AI model developers struggle to fully explain how these models work.
Furthermore, the framework expects schools to assess algorithmic risks, design appropriate educational content, review assessment practices, consult with communities, educate themselves about intellectual property rights, and become experts in the use of generative AI.
It remains unclear how all of this will be achievable within existing obligations, which are already known to be overwhelming. This is particularly challenging considering the complex and controversial nature of generative AI ethics. We also know that the technology is imperfect and prone to errors.
Below, we address some key questions about the framework, including five areas we believe all future iterations must cover:
1. What is generative AI?
Generative AI refers to algorithms that can create new content, such as text, images, or other creative output, in response to a prompt, rather than simply retrieving or classifying existing material.
2. What is the purpose of the national framework for generative AI in schools?
The purpose of the framework is to provide guidance for schools on how to use generative AI in a safe and effective manner. It aims to address potential risks and consequences associated with the use of this technology in classrooms, such as algorithmic bias, privacy concerns, and inappropriate use.
3. How long is the framework and what does it include?
The framework consists of two pages and includes six overarching principles and 25 guiding statements. The principles cover teaching and learning, human and social wellbeing, transparency, equity, accountability, and privacy, security, and safety.
4. Will the framework be reviewed regularly?
Yes, the framework will be reviewed every 12 months to ensure its relevance and effectiveness in the rapidly evolving field of generative AI.
5. What are some areas that need to be addressed in future versions of the framework?
Future versions of the framework should include:
– A more honest discussion of the biases present in generative AI.
– More research on the benefits and drawbacks of using this technology in education.
– Clarification of which tasks are appropriate for AI.
– Transparency about teachers’ own use of generative AI.
– Safeguarding the expertise and roles of teachers in the face of increasing AI use in education.
6. Is there evidence of generative AI improving teaching and learning?
So far, there is little research demonstrating the benefits of generative AI use in education. While the technology holds potential, there is a need for more evidence to support its effectiveness in enhancing educational outcomes. Additionally, studies have shown potential harms and limitations of algorithms, such as narrowing the scope of student writing and privileging certain voices over others.