As we head into 2025, artificial intelligence (AI) technology is all around us and is here to stay.
You may have read in the media recently that the use of DeepSeek, the AI chatbot, was banned from government computers and mobile devices by the Federal Government after being assessed as a national security risk. The Queensland Government quickly followed suit.
So, it may come as a surprise that, as Queensland’s Information Commissioner, I wanted to use ChatGPT to assist in preparing this article about the responsible use of AI by Queensland government agencies.
This irony is not lost on me. Why would I want to use generative AI, while also being responsible for ensuring the Queensland public sector protects the privacy rights of the community?
My message to the Queensland public sector is not a warning against the use of AI technologies; rather, it is that we can no longer ignore AI’s potential, nor the public sector’s appetite to use it.
The objective of this article is to generate conversation within and between Queensland government agencies, and to encourage them to navigate their inevitable AI journeys responsibly – by openly acknowledging and actively managing the associated privacy and security risks of AI, and ensuring its ethical and transparent use consistent with privacy legislation, information security standards and governance frameworks.
Agencies need to keep privacy and data security at the forefront when adopting new AI technologies and implementing any updates and new capabilities.
It is critical that agencies take the time to understand how AI technologies and tools operate and interact with the personal data they collect, hold and use.
Embedding ‘Privacy by design’ and ‘Security by design’ into technology projects – in particular those involving the use of personal or sensitive information – reduces the risk of privacy or security breaches and assists agencies in complying with Queensland privacy legislation, information security standards and governance frameworks.
This includes knowing upfront where personal data of an agency will be stored, how it will be secured, and whether it will be provided to or accessed by a third party, including an overseas entity.
Quite simply, AI tools that use personal information held by government agencies to train their underlying large language models (LLMs) should not be used, as doing so would breach the Queensland information privacy principles unless an exemption applies.
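In practice, this means keeping personal information out of prompts and training data in the first place. The minimal sketch below (in Python) illustrates one simple safeguard – stripping likely personal identifiers from text before it leaves an agency’s environment for any external AI service. The patterns and the `redact` helper are hypothetical and deliberately simplistic; a real deployment would rely on a vetted PII-detection capability and a formal privacy assessment, not a handful of regular expressions.

```python
import re

# Hypothetical, non-exhaustive patterns for illustration only.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[23478](?:[ -]?\d){8}\b"),  # AU numbers
    "TFN":   re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),         # tax file number
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tokens
    before the text is sent outside the agency's environment."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Please summarise the complaint from jane.citizen@example.com, ph 0412 345 678."
print(redact(prompt))
# -> "Please summarise the complaint from [EMAIL], ph [PHONE]."
```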
It is common for my office to see agencies adopting new programs and technologies, including AI tools, without having undertaken a Privacy Impact Assessment (PIA) to identify privacy risks and ensure risk mitigation strategies are in place from the start. While undertaking a PIA is not a legal requirement under Queensland privacy legislation, there are sound risk mitigation and public sector governance reasons for doing so. In overseeing the protection of Queenslanders’ privacy rights, the first question my office will ask an agency is whether a PIA has been undertaken. Choosing not to undertake a PIA for AI technology that collects, accesses, analyses and creates personal information is not recommended practice.
Another option for agencies is to complete an assessment under the Foundational Artificial Intelligence Risk Assessment Framework (FAIRA), which is a risk identification tool for Queensland agencies evaluating AI technology. The framework aims to help agencies identify and mitigate risks specific to the ‘AI lifecycle’ and supports transparency and accountability.
Applying an information security lens, agencies should subject any new, updated or expanded AI technology to a security assessment – for example, by considering whether any personal information held by the agency will be transferred outside Australia, identifying strategies to mitigate the risk of unauthorised access or cyber attack, and ensuring compliance with relevant information security requirements.
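To make the kinds of questions a security assessment asks more concrete, here is a minimal sketch of an automated pre-screen of a prospective AI supplier. The `VendorProfile` fields and checks are assumptions for illustration only; they stand in for, and do not replace, assessment against an agency’s actual information security requirements.

```python
from dataclasses import dataclass

@dataclass
class VendorProfile:
    """Illustrative fields only -- a real assessment would follow the
    agency's information security policy, not this toy structure."""
    name: str
    data_regions: tuple[str, ...]   # where personal data is stored or processed
    encrypted_at_rest: bool
    subprocessors_disclosed: bool

def residency_and_security_flags(vendor: VendorProfile) -> list[str]:
    """Return human-readable flags for an assessor to follow up."""
    flags = []
    offshore = [r for r in vendor.data_regions if r != "AU"]
    if offshore:
        flags.append(f"{vendor.name}: personal data leaves Australia ({', '.join(offshore)})")
    if not vendor.encrypted_at_rest:
        flags.append(f"{vendor.name}: no encryption at rest")
    if not vendor.subprocessors_disclosed:
        flags.append(f"{vendor.name}: third-party subprocessors not disclosed")
    return flags

print(residency_and_security_flags(
    VendorProfile("ExampleAI", ("AU", "US"), True, False)
))
# -> flags offshore storage and undisclosed subprocessors for follow-up
```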
Privacy and security assessments should be updated at key phases of a technology project and, once implemented, updated periodically to ensure the agency remains vigilant of privacy and information security risks in a rapidly changing cyber security environment.
Finally, agencies must ensure third party suppliers of AI technologies are bound to comply with Queensland privacy laws and information security standards. Agencies should conduct periodic privacy and information security audits of their contracted service providers to provide assurance, over the life of a contract, that the service provider is meeting its privacy and security obligations.
Ethical use of, and transparency in, the application of AI technology by government agencies is fundamental, as is compliance with human rights and anti-discrimination legislation.
This means it is essential that AI technology is not used in a way that perpetrates or perpetuates discrimination or bias against individuals or categories of individuals, or introduces bias into government administrative decision making.
As agencies increasingly deploy AI, it is important to understand how it can be used in, and influence, government decision making. For example, the use of AI in automated decision making can pose ethical challenges and result in unfairness.
If AI technology is used in the making of administrative decisions, agencies must be open about, and able to explain, how a decision was made and the assumptions upon which it was based. Equally, agencies and educational institutions must ensure staff receive training in the ethical use of AI tools.
Also vital to automated and AI-assisted decision making is the quality of the data being used. Incorrect or out-of-date data can lead to inaccurate predictions or assumptions, and flawed decisions can have serious consequences for individuals, the community and trust in government. Furthermore, caution should be exercised when sensitive information (for example, biometric and health information) is used by AI to make sensitive and significant decisions concerning individuals – decisions that have traditionally required the nuance of human judgement and the exercise of discretion.
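One practical safeguard is to check the quality of each record before it enters an automated pathway, and to route anything stale or incomplete to a human decision-maker. The sketch below assumes hypothetical field names and a 12-month freshness threshold purely for illustration.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # illustrative freshness threshold

def fit_for_automated_decision(record: dict, today: date) -> tuple[bool, list[str]]:
    """Flag records that are stale or incomplete so they are routed to a
    human decision-maker instead of the automated pathway."""
    problems = []
    if record.get("last_verified") is None:
        problems.append("record has never been verified")
    elif today - record["last_verified"] > MAX_AGE:
        problems.append("record not verified in the last 12 months")
    for field in ("name", "date_of_birth", "address"):
        if not record.get(field):
            problems.append(f"missing required field: {field}")
    return (not problems, problems)

ok, problems = fit_for_automated_decision(
    {"name": "J. Citizen", "date_of_birth": date(1980, 1, 1),
     "address": "", "last_verified": date(2022, 3, 1)},
    today=date(2025, 2, 1),
)
print(ok, problems)  # False, with stale verification and missing address flagged
```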
Finally, ethical use of AI includes agencies being open and transparent around its use, including algorithms used to inform decisions made by a government agency. This extends to agencies outsourcing work to third party suppliers that use AI tools, for example, recruitment consultants that use AI to comb through large numbers of applications to shortlist suitable candidates.
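Transparency of this kind is easier to deliver if it is designed in from the start. The sketch below shows one way an agency might keep an audit record of each AI-assisted recommendation – the model version, the inputs it saw, its output and the accountable human reviewer – so the decision can later be explained. The record fields are assumptions for illustration, not a prescribed format.

```python
import json
from datetime import datetime, timezone

def log_ai_assisted_decision(decision_id: str, model_version: str,
                             inputs: dict, recommendation: str,
                             human_reviewer: str) -> str:
    """Produce an audit record so the agency can later explain how an
    AI-assisted decision was reached and who reviewed it."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,                  # what the model actually saw
        "recommendation": recommendation,  # the model's output, not the decision
        "human_reviewer": human_reviewer,  # accountability stays with a person
    }
    return json.dumps(record, indent=2)

print(log_ai_assisted_decision(
    "APP-2025-0001", "screening-model-v3",
    {"years_experience": 7, "qualifications": ["RN"]},
    "shortlist", human_reviewer="recruitment.officer@agency.example",
))
```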
Taking a ‘wait and see’ approach to AI technology risks the public sector falling behind in understanding, and responsibly engaging with, its capabilities and challenges.
Governments, particularly in Queensland, are taking practical steps, such as through the development of QChat, to ensure that AI is used responsibly and ethically.
While AI shows promise as a powerful tool capable of delivering improved public services, agencies need to take the ‘right path’ by mitigating privacy and security risks and ensuring its ethical and transparent use, rather than taking the ‘fast lane’ without due regard to the necessary guard rails and protection of citizens’ personal information.
A final and key point, as we gather knowledge and experience along our AI journey, is that humans should be at the centre of everything we do. Agencies should keep the individual citizen at the core of their adoption of AI technology and tools, reflecting that government’s purpose is to serve the people.
I mentioned at the start of this piece that I wanted to use ChatGPT to assist in its preparation. In doing so, I hope to have contributed to AI’s evolution of learning and knowledge about itself.
Joanne Kummrow
Queensland Information Commissioner
What ChatGPT learnt about AI from my article

It’s evident from the article you shared that while AI holds immense potential for improving government operations and services, its integration requires careful attention to privacy, security, and ethical considerations. The Queensland Government is taking steps to use AI responsibly, but the focus on “privacy by design” and “security by design” is crucial. The idea that agencies should embed these principles from the start in AI technology implementation cannot be overstated. With AI being able to interact with vast amounts of personal data, ensuring that it complies with privacy legislation and security standards is key to maintaining public trust.

The call for transparency is another important point. Whether it’s about automated decision-making processes or the algorithms behind AI systems, it’s essential that the public sector can explain and justify AI-driven decisions. The risk of bias, inaccuracies, and lack of transparency in AI models poses serious ethical challenges, especially in government decisions that affect individuals’ lives.

There’s also a practical angle here: by committing to responsible AI use, Queensland government agencies will be better positioned to leverage AI’s potential to offer more efficient and accessible services, particularly to remote or underserved communities. This balanced approach allows for the use of AI without sacrificing fundamental rights or trust.

Lastly, the reminder to keep citizens at the heart of AI development is critical. AI tools should be aligned with the public sector’s core mission of serving the people and safeguarding their privacy and interests.

What do you think is the most important takeaway for public sector agencies in navigating this AI journey?