Contents
- Commencement
- Bills
- Motions
- Parliamentary Procedure
- Parliamentary Committees
- Question Time
- Grievance Debate
- Parliamentary Procedure
- Grievance Debate
- Private Members' Statements
- Bills
- Parliamentary Procedure
- Motions
- Bills
- Estimates Replies
Artificial Intelligence
Mr BROWN (Florey) (14:41): My question is to the Treasurer. Can the Treasurer outline the importance of members setting an example regarding the responsible use of AI?
The Hon. S.C. MULLIGHAN (Lee—Treasurer, Minister for Defence and Space Industries, Minister for Police) (14:42): I am pleased to take this question from the member for Florey, given his keen interest in the application of AI technologies, particularly as it relates to people who are involved in the Public Service. In fact, the member for Florey has been tasked with carrying this forward in the public sector environment in South Australia.
In the recent state budget, we allocated $28 million towards the appropriate and responsible rollout of AI across public sector agencies in a way that involves careful trialling, the training of the people who would use it so that they know how it can be used responsibly, and the opportunity to make sure that it is used only for purposes that enhance productivity while maintaining the high standards of integrity and accuracy expected within the public sector.
Just as an example, I noted the response that was given by the opposition yesterday to the provision of bogus references and source materials by the Hon. Frank Pangallo to a select committee. Yesterday, the story was 'it was just an administrative error'. I thought, 'I have heard those words before: administrative error.' That is the same excuse that the then government used in the last term of the parliament to justify dozens and dozens of bogus, erroneous accommodation allowance claims: 'administrative error'. Now I know everyone is down on Frank this week, and rightly so, but at least he has got the team line right: 'administrative error'.
It is pretty apparent to most people who have had even a passing experience of using some of the artificial intelligence tools now widely available to members of the community that care needs to be taken. Even if you search in Google, 'What are the risks of using generative AI for the provision of information?', Google Gemini itself will set out the risks of doing so, and it talks in particular about AI hallucination, including fabricated facts such as a chatbot citing a non-existent study. Just one entry in a Google search bar can produce that. Seconds of effort is all that is required to guard oneself against the risks of using this technology.
I must say today we have moved on from administrative error and unfortunately we have had the unedifying episode of the hunter becoming the hunted—the tabloid journalist fleeing down the corridor away from the media, seeing someone else's foot put in his door jamb in an effort to just secure an accurate explanation about what actually went wrong.
I think it is clear that we have all learnt something this week. There is a lot of work to do for those opposite in making sure they can use complex pieces of technology, like a Google search bar, and in making sure that accurate information is provided to the parliament, in whichever chamber or whichever committee it is required to be provided.