House of Assembly - Fifty-Fifth Parliament, First Session (55-1)
2025-05-14

Contents

Summary Offences (Humiliating, Degrading or Invasive Depictions) Amendment Bill

Second Reading

Mr BROWN (Florey) (10:37): I move:

That this bill be now read a second time.

This bill proposes to make amendments to the Summary Offences Act 1953 to ensure that our state's laws appropriately reflect and respond to the contemporary landscape—in this case, in relation to humiliating, degrading or invasive material that has been generated through the use of AI technology.

Many people refer to such material as deepfakes. Members are likely familiar with the term, but for the benefit of any who are not, 'deepfake' generally refers to an image, video or audio of a real person that is used to create a realistic depiction of that person doing or saying something that they did not, in fact, do or say. Deepfake can also refer to wholly AI-generated depictions of persons who do not exist in real life.

Deepfake technology can be used in any number of ways. A particularly sinister type of deepfake material is sexually explicit deepfakes: that is to say, pornographic or otherwise prurient material. Such material is generally produced without the consent of the person that it appears to depict. Humiliating, degrading or invasive deepfake material has the potential to cause very significant negative impacts for a victim, including but certainly not limited to significant emotional, psychological, reputational and economic harm.

The Malinauskas government made a commitment to legislate to ban the creation and distribution of sexually explicit deepfakes, including where such content has been wholly generated by AI. The passage of this bill will serve to fulfil that commitment. The Malinauskas government has been pleased to have the opportunity to collaborate with the Hon. Connie Bonaros in the other place on this bill. I recognise and commend Ms Bonaros for her dedicated efforts in pursuing these important reforms.

One reason it is important to pursue reform in this area is that AI-generated content broadly, and deepfake material specifically, is being produced with increasing frequency and ease and, unfortunately, the incidence of explicit deepfake material is increasing. Australia's eSafety Commissioner, Julie Inman Grant, reported in July last year that the presence of explicit deepfakes on the internet has increased by over 500 per cent year on year since 2019.

For the benefit of anyone who is not familiar with the current landscape around these technologies, in 2025 it is unequivocally the case that a person does not need to possess high-level technical or computer skills in order to create AI-generated content, including content that is humiliating, degrading or invasive. On the contrary, it has become relatively accessible to do so. Indeed, provided a person seeking to create such content does not mind that the material is fairly rudimentary in nature, all that person really needs to possess is malice.

Very reasonably, there is a substantial level of concern in the community about sexually explicit deepfake material and its potential to cause harm. It is widely understood that such content can readily be used by deplorable people to humiliate, degrade, harass, intimidate, threaten, blackmail or extort victims who become the subject of deepfake material. While it is the case that any person can become a victim, it is worth recognising that in practice the victims tend, overwhelmingly, to be women and girls.

The significant escalation in the prevalence of sexually explicit deepfake material that we have seen over the past few years merits strong action by government to make sure that our legislation is appropriately responsive and fit for purpose. Part 5A of South Australia's Summary Offences Act currently includes a number of filming and image-based offences that make it unlawful to distribute a humiliating, degrading or invasive image of a real person that has been edited or altered by digital technology, including deepfakes. However, it is not clear whether those offences would be sufficient to capture the creation or distribution of a simulated depiction of a real person that has been wholly generated by AI technology.

To address this potential deficiency, the bill proposes to create new offences in relation to the creation and non-consensual distribution of a humiliating, degrading or invasive depiction of a simulated person, namely: the creation of a humiliating or degrading depiction of a simulated person; the creation of an invasive depiction of a simulated person; the distribution of a humiliating or degrading depiction of a simulated person; the distribution of an invasive depiction of a simulated person; a threat to distribute a humiliating or degrading depiction of a simulated person; and a threat to distribute an invasive depiction of a simulated person.

Notably, for the purposes of the legislation, a simulated person means a person who is depicted in artificially generated content that either purports to be a depiction of a particular real person or so closely resembles a particular real person that a reasonable person who was familiar with that real person would consider it likely to be a depiction of the real person. This means that a claim of merely coincidental similarity to a particular real person is not intended to constitute a defence to any of the offences.

The bill does provide that it is a defence to establish that each real person shown in the depiction gave their own written consent for the content to be created or distributed. This apparent consent will not, however, be effective for the purposes of any of the offences where it has been given by a person who is under the age of 17 or has a cognitive impairment, or where consent has been obtained from the person by duress or by deception.

Having served as the Chair of the Select Committee on Artificial Intelligence, I want to mention that these legislative efforts are broadly in line with recommendation 14 of the report of the committee. The cross-partisan support that this bill attracted in the other place is a positive signal that all sides of politics in South Australia are in broad agreement in relation to the need for our laws to adapt to the rapidly changing landscape of AI and digital technologies, to protect our community.

AI and its virtually boundless range of applications, both beneficial and detrimental in their impact, are part of the world in which we now live. That fact will not change, and nor should it. There are enormous benefits to be realised from AI for our community and our state, but we must make sure that we do all we can to manage and mitigate the risks. As the pace of development and the sophistication and complexity of AI technologies continue to gain momentum, we need to take measures to ensure that we are legislating to respond to AI's challenges in addition to acting to make the most of its opportunities. Our laws must maintain fitness for purpose in order to be effective both in deterring and in appropriately penalising the misuse of AI.

This bill intends to make it very clear to the South Australian community that creating, non-consensually distributing or threatening to distribute humiliating, degrading or invasive depictions of a person, whether such content has been altered or wholly generated by AI, is unlawful and will not be tolerated. I am pleased to commend the bill to the house and I urge members to support it.

Mr BATTY (Bragg) (10:43): I rise on behalf of the opposition to make a brief contribution to the Summary Offences (Humiliating, Degrading or Invasive Depictions) Amendment Bill 2024 and to indicate that the opposition will be supporting this bill, just as we did in the Legislative Council.

I am really pleased that we could use private members' business today in the way that I think it is intended, which is indeed to make laws. I commend the new Assistant Minister for Artificial Intelligence for having the bravery to debate an issue in private members' business today. If only other ministers opposite were just as enterprising and brave, we could have had today stronger protections for police officers who—

The Hon. A. KOUTSANTONIS: Point of order, sir: my young friend is reflecting on votes of the house, which is inappropriate.

The SPEAKER: Yes, perhaps if the member for Bragg sticks to the response on behalf of the opposition.

Mr BATTY: Quite right. It is just another example of the leader of the house trying to shut down debate in private members' business time, and trying to prevent these sorts of issues being raised, but I will return to this issue.

The SPEAKER: I do not think that is what was happening, member for Bragg. I think you were out of line and you were breaching the long-held standards of this House of Assembly. I will bring you back to my initial ruling, which was that you were to continue and without reflecting on others and votes of the house.

Mr BATTY: I would be delighted to because it is a very good use of private members' time to be debating this very important bill today. It is very important that our legislation stays up to date with evolving technology and new technology, particularly in the artificial intelligence space. We know of the world of good and opportunity that artificial intelligence can bring to our state, to our country and to our communities, but we also know and need to be aware of the potential dangers, particularly when it is weaponised by the wrong people to humiliate, to harass or to exploit.

I think it is incumbent on all of us as legislators to make sure that our laws are keeping pace with that rapidly evolving technological situation and modern methods of abuse, which I think, sadly, so often target women and also young people. I am pleased that we can be legislating today to try to add some extra protections to combat what I think is a particularly insidious form of abuse, one that strikes at dignity, one that strikes at autonomy, and one that strikes at personal safety. We will always, in this place, stand up for victims of such disgraceful acts.

The bill seeks to add some further protections in a number of ways. It creates a number of new offences, including the new section 26G, which makes it an offence to create a humiliating, degrading or invasive depiction of a simulated person. That carries fairly significant penalties of $10,000 or imprisonment for two years. It also creates a new offence in new section 26H, being the distribution of such humiliating, degrading or invasive depictions, carrying a penalty of imprisonment for one year. Finally, new section 26I creates a new offence of threatening to distribute humiliating, degrading or invasive depictions. That carries a penalty of $5,000 or imprisonment for one year.

In supporting this bill, I would also like to acknowledge the work of the Hon. Connie Bonaros in the other place, who has been working at this for some time, and I am pleased now that the government can finally come to the table as well and seek to legislate this today. It is really an issue where we do need to act swiftly. We do need to have decisive action because we owe it to victims of these really disgraceful acts. We owe it to future victims to make sure we have strong protections in place and that is what I hope this parliament can achieve today through passing these new laws.

Mr BROWN (Florey) (10:48): I just want to take this opportunity to thank the member for Bragg for his contribution and to pass on how much I appreciate his kind words about me.