Legislative Council - Fifty-Fifth Parliament, First Session (55-1)
Wednesday, 30 April 2025

Contents

Summary Offences (Invasive Images and Depictions) Amendment Bill

Second Reading

Adjourned debate on second reading.

(Continued from 19 June 2024.)

The Hon. J.S. LEE (17:42): I rise today to speak in support of the Summary Offences (Invasive Images and Depictions) Amendment Bill 2024. I wish to thank the Hon. Connie Bonaros for bringing this bill to the chamber and for her diligent work negotiating with the government on subsequent amendments that greatly strengthen the efficacy of the proposed legislation.

The bill will create new offences related to the creation, distribution and threat to distribute artificially generated invasive images and depictions. Commonly known as deepfakes, AI-generated depictions may draw on real photos, video or audio of a real person to create a realistic-looking but false image or video of that person—how disgusting. Most often, such deepfakes are invasive and sexually explicit depictions that seem to portray a real person doing or saying something that they did not actually do, which is pretty scary.

Rapid advances in AI capability have seen an explosion of explicit deepfakes on the internet, with authorities estimating growth of 550 per cent year on year since 2019. According to Australia's eSafety Commissioner, pornographic videos make up 98 per cent of deepfake material currently online, and 99 per cent of that imagery is of women and girls. Deepfakes can be almost impossible to detect: even detectors specifically designed to analyse whether images, video and other media have been artificially manipulated or fabricated entirely struggle to tell the difference between what is real and what is fake.

With AI programs becoming a common part of our daily lives, creating deepfakes has become easier than ever before, and our laws must keep pace. We have seen deepfakes used to create child exploitation material, to create pornographic material without a person's consent, to create revenge porn, to bully, to blackmail, to spread misinformation and to destroy reputations. While there are a number of laws that may apply to deepfakes in some circumstances, depending on how they are created or used, this bill will ensure there is no doubt that creating, distributing and threatening to distribute artificially generated invasive images and depictions without consent is unlawful.

The bill seeks to double the existing penalties for indecent filming or sexting offences to match the new deepfake offences. Like the existing penalties, the new offences would have a higher penalty if the person depicted is under the age of 17 years. I note that the honourable member has also filed a series of amendments to include offences for creating a humiliating or degrading depiction of a person, such as content depicting an assault or other act of violence done by or against a person, or an act that would be considered humiliating or degrading to the real person.

There is a test against generally accepted community standards, so something that would be considered to cause only minor or moderate embarrassment would not be captured by this legislation—it is pretty commonsense stuff. It is important that these amended offences are included. Humiliating and degrading deepfake content can be just as damaging to a victim's reputation and personal and professional relationships as can sexually explicit content.

The idea of someone maliciously spreading an awful image or video that looks like you or someone you love is terrible and nauseating. It can have long-term psychological impacts, causing shame, anxiety and depression. We have all seen too many cases of the tragic impact it can have, particularly on vulnerable young people. I also mention that, for multicultural communities, if any shame or lies are told about them, it creates lots of taboos and damage, not just for the person but for the families and the communities.

We must ensure that our laws prevent the proliferation of harmful deepfake content and protect vulnerable community members from being threatened and exploited in such a way. I commend the Hon. Connie Bonaros for her interest, passion and diligent and thoughtful work on this bill. With those remarks, I commend the bill to the chamber.

The Hon. R.A. SIMMS (17:48): I rise to speak in favour of this bill on behalf of the Greens, and in so doing I acknowledge the leadership of the Hon. Connie Bonaros. She is very passionate about this area and has been pushing the parliament to deal with this. We are in a situation where technology has developed at a pace that has been out of step with legislation, and legislators like the Hon. Connie Bonaros have played a very important role in making sure that we pause and take note of those advances in technology and ensure that vulnerable people, in particular children, are not falling prey to this technology. I thank her for her leadership in this space.

Artificial intelligence has enormous potential benefits, but it also has the potential to harm society, the economy and our personal lives. Artificial intelligence technology has crossed a threshold with the capability to make people look and sound like other people. A deepfake is fabricated, hyper-realistic digital media, including video, image and audio content. Not only has this technology created confusion, scepticism and the spread of misinformation—and we have certainly seen this particularly in other jurisdictions in the context of election campaigns—but deepfakes also pose a threat to privacy, security and psychological wellbeing.

Manipulation of images is not new, but over recent decades, digital recording and editing techniques have made it far easier to produce fake visual and audio content not just of humans but of animals, machines and even inanimate objects. Advances in artificial intelligence (AI) and machine learning have taken the technology even further, allowing it to rapidly generate content that is extremely realistic, almost impossible to detect with the naked eye and very difficult to debunk. This is why the resulting photos, videos and sound files are called deepfakes.

To generate convincing content, deepfake technology often requires only a small amount of genuine data, images, footage or sound recordings. Indeed, the field is evolving so rapidly that deepfake content can be generated without the need for any human supervision at all. The possibilities for misuse of this technology are growing exponentially as digital distribution platforms become more publicly accessible and the tools to create deepfakes become relatively cheap, user friendly and mainstream.

Deepfakes have the potential to cause significant damage. They have been used to create fake news, false pornographic videos and malicious hoaxes, usually targeting well-known people such as politicians and celebrities. Potentially, deepfakes can be used as a tool for identity theft, extortion, sexual exploitation, reputational damage, ridicule, intimidation and harassment. Any person who is targeted by such efforts may experience financial loss, damage to their professional or social standing, fear, humiliation, shame, a loss of self-esteem or reduced confidence.

Reports of misrepresentation and deception could undermine trust in digital platforms and services and increase general levels of fear and suspicion within our society. As advances in deepfake technology gather pace and apps and tools are emerging that allow the general public to produce credible deepfakes, concerns are growing about the potential for harm to both individuals and society.

As noted in eSafety Commissioner Julie Inman Grant's opening statement to a Senate standing committee inquiring into the Criminal Code Amendment (Deepfake Sexual Material) Bill of last year:

Deepfake detection tools are lagging behind the technology itself. Open-source AI apps have proliferated online and are often free and easy to use to create damaging digital content including deepfake image-based abuse material and hyper-realistic synthetic child sexual abuse material. Companies [should] be doing more to reduce the risks that their platforms can be used to generate damaging content.

However, using deepfakes to target and abuse others is not simply a technology problem. It is a result of social, cultural and behavioural issues that are being played out in the online space. As noted in the Australian Strategic Policy Institute's report, 'Weaponized deep fakes', deepfakes represent challenges to security and democracy. These include heightened potential for fraud, propaganda and disinformation, military deception and even the erosion of trust in our institutions and fair election processes.

The risks of deploying a technology without first assessing and addressing the potential for individual and societal impacts are very high. Deepfakes provide yet another example of the importance of safety by design to assist in anticipating and engineering out misuse at the get-go. It is very clear that AI technology has rapidly outpaced government regulation. Digital rights are essential for a fair and just society. People deserve control over their data, transparency in automated decision-making and robust protections against misuse, including from the harmful practice of creating, distributing and threatening to distribute artificially generated images.

As I said from the outset, the Greens appreciate the work of the Hon. Connie Bonaros in this space. This is an important reform, I think, in terms of moving us more towards a society that strikes a better balance between technology and the rights of all members of our society to live free from harm. The Greens support the bill.

The Hon. K.J. MAHER (Minister for Aboriginal Affairs, Attorney-General, Minister for Industrial Relations and Public Sector, Special Minister of State) (17:54): I rise to speak briefly on this private member's bill and at the outset indicate the government will be supporting the bill. I would like to acknowledge the significant work and advocacy of the Hon. Ms Bonaros, who has undertaken a significant amount of work leading up to this point regarding invasive deepfakes.

The use of artificial intelligence is now a part of our world, whether we like it or not, and we will need to make sure that we are doing all we can to harness the positive potential of AI while ensuring our laws are fit for purpose to dissuade and punish its misuse. One of the more harmful ways that artificial intelligence can be misappropriated—and more often than not it is targeted at women and girls—is via artificially generated deepfakes that are often sexual in nature. Authorities have reported that deepfake technology is being widely used in the creation of non-consensual pornography, and concerns have also been raised about the potential to use deepfakes to harass, intimidate, threaten, blackmail or extort victims, including victim survivors of domestic and family violence.

Earlier this week, we heard distressing reports of sexualised deepfakes targeting at least 15 female public servants in Canberra, with the Australian Capital Territory's current laws unable to hold the young male creator to account. That loophole is exactly what this bill seeks to close. As the Hon. Jing Lee pointed out, it is estimated that as many as 90 to 95 per cent of all deepfakes created are non-consensual pornography and 99 per cent of the victims depicted are women and girls.

The eSafety Commissioner, Julie Inman Grant, has further stated that explicit deepfakes have increased on the internet as much as 550 per cent year on year since 2019. Those alarming statistics highlight the need for more to be done to protect members of our community from these malicious attacks. That is why we have seen not just one but two pieces of legislation introduced, seeking to protect from these harmful deepfakes.

I am pleased at the work the government has been able to do alongside the Hon. Connie Bonaros with Michael Brown MP, the member for Florey, who chaired a committee looking at the use of AI in South Australia. This takes the best elements of both the bill that the government has put forward and the one that the Hon. Connie Bonaros has introduced, to ensure we have the best legislation we can to protect people from these harmful deepfakes. As I said at the outset, the government will be supporting this bill, but we do reserve the right to ask some very pointed, difficult and challenging questions during the committee stage.

The Hon. N.J. CENTOFANTI (Leader of the Opposition) (17:56): I rise today to offer the opposition's support for the Summary Offences (Invasive Images and Depictions) Amendment Bill 2024 introduced by the Hon. Connie Bonaros. This bill is timely, it is necessary and it is incredibly important, and I want to commend the member for her commitment to this process. I am glad the government has recognised that the honourable member started this process and has allowed her the opportunity to see it through by removing their own catch-up version of the bill.

This bill recognises the rapid escalation of new technology, particularly artificial intelligence, in enabling the creation and spread of invasive, degrading and, critically, non-consensual depictions of individuals. It is a technology that, while powerful in legitimate hands, has also been weaponised to humiliate, to harass and to exploit. This parliament has a duty to ensure that our laws keep pace with the modern methods of abuse. Without legislative action, victims—often young people and often women—are left exposed to an insidious form of violence, one that strikes at dignity, at autonomy and at personal safety.

The bill does three key things. Firstly, it increases penalties for existing offences: the distribution of an invasive image without consent under section 26C, engaging in indecent filming under section 26D, the distribution of images obtained by indecent filming under section 26D(3), and threatening to distribute invasive images or indecent filming material under sections 26DA(1) and 26DA(2).

Secondly, it creates new offences in sections 26G(1) and (2) and 26H to address the menace of artificially generated content, specifically the creation, distribution and threatened distribution of an invasive depiction. Thirdly, it introduces balanced protections. A defence is available under section 26G(3) where a depiction is created with the written consent of each real person depicted, and at section 26GA(4)(b) there is also a defence for distribution where the depiction forms part of a work of artistic merit.

Further, section 26I makes clear that consent given by someone under 17, by someone with cognitive impairment, or obtained through duress or deception, is not valid consent for the purposes of the new offences in this bill. Importantly, the Summary Offences (Invasive Images and Depictions) Amendment Bill 2024 provides for the forfeiture of devices or objects used in the commission of these offences under sections 26I(2) and 26I(3).

This issue demands swift and decisive action. Every day without action risks further harm to innocent people, harm that may truly have a lifelong traumatic impact. The digital footprint of humiliation can be impossible to erase. The opposition stands ready to work constructively to ensure these new protections are passed into law today without delay. We owe it to the victims, we owe it to the future, and I commend the bill to the chamber.

The Hon. C. BONAROS (18:00): It heartens me that there is always room for middle ground and meaningful reform and collaboration in this place. At the outset, can I thank Ms Emilia Freitas Lay, South Australian parliamentary intern through the University of South Australia—and this is important—because she did a substantial body of work for my office that dealt with deepfakes and their impact on women and girls, much of which has been referred to today. Can I also extend special thanks to the Assistant Minister for AI, Mr Brown, the member for Florey, not just for his collaboration on this bill but for his dedication more broadly to ensuring AI plays its role in emerging technologies in safe and appropriate ways.

It is a critically important body of work, and I know the assistant minister, who I understand is also a computer programmer by background, has been keen to promote AI opportunities, but not at the expense of these sorts of laws. In so doing, he has in fact prioritised the safety of victims of deepfakes, the majority of whom—in fact almost 100 per cent of whom, as we have just heard—are women and girls. I do note again, and I think it is worth pointing out for the record, the assistant minister led an inquiry that backed calls to develop this sort of technology capability. That was very welcome and had multipartisan support from this parliament but, again, has prioritised these areas.

There is, of course, a very sinister side to AI, and it comes at great personal costs to its victims, so I am equally pleased that the minister has used his position to promote and advocate for these laws. Of course, I thank the Attorney-General for his commitment to strengthening our laws when it comes to issues of sexual offending, sexual harassment, child exploitation; and this body of work is really a continuation for me of something that I think we have all been committed to, and he has shown a genuine desire to get it right in this place when it comes to stamping out abuse and empowering victims. I am grateful to both the Attorney and the assistant minister for their willingness to work so collaboratively and constructively on this bill.

I would also, of course, like to thank today's speakers, the Hon. Ms Lee, the Hon. Mr Simms, the Attorney-General, and the Leader of the Opposition, for what I think is a good outcome in politics when we all come together in this multipartisan way and support something that is so critically important in our community. As we have heard overwhelmingly, it is women and girls who are the victims of deepfake abuse. As the assistant minister said earlier today, you do not have to be Taylor Swift to be the subject of inappropriate deepfakes, and today we are collectively drawing a line in the sand and introducing what will be the toughest laws in the country. Together with Victoria, we are leading the nation through these reforms.

You have heard that current estimates suggest that up to 95 to 98 per cent of all deepfakes are non-consensual pornography, and 99 per cent of victims are women and girls. The eSafety Commissioner, as the Hon. Jing Lee has pointed out, has reported that explicit deepfake content online has exploded by a whopping 550 per cent year on year since 2019. It is becoming one of the most serious threats facing women and young girls online and, even more frightening, it is happening in our schoolyards. That is because what two or three years ago was very much in the realm of hackers and the underworld of the internet is now mainstream.

It is being used to harass, to shame, to intimidate, to extort and to violate young women and girls with devastating psychological impacts and, worse still, suicides. Sadly, it is perpetrated in the main by men and reflects broader cultural attitudes around power, control and entitlements over women's and girls' bodies. These laws make it clear that we will not stand by while technology is weaponised to humiliate and harm anyone.

There is no excuse for this and if you feel entitled enough to exploit, to humiliate, to denigrate and to degrade a person by using their image without their consent, then you should feel the full consequences of the law. There is simply no excuse for this sort of behaviour and it will not be tolerated. That is what the work of this place today reflects, and I am very grateful for that.

In closing I will leave you with this: it should horrify all of us—it horrifies me and I am sure it horrifies us all equally, especially when it comes to our kids—that victims of child pornography and child exploitation material could be none the wiser that their images have been used to create hideous, horrendous and sickening content, and their families and parents may be none the wiser that their kids' images are being used for such sickening and depraved purposes. That does not make them victimless crimes, though, and we have to do all we can, as the Hon. Rob Simms said, to keep up with technologies that enable perpetrators who use these tools for such sinister and depraved purposes.

In terms of the amendments, I might take this opportunity to speak to them because it is probably the easiest way to deal with this bill going forward. As the Attorney said, we have taken the best elements of both bills and put them together. I pause there and remind honourable members firstly of the importance of education and deterrence. We have seen how important these sorts of laws are when it comes to sexting offences, to the stealthing laws that we passed through this place and to consent laws, especially amongst younger people, and it is absolutely critical that it should form part of the education curriculum going forward.

There are a number of amendments to the bill, and the easiest way to explain them is that, rather than tinkering with the existing provisions and incorporating them into clause 5B as I had initially proposed, I have effectively introduced amendments in set 3 which seek to do all of the things we have agreed to do in one consistent set. I have done that first because it is much cleaner and more streamlined and it deals explicitly with deepfakes. This is important because it is cleaner, for one, but, more importantly, it fits the intent of this bill: to remove any doubt and close any potential loophole when it comes to deciphering between a real image, an image that is wholly generated by AI or an image that is partly generated by AI.

In short, the laws as amended will apply to all images, regardless of whether they are real or generated in part or in whole by AI. It is the depiction of a simulated person itself and the creation of that depiction that will form the basis of these laws and, together with existing laws, the creation and/or dissemination or even threat to disseminate will be the subject of hefty criminal penalties. The bill, as amended, will remove any doubt as to its applicability for perpetrators of deepfake abuse and it will also ensure consistency with penalties under the existing penalty regime.

With those words, I thank everybody once again—all honourable members across the political divide—for their support on this. I thank the Attorney for working diligently with me to get to this point and look forward to the swift passage of this bill through this place.

Bill read a second time.

Committee Stage

In committee.

Clause 1.

The Hon. C. BONAROS: I move:

Amendment No 1 [Bonaros–3]—

Page 2, line 4—Delete 'Invasive Images and' and substitute 'Humiliating, Degrading or Invasive'

This amendment seeks to delete the current 'Invasive Images and' words and replace them with 'Humiliating, Degrading or Invasive', which is more consistent with the language we use in other parts of this legislation.

Amendment carried; clause as amended passed.

New clause 1A.

The Hon. C. BONAROS: I move:

Amendment No 2 [Bonaros–3]—

Page 2, after line 5—Insert:

1A—Commencement

This Act comes into operation on a day to be fixed by proclamation.

This amendment simply deals with the commencement date of the legislation.

New clause inserted.

Clause 2.

The CHAIR: The Hon. Ms Bonaros has indicated that she will be opposing this clause.

The Hon. C. BONAROS: I move:

Amendment No 3 [Bonaros–3]—

Page 2, lines 7 to 11—This clause will be opposed

I move this amendment for the reasons already outlined.

Clause negatived.

Clauses 3 and 4 negatived.

Clause 5.

The Hon. C. BONAROS: I move:

Amendment No 6 [Bonaros–3]—

Page 3, line 17 [clause 5, inserted section 26F(1), definition of artificially generated content]—Delete 'an audiovisual or visual' and substitute 'audiovisual, visual, or audio'

Amendment No 7 [Bonaros–3]—

Page 3, lines 24 to 29 [clause 5, inserted section 26F(1), definition of depicted person]—Delete the definition

Amendment No 8 [Bonaros–3]—

Page 3, line 30 [clause 5, inserted section 26F(1), definition of depiction]—After 'audiovisual' insert ', audio'

Amendment No 9 [Bonaros–3]—

Page 3, after line 31 [clause 5, inserted section 26F(1)]—After the definition of distribute insert:

humiliating or degrading depiction, in relation to a simulated person, means artificially generated content depicting—

(a) an assault or other act of violence done by or against the simulated person; or

(b) an act done by or against the simulated person that reasonable adult members of the community would, were the act to be done by or against a real person, consider to be humiliating or degrading to the real person (but does not include an act that reasonable adult members of the community would consider to cause only minor or moderate embarrassment);

Amendment No 10 [Bonaros–3]—

Page 3, line 32 [clause 5, inserted section 26F(1), definition of invasive depiction]—Delete 'depicted person' and substitute 'simulated person'

Amendment No 11 [Bonaros–3]—

Page 3, line 34 [clause 5, inserted section 26F(1), definition of invasive depiction, (a)]—Delete 'depicted person' and substitute 'simulated person'

Amendment No 12 [Bonaros–3]—

Page 3, line 35 [clause 5, inserted section 26F(1), definition of invasive depiction, (a)(i)]—Delete 'depicted' and substitute 'simulated'

Amendment No 13 [Bonaros–3]—

Page 4, line 3 [clause 5, inserted section 26F(1), definition of invasive depiction, (b)]—Delete 'depicted person' and substitute 'simulated person'

Amendment No 14 [Bonaros–3]—

Page 4, line 4 [clause 5, inserted section 26F(1), definition of invasive depiction]—Delete 'depicted person' and substitute 'simulated person'

Amendment No 15 [Bonaros–3]—

Page 4, after line 11 [clause 5, inserted section 26F(1)]—Insert:

simulated person means a person depicted in artificially generated content that—

(a) purports to be a depiction of a particular real person; or

(b) so closely resembles a depiction of a particular real person that a reasonable person who knew the real person would consider it likely to be a depiction of the real person.

Amendment No 16 [Bonaros–3]—

Page 4, lines 16 to 37 (inclusive) [clause 5, inserted section 26G]—Delete the section and substitute:

26G—Creation of humiliating, degrading or invasive depiction

(1) A person who creates a humiliating or degrading depiction of a simulated person is guilty of an offence.

Maximum penalty: $10,000 or imprisonment for 2 years.

(2) A person who creates an invasive depiction of a simulated person is guilty of an offence.

Maximum penalty:

(a) if the simulated person purports to be a real person who is under the age of 17 years—$20,000 or imprisonment for 4 years;

(b) in any other case—$10,000 or imprisonment for 2 years.

(3) It is a defence to a charge of an offence against this section to prove that the creation of the humiliating or degrading depiction or invasive depiction (as the case may be) occurred with the written consent of each real person depicted in the depiction.

26GA—Distribution of humiliating, degrading or invasive depiction

(1) A person who distributes a humiliating or degrading depiction of a simulated person is guilty of an offence.

Maximum penalty: Imprisonment for 1 year.

(2) A person who distributes an invasive depiction of a simulated person is guilty of an offence.

Maximum penalty:

(a) if the simulated person purports to be a real person who is under the age of 17 years—$20,000 or imprisonment for 4 years;

(b) in any other case—$10,000 or imprisonment for 2 years.

(3) It is a defence to a charge of an offence against this section to prove that the distribution of the humiliating or degrading depiction or invasive depiction (as the case may be) occurred with the written consent of each real person depicted in the depiction.

(4) No offence is committed against this section—

(a) by law enforcement personnel and legal practitioners, or their agents, acting in the course of law enforcement or legal proceedings; or

(b) by reason of the distribution of artificially generated content that constitutes, or forms part of, a work of artistic merit if, having regard to the artistic nature and purposes of the work as a whole, there is no undue emphasis on aspects of the work that might otherwise be considered to be a humiliating or degrading depiction or an invasive depiction (as the case may be) of a simulated person.

Amendment No 17 [Bonaros–3]—

Page 4, line 38 [clause 5, inserted section 26H, heading]—After 'distribute' insert 'humiliating, degrading or'

Amendment No 18 [Bonaros–3]—

Page 4, after line 38 [clause 5, inserted section 26H]—Before subsection (1) insert:

(a1) A person who—

(a) threatens to distribute a humiliating or degrading depiction of a simulated person; and

(b) intends to arouse a fear that the threat will be, or is likely to be, carried out, or is recklessly indifferent as to whether such a fear is aroused,

is guilty of an offence.

Maximum penalty: $5,000 or imprisonment for 1 year.

Amendment No 19 [Bonaros–3]—

Page 4, lines 40 to 41 [clause 5, inserted section 26H(1)(a)]—Delete 'depicted person' and substitute 'simulated person'

Amendment No 20 [Bonaros–3]—

Page 5, lines 5 to 10 [clause 5, inserted section 26H(1), penalty provision]—

Delete the penalty provision and substitute:

Maximum penalty:

(a) if the simulated person purports to be a real person who is under the age of 17 years, or the threat is made to a person who is under the age of 17 years—$10,000 or imprisonment for 2 years;

(b) in any other case—$5,000 or imprisonment for 1 year.

Amendment No 21 [Bonaros–3]—

Page 5, line 13 [clause 5, inserted section 26H(2)]—Delete 'consented to the distribution of the invasive depiction' and substitute:

gave written consent to the distribution of the humiliating or degrading depiction or invasive depiction (as the case may be)

Amendments carried; clause as amended passed.

Title passed.

Bill reported with amendment.

Third Reading

The Hon. C. BONAROS (18:13): I move:

That this bill be now read a third time.

Bill read a third time and passed.