eSafety increases pressure on tech giants over child abuse content

Apple, Meta and other tech giants have been ordered to report twice a year on the steps they are taking to address child sexual abuse material on their platforms, in an escalation of Australia’s online safety compliance regime.

eSafety Commissioner Julie Inman Grant issued statutory notices to eight companies on Wednesday, requiring them to report on their efforts to handle child sexual abuse material in Australia every six months for the next two years.

Apple, Google, Meta and its subsidiary WhatsApp, and Microsoft and its subsidiary Skype, as well as chat platforms Discord and Snapchat, have been targeted under the new reporting regime, partly in response to the companies’ answers to an earlier round of legal notices.


Ms Inman Grant said the “alarming but not surprising” responses confirmed what the online safety regulator had long suspected: that there were “significant gaps and differences between services’ practices”.

“In our subsequent conversations with these companies, we have yet to see any significant changes or improvements to these identified safety shortcomings,” she said in a statement on Wednesday.

Citing the first round of transparency reports under the Basic Online Safety Expectations (BOSE) in December 2022, Ms Inman Grant said Apple and Microsoft were making no effort to proactively detect child abuse material on their iCloud and OneDrive platforms.

While eSafety has since introduced mandatory standards, operators of cloud and messaging services will not be required to detect and remove known child abuse material until December.

Ms Inman Grant also raised concerns that Skype, Microsoft Teams, FaceTime and Discord are not yet using technology to detect live-streamed child sexual abuse in video chats.

The sharing of information between Meta’s services is another concern, with offenders banned from services like Instagram in some cases continuing to offend on the parent company’s other platforms, such as WhatsApp or Facebook.

The legal notices require the companies to explain how they are handling child abuse material, live-streamed abuse, online grooming, sexual extortion, and child abuse material created using generative artificial intelligence.

On Tuesday, Ms Inman Grant said she was in discussions with Delia Rickard, who is reviewing the Online Safety Act, about the need to fill “gaps” in existing legislation and codes that currently only cover objectionable content and pornography.

“There is a final gap and now is the time to think about the kinds of powers we might need to make us more effective at a systemic level,” Ms Inman Grant said.

Another concern is the speed with which companies respond to user reports of child sexual exploitation, with Microsoft taking an average of two days to respond in 2022.

Ms Inman Grant said the new expectations would force companies to “raise their game” and show they are “making improvements”, with the regulator regularly reporting on the findings of the notices.

The mechanism is one of three tools available under BOSE to help “lift the hood” on the online safety efforts of providers of social media, messaging and gaming services.

“These notices will let us know whether these companies have made any improvements to online safety since 2022-23 and ensure that these companies remain accountable for the harm that is still being done to children on their services,” Ms Inman Grant said.

“We know some of these companies have made improvements in some areas – this is an opportunity to show us progress across the board.”

The eight companies will have to provide their first round of responses by February 15 or face fines of up to $782,500 per day.

Do you know more? Contact James Riley via email.
