Online platforms worldwide are facing tighter rules on content related to the sexual exploitation of children. The debate intensified recently after Grok AI generated objectionable images of women, and Australia has banned social media for children under 16. Yet reports suggest that despite these efforts, big tech companies are still lagging in tackling the problem effectively. According to the Australian Centre to Counter Child Exploitation, about 83,000 reports of online child sexual abuse were filed in 2024-25. That figure is 41% higher than the previous year, and most of the cases were linked to mainstream digital platforms.
In response to this growing threat, Australia's eSafety Commissioner, Julie Inman Grant, has required Google, Apple, Microsoft, Meta and other big tech companies to submit transparency reports every six months. The recently released reports show some improvement, but they also expose a number of serious safety gaps.
Tighter action on harmful content, but efforts still fall short
According to the report, some companies have made progress in identifying exploitative content, AI-generated material, live-streamed abuse, online grooming and sexual extortion. Moderation times have also fallen in some cases. Snap, the parent company of Snapchat, for example, has cut its response time for child sexual abuse content from 90 minutes to 11 minutes, and Microsoft has expanded detection of such content in Outlook.
However, the report also notes that Meta and Google are still not monitoring for abuse conducted over live video on services like Messenger and Google Meet, despite having detection tools in place on their other platforms. Apple and Discord likewise lack active detection systems; Apple, in most cases, still relies on user reports rather than automated detection technology.
Lack of security on live video platforms
The report also found that platforms including Apple, Discord, Google Chat, Meet and Messages, Microsoft Teams and Snap are not using available software to identify sexual extortion involving children. The biggest concern centres on live video and encrypted platforms, where adequate tools to detect abuse are still missing.
Alongside these reports, eSafety has launched a new dashboard to track the companies' progress. It will show which technologies each company uses, how much content was removed following user complaints, and how many safety staff they employ.
