Australia attracted international headlines last year after becoming the first country in the world to pass legislation requiring social media platforms to take reasonable steps to prevent users under 16 from having an account on their platforms.
The aim was to address ongoing online safety concerns and protect young Australians by minimising the risk of online harm. But is a social media ban the most effective way of keeping children safe online?
In the ODPA’s 2025 Project Bijou lecture, Guernsey Data Protection Commissioner Brent Homan sits down with Australian Privacy Commissioner Carly Kind to tackle this and many other thorny privacy-related issues affecting people across the world.
(3:55) I certainly have expressed some hesitation about whether or not a ban is the right way to go about things, while recognising the clear harms children face online. And I should say, of course, it's not really my place now that the social media minimum age assurance bill is law to question it, and my office will be regulating it. But at the time the bill was going through, some of the hesitation I had concerned the effectiveness and utility of restricting children from social media platforms, noting the likelihood that they would be able to circumvent such restrictions. Moreover, from a privacy perspective, the risk that requiring age assurance across the social media platforms used by the vast majority of Australians on a daily basis would create further incursions into privacy was really at the forefront of my mind. Requirements to age-assure all users will affect not only children but adults as well, and I do think there are real risks that this will result in the collection and use of more information, including identity documents, which we have already identified as a real area of vulnerability in the Australian ecosystem: the collection and retention of identity documents and their vulnerability to data breaches, misuse and loss.
(5:43) We do need to do something to address the harms that children face online. My preferred way of going about that is trying to change the platforms themselves and to require them to reshape the way in which they deal with children, including not only through safety-centric measures that relate to content but also through the way they collect and use children's personal information, including to then deliver both content and advertising, and the more problematic underpinnings of the business model of social media platforms generally, which is highly personal-data dependent, as we know.
(10:25) There is a big role for us and for others, including in civil society, to play in educating children not only about these technical forms of privacy, like online privacy and how to use your privacy settings, but about the fundamental values at the heart of those things: what is privacy, why is it important, and why should you care about it?
Discussing social media platforms and regulation:
(6:40) We should not forget that technologies are, at the end of the day, humans all the way down; they are made by humans and governed by human choices. There is nothing inevitable or preordained about social media or any other technology, and that means it can change. It can change in the way that regulators and policy makers say it can change.
Discussing 'Big Tech' and privacy:
(14:22) The entire ecosystem of this online environment, which is personal-data driven because that is the business model which sustains these big entities, is one grounded in surveillance, tracking and monitoring. We know from our research with communities and individuals that this feels creepy to people. It makes them feel they don't have control over their personal data, that they don't know who knows what about them, and that they can't claw back any control in that environment because everything is fair game no matter where they go, and that they are not in an empowered position to say no to requests for their personal information because it results in a denial of services or products, etc.
(17:52) One of the biggest risks I see is that Big Tech platforms are very clearly becoming Big AI, and AI is potentially the infrastructure that will order the future of our societies. At this stage, we run the risk of these same harms and risks being baked into new AI products as well.
Discussing enforcement:
(21:30) We are a small regulator and, for me, the overriding principle is that the cheapest thing we can do is to get to compliance by education... The best way to ensure the best outcome for the most people is through education and the provision of guidance. Enforcement action is costly, it's time-intensive, and it's rarely able to deliver an outcome within a period proximate to the harm, so I don't think it's an attractive route in and of itself for regulators. Where it becomes attractive is where it can achieve general deterrence for a sector or a set of practices or acts and send a very strong and loud message about the need to invest in compliance where other messages haven't yet broken through, or where there is real community or public interest in there being punitive action to achieve that really specific deterrence or remediation of a harm. Last year we struck a settlement with Meta with respect to the Cambridge Analytica event, and that resulted in Meta establishing a compensation fund in the amount of 50 million dollars. I think that's a really important outcome because it's about delivering recourse back to the consumer, back to the individual... I think there's an important point of public interest and justice around that outcome.