Part 3: The Law (Australia v Social Media Mini-Series)

Pod Notes

This is episode 3 of a 5-part Tech Mirror mini-series, Australia vs Social Media: Inside the world-first online safety experiment. In this episode, we make sense of the Social Media Minimum Age legislation, explaining the limits of the law and what it actually requires of social media companies, young people, parents, and the community. We also unpack how this new law interrelates with other existing online safety measures, including industry codes.

We speak with Cam Wilson from Crikey, Australia’s eSafety Commissioner Julie Inman Grant, Australia’s Privacy Commissioner Carly Kind, and Deputy Program Director of the Age Assurance Technology Trial Andrew Hammond.

Credits

Written and narrated by Johanna Weaver, Executive Director, Tech Policy Design Institute.

Produced by Olivia O’Flynn & Kate Montague, Audiocraft.

Research by Amy Denmeade.

Original music by Thalia Skopellos.

Created on the lands of the Ngunnawal and Ngambri people and the Gadigal people of the Eora Nation.

Special thanks to all the team at the Tech Policy Design Institute, without whom the pod would not be possible, especially Zoe Hawkins, Meredith Hodgman, and Dorina Wittmann.

Links

Minister Wells Press Conference (16 September 2025), supplied.

Online Safety Amendment (Social Media Minimum Age) Bill 2024, including the explanatory memorandum and transcripts of all second reading speeches https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r7284

Government announces plans to introduce the minimum age legislation (8 November 2024), Minimum age for social media access to protect Australian kids https://www.pm.gov.au/media/minimum-age-social-media-access-protect-australian-kids & https://anthonyalbanese.com.au/media-centre/social-media-ban

Social media reforms to protect our kids online pass Parliament (29 November 2024) https://alp.org.au/news/social-media-reforms-to-protect-our-kids-online-pass-parliament/

eSafety Commissioner Advice to the Minister for Communications on draft Online Safety Rules (June 2025) https://www.infrastructure.gov.au/department/media/publications/esafety-commissioner-advice-minister-communications-draft-online-safety-rules

Albanese Government protecting kids from social media harms (July 2025) https://www.pm.gov.au/media/albanese-government-protecting-kids-social-media-harms

Prime Minister and Minister for Communications media conference, Canberra (July 2025) https://minister.infrastructure.gov.au/wellseSaeft/transcript/press-conference-parliament-house-canberra

Online Safety (Age-Restricted Social Media Platforms) Rules 2025 https://www.legislation.gov.au/F2025L00889/latest/text

eSafety Commissioner’s regulatory guidance https://www.esafety.gov.au/industry/regulatory-guidance#social-media-minimum-age

Minister for Communications and eSafety Commissioner’s media conference (September 2025) https://minister.infrastructure.gov.au/wells/transcript/press-conference-sydney

Privacy Guidance on Part 4A (Social Media Minimum Age) of the Online Safety Act 2021 (October 2025) https://www.oaic.gov.au/privacy/privacy-legislation/related-legislation/social-media-minimum-age

Platforms on notice to comply with Social Media Minimum Age, via eSafety Commissioner (November 2025) https://www.esafety.gov.au/newsroom/media-releases/platforms-on-notice-to-comply-with-social-media-minimum-age

Social media minimum age platform assessments, Minister for Communications media release (November 2025) https://minister.infrastructure.gov.au/wells/media-release/social-media-minimum-age-platform-assessments & https://minister.infrastructure.gov.au/wells/transcript/press-conference-canberra-0

Press Conference: Social Media Minimum Age Platform Assessments, Minister for Communications media release (November 2025) https://www.youtube.com/watch?v=b9CIZK_12Zc

eSafety assesses Twitch as an age restricted social media platform (November 2025) https://www.esafety.gov.au/newsroom/media-releases/twitch-assessed-as-age-restricted-social-media-platform

Office of the eSafety Commissioner’s industry codes and standards https://www.esafety.gov.au/industry/codes

Age Assurance Technology Trial https://ageassurance.com.au/

Age Assurance Technology Trial: Final Report https://www.infrastructure.gov.au/department/media/publications/age-assurance-technology-trial-final-report

Transcript

Johanna: The Tech Policy Design Institute acknowledges and pays our respects to all First Nations people. We recognize and celebrate that Indigenous people were this continent’s first tech innovators.

Minister Wells: The Albanese Government’s social media delay is genuinely world-leading. It is the first of its kind to pass anywhere in the world. Australia should be immensely proud that as a country we have decided to prioritize the online safety of children and put families before platforms. This is Australian commitment at its best because if it was easy, other countries would’ve done it already.

Johanna: This is Communications Minister Anika Wells speaking on the floor of Parliament on the 31st of July 2025.

Minister Wells: Here is what Australian parents had to say after the social media minimum age rules were tabled in the House yesterday. Alexandra said: I love these laws because it changes the conversations I have with my seven and nine year olds, and hopefully makes me less of the villain. It’s the law. It makes it a little easier. Jackie said: Parents cannot police these things entirely on their own, so having policies that support good practice is a relief. Rebecca said: This feels like a step in common sense as a basic principle. I don’t know if I’m the only one, but in the last decade it’s felt like we are all spiraling a little, so this is refreshing. Rebecca, you are not alone. We are all in this fight together, and the Albanese Government has your back. There is no perfect solution when it comes to keeping young Australians safe online, but the social media minimum age will make a meaningful difference.

Intro clips: The minister and I have an important announcement. Social media is harming children. Social media. It’s not a ban on content. Age restriction, the age assurance, ban, ban

Johanna: Australia’s teen social media age restriction legislation comes into effect on the 10th of December this year. But what actually is the law? Welcome to Tech Mirror. I’m Johanna Weaver, co-founder and executive director of the Tech Policy Design Institute, an independent, non-partisan think tank dedicated to technology policy.

In this five-part series, we’re exploring Australia’s new social media minimum age restrictions. This is episode three: Making Sense of the Law. Because let’s face it, there are a lot of people talking about this policy, but I suspect most of them have not actually read the legislation. One person who has is tech journalist Cam Wilson.

Cam: There’s really two parts to the teen social media ban, the teen social media minimum age, whatever you wanna call it. The first one is this idea that they are setting a legislated minimum age for social media platforms, and that’s 16. At the moment there’s an industry de facto standard of 13, and so this is raising it up a few years. But to enforce that minimum age, they’re also requiring, and by “they” I mean the government is requiring, social media platforms to take reasonable steps to enforce these minimum age restrictions. Because at the moment, while these platforms have these minimum ages, there’s nothing making them enforce that limit, and so you’re seeing a lot of people getting around it. Under this legislation, they’re gonna have to step that up and take reasonable steps to keep people under 16 from having accounts on their platforms.

Johanna: Another person who most certainly has read the legislation is the eSafety Commissioner, Julie Inman Grant.

Julie: The social media minimum age bill is meant to raise the age at which young Australians can have and hold a social media account to 16, and we think that those three years between 13 and 16 give us valuable time to build their digital resilience and their critical reasoning skills.

And it puts the onus on platforms to ensure that under-16-year-old Australians do not have or hold an account. So it’s not a pure prohibition. In fact, there are clear exemptions around messaging and gaming platforms and, uh, we’re working through an assessment process right now because there aren’t really any clear lines. Mass messaging platforms are broadcasting out and starting to deliver advertising, and gaming has social, interactive features, so it isn’t a cut-and-dried exercise.

Johanna: So what exactly are the criteria for social media services that will be captured by these minimum age restrictions?

Julie: If you look at the legislative criteria set out in the legislative rules made by Minister Anika Wells on July 29th, the conditions for age restriction are: it has the sole or a significant purpose of enabling online social interaction between two or more end users; it allows users to link to or interact with other end users; it allows users to post material on the service; and it has material that is accessible to or delivered to end users in Australia.

Johanna: When we look at which social media services are included, a lot of it is going to come down to whether that particular social media platform has a sole or significant purpose of enabling online social interaction between two or more people.

Julie: Of course, as you can imagine, most of the companies are saying that they’re not, that that isn’t their sole or primary purpose. Pinterest said, we’re a visual search engine. I thought, oh, okay. Oh, alright. Roblox is saying, our primary purpose is an online gaming platform. I think that’s true, but it has a lot of social features and chat functionality, and the whole idea is to play with friends, and they’re actually introducing new functionality.

Um, one called Moments that isn’t released here in Australia, but it’s like Stories; it’s social-media-esque. YouTube has said they’re a video sharing platform. I would argue that there is quite a bit of social media functionality, and there are the harmful and deceptive design features that I’m concerned about. Same with Snap, who refers to themselves as a camera app and largely a messaging app, but Snap Streaks are a kind of addictive design feature.

Johanna: As Julie has just explained, she’s working with her team to go through an assessment process, but they can only assess social media companies against the criteria that are set out in the legislation. It’s notable that the definition in the law doesn’t include any requirement to assess harm. Rather, it focuses on the questions of whether the social media service has the sole or significant purpose of social interaction, and whether it allows posting.

Cam: Then you’ve got the eSafety Commissioner, whose job it is to actually enforce this law. And the funny thing is that they will issue a penalty. They’ll be like, you know, TikTok, you haven’t done this, and therefore we believe that you are liable for a fine. And then TikTok can challenge that, and ultimately that goes to judicial review, and a judge will decide if TikTok qualifies under the law as this platform. Now, I don’t exactly know what’s happening in the eSafety Commissioner’s office, but I imagine that must be a little bit frustrating for them, because they actually don’t know for sure who is gonna be an age-restricted social media platform or not.

Julie: Really, it just comes down to a drafting issue where I don’t have specific declaratory powers. I guess this is the beauty and also the curse of an independent regulator: you’re given a piece of legislation and it gives you parameters, and then you have to actually figure out how it works in practice.

Johanna: Yes, you are understanding that right. All of this confusion about which platforms are in and which platforms are out? That’s because the law doesn’t actually give the eSafety Commissioner the power to declare which companies are covered and which are not. Now, there are some people who will say that the eSafety Commissioner, as a non-elected official, shouldn’t be given that type of power. But any such concerns could be addressed by making any formal list from the eSafety Commissioner a disallowable legislative instrument. This means that any list the eSafety Commissioner prepares would need to be tabled in Parliament for the elected politicians to either accept or reject, thereby acting as a check and balance on the eSafety Commissioner’s power. Now, the legislation as it passed through Parliament does give the Communications Minister, Minister Wells, the power to declare services in and out.

Minister Wells has used that power to exempt classes of services from the law: email, messaging, and gaming. But as at the time of recording, she hasn’t used that power to include or exclude specific companies. This is possibly because any such rules are also disallowable instruments, which means that they have to be put to Parliament for approval. And while the law had strong bipartisan support, there is less unity when it comes to this question of which social media services are in and which are out.

Now, Anika Wells did stand right next to Julie Inman Grant on the 5th of November when they announced which services were captured by the ban, including Facebook, Instagram, YouTube, and TikTok.

But this was not Minister Wells using her powers and tabling legislative rules. It was the independent regulator, Julie Inman Grant, sharing her assessment. At that press conference, she made it very clear that this was not a requirement of the legislation.

Julie: Technology is fast changing and ever evolving, which means this will never be a static list. Our work will continue. These assessments we have announced today are not a requirement of the legislation. eSafety, off our own bat, developed the self-assessment tools for industry, to provide a template to ensure fair and consistent application of the legislative criteria.

Johanna: As it stands, eSafety’s list carries no legal weight, whereas when the minister makes a declaration via the rules, it would be legally binding. And while the eSafety Commissioner has released a list, you heard her saying there that the list is dynamic and will keep changing. The eSafety website says that they will not be considering every service before the 10th of December, presumably because they just don’t have time. eSafety have also said that they’re prioritizing those services that have the greatest number of Australian users under 16 and the greatest risk of harm.

So while we do have a list of platforms that are included, that list is not the complete list. Many more platforms are captured by the law, but unless the minister chooses to use her powers to provide greater clarity, the services are left essentially self-assessing whether the law applies to them. And there is no formal mechanism in the law for them to confirm if their assessments are correct.

Going forward, the onus will continue to be on the eSafety Commissioner to monitor and determine if companies are implementing appropriate social media minimum age restrictions and, if they’re not, to find those companies in breach and issue a penalty. The companies could then challenge the penalty in court, and it will be a judge who will be the ultimate arbiter as to whether or not a company falls within the definition of an age-restricted social media platform.

The result could be a game of chicken where some companies just wait and see if the eSafety Commissioner issues enforcement proceedings, which take a long time, cost a lot of money, and need to be prioritized. This confusion and cost could be avoided if the eSafety Commissioner, with appropriate parliamentary oversight, had the power to declare who is captured by the law. This is another important thing for politicians to consider when the law comes under review. We certainly anticipate that the issue of exactly which companies are captured by the law will continue to be controversial, and maybe even the subject of court cases, well into the future.

Julie: We also know, I mean, one of the platforms threatened to sue the government before the minister made decisions.

Johanna: That would be Google, who had been lobbying for the government to exclude YouTube.

Julie: Had that exemption gone through, probably the other platforms would’ve challenged it. So we need to make sure that we’re demonstrating procedural fairness when we’re engaging with the companies. Again, you’re damned if you do. You’re damned if you don’t. If you don’t do enough due diligence because some of these platforms do have social media functionality, then you’re not doing enough. But if you’re too narrow and you miss things, then that’s problematic as well.

Johanna: Since we recorded this interview, the eSafety Commissioner has announced this list of major platforms that are included in the ban.

But this list will keep changing. And just because a company is not on the list does not mean that they’re not required to comply with the law.

Julie: So if you’re a smaller platform, you may be considered a social media site, but you just don’t have the capacity to build an effective age assurance system.

If that changes and you have people flocking to your platform, we’ll be watching that. We may have more expectations, and as companies add features and functionalities that might put them into the age-restricted social media services category, we’ll change that. So I expect once we get this preliminary list out there, and again, we need to make sure legally our reasoning is consistent, fair and watertight, that we’ll probably update it on a periodic basis, and we’ll explain clear reasons why.

Johanna: Clear as mud, right? The key point is there will be a list. Some companies are gonna object to being on the list, there will be fights and legal battles about it, and the list will keep changing. And in the next episode we’ll have some practical tips about what this all means for parents and young people preparing for the implementation of this law.

But first, I wanna draw your attention to another set of important regulations that are gonna have a much bigger impact on the way that we engage online than these social media minimum age restrictions, but which are barely getting any attention.

Johanna: Back in 2021, there was a big reform of Australia’s online safety regime. It introduced something called the Basic Online Safety Expectations, otherwise known as the BOSE, and industry codes. These are bland titles for something that is going to have an impact on the lives of all Australians.

Julie: The co-regulatory codes and standards were actually a feature of the Online Safety Act of 2021, but they were so complex that we agreed with industry in 2021 that it would effectively be between 17 and 24 codes, not just one code, and that we would divide them into two tranches. So we dealt with the phase one codes first, which dealt with illegal content like child sexual abuse material and TVEC, terrorist and violent extremist content, because the companies had systems in place to deal with illegal content.

Johanna: It took four years and it was not a smooth process, but these industry codes that deal with this really heinous activity and illegal content are now in force. Most of us have probably not noticed, because quite frankly this is the type of content that we hope no one is ever looking for. These codes do impose significant obligations on companies, for example requiring gaming platforms like Roblox to take measures to ensure that their platforms are not being used to facilitate online child sexual exploitation.

With this first tranche of codes in the bag, Julie and her team turned to the tranche two codes.

Julie: So really what the phase two codes are about is preventing under-eighteens from accessing harmful class 2 content, like explicit violence, pornography, suicidal ideation, and disordered eating. So that’s actually preventing young people from accessing content, while the age restriction is really about lifting the age at which Australians can join social media.

Johanna: This is a really important point. The social media minimum age restrictions delay the age that kids can have a social media account until 16, but the industry codes that come into force on the 28th of December and in March next year prevent people under 18 from accessing what is sometimes called lawful-but-awful content. This is basically what traditional Australian media classification rules refer to as R-rated content, that is, restricted content. And while the social media delay only applies to social media, the codes apply much more broadly: to search engines, gaming, messaging apps, websites. Here’s Julie explaining what will happen after the social media age restrictions come into play.

Julie: Although they will still be able to access gaming and messaging, and of course we know that when kids migrate to these other services, that doesn’t mean that these services are necessarily safe, and we’ll make sure that we underscore that, but we also know that the harms are going to migrate.

Because really, when you’re talking about things like image-based abuse, which is the non-consensual sharing of intimate images and videos, or cyber bullying, these are human behavioral issues that social interaction through these technology platforms facilitates.

Johanna: This is where the industry codes, combined with the Basic Online Safety Expectations, are going to step in. And frankly, the impact of these industry codes is going to be much greater than that of the social media age restriction law. Let’s take an example of something that many of us are concerned about: AI chatbots. Now, these are not considered to be an age-restricted social media service and are therefore not captured by the ban.

Julie: We started hearing in October last year from school nurses that 11-year-olds were spending up to five or six hours on AI companions, including sexualized chatbots. And so we looked into it. They felt addicted to them. They felt that these were quasi-romantic relationships. So in February we put out our first online safety advisory around AI companions and chatbots.

And as we were negotiating with the industry around our co-regulatory codes, they initially pushed back on preventing under-eighteens from accessing pornographic, self-harm, suicidal ideation, and disordered eating content that was in text form. They said, this is meant to be videos and photos. And I said, no.

These sexualized chatbots are actually inciting young people to engage in harmful sexual acts. You need to cover text as well. And to our great delight and surprise, the industry did step up and put forward a code that covers AI companions and chatbots. California has just followed suit, and I suspect there will be more laws around the globe that will start to tackle this.

Johanna: So it’s not just Australia’s social media minimum age restrictions that are world leading. We are doing a lot that is world first. The social media minimum age restriction is a restriction on access to a social media account, but it won’t prevent young people from accessing content. These new industry codes, however, will restrict people under 18 from accessing class 2 content, that is, explicit violence, pornography, suicidal ideation, and disordered eating, not just on social media but wherever it is found online: search engines, games, messaging apps, websites, and AI chatbots.

From the 27th of December, the biggest change that you’ll see is that search results that return pornographic images, for example, will be blurred out. People will still be able to click through on those results and see the pornographic images.

But from March next year, when you click on those results, you’ll be required to prove that you are over 18 before you see the content. This will apply to porn, but it will also apply to content that is extreme violence, that encourages suicide or disordered eating, or even predatory gambling. These new codes have been in the works for years.

They’re coming into force at around the same time as the social media age restrictions, and I think it’s highly likely that these two laws are going to be conflated. And so you, dear listener, can be one of the informed Australians who helps explain, at the dinner table, the barbecue, or perhaps the sports field, what the difference is.

When you hear people blaming the social media minimum age ban for the fact that they need to provide age assurance when they’re accessing adult entertainment sites, you can set them straight and say that it actually has got nothing to do with the social media age ban and everything to do with these industry codes that have been under negotiation for four years.

And if you are having those conversations, no doubt, the next question that you’re going to get is, well, how exactly am I going to be asked to verify my age? So let’s ask an expert.

Carly: There are no specific technical measures required to comply with the Act; the Act is technology neutral, and entities can choose how to implement it. So, broadly speaking, we understand there are three ways in which platforms are likely to implement this obligation. The first is age estimation, the second is age verification, and the third is age inference.

Johanna: This is Carly Kind. She’s Australia’s Privacy Commissioner. While the eSafety Commissioner is responsible for implementing the safety dimensions of the social media minimum age restrictions and the industry codes, it’s Carly’s job to ensure that the companies using age assurance technologies do so in a way that respects Australians’ privacy. Here’s Carly explaining in more detail the three techniques that she expects Australians will start to encounter online.

Carly: Estimation is when they use a technical process of analyzing your facial features or other physical features in order to guess how old you are.

Johanna: This could be by getting you to take a selfie using your phone or a computer.

Carly: Age verification is: give us a copy of some kind of government-issued ID that verifies how old you are. And then the third is age inference. Age inference is when a platform might say, okay, we already have all of this data about you; we think that we can probably guess how old you are based on what kinds of posts you like, or what times of day you log in, or what you’ve said in your profile, that you’re captain of your school netball team, for example. So those are the three buckets of approaches we think that entities are likely to use.

Johanna: So much of the focus of the public conversation around age assurance is on the first two examples that Carly just gave: age estimation, when they take a selfie and estimate your age, and age verification, when you upload some kind of ID document. But age inference, when companies use your existing data, is also going to be really important. There are a number of efforts in train by tech companies to improve the accuracy and effectiveness of this particular technique.
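
For listeners who think in code, here is a minimal sketch of Carly’s three buckets, showing what each approach produces for a platform: some kind of age signal. Every type and function name here is a hypothetical illustration for this episode, not any platform’s or regulator’s actual API.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AgeSignal:
    age_years: Optional[float]  # best available guess; None if the method failed
    method: str                 # "estimation", "verification", or "inference"

def estimate_age(selfie: bytes) -> AgeSignal:
    """Age estimation: a vision model guesses age from facial features.
    A real system would call such a model; this stub simply fails closed."""
    return AgeSignal(age_years=None, method="estimation")

def verify_age(date_of_birth: date) -> AgeSignal:
    """Age verification: an exact age derived from a government ID's date of birth."""
    years = (date.today() - date_of_birth).days / 365.25
    return AgeSignal(age_years=years, method="verification")

def infer_age(profile: dict) -> AgeSignal:
    """Age inference: a guess from data the platform already holds,
    e.g. a profile field mentioning a school year level."""
    year_level = profile.get("school_year_level")
    if year_level is not None:
        # Rough convention: Australian Year 7 students are about 12-13 years old.
        return AgeSignal(age_years=year_level + 5.5, method="inference")
    return AgeSignal(age_years=None, method="inference")

# Example: a profile that mentions captaining the Year 9 netball team.
print(infer_age({"school_year_level": 9}))  # AgeSignal(age_years=14.5, method='inference')
```

Note the asymmetry the sketch makes visible: verification is exact but intrusive, estimation and inference are approximate but use data that is easier or already there. That trade-off is exactly what the rest of this episode turns on.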

The eSafety commissioner was recently in the US where she met with a number of the big tech companies, and this was something that came up in her conversations.

Julie: One of the great things we learned from our meeting with Apple is they now have an age API, which is what a lot of the social media companies have asked for, so they can get an additional signal about the ages of their users. And they’re actually classifying their apps now according to 16-plus and 18-plus, which is helpful for both of these. So great job to Apple; they’ve really lifted their game. But I’m encouraging parents to use these parental controls to vouch, so that these signals can go from their devices or the app stores to these platforms. And the more these companies have other signals that are de-identified and privacy preserving, the more age inference information they’ll be able to have.

Johanna: Of course, each age assurance method comes with its own trade-offs between privacy, accuracy, and practicality. And that’s where Carly Kind, the Privacy Commissioner, will play a key oversight role.

Carly: The role of the Privacy Commissioner, and the Office of the Australian Information Commissioner more broadly, is really to oversee compliance with one section of the Act, section 63F, which relates to the use of information to assure that age-restricted users are prevented from holding accounts. That provision was inserted during the drafting of the legislation out of concerns for privacy. In particular, it seems the driving motivations behind that particular provision were concerns that entities would use age assurance information for other purposes, and that they might require government documents to be disclosed exclusively in order to assure users.

So the provisions of 63F essentially require, where social media platforms are collecting data for the purposes of assuring an individual’s age, that they only use it for that purpose, or for any other purposes they articulate when they’re collecting the data, and that they delete it or destroy it as soon as they have done that process of age assurance.

There are a number of carve-outs to that, for example if they get the individual’s consent to use it for other purposes, and it doesn’t relate to the use of information they already hold to assure an individual’s age. So it is a relatively narrow set of restrictions, albeit quite onerous restrictions, because other rules that apply in the Privacy Act don’t require the deletion of information in such clear circumstances.

So there really was an intention to say: if you are asking for, for example, a copy of an individual’s identity document in order to establish their age, you have to delete that as soon as you have done what you said you would do.
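
In code terms, the discipline section 63F points to looks something like the sketch below: derive the yes/no answer, then destroy the inputs. This is an illustrative assumption about how a platform might structure it, not the OAIC’s guidance or any platform’s real implementation, and all names are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgeCheckOutcome:
    over_minimum_age: bool  # the only thing the platform retains
    method: str

def check_age_from_id(document_date_of_birth: date, minimum_age: int = 16) -> AgeCheckOutcome:
    """Use an ID document's date of birth solely to answer the age question,
    then discard it, reflecting the purpose-limitation and destruction duties
    (absent the unambiguous-consent carve-out Carly describes)."""
    age_years = (date.today() - document_date_of_birth).days / 365.25
    outcome = AgeCheckOutcome(over_minimum_age=age_years >= minimum_age,
                              method="verification")
    # Destruction step: the date of birth and any document scan must not be
    # persisted or reused for advertising, profiling, or anything else.
    # Only the boolean outcome leaves this function.
    del document_date_of_birth, age_years
    return outcome

print(check_age_from_id(date(2007, 3, 1)))  # AgeCheckOutcome(over_minimum_age=True, ...)
```

The design point is that retention is minimised by construction: the sensitive input never outlives the single question it was collected to answer.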

Johanna: Carly will be back in the next episode to give us some practical tips to look out for when we’re using age assurance technologies. But it’s fair to say that age assurance has been controversial. So I would like you to hear from the folks who were contracted by the government to test it to see if this technology actually worked.

Andrew: My name is Andrew Hammond. On the Age Assurance Technology Trial, I was the deputy program director, so I oversaw a lot of the actual implementation of the testing.

Johanna: In an earlier episode, the eSafety Commissioner said that whether or not the social media age restrictions would work would depend on whether or not this age assurance technology was effective.

Andrew: Broadly, we were asked to test whether age assurance could be done in the Australian context. Our Australian population is quite multicultural, and obviously we have our First Nations brothers and sisters as well. The real key there was: could you take the technology that’s available today and use it in the Australian context with the Australian population? And so broadly, that was what we were asked to do. And the conclusion we found is that age assurance can be done. In actual fact, age assurance is being done in many places already.

Our findings were that it could be done, it wasn’t perfect, and there was definitely no one-size-fits-all solution, which I know sounds strange in that, oh, this is just for social media. But there are so many different facets to how their systems work, the processes they use. And there are probably one or two key points that we had as well.

We weren’t asked to rank or select a product, so this wasn’t about down-selecting the product to be used for all social media companies in Australia. It was about testing everything that was available in the market that we could get our hands on. So we used the same testing team and the same students in the classroom to test every piece of technology, regardless of whether that was something well established in the market or an up-and-coming solution.

And essentially, what we found was that it could be done, and there’s variance in there. But that’s all in the report.

Johanna: We’ll add a link to the report in the show notes. But for those of you who are curious about how the tests were conducted, Andrew worked with students from schools across Australia. They represented a wide range of socioeconomic backgrounds, and each student was asked to complete between one and 10 different tests.

Andrew: Each of the students volunteered to participate in the trial. We had a range of companies put forward their technology, and we built an interface to connect their technology. What that allowed us to do was walk anyone who was doing the testing through. We captured a little bit of information about their demographics, so how old they were, uh, just based on month and year of birth, not full date of birth.

We didn’t want any identifying information to be captured. And then we would throw to a different challenge, so between one and 10 different types of technology would be presented to the user, and they would then step through. Sometimes that was a facial age estimation; other times it would be a bit more dynamic.

Sometimes there was, uh, a need for ID; other times there was a need for hand movements and stuff like that. And so we would do that, and then we would get the result back from that vendor, which would either estimate or verify the age of the participant. So there was a sea of test results to wade through.

Johanna: So the technology has been tested, and there are options available for the platforms to use. What is this going to look like in practice for us? What are we most likely to encounter online if we are asked to provide age assurance?

Andrew: It’s most likely to be facial age estimation, and the reason I say that is because it’s the lowest barrier to entry, and also you don’t have to give up your identity in terms of documents. So I think, for the most part, we’ll be asked just to do a facial age estimation. And if the system can’t get an accurate estimate, because the lighting in the room is not good, or you might have a really young-looking face, then they might throw to you to provide a piece of ID, and that might be a licence or a passport or some other evidentiary piece with which you can demonstrate your age.

Johanna: While facial age estimation might be the front runner, there are some other interesting options on the table.

Andrew: There’s a really fascinating solution that can use your bank as a reference point, because when you create a bank account you have to prove who you are, and it’s quite a detailed process. So there’s a company that works with the big four banks in Australia that can use your bank account, and the fact that you have a bank account, to be able to prove that I’m over 16 and can therefore access that service.

The other one, a fascinating one, will Derman, uses your hands. Essentially there’s a tendon in your arm that ages quite predictably with how old someone is. And then they put up a challenge, and they use your left hand, and there’s a series of moves that you have to do, and their system is super accurate.

Essentially, apart from how you move your hand, there’s a cognitive challenge there: can you see what’s on the screen, replicate that with your hand movement, and then do it? And then also, is your hand stable when you’re doing it? ’Cause obviously if you’re quite young, a toddler, that cognitive ability is gonna be a bit tricky.

And then equally, if you’re an older person, the stability in your hand might not be there, so there might be some movement and stuff like that. So it’s a fascinating approach there. It could be as easy as showing your hand to the camera and doing a couple of hand movements, and that could get you through. So yeah, there’s definitely a variety of ways that someone might be able to demonstrate their age.

Johanna: Some of these options sound high tech, and they all raise an important question. Just how accurate are they? Because when it comes down to proving our age, our expectations for technology are high.

Andrew: What we have in our current-day society is, if you go to buy some alcohol, which is probably the easiest one to talk about, the signs at the shop or at the bar say that if you look younger than 25, you’ll be asked for ID. You only need to be 18 to purchase alcohol, so that’s a seven-year difference. And so what we see is that we’ve got people who are generally untrained on the other side of the counter. They’re not, uh, age-estimating experts, and they get a seven-year buffer zone.

And we’re quite comfortable with that now, and quite often we hear of people who are under 18 being able to purchase or consume alcohol. And so we have this tension where, as humans, we’re actually not that accurate at guessing age, but when we look at tech doing it for us, we want it to be near a hundred percent.

So I think there’s a tension to manage in that space. Again, we didn’t pick the particular tech that should be used, but there is definitely a multitude of tech out there that could be in the high nineties. And if you think about what that means practically, it means that nine out of 10 people that go through will get their age estimated to within 1.5 years, that is, 18 months, and they’ll sail on through, or they won’t get through, like they’ll be blocked.

Julie: There probably isn’t a day that goes by that there aren’t different interpretations of the 1,200-page age assurance technology trial report, and people saying facial age estimation won’t work, or saying age inference shouldn’t be required, or, you know, don’t use digital ID. And in fact, we were very clear in our regulatory guidance that digital or government ID can’t be the only option, even though it’s a much more comfortable form of identification management for middle-aged Australians; kids are much more comfortable with liveness tests and facial age estimation because they’re doing it all the time.

A successive validation or waterfall approach means you’re not going to use a single facial age estimation scheme and, you know, that’s it, that’s determinative. Platforms today, particularly the major ones that are likely going to be age-restricted sites, we all know they have a lot of data on us and can target us with deadly precision around advertising. They know how old we are, they know what color our hair is, et cetera, et cetera.

They can do the same with age. They just haven’t had to in the past. And so a lot of them are using age inference. There’s a lot of promise in multimodal LLMs to help scale this, um, more effectively. But they’ll need to use multiple ways of assessing age. And we’ve also said in the regulatory guidance, we know that there are going to be people who are missed.

So they have to have discoverable, usable, and responsive reporting schemes, um, so that parents or educators can say, hey, this young person is on when they shouldn’t be. And they’re going to have to work through how they prevent malicious deplatforming. But we’ve also said there needs to be an appropriate appeals process if they over-block or take down someone who should be up there.
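
Here is a minimal sketch of what that waterfall logic might look like: start with the least intrusive signal and escalate only when the result is too close to the threshold to trust. The function names, the ordering, and the ±18-month buffer (borrowed from Andrew’s accuracy figures) are illustrative assumptions, not eSafety’s guidance.

```python
from typing import Callable, Optional

MINIMUM_AGE = 16.0
BUFFER_YEARS = 1.5  # roughly the "within 18 months" accuracy Andrew mentions

def waterfall_age_check(checks: list[Callable[[], Optional[float]]]) -> str:
    """Run age checks in order of intrusiveness. Each check returns an
    estimated age in years, or None if it couldn't produce an estimate."""
    for check in checks:
        age = check()
        if age is None:
            continue  # method failed (bad lighting, no ID on hand) - escalate
        if age >= MINIMUM_AGE + BUFFER_YEARS:
            return "allow"
        if age <= MINIMUM_AGE - BUFFER_YEARS:
            return "block"  # blocked users need the appeals process Julie describes
        # Inside the buffer zone: too close to call, so try the next method.
    return "block_pending_appeal"  # no confident signal - fail closed, offer appeal

# Example: inference first, then facial estimation, then ID as a last resort.
decision = waterfall_age_check([
    lambda: None,   # inference found nothing useful
    lambda: 16.2,   # facial estimate lands inside the buffer zone
    lambda: 17.8,   # ID verification settles it
])
print(decision)  # -> "allow"
```

The point of the buffer zone is the same one Andrew makes about bottle shops: no single estimate is trusted near the line, so borderline cases escalate rather than being decided by one noisy guess.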

Johanna: So here’s the thing. From the 10th of December, we may be asked to assure our age, and it will be up to the platforms to choose which technological methods they use. Some of those methods are more accurate than others, and that’s why the appeals process that the eSafety commissioner was just talking about is going to be really important.

Another important thing to emphasize is that the law expressly stipulates that platforms cannot require government ID to be the only way that you can assure your age. They must provide you with other alternatives. The legislation also dictates that platforms must delete your information once they’ve completed the age assurance process. They can only keep that information if you unambiguously consent to it.

Johanna: In the next episode, we’re gonna get really practical: what will happen on the 10th of December when the social media minimum age restrictions come into force, and what do we all need to be doing to prepare?

Amanda: So in the lead up to the implementation of the new legislation in December, I think a lot of teenagers and their parents and caregivers are gonna be a bit nervous about what lies ahead, and that’s totally understandable.

Minh: Like you can’t just simply take away like the devices from young people without investing in the infrastructure for them to have a community offline.

Johanna: This has been Tech Mirror, a podcast brought to you by the Tech Policy Design Institute. We are based in Canberra on the lands of the Ngunnawal and Ngambri people. You can find information about the research mentioned in the episode in the show notes. A big thank you to our guests: Cam Wilson, Julie Inman Grant, Andrew Hammond, and Carly Kind.

This podcast is made possible thanks to generous contributions from government, industry, and philanthropy to the Tech Policy Design Fund, the full details of which are transparently disclosed on our website. For information about the archival audio we’ve used in this episode, please check the show notes.

The soundtrack is by Thalia Skopellos, a Sydney-based artist and entrepreneur with Aboriginal and Greek heritage. This podcast was produced with the support of Audiocraft on the lands of the Gadigal people of the Eora Nation. Amy Denmeade provided invaluable research support. A big thank you also to all the team at the Tech Policy Design Institute, without whom this pod wouldn’t be possible.

For more information about our work, visit us at techpolicy.au or follow us on LinkedIn.