Johanna discusses Australia’s draft Children’s Online Privacy Code with Privacy Commissioner Carly Kind and Director of the Code Taskforce Kate Bower. They discuss how the proposed privacy code will work and how it will shape and benefit young people’s privacy in the future, with the potential for positive spillover for all users regardless of age. Carly and Kate also answer questions sent in by TPDi’s three youth ambassadors. Listen to the episode and provide your input on the draft code – consultations open until 5 June 2026!
Links:
OAIC Privacy for Kids (for copy of the draft code, and loads of great resources for parents and kids): https://www.oaic.gov.au/privacy/privacy-for-kids
Participate in a Roundtable on the draft Code:
Recap the way the Code might work with Digital Duty of Care:
https://techpolicy.au/podcast/part-5-next-steps-australia-v-social-media-mini-series
For transcript and full show notes visit techpolicy.au/podcast
Transcript
Johanna: The Tech Policy Design Institute acknowledges and pays our respects to all First Nations people. We recognize and celebrate that, among many things, Indigenous people were Australia’s first tech innovators.
Kate: I mean, privacy is not dead. But I would say that, wouldn’t I? What we’ve heard from children and young people when we engage with them is not so much that privacy is outdated, but, A, they don’t feel like they have enough education. They tell us: we get a lot of online safety education, we don’t get enough about privacy.
I really hope that the code changes that. But, B, I also think if you speak to young people, and teens in particular, you can see in their behaviors that they care about privacy. So things like ephemeral messaging, so disappearing messaging, are very popular with young people. The group chat, as opposed to just putting everything out on your social media for everyone to have an opinion on, is very popular.
But I think even the way that young people engage as content creators and influencers shows that they understand the contract here, right? Like they understand that their opinions are worth something and they wanna see something from that side of the bargain. So I think privacy is not dead, and I actually think that young people have some of the most sophisticated understandings of privacy.
Johanna: Hello and welcome to another episode of Tech Mirror. I’m Johanna Weaver, your host, and this is the podcast where we talk about how technology is shaping our world, but also how we, the humans, can shape technology back. Now, for regular listeners of the pod, you would be expecting the final episode in our deep dive on tech and geopolitics.
We had an excellent lineup for you with the Danish and the Canadian ambassadors, who Australia works very closely with internationally on many of these technology policy issues. Unfortunately, though, the Danes had an election on the 24th of March and haven’t yet managed to form a government, so the ambassador can’t represent a government when the government hasn’t been formed.
An interesting piece of diplomatic protocol for you all. So we have put that episode on hold, but we’ll bring it back for you as soon as the Danes get their act together and form their government. But never fear. There’s always something new and interesting to be focused on in tech policy land. And today we’re bringing you an episode on the Children’s Online Privacy Code, which is currently in draft form and open for consultation.
And we have two absolutely stellar guests to join us. Carly Kind and Kate Bower. So welcome to Tech Mirror.
Carly: Thanks for having us, Johanna.
Johanna: Now Carly will be a familiar voice to many on the pod. She’s Australia’s Privacy Commissioner. She started about two years ago, and before that she was the inaugural director of the Ada Lovelace Institute in the UK.
And she has a long career in privacy and human rights law. Carly, it’s wonderful to have you back on the pod. And Kate is currently the Director of Privacy Reform, Implementation and Social Media, which is PRISM, great acronym, at the Office of the Australian Information Commissioner, or the OAIC.
And Kate is leading the work on the design of the Children’s Online Privacy Code. Again, she brings a long history of advocacy and work in this field. I think one of the most notable examples of the work that Kate did before she joined the commission is the campaign and investigations with CHOICE that led to the prosecutions of Bunnings for indiscriminate use of facial recognition.
So two deep subject matter experts who are committed to improving children’s online privacy, joining us here today. To kick us off with the first question: given the background that you both have in advocacy and civil society, but now sitting within a regulator, how has that background shaped the way the two of you have engaged in this process of drafting the Children’s Online Privacy Code?
Carly: I think Kate and I both share the experience that working in civil society to effect change is one of working in constrained funding environments, where you have to be really clear-eyed about how you’re achieving impact, what that impact is, and the best way to get there. And so I think we both bring that to our work.
We’re both very kind of purpose driven, I think, in what we do. And I think the other thing that having come from civil society and advocacy has really given me is an appreciation for the rich amount of expertise and intelligence that is in those communities that can be drawn upon and leveraged by regulators.
And we’ve really tried to do that in the consultative approach we’ve taken around this piece of work over the last year or so. We know there’s an immense amount of expertise on children’s rights and children in the digital environment, and we’ve really sought to bring that in to the OAIC, both literally, in hiring people like Kate and her team, but also in terms of inviting those people to contribute to the consultation, to partner with us on building resources, et cetera.
Johanna: And what about for you, Kate?
Kate: Yeah. I would say, to a certain extent, you know, advocacy is the art of the possible, and you are always operating in those constrained environments. And I think that is similar to regulation, specifically at the OAIC, where we know that there are constraints. But the thing that that advocacy background also gives me is a sense of how hard fought some of these opportunities are.
And I think that really gives, I think, both Carly and me a sense of ambition about what’s possible here, and a determination not to let this be a wasted opportunity. It’s very, very rare that we get these opportunities to actually uplift and really progress privacy rights, particularly for some of the more vulnerable groups of children in our society.
And we really take seriously that opportunity. And you know, I bear heavily the weight of the responsibility that comes with that, but I also am appreciative of the fact that we have such an engaged child rights and digital rights civil society in Australia who are really supporting us in that mission.
And so I think it’s actually been incredibly beneficial to come from advocacy. And now I kind of have to walk the walk, having been the person on the other side saying, hey, the government should do this, the regulator should do that. Now I have to deal with the messiness and the complexity, but also the incredible opportunity that that is.
Johanna: And Carly, I remember when you first started your term and we were talking about how part of the decision making behind taking up and accepting the offer to become the Privacy Commissioner was that you saw such an opportunity in the five-year term.
So for those listeners who haven’t been following along really closely, can you just situate this children’s online privacy code, which there is a draft out which you are consulting on right now.
How did it come about? What was the context for it?
Carly: So, as Kate said, civil society, researchers and advocates have been really pushing for a children’s online privacy code for some years now. Organizations like Reset Tech and others have really led that work. And in some respects, they’re pushing at an open door, with the Australian government being very committed both to updating the Privacy Act at large and, in particular, to advancing protections for children online.
And we’ve seen the work that’s been done in the social media minimum age space as one very prominent initiative seeking to achieve that end. So that pressure was building, and the consensus around the need for the code has been building for some time. I think it particularly responds to the fact that the Privacy Act is old legislation, which is outdated in many respects, and importantly, it doesn’t currently even mention children, let alone include specific protections for children.
And so I think it became very clear that it wasn’t fit for purpose, in acknowledgement of all the ways in which children’s personal information is currently being collected, used and disclosed online. So in 2024, Parliament passed the Privacy and Other Legislation Amendment Act, which was also known as tranche one of the Privacy Act reforms. Tranche one included a few, but important, reforms to our legislative scheme, and the key one was really a mandate for the Office of the Australian Information Commissioner to develop the Children’s Online Privacy Code. And I should say we have been working now for 18 months or so to do that.
In the first year we did widespread consultation as well as a lot of work in actually drafting the provisions of the code. It has just gone out for consultation and that consultation on the exposure draft is open for two months and then it will be returned to us. We’ll do some more work on it to ensure that everything we hear during the consultation is reflected in the final code, and then it has to be registered by the Attorney General by December this year to come into effect thereafter.
Johanna: So we’ve got a draft on the table, there’s been considerable consultation that has led to the crafting of the draft, and it’s out for consultation, then it will be revised. Kate, can you take us through, I guess, some of the key pieces of reform that you’re hoping to bring forward in this particular code?
Kate: When we looked at developing the code, obviously we did a lot of consultation, particularly with children and young people, and we really tried to listen to what their expectations were for the code. And some of the key messages we got were really around young people wanting to have more knowledge about what information is being collected about them, more agency and more control, and a sense of having that agency and control to opt in, rather than things being automatic.
So some of the key obligations of the code go to those points. One is a kind of data minimization principle, which says that, by default, the information collected is to be only what’s strictly necessary.
So a narrowing from reasonably necessary to strictly necessary. And then the other things need to be opt-in: so offering children the options to turn on settings for things like personalization or targeted advertising. There’s an overarching obligation for all information handling, so collection, use and disclosure, to be in the best interests of the child.
That was something that came clearly through, particularly from the child rights and digital rights spaces as being a really meaningful protection so that even when children choose not to engage in using those controls, they’ve got that safeguard, that information can’t be used against them. And then the other is a suite of obligations really around transparency by design.
So having information explained to children, and controls available to children, in ways that they can understand. So having plain-language explanations right through from the privacy policies to the way that in-app notices work in terms of APP 5, all the way through to complaints handling and how information access requests are processed.
All of that needs to be done in a way that children themselves can understand, in a way that gives them more agency and autonomy to control their online experience.
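To make those defaults concrete, here is a minimal sketch in Python of what strictly-necessary-by-default collection and opt-in extras could look like for a service implementing the code. The class, field and method names here are illustrative assumptions, not anything specified in the draft code itself.

```python
from dataclasses import dataclass

@dataclass
class ChildAccountSettings:
    """Illustrative privacy-by-default settings for a user under 18.

    Following the approach described above: collection is limited to
    what is strictly necessary by default, and everything beyond that
    (personalization, targeted advertising, location sharing) is
    opt-in rather than switched on automatically.
    """
    personalization: bool = False        # opt-in, off by default
    targeted_advertising: bool = False   # opt-in, off by default
    location_sharing: bool = False       # opt-in, off by default
    # Hypothetical "strictly necessary" collection set:
    collected_fields: tuple = ("account_id", "display_name")

    def enable(self, setting: str, has_valid_consent: bool) -> None:
        """Turn on an optional feature only with valid consent."""
        if not has_valid_consent:
            raise PermissionError(f"cannot enable {setting} without consent")
        setattr(self, setting, True)
```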
Johanna: There’s a lot in there, and we’ll unpack that sort of step by step. And just to jump on one acronym there: APP, the Australian Privacy Principles.
I suspect we’re gonna hear that one a few times through the conversation. So those are the broad overarching objectives for this particular code. Carly, can we just take one step back and look at two foundational, definitional questions? One: how are we defining children in the context of the code?
And then the other is looking at the scope. So when we say online children’s privacy code, what services or what platforms are going to be in scope for the code?
Carly: Yeah, absolutely. So children are everyone under the age of 18, and for the purposes of the code, entities will have to apply the protections of the code to children as defined as under the age of 18.
Now, some aspects of the code have a requirement to get consent from children or from young people, and that requirement for consent requires that the individual has capacity to consent, of course. And the OAIC’s longstanding guidance has been that children 15 and over are taken to have capacity to consent. So for the relevant provisions in the code which apply to consent and the capacity to consent, if you are under 15, you require a parent or guardian to consent on your behalf. That’s one of the important distinctions the code makes, and we can come back to exactly what the code requires in that context. Importantly, the code requires entities to take reasonable steps to understand when they are actually dealing with children, so that they can afford them the protections of the code.
And in that regard, they’ve got two options. They can either afford the protections of the code to all users on their service or site, which would absolutely be our preferred outcome, or they can choose to apply it just to children, in which case they need to take reasonable steps to find out the age of the child. In terms of which services and sites have to comply with this law, the mandate given to us by Parliament was to develop a code that is directed at online services that are likely to be accessed by children. Now, that threshold, likely to be accessed, is the same as is in use in the UK with their age appropriate design code, which is one of the instruments that the Australian code is modeled on.
Likely to be accessed is a kind of open, evaluative threshold. It requires consideration of a range of different factors. We can’t say, yes or no, absolutely, that some platforms are or are not likely to be accessed; it will require a consideration of things like: is the platform directed at children?
Does it have imagery and features that are particularly appealing to children? But also, does the service know for a fact, or should it know, that there is a significant number of children using the service? So a range of different factors will feed into whether a service is likely to be accessed by children.
Now, in addition to capturing that set of online services, we’ve also proposed in the exposure draft to extend the application of the code to services whose activities are primarily concerned with the collection of children’s information, children’s personal information. And in that regard, we’re trying to also capture online services where there’s a lot of children’s personal information, but the child themselves didn’t necessarily put it there. The child is not the user; their parents or their carers or others in their life are the users. So in that respect, we’re thinking about, for example, the apps that are used by daycare providers to track children’s daily attendance and which keep track of photos and things like that.
The final thing to say, in terms of entities that are and are not captured by the code, and I probably should have begun with this one, is that they have to be an APP entity. That means they have to already be captured by the jurisdiction of the Privacy Act, which has a few nuances to it, but is essentially any entity with a revenue of over 3 million Australian dollars a year. And also, the code will not apply to health services, so that subset of entities is excluded from the application of the code.
Johanna: So I think the two key takeouts for me out of that, Carly, listening to it, are: A, it’s much broader in scope in terms of its application to platforms and services, not just social media, for example, like the social media minimum age restrictions, or the delay; but also the different age banding that you have there.
So the social media laws apply to those under 16, and the code you are talking about applies to anyone under 18. But then you also have a pivot point at 15 for the consent.
So I wanna move now to a question from a TPDi youth ambassador. This is a program that we have just kicked off at the Tech Policy Design Institute, and it’s supported by Lego, and we’re very grateful for their support. We’ve been through a selection process to identify three youth ambassadors to bring into all of TPDi’s work this year, to ensure that we are getting that intergenerational perspective into technology policy design.
Because of course, the people who are going to be living with the consequences of so much of this technology policy design are young people. I’m gonna throw now to Bianca, one of the inaugural cohort of TPDi’s youth ambassadors, to put a question to you, Carly.
Bianca: Hi Carly. My name’s Bianca Kendrick and I’m the Alannah & Madeline Foundation’s Youth Program and Policy Advisor.
I wanna start by saying thank you so much for all the effort and work you and your commission have put into developing the code. It’s looking fantastic and really comprehensive, and we appreciate all of your work. The question that I had in relation to the code is whether social media platforms will be exempt from the code in relation to under sixteens because they’re not intended to be accessing these platforms under the social media delay.
And if so, how does this ensure protection for children who may still, in practice, be accessing these platforms?
Carly: Yeah, great question. It’s, um, a kind of tricky intersection of the legislation, but I would say that the platforms would have to be able to establish that they’re not likely to be accessed by children.
And in that regard, I think they’d have to have some high level of confidence in their age assurance and restriction policies. In any event, given that the code covers up until 18 and the social media ban restricts children under 16, there is of course the matter of the 16 and 17 year olds who will be able to access social media platforms and would be covered by the code, so the platforms would absolutely have to comply to that extent.
Kate: I would also point out that there will be a range of other types of services where that kind of consideration may be made. So things like online gaming, where there are age restricted material codes in place. Really, the question will be: how robust is their age verification model? If they can actively demonstrate that there aren’t under-18s using their game or their platform, then the code wouldn’t apply.
But if there is actual evidence of children continuing to use online games, or continuing to be on the platform, then it could bring them within scope of the code.
Johanna: That was such a fabulous question, Bianca, going straight to the pointy end of the issues. So Kate, going to another question on the detail of this proposed code.
It’s proposing protections around requiring online services to notify children when their parents are consenting to the collection of personal information on their behalf. So this is when mum turns on ‘track your iPhone’, family monitoring, et cetera. What feedback have you received from people so far on this, and what happens if the child’s and the parent’s consent don’t match?
How is the code proposing to deal with this issue?
Kate: I’ll speak first about how this has been received. I’d say this is one of the aspects of the code that people have been most interested in, and I think it has generated the most excitement. I think part of that is because it’s a novel obligation, and something that we considered after speaking with regulators around the world, those who currently have a children’s code in place, such as the UK’s ICO, whose code we mentioned earlier; but also we met with the New York State attorney’s office, who are considering introducing something called the Safer Kids Act, which has a similar obligation.
The thinking here is that from our consultations with children, we know they don’t always know what information’s being collected about them. They don’t always know what their parents are consenting to on their behalf. The reason for this obligation is that it then introduces this kind of digital literacy piece.
We know that children have developing capacity as they age, but we also recognize that perhaps many of them are too young to kind of legally be on the hook in terms of being able to provide consent. But this is supposed to help them participate in that process, and hopefully encourage open conversations between children and their parents and carers about what these types of consents actually mean, and for children to get that information in a way that they can understand, in a way that’s clearly explained. That hopefully improves their digital literacy, so that by the time they turn 15 and are consenting on their own behalf, they’ll know what that means, and they’ll still have the protections of the code for another three years after that point.
Then hopefully, by the time they move into the world as digitally literate citizens at 18, they’re well equipped to participate and to make some of those choices meaningfully. In terms of what happens when parents and children disagree: legally, in terms of the code, it is the parent’s consent that is the legally required consent. So what we’re asking of children is what we’re calling assent. They have to agree in the user journey. It means if a child turns on a feature, it provides them with that notice and it says: hey, this is what this is about, this is what you’re turning on, we need to ask your parents about this.
Do you agree? If they don’t agree, the notice won’t be sent to the parents. But in the opposite circumstance, when a parent turns on that setting, either against or with the child’s wishes, the child gets the notice. The child actually does still have an ability to withdraw the consent, but ultimately it’s the parent who’s the final decision maker.
I’m hoping that this doesn’t lead to too much conflict around the dinner table, but we kind of know already that online spaces and screen time are already topics of discussion for children and parents. That’s certainly the case in my house as well. I will admit there are many robust conversations about my wishes versus my children’s wishes, but I hope that this does give a bit more knowledge to children and young people, and enables them to become more participatory in how those decisions are made.
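As a rough sketch of the two-step flow Kate describes, assuming hypothetical child, parent and feature objects, the sequencing might look like this in code; nothing here beyond the ordering of notice, assent and consent is taken from the draft code itself.

```python
def child_initiated_flow(child, parent, feature) -> bool:
    """Two-step model for a child under 15, as described above:
    the child sees a plain-language notice and assents first;
    only then is the request sent to the parent, whose consent
    is the legally operative one."""
    if not child.assents_to(feature):   # plain-language notice shown here
        return False                    # no notice is sent to the parent
    if not parent.consents_to(feature):
        return False
    feature.enabled = True
    return True

def parent_initiated_flow(child, parent, feature) -> None:
    """When the parent turns the setting on, the child is notified
    and can later withdraw, but the parent's decision is final."""
    if parent.consents_to(feature):
        child.notify(feature)           # child learns what was consented to
        feature.enabled = True
```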
Johanna: So just let me step this out and check that I’m understanding it correctly. So, up until 15, it’s the parents who have to consent and the children with the associated account will be notified when the consent happens.
And that’s just something that’s gonna be happening in the app. It’s not, you know, extra things that people need to go and do. The requirement under this code is just that the tech companies actually update the tech so that this is a good user process. And then above 15, it’s the children who are then consenting and the parents who are being notified.
Kate: So the way that it works is that children from the age of 15, which is the age we’re proposing, and we’re certainly very open to feedback from stakeholders during the consultation on whether 15 is the right age, will be able to consent on their own behalf legally, between the ages of 15 and 18 and ongoing from there.
So it’s only for those children under 15 where the parent provides the legal consent.
Johanna: It’s really interesting, and it is one of the things that our youth ambassadors actually praised in their feedback: this distinction between children and young people, and that it helps to build the muscle of digital literacy in young people, rather than just sort of, you’ve got no access and then you have access.
You can see how that is a progression, and that that is positive. So on that note, we’re gonna go to another question from a TPDi youth ambassador. And this question is coming from Rachael.
Rachael: My name is Rachael Burns. I am a mental health and disability advocate from Boorloo, or Perth, Western Australia. I’m also the founder of Integrity Initiative. My question is: how is best interest of the child being defined and assessed for young people across different circumstances, particularly for those who might be more vulnerable, such as young people with intellectual disabilities or mature minors under 15?
And importantly, how are these groups being meaningfully included in consultation or co-design processes?
Carly: I’ll take the second part of it first. We have had a lot of engagement with child rights organizations and others that work in community with children and young people, to really make sure we’re getting a broad cross-section of children inputting into the exposure draft of the code.
And we wanna make sure that includes children from particularly vulnerable communities. We also have worked with partners to develop curriculum materials for teachers to use in schools, and we’ve done that in age-appropriate terms, and we hope that we’ll be able to get some input from teachers and their students as well, including not only in written form but also in visual form, which I think is a really nice approach.
So we really have thought through, again informed by all of the experts with which we’ve engaged, what is the best way to make sure we’re getting the broadest possible set of input from children of all types and demographics and needs. In terms of best interest of the child: best interest of the child is an internationally recognized standard that is found in lots of different legal frameworks.
It originally comes from the United Nations Convention on the Rights of the Child, which states that the best interest of the child should be a primary consideration when making decisions and taking actions that affect children. Now in practice, what this means is that if an online service is collecting or using or sharing personal information about a child, it must consider what the child’s best interests are, which includes asking: is this necessary?
What are the potential risks of harm that might be associated with the collection? And that harm might include, for example, the potential for exploitation, not only of the sexual kind but also the commercial exploitation that might flow from the collection of children’s personal information. It really demands that the service ask: is this in the best interest of the child?
And basically the code says that if the collection isn’t aligned with the best interest of the child, it can’t be considered fair and lawful for the purposes of APP 3.5 of the Privacy Act. So again, it’s a kind of open and evaluative criterion, but it doesn’t require an analysis of the best interest of the particular child that the entity is dealing with; it is rather a more general standard of what might be the best interest of children more generally.
Kate: There is some further detail in the explanatory statement that sits alongside the exposure draft that I think goes to this point. But in most instances, it is this consideration of the best interest of the child as a cohort of children. The expectation here is that entities will consider the service and the type of service it is, and make some assessments about the cohort of children that are likely to be using that service. So it’s not necessarily a generic user, or people aged zero to 18. If your service is directly aimed at the early childhood space, or if it’s a learn-to-read app, or if it’s used in primary schools, for example, then the expectation is that you consider the actual user cohort of children that are most likely to be using your service. But you do also give consideration to the vulnerabilities or the experiences of marginalized children within that group. So it does require a consideration of those children who are at the margins, or who might have different capacities or be experiencing different kinds of vulnerabilities.
So it certainly won’t be sufficient just to think, oh, I’ve picked a generic 10-year-old and therefore I’ve just thought about their best interest. It is more work than that. There will be some circumstances, particularly in relation to an information access request; the code introduces some new obligations around, and some new rights for, children to be able to request access to their information, and also to request potential destruction of their information. The entity may need to consider the rights of the individual child in that circumstance. So there might be situations where there may even be a tiny bit of conflict, I think, between what’s in the best interest of children as a cohort and of that individual child.
And certainly there is some information in the explanatory statement so far, but that is also something that I think we’ll probably develop guidance on as time goes on, as we see how these obligations play out in practice. But there’s also the usefulness of a PIA, that’s a privacy impact assessment, so we don’t go too far down the jargon route. There is an obligation in the code to conduct a PIA, and I think that’s really where a lot of the heavy lifting of this can be done: to really think through, meaningfully, what’s the information that we collect and is it in the child’s best interest, and what else are we then doing with that information?
And is that in the child’s best interest?
Johanna: So Kate, you’ve just referred there to the obligation to destroy sensitive information, and that is an excellent segue into the final question we have from our youth ambassadors, which I have to say are such pointy questions. They’re fabulous.
George: Hi, my name is George Truness and I’m a penultimate year student in the Bachelor of Advanced Finance and Economics at the University of Queensland.
And my question for the draft privacy code is that it states an entity must destroy sensitive information collected for age verification as soon as practicable. It also requires entities to take reasonable steps to ascertain the age of the user. Given that both reasonable steps and as soon as practicable are inherently qualitative, how will the code prevent this ambiguity from being exploited in practice?
Thank you.
Carly: So, I mean, these kinds of terms are really common in the Privacy Act and other similar principles-based legislation. And they can be really important, actually, because they allow the obligations in the code to scale with the size of the entity and the circumstances that the entity is in, when the same piece of legislation applies to the richest companies in the world, like Meta, for example, as well as to local Australian companies building small apps to use in daycare centers.
It is really important that we as the regulator and they as the regulated entity, can take into consideration what their circumstances are and kind of toggle our expectations accordingly. And that’s why we use phrases such as reasonable steps in the circumstances. So I think those are quite important features that make the legislation more usable and more kind of technologically neutral.
Having said that, I think those kinds of requirements should also be viewed in the context of, for example, the best interest obligation in the code. So in all circumstances, we are asking entities to ensure that they’re doing the right thing by children, and any deliberate attempts to contravene that, or, I suppose, disingenuous interpretations of principles-based requirements that conflict with the best interest of the child, won’t be consistent with the spirit and purpose of the code, and therefore won’t be in compliance with it.
And I think that’s why we’re relatively happy with where we’ve landed: using those more scalable obligations against a backdrop of strong and absolute requirements where possible, and the underpinning of best interest of the child, which runs throughout the code.
And in a context in which this is legislation that is designed to guide compliance, but where enforcement happens after the fact, enforcement will be a very important tool, both in terms of creating and providing examples of what good looks like and what non-compliance looks like with respect to those principles-based requirements, and in deterring any bad faith or disingenuous interpretations of those requirements.
So we are already thinking about how we will be developing an enforcement strategy around this code because we think that that will be an important part of ensuring there’s a kind of belt and braces approach to regulating online services.
Johanna: And I think a lot of people will be focused on this question of enforcement and how do we enforce this, you know, with one eye to some of the battles that Julie Inman Grant, the eSafety Commissioner, has been having and probably will be having in the near future.
We’ll come to that question in just a second, Carly; I do wanna drill down a little bit more on that. But just one more question on the detail of the code. Carly, I think it was you who mentioned earlier that there’s an option for providers to either verify that people are over the age of 18, and therefore the code doesn’t apply, or apply these provisions to everybody. And obviously the second of those is better, because we are lifting the bar for all of us. Kate, if platforms or services are choosing the first option, that is, they have to verify age, does that mean we are looking at age verification technology? And how does that fit into the code, especially given the controversy around that with the social media minimum age delay?
Kate: Yeah, I think it’s a good question, Johanna, and certainly one that’s been of interest to stakeholders. In terms of what we’re proposing in the code: certainly our first preference is that the code is applied as broadly as possible. That’s a better outcome for all end users. Our other expectation is that when a service is clearly for children, the code should apply.
So we think in many circumstances, like some of those examples I mentioned earlier where a service is directed at children, where the service is for children, age assurance shouldn’t be used; in fact, the code should just apply with no age assurance requirements needed. But in these circumstances where we have services which we know are accessed by both adults and children, I think it’s reasonable to assume that some of those entities would like to have things like targeted advertising as part of their revenue model, and that necessarily requires personal information collection, which is restricted somewhat by the code. So realistically, we think age assurance will be used. But what we’re proposing in the code is, firstly, if there is already an age signal, that can be relied on. So for example, if this is an online service that’s subject to the age restricted material codes, or subject to the social media minimum age, there are existing signals to rely on; or perhaps it’s a service like a bank, which has a know-your-customer requirement.
If there’s some other way in which you already know the age of the user, you should rely on that in the first instance. If that is not the case, and in this narrow set of circumstances where we think age assurance should be used, we’re asking the entities to do a risk assessment based on the privacy harm of the particular collection.
So what we’re asking them to do there is slightly different from thinking about age assurance in an online safety context, which is about restriction; it’s thinking about the privacy harms that come from the personal information that you are asking for. If it’s very minimal, so perhaps you’ll just collect an email address and maybe a country location, and let’s say there’s no secondary use that might lead to harmful outcomes such as data breaches or targeted advertising, et cetera, it might be the circumstance that a self-declaration or parental attestation is a suitable method of age assurance.
So essentially, that’s when a child just says, I’m over 18 or I’m under 18. In a situation where there’s a potential for more harm, though, so in the circumstance where there’s a revenue model that relies on passive tracking and use of pixels, et cetera, where there’s targeted advertising at play, where there are multiple secondary uses of personal information, that’s where we would anticipate a more robust method of age assurance.
So here we actually are, I think, pushing it onto entities to really consider what’s at stake. Certainly our view is that the easiest way to comply with the code, across all its obligations, is to take a data-minimizing approach and design for children. That’s what we hope the code incentivizes.
And so we really would like entities to think seriously here about whether age assurance is something that’s needed, or whether there’s actually a way to design our services so that they can be used by children and by adults, with those higher levels of privacy protections in place.
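As a very rough illustration of that risk-based tiering, here is a decision sketch in Python. The inputs and the three tiers paraphrase what Kate describes in the episode; the function itself and its inputs are assumptions for illustration, not the draft code’s actual test.

```python
def choose_age_assurance(has_existing_age_signal: bool,
                         minimal_collection_only: bool,
                         has_secondary_uses: bool,
                         targeted_advertising: bool) -> str:
    """Illustrative risk-based choice of age assurance method."""
    if has_existing_age_signal:
        # e.g. already subject to age-restricted material codes,
        # the social media minimum age, or know-your-customer rules
        return "rely on the existing age signal"
    if minimal_collection_only and not (has_secondary_uses or targeted_advertising):
        # low privacy harm: say, just an email address and a country
        return "self-declaration or parental attestation"
    # passive tracking, pixels, data enrichment, targeted ads, etc.
    return "more robust age assurance"
```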
Johanna: And just one follow up question on that.
You’ve mentioned the targeted advertising, and I think, certainly in the conversations I’ve been having, this is one of the things that is misunderstood, to the detriment of understanding the detail of the code, in the sense that it has a data minimization requirement but not a direct ban on targeted advertising.
Could one of you speak to that? I think it’s really interesting and I have heard a lot of conversation about that, Kate.
Kate: Yes. So yeah, I think this is one of these weird peculiarities of the Privacy Act: what we have in the Privacy Act is an Australian Privacy Principle that refers to direct marketing.
And direct marketing can be very broad in scope. So it can be things as straightforward as: I’ve visited a website and I wanna find out more about the services on that website, so I sign up to a newsletter and I provide them with some personal information, and then they send me a newsletter, and it’s what I was anticipating; right through to the potentially much more harmful types of profiling and targeted advertising that kind of follow you around the web and involve the use of, you know, passive tracking technologies.
And so all of that is encapsulated within direct marketing. So the way that we’ve considered this in the code is that you can only do direct marketing in line with the Australian Privacy Principles when it is, A, in the best interest of the child, because that best interest of the child is an overarching obligation across all collection, use and disclosure of children’s personal information; B, when it’s with the consent of either the child, or the parent in the case of a child under 15; and, importantly here, I think in relation to profiling in particular, when the information is collected directly from the child. So what that means is that if a child wants to, say, sign up to a really informative or helpful newsletter that encourages their civic and political participation in the world, you know, they’ve gone onto a Digital Rights Watch website or something and they wanna participate in a campaign, or they want to join Project Rockit, all of those things which might otherwise fall within direct marketing should be permissible, because they’re clearly in the best interest of the child.
And we wouldn’t wanna limit their participation. But what it should mean is that some of these practices, like passive tracking, use of pixels, use of SDKs, use of data broking and data enrichment around children for the purposes of targeted advertising, should be limited by those restrictions. So it’s not a blanket ban, but I think a blanket ban in this instance would actually limit children’s participation and engagement.
So I’m hoping that people, stakeholders can engage with that level of detail, and certainly we’re always open to feedback about how it might be improved. But that’s certainly where we’ve tried to go with the direct marketing changes.
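Read that way, the direct marketing conditions Kate lists behave like a conjunction: all of them must hold at once. A minimal sketch of that reading, assuming boolean inputs that a service would have to assess for itself:

```python
def direct_marketing_permitted(in_best_interests: bool,
                               has_required_consent: bool,
                               collected_directly_from_child: bool) -> bool:
    """Illustrative reading of the conditions described in the episode:
    direct marketing to a child is permissible only when it is in the
    best interests of the child, done with the required consent (the
    child's, or the parent's for a child under 15), and based on
    information collected directly from the child, which rules out
    data broking and enrichment for targeted advertising."""
    return (in_best_interests
            and has_required_consent
            and collected_directly_from_child)
```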
Johanna: Mm. And I think it’s another example of the very nuanced, but very, I think, sensible approach that you’re taking through this, in really considering the rights of the child and the right to participate, but also the need to have some higher levels of protection for younger people, and how we can thread that needle.
So I wanna pivot a little bit now, move away from the detail of the code, at least what’s in the draft. And this is very much open for consultation now, and we’ll provide some details at the end of how people can participate and help answer some of these questions. But Carly, let’s look at the code a little bit more in context.
So you’ve mentioned there that enforcement is going to be a key challenge. Let’s assume we’re past the 11th of December, when, I think it’s the 10th of December, the code has to come into force. So we’ve passed the deadline, and we’ve got this fabulous children’s online privacy code in place.
Are you worried about enforcement, given the size of some of the companies that you’re likely to be coming up against, and the size and heft of the Privacy Commissioner’s office, the Office of the Australian Information Commissioner? Which, I’m just gonna say again, because I say it every time I talk to you, but no one seems to be listening, needs to be funded more.
But Carly, how are you thinking about that enforcement battle, which is definitely going to be ahead of you?
Carly: Yeah. Given that enforcement is a really important lever to ensure compliance generally speaking, but particularly when you have new obligations like this, being able to enforce them, and being seen to enforce them, is really important to encourage and motivate compliance. I think the biggest challenge for us is going to be related to resourcing, in the sense that there are a lot of obligations in the code, there are going to be a lot of online services captured by the code, and some of them, as you say, are going to be very large entities. And we do have a wide range of regulatory and enforcement powers that we can use.
And actually that’s to our benefit, absolutely, because we can choose the right tool for the right issue, problem and entity. But I think it will be a challenge for us to take the number of enforcement actions that we would ideally like to, to really boldly take this code from proposal to reality, and we’ll have to be very proportionate and very strategic, I suppose, in which enforcement actions we choose, so that we’re able to achieve results in those actions as soon as possible.
Because one of the things we know from experience is that when we commence enforcement action, particularly against large entities, and particularly against some of the digital platforms who have really priced in regulatory action as a cost of doing business, we find ourselves in multi-year-long litigation endeavors, and that wouldn’t necessarily be the best thing for the community.
So we will have to be specific and tactical in how we choose our enforcement options here. The other resourcing challenge for us is that, because we have a complaints handling function, we’re able to receive complaints for contraventions of the Act, and we’ll be able to receive complaints for contraventions of the code directly.
And we’re in a circumstance in which there’s no dedicated dispute resolution scheme for digital platforms in particular, so we anticipate that we will get a lot of incoming complaints from children and their parents and carers. And in a way, if we’re doing our job right, we’ll get a lot of complaints, because we will have educated people well about the obligations in the code. You know, we anticipate the community will be quite active and engaged in identifying instances of non-compliance. So that’s a potential risk and challenge for us, as you say, as a small regulator, that we’ll have to really think about how to manage. Having said all of that, I think we have shown that we can be tactical in how we choose our enforcement actions, and choose the ones that give us the biggest bang for our buck.
So we will be bringing that kind of lens to our enforcement strategy on this as well.
Johanna: Yeah, definitely small but mighty. So no disputing on that front. We’ve spoken a lot about consultation with young people and parents. The response you’ve just given me there, Carly, makes me ask: to what extent are you consulting with the tech companies themselves around the technical feasibility of implementing what you are asking?
’Cause one of the primary critiques of regulatory measures is often: well, it’s all well and good for regulators to say this, but it’s actually really hard to build the tech. And sometimes that’s true, and sometimes it’s not. So how are you engaging with the companies?
Carly: Yeah, so Kate’s team have done an immense amount of work in this regard in the first round of consultations in which I think we engaged with around 60 representatives from industry, and now going into the second round of engagements where we’re holding multiple webinars, round tables, and other sessions with industry, particularly trying to look at different sectors and sub-sectors.
So digital platforms are only one, but others as well. We absolutely are genuine in our desire to hear from those entities about where the proposals in the code aren’t workable, including where they’re not technically workable. We are, however, Johanna, informed by a few factors. One is that the code leverages many obligations that already exist in other places, both in Australia, so, for example, age assurance requirements are now found throughout the eSafety codes, as well as through the SMMA scheme, and also internationally. So one of the areas of questioning we’ve had is whether there’s a workable way in which to give effect to the right to have data deleted or destroyed.
And that, of course, is very firmly established in the European Union, where the GDPR has made that right a reality for, well, probably somewhere around eight years by now. So we have been conscious of the technical feasibility and have tried to leverage that where it is enshrined in other jurisdictions.
And as I referenced earlier, the UK’s age appropriate design code has also been an important reference point for us.
Johanna: And so that’s a really good and, I think, important point: how this fits in internationally with other obligations. So, Kate, Carly’s referred there to the UK’s age appropriate design code.
There’s also Ireland’s Children’s Fundamentals on data processing. I mean, these all have terribly boring names, but they’re so important in terms of the impact that they have on people’s lives. So rather than asking you what the things are that are the same, I actually wanna flip the question a little bit and ask: what is different in what Australia is doing?
Where is our point of difference? Are there points of difference? And if there are, why have you taken the decision to depart from the practices in other places around the world?
Kate: Well, as you say, some of these codes have been in place for a number of years, and in some ways that kind of looks a little bit like Australia’s lagging again, but it actually is also a fantastic opportunity to learn, to gather insights, particularly in the case of the UK, who are about to embark on a review of their age appropriate design code.
We very much benefited from conversations with the UK regulator, the ICO, but also with Ireland, and with Canada, New Zealand, France; anywhere where they’re either considering introducing these obligations or where they’ve been in place for some time. Certainly, the message we got loud and clear from engagement with industry last year was that where alignment is possible, we should seek to do that, ’cause it certainly does make implementation more straightforward. And so we have done that to a large extent.
But also, from those conversations, we’ve learned where some of those sticking points are, or where some of the enforcement challenges have been, or been able to learn from those other regulators: oh gosh, if we had our time again, this is what we might’ve done differently.
And so some of the novel obligations go to that. So, for example, there’s the obligation that exists in the age appropriate design code around parental controls being notified to the child user, which we’ve brought in. And there is also one around geolocation, parents tracking that, but we’ve extended that to any user of the service.
So, for example, that might mean, and I’m probably a bit dated now, this idea of when Snapchat first turned on Snap Maps and suddenly all these people were like, oh, I didn’t realize I was sharing my location with everyone in my contact list. So just some really basic lessons learned from other regulators about how these obligations might be improved.
And then the other novel obligation is the one that we talked about too, this two-step consent model. And that really came from speaking to child rights experts, and speaking to the New York State attorney’s office and learning from them. In fact, it was the ICO who first referred us to that Safer Kids Act when we were having those conversations around consent, as something that we might want to consider: how do we more meaningfully engage children and young people in that process of consent, and really think more about digital literacy?
The other thing I think that we have done in a novel way is the consultation approach. Certainly in the first design of the age appropriate design code, regulated entities and this issue of technical implementation were really the critical point. And I think that’s shown that it can be done, that it is technically possible.
In fact, it’s already happening in other jurisdictions. And that kind of freed us up to really think about: if we speak to the community at large, people who have an interest in this, and parents and children, who are such a large part of our community, the people who benefit from the OAIC upholding privacy rights and privacy protections, what are their expectations of the code?
And that’s really enabled us, I think, to move further into that space, and also to demonstrate, to walk the walk, if you will. Like, we’re saying you need to explain to children what privacy means in plain language; we also wanted to demonstrate that that was possible. And it was possible with a pretty small team and a pretty small budget. This is actually not something that requires you to spend millions and millions of dollars; this is something that you can do if you want to. And I think that’s also something novel that we’ve brought to the development of the code, as well as the obligations.
Johanna: We talk a lot at TPDi about best practice technology policy design, and there aren’t actually a lot of examples in the Australian tech policy ecosystem in recent times that I can point to.
But I think this is one where you actually are going through, almost in textbook style, the stages of best practice tech policy. So I acknowledge and applaud that, and we need to see more of it, because the outcomes at the end will be better, as I’m quite confident we will see with this. So, Carly, stepping a little bit away from the privacy code now, but not totally: how do you see the Children’s Online Privacy Code interacting with other pieces of technology policy in the ecosystem? Here I’m thinking about things like the social media age delay, but also the forthcoming digital duty of care, which the government has committed to implementing in this term of parliament, and just recommitted to in the response to the online safety review that came out a week or so ago.
It’ll be a couple of weeks by the time this goes live. This is such an important piece of regulation. How does it fit with the other pieces of the puzzle?
Carly: Yeah, I think that there’s a lot of synergy between the code and the digital duty of care proposals, as well as the existing regulation in the social media space.
What I think the code does really well is lay a really important bedrock of protection for children when it comes to their personal information, one that can span sectors and practices in a technology-neutral way. I think that’s quite important, as we see the potential for sectors like social media to fundamentally change, particularly as we move into more AI-driven technologies.
Our very conception of social media, or even digital platforms, may change fundamentally. And I think what the code does is it looks at everything through this cross-cutting lens of personal information, which is so integral to different services. So I think that will lay a really important foundation upon which other things can be built, or which other things can complement, including the digital duty of care.
I also think because the code includes the requirement around a privacy impact assessment, I think there’s again, a lot of synergy with requirements to do risk assessment around duties of care more generally. And so I think and hope and certainly it’s our aspiration that by acquitting your obligations under the code, you’ll contribute to acquitting your obligations in other requirements in this space as well.
So I think there’s a lot of synergy there. The last thing I’d say is that you may have noticed that a recent theme in a lot of our regulatory work has been fairness, and the code takes that forward in really important ways, as we’ve discussed, including in respect of nudge techniques, which we haven’t talked about too much in this call, but which are another thing covered off by the code. And that really positions us very well to make sure there’s synergy between the code and whatever comes forward in terms of tranche two of the Privacy Act reforms, which we anticipate and hope will include the fair and reasonable test. So we’ve really thought through how, by continuing to lean into fairness as a really important key concept of the Privacy Act, we’ll position entities well to make that transition, should that legislation go through.
And also the unfair trading practices work that’s being done by government as well, and the kind of crossover there too. So I think we’re really conscious of that broader environment of legislative change, and acknowledge that there’s crossover, but I think that creates synergies rather than gaps.
Johanna: Another thing that I particularly like about this code is the breadth of it. So you’re future proofing, not narrowing it to particular types of technologies; it’s broad enough that it will be able to stand the test of time. Another really pivotal feature, recognizing that you’re not gonna be able to go back and do these things over many times.
And for folks who are interested in learning a little bit more about some of those concepts that Carly was just talking about there, both the unfair trading practices and the fair and reasonable test, you can go and listen to the final episode in the social media miniseries that we had on the podcast at the end of last year, which covers many of those concepts.
Kate Bower, is privacy dead? Is it an outdated concept? Children have different concepts of privacy. You hear all this all of the time. How do you respond to that?
Kate: I mean, privacy is not dead. But I would say that wouldn’t I? What we’ve heard from children and young people and engage with them is not so much that the privacy is outdated, but A, they don’t feel like they have enough education.
They tell us: we get a lot of online safety education, but we don’t get enough about privacy. I really hope that the code changes that. But, B, I also think if you speak to young people, and teens in particular, you can see in their behaviors that they care about privacy. So things like ephemeral messaging, disappearing messaging, are very popular with young people.
The group chat, as opposed to just putting everything out on your social media for everyone to have an opinion on, is very popular. And I think even the way that young people engage as content creators and influencers shows that they understand the contract here, right? They understand that their opinions are worth something, and they wanna see something from that side of the bargain.
So I think privacy is not dead, and I actually think that young people have some of the most sophisticated and well-developed understandings of privacy.
Johanna: I a hundred percent agree. I think they have a far more nuanced understanding of privacy than most people my age, for example, let alone the generation above.
So I endorse that response. Carly, I’m gonna ask you what success looks like for implementation of the code as your final question, but I just wanna pause first: is there anything we haven’t touched on in this conversation that you think is a really important feature of the code, that you want to draw people’s attention to?
Kate: Just something in terms of the change that we’re hoping to see, and Carly will speak to what success would look like. Carly spoke to the challenge of enforcement, but I actually think reputation will play a very important role here. We’re already seeing more and more organizations proactively move to developing and designing online services with children in mind, even before the code is in place.
They’re designing services with that data minimization approach. And I actually think there will be a bit of a pull, and also, for the organizations that don’t do that, a bit of eyes looking at them going, oh, come on, pull your socks up here. In fact, the first industry written submission that we’ve got is from a small organization, a small startup in Australia, saying, hey, we work in this space and we just wanna tell you we think this is technically feasible and we think you should go for it, basically. And they’ve given us some real-life examples of how their organization is already doing many of these things.
So yes, it is an enforcement challenge, and we shouldn’t mistake how big that challenge is. But I actually think we are seeing a shift. The ARC Centre of Excellence for the Digital Child had this manifesto for the children’s internet, which asked us all to think back to what children’s TV was like when we were younger.
The idea is that you would have a Sesame Street or a Play School that was designed with children’s developmental needs in mind, that was for children and in their best interests. That’s what’s been missing from the online experience, and I’m hoping what we’ll see through the code is a shift towards more organizations taking that opportunity to design their services in a way that benefits children.
And that you’ll be a bit on the nose if you’re not one of those organizations doing that.
Johanna: We start every episode of the pod by saying that technology is shaping humans, but we can shape it back, right? And I just think this code is one of those really tangible examples: you’re working on a piece of legislation, or a regulatory code rather than a piece of legislation if I’m being particularly finicky, and you’re getting people to consult on it.
But it is going to change the way that companies make technology, and it’s going to change it for the better. So this is really an example of how we actually have agency to shape the technology that we use every day. So Carly, what does success look like for you? You’re not even halfway through your five-year term yet, but imagine you get to the end of it and look back: in relation to this particular policy, what’s the change you wanna see?
Carly: Kate was incredibly articulate in the way she described her vision for this work. If I zoom out even a little further than that, I suppose I’d say two big things. One, as the parent of young children, I would love to start to see these changes filter through in my everyday experience and their everyday experience.
Whether that be when I’m logging my child in at a daycare center, which I do every morning, or the process I go through when I have to request that schools not take photos of my child, or when he’s signing up for apps and games, et cetera. I would love to see that tangible reflection of the requirements of the code filter through.
And that will give me, I think, a lot of validation that this work is starting, as you say, to change things in practice. But I think that is really a stepping stone along the way to how it’s gonna change expectations. When our children grow up in a world in which they don’t expect to be required to hand over their personal information every time they wanna do anything in the online or offline world, that will inform their expectations as adults.
And that will, I think, help to coalesce community and political power to change the broader digital ecosystem. Because once we get used to, and acknowledge, the fact that it’s possible for this entire digital world to exist without our personal information being the currency traded between these entities, as will be the case for children, we’ll start to say, well, why can’t we have that as adults too?
And so really, maybe to circle back to the initial comments, it is about realizing that art of the possible: it is possible to have an online, digital world in which personal information isn’t exploited and collected to excess. We’re gonna show that when it comes to children, and then hopefully that will be a stepping stone to making the case for why it could apply more broadly to adults as well.
Johanna: It’s so inspiring. And just to reinforce a point which I know you’re a hundred percent on board with: it’s not just shaping the digital environments, right? That’s then gonna have a very direct impact on our experiences in the physical world, to the extent that you can even separate the two, which young people always tell me you can’t.
It’s just the world. So Carly, Kate, the code is open for consultation now. How can people get involved, and what input are you looking for? Where should people go to find out more information?
Kate: So we’ve launched a new section of the OAIC website called Privacy for Kids, and that is the best place for people to go.
You can find it just by going to our homepage. There we’ve got resources both for children and for adults. We’ve developed a range of child-friendly versions of the code: there’s a short guide to the code for younger, primary-aged children, and an extended guide for high-school-aged children, and those were co-designed with the youth collective of Project Rockit.
So we’ve had the opportunity to test that we’ve got the language right. We’ve also got online workbooks as well as physical, downloadable workbooks, a set of lesson plans, and workshop facilitation tools, so that people can run their own consultations, either in classrooms or in community groups.
We’re also hosting a series of roundtables. We can share with your listeners in the show notes an EOI to participate in a roundtable, and we’d certainly welcome members of the public, members of industry, or interested members of civil society to participate in those roundtables. And we’re also accepting written submissions until the 5th of June. So there’s lots of different ways that people can participate. We really do wanna get the message out there, to as many kitchen tables as possible, that this opportunity is here to have your say about what you think is good about the code, what you think could be better, and how you think we might be able to improve it, to make sure we meet the opportunity and it’s as good as it can be.
Johanna: I really commend the online resources to the listeners. I’ve never seen the approach that you’ve taken before: there’s the same boring consultation that comes out to people like me about the exposure draft, but there really is accessible information about what this code is.
And I think it’s just another example of best-practice policy design, in that you’re actually seeking the input of the people who are going to be most affected by this. So bravo to that, bravo for the work that you’re doing, and I’m looking forward to continuing to work with you. And perhaps we’ll have you back once the code’s finalized and about to be implemented, which is exciting.
This is world changing, world shaping stuff. Guys, thank you so much for joining us today on Tech Mirror.
Well, that’s it for this episode of Tech Mirror, which is brought to you by the Tech Policy Design Institute. We are based here in Canberra on the lands of the Ngunnawal Ngambri people.
If you found today’s conversation useful or thought-provoking, please do share it with a friend or a colleague, or leave a review and subscribe wherever you get your podcasts.
If you’re watching, please don’t forget to like and follow. For show notes, you can visit techpolicy.au/podcast. This podcast was made possible thanks to generous contributions from government, industry, and philanthropy to the Tech Policy Design Fund, full details of which are available on our website.
The team at Audiocraft produced this pod on the lands of the Gadigal people of the Eora Nation and Amy Deme provided invaluable research support.
Music is by Thalia Skopellos.
A big thank you also to the team at the Tech Policy Design Institute, without whom this pod would not be possible.
Thank you for joining us and as always, get in touch and get involved.