Australia’s Social Media Ban
The law banning Australian under-16s from accessing social media platforms comes into force on 10 December 2025. This explainer sets out what the ban covers, how it will work and some issues that may arise from its implementation.
Summary
The Online Safety Amendment (Social Media Minimum Age) Act 2024 imposes an obligation on “age-restricted” services to take reasonable steps to prevent an “age-restricted user” – essentially a child ordinarily resident in Australia who has not reached 16 years – from having an account on the service. In contrast to rules in some other jurisdictions, parents cannot consent to under-16s having an account. This does not affect viewing social media services when logged out (eg looking at videos on YouTube without signing in, or accessing the landing page of businesses on Facebook).
Significant fines may be applied to services which do not comply – described as being in line with the size of penalties available in the regimes in the UK, EU and Ireland. There is no penalty for children under 16 who access restricted social media platforms, or their parents or guardians.
The Act, and the related eSafety Commissioner’s powers, come into force on 10 December 2025. This has allowed industry a year to prepare for the duty, which applies to both new and existing accounts. The Act will be reviewed after two years.
The eSafety Commissioner’s Social Media Age Restrictions Hub provides comprehensive information on all aspects of the law, including a list of all the platforms that are age-restricted (and those which are not), information for young people, parents and platforms, and details on implementation.
How does the law work?
The regime builds on the existing definition of “social media service” already found in the Australian online safety regime – the focus is whether a service enables online social interactions between two or more users. The precise services in scope will be determined through secondary legislation. There is also flexibility to take into account emerging technologies and services. The Government decided to exercise these powers to exclude certain services from the definition of age-restricted social media. The following types of service are not included in the ban:
- messaging apps, including WhatsApp
- online gaming services
- services with the primary purpose of supporting the health and education of end-users
- product review services.
The eSafety Commissioner has developed a self-assessment tool for services. There is also a list of services that the eSafety Commissioner believes fall in or outside the regime.
The regime does not specify the reasonable steps that a service should take, though there is a strong expectation that there will be some form of age assurance (and there are separate rules and a regulator for privacy). Whether the mechanism is reasonable is an objective question, taking into account the range of methods available, their cost, their efficacy and their consequences for user privacy/data protection. The eSafety Commissioner has provided guidance, in which it has sought to develop principles rather than detailed rules, and has already issued a report following an age assurance trial.
The eSafety Commissioner has identified the following principles for assessing whether steps are reasonable. Are the steps:
- reliable, accurate, robust and effective
- privacy-preserving and data-minimising
- accessible, inclusive and fair
- transparent
- proportionate
- evidence-based and responsive to emerging technology and risk?
The Guidance also notes that respect for and protection of fundamental human rights underpin the principles, and that providers should have regard to the best interests of the child when considering the design of their services.
The obligation is systems-based and will not result in liability for the service in individual cases where young people circumvent the measures a service has put in place (although the eSafety Commissioner expects services to take into account the possibility of circumvention when deciding on their reasonable steps).
Recently a High Court challenge to the constitutionality of the legislation was lodged in the name of two Australian teenagers, though it seems that this action is being coordinated by the Digital Freedom Project. The claim is that the law interferes with the freedom of political communication implied by the Australian Constitution’s requirement that parliamentarians be “chosen” by the people. Even if the right is engaged, interference with it can be justified where the measure is proportionate.
International comparisons
Australia is not the only state to consider minimum ages for social media use. Some US states have passed comparable laws, though First Amendment claims have disrupted enforcement. The Indonesian Government introduced minimum age rules; South Korea had curfew rules on gaming, though it is not clear how well these operated in practice, and the South Korean Government is considering social media age limits. China has had access restrictions for minors since 2021, limiting the amount of time spent on apps. The matter is now on the EU agenda, with the European Parliament passing a resolution on the topic. A number of individual Member States have some form of similar rule or rules requiring parental consent (France, Germany, Italy), or school smartphone bans (the Netherlands, Greece), or are looking at minimum age requirements (eg Spain), though there are questions about how such national rules fit with the DSA obligations. The topic might be introduced on to the EU legislative agenda with an EU-wide age of digital adulthood. An ePetition calling for such a ban in the UK was debated in Westminster in early 2025.
Issues to Consider
The requirement for a minimum age to have an account is not as extensive a prohibition as, for example, a shutdown of the internet or a total ban on smartphones would be. Under-age users still have access to other internet services and, since the prohibition relates to opening an account, can still view content on any social media platform that does not require an account to see (types of) content. On this view, the restriction is more proportionate – though to satisfy any rights assessment, it must also be effective. Does allowing access to non-account content (which could be harmful content) undermine effectiveness? Perhaps a still less intrusive mechanism would be to require services to enforce their own minimum age requirements (which are, on the whole, set at a level demanded by US rules not related to online safety).
There are disadvantages. It might be said that in adopting this approach the Government is reducing the pressure on services to make themselves safe, and that the shock to the user’s system when they move from no social media to social media might be all the greater because the user will be moving into a comparatively less safe space. Moreover, choosing to exclude users below a certain age means the price for safety is being paid by children, who have less access to products which may have benefits for them, rather than by the companies that designed the services and contributed to the problem. A further consideration, however, might be whether the services could ever be made safe enough for certain age-groups. It is accepted that some products are not available to children – tobacco, alcohol – but in those products the essential element of the product cannot be made safe. Is social media in this category?
Some commentators have noted that there is a concern about exclusion and FOMO when parents have imposed, on their own initiative, restrictions on their children’s phone access/use. It might be argued that FOMO is less compelling if all young people are restricted, which justifies State action rather than relying on parental responsibility. The degree to which this argument is convincing is affected by (a) the services in scope and which services young people might adopt in lieu of open social media; and (b) the effectiveness of enforcement/ease of circumvention.
The Australian regime excludes messaging apps and gaming platforms. It is arguable that young people might choose to use these instead and that at least some of the harmful content/behaviours about which the Government is concerned might be replicated there (in addition to the fact that under-age users might still see harmful content on social media even if not logged in). An additional concern is whether the impact of these rules is to move interactions away from open platforms and on to encrypted services, which could make detection of problems more difficult. It might also be that young people are more hesitant to ask for help if they are on social media platforms illicitly.
There is also the possibility that young people use VPNs to evade the prohibition by making it appear that they are in another country. There were newspaper reports that VPN use in the UK surged when the children’s duties and, specifically, the age verification obligation came into force in July. Ofcom’s 2025 Review Report notes that the number of UK VPN users had dropped again by October, and recent research from the Safer Internet Centre suggests that the summer surge was not due to children using VPNs. The guidance from the eSafety Commissioner states that services should bear circumvention in mind. It is possible that other signals might give away where a user is located; moreover, other industries (eg the gambling sector) use more effective location identification tools, such as GPS data or mobile mast data. In some cases, the content of an account or the user sign-up data may be a giveaway. This, however, requires services to take steps beyond accepting location data based on IP address; it is uncertain whether the services would be proactive in that regard. It is also uncertain how much enthusiasm the regulator would have for broader enforcement responsibilities. Note here that the obligation is for the service to take reasonable steps; services are not required to ensure 100% effectiveness. Where will the regulator draw the line on what is reasonable given the range of options open to service providers and given the state of age verification technology at the moment?
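To illustrate the point about signals beyond the IP address, the sketch below shows how a service might, in principle, corroborate several location indicators before treating a user as outside Australia. It is purely illustrative: the signal names, thresholds and logic are hypothetical assumptions for the sake of the example, not anything drawn from the Act or from the eSafety Commissioner’s guidance.

```python
# Illustrative only: a minimal sketch of weighing multiple location signals
# rather than relying on the IP address alone. All signal names and the
# two-signal threshold are hypothetical assumptions, not drawn from the Act
# or the eSafety Commissioner's guidance.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LocationSignals:
    ip_country: str                        # country inferred from the IP address (a VPN can mask this)
    sim_country: Optional[str]             # country of the SIM / mobile network, if available
    device_locale_country: Optional[str]   # country set in the device or account locale, if available
    signup_country: Optional[str]          # country declared or inferred at sign-up, if recorded


def likely_australian_user(signals: LocationSignals) -> bool:
    """Return True when the non-IP signals point to Australia even though the
    IP address does not, i.e. a pattern consistent with VPN circumvention."""
    # If the IP address itself resolves to Australia, no override is needed.
    if signals.ip_country == "AU":
        return True
    corroborating = [
        signals.sim_country,
        signals.device_locale_country,
        signals.signup_country,
    ]
    australian_votes = sum(1 for country in corroborating if country == "AU")
    # Require at least two independent signals before overriding the IP-based location.
    return australian_votes >= 2


# Example: IP routed through a foreign VPN endpoint, but SIM and locale are Australian.
print(likely_australian_user(LocationSignals("NL", "AU", "AU", None)))  # True
```

The point of the sketch is simply that such corroboration requires services to collect and weigh signals they could otherwise ignore, which is why it remains uncertain how proactive services will be and where the regulator will set the bar for reasonable steps.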
Age verification can give rise to concerns about privacy and data protection. Most regulators seem to be aware of this; certainly the rules in this context specifically note data protection concerns (see principles noted above) and the responsibility to comply with data protection law. In this context, poor or malign compliance should not be treated as automatically indicating that the regime is faulty; enforcement here, whether by the online safety regulator or the data protection regulator, is key.
Initial enforcement is likely to be bumpy (as the eSafety Commissioner herself has recently acknowledged), especially in relation to teens who have had accounts and are now having them removed. A bumpy introduction is perhaps par for the course when a new regime restricting conduct comes in.