ITIF - The Information Technology and Innovation Foundation

09/13/2024 | Press release | Archived content

Comments to Canada’s Office of the Privacy Commissioner Regarding Age Assurance and Privacy

Introduction

The Centre for Canadian Innovation and Competitiveness at the Information Technology and Innovation Foundation (ITIF) appreciates the opportunity to comment on privacy and age assurance. The Office of the Privacy Commissioner (OPC) is on the right track with its preliminary views and should continue to advocate for age assurance systems that focus only on high-risk scenarios and prioritize the privacy of online users, both young and old.

Ensuring the online safety of children is extremely important. However, as the OPC notes, age assurance methods face limits in reducing children's exposure to harmful or inappropriate material, including technological constraints and privacy concerns. We agree with the OPC that age assurance is not the only available option for protecting children online and that promoting existing parental control systems and public education campaigns could address many of the current concerns.

Recommendations

If policymakers are set on creating policies mandating age assurance systems, regulation must be informed by the following points:

1. Age assurance regulation should be limited to high-risk circumstances.

Internet users should not be treated as children unless proven otherwise. In the same way that patrons of a bar inside a shopping mall would be carded upon entering the bar, rather than at the doors of the shopping mall, access to large swaths of the internet should not be predicated upon verifying one's age. Creating age assurance systems that put the onus on users to prove they are adults when accessing websites that pose little to no risk to children, such as online dictionaries or furniture websites, would be needlessly inconvenient and impractical for both users and businesses. Therefore, any regulation requiring age assurance should focus only on use cases where there is a significant risk to the online safety of children. The determination of high-risk status should be made carefully to avoid sweeping in edge cases, such as websites that are not adult-content sites but may contain small amounts of explicit content (Wikipedia, legal databases, news websites with comment sections).

2. The level of regulation and means of age assurance should be proportionate to the level of risk.

Age assurance regulation should be proportionate to the risk of the use case at hand. More stringent age assurance requirements result in greater privacy risks and limitations on free expression. Therefore, the method of age assurance should be the minimum required in a particular use case relative to the risk to children's online safety. This will minimize friction in user experience and encourage broader adoption of age assurance practices without imposing unreasonable demands on businesses that are not providing adult content.
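To make the proportionality principle above concrete, the sketch below pairs risk tiers with minimum assurance methods. The tier names, example services, and methods are illustrative assumptions, not a regulatory proposal.

```python
# Illustrative mapping of risk tiers to minimum age assurance methods.
# Tier names and methods are assumptions for illustration only.
MINIMUM_ASSURANCE_BY_RISK = {
    "low": "none",                              # e.g., dictionaries, furniture retailers
    "medium": "self-declaration",               # e.g., moderated social features
    "high": "age estimation or verification",   # e.g., adult content, gambling
}

def required_assurance(risk_tier: str) -> str:
    """Return the minimum assurance method for a risk tier; default to none."""
    return MINIMUM_ASSURANCE_BY_RISK.get(risk_tier, "none")
```

Under this kind of scheme, a low-risk service imposes no friction at all, and only the highest-risk services bear the privacy cost of stronger checks.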

3. Regulators should avoid inflexible requirements.

Regulation should allow businesses to implement solutions that work for their particular use cases rather than prescribing specific systems across the board. Businesses are best suited to determine which method will minimize privacy risks and work best for their particular use cases. For example, an online marketplace might opt for parental vouching if parents are likely to already have accounts where their status as adults is easily verified (credit cards, government ID, etc.). In contrast, a virtual reality game may instead use age estimation based on voice and gait data. This will minimize the compliance burden on businesses while ensuring that the regulator is not overburdened with constantly providing guidance on minor implementation issues.

4. Age assurance systems should be designed to be technologically and commercially agnostic.

The field of AI age estimation is progressing rapidly, and there are many competing models. The U.S. National Institute of Standards and Technology's 2024 face analysis technology evaluation report notes that AI age estimation accuracy has improved significantly since 2014 and that some proprietary facial analysis algorithms performed better than others across various demographic groups.[1] This means that regulation should not mandate that age assurance systems use specific technologies or technology providers in the long term. Furthermore, as federal and provincial governments across Canada continue to explore digital identification, more easily implemented digital verification techniques could emerge for governments to use with businesses.

5. Explore opt-in alternatives.

An alternative to age assurance would involve creating a "trustworthy child flag" for user accounts that signals to apps and websites that a user is underage. Apps and websites serving age-restricted content would be required to check for this signal and block underage users from that content. Rather than using ID checks to determine whether to activate this child flag option, this would be an opt-in process built into existing parental controls. Parents could activate or disable the child flag option depending on their values and the maturity of their children. Because this approach does not require anyone to disclose or verify their identity, it does not create privacy risks by forcing users to share their government IDs or allowing online services to link their online activity to their offline identities. It is also a low-impact approach, allowing adults to continue using the internet as they do today. Similarly, the vast majority of websites and apps that are meant for the general public would not have to take any action. Finally, it would be entirely voluntary for users. Parents who want to control what their children see on the internet could choose to use this feature, and other parents could choose not to.
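The child-flag check described above might look something like the following minimal sketch. The header name ("X-Child-Flag"), flag values, and restricted paths are hypothetical assumptions for illustration; no such standard signal currently exists.

```python
# Sketch of a service honoring a hypothetical opt-in "trustworthy child flag".
# The "X-Child-Flag" header and the protocol around it are assumptions.

AGE_RESTRICTED_PATHS = {"/casino", "/mature-content"}

def should_block(path: str, headers: dict) -> bool:
    """Block age-restricted content only when the opt-in child flag is set."""
    if path not in AGE_RESTRICTED_PATHS:
        return False  # General-audience content requires no check at all.
    # Parents enable the flag via device-level parental controls; in the
    # absence of the flag, the user is treated as an adult by default.
    return headers.get("X-Child-Flag", "0") == "1"
```

Note that the default-to-adult behavior is what keeps the approach low-impact: adults and general-audience sites take no action, and only flagged accounts visiting restricted content are affected.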

Thank you for your consideration.