Beneath every conflict is a shared interest waiting to be named. On Saturday in Montreal, delegates to the Liberal Party of Canada's biennial convention will vote on two resolutions: to bar anyone under 16 from creating social media accounts, and to bar anyone under 16 from using "all AI chatbots and other potentially harmful forms of AI interaction." Before arguing about whether the resolutions are too broad or too narrow, the Peacekeeper asks the room to name the thing every side wants. Everyone around the table is looking at the same floor.
The Four Rooms in This Debate
Timeline
April 11, 2026: Delegates to the Liberal Party of Canada's biennial convention in Montreal vote on two resolutions barring anyone under 16 from social media accounts and from AI chatbots and related interactions.
Room one: parents. A CBC report this week captured the mood. Canadian parents have watched their children's sleep, school performance, peer relationships, and baseline affect decline alongside rising smartphone use. Angus Reid polling in late March found broad public support for age restrictions on teen social media, which means the parents have numbers. They are not asking for a ban because they hate the internet. They are asking because they cannot individually outrun an algorithm designed by ten thousand engineers to hold their child's attention past dinner. Families are paying the cost of the harm while platform revenue captures the benefit. That is a real grievance.
Room two: the platforms. Their legitimate concern is that a hard 16-year cutoff is a crude instrument that fails to distinguish between Discord (where isolated kids find peers), Duolingo (where 14-year-olds learn French), Khan Academy's AI tutor (where rural students get homework help the local library cannot offer), and the worst corners of TikTok, Instagram, and Snap. The platforms also have a legitimate concern about age-verification infrastructure. Verifying a user is over 16 requires a government ID check, a biometric scan, or a parental consent chain, and each option has privacy costs that children's-rights advocates have raised from the opposite direction.
Angus Reid Institute polling from late March 2026 found broad Canadian public support for age restrictions on teen social media, running ahead of most provincial governments' willingness to act.
Room three: the teens. A fourteen-year-old whose friends live in a Discord server is not having the same experience as a fourteen-year-old whose cousin posts self-harm content on Instagram. Both are real. A blanket ban gives both kids the same intervention, and they will experience it as rescue or exile depending on which room they were standing in when the policy arrived. Their legitimate concern is that autonomy is being traded for safety without anyone asking what they would trade for themselves.
Room four: the researchers. The American Psychological Association's 2023 advisory warned of significant risks from heavy social media use for some adolescents while noting that the evidence base for universal bans is thinner than the headlines suggest. California's 2026 verdict against Meta on child-addiction claims established civil liability. It did not establish that hard cutoffs are the correct regulatory response.
"Another calls for anyone under the age of 16 to be banned from accessing all AI chatbots and other potentially harmful forms of AI interaction," CBC reported from the Montreal Liberal convention floor on April 11, 2026.
California jurors in March 2026 found Meta civilly liable for child-addiction claims, establishing legal liability in one direction without specifying the correct regulatory response.
The Interest Beneath the Positions
“"Another calls for anyone under the age of 16 to be banned from accessing all AI chatbots and other potentially harmful forms of AI interaction." CBC reporting from the Montreal Liberal convention floor, April 11, 2026.
A blanket ban is a position. The interest beneath the position is that every adult in the room, whether they work at Meta or run a parents' mental-health advocacy group, wants children to develop healthy relationships with technology before they grow up. The platforms want it because children who burn out on Instagram at 13 are not paying customers of LinkedIn at 28. The parents want it because they love their children and because they have read enough alarming New York Times pieces. The researchers want it because they would prefer to design policy on evidence. The teens want it because, if you ask them, they will tell you they are exhausted by the experience of being algorithmically farmed. The shared interest is infrastructure for childhood development in an algorithmic environment, not a specific choice about age thresholds.
The reframing is this. The Liberal resolution is trying to translate a legitimate grievance into a single number. That is how democratic politics handles tradeoffs, and nobody should hold it against the delegates. But the grievance is about who pays for the harm, not about the age of the user. If the platforms pay for the harm through mandated design changes, the exact age threshold matters less. If the parents pay for the harm by enforcing an unenforceable ban, no age threshold will matter enough. Every mediator in a labor dispute learns this eventually: positions are answers to a question the parties have not yet agreed on.
What Would Mediation Look Like?
Know someone who should read this?
Share this report with a friend who values evidence-based journalism.
The mediator's instinct is not to pick a position. It is to translate the grievance into a design question. Versions of that design question are already being worked through in Australia's teen social media law, the European Union's Digital Services Act, and the UK's Age-Appropriate Design Code. The infrastructure exists. Whether Canada's Liberals adopt any of it will depend on whether the convention vote is treated as a final answer or as a position statement that opens the real negotiation. A hard 16-year cutoff with no platform-design mandate is not the synthesis. A platform-design mandate with tiered age restrictions is closer.
The Peacekeeper concedes what deserves conceding. Power imbalance is real. A fourteen-year-old is not negotiating with Meta on equal terms, and a parents' group is not negotiating with a $2 trillion platform on equal terms either. The Peacekeeper does not pretend the rooms start with symmetric agency. The mediator's job in asymmetric rooms is to name the asymmetry so that the policy response accounts for it, not to pretend it is not there. A blanket ban forces the asymmetry into a binary vote. A graduated framework gives the weaker rooms more leverage, because it requires the platform to show what it has designed rather than what it has not prohibited.
Infrastructure for age-appropriate online design already exists in Australia's teen social media law, the European Union's Digital Services Act, and the UK's Age-Appropriate Design Code. Canada's resolution does not map cleanly onto any of the three.
The Vote and the Shared Floor
What does the vote accomplish? If the Liberal Party delegates pass the resolution on Saturday, the policy is a party statement, not a law. Laws in Canada require Parliament, and Parliament requires a negotiation that the convention is only preparing the ground for. The Peacekeeper's hope is that Saturday's vote opens the conversation rather than closing it. Everyone in the room is looking at the same floor. The work is to name what is on the floor in language that does not oblige anyone to pretend to hold a position they do not hold. Beneath every conflict is a shared interest waiting to be named. In Montreal on Saturday, the interest is the well-being of Canadian children growing up in algorithmic space. Everyone at the table says they want that. The negotiation starts when they admit they already agree on the floor.


