Proposed new rules threaten the Charter rights of Canadians and cannot be enforced against foreign-controlled social media platforms, writes Philip Palmer.
By Philip Palmer, October 21, 2021
The Trudeau government recently held a public consultation – during an election campaign, no less – regarding its proposal to address what it describes as “Internet harms.” The Liberal Party’s election platform promised to introduce legislation to deal with said harms within its first 100 days in office.
The government’s proposal lays out – very vaguely – which entities would be subject to the new rules, what types of content would be regulated, and what obligations would be imposed. It also describes two new regulatory bodies that would be created to enforce it all. In particular, the government identifies five areas of harmful content that would be subject to this regulatory regime: child sexual exploitation, actively encouraging terrorism, encouraging or threatening violence, hate speech, and non-consensual sharing of intimate images.
The proposal does not include actual definitions for these categories of harms, all of which are already framed by existing criminal offences. Rather, the proposal would expand the definitions for a “regulatory context” and give the federal cabinet the power to further modify such definitions at some future date. It’s a definitional pig in a poke.
To remedy Internet harms, the government proposes creating a comprehensive censorship system (euphemistically labelled “content moderation”), which would require all social media platforms accessible in Canada to censor the speech of their users. Platforms would have to take all reasonable measures, including the use of artificial intelligence (AI) systems, to identify harmful content and render it inaccessible to persons in Canada. A platform would have to address all content flagged by anyone in Canada and decide, within 24 hours, whether to block or remove it.
This platform-censorship regime would be enforced by an Orwellian-sounding “Digital Safety Commissioner,” who would keep platforms’ censorship practices under ongoing surveillance. The Commissioner would be empowered to make regulations governing those internal censorship regimes, applying varying standards of stringency based on factors such as a platform’s size, revenue and business model.
When a platform’s own internal review mechanisms are exhausted, a complainant could appeal the platform’s decision to a Digital Recourse Council, which would issue decisions based on whether it considers the content in question to be harmful. If so, it could compel platforms to render that content inaccessible to persons in Canada.
The system would be backed by administrative fines of up to $10 million or 3 percent of the platform’s gross global revenue, and criminal fines of up to $25 million or 5 percent of gross global revenue – in each case, whichever is greater.
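To give a sense of scale, here is a minimal sketch of how those ceilings work; the $100-billion revenue figure is hypothetical and not drawn from the proposal. For any large platform, the percentage term dwarfs the flat cap.

```python
# Illustration of the proposed penalty ceilings.
# The revenue figure is hypothetical, used only to show the scale.

def penalty_ceiling(gross_global_revenue: float, flat_cap: float, pct: float) -> float:
    """Return the greater of the flat cap or a percentage of gross global revenue."""
    return max(flat_cap, pct * gross_global_revenue)

# Hypothetical platform with $100 billion in gross global revenue.
revenue = 100e9
print(f"Administrative ceiling: ${penalty_ceiling(revenue, 10e6, 0.03):,.0f}")  # $3,000,000,000
print(f"Criminal ceiling:       ${penalty_ceiling(revenue, 25e6, 0.05):,.0f}")  # $5,000,000,000
```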
Not surprisingly, the scheme raises a host of legal and policy concerns. To start, this broad, proactive approach to content censorship appears to infringe Canadians’ Charter-guaranteed freedom of expression.
Perhaps most importantly, a series of key factors – the massive volume of content, complex and highly legalistic definitions of harm, heavy sanctions that can be imposed for failing to suppress speech deemed harmful, and tight deadlines for decision-making – all militate in favour of over-suppression of speech by platforms. AI systems are notoriously ill-adapted to interpret nuanced speech, and internal or outsourced employees (sometimes based in foreign countries) can hardly be expected to have mastered the subtleties of intricate legal definitions or to make appropriate judgments respecting the use of English or French idioms, humour or satire. Anomalous and unjust results are certain to follow.
Then there is the lack of clarity about whom the censorship obligations would apply to. There is no definition of “social media”; it is suggested that whatever definition is eventually proposed could be expanded by cabinet. The result is vast uncertainty. Leaving it to cabinet to modify definitions that govern speech infringements eviscerates the role of Parliament.
A further concern is the proposal’s extraterritorial reach. The only connection between harmful content and the Canadian censorship regime would be that the content can be accessed in Canada. The harmful speech could be in a foreign language, on a foreign platform, and affect only persons outside Canada – yet the new rules would still apply to it. Canada would essentially be asserting jurisdiction over the entire Internet. The Chinese operators of WeChat and Sina Weibo are surely quaking in their boots in fear of the Canadian Digital Safety Commissioner.
If adopted into law, this proposal would be a legal Potemkin village – all façade with no substance. Ultimately, the elaborate edifice of strictures and penalties rests on an unenforceable premise. Apart from PornHub, no major social media platform has its head office in Canada. Facebook, YouTube, and Twitter are established under the laws of other countries and operate from facilities located outside Canada.
Major social media platforms typically have minimal staff in Canada – sales, marketing and a few regulatory policy officers. They earn revenues in markets beyond the control of the Canadian government. Decisions about what to censor, and how, are made outside Canada. Foreign courts will not enforce a Digital Safety Commissioner’s orders, administrative monetary penalties or criminal fines. A platform’s compliance with the censorship regime would be voluntary (and might attract liability in other jurisdictions).
The very concept of rendering content inaccessible from Canada does not align with an open Internet. The government’s proposal would require the creation of kill switches to block content from being accessed “by persons in Canada.” That phrase, which recurs throughout the proposal, assumes that platforms would reconfigure their systems so that Canadians could not reach content either directly or by means of virtual private networks (VPNs) or proxy servers. That alone would require a major investment in blocking work-arounds and restricting Canadian access to the global Internet, verging on a great Canadian firewall. So, besides violating Canadians’ Charter rights, the proposal would be massively expensive to implement and almost certainly ineffective, as the sketch below suggests.
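To see why, consider a minimal sketch of the naive approach: blocking by the apparent country of a request’s IP address. The country_of_ip lookup and the addresses below are hypothetical stand-ins, not anything specified in the proposal.

```python
# Sketch of naive IP-based geo-blocking. country_of_ip() is a hypothetical
# stand-in for a geolocation lookup, populated here with toy data.

def country_of_ip(ip: str) -> str:
    """Map an IP address to a country code (toy table for illustration)."""
    geo_table = {"24.114.0.1": "CA", "185.220.101.5": "DE"}
    return geo_table.get(ip, "UNKNOWN")

def must_block(ip: str, flagged: bool) -> bool:
    """Block flagged content only when the request appears to come from
    Canada: the check sees the connection's exit point, not the person."""
    return flagged and country_of_ip(ip) == "CA"

# A request from a Canadian address is blocked...
print(must_block("24.114.0.1", flagged=True))     # True
# ...but the same user routed through a foreign VPN exit is not.
print(must_block("185.220.101.5", flagged=True))  # False
```

Defeating that trivial bypass is what pushes platforms toward the expensive VPN- and proxy-detection measures the proposal implicitly demands.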
The proposal also contains a parallel law enforcement and national security dimension. It would require vast quantities of personal information, mostly concerning non-Canadians, to be reported to Canadian authorities. As well, social media platforms would be required to keep, for a year or more, information about anyone who posts content that has been reported to law enforcement. The person who is the subject of such a report could not be notified. The information reported and retained would be vulnerable to hacking – particularly by state actors. Canada would thereby become the unwitting host of a treasure trove of kompromat.
If adopted into law, this proposal would constitute the most illiberal legislation adopted by Parliament since the War Measures Act in 1914. Creating a de facto censorship regime in peacetime is both novel and dangerous; the powers it bestows on cabinet are extraordinary outside of a national emergency, and perhaps even then. It poses an immediate threat to rights protected by the Charter and a longer-term threat to the functioning of the Internet in Canada. The proposal would attempt to establish extraterritorial application of Canadian law without any thought as to how that purported jurisdiction could even be enforced.
The government should abandon this effort and go back to the drawing board.
Philip Palmer is a lawyer with over 30 years of experience in government legal services, including at the Competition Bureau and Industry Canada. This essay is based on the Internet Society Canada Chapter’s submission to the government’s consultation process.