The gravity of the consequences of disinformation campaigns to our democracy raises questions as to how best to address the challenges. Unfortunately, Canada has not moved as quickly as others to address the threats of fake news. This article was first published on barrysookman.com.
By Barry Sookman, January 7, 2019
There was a time when large platforms could do no wrong. They were engines that facilitated free speech and political debate, and were seen as a revolutionary force for democratization. They were largely unregulated. In fact, they were accorded special trust and treatment, especially in the United States, where they were given unprecedented and controversial immunities from suit under the Communications Decency Act for enabling the dissemination of illegal content such as hate speech, defamation, and harassing material.
Then came public revelations of gargantuan proportions about false and misleading information campaigns, disinformation, and the manipulation of news disseminated via the Internet and social media (referred to here collectively as “fake news”) – Cambridge Analytica, Brexit, and the presidential election of Donald Trump, among others.
The reliance on fake news to manipulate public opinion and political choices is not new to the Internet or social media. Propaganda techniques were well known and used before them. But fake news disseminated through the social media echo chamber, combined with the tendency of most people to read only headlines rather than dig into news stories to discern facts from “alternative facts” (demonstrable falsehoods), has dramatically changed the media landscape.
The dissemination of “problematic content” is not, of course, limited to fake news. There has been enormous criticism, in the United Kingdom and elsewhere, of how large social media platforms have dealt with – or failed to deal adequately with – other types of content such as personal information, hate, abuse, extremism, and other illegal content.
The dissemination and amplification of fake news to oppose legal protection of intellectual property has long been part of the copyright exceptionalism playbook (arguments advanced to erode effective copyright rights and remedies that defy common sense and rational economic thinking, offend commonly held norms and values, and would be categorically rejected in other policy contexts). It was used, for example, to oppose the Anti-Counterfeiting Trade Agreement (ACTA) and SOPA in the U.S., and during the copyright reform process in Canada (including over legal protection for TPMs and secondary liability for enablement). It was more recently employed in Canada by OpenMedia and others to oppose the TPP and the FairPlay website-blocking proposal, and in Europe to oppose Articles 11 and 13 of the proposed new EU Copyright Directive, measures designed to level the playing field between large platforms and newspaper publishers, the music industry, and other creative industries.
The harms caused by the dissemination of false or misleading information have long been recognized as serious – serious enough to be criminal. In fact, knowingly publishing a false statement that causes or is likely to cause injury or mischief to a public interest had been a criminal offence for centuries in the U.K. and was a criminal offence in Canada (s. 181 of the Criminal Code, spreading false news) until it was ruled unconstitutional by the Supreme Court in the Zundel case.
The criminality of providing or disseminating false or misleading information still lives on in traditional contexts in the Criminal Code. Examples include the offences of making a false statement under oath (s. 131(1)), fabricating evidence to be used in judicial proceedings (s. 137), making a false statement to a police officer (s. 140), providing misleading information to obtain a passport (s. 57(2)), attempting to create a false or misleading appearance of trading in a security or of the price of a security (s. 382), intending to mislead by providing a receipt for property not delivered (s. 388), making a false or misleading statement to obtain carriage where the transportation would be unlawful (s. 401), hate propaganda (s. 319(2)), and communicating false information that endangers the safe navigation of a ship (s. 78.1(3)). There are also other crimes that can be committed by communicating falsehoods, such as fraud (s. 380).
There are also various offences under s. 52 of the Competition Act for false or misleading advertising: knowingly or recklessly making or sending a representation to the public (which includes permitting a representation to be made or sent) that is false or misleading in a material respect, for the purpose of promoting the supply or use of a product or any business interest. The Act was also amended by Canada’s anti-spam law (CASL) to add new offences under s. 52.01 relating to misleading information in electronic messages.
The gravity of the consequences of disinformation campaigns to our democracy raises questions as to how best to address the challenges. One might conclude without much difficulty that intentionally undermining our democratic institutions by spreading fake news is at least as serious as some of the criminal offences still on the books. New targeted sanctions might be justified in certain instances against the purveyors of misleading content or against intermediaries who knowingly facilitate and profit from the dissemination of fake news.
Other countries have begun to study this issue and have prioritized it with a real sense of urgency. According to a recent report by the Poynter Institute, some countries have used, propose to use, or are considering various approaches to combating fake news, including the use of criminal laws, blocking and removing fake content, treating social media sites as broadcasters for the purposes of regulation and liability, requiring reporting on who purchased sponsored content or campaign ads and at what price, and educational campaigns.
The UK and the European Union have recognized the seriousness of fake news and illegal online content and, in the last year alone, have published papers and proposals for law and regulatory reform. These were summarized by Graham Smith as follows:
The UK government has published its Internet Safety Strategy Green Paper, the precursor to a White Paper to be published in winter 2018-2019 which will include intermediary liability, duties and responsibilities. In parallel the House of Lords Communications Committee is conducting an inquiry on internet regulation, including intermediary liability. A House of Commons Committee examining Disinformation and Fake News has also touched on the topic. Before that the UK Committee on Standards in Public Life suggested that Brexit presents an opportunity to depart from the intermediary liability protections of the ECommerce Directive.
On 12 September 2018 the European Commission published a Proposal for a Regulation on preventing the dissemination of terrorist content online. This followed its September 2017 Communication on Tackling Illegal Content Online and March 2018 Recommendation on Measures to Effectively Tackle Illegal Content Online. It is notable for one hour takedown response times and the ability for Member States to derogate from the ECommerce Directive Article 15 prohibition on imposing general monitoring obligations on conduits, caches and hosts.
Canada has not moved as quickly as others to address the threats of fake news. However, a very useful step was taken by the Standing Committee on Access to Information, Privacy and Ethics in its recent report Democracy under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly.
The report is worth reading in full. Here are a few excerpts that bring home the nature and gravity of the problem.
The nature of the problem
“The Committee repeatedly heard about problems with social media platforms that allow or facilitate the spread of disinformation and misinformation.”
“The era when the only ways for people to learn the news were to listen to the radio, read a print newspaper or watch a live news broadcast is long past. Today, a vast amount of content is available online, and the publishers of the past have been replaced by artificial intelligence (AI).”
“…developments in social media in recent years have created a new structure that determines what is acceptable and sets the boundaries on public debate: the platforms’ filtering mechanisms, which decide what people see and whether our content will be seen. Mr. Owen asserted that Canadians should be concerned about filtering by algorithms and the business models that determine the content that people see.”
“Mr. Harris said that the question is at what point publishers are responsible for the content they transmit. He believes it makes sense for technology companies not to be responsible for the industrial amount of content that people post to their platforms. However, when the content is fuelled by recommendations generated by the platforms, using AI that they have programmed (e.g., Alex Jones videos that were recommended 15 billion times on YouTube), they should perhaps be held responsible for publishing those recommendations. By making them responsible for their business model, that model would become more costly.”
“Right now we have dirty-burning technology companies that use this perverse business model that pollutes the social fabric. Just as with coal, we need to make that more expensive, so you’re paying for the externalities that show up on society’s balance sheet, whether those are polarization, disinformation, epistemic pollution, mental health issues, loneliness or alienation. That has to be on the balance sheets of companies.”
“The structural problems inherent in social media platforms serve to fuel the attention economy and help in the promotion of disinformation and misinformation to millions of addicted users. The Committee is very concerned about the negative externalities these platforms have.”
Inadequacy of self-regulation
“The Privacy Commissioner described the current situation in alarming terms.
Last week, I attended the 40th international conference of data protection and privacy commissioners, in Brussels. The conference confirmed what I had explained in my last annual report: There is a crisis in the collection and processing of personal information online. Even tech giants … are recognizing that the status quo cannot continue.
Apple CEO Tim Cook spoke of “a data industrial complex” and warned that “[o]ur own information, from the everyday to the deeply personal, is being weaponized against us with military efficiency”.… Facebook’s Mark Zuckerberg admitted that his company committed a serious breach of trust in the Cambridge Analytica matter. Both companies expressed support for a new U.S. law that would be similar to Europe’s General Data Protection Regulation or GDPR.
When the tech giants have become outspoken supporters of serious regulation, then you know that the ground has shifted and we have reached a crisis point.”
“The government, however, has been slow to act, thereby putting at continued risk the trust that Canadians have in the digital economy, in our democratic processes and in other fundamental values.”
“The Committee has taken the comments from industry representatives and some academics into account, but still believes that some type of regulation is necessary.”
“Recommendation 8 on regulating certain social media platforms: That the Government of Canada enact legislation to regulate social media platforms using as a model the thresholds for Canadian reach described in clause 325.1(1) of Bill C‑76, An Act to amend the Canada Elections Act and make certain consequential amendments. Among the responsibilities should be included a duty:
to clearly label content produced automatically or algorithmically (e.g. by ‘bots’);
to identify and remove inauthentic and fraudulent accounts impersonating others for malicious reasons;
to adhere to a code of practices that would forbid deceptive or unfair practices and require prompt responses to reports of harassment, threats and hate speech and require the removal of defamatory, fraudulent, and maliciously manipulated content (e.g. “deep fake” videos); and
to clearly label paid political or other advertising.”
“Recommendation 9 on algorithmic transparency: That the Government of Canada enact transparency requirements with respect to algorithms and provide to an existing or a new regulatory body the mandate and the authority to audit algorithms.”
“The Committee wishes to specify that the monetary sanctions imposed by the new proposed legislative measures should represent more than the mere cost of doing business for a company.”
Content moderation
“Mr. Scott said that citizens should have a right to be protected from illegal content. Hate speech, defamation, harassment, and incitement to violence are all considered illegal in the off-line world. He believes that such content should also be considered illegal in the online world and quickly taken down from social media platforms using a “process that is rigorously overseen by regular judicial oversight and that has an appeals process so that we are not endangering freedom of expression.” He believes that while the power to take down illegal content must not be ceded to social media platforms, their involvement is needed to speed up the process.”
“Recommendation 10 on the taking down of illegal content by social media platforms: That the Government of Canada enact legislation imposing a duty on social media platforms to remove manifestly illegal content in a timely fashion, including hate speech, harassment and disinformation, or risk monetary sanctions commensurate with the dominance and significance of the social platform, and allowing for judicial oversight of takedown decisions and a right of appeal.”
The conclusions in the report are also worth reading:
“The Privacy Commissioner did not mince his words when describing the current situation: there is a crisis in the collection and processing of online personal data. The Committee does not take such remarks lightly and believes that by sounding the alarm, he has made its recommendations all the more important.
As the Committee concludes this study, it continues to believe that changes to Canada’s legislative and regulatory landscape are needed in order to neutralize the threat that disinformation and misinformation campaigns pose to the country’s democratic process.
It is critical that the Government of Canada be a leader in bringing in sustainable legislative solutions to protect the personal data of Canadians without hampering innovation. It must also invest the time and resources needed to better educate Canadians about the dangers of the era of disinformation and data-opolies. No effort should be spared so that Canadians can participate in the digital economy and the democratic process without fear.
Lastly, the Committee maintains that if there is one thing that the events of the past year have brought to light, it is that social media platforms should carry out a thorough self-examination, as they have an important choice to make. Do they wish to continue with a business model designed to be addictive while ignoring the harmful effects their platforms can have on the social fabric, and their long-term human impact? Or would they rather make technology more ethical and compatible with the capabilities of the human mind? The Committee sincerely hopes that they will choose the latter.”
Barry Sookman is a member of the Macdonald-Laurier Institute’s advisory council and a senior partner in McCarthy Tétrault’s Technology Law Group.