This article originally appeared in the Globe and Mail.
By Ryan Khurana, January 2, 2025
A Canadian, Geoffrey Hinton, has won the 2024 Nobel Prize in physics for his groundbreaking work in artificial intelligence – much of it conducted at Canadian institutions – but we face a stark paradox.
Even as we celebrate this historic recognition, Stanford University’s new AI vibrancy rankings reveal that Canada has fallen from third in the world in 2017, behind only the United States and China, to 14th in 2023 across a wide range of AI metrics.
This dichotomy reflects a troubling pattern: Canada has excelled at foundational research but struggles to maintain leadership in advancement, commercialization and deployment. The federal government’s recent investment of as much as $240-million in Cohere, a Toronto-based AI leader, furthers Canada’s $2-billion Sovereign AI Compute Strategy. But when it comes to the regulatory environment, the recently proposed Artificial Intelligence and Data Act (AIDA) threatens to exacerbate Canada’s challenges, potentially stifling adoption while failing to provide the clarity our AI ecosystem desperately needs.
AIDA’s approach to regulation, while well-intentioned, raises several red flags. The legislation introduces a broad category of “high-impact” AI systems whose definition is left to future regulations. Yet the guidance that it will aim for interoperability with the European Union’s AI law signals a similar lack of specificity about the harms to be avoided, coupled with significant penalties for failing to avoid them.
The proposed legislation attempts to exempt the development of open-source AI models from the high-impact designation – a distinction AI researchers criticized the EU for failing to make – on the grounds that “these models alone do not constitute a complete AI system.”
The boundaries between research and commercial application, however, are increasingly blurred in modern AI development, with firms such as OpenAI and Anthropic engaging in both. Supporting research while restricting applications prevents the virtuous cycle of commercially directed development that accelerates leadership and that has been pivotal in enabling other countries to leapfrog Canada’s AI ecosystem.
The solution isn’t to abandon regulation entirely. Rather, we need to fundamentally rethink AIDA’s approach. The goal should be to ensure that AI is developed safely, avoiding the catastrophic risks that Prof. Hinton and many others have increasingly warned about, while allowing the practical use of current systems to expand.
California’s AI safety bill, vetoed by Governor Gavin Newsom after Silicon Valley split over it, demonstrated how to address legitimate AI safety concerns while maintaining a vibrant innovation ecosystem. Unlike AIDA’s focus on high-impact systems, California’s bill, SB-1047, focused specifically on “frontier” AI models – those requiring massive computing resources that could pose existential risks.
Under that approach, the harms to be avoided are those caused by the AI itself, of which there are potentially many. Where a particular use could cause harm, rather than the AI itself, SB-1047 defers to existing regulatory frameworks, such as consumer protection and privacy legislation. Canada has an opportunity to take AI safety concerns about alignment and AI failure seriously, and in doing so to provide leadership in ethical AI development. By focusing instead on strengthening Canada’s ability to build frontier models in line with the values we would like to see embedded in AI systems, we can further mitigate downstream worries about potential harms in use.
We risk chilling AI adoption if we regulate use based on unspecified potential harms, and we further limit Canada’s ability to support cutting-edge development. AI is an economic necessity, promising to kick-start a new era of growth, with global consulting firm McKinsey forecasting as much as US$4.4-trillion added to annual global GDP through AI-enabled productivity growth.
Similarly, AI-enabled breakthroughs such as AlphaFold, which earned DeepMind co-founder Demis Hassabis a share of the 2024 Nobel Prize in chemistry, promise to redefine the future of health care and healthy societies.
Canada has an acute need for both productivity gains and health care advancement, given our rapidly aging population. With our historical investment in this field, we should not allow this technology to be defined by the highest bidder.
We’ve already demonstrated our capacity for world-changing innovation through the work of researchers such as Prof. Hinton. Now we need policy that builds on this legacy rather than constrains it.
As the federal government considers AIDA, it must recognize that effective AI regulation should enable innovation while protecting against genuine harms. The current draft risks achieving neither, while undermining the value of new investments.
Without significant revision, we may find ourselves celebrating past achievements while watching our future leadership slip away.
Ryan Khurana is a senior fellow at the Foundation for American Innovation and a contributing author to the Macdonald-Laurier Institute.