Canada is doing a big study to better understand the risks of AI. They aren’t shying away from the topic of catastrophic existential risk. This seems like good news for shifting the Overton window of political discussions about AI (in the direction of strict international regulations). I hope this is picked up by the media so that it isn’t easy to ignore. It seems like Canada is displaying an ability to engage with these issues competently.
This is an opportunity for those with technical knowledge of the risks of artificial intelligence to speak up. Making such knowledge legible to politicians and the general public is an important part of civilization being able to deal with AI in a sane manner. If you can state the case well, you can apply to speak to the committee:
Send a request to ETHI@parl.gc.ca, stating:
which study you want to participate in (Challenges Posed by Artificial Intelligence and its Regulation)
who you are and why the committee should care about what you have to say
what you want to talk about
what language(s) you can testify in (English/French), and whether you would appear virtually or in person
Luc Thériault is the MP responsible for this study taking place.
I don’t think the ‘victory condition’ of something like this is a unilateral Canadian ban/regulation—rather, Canada and other nations need to do something of the form “If [some list of other countries] pass [similar regulation], Canada will [some AI regulation to avoid the risks posed by superintelligence]”.
Here’s a relatively entertaining second hour of proceedings from 26 January:
https://youtu.be/W0qMb1qGwFw?si=EqgPSHRt_AYuGgu8&t=4123
Full videos:
https://www.youtube.com/watch?v=W0qMb1qGwFw&t=30s
https://www.youtube.com/watch?v=mow9UFdxiIw&t=30s
https://www.youtube.com/watch?v=ipMS1S5oOlg&t=19s
Potentially huge.
I think it’s quite plausible that many politicians in many states are concerned with AI existential/catastrophic risk, but don’t want to be the first ones to come out as crazy doomsayers. Some of them might not even allow the seeds of their concern to grow, because, like, “if those things really were that concerning, surely many people around me (and my particularly reasonable tribe in particular) would have voiced those concerns already”.
Sure, we have politicians who say this, e.g., Brad Sherman in the US (apparently since at least 2007!) and, e.g., IABIED sent some ripples. But for many people to gut-level believe that this concern of theirs is important/good/legitimate to voice, they need clear social proof that “if I think/say this, I won’t be a weird outlier”, and for that, some sort of critical mass of expression of concern/belief/preference must be achieved in a relevant sort of population.
Canada’s government, tackling those issues with apparent seriousness, has the potential to be that sort of critical mass.
This was an interesting watch, from just a few days ago:
Challenges Posed by Artificial Intelligence and its Regulation
Witnesses
As an individual
• Steven Adler, Artificial Intelligence Researcher
The Human Line Project
• Etienne Brisson, Chief Executive Officer
ControlAI
• Andrea Miotti, Chief Executive Officer
AI Governance and Safety Canada
• Wyatt Tessari L’Allié, Founder and Executive Director
It’s inspiring to watch these people saying the right things to the right people.
https://www.ourcommons.ca/committees/en/WitnessMeetings?organizationId=43696
Do you know how long these proceedings will last / what the deadline is for requesting participation?
AFAIK, to be included in the report, the written testimony[1] would have to be sent this week, ideally before Thursday.
They likely don’t have more capacity for live participation.
I am doing a research internship in Ottawa right now (w/ the Canadian federal govt), so it might be feasible for me to request to participate in person. However, I’m not sure that I can state the case well without some prior preparation and guidance.
If you (or another qualified person reading this) think it is worth the effort, I’d like to discuss whether I should request to speak and how to prepare if I do.
I believe they aren’t taking more witnesses unfortunately :/
Interesting. Why did this happen? I know of AI safety lobbying in the US, but is there anything going on in Canada? I doubt it happened out of the blue.
Not sure how this study on AI risks came to be. But to address your point about whether there is anything going on in Canada: AI Governance and Safety Canada (AIGS) has been coordinating advocacy and offering policy recommendations around AI safety for a few years now.
One of the witnesses at the ETHI proceedings, Wyatt Tessari L’Allié, is from AIGS. (Correction: an earlier version of this comment also listed Andrea Miotti, mentioned in Steven McCulloch’s comment, as AIGS; he is from Control AI.)
My guess is that this is posturing to threaten an economic strike on American tech, in coordination with Europe.
I’m in Canada and have been AI x-risk pilled for years but I really don’t have bandwidth to participate in this. Hope something good comes out of it.