I disagree with a number of statements made in the post and do not support an AI development ban or pause. But I support Leo speaking his mind about this and I think it’s important for OpenAI and other labs to have a culture where employees feel free to speak about such issues.
I wonder if there’s a palatable middle ground where instead of banning all AI research, we might get people to agree to ban in advance only dangerous types of ASI.
My current personal beliefs:
- ASI existential risk is very much worth worrying about
- Dangerous ASI is likely the #1 threat to humanity
- In the next few decades, the odds of ASI killing/disempowering us is tiny
- I feel good accelerating capabilities at OpenAI to build technology that helps more people
- I would not support a ban or pause on AI/AGI (because it deprives people of AI benefits, breaks promises, and also accumulates a compute overhang for whenever the ban is later lifted)
- I would happily support a preemptive ban on dangerous ASI
I found the claim that "in the next few decades, the odds of ASI killing/disempowering us is tiny" surprising. Is this because of long timelines to ASI?
Regardless, while it seems very hard to implement well, I'm happy to say publicly that I am in favour of a well-implemented preemptive ban on dangerous ASI.
Yes, mostly.
I expect existentially dangerous ASI to take longer than ASI, which will take longer than AGI, which will take longer than powerful AI. Killing everyone on Earth is very hard to do, few are motivated to do it, and many will be motivated to prevent it as ASI’s properties become apparent. So I think the odds are low. And I’ll emphasize that these are my odds including humanity’s responses, not odds of a counterfactual world where we sleepwalk into oblivion without any response.
Why not?
Is this a question for me? I am assuming "why not" refers to why I do not support a pause or a ban, and not to why I think OpenAI employees should feel free to speak up in support of such policies if that is what they believe.
This is a bit too complex to go into in a comment. I hope at some point to write a longer text (specifically, I plan to do a book review of "If Anyone Builds It, Everyone Dies", maybe together with "The AI Con" and "AI Snake Oil") and to go into more detail there about why I don't think the proposed policies are good. Just a matter of finding the time…