No, it’s not at all the same thing as OpenAI is doing.
First, OpenAI is using a methodology that's completely inadequate for solving the alignment problem. I'm talking about racing to actually solve the alignment problem, not racing to any sort of superintelligence that our wishful thinking says might be okay.
Second, when I say “racing” I mean “trying to get there as fast as possible”, not “trying to get there before other people”. My race is cooperative, their race is adversarial.
Third, I actually signed the FLI statement on superintelligence. OpenAI hasn’t.
Obviously any parallel efforts might end up competing for resources. There are real trade-offs between investing more in governance vs. investing more in technical research. We still need to invest in both, because of diminishing marginal returns. Moreover, consider this: even the approximately-best-case scenario of governance only buys us time, it doesn’t shut down AI forever. The ultimate solution has to come from technical research.
Agree that your research didn’t make this mistake, and MIRI didn’t make all the same mistakes as OpenAI. I was responding in the context of Wei Dai’s OP about the early AI safety field. At that time, MIRI was absolutely being uncooperative: their research was closed, they didn’t trust anyone else to build ASI, and their plan would end in a pivotal act that probably disempowers some world governments and possibly ends with them taking over the world. Plus, they descended from an org whose goal was to build ASI before Eliezer realized alignment should be the focus. Critch complained as late as 2022 that if there were two copies of MIRI, they wouldn’t even cooperate with each other.
It’s great that we have the FLI statement now. Maybe if MIRI had put more work into governance we could have gotten it a year or two earlier, but it took until Hendrycks got involved for the public statements to start.
when I say “racing” I mean “trying to get there as fast as possible”, not “trying to get there before other people”
how about a “climbing” metaphor instead? I have a hard time imagining a non-competitive speed race (and not even F1 cars use nitroglycerine for fuel), while an auto-belay sounds like a nice safety feature even in speed climbing
nonconstructive complaining intermezzo
if we want a healthier sports metaphor for spending trillions of dollars to produce the current AI slop, a future AGI that will replace all jobs, and a future ASI that will kill us all, all in the name of someone thinking they can solve in theory alignment problems that are unsolvable in practice
as for climbing to new peaks, you need different equipment for a local hill, for Mount Everest (you even need to slow down to avoid altitude sickness) and for Olympus Mons (now you need rockets and spacesuits and institutional backing for traveling to other planets)