It’s probably better to be safe than sorry when it comes to AI, so I’m not against AI safety research, but I do personally think doom will take longer than 3-10 years.
My reasoning is that I’m pessimistic about the underlying technology in a way I talked about a few days ago here. I think I’ve picked up an intuition for how language models work by using them for coding, and I see their limitations. I don’t buy the benchmarks saying how awesome they are.
I don’t think Yunna is unrealistic, beyond the fact that she’s superintelligent earlier than I’d predict, and assuming corrigibility turns out not to be super hard.