This is great.
Since you already anticipate the dangerous takeoff that is coming, and we are unsure whether we will notice and be able to act in time: why not cull now?
I get that part of the point is slowing down the takeoff and culling now does not get that effect.
But what if March 2027 is too late? What if getting proto-AGIs to do AI R&D only requires minor extra training or unhobbling?
I’d trust a plan that relies on already massively slowing down AI now way more than one that relies on it still being on time later.
Because no one will agree to do it.
I fail to see how that’s an argument. It doesn’t seem to me a reason not to cull now, only maybe a reason not to advocate for it, and even with that I would disagree. Can you explain?
“Cull” is not in anyone’s action space. It’s a massive coordinated global policy. The only thing we can do is advocate for it. The OP specified that we wait until there’s popular will to do something potentially radical about AGI, for pragmatic reasons. Culling now would be nice but is not possible.