Yeah, agreed with all the mechanistic points, but disagreed on timelines for a network of artificial agents to pull this off. I do think it initially looks like existing malicious subnetworks of human culture being amplified by AI misuse, but we see that now. And that one group of beings who like being loud about capability increase seems increasingly like The Borg every time I encounter them on Twitter. I've reversed my downvote because this post plus these comments seems more reasonable, but I still think the claim in the title is very overconfident—evidence against foom-in-a-box is just an improvement to the map of how to foom.
evidence against foom-in-a-box is just an improvement to the map of how to foom.
Could you elaborate on this? I equate foom with the hard take-off scenario, which I believe I've explained is virtually impossible, in contrast to the slow take-off, which, despite being slow, is still very dangerous, as I described.
I think my view roughly aligns with those of Robin Hanson and Paul Christiano, but I believe I've provided the more precise, gears-level description that has been lacking, and explained why the onus is really on those who think a hard take-off is possible at all.