People here shouldn’t assume that, because Eliezer never posted a detailed analysis on LessWrong, everyone on the doomer train is starting from unreasonable premises regarding how robot building and research could function in practice.
I agree but unfortunately my Google-fu wasn’t strong enough to find detailed prior explanations of AGI vs. robot research. I’m looking forward to your explanation.
Points from this post I agree with:
AGI will make any given decision at least 100x faster than a human would
AGI will be able to interact with all 8 billion humans at once, in parallel, giving it a massive advantage
Slow-motion videos are a helpful analogy
My objection is primarily that 100x faster processing wouldn't automatically let you act 100x faster in the physical world:
Any mechanical system you control won't run 100x faster, because real-world mechanical parts have hard physical limits. For example, if you control a drone, it won't fly or rotate 100x faster just because your processing power is 100x faster. And you'd probably have to control the drone remotely, since the entire AGI wouldn't fit on the drone itself, so communication latency caps how fast you can react.
Any operation that relies on human action will run at 1x speed, even if you streamline it somewhat through parallelization and superior decision making
Being 100x faster is useless if you don't have full information on what the humans are doing or plotting. And they could hide pretty easily by meeting offline with no electronics present.
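To make the remote-control point concrete, here's a rough back-of-envelope sketch. The distance, processing time, and drone-dynamics figures are my own illustrative assumptions, not from the post:

```python
# Back-of-envelope: even with unlimited compute, a remotely controlled
# drone's reaction loop is bounded by the signal round-trip time plus the
# airframe's own dynamics. All numbers here are illustrative assumptions.

C = 299_792_458  # speed of light in m/s

def min_control_loop_s(link_distance_m: float, processing_s: float = 0.0) -> float:
    """Lower bound on one sense -> decide -> act cycle over a radio link."""
    round_trip_s = 2 * link_distance_m / C
    return round_trip_s + processing_s

# Suppose the AGI's datacenter sits 300 km from the drone and its
# compute time is negligible:
loop_s = min_control_loop_s(300_000)
print(f"{loop_s * 1e3:.1f} ms per cycle at minimum")  # ~2 ms, before any radio/network overhead
```

Even that ~2 ms floor understates things: a quadrotor needs on the order of tens of milliseconds to complete a sharp attitude change no matter how fast the controller thinks, so the mechanics, not the thinking, dominate the loop.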