Thanks for coming! Please fill out a short survey if you have a moment: https://forms.gle/WDHS9Z2dvgoM7iCg9
Nikita Sokolsky
ACX/LW Seattle spring meetup 2024
ACX Everywhere—Punta Cana (DR)
Seattle ACX Everywhere—October 2022
Seattle September meetup: Absurdity Bias
Weather is nearly perfect, so the event is definitely happening today. I’ll be wearing a black jacket and a grey shirt.
Easy guide for running a local Rationality meetup
Follow up summer event is now live: https://www.lesswrong.com/events/zPw5WLaJ9f4QEfpyR/lw-acx-seattle-summer-meetup.
LW/ACX/EA Seattle summer meetup
Points from this post I agree with:
AGI will have at least 100x faster decision making speed for any given decision, compared to human decision making
AGI will be able to interact with all 8 billion humans at once, in parallel, giving it a massive advantage
Slow motion videos present a helpful analogy
My objection is primarily that having 100x faster processing power wouldn't automatically let you act 100x faster in the physical world:
Any mechanical systems you control won't be 100x faster, due to limits on how fast real-world mechanical parts can move. E.g., if you control a drone, the drone won't fly or rotate 100x faster just because your processing power is 100x faster. And you'll probably have to control the drone remotely, because the entire AGI wouldn't fit on the drone itself, which puts a further limit on how quickly you can make decisions.
Any operations that rely on human action will run at 1x speed, even if you somewhat streamline them through parallelization and superior decision making
Being 100x faster is useless if you don't have full information on what the humans are doing or plotting. And they could hide pretty easily by meeting offline with no electronics present.
Even “manipulating humans” can be hard to do if you have no way to directly interact with the physical world. E.g., good luck manipulating the Ukrainian war zone from the Internet.
But how will the AI get that confidence without trial & error?
You should make a separate post on “Can AGI just simulate the physical world?”. Will make it easier to find and reference in the future.
“but with humanity being overwhelmed by the number of different kinds of attack.”
But AGI will only be able to start carrying out these sneaky attacks once it's fairly convinced it can survive without human help. Otherwise humans will notice the various problems cropping up and might just decide to “burn all GPUs,” which is currently an unimaginable act. So AGI will have to act sneakily behind the scenes for a very long time. This comes back to the argument that humans have a strong upper hand as long as we hold a monopoly on physical-world manipulation.
People here shouldn’t assume that, because Eliezer never posted a detailed analysis on LessWrong, everyone on the doomer train is starting from unreasonable premises regarding how robot building and research could function in practice.
I agree but unfortunately my Google-fu wasn’t strong enough to find detailed prior explanations of AGI vs. robot research. I’m looking forward to your explanation.
Those are excellent comments! Do you mind if I add a few quotes from them to the post?
Contra EY: Can AGI destroy us without trial & error?
Physical space = did you like that the meetup was in Capitol Hill? (ignoring that it was at Optimism specifically)
Venue = did you like that the meetup was in Optimism Brewing? (ignoring that it's located in Cap Hill)
Thank you for attending! The guessing jar count is… 1400. And the winner is Utna D / @regexkind with a guess of 1425. The median guess (excluding the 1 outlier) was 1374, which was remarkably close! The full data is here: https://tinyurl.com/4vdv346f
I wasn’t able to find Utna/@regexkind’s email or Facebook, so please reach out to me to claim your $50 prize!
And if you have a free minute, I’d appreciate if you could fill out a survey about the meetup: https://forms.gle/yZJQQTKJYSsKiB5XA. Responses will be shared with Scott Alexander and other meetup organizers, to help make the next one even better.
If you have a bit of free time, we’d appreciate if you fill out the post-event survey: https://docs.google.com/forms/d/e/1FAIpQLSe8Lpqt-In6aIAtSlA9pWEgRUlwW2CbLzYogJhJ3KC7mkycVg/viewform