Haha… well, it looks from your profile that you’re still managing to think about things you find stressful. “chances of AGI in the next few years are high enough (though still <50%) that it’s best to focus on disseminating safety relevant research as rapidly as possible”… so no problems there. Hope my comment didn’t come across as mean.
Also, you’re advised by Marcus Hutter? That’s cool! I have a copy of “Universal Artificial Intelligence” that I want to get around to reading sometime. Could I DM you to talk about UAI sometime?
Sure, anytime. I also organize the AIXI research community here: https://uaiasi.com
There is a reading group on the newer book, “An Introduction to UAI,” running now (mostly finished, but maybe we’ll start another round). The old book still has its advantages.