My p(this branch of humanity won’t fulfill the promise of the night sky) is actually more like 0.82 or so, idk. (I’m even lower on p(everyone will die), because there might be superintelligences in other branches that acausally trade to save the existing lives, though I haven’t thought about it carefully.)
I chat for an hour every two weeks with Erik Jenner, usually about AI safety stuff. I also talk about an hour every two weeks with a person who has fairly similar views to mine. Beyond that, I currently don’t talk much to people about AI risk.