Thanks for the link! There’s a bunch of interesting stuff on here. For instance, here’s a subforum on Felicifia that deals with futurism and xrisk:
Utilitarian future: Will we transcend our human bodies? Extend our lives? Create superhuman artificial intelligence? Mitigate existential risks? etc.
There’s currently a thread in it about whether SIAI is the most optimal charity.