“AGI is death, you want Friendly AI in particular and not AGI in general.”
I’m not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don’t plan to give it outputs until I know what it does.
“‘Life’ is not the terminal value, terminal value is very complex.”
I don’t mean that life is the terminal value that all human actions reduce to. I mean it exactly the way I said above: achieving any other value requires that I be alive. Nor do I mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.
I recently found Less Wrong through Eliezer’s Harry Potter fanfic, which has become my second favorite book. Thank you so much, Eliezer, for reminding me how rich my Art can be.
I was also delighted to find out (not so surprisingly) that Eliezer is an AI researcher. Over the past several months, I have decided to change my career path to AGI, and many of these articles have been helpful.
I have been a rationalist for as long as I can remember. But I was raised as a Christian, and for some reason it took me a while to think to question the premise of God. Fortunately, as soon as I did, I rejected it. Then it was up to me to 1) figure out how to be immortal and 2) figure out morality. I’ll be signing up for cryonics as soon as I can afford it. Life is my highest value because it is the terminal value; it is required for any other value to be possible.
I’ve been reading this blog every day since I found it, and I hope to keep benefiting from it. I’m usually quiet, but I suspect the more I read, the more I’ll want to comment and post.