As David Gerard pointed out, EY isn’t the only game in town. I think the basic question he’s trying to answer—“If you had to build human values from scratch, how would you do it?”—is a very interesting one, even if I think his answer to it is not very good.
How you answer that question depends on what shape you think the future will take. EY thinks we’ll invent a god, and so we need to proceed very carefully and get the answer completely correct the first time. Hanson thinks we’ll evolve as a society, and so most of the answers will get found along the way; but we can make some predictions now and alter the shape of the future with our actions.
Personal values also differ heavily. I don’t expect to live forever, and so if my descendants are memetic rather than genetic, and synthetic rather than organic, it’s no great loss. (That’s not to say there are no values / memes I’d like to preserve, of course.) To someone who wants to personally exist for a long time, it becomes very relevant what role humans will have in the future.
To someone who wants to personally exist for a long time, it becomes very relevant what role humans will have in the future.
I think this is an excellent point I overlooked. That talk of the future of mankind, that assignment of moral value to future humans but zero to the AI itself… it actually makes a lot more sense in the context of self-preservation.