We blatantly have updatable goals: people do not have the same goals at 5 as they do at 20 or 60. I don’t know why perfect introspection would be needed to have some ability to update.
Sorry, that was bad wording on my part; I should’ve said, “updatable terminal goals”. I agree with what you said there.
How so? Are you asserting that there exists an optimal ethical system that is independent of the actors’ goals?
Yes, that’s what this whole discussion is about.
I don’t feel confident enough in either a “yes” or a “no” answer, but I’m currently leaning toward “no”. I am open to persuasion, though.
I personally don’t know of any evidence in favor of terminal values, so I do agree with you there. Still, it makes a nice thought experiment: could we create an agent possessed of general intelligence and the ability to self-modify, and then hardcode it with terminal values? My answer would be “no”, but I could be wrong.
That said, I don’t believe that there exists any kind of a universally applicable moral system, either.
people do not have the same goals at 5 as they do at 20 or 60
Source?
They take different actions, sure, but it seems to me, based on childhood memories, etc., that these are in the service of roughly the same goals. Have people, say, interviewed children and found they report differently?
You can make the evidence compatible with the theory of terminal values, but there is still no support for the theory of terminal values.
How many 5 year olds have the goal of Sitting Down With a Nice Cup of Tea?
One less now that I’m not 5 years old anymore.
Could you please make a real argument? You’re almost being logically rude.
Why do you think adults sit down with a nice cup of tea? What purpose does it serve?