Why is the Alignment Researcher different from a normal AI researcher?
For example, Markov decision processes are often conceptualized in terms of “agents” that take “actions” and receive “rewards”, and I think none of those terms are “True Names” (see the sketch at the end of this comment).
Despite this, when researchers look into ways to give MDPs some other sort of capability or guarantee, they don’t really seem to prioritize finding True Names. In your dialogue, the AI researcher seems perfectly fine accepting the philosopher’s vaguely defined terms.
What is it about alignment that makes finding True Names such an important strategy, when finding True Names doesn’t seem to be that important for, e.g., learning from biased data sets (or any of the other million things AI researchers try to get MDPs to do)?
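
For concreteness, the picture I have in mind is just the standard textbook discounted-MDP setup (nothing here is specific to your dialogue; this is only a sketch of the usual formalism):

$$\text{MDP} = (S, A, P, R, \gamma), \qquad P(s' \mid s, a), \qquad R : S \times A \to \mathbb{R}, \qquad \pi : S \to \Delta(A)$$

Here the “actions” are just elements of the set $A$, the “rewards” are just outputs of the function $R$, and the “agent” is nothing more than the policy $\pi$. The usual theorems and algorithms go through regardless of whether any of these formal objects capture the intuitive concepts the words point at, which is what I mean by saying they aren’t True Names.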