I think you're strawmanning him somewhat.

"It seems very clear that Jaime thinks that AI x-risk is unimportant relative to almost any other issue, given his non-interest in trading off x-risk against those other issues"

does not seem a fair description of

"I won't endanger the life of my family, myself and the current generation for a small decrease of the chances of AI going extremely badly in the long term."
People are allowed to have multiple values! If someone would trade a small amount of value A for a large amount of value B, this is entirely consistent with them thinking both are important.
Like, if you offer people the option to commit suicide in exchange for reducing x-risk by X%, what value of X do you think they would require? And would you say they are not x-risk motivated if they e.g. aren't willing to do it at 1e-6?
In practice this doesn't really come up, so it's not that relevant. Similarly for Jaime's position, how much he believes himself to be in situations where he's trading off a meaningful reduction in x-risk against meaningful harm to the present generation seems very important.
I think you’re strawmanning him somewhat