Would we rather AIs be good at decision theory or bad at decision theory? I think this is really unclear, and I'd be curious for someone to write something up weighing the upsides and downsides.
Good if Friendly, bad otherwise.