0. How accurate is it? Points 1-4 all seem to be about how quickly we can get the answer that a given algorithm would produce, including by finding a new algorithm that reaches the same answer. But it's also (maybe even more) important to get a better answer.
Yes, and that gets into another aspect of my skepticism about AI risk. More thinking is not necessarily better thinking.
EDIT: I just realized that I'm the one smuggling in the assumption that RSI refers to speed improvements. So I guess the deeper question is: where does "more" and/or "better" come from? And, if we're talking about better, how does the AGI know what "better" is?