Also, if you read almost anything on the subject, people are constantly saying that they don’t think superhuman intelligence is inevitable or close.
If it’s “meaningfully close enough to do something about it,” I will take that as “close.” I don’t think Bostrom puts a number on it (or I don’t remember him doing so), but he seems to be addressing a real possibility rather than a hypothetical that is hundreds or thousands of years away.
What do you mean, you’ve never seen a consistent top-to-bottom reasoning for it? This is not a rhetorical question, I am just not sure what you mean here. If you are accusing e.g. Bostrom of inconsistency, I am pretty sure you are wrong about that.
I mean, I don’t see a chain of conclusions that leads to the theory being “correct.” Vaniver mentioned below that this is not the correct perspective to adopt, and I agree with that… or I would, assuming the hypothesis were Popperian (i.e., that one could do something to disprove AI being a large risk in the relatively near future).
If you are just saying he hasn’t got an argument in premise-conclusion form, well, that seems true but not very relevant or important. I could make one for you if you like.
If you could make such a premise-conclusion case, I’d be more than happy to hear it out.
Ease of data collection? Cost of computing power? Usefulness of intelligence? -- but all three of these seem like things that people have argued about at length, not assumed.
Well, I have yet to see the arguments.
Also, the case for AI safety doesn’t depend on these things being probable, only on their not being extremely unlikely.
It depends on being able to put numbers on those probabilities, though; otherwise you are in a Pascal’s-wager scenario, where any event that is not almost certainly ruled out should be taken into account with a seriousness proportional to its hypothetical impact.
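To make the Pascal’s-wager point concrete, here is a minimal sketch with entirely hypothetical numbers: if the impact term is allowed to be arbitrarily large, an event with a vanishingly small (but not zero) probability can still dominate the expected-loss comparison, which is why leaving the probabilities unquantified is a problem.

```python
# Illustrative only: made-up numbers showing how unbounded stakes swamp
# expected-value reasoning when probabilities are left unquantified.

def expected_loss(probability: float, impact: float) -> float:
    """Expected loss = probability of the event times the size of its impact."""
    return probability * impact

# A mundane risk: fairly likely, modest impact.
mundane = expected_loss(0.01, 1e3)

# A "Pascalian" risk: vanishingly unlikely, but with an impact posited
# large enough that it dominates the comparison anyway.
pascalian = expected_loss(1e-9, 1e15)

print(mundane, pascalian)
print(pascalian > mundane)  # the tiny-probability event dominates
```

The arithmetic is trivial; the point is that unless the probability can be argued down to effectively zero (or the impact bounded), this kind of calculation tells you to take the event seriously no matter how speculative it is.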