It’s hard to summarize without it coming across as a straw man, e.g. “AIXI + Moore’s law means that all powerful superhuman intelligence is dangerous, inevitable, and close.” That’s partly because I’ve never seen a consistent top-to-bottom reasoning for it. Its proponents always seem to start by assuming things which I wouldn’t hold as given about the ease of data collection, the cost of computing power, and the usefulness of intelligence.
I object to pretty much everything in this quote. I think the straw-man argument you give is pretty obviously worse than many other summaries you could give, e.g. Stuart Russell’s “Look, humans have a suite of mental abilities that gives them dominance over all other life forms on this planet. The goal of much AI research is to produce something which is better at those mental abilities than humans. What if we succeed? We’d better figure out how to prevent history from repeating itself, and we’d better do it before it’s too late.”
Also, no one in the AI safety sphere thinks that every powerful superhuman intelligence would be dangerous; otherwise, what would be the point of AI alignment research?
Also, if you read almost anything on the subject, people constantly say that they don’t think superhuman intelligence is inevitable or close. Have you even read Superintelligence?
What do you mean, you’ve never seen a consistent top-to-bottom reasoning for it? This is not a rhetorical question; I am just not sure what you mean here. If you are accusing e.g. Bostrom of inconsistency, I am pretty sure you are wrong about that. If you are just saying he hasn’t got an argument in premise-conclusion form, well, that seems true but not very relevant or important. I could make one for you if you like.
I don’t know what assumptions you think the case for AI safety depends on: ease of data collection? Cost of computing power? Usefulness of intelligence? But all three of these seem like things that people have argued about at length, not assumed. Also the case for AI safety doesn’t depend on these things being probable, only on them being not extremely unlikely.
Also, if you read almost anything on the subject, people constantly say that they don’t think superhuman intelligence is inevitable or close.
If it’s “meaningfully close enough to do something about it”, I will take that as being “close”. I don’t think Bostrom puts a number on it, or at least I don’t remember him doing so, but he seems to be addressing a real possibility rather than a hypothetical that is hundreds or thousands of years away.
What do you mean, you’ve never seen a consistent top-to-bottom reasoning for it? This is not a rhetorical question; I am just not sure what you mean here. If you are accusing e.g. Bostrom of inconsistency, I am pretty sure you are wrong about that.
I mean, I don’t see a chain of conclusions that leads to the theory being “correct”. Vaniver mentioned below how this is not the correct perspective to adopt, and I agree with that… or I would, assuming the hypothesis were Popperian (i.e. that one could do something to disprove AI being a large risk in the relatively near future).
If you are just saying he hasn’t got an argument in premise-conclusion form, well, that seems true but not very relevant or important. I could make one for you if you like.
If you could make such a premise-conclusion case, I’d be more than welcome to hear it out.
ease of data collection? Cost of computing power? Usefulness of intelligence? But all three of these seem like things that people have argued about at length, not assumed
Well, I have yet to see the arguments.
Also the case for AI safety doesn’t depend on these things being probable, only on them being not extremely unlikely.
It depends on being able to put numbers on those probabilities, though; otherwise you are in a Pascal’s wager scenario, where any event that is not almost certainly ruled out should be taken into account with a seriousness proportional to its fictive impact.
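To make that worry concrete (a purely illustrative calculation with numbers I am making up, not anything argued above): the expected loss is just probability times impact, so if the impact term can be stipulated to be arbitrarily large, it dominates the calculation no matter how small the probability is.

$$\mathbb{E}[\text{loss}] = p \cdot I, \qquad \text{e.g. } p = 10^{-9},\; I = 10^{15} \text{ lives} \;\Rightarrow\; \mathbb{E}[\text{loss}] = 10^{6} \text{ lives.}$$

On these stipulated numbers the expected loss already outranks almost any ordinary priority, which is exactly the structure of the objection: without a defensible estimate of p, a large enough stipulated I forces whatever conclusion you like.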