Out of interest, can you give a rough idea of your probability estimate that a functioning superintelligent AI can be created in a reasonable time-scale without our having first gained a detailed understanding of the human brain—i.e. that a superintelligence is built without the designers reverse-engineering an existing intelligence to any significant extent?
Edit: because there is nothing rational about interpreting words like “surely” literally when they are obviously being used in a casual or innocently rhetorical way.
You and Nesov either did not interpret your use of ‘surely’ (in context) to mean the same thing, or Nesov thought that additional clarification was needed (a statement which you do not seem to agree with). I’m failing to parse your use of the word rational in this context.
Intention: Helpful information. I may not respond to a reply.
If Nesov thought that additional clarification was necessary, he could have said so. But actually he simply criticised the use of the word “surely”.
I consider pedantry to be a good thing. On the other hand, it is at least polite to be charitable in interpreting someone, particularly when the nitpick in question is basically irrelevant to the main thrust of the argument.
“Surely” is just a word. Literally it means 100% or ~100% probability, but sometimes it just sounds good or it is used sloppily. If I had to give a number, I’d have said 95% probability that superintelligent AI won’t be developed before we learn about the human brain in detail.
I’m highly amenable to criticism of that estimate from people who know more about the subject, but since my politely phrased request for Nesov’s own estimate was downvoted, I decided that this kind of uncharitableness has more to do with status than with constructive debate. As such, it is not rational.
I retracted the comment on the basis that it was a little petulant, but since you asked, there is my explanation.
Surely not “surely”.
If not, then the particular point I was making is strengthened.
I care not which position a flaw supports, and this one seems like grievous overconfidence.