Well, you’ve picked the weakest of his points to answer, and I put it to you that it was clearly the weakest.
You are right, of course, that what does or doesn’t show up in Charles Stross’s writing isn’t evidence in either direction: he’s a professional fiction author, so he has to write for entertainment value regardless of what he may or may not know or believe about what’s actually likely to happen.
A better example would be Peter Norvig, whose credentials are vastly more impressive than yours (or, granted, than mine), and who thinks we need to get at least another couple of decades of progress under our belts before there will be any point in resuming attempts to work on AGI. (Even I’m not that pessimistic.)
If you want to argue from authority, the result isn’t just tilted against the SIAI; it’s flat-out no contest.
If this means “until the theory and practice of machine learning is better developed, if you try to build an AGI using existing tools you will very probably fail,” it’s not unusually pessimistic at all. “An investment of $X in developing AI theory will do more to reduce the mean time to AI than $X spent on AGI projects using existing theory now” isn’t so outlandish either. What was the context/cite?
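(To make that second claim precise, here is a minimal formalization; the notation is mine, not Norvig’s. Write E[T] for the mean time to AGI as a function of cumulative spending a on theory and spending b on AGI engineering with current theory. The claim is then about marginal returns at today’s levels:

dE[T]/da < dE[T]/db ≤ 0

That is, both kinds of spending shorten the expected timeline, but a marginal dollar on theory shortens it more.)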
I don’t have the reference handy, but he wasn’t saying we should spend 20 years of armchair thought developing AGI theory before we start writing any code (I’m sure he knows better than that); he was saying we should forget about AGI completely until we’ve got another 20 years of general technological progress under our belts.
Not general technological progress, surely, but the theory and tools developed by working on particular machine learning problems and methodologies?
Those would indeed seem likely to be helpful. Better programming tools might also help, as would additional computing power: not so much because computing power is actually a limiting factor today, but because we tend to scale our intuitions about available computing power to what we physically deal with on an everyday basis (which, for most of us, is a cheap desktop PC), and we tend to flinch away from designs whose projected requirements would exceed such a machine. Raising that baseline makes us less likely to flinch away from good designs.