1) “AI” is a fuzzy term. We have some pretty smart programs already. What counts? Watson can answer Jeopardy! questions. Compilers can write code and perform sophisticated optimizations. Some chatbots are very close to passing the Turing Test. It’s unlikely that we’re going to jump suddenly from where we are now to human-level intelligence. There will be time to adapt.
AI is a fuzzy term, but that doesn’t at all back up the statement “it’s unlikely that we’re going to jump suddenly from where we are to human-level intelligence.” This isn’t an argument.
2) Plausible. Read Permutation City, where the first uploads run much slower. This isn’t strong evidence against foom though.
3) Being able to read your own source code does not mean you can self-modify. You know that you’re made of DNA. You can even get your own “source code” sequenced for a few thousand dollars. No humans have successfully self-modified their way into an intelligence explosion; the idea seems laughable.
Humans don’t have real-time access to the individual neurons in our brains, and we don’t even know how they work at that level anyway.
1) You are right; that was tangential and unclear. I have edited my OP to omit this point.
2) It’s evidence that it will take a while.
3) Real-time access to neurons would probably be useless anyway; they change too quickly, and they change in response to the very effort to introspect.