Just going to focus on the first one to get my opinion down: Will Automating AI R&D not work for some reason, or will it not lead to vastly superhuman superintelligence within 2 years of “~100% automation” for some reason?
I think there are diminishing returns on ‘intelligence’. Something with a testable IQ that ‘maxes out’ any available test may well come along in the next few years, possibly by surprise, but the net effect, while transformative, will not be an intelligence explosion that destroys the planet and the human race with its brilliance.
I think there’s a bit of a conceit that ‘with enough smarts, someone doesn’t have to be bossed around by idiots’, when in practice this rarely seems to happen.
How many people pay for Google’s top-tier AI package, and how much has it actually helped them amass resources or advance professionally?