By the way, I have spent quite a long time trying to “debunk” the set of ideas around Friendly AI and the Singularity, and my conclusion is that there’s simply no reasonable mainstream disagreement with that somewhat radical hypothesis. Why, then, is FAI/Singularity not mainstream? Because the mainstream of science doesn’t have to publicly endorse every idea it cannot refute. There is no “court of crackpot appeal” where a correct contrarian can go to show, once and for all, that their problem/idea is legit. Academia can basically say “fuck off, we don’t like you or your idea; you won’t get a job at a university unless you work on something we like”.
Now, such an ability to arbitrarily tell people to get lost is useful, because there are so many crackpots around and they are really annoying. But it is a very simple and crude filter, akin to cutting your internet connection to prevent spam email. Losing just Eliezer and Nick Bostrom’s insights about Friendly AI may cost academia more than all the crackpots put together could ever have cost.
Robin Hanson’s way around this was to expend a significant fraction of his life getting tenure, and now they can’t sack him, but that doesn’t mean that mainstream consensus will update to his correct contrarian position on the singularity; they can just press the “ignore” button.
That’s precisely the point I’m trying to make. We do lose a lot by ignoring correct contrarians. I think academia may be losing a lot of knowledge by filtering crudely. If indeed there is no mainstream academic position, pro or con, on Friendly AI, I think academia is missing something potentially important.
On the other hand, institutions need some kind of a filter to avoid being swamped by crackpots. A rational university or journal or other institution, trying to avoid bias, should probably assign more points to “promiscuous investigators,” people with respected mainstream work who currently spend time analyzing contrarian claims, whether to confirm or debunk. (I think Robin Hanson is a “promiscuous investigator.”)
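Incidentally, the trade-off being argued here is just an expected-value calculation. Here is a minimal toy model in Python; every number in it is a made-up assumption, not a measured quantity, and all it shows is that the crude filter’s cost isn’t obviously zero:

```python
# Toy model of academia's "crackpot filter". All parameters are
# illustrative assumptions, not measurements.

n_claims = 10_000        # contrarian claims arriving per decade (assumed)
p_correct = 0.001        # fraction that are actually correct (assumed)
value_correct = 1_000.0  # payoff of adopting one correct contrarian idea (assumed)
cost_review = 0.05       # cost of seriously evaluating one claim (assumed)

# Crude filter: ignore everything. No review cost is paid, but every
# correct contrarian idea is lost.
payoff_crude = 0.0

# "Promiscuous investigator" policy: evaluate every claim, pay the
# review cost for all of them, capture the value of the rare correct ones.
payoff_review = n_claims * (p_correct * value_correct - cost_review)

print(f"crude filter: {payoff_crude:.1f}")   # 0.0
print(f"review all:   {payoff_review:.1f}")  # 9500.0
```

With these (entirely invented) numbers, reviewing wins by a wide margin; flip the assumptions so that p_correct * value_correct falls below cost_review and the crude filter wins instead. The disagreement above is precisely a disagreement about which regime academia is actually in.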
I hereby nominate this for understatement of the millennium:
If true, it will eventually be accepted by academia. Ironically enough, by then there will be no academia in the present sense anymore.
Does a uFAI killing all of our scientists count as them “accepting” the idea? Rhetorical question.
My social intuitions tell me it is generally a bad idea to use words like ‘kill’ (as opposed to, say, ‘overwrite’, ‘fatally reorganize’, or ‘dismantle for spare part(icle)s’) when describing scenarios like that, as such wording plays into some people’s misguided intuitions about anthropomorphic Skynet dystopias. On Less Wrong it matters less, but if one were trying to convince, e.g., a non-singularitarian transhumanist that singularitarian ideas were important, then subtle language cues like that could have big effects on your apparent theoretical leaning and on the outcome of the conversation. (This is more of a general heuristic than a critique of your comment, Roko.)
Good point, but one of the possibilities is that the UFAI takes long enough to become completely secure in its power that it actually does try to eliminate people as a threat or a slowing factor. Since in this scenario, unlike in the “take apart for raw materials” scenario, people dying is the UFAI’s intended outcome and not just a side effect, “kill” seems an accurate word.
Yes, it is true. I would avoid ‘overwrite’ or ‘fatally reorganize’ because people might not get the idea. Better to go with “rip you apart and re-use your constituent atoms for something else”.
I like to use the word “eat”; it’s short, evocative, and basically accurate. We are edible.
I want a uFAI lolcat that says “I can has ur constituent atomz?” and maybe a “nom nom nom” next to an Earth-sized paper clip.
I’d never thought about that, but it sounds very likely, and deserves to be pointed out in more than just this comment.
I don’t expect the post-Singularity world to be pretty much an extension of today, with scientists in postlabs and postuniversities and waitresses in postpubs.
A childish assumption.
Come on, where else could I possibly get my postbeer?
http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/ is a post that I find relevant.
Peer review is about low-hanging fruit: the stuff already supported by enough evidence that writing about it can be done easily by sourcing extensive support from prior work.
As for the damage of ignoring correct contrarians: a Nobel Prize in economics was awarded for a paper on markets with asymmetric information (George Akerlof’s “The Market for ‘Lemons’”) which a reviewer had rejected with a comment along the lines of “if this is correct, then all of economics is wrong”.
There is also the story of someone who failed to get a PhD for their work despite presenting it on multiple separate occasions, at the last of which Einstein was in the room and said it was correct (and it was).
You might be thinking of de Broglie. Einstein was called in to review his PhD thesis. Though he did end up getting his PhD (and the Nobel).
Another near-miss case also preceding peer review was Arrhenius’s PhD thesis.
I should clarify: my position on the factual questions surrounding the Singularity/FAI is mostly the same as the consensus of the original SIAI guys: Eliezer, Mike Vassar, Carl Shulman. Perhaps I have a slightly larger probability assigned to the “Something outside of our model will happen” category, and I place a slightly longer time lag on any of this stuff happening. And this is after disagreeing significantly with them and admitting that they were right.
Does “Friendly AI and the Singularity” qualify as being “a hypothesis” in the first place?
“Friendly AI” seems more like an action plan, and “the Singularity” seems to be a muddled mixture of ideas, some of which are more accurate than others.