We who are the first intelligences ever to exist … our tiny little brains at the uttermost dawn of mind … as awkward as the first replicator (2:01 in).
I watched the end of this video and liked it quite a lot. Pretty good job, Eliezer. And thanks for the link.
And wow, the Q&A at the end of the talk has some tragically confused questions. And I’m sure these are people who consider themselves intelligent. Very amusing, and maddening.
Selection pressure might, a lot of the time, be even weaker than a 3% fitness advantage having a 6% chance of becoming universal in the gene pool; or at least it’s more complicated: a lot of changes don’t offer a stable advantage over long periods.
I think natural selection and human intelligence at this point can’t really be compared for strength. Each is doing things that the other can’t—afaik, we don’t know how to deliberately create organisms which can outcompete their wild conspecifics. (Or is it just that there’s no reason to try and/or we have too much sense to do the experiments?)
And we certainly don’t know how to deliberately design a creature which could thrive in the wild, though some animals which have been selectively bred for human purposes do well as ferals.
This point may be a nitpick since it doesn’t address how far human intelligence can go.
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
Eliezer made a very interesting claim—that current hardware is sufficient for AI. Details?
To be fair, the races of Middle-Earth weren’t created by evolution, so the criticism isn’t fully valid. Ilúvatar gave the dwarves spirits but set them to sleep so that they wouldn’t awaken before the elves. It’s not unreasonable to assume that as he did so, he also made them admire elven beauty.
Why do humans think dolphins are beautiful?
Is a human likely to think that one specific dolphin is so beautiful as to be almost worth fighting a duel about it being the most beautiful?
Well, it’s always possible that Gimli was a zoophile.
Yeah, I mean have you seen Dwarven women?
I’m a human and can easily imagine being attracted to Galadriel :) I can’t speak for dwarves.
Well, elves were intelligently designed to specifically be attractive to humans...
Most who think Moravec and Kurzweil got this about right think that supercomputer hardware could run something similar to a human brain today, if you had the dollars, were prepared for it to run a bit slow, and had the right software.
“Another example of attribution error: Why would Gimli think that Galadriel is beautiful?”
A waist:hip:thigh ratio between 0.6 and 0.8, and a highly symmetric face.
But she doesn’t even have a beard!
but he did have a preoccupation with her hair...
If I’m not mistaken, all those races were created, so they could reasonably have very similar standards of beauty, and the elves might have been created to match that.
[From Wikipedia](http://en.wikipedia.org/wiki/Dwarf_%28Middle-earth%29): “In The Lord of the Rings Tolkien writes that they breed slowly, for no more than a third of them are female, and not all marry; also, female Dwarves look and sound (and dress, if journeying — which is rare) so alike to Dwarf-males that other folk cannot distinguish them, and thus others wrongly believe Dwarves grow out of stone. Tolkien names only one female, Dís. In The War of the Jewels Tolkien says both males and females have beards.”
On the other hand, I suppose it’s possible that if humans find Elves that much more beautiful than humans, maybe Dwarves would be affected the same way, though it seems less likely for them.
Also, perhaps dwarves don’t have their beauty-sense linked to their mating selection. They appreciate elves as beautiful but something else as sexy.
Yeah, as JamesAndrix alludes to (warning: extreme geekery), the Dwarves were created by Aulë (one of the Valar (Gods)) because he was impatient for the Firstborn Children of Ilúvatar (i.e., the Elves) to awaken. So you might call the Dwarves Aulë’s attempt at creating the Elves; at least, he knew what the Elves would look like (from the Great Song), so it’s pretty plausible that he impressed in the Dwarves an aesthetic sense which would rank Elves very highly.
Yes, this is definitively correct. Also, it’s a world with magic rings and dragons, people.
There are different kinds of plausibility. There’s plausibility for fiction, and there’s plausibility for culture. Both pull in the same direction for LOTR to have Absolute Beauty, which by some odd coincidence, is a good match for what most of its readers think is beautiful.
What might break your suspension of disbelief? The usual BEM (bug-eyed monster) behavior would probably mean that the Watcher in the Water preferentially grabbing Galadriel, if she were available, would seem entirely reasonable. But what about Treebeard? Shelob?
Particularly when referring to the movie versions, you could consider this simply a storytelling device, similar to all the characters speaking English even in movies set in non-English speaking countries (or planets). It’s not that the Absolute Beauty of Middle-Earth is necessarily a good match for our beauty standards, it’s that it makes it easier for us to relate to the characters and experience what they’re feeling.
You write “Eliezer made a very interesting claim—that current hardware is sufficient for AI. Details?”
I don’t know what argument Eliezer would’ve been using to reach that conclusion, but it’s the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same operations; then scale the estimated cost of the brain in proportion to the ratio of volume of nervous tissue.
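That scaling argument can be sketched numerically. Every constant below is a rough illustrative assumption (stand-ins for the kind of inputs Moravec used, not his actual figures):

```python
# Fermi estimate in the style of the retina extrapolation described
# above. All constants are rough illustrative assumptions.

RETINA_OPS_PER_SEC = 1e9   # assumed silicon cost of matching retinal
                           # edge/motion detection
RETINA_MASS_G = 0.2        # assumed mass of retinal neural tissue
BRAIN_MASS_G = 1.4e3       # approximate mass of a human brain

# Scale by tissue mass, assuming the rest of the brain is no more
# computationally dense per gram than the retina.
brain_ops_per_sec = RETINA_OPS_PER_SEC * (BRAIN_MASS_G / RETINA_MASS_G)

print(f"Estimated brain throughput: {brain_ops_per_sec:.1e} ops/sec")
# roughly 7e12 ops/sec under these assumptions
```

The interesting feature of the estimate is that the conclusion only moves linearly with the assumptions, so even being off by an order of magnitude on the retina figures doesn’t change the qualitative picture.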
See http://boingboing.net/2009/02/10/hans-moravecs-slide.html for the conclusion of one popular version of this kind of analysis. I’m pretty sure that the analysis behind that slide is in at least one of Moravec’s books (where the slide, or something similar to it, appears as an illustration), but I don’t know offhand which book.
The analysis could be grossly wrong if the foundations are wrong, perhaps because key neurons are doing much more than we think. E.g., if some kind of neuron is storing a huge number of memory bits per neuron (which I doubt: admittedly there is no fundamental reason I know of that this couldn’t be true, but there’s also no evidence for it that I know of), or if neurons are doing quantum computation (which seems exceedingly unlikely to me; and it is also unclear that quantum computation can even help much with general intelligence, as opposed to helping with a few special classes of problems related to number theory). I don’t know of any particularly likely way for the foundations to be grossly wrong, though, so the conclusions seem pretty reasonable to me.
Note also that suitably specialized computer hardware tends to have something like an order of magnitude better price/performance than the general-purpose computer systems which appear on the graph. (E.g., it is much more cost-effective to render computer graphics using a specialized graphics board, rather than using software running on a general-purpose computer board.)
I find this line of argument pretty convincing, so I think it’s a pretty good bet that given the software, current technology could build human-comparable AI hardware in quantity 100 for less than a million dollars per AI; and that if the figure isn’t yet as low as one hundred thousand dollars per AI, it will be that low very soon.
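The closing cost claim can likewise be made explicit. The throughput and price/performance inputs here are assumptions for illustration, not quoted figures:

```python
# Rough cost arithmetic for human-comparable AI hardware.
# All inputs are illustrative assumptions.

BRAIN_OPS_PER_SEC = 1e14       # assumed compute for a human brain
OPS_PER_SEC_PER_DOLLAR = 1e9   # assumed price/performance of
                               # general-purpose hardware
SPECIALIZATION_GAIN = 10       # assumed order-of-magnitude gain from
                               # specialized hardware

cost_general = BRAIN_OPS_PER_SEC / OPS_PER_SEC_PER_DOLLAR
cost_specialized = cost_general / SPECIALIZATION_GAIN

print(f"General-purpose hardware: ${cost_general:,.0f} per AI")
print(f"Specialized hardware:     ${cost_specialized:,.0f} per AI")
# $100,000 and $10,000 per AI under these assumptions
```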
Thanks. I’m not sure how much complexity is added by the dendrites making new connections.
The dwarves were intelligently designed by some god or other. That a dwarf can find an elf more beautiful than dwarves could be an unfortunate design flaw.
(Elves were also intelligently designed, but their creator was perhaps more intelligent.)
Edit: The creator-god of dwarves probably imbued them with some of his own sense of beauty.
With all respect to Eliezer, I think the gravely anachronistic term “village idiot” shouldn’t be used anymore. I’ve wanted to say that almost every time I see the intelligence-scale graphic in his talks.
Why do you think the term “village idiot” is “gravely anachronistic”? It’s part of an idiom. “Idiot” was briefly used as a quasi-scientific label for a certain range of IQs, and that usage is certainly anachronistic, but “idiot” had meaning before that, and continues to. The same is true for “village idiot”.
You’re right, wnoise, “village idiot” is part of an idiom, but one I don’t like at all, and I don’t think I’m alone in this regard.
I should have put my objection as “‘Village idiot’ is gravely anachronistic unless you want to be insensitive by subsuming a plethora of medical conditions and social determinants under a dated, derogatory term for mentally disabled people.”
This may sound like nit-picking, but said intelligence graph is obviously an important item in SIAI’s symbolic tool kit, and therefore every detail should be right. When I see the graph, I’m always thinking: please, “for the love of cute kittens”, change the “village idiot”!
For what it’s worth, I don’t find anything wrong with the term “village idiot”.
However, from previous discussions here, I think I might be on the low side of the community for my preference for “lengths to which Eliezer and the SIAI should go to accommodate the sensibilities of idiots”—there are more important things to do, and a never-ending supply of idiots.
Still, maybe it should be changed. Just because it doesn’t offend me doesn’t mean it won’t offend anybody reasonable.
In conversation with friends I tend to use George W Bush as the other endpoint—a dig at those hated Greens but it’s uncontentious here in the UK, and if it helps keep people listening (which it seems to) it’s worth it.
This seems a bad example to use given the context. If you are trying to convince people that greater-than-human intelligence will give AIs an insurmountable advantage over even the smartest humans, then drawing attention to a supposed idiot who became the most powerful man in the world for eight years raises the question of whether you either don’t know what intelligence is or vastly overestimate its ability to grant real-world power.
For the avoidance of doubt, it seems very unlikely in practice that Bush doesn’t have above-average intelligence.
Wikipedia gives him an estimated IQ of 125, which may be a wee bit high for the low end of the IQ distribution. Still, if that’s the example that requires the least explanation in practice, why not.
Maybe Forrest Gump would work as well?
My most recent use of this example got the response George W Bush Was Not Stupid.