This should help.
In general, the best charities are SIAI, SENS and FHI.
I disagree. I recommend the top rated charities on givewell.net, specifically the Stop TB Partnership. (They also have a nice blog.)
On the other hand, I am willing to donate to SIAI out of my “donate to webcomics” mental account instead of my “save lives” mental account. ;)
Regardless of whether or not he ever solves the Friendly AI problem, Eliezer’s writing, on this blog and elsewhere, has given me enough of what might pejoratively be called “entertainment value” for me to want to pay him to keep doing it.
Why don’t SIAI and FHI get evaluated by GiveWell? Maybe there would be some confusion regarding their less direct ways of helping people but I’d at least like some information about their effectiveness at what they claim to do.
Or maybe that information is out there already. Anyone?
They are weird.
Basically, yes, with “They” referring to SIAI and FHI.
That’s how I interpreted it, but I see the ambiguity now that you mention it. It doesn’t help that the two statements are basically equivalent if you use “weird” as a relative term.
GiveWell is a pretty small organization, and they haven’t yet devoted any resources to evaluating research-based charities: they’re looking for charities that can prove that they’re providing benefits today, and lots of research ends up leading nowhere. How many increments of $1,000 (the amount it takes to cure an otherwise fatal case of tuberculosis) have been spent on medical research that amounted to nothing?
For the record, I agree that SIAI is doing important work that must be done someday, but I don’t expect to see AGI in my lifetime; there’s no particular urgency involved. If Eliezer and co. found themselves transported back in time to 1890, would they still say that solving the Friendly AI problem is the most important thing they could be doing, given that the first microprocessor was produced in 1971? I’d tell them that the first thing they need to do is to go “discover” that DDT (first synthesized in 1874) kills insects and show the world how it can be used to kill disease vectors such as mosquitoes; DDT is probably the single man-made chemical that, to date, has saved more human lives than any other.
In 1890, the most important thing to do would still be FAI research. The best-case scenario is one in which the math for FAI is worked out before the first vacuum tube, let alone the first microchip. Existential risk reduction is the single highest-utility thing around. Sure, working to ensure that nukes are never built, or are built only by someone capable of creating an effective singleton, is important, but FAI is far more so.
Well, what if he were sent back to Ancient Greece (and magically acquired the ability to speak Greek)? Even if he got all the math perfectly right, who would care? Or even understand it?
He would then spend the rest of his life ensuring that it is preserved. If necessary he would go around hunting for obscure caves with a chisel in hand. Depending, of course, on how much he cares about influencing the future of the universe as opposed to other less abstract goals.
Yes, who today cares what any Greek mathematician had to say...
Now you’re just moving the goal posts.
Sorry. :(
Anyway, I have much more confidence that Eliezer and future generations of Friendly AI researchers will succeed in making sure that nobody turns on an AGI that isn’t Friendly than I have in Eliezer and his disciples solving both the AGI and Friendly AI problems in his own lifetime. Friendly AI is a problem that needs to be solved in the future, but, barring something like a Peak Oil-induced collapse of civilization to pre-1920 levels, the future will be a lot better at solving these problems than the present is, and we can leave it to them to worry about. After all, the present is certainly better positioned to solve problems like epidemic disease and global warming than the past was.
Would you consider SENS a viable alternative to SIAI? Or do you think ending aging is also impossible/something to be put off?
Actually, I would; I’ve donated a small amount of money already. Investing in anti-aging research won’t pay off for at least thirty years (that’s the turnaround time of medical research from breakthrough to usable treatment), but it’s a lot less of a pie-in-the-sky concern. (Although as long as people are dying for want of $1,000 TB medication, it still might be more cost-effective to save those lives than to extend the lives of relatively rich people in developed countries.)
My guess is that SENS is more cost effective, but I haven’t done the calculating. Does anyone have access to those sorts of figures?
Ballparking:
$1000 buys you 45 extra person-years.
$10 billion buys you 30 extra person-years for a billion people.
Of course that depends on how much you agree with the figures given by de Grey.
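Making the implied arithmetic explicit, here is a quick sketch in Python; the inputs are just the ballpark figures above, with the SENS numbers being de Grey’s scenario rather than established results:

```python
# Back-of-the-envelope cost per extra person-year, using the rough
# figures quoted above. Both sets of numbers are ballpark assumptions.

tb_cost = 1_000      # dollars to cure one otherwise fatal case of TB
tb_years = 45        # extra person-years gained per cure (rough)

sens_cost = 10e9     # de Grey's scenario: $10 billion of funding
sens_years = 30      # extra years per person
sens_people = 1e9    # for a billion people

print(tb_cost / tb_years)                      # ~$22 per person-year
print(sens_cost / (sens_years * sens_people))  # ~$0.33 per person-year
```

On those inputs SENS comes out nearly two orders of magnitude cheaper per person-year, which is presumably the point of the ballpark; the conclusion is only as strong as de Grey’s figures.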
I don’t think he is, if the point is to establish that “lack of FAI could at some point lead to Earth’s destruction” isn’t an unconditionally applicable argument.
That’s an easy prediction for you to make. ;)
Well, I don’t expect that my brother will see AGI in his lifetime, either.
I am curious: are you very old or suffering from a fatal disease? I am 25 and healthy, so “lifetime” probably means something different to me…
It’s an ironic remark about my depression. I’m 27 and physically healthy.
That would make my winking outright cruel! No, I’m referring to the general problem of betting against the success of the person with whom you are making the bet. In CronoDAS’s case the threshold for a self-sabotage outcome is somewhat reduced by his expressed suicidal inclinations.
See my reply to Lucas.
Edit: Also, I’m sympathetic to your skepticism re: SIAI as the best charity.
I think that is precisely the effect this paper is aiming at. Let’s see when the paper comes out, and let’s see how persuasive it is.
I think I need to clarify here.
I am personally convinced (I am a one-time donor myself), but the optimal-charity argument in favour of Friendly AI research and development (which will be fully developed in this paper) is something I can use with my friends. They are pretty much the practical type and will definitely respond to an argument about more bang for their buck and about where their marginal rupee of charity should go.
There are inferential gaps. And when I, a known sci-fi fan, present the argument, I get all sorts of looks. If I had a peer-reviewed paper to show them, that would work nicely in my favour.
Sounds like a good idea to me.
I believe the answer is a combination of two things: SIAI and FHI aren’t on their list of charities to evaluate, and their methodology depends heavily on the quality of available information and on actual evidence that the charity is working.
Sure. But if GiveWell isn’t going to do it, then someone should. Are their budgets public? So many people here are skeptical of regular charities; what evidence is there that these charities are different?
I don’t think they publish a full budget, but there is a breakdown of what the current fundraising drive is for: http://singinst.org/grants/challenge#grantproposals
Could you explain why? Do you believe that SIAI/FHI aren’t accomplishing what they set out to do? Do you discount future lives? Something else?
I don’t expect Eliezer and co. to succeed, if you define “success” as actually building a transhuman Friendly AI before Eliezer is either cryopreserved or suffers information-theoretic death. My “wild guess” at the earliest plausible date for AGI of any kind is 2100.
What do you think you know and how do you think you know it?
I’m guessing based on several factors:
1) The past failure of AGI research to deliver progress.
2) The apparent difficulty of the problem. We don’t know how to do it, and we don’t know what we would need to know before we can know how to do it. Or, at least, I don’t.
3) My impressions of the speed of scientific progress in general. For example, the time between “new discovery” and “marketable product” in medicine and biotechnology is about 30 years.
4) My impressions of the speed of progress in mathematics, in which important unsolved problems often stay unsolved for centuries. It took over 300 years to prove Fermat’s Last Theorem, and the formal mathematics of computation is less than a century old; Alan Turing described the Turing Machine in 1937.
5) The difficulty of computer programming in general. People are bad at programming.
Do you also evaluate the chances of WBE as being vanishingly slim over the next century?
Actually, no, but I also expect that it’ll be around for quite a while before running a whole brain emulation becomes cheaper than hiring a human engineer. I don’t expect a particularly fast em transition; it took many years for portable telephones to go from something that cost thousands of dollars and went in your car to the cell phones that everyone uses today.
The Singularity was created by Nikola Tesla and Thomas Edison, and ended some time around 1920. Get used to it. ;)
So you expect that WBE will become possible before cheap supercomputers?
You might like to quantify “cheap” and “super”.
See reply to CronoDAS below.
Even at Moore’s Law speeds, simulating 10^11 neurons, 10^11 glial cells, 10^15 synaptic connections, and concentrations of various neurotransmitters and other chemicals in real time or faster-than-real time is going to be expensive for a long time before it becomes cheap.
Not necessarily. If a human brain with no software tricks requires 10^20 CPS (a very high estimate), then (according to Kurzweil; take with a grain of salt) the computational capacity will be there by ~2040. However, it’s certainly possible that we don’t get the software until 2050, at which point anyone with a couple hundred dollars can run one.
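For what it’s worth, the ~2040 figure can be sanity-checked with a simple doubling-time projection. A sketch, assuming a ~2009 petaflop-class machine (~10^15 CPS) as the baseline and the 10^20 CPS estimate quoted above; all of these inputs are assumptions:

```python
import math

# Project when raw capacity reaches the quoted 1e20 CPS brain estimate,
# starting from an assumed ~1e15 CPS (petaflop-class) machine in 2009
# and Moore's-Law-style doubling every 1.5 to 2 years.
start_year, start_cps, target_cps = 2009, 1e15, 1e20
doublings = math.log2(target_cps / start_cps)  # ~16.6 doublings needed

for years_per_doubling in (1.5, 2.0):
    print(round(start_year + doublings * years_per_doubling))  # 2034, 2042
```

That brackets Kurzweil’s ~2040, and the uncertainty is dominated by the 10^20 CPS estimate itself, not by the doubling time.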
Depends on which details actually need to be simulated. I suspect that most intracellular activity can be neglected or replaced with some simple rules on when a cell divides, adds a synapse, etc.
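To illustrate the kind of abstraction meant here, a minimal sketch: a leaky integrate-and-fire point neuron, where a single membrane-potential variable plus a threshold-and-reset rule stands in for all of the intracellular chemistry. The parameters are illustrative only, not a claim about what WBE would actually require:

```python
# Leaky integrate-and-fire point neuron: one state variable and a
# threshold/reset rule replace detailed intracellular simulation.
def simulate_lif(currents, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e8):
    v, spike_times = v_rest, []
    for step, i_in in enumerate(currents):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau  # leaky integration
        if v >= v_thresh:             # threshold crossing counts as a spike
            spike_times.append(step * dt)
            v = v_reset               # reset; ion-channel detail is ignored
    return spike_times

# 100 ms of constant 0.3 nA input yields a regular spike train
print(simulate_lif([0.3e-9] * 1000))
```

Whether rules this coarse preserve the behaviour that actually matters is exactly the open question here.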
For the record, this is something I don’t have much confidence in—WBE requires a sufficiently detailed brain scan, computers of sufficient processing power to run the simulation, and enough knowledge of brains on the microscopic level to program a simulation and understand the output of the simulation. I do not know which will turn out to be the bottleneck in the process.
It looks like “enough knowledge of brains on the microscopic level to program a simulation” might be the limiting factor.
In which case, we have a hardware overhang and an explosive em transition.
Most technological developments seem to go from “We don’t know how to do this at all” to “We know how to do this, but actually doing it costs a fortune” to “We know how to do this at an affordable price.” WBE could be an exception, though, and completely skip over the second stage.
I disagree, but we probably have different estimates as to just how effective DNA modification and/or intelligence enhancing drugs are going to be in the future. I don’t think Eliezer is going to make all that big of a dent in the FAI problem until he becomes more intelligent, and it’s hard to estimate how much faster that will make him. I think I can say that intelligence enhancement could turn an impossible problem into a possible problem. It also means that there will be many more people out there capable of making meaningful contributions to the FAI problem.