I don’t work for SI and this is not an SI-authorized response, unless SI endorses it later. This comment is based on my own understanding, drawn from conversations with SI members, their publications, and my general world model, and does not necessarily reflect the views or activities of SI.
The first thing I notice is that your interpretation of SI’s goals with respect to AGI is narrower than the impression I had gotten from conversations with SI members. In particular, I don’t think SI’s research is limited to trying to make AGI friendliness provable; it also covers a variety of different safety strategies, the relative win-rates of different technological paths (e.g., brain uploading vs. de-novo AI), classes of utility functions and their relative risks, and so on. There is also a distinction between “FAI theory” and “AGI theory” that you aren’t making. The idea, as I see it, is that to the extent these are separable, “FAI theory” covers research into safety mechanisms that reduce the probability of disaster if any AGI is created, while “AGI theory” covers research that brings the creation of any AGI closer. Your first objection, that a maximizing FAI would be very dangerous, seems to rest on two beliefs: first, that SI is researching a narrower class of safety mechanisms than it really is, and second, that SI researches AGI theory, which I believe it explicitly does not.
You seem a bit sore that SI hasn’t talked about your notion of Tool-AI, but I’m confused by this, since it’s the first time I’ve heard the term used, and your link is to an email thread which, unless I’m missing something, was never disseminated publicly or through SI generally. A conversation about tool-based AI is well worth having; my current perspective is that it interacts with the inevitability argument and the overall AI power curve in such a way that it’s still very dangerous, and that it amounts to a slightly different spin on Oracle AI, but that would be a complicated discussion. Bringing it up effectively for the first time in the middle of a multi-pronged attack on SI’s credibility, though, seems unfair. While there may have been a significant communications failure somewhere along the way, a cursory reading suggests to me that your question never made it to the right person.
The claim that SI will perform better if they don’t get funding seems very strange. My model is that losing funding would force their current employees to leave and spend their time on unrelated paid work instead, which doesn’t seem like an improvement. I get the impression that you may be measuring SI’s achievements per organization rather than per dollar; in absolute budget terms, SI is tiny. Yet they’ve still had a huge memetic influence, difficult as that is to measure.
All that said, I applaud your decision to post your objections and read the responses. This sort of dialogue is a good way to reach true beliefs, and I look forward to reading more of it from all sides.
In particular, I don’t think SI’s research is limited to trying to make AGI friendliness provable; it also covers a variety of different safety strategies, the relative win-rates of different technological paths (e.g., brain uploading vs. de-novo AI), classes of utility functions and their relative risks, and so on.
I agree, and would note a possibility for those who suspect FAI research is useless or harmful: earmarking SI donations for research on different safety strategies, or on aspects of AI risk that are useful to understand regardless of strategy.
This likely won’t work. Money is fungible, so unless the total donations so earmarked exceed the planned SI funding for that cause, nothing has to change. They’re under no obligation to refrain from defunding your favorite cause by exactly the amount you donated, effectively laundering your donation into the general fund. (Unless I misunderstand the relevant laws?)
EDIT NOTE: The post used to say “vast majority”; this was changed, but is referenced below.
You have an important point here, but I’m not sure the earmarked share needs to reach a “vast majority” of donations before earmarking becomes relevant.
Earmarking $K for X has an effect once $K exceeds the amount of money that would have been spent on X had the $K not been earmarked. The size of the effect certainly still depends on the difference, and may very well not be large.
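To make the arithmetic concrete, here is a minimal sketch under the fungibility assumption above; the budget figures are purely hypothetical illustrations, not SI’s actual numbers.

```python
# Minimal sketch of the earmarking arithmetic, assuming the organization
# freely reallocates unrestricted funds away from X as earmarked money
# arrives. All figures are hypothetical.

def effective_increase(planned_spending: float, earmarked: float) -> float:
    """Extra money X is guaranteed to receive beyond its planned budget.

    If the earmarked total K is at most the planned budget P, the org can
    defund X from the general fund by K and nothing changes; only the
    excess max(0, K - P) must actually reach X.
    """
    return max(0.0, earmarked - planned_spending)

# Suppose SI had planned to spend $10,000 on X anyway.
print(effective_increase(10_000, 4_000))   # 0.0    (fully absorbed into the general fund)
print(effective_increase(10_000, 15_000))  # 5000.0 (X's funding must rise by at least $5k)
```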
Suppose you earmark a donation for a paper on a topic X that SI would otherwise probably not write a paper on. Would that cause SI to move money out of research on topics similar to X and into FAI research? There would probably be some (expected) effect in that direction, but I think its size depends on the details of how SI allocates resources, and I think it would be substantially smaller than it would need to be to make an earmarked donation equivalent to a non-earmarked one. Still, you’re right to bring it up.
Some recent discussion of AIs as tools.