The discussion of how conjunctive SIAI’s vision is seems unclear to me. Luke appears to have responded to only part of what I think Holden is likely to have meant.
Some assumptions whose conjunction seems important to me (in order of decreasing importance):
1) The extent to which AGI will consist of one entity taking over the world versus many diverse entities with limited ability to dominate the others.
2) The size of the team required to build the first AGI (if it requires thousands of people, a nonprofit is unlikely to acquire the necessary resources; if it can be done by one person, I wouldn’t expect that person to work with SIAI [1]).
3) The degree to which concepts such as “friendly” or “humane” can be made clear enough to be implemented in software.
4) The feasibility of creating an AGI whose goals can be explicitly programmed before AGIs with messier goals become dominant. We have an example of intelligence with messy goals, which gives us some clues about how hard it is to create one. We have no comparable way of getting an outside view of the time and effort required for an intelligence with clean goals.
It seems reasonable to infer from this that SIAI has a greater than 90% chance of becoming irrelevant. But for an existential risk organization, a 90% chance of being irrelevant should seem like a very weak argument against it.
I believe that the creation of CFAR is a serious attempt to bypass problems associated with assumption 2, and my initial impression of CFAR is that it (but not SIAI) has a good claim to being the most valuable charity.
[1] I believe an analogy to Xanadu is useful, especially in the unlikely event that an AGI can be built by a single person. The creation of the world wide web was somewhat predictable and predicted, and for a long time Xanadu stood out as the organization which had given the most thought to how the web should be implemented. I see many similarities between the people at Xanadu and the people at SIAI in terms of vision and intelligence (although people at SIAI seem more willing to alter their beliefs). Yet if Tim Berners-Lee had joined Xanadu, he wouldn’t have created the web. Two of the reasons are that the proto-transhumanist culture with which Xanadu was associated was reluctant to question the belief that the creators of the web needed to charge money for their product, and the belief that the web should ensure that authors were paid for their work. I failed to question those beliefs in 1990. I haven’t seen much evidence that either I or SIAI are much better today at doing the equivalent of identifying those as assumptions that were important to question.