Given that heuristic AGIs have an advantage in development speed over your approach, how do you plan to deal with the existential risk that these other projects will pose?
And given this dev-speed disadvantage for SI, how is it possible that SI’s future AI design might not only be safer but also have a significant implementation advantage over competitors, as I have heard from SI-ers (if I understood them correctly)?
Given that heuristic AGIs have an advantage in development speed over your approach
Are you asking him to assume this? Because, um, it’s possible to doubt that OpenCog or similar projects will produce interesting results. (Do you mean, projects by people who care about understanding intelligence but not Friendliness?) Given the assumption, one obvious tactic involves education about the dangers of AI.
Yes, I’m asking him to assume that. All other things being equal, a project without a constraint will move faster than a project with a constraint (though 37Signals would say otherwise).
On the other hand, this post does ask about the converse: that SI’s implementation approach will have a dev-speed advantage. That does not make sense to me, but I have heard it from SI-ers, and so asked about it here.
I may have been nitpicking to no purpose, since the chance of someone’s bad idea working exceeds that of any given bad idea working. But I would certainly expect the strategy of ‘understanding the problem’ to produce Event-Horizon-level results faster than ‘do stuff that seems like it might work’. And while we can imagine someone understanding intelligence but not Friendliness, that problem looks easier to solve through outreach and education.
But I would certainly expect the strategy of ‘understanding the problem’ to produce Event-Horizon-level results faster than ‘do stuff that seems like it might work’.
The two are not mutually exclusive. The smarter non-SI teams will most likely try to ‘understand the problem’ as best they can, experimenting and plugging gaps with ‘stuff that seems like it might work’, which they will likely understand to some degree as well.
By doing really hard work way before anyone else has an incentive to do it.
That would be nice, but there is no reason to think it is happening.
In terms of personnel numbers, SI is still very small. Other organizations may quickly become larger with moderate funding, and either SI or the other organizations may have hard-working individuals.
If you mean “work harder,” then yes, SI has some super-smart people, but there are some pretty smart and even super-smart people elsewhere.
Understatement :-)