Hasn’t Eliezer Yudkowsky largely failed at solving alignment and at getting others to solve alignment?
And wasn’t he largely responsible for many people noticing that AGI is possible and potentially highly fruitful?
Why would a world where he’s the median person be more likely to solve alignment?
In a world where the median IQ is 143, the people at +3σ are at 188. They might succeed where the median fails.
I don’t think a lack of IQ is the reason we’ve been failing to develop AI sensibly. Rather, it’s a lack of good incentive design.
Making an AI recklessly is currently much more profitable than not doing so, which, imo, shows a flaw in the efforts that have gone into making AI safe: not accepting that some people have very different mindsets/beliefs/core values, and not figuring out a structure/argument that would incentivize people across a broad range of mindsets.