If an AI has human interests as its main goal, it is already friendly. The question was whether intelligence on its own is enough to align it with human interests, which seems very unlikely. If the AI actually has cooperation with humans, or the fulfillment of some human wish, as its goal, it will be able to use its intelligence to better fulfill those wishes with all available context. But getting the AI to operate with that goal in the first place is the difficult part, I believe.