Collective mankind, as a political institution, does not exist. That’s also a very big part of the problem. I am not talking about jokes such as the UN, but effective, well-coordinated, power-wielding institutions that would be able to properly represent mankind in its relationships with and continued (it is hoped...) control over synthetic intelligent beings.
I find the naivety of not even evoking the profit motive here somewhat baffling. What do the incentives look like for Sam and his good people at OpenAI? Would they fare better financially if AI scales rapidly and is widely deployed? Does that matter to them at all? Yes, I know: the non-profit structure, the "oh, we could still cancel all the equity if we felt like it." Pardon me if I'm a bit skeptical. When Microsoft (or Google, for that matter) makes billion-dollar investments, it generally has an ulterior motive, and it's a bit more specific than "let's see if this could help mankind."
So in the end, we're left hoping that the Sams of the world have a long-term survival instinct (after all, if mankind is wiped out, everybody is wiped out, including them, right? Actually, I think an AI in any gradual destruction scenario would very likely, in its wisdom, target the people who know it best first...) that trumps their appetite for money and power. That is a position I would much, much rather not find myself in.
You know something is wrong with a picture when it shows you an industry whose top executives (Altman, Musk...) are literally begging politicians to start regulating them now, so afraid are they of the monsters they are creating, while the politicians remain blissfully unaware (either that, or, for the smarter subset of them, they don't want to sound, in front of the average voter, like crazies who have seen Terminator 2 one time too many).