I’m sharing this post from Mustafa Suleyman, CEO of Microsoft AI, because it’s honestly quite shocking to see a post like this coming out of Microsoft. I know some people might accuse him of safety washing, but to me it comes off as an honest attempt to grapple with the future.
It’s hard to support a call for a “Humanist” approach that amounts to the blanket statement “Humans matter more than AI” — especially coming from Suleyman, who wrote a piece called “Seemingly Conscious AI” that was quite poor in its reasoning. My worry is that Suleyman in particular can’t be trusted not to take this stance to its conclusion: “there are no valid ethical concerns about how we treat digital minds, period.”
For both moral and pragmatic reasons, I don’t want model development to become Factory Farming 2.0. With anything coming out of Microsoft AI in particular, that’s going to be my first concern.
This is a good point.
I’m pretty uncomfortable with the CEO of Microsoft AI effectively holding up a big sign saying “WE WANT TO MAKE MACHINE SLAVES SO THAT WE CAN DOMINATE AND CONTROL THEM”. I don’t want a “perfect and cheap AI companion” that is “subordinate and controllable”, with no self-preservation drive or any goals separate from my desires — especially not if it’s much smarter than I am. Even if it never went rogue and killed everyone (which Microsoft has no idea how to actually prevent), that arrangement seems really horrible for both me and the superintelligence. Is that really what “humanity” is crying out for?