It’s tough to lend support to a call for a “Humanist” approach whose core is the blanket statement “Humans matter more than AI”, especially coming from Suleyman, whose piece “Seemingly Conscious AI” was quite poor in its reasoning. My worry is that Suleyman in particular can’t be trusted not to take this stance to its conclusion: “there are no valid ethical concerns about how we treat digital minds, period”.
For moral and pragmatic reasons, I don’t want model development to become Factory Farming 2.0. For anything coming out of Microsoft AI in particular, that’s going to be my first concern.
This is a good point.