[Question] In AI risk, what is the base model of the AI?

That phrasing isn't ideal, but it's close enough to keep the title/question simple.

When I read discussions and comments about AI risk, I find myself thinking that there might be two (unstated?) base models in play. I suspect that when people talk about what the “AI wants”, or go about applying utility functions, they are actually using humans as a primitive model from which the AI is derived.

Similarly, when I hear talk about extinction potential, my impression is that the underlying model is a biological one: evolution and competition within environmental niches.

Is this something anyone even talks about? If so, what is the prevailing view? Are there specific papers or comments I can look at? If not, does this sound like a reasonable inference about the implicit assumptions/maps in this area?
