Is this analogous to the stance-dependency of agents and intelligence?
It is analogous, to some extent; I look into some aspects of Daniel Dennett's classification here: https://www.youtube.com/watch?v=1M9CvESSeVc
I also had a more focused attempt at defining AI wireheading here: https://www.lesswrong.com/posts/vXzM5L6njDZSf4Ftk/defining-ai-wireheading
I think you’ve already seen that?