I very much doubt that we have enough understanding of human values / preferences / utility functions to say that anything makes the list, in any capacity, without exception.
In this case, I think information is useful as an instrumental value, but not as a terminal value in and of itself. It may lie on the path to terminal values in so many instances (the vast majority), and be such a major part of realizing those values, that a resource-constrained reasoning agent might treat it like a terminal value just to save effort.
I look at it like a genie bottle: nearly anything you want could be satisfied with it, or would be made much easier by its use, but the genie isn’t what you really want.
Well, all agents are resource-constrained. But I get what you mean.