But isn’t “don’t lose a lot”, for example, a goal by itself?
Rob Dost
I mean, you suppose that the agent should care about possibly caring in the future, but this itself constitutes an ‘ought’ statement.
Yeah, but answer the question “why should an agent care about ‘preparing’?” Then any answer you give will yield another “why this?”, ad infinitum. This chain of “whys” cannot be stopped unless you specify some terminal point, and the moment you do specify such a point, you introduce an “ought” statement.
If I understand you right, you’re basically saying that once we postulate consciousness as some basic, irreducible building block of reality, the confusion related to consciousness will evaporate. Maybe that will help partially, but I don’t think it will solve the problem completely. Why? Even if consciousness is a terminal node in our world-model, this still leaves the question “Which systems in the world are conscious?”. And I’d guess that the current hypotheses answering this question are rather confusing.

We didn’t have the same level of confusion with other models of basic building blocks. For example, with atoms we thought “yup, everything is made of atoms; to build this rock we need these atoms, and for the cat, others”. Then with quantum configurations we think “OK, the universe is one gigantic configuration; the rock is this factor and the cat is another”, and so on, and it doesn’t seem very unintuitive (even if the process of producing these factors is hard, it is known ‘in principle’). But with consciousness we don’t know (even in principle!) how to measure consciousness in any particular system, and that, IMHO, is the important difference.