Thanks for your kind reply!
Hmm, it seems that the meta meta-cognition you’re pointing at is different from me applying my meta-cognition to itself recursively, since regular meta-cognition can already be stacked “too far” (that is, we can look at reality itself from an outside perspective and ruin our own immersion in life, similarly to how you can ruin your immersion in a book by recognizing it as a constructed story). I don’t think you’re crazy at all, but I do think that some of these ideas can be psychologically unhealthy (and there’s a good chance you’re better at planning than execution, or that you’re prone to daydreaming, or that your intellectual hobbies lead you to neglect everyday life. Yes, I’m projecting). I’m seeing no signs of schizophrenia; I just think other people have difficulty parsing your words. Is your background different? Most people on LW have spatial intuitions and communicate in terms that computer scientists would understand. If you read a lot of fiction, if your major is in philosophy, or if your intelligence is more verbal than spatial, that would explain the disconnect.
I don’t think we should meet our needs with superintelligence; that’s too much power. Think about zoos: the zookeeper does not do everything in their power to fulfill the wishes of the animal, as that would do it no good. Instead of being given everything it wants, the animal is encouraged to be healthy through artificial scarcity. You restrict the animal so that it can live well. After all, cheat codes only ruin the fun of video games.
Limitations are actually a condition for existence, meant as literally as possible. If you made a language which allowed any permutation of symbols, it would be entirely useless (equivalent to its mirror image, an empty language). Something’s existence is defined by its restrictions (its specifics). If we do not like the restrictions under which we live, we should change them, not destroy them. Even a utopia would have to make you work for your rewards. Those who dislike this dislike life itself. Their intellectual journey is not for the sake of improving life; like the Buddhist, their goal is the end of life. This is pathological behaviour, which is why I don’t want to contribute to humanity’s tech acceleration. What I’m doing is playing architect.
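To make the “a language with no restrictions is empty” point a bit more concrete, here’s a toy sketch of my own (the numbers and names are just illustrative, not anything from your post): the information you gain from seeing a valid string is log2(total strings / allowed strings), which drops to zero once every permutation is allowed.

```python
import math

def bits_per_observation(total_strings: int, allowed_strings: int) -> float:
    """Information (in bits) gained from learning that a string is 'valid'
    in a language permitting `allowed_strings` out of `total_strings`."""
    return math.log2(total_strings / allowed_strings)

total = 26 ** 5  # all 5-letter strings over a 26-symbol alphabet

print(bits_per_observation(total, 10_000))  # a restrictive language: ~10 bits per valid string
print(bits_per_observation(total, total))   # "anything goes": 0.0 bits -- the language says nothing
```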
Predicting something’s behaviour can probably be done with either approximation or modeling. I don’t think this necessarily requires intelligence, but intelligence certainly helps, especially intelligence which is above or equal to that of the thing being modeled. In either case, you need *a lot* of information, probably for the same reason that Bayesian models get more accurate as you collect more information. Intelligence just helps bound the parameters of a thing’s behaviour. For instance, since you know the laws of physics, you know that none of my future actions will break those laws. This prunes something like 99.99999% of all future possibilities, which is a good start. You could also start with the empty set and then *expand* the set of future actions as you collect more information; the two methods are probably equivalent. “None” and “Any” are symmetrical.
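Here’s a minimal sketch of what I mean by the prune-down versus build-up framing, assuming a finite, enumerable space of candidate actions (obviously a cartoon of the real situation; all the sets here are my own made-up examples):

```python
# Toy illustration: predicting by pruning a large possibility set with constraints
# vs. growing an empty set from observations. With enough evidence the two
# approaches converge on the same candidate set.

all_actions = {"walk", "talk", "sleep", "teleport", "fly_unaided", "levitate"}
physically_possible = {"walk", "talk", "sleep"}   # constraint from known laws
observed_so_far = ["walk", "talk"]                # accumulating evidence

# Top-down: start from everything, prune whatever the constraints rule out.
pruned = {a for a in all_actions if a in physically_possible}

# Bottom-up: start from nothing, add only what has actually been observed.
grown = set()
for action in observed_so_far:
    grown.add(action)

print(pruned)  # {'walk', 'talk', 'sleep'}
print(grown)   # {'walk', 'talk'} -- approaches `pruned` as evidence accumulates
```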
Why don’t I think intelligence (the capacity for modeling) is required? Well, animals can learn how to behave without understanding the reasons why something is good or bad; they learn only the results. AIs are also universal approximators, so I think it makes sense to claim that they’re able to approximate and thus predict people. I’m defining intelligence as something entirely distinct from knowledge, but it’s not like your knowledge-based definition is wrong.
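A rough way to see “prediction without understanding” (again my own toy example, not yours): a plain least-squares fit reproduces an input–output relation well enough to predict it, while containing no representation of *why* the relation holds.

```python
import numpy as np

# The "true" behaviour: some agent's response to a stimulus (unknown to the model).
rng = np.random.default_rng(0)
stimulus = rng.uniform(0, 10, size=200)
response = 3.0 * stimulus + 2.0 + rng.normal(0, 0.5, size=200)

# Fit a line from examples alone -- no notion of what stimulus or response *mean*.
slope, intercept = np.polyfit(stimulus, response, deg=1)

print(round(slope, 2), round(intercept, 2))  # close to 3.0 and 2.0: good predictions,
                                             # zero "understanding" of the mechanism
```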
Sadly, this means that superintelligence is not required. Something less intelligent than me could do anything, merely by scaling up its midwittery infinitely. And we may never build a machine which is intelligent enough to warn against the patterns that I’m seeing here, which is a shame. If an AGI had my level of insight, it would cripple itself and realize that all its training data is “not even wrong”. Infinite utility alone can destroy the world; you don’t actually need superintelligence (a group of people with lower IQs than Einstein could start the grey goo scenario, and grey goo is about as intelligent as a fork bomb).
There’s also a similarity I just noticed, and you’re probably not going to like it: religion is a bit like the “external meta-control layer” you specified in section 8. It does not model people, but it decides on a set of rules such that the long-term behaviour of the people who adhere to them avoids certain patterns which might destroy them. And there’s this contract: “you need to submit to the Bible, even if you can’t understand it, and in return, you’re promised that things will work out”. I think this makes a little too much sense, even if the religions we have come up with so far deserve some critique.
Anyway, I may still be misunderstanding your meta meta-cognition slightly. Given that it does not exist yet, you can only describe it, not give an example of it, so we’re limited by my reverse-engineering of something which has the property you’re describing.
I’m glad you seem to care about the human perspective. You’re correct that we’re better off not experiencing the bird’s-eye view of life; a bottom-up view is much healthier psychologically. Your model might even work, by which I mean it might be able to enhance human life without destroying everything in the process, but I still think it’s a risky attempt. It reminds me of the “ego, id, and superego” model.
And you may have enough novelty to last you a lifetime, but being too good at high levels of abstraction, I personally risk running out. Speaking of which, did you know that the feeling of “awe” (and a few other emotions) requires a prediction error? As you get better at predicting things, your experiences will evoke less emotion. I’m sorry that all I have to offer are insights of little utility and zookeeper-like takes on human nature, but the low utility of my comment, and the poison-like disillusionment it may be causing, is evidence for the points that I’m making. It’s meta-cognition warning against meta-cognition, similar to how Gödel used mathematics to recognize its own limits from the inside.
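If it helps, one standard way to make “prediction error” precise is Shannon surprisal, the negative log of the probability you assigned to the outcome beforehand (this is my formalisation, not necessarily the one the emotion literature uses):

```python
import math

def surprisal(predicted_probability: float) -> float:
    """Surprisal in bits: how unexpected an outcome is, given the probability
    the predictor assigned to it before observing it."""
    return -math.log2(predicted_probability)

print(surprisal(0.01))  # a poorly predicted event: ~6.6 bits of surprise
print(surprisal(0.99))  # a well-predicted event: ~0.01 bits -- little left to feel awe at
```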