Your division of predictive ability into intelligence and wisdom is very artificial. People are not magic, they’re just chaotic. They are not fundamentally different from other complex and chaotic systems. There is no reason to expect that raising general predictive ability wouldn’t help predicting them.
If the superintelligent agent lacked data, it would realize this and then go collect some. The situation is only dangerous if the agent decides to take drastic action without evaluating its own accuracy. But if the agent is too stupid to evaluate its own accuracy, it’s probably too stupid to implement the drastic action in the first place. And if the agent is able to evaluate itself, but ignores the result, that’s more a problem of evil than a lack of wisdom.
if you really had a method for say, hypothesis generation, this would actually imply logical omniscience, and would basically allow us to create full AGI, RIGHT NOW.
This is correct. Arguments against Bayesianism ultimately boil down to “it’s not enough for AGI”. And they are stupid, because nobody has ever said that it was. But then arguments in favor of Bayesianism boil down to “it’s True”. And they are stupid, because “True” is not quite the same as “useful”. I think this whole debate is pointless, as there is very little the two sides disagree about, besides some wording.
Having said that, I think the question “how to reason well” should be seen as equivalent to “how to build an AGI”, which probably places me on the anti-Bayesian side.
The three examples deal with different kinds of things.
Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don’t, they should be physically stored somehow. In that sense they are the most real of the three.
Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find that some subject who claims to have the skill performs slower than another who claims not to. Ultimately, “I have a skill to do X” means “I believe I’m better than most at X”, and while that is a belief just as good as the previous one, it’s a little less direct.
Finally, being conscious doesn’t mean anything at all. It has no relationship to reality. At best, “X is conscious” means “X has behaviors in some sense similar to a human’s”. If a computationalist answers “no” to the first two questions, and “yes” to the last one, they’re not being inconsistent, they merely accepted that the usual concept of consciousness is entirely bullshit, and replaced it with something more real. That’s, by the way, similar to what compatibilists do with free will.
However you know that only the first person is really suffering <...>
Let’s pause right there. How do you know it? Obviously, you know it by observing evidence for past differences in behavior. This, of course, includes being told by a third party that the rooms are different and other forms of indirect observations.
<...> an AI that ignores all such differences in principle <...>
If the AI has observed evidence for the difference between the rooms, then it will take it into account. If the AI has not observed any difference, then it will not. The word “ignore” is completely inappropriate here. You can’t ignore something you can’t know. Its usage here suggests that you expect there to be some type of evidence that you would accept but the AI wouldn’t. Is that true? Maybe you expect the AI to have no long-term memory? Or maybe you think it wouldn’t trust what people tell it?
You may know about being human, but how does that help you with the problem you suggested? You may know that some people can fake screams of pain, but as long as you don’t know which of the two people is really in pain, the moral action is to treat them both the same. What else can you do? Guess?
The knowledge that “only the first person is really suffering” has very little to do with your internal experience, it comes entirely from real observation and it is completely sufficient to choose the moral action.
Seeing red isn’t the same as claiming to see red
A record player looping the words “I see red” is very different from how humans see, both internally and behaviorally. A robot which takes a picture, finds the most common pixel color, and if that’s red, plays the same “I see red” sound, is still in some ways different, but a lot less so. And if someone wanted to call this second robot conscious, as far as color is concerned, there would be no problem with that.
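For concreteness, here is a minimal sketch of that second robot in Python. The pixel-counting rule, the crude red threshold, and the function names are all my own illustrative choices, not anything specified above; the point is only that a very small amount of machinery already moves the robot closer to the human case than the record player.

```python
# Minimal sketch of the "second robot": classify an image by its most
# common pixel color and, if that color is red, emit the same fixed
# utterance a record player would loop. Purely illustrative.
from collections import Counter

def most_common_color(pixels):
    """pixels: iterable of (r, g, b) tuples."""
    return Counter(pixels).most_common(1)[0][0]

def is_reddish(color):
    r, g, b = color
    return r > 128 and r > g + 50 and r > b + 50  # crude, arbitrary threshold

def robot_report(pixels):
    if is_reddish(most_common_color(pixels)):
        return "I see red"  # same canned output as the record player
    return ""

# Example: a mostly-red image triggers the canned response.
print(robot_report([(200, 30, 30)] * 90 + [(0, 0, 255)] * 10))
```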
You may feel that pain is special, and that if we recognized that a robot which says “ouch” when pushed feels pain, that would be in some sense bad. But it wouldn’t. We already recognize that different agents can have equally valid experiences of pain that aren’t equally important to us (e.g. torturing rats vs. humans, or foreigners vs. family). This is not a new problem, and suggesting that some agents have a magical invisible property that makes their experiences important is not a good solution.
it is reasonable to recognize that a robot that is programmed to say “ouch” when pushed feels pain
No, I’m saying that “feels pain” is not a meaningful category. Two people could disagree about whether this robot feels pain, but then agree completely about how the robot should be treated. My example of rats was meant to point out that “feels pain” is very different from “deserves human rights”.
no one on this thread has suggested a supernatural explanation for it
No one has suggested any explanation for it at all. And I do use “magical” in a loose sense.
You say that like it’s a good thing.
No, I’m not personally in favor of changing definitions of broken words. It leads to stupid arguments. But people do that.
If you look for consciousness from the outside, you’ll find nothing, or you’ll find behaviour. That’s because consciousness is on the inside, is about subjectivity.
It would be preferable to find consciousness in the real world. Either reflected in behavior or in the physical structure of the brain. I’m under the impression that cousin_it believes you can have the latter without the former. I say you must have both. Are you saying you don’t need either? That you could have two physically identical agents, one conscious, the other not?
“very little” is a polite way to say “nothing”. It makes sense, especially next to the vague “has to do with” construct. So there is no slope here.
To clarify, are you disagreeing with me?
Move a human from one internal state to another that they prefer. “Preference” is not without its own complications, but it’s a lot more general than “pain”.
To be clear, the concept of pain, when applied to humans, mammals, and possibly most animals, can be meaningful. It’s only a problem when we ask whether robots feel pain.
No, “it’s preferable”, same as “you should”, is fine when there is a goal specified. e.g. “it’s preferable to do X, if you want Y”. Here, the goal is implicit—“not to have stupid beliefs”. Hopefully that’s a goal we all share.
By the way, “should” with implicit goals is quite common; you should be able to handle it. (Notice the second “should”. The implicit goal is now “to participate in normal human communication”.)
Meaning the world of exteriors?
No, meaning the material, physical world. I’m glad you agree it’s there. Frankly, I haven’t the slightest clue what “exterior” means. Did you draw an arbitrary wall around your brain, and decide that everything that happens on one side is interior, and everything that happens on the other is exterior? I’m sure you didn’t. But I’d rather not answer your other points when I have no clue what it is that we disagree about.
because you can tell whether someone is conscious with an FMRI scan.
No, you can tell if their brain is active. It’s fine to define “consciousness” = “human brain activity”, but that doesn’t generalize well.
Here’s what I think happened.
You observed something interesting happening in your brain, and you labeled it “consciousness”.
You observed that other humans are similar to you both in structure and in behavior, so you deduced that the same interesting thing is happening in their brains, and labeled the humans “conscious”.
You observed that a rock is not similar to you in any way, deduced that the same interesting thing is not happening in it, and labeled it “not conscious”.
Then you observed a robot, and you asked “is it conscious?”. If you asked the full question—“are the things happening in the robot similar to the things happening in my brain?”—it would be obvious that you wouldn’t get a yes/no answer. They’re similar in some ways and different in others.
detecting
Read the problem cousin_it posted again: http://lesswrong.com/lw/p7r/steelmanning_the_chinese_room_argument/dvd5
There is no detecting going on. If you’re clever (and have too much free time), you may come up with some ways that internal human experience helps to solve that problem, but nothing significant. That’s why I used “little” instead of “nothing”.
You’ll have to be more specific with your criticism.
There are two ways to look at a meaningless phrase. One is to say that perfectly good words are being used wrong. Another is to say that there is some problem with the words themselves. If we look at, say, “purple is bitter”, the first way is clearly more appropriate. If we look at “robot feels pain”, it’s hard for me to tell which way I prefer.
It’s where you are willing to look, as opposed to where you are not.
That’s hardly a definition. I think it’s you who is begging the question here.
You keep insisting that consciousness can only be found in the behaviour of someone else
I have no idea where you got that. I explicitly state “I say you must have both”, just a couple of posts above.
The state of being aware, or perceiving physical facts or mental concepts; a state of general wakefulness and responsiveness to environment; a functioning sensorium.
Here’s a google result for “medical definition of consciousness”. It is quite close to “brain activity”, dreaming aside. If you extended the definition to non-human agents, any dumb robot would qualify. Did you have some other definition in mind?
Subjective is not the opposite of physical.
If you take a random set of people, they will have various beliefs, and some of those will be more common than others. Calling that an ideology seems unfair. By the way, all beliefs have criticisms, and yet some beliefs are more correct than others.
Also, “it’s likely that some of the beliefs I hold are wrong” is already one rationalist assumption, or at least it should be. What are you adding to that?