What sort of evidence would you expect to see if a bureaucracy were a “Strong AI” that you would expect not to see if it were not a “Strong AI”?
I am not asking you to come up with an “avalanche of empirical evidence”. I am asking you to distinguish your claim from a floating belief, belief-as-attire, or the like. I am asking what it would mean, in anticipated experiences, for this idea to be true.
I would expect a bureaucracy to be capable of self-reflection and a self-identity that exists independently of its constituent (human) decision-making modules. I would expect it to have a kind of “team spirit” or “internal integrity” that defines how it goes about solving problems, and which artificially constrains its decision tree away from “purely optimal” and towards “maintaining (my) personal identity”.
In other words, I would expect the bureaucracy to have an identifiable “personality”.
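To make the “constrained decision tree” idea a bit more concrete, here is a toy sketch (every name and number in it is illustrative, not anything claimed above): an agent that scores each option by raw utility minus a penalty for deviating from its established “personality”. With the penalty weight at zero it is the “purely optimal” agent; with a positive weight it may pick a worse but more in-character option.

```python
# Toy sketch: an agent whose choices trade raw utility against
# preserving an established "identity". All option names, utilities,
# and deviation scores are invented for illustration.

def choose(options, identity_weight=0.5):
    """Pick the option maximizing utility minus an identity-deviation penalty.

    options: list of (name, utility, identity_deviation) tuples, where
    identity_deviation measures how far the option strays from the
    agent's established way of doing things (0 = perfectly in character).
    """
    def score(option):
        _, utility, deviation = option
        return utility - identity_weight * deviation

    return max(options, key=score)[0]

options = [
    ("restructure", 10.0, 8.0),      # best raw outcome, wildly out of character
    ("follow_procedure", 6.0, 0.0),  # worse outcome, perfectly in character
]
print(choose(options, identity_weight=0.0))  # restructure
print(choose(options, identity_weight=1.0))  # follow_procedure
```

The point of the sketch is only that an “identity-preserving” chooser is behaviorally distinguishable from a pure optimizer, which is the kind of anticipated-experience difference being asked for.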
This sounds very little like something I would expect someone who knew what a strong AI was but had never observed a bureaucracy to come up with as a way to determine whether bureaucracies are strong AIs or non-strong AIs.
Not everything that is capable of self-reflection and self-identity is a strong AI; indeed I think it’s reasonable to say that out of the sample of observed things capable of self-reflection and self-identity, none of them are strong AIs.
Bureaucracies don’t even fulfill the basic strong AI criterion of being smarter than a human being. They may perform better than an individual in certain applications, but then, so can weak AI, and bureaucracies often engage in behavior which would be regarded as insane if engaged in by an individual with the same goal.
That’s very plausible; all of my AI research has been self-directed and self-taught, entirely outside of academia. It is highly probable that I have some very fundamental misconceptions about what it is I think I’m doing.
As I mentioned in the original post, I fully admit that I’m likely wrong—but presenting it in a “comment if you like” format to people who are far more likely than me to know seemed like the best way to challenge my assumption, without inconveniencing anyone who might actually have something more important to do than schooling a noob.
I’m not sure how to tell what sorts of groups of humans have self-reflection. For animals, including human infants, we can use the mirror test. How about for bureaucracies?
I’m not sure whether “team spirit” might be a projection in the minds of members or observers; or in particular a sort of belief-as-cheering for the psychological benefit of members (and opponents). How would we tell?
Likewise, how would we inquire into a bureaucracy’s decision tree? I don’t know how to ask a corporation to play chess.
Bald assertion: the fact that “team spirit” might be a mere projection in the minds of members is as irrelevant to whether it causes self-reflection as the fact that “self-awareness” might be a mere consequence of synapse patterns.
Just because we’re more intimately familiar with what “team spirit” feels like from the inside than with what having your axons wired up to someone else’s dendrites feels like doesn’t mean that “team spirit” isn’t part of an actual consciousness-generating process.
“You can’t prove it’s not!” arguments...?
Recommended reading: the Mysterious Answers to Mysterious Questions sequence.
No, I was presenting a potential counter to the idea that “I’m not sure whether ‘team spirit’ might be a projection in the minds of members or observers”.
It might or might not be a projection in the minds of observers, but I don’t think whether it is or not is relevant to the questions I’m asking, in the same sense that “are we conscious because we have a homunculus-soul inside of us, or because neurons give rise to consciousness?” isn’t relevant to the question “are we conscious?”
We know we are conscious as a bald fact, and we accept that other humans are conscious whenever we reject solipsism; we happen to be finding out the manner in which we are conscious as a result of our scientific curiosity.
But accepting an entity as “conscious” / “self-aware” / “sapient” does not require that we understand the mechanisms that generate its behavior; only that we recognize that it has behavior that fits certain criteria.