This sounds very little like something I would expect someone who knew what a strong AI was, but had never observed a bureaucracy, to come up with as a way to determine whether bureaucracies are strong AIs.
Not everything that is capable of self-reflection and self-identity is a strong AI; indeed, I think it’s reasonable to say that, of the observed things capable of self-reflection and self-identity, none are strong AIs.
Bureaucracies don’t even fulfill the basic strong AI criterion of being smarter than a human being. They may outperform an individual in certain applications, but so can weak AI, and bureaucracies often engage in behavior that would be regarded as insane if an individual with the same goal engaged in it.
> This sounds very little like something I would expect someone who knew what a strong AI was, but had never observed a bureaucracy, to come up with as a way to determine whether bureaucracies are strong AIs.
That’s very plausible; my AI knowledge is entirely self-taught and my research self-directed, all of it outside of academia. It is highly probable that I have some very fundamental misconceptions about what it is I think I’m doing.
As I mentioned in the original post, I fully admit that I’m likely wrong, but presenting it in a “comment if you like” format to people far more likely than I am to know seemed like the best way to challenge my assumption without inconveniencing anyone who might actually have something more important to do than schooling a noob.