Well, if we’re talking about real-world analogies to the AI box test, I have a minor caveat: sometimes, on Less Wrong, I see what seems to me to be the implied message that the more intelligent not only have an advantage over the less intelligent, but that the more intelligent can ipso facto completely control the less intelligent, at least in the context of hypotheticals and puzzles. This may be a wise assumption to make when we’re dealing with a self-improving AGI, or with Eliezer in the context of his famous tests. But in my own experience, I find it difficult to control some minds that are on some level weaker than my own. Think of training cats, or calming down a screaming toddler.
I also suspect that, without too much trouble, I could go to the seedier side of any big city or a sleazy traveling carnival and find a fair number of people who might not have anything like my academic credentials, but who would be able to con me out of my money if I were foolish enough to listen to them. Is that different from playing the AI box game with Eliezer? I don’t know, because we don’t have transcripts of the two games.
Who here would be confident in his or her ability to win the AI box game against an experienced professional grifter of average intelligence cast in the AI role? For that matter, if such a game could be arranged, who—if cast in the role of the AI—would be confident in his or her ability to win the AI box game against a cranky toddler?
Any smart person who really knows how to control the actions of less intelligent people could potentially make a fortune advising corrections facilities, juvenile halls, and schools with severe chronic discipline problems.
I’m in general agreement with your post, but being good at X is quite different from being good at teaching how to do X.
This assumes that we are talking about a single linear measure of intelligence, which doesn’t seem to be the case with normal humans. For example, the same person can be above average in spatial reasoning but below average in verbal reasoning.
The relevant analogy for this would be social intelligence, so the person you should be most suspicious of is the one who has displayed the greatest ability to manipulate social situations. Ironically, though, if they are socially intelligent, they should be able to prevent you from suspecting them.
Good point. Anyway, I think we agree that controlling people is not always an easy task, even for people who are very, very smart in other ways.