I’m not sure that a lack of creativity is involved in the plate situation. The real issue in the context of the plate seems to be a general heuristic that people aren’t trying to deceive us. In order for society to function, this is a good heuristic to have. Countries where people trust more are generally more prosperous, and while there are correlation vs. causation issues here, there’s surrounding evidence which suggests that there really is a causal link from high trust to more prosperity.
That said, being able to do this and think creatively when one knows that a trick is involved might be interesting, and the idea that it would help promote creative hypothesis generation is intriguing.
I don’t think that the point with the plate example is that they should have guessed that the teacher cheated, but rather that if “convection” or whatever doesn’t actually predict what you saw, then it’s better to say “I don’t know” than to try and guess the password.
You’re right that that was the point, but the setup is still isomorphic to a deception-based magic trick, in that students were told that the correct way to explain a phenomenon is to simply search for the matching password, which they did. And like in a magic trick, their lie-grounded expectations were easily frustrated.
You’re right that’s the main point of the story, but that doesn’t mean I can’t adapt the story to also serve my purposes.
The real issue in the context of the plate seems to be a general heuristic that people aren’t trying to deceive us.
Not surprisingly then, a key element of most magic tricks is misdirection, or outright lying. There was a discussion about this on LW a while back, but I can’t find it. Someone mentioned how if you just boldly lie about key elements of the setup, people will form expectations that you can then easily surprise. The commenter then found that this skill at lying (and noticing how trusting people are) bled into the rest of his life, which led me to suggest people should be extra careful around magicians even when they’re not on stage!
Sounds like my post from 2009, Misleading the Witness, perhaps?
That’s the one! Thanks for catching that. The relevant quote and my reaction were:
This riddle made me remember reading about how beginning magicians are very nervous in their first public performances, since some of their tricks involve misdirecting the audience by openly lying… they learn to be more comfortable once they find out how easily the audience will pretty much accept whatever false statements they make.
It makes me wonder how dangerous magicians can become in their regular lives.
If you have some basic understanding of how physics works, then the heating of the plate’s wrong side should strike you as extremely implausible—a lot more so than hypotheses about deliberate manipulation of the experimental setup. A physical theory should be a quantitative guide for updating probabilities, and as such it should be able to give you very low probabilities for a whole class of hypotheses. The teacher’s trick drew attention to the fact that the students markedly did not use physics to generate quantitative answers. So, per my reading, the focal point of the experiment wasn’t suspending trust in people.
Still, “the teacher turned the plate around” should come up in the grand list of possible solutions, but it shouldn’t be privileged, and it should be weighed against the prior probability of “the teacher will deceive us”.
There’s a difference between “considering deception and then dismissing it as unlikely” and “not even considering deception at all”.
Considering a hypothesis takes non-negligible time. “Not even considering it at all” is what “dismissing due to low posterior weighting due to low prior weighting” feels like from the inside. If the posterior gets high enough, you’ll “start considering it”.
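To make that concrete with a toy model (the threshold and all numbers here are hypothetical, purely for illustration): model “starting to consider” a hypothesis as its posterior clearing an attention threshold after a Bayesian update.

```python
# Toy model of "considering" as a thresholded Bayesian update.
# All numbers are hypothetical illustrations, not measurements.

def posterior(prior: float, likelihood: float, p_evidence: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / p_evidence

ATTENTION_THRESHOLD = 0.05  # below this, it "feels like" not considering it at all

# "The teacher is deceiving us": low prior, but the observed anomaly
# (the wrong side of the plate is hot) is much likelier under deception.
prior_deception = 0.01
p_anomaly_given_deception = 0.9
p_anomaly = 0.05  # overall probability of seeing such an anomaly

post = posterior(prior_deception, p_anomaly_given_deception, p_anomaly)

print("considered beforehand:", prior_deception > ATTENTION_THRESHOLD)  # False
print(f"posterior = {post:.2f}")                                        # 0.18
print("starts being considered:", post > ATTENTION_THRESHOLD)           # True
```

On this sketch, “not even considering it at all” and “assigning it a very low probability” are the same state, just seen from opposite sides of the threshold.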
There may also be good signalling reasons to shy away from considering the hypothesis that your teacher is deceiving you. If so, I would expect my untutored priors to be too low.
I’m not sure I understand what you are saying. To me, it seems you are suggesting that we subconsciously consider every possibility and weigh the probabilities, and only the likely enough ones float to our conscious consideration.
Also, I can consciously consider situations that have very, very low odds, such as the assumption that the plate is a metal alloy not just from any planet, but specifically from Zepton IV. So if I can consider outlandish things like that, why don’t I consider the idea that the teacher was deceptive?
We consider low-probability possibilities in batches, by way of their reference classes. For instance, once we think of the turned-plate hypothesis, we don’t separately consider “the plate was turned 180 degrees”, “the plate was turned 179.9 degrees”, “the plate was turned 179.95 degrees”, etc. If we wanted to distinguish among these subhypotheses, we could, but mostly we don’t bother.
You chose Zepton IV largely at random. The hard part is locating the right specific hypothesis.
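A quick sketch of that batching (the names and numbers are mine, purely hypothetical): probability mass is assigned to the reference class as a whole, and only subdivided among specific subhypotheses on demand.

```python
# Hypothetical sketch: probability mass lives on the reference class
# ("the plate was turned"), not on each specific rotation angle.

class_mass = {"the plate was turned": 0.02}  # one number covers the whole class

def subdivide(total: float, angles: list[float]) -> dict[float, float]:
    """Split a class's mass uniformly across specific angles, on demand."""
    return {a: total / len(angles) for a in angles}

# Mostly we never bother; when we do, the class's total mass is conserved:
specific = subdivide(class_mass["the plate was turned"], [180.0, 179.9, 179.95])
print(specific[180.0])  # each subhypothesis gets a third of the class's mass
```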
Right, I agree with you. Sorry about the “Zepton IV” thing, then.
But still, we do consider low-probability possibilities. So I’m still not sure what you meant by:
“Not even considering it at all” is what “dismissing due to low posterior weighting due to low prior weighting” feels like from the inside.
I mean that when a human thinks they feel like they’re “not even considering it at all”, they actually are very slightly considering it.
How do you know that? I don’t mean that in a “I think you’re wrong” way, but in a “I think you’re right, but I’m interested in knowing why” kind of way.
Sometimes a human feels like they’re not considering a hypothesis at all, and later starts considering it. That’s not what confidence zero looks like.
Actually, humans do sometimes behave as though their confidence in a proposition was as close to zero as we’re able to measure. I can’t think of any non-politically-charged examples at the moment, but consider for example the sort of confusion that leads someone to ask “So are you Blue or Green?” of someone who’s just finished explaining that they’re Red.
That’s what I would describe as someone not considering a hypothesis, and then later starting considering it. That is not what I would describe as someone subconsciously considering a hypothesis the entire time, at least without further justification.
Remember that not considering a hypothesis is not the same as saying the hypothesis has a low probability. Saying the hypothesis has a low probability is considering it and then discarding it. I think we’re talking about two different things.
That reminds me of what Scott Adams called The Two-Bucket Mind.
I can’t think of any non-politically-charged examples at the moment, but consider for example the sort of confusion that leads someone to ask “So are you Blue or Green?” of someone who’s just finished explaining that they’re Red.
If that teacher’s students were absolutely not expecting a lie, then another out-of-the-box question based on physics they should understand wouldn’t trick them. The trust has been broken. On the other hand, if the problem is their inability to be creative enough, they won’t become creative just because they learned not to trust the teacher.
My high school physics teacher liked tricking us. While demonstrating his point about reflections off light and dark surfaces, he covered up the laser pointer while shining it at a black binder. He put a compass next to a magnet to throw us off. These tricks were rare enough that we didn’t expect them every time, but we also knew not to blindly trust his setup. Still, there were plenty of people who fell for them every time.
And then came the torque wheel, a gyroscopic bicycle wheel almost exactly like the one in this video. My first reaction, based on the physics I did understand (which wasn’t much at the time), was, “That’s impossible!” The teacher then told us it wasn’t a trick. He wouldn’t lie, but my reaction was still, “That’s impossible!” If I remember correctly, my hypothesis involved a hinge at the end of a solid string. Eventually, the teacher just had me hold the wheel and spun it… and the friggin’ thing moved on its own!!! I even checked that the axle and the rim didn’t contain any trickery before I was able to admit, “Huh, I guess it is possible.”
A couple of years after that, another physics teacher inadvertently placed a compass on top of a table with a classroom computer inside it. He then had us learn N/S/E/W by pointing. I was the only moron in a 200-person class pointing to the “wrong” North.
Countries where people trust more are generally more prosperous, and while there are correlation vs. causation issues here, there’s surrounding evidence which suggests that there really is a causal link from high trust to more prosperity.
My intuition (and only my intuition, I haven’t been able to research this effectively) suggests that the causal link is in the other direction. That is, in more prosperous countries/regions there is higher trust, since fewer people (in general) need to push limits to be able to live comfortably, so there is less crime.
It’s worth noting that a standard technique to weaken a rival group is to deter or prevent its members from cooperating — sometimes by disrupting their communications; but also by inciting rivalries or jealousies (i.e. lack of trust) within the group. We see this in everything from high-school cliques, to Internet trolling, to office politics, to counterintelligence and psyops among nation-states.
Ceteris paribus, any human social group is more effective if its members cooperate toward accomplishing its goals. Therefore a rival group which wants to prevent those goals from being accomplished can do well by disrupting the group’s cooperation.
“Creativity” implies generation. And yet, does the proper definition of it emphasize generation? Scott Adams has said that creativity is the ability to purge compelling, half-functional ideas from one’s mind; basically, holding off on proposing solutions.
The strength of intelligence, and humanity, is improvising where hard-coded adaptations would be inferior.
I’m not willing to say that the problem in such a case was the absence of a specific, narrow heuristic rather than a lack of creativity. Creativity is getting by when narrow heuristics fail, and not cutting corners by assuming too much.
However unlikely it is in a specific case that the specific usually-valid heuristic “they are not trying to deceive you” fails, this was not a one-off case: there are a great number of usually-valid heuristics, each of which will rarely fail, but at least one of which will fail somewhat often, such that somewhat often a person with low creativity will fail.