2. I mentioned that there should be much more impressive behavior if they were that smart; I don’t recall us talking about that much, but I’m not sure.
You said “why don’t they e.g. jump in prime numbers to communicate they are smart?” and I was like “hunter-gatherers don’t know prime numbers, and perhaps not even addition”, and you were like “fair”.
I mean, I thought about what I’d expect to see, but I unfortunately didn’t really imagine them as smart; I just imagined them as having a lot of potential while being totally untrained.
3. I recommended that you try hard to invent hypotheses that would explain away the brain sizes.
(I’m kinda confused why your post here doesn’t mention that much; I guess implicitly the evidence about hunting defeats the otherwise fairly [strong according to you] evidence from brain size?)
I suggest that a bias you had was “not looking hard enough for defeaters”. But IDK, not at all confident, just a suggestion.
Yeah, the first two points in the post are just very strong evidence that overpowers my priors (where by priors I mean considerations from evolution and brain size, as opposed to behavior). Ryan’s point changed my priors, but I think it isn’t related enough to “Can I explain away their cortical neuron count?” that asking myself this question even harder would’ve helped.
Maybe I made a general mistake like “not looking hard enough for defeaters”, but that isn’t very actionable yet. I did try to take all the available evidence and update properly on everything. But maybe there was some motivated stopping in not trying even longer to come up with a concrete example of what I’d have expected to see from orcas. It’s easier to say in retrospect, though. Back then I didn’t know in what direction I might be biased.
I guess I should vigilantly look out for warning signs like “not wanting to bother to think about something very carefully”. But it doesn’t feel like I was making the mistake, even though I probably did, so I guess the sensation might be hard to catch at my current level.
I did try to take all the available evidence and update properly on everything. But maybe there was some motivated stopping in not trying even longer to come up with a concrete example of what I’d have expected to see from orcas.
These sound good, and maybe you have in mind the same thing I mean, but to clarify, I mean something like: do biased thinking in both directions, i.e. be a lawyer for each side in turn. (Don’t only do this, of course; also do other things like neutral integration / comparison, etc.)
So, say you have your model / argument that says orcas are smart (or that this is a good project). Then you put on the anti-hat and try really hard to find counterarguments, e.g. by thinking them up, and also by motivatedly looking for information that would yield a counterargument.
To do this properly you may have to unblend from your wanting X to be true.
Yeah, I’ve really started loving “self-dialogues” since discovering them last month; I have two self-dialogues in my notes just from the last week.
Ah, thx! Will try.