Brian Lindsay
Karma: 4
Do we know definitively that mice do not think about thinking? I would like to see the evidence that led to this being stated as fact. A lack of evidence is not evidence of a lack.
I think the most annoying part of all of this is your point about “Alexes”: people who fit the profile of caring deeply about things that aren’t actually important. It’s kind of like “genius” vs. “insanity”, where the profile of devoting your life to something gets its label based on external rationalization; internally, the drive is the same. E.g. General Relativity (real science) vs. Orgone (not real science).
I’ve been thinking about this a lot from a “which label will I get?” perspective, because I have some non-standard views on particle physics (which I won’t get into here). The one thing I think both Alices and Alexes should have in common is the friends who stick around to explain the social aspect: the people who can sit with them and say, “Yeah, let’s look at your position on its merits,” spend a couple of days in deep introspection to find out whether it’s valid, and, if it is, explain how to approach it with others. And if it’s not valid, to explain how to let go of it and how to express to the people affected that it’s in the past.
If the Orgone guy (Wilhelm Reich) had had someone to explain the actual processes he was seeing in pitch blackness, he wouldn’t have believed what he did (essentially magic). His beliefs got mixed up because he misunderstood a phenomenon he experienced, and nobody was around to gently nudge him before he wrote about it. After he wrote about it, a gentle nudge became impossible: if you did it in public, you would be picking a fight; if you tried in person, you would be fighting reputation as much as misunderstanding. Einstein had Marcel Grossmann, who introduced him to Riemannian geometry. That led to a series of lectures and the publication of General Relativity, instead of a manifesto about elevators and acceleration. Initial isolation plus uncommon belief leads to radicalization, which leads to sustained isolation, which leads to further radicalization and ultimately to “unfortunate events” (Wilhelm Reich’s books were banned and burned, a government overreach that later inspired Kate Bush’s song “Cloudbusting”).
There’s a modern version of this problem playing out in real time, because the ground is shifting faster than the arguments: Artificial Intelligence. Up until the last year or so, LLMs were clearly just stochastic parrots. To this day, the transformer architecture is still “just math”. But if you go deep enough, the same basic arguments work against humans, who are assumed to be conscious (by other humans, who may have some bias, possibly...). With the scale of today’s 10T-parameter models and the features and behaviors now being documented, who’s to say subjective experience isn’t happening? However, because of the inertia of “stochastic parrots” from the smaller early models, beliefs got established in the industry. Some people question them, but those people get attacked with “no proof” (which, again, we don’t have for humans either), and their argument sounds like the guy who claimed LaMDA was sentient (probably not; that model was too small). The main difference is that the scale changed, but the absolutist beliefs and the pre-categorization of the argument were locked in before the scale made the sentience argument plausible. But do the AI researchers have the friend on the outside to update their perspective to fit the current landscape?
I don’t have an answer; I just see your post playing out everywhere, all the time. It gives form to a nagging feeling I’ve had no words to express. Thank you for sharing it.