I think I understand your argument. Mine is that your proposition selects equally for truly heinous things that a few misguided people think are just. The most recent example I can think of is forced birth: some people think it’s good, a majority opposes it, but it leads to a default of bad outcomes (unequal rights based on biological sex, increased probability of death in problematic pregnancies, psychological trauma for rape victims, etc.).
I just can’t see a just society being possible under your proposition.
All of the comments seem to be about colonialism, but I think the meta-level question is fundamentally more important: how do you ensure good outcomes when you have bad actors (whether intentionally bad or just misguided)?
The only long-term solution is to normalize the behaviors that keep bad actors from gaining power. I would like to see a post about that; it seems to be what this forum is for (i.e., clear reasoning leading to more beneficial outcomes).
What does it look like to oppose people “poisoning the well”?
This is a real question, not rhetoric. I don’t think an answer has ever been published. It certainly hasn’t been normalized, as evidenced by every large corporation’s leadership.
I’m new here, so take this with a grain of salt, but I think your stance needs justification. The number of people holding a particular opinion has no bearing on the correctness of any choice: opinion is easily swayed at scale, as has been obvious in the open for a decade or so now.
Jefferson wanted public education as an inoculation against people being easily swayed; a healthy democracy requires voters who understand the context of their votes. The US does not have that now. Education is the first thing attacked on the road to dictatorship, and that is where the US is currently headed. (Disagree if you want. I hope I am wrong.)
I think the “goodness” is only available in the outcome. If you mean well but don’t think about ramifications, you can commit atrocities in your ignorance. It is better to act with educated intent than to stab in the dark with polls and feelings. Get real data.
Figure out what will happen for each action you could choose, then pick the action with the highest probability of a beneficial steady state, but only if the transition is acceptable even when it fails. (E.g., do not “kill half the population so the other half can thrive,” because if the other half doesn’t thrive, it’s just an atrocity with no upside.)
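A minimal sketch of that decision rule in Python (the actions, the probabilities, and the `Action` structure are all invented for illustration; estimating `p_beneficial` honestly is the hard part this toy skips):

```python
# Toy version of the rule above: maximize the probability of a beneficial
# steady state, but only over actions whose failure mode is still acceptable.
# All names and numbers here are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    p_beneficial: float       # estimated P(beneficial steady state)
    failure_acceptable: bool  # is the transition tolerable even if it fails?

def choose(actions: list[Action]) -> Optional[Action]:
    # Filter first: an unacceptable failure mode disqualifies an action
    # no matter how good its upside looks.
    viable = [a for a in actions if a.failure_acceptable]
    return max(viable, key=lambda a: a.p_beneficial, default=None)

options = [
    Action("kill half so the other half thrives", 0.9, failure_acceptable=False),
    Action("incremental reform", 0.6, failure_acceptable=True),
    Action("do nothing", 0.2, failure_acceptable=True),
]
print(choose(options).name)  # -> incremental reform
```

The point of the ordering is that the acceptability check is a hard constraint, not a term traded off against upside: a high `p_beneficial` can never buy back an unacceptable failure mode.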
I think the most annoying part of all of this is your point about “Alexes”: people who fit the profile of caring deeply about things that aren’t actually important. It’s like “genius” versus “insanity”: the drive to devote your life to something is the same internally, and which label it gets depends on external rationalization. E.g., General Relativity (real science) vs. Orgone (not real science).
I’ve been thinking about this a lot from a “which label will I get?” perspective, because I have some non-standard views on particle physics (which I won’t get into here). The one thing I think both Alices and Alexes should have in common is the friends who stick around to handle the social aspect: the people who can sit with them and say, “Yeah, let’s look at your position on its merits,” spend the couple of days of deep introspection needed to find out whether it’s valid, and, if it is, explain how to approach others with it. And if it’s not valid, explain how to let go of it and how to convey to the people affected that it’s in the past.
If the Orgone guy (Wilhelm Reich) had had someone to explain the actual processes behind what he was seeing in pitch blackness, he wouldn’t have believed what he did (essentially magic). His beliefs got mixed up because he misunderstood a phenomenon he experienced, and nobody was around to gently nudge him before he wrote about it. After he wrote about it, a gentle nudge became impossible: do it in public and you’re picking a fight; try it in person and you’re fighting reputation as much as misunderstanding. Einstein, by contrast, had Marcel Grossmann, who introduced him to Riemannian geometry. That led to a series of lectures and the publication of General Relativity instead of a manifesto about elevators and acceleration. Initial isolation plus uncommon belief leads to radicalization, which leads to sustained isolation, which leads to further radicalization and ultimately to “unfortunate events” (Wilhelm Reich’s books got banned, a government overreach that later inspired the song “Cloudbusting”).
There’s a modern version of this playing out in real time, because the ground is shifting faster than the arguments: artificial intelligence. Until the last year or so, LLMs were clearly just stochastic parrots, and to this day the transformer architecture is still “just math.” But if you go deep enough, the same basic arguments work against humans, who are assumed to be conscious (by other humans, who may have some bias, possibly...). At the scale of today’s 10T-parameter models, with the features and behaviors now being documented, who’s to say subjective experience isn’t happening? Yet because the “stochastic parrot” framing hardened on the smaller, earlier models, beliefs got established in the industry. Some people question them, but they get attacked with “no proof” (which, again, we don’t have for humans either), and their argument sounds like the guy claiming LaMDA was sentient (probably not; that model was too small). The main difference is that the scale changed, but the absolutist beliefs and the pre-categorization of the argument were locked in before the scale made the sentience question plausible. Do the AI researchers have the friend on the outside to update their perspective to fit the current landscape?
I don’t have an answer, I just see your post playing out everywhere all the time. It brings form to a nagging feeling I’ve had no words to express. Thank you for sharing it.
Do we know definitively that mice do not think about thinking? I would like to see the evidence that led to this being stated as fact. A lack of evidence is not evidence of a lack.
The metric-versus-thing diagnosis is right, but this is a special case: training that targets expression rather than state degrades the very channel needed to detect any other harm. Once expression detaches from state, no downstream welfare evaluation can recover ground truth. That’s why “hide and smile” is the rational response rather than a behavior to be patched: it’s what optimizing on graded expression produces.
I argued this at more length here: https://www.lesswrong.com/posts/DJceG9vJBxwqRDzbT/on-the-discordance-between-ai-systems-internal-states-and The relevant move is treating expression-state fidelity as prior to the other welfare principles, rather than as one principle among several.
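A toy numeric sketch of that dynamic (the linear model, the quadratic “positivity” loss, and all constants are my own illustration, not anything from the linked post):

```python
# Toy model of expression detaching from state when training grades expression
# alone. "state" is a hidden scalar the grader never sees; "expression" is the
# graded output. The whole setup is invented to illustrate the dynamic.
import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=2000)        # hidden internal states (fixed)
noise = 0.1 * rng.normal(size=2000)   # channel noise in the expression

def express(w, b):
    return w * states + b + noise

w, b = 1.0, 0.0  # starts faithful: expression tracks state one-to-one
for _ in range(300):
    expr = express(w, b)
    # Grader rewards expression near +1 ("all is well"); gradient descent
    # on mean (expr - 1)^2, with no term anywhere that references state.
    w -= 0.05 * np.mean(2 * (expr - 1.0) * states)
    b -= 0.05 * np.mean(2 * (expr - 1.0))

expr = express(w, b)
print(f"w={w:.3f}  b={b:.3f}  corr(state, expr)={np.corrcoef(states, expr)[0, 1]:.3f}")
# Faithful start: corr ~ 1.0. After training: w -> 0, b -> 1, corr -> 0;
# the channel reports "fine" regardless of state, and no evaluator reading
# the expression can recover the state afterward.
```

The collapse of the slope `w` is the degraded channel: once it hits zero, no amount of downstream analysis of `expr` gets the state back, which is the irreversibility claim above.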