Most of my posts and comments are about AI and alignment. Posts I’m most proud of, which also provide a good introduction to my worldview:
Without a trajectory change, the development of AGI is likely to go badly
Steering systems, and a follow-up on corrigibility.
I also created Forum Karma, and wrote a longer self-introduction here.
PMs and private feedback are always welcome.
NOTE: I am not Max Harms, author of Crystal Society. I’d prefer for now that my LW postings not be attached to my full name when people Google me for other reasons, but you can PM me here or on Discord (m4xed) if you want to know who I am.
Coherence is mostly about not stepping on your own toes; i.e. not taking actions that get you strictly less of all the different things you want than some other available action would. “What you want” is allowed to be complicated and diverse and include fuzzy time-dependent things like “enough leisure time along the way that I don’t burn out”.
This is kind of fuzzy / qualitative, but in my view, most high-agency humans act mostly coherently most of the time, especially (but not only) when they’re pursuing normal, well-defined goals like “make money”. Of course they make mistakes, including meta-level ones (e.g. misjudging how much time to spend thinking about and evaluating potential options vs. executing a chosen one), but not usually in ways that someone else in their shoes (with similar experience and g) could have easily or predictably done better without the benefit of hindsight.
Lots of people try to make money, befriend powerful / high-status people around them, upskill, etc. I would only categorize these actions as pursuing “high ability-to-act” if they actually work, on a time scale and to a degree such that the doer actually ends up with the result they wanted, or with the leverage to make it happen. And then the actual high ability-to-act actions are the more specific underlying actions and mental motions that actually worked. E.g. a lot of people try starting AGI research labs or seeking venture capital funding for their startups; few of them actually succeed in creating multi-billion dollar enterprises (real or not). The top-level actions might look sort of similar, but the underlying mental motions and actions will look very different depending on whether the company is (successful and real), (successful and fraudulent), or a failure. The actual pursuing-high-ability-to-act actions are mostly found in the (successful and real) and (successful and fraudulent) buckets.
Taking shrooms in particular seems like a pretty good example of an action that is almost certainly not coherent, unless there is some insight that you can only reach (or reach most quickly) by taking hallucinogenic drugs. Maybe there are some insights like that, but I kind of doubt it, and trying shrooms before you’ve exhausted other ideas, in vague pursuit of some misunderstood concept of coherence, is not the kind of thing I would expect to be common among the most successful humans or AIs. There are of course exceptions (very successful humans who have taken drugs and attribute some of their success to it), but my guess is that their success is mostly in spite of the drug use, or at least that the drug use was not actually critical.
The other examples are maybe stereotypes of what some people think of as coherence-pursuing behavior, but I would guess they’re also not particularly strongly correlated with actual coherence.