I don’t think that intelligence agencies and the military are likely to be much more reckless than Altman and co.; what seems more probable is that their interests and attitudes genuinely align.
most modern humans are terribly confused about morality
The other option is being slightly less terribly confused, I presume.
This is why MAPLE exists, to help answer the question of what is good
Do you consider yourselves to have a significant comparative advantage in this area relative to all the other moral philosophers throughout the millennia whose efforts weren’t enough to lift humanity from the aforementioned dismal state?
Oh, sure, I agree that an ASI would understand all of that well enough, but even if it wanted to, it wouldn’t be able to give us either all of what we think we want, or what we would endorse in some hypothetical enlightened way, because neither of those things comprise a coherent framework that robustly generalizes far out-of-distribution for human circumstances, even for one person, never mind the whole of humanity.
The best we could hope for is that some-true-core-of-us-or-whatever would generalize in such a way, and that the AI recognizes this and propagates it while sacrificing the inessential contradictory parts. But given that our current state of moral philosophy is hopelessly out of its depth relative to this, to the extent that people rarely even acknowledge these issues, trusting that AI would get this right seems like a desperate gamble to me, even granting that we somehow could make it want to.
Of course, it doesn’t look like we would get to choose not to be subjected to a gamble of this sort even if more people were aware of it, so maybe it’s better for them to remain in blissful ignorance for now.
I expect this because humans seem agent-like enough that modeling them as trying to optimize for some set of goals is a computationally efficient heuristic in the toolbox for predicting humans.
Sure, but the sort of thing that people actually optimize for (revealed preferences) tends to be very different from what they proclaim to be their values. This is a point not often raised in polite conversation, but to me it’s a key reason for the thing people call “value alignment” being incoherent in the first place.
But meditation is non-addictive.
Why not? An ability to get blissed-out on demand sure seems like it could be dangerous. And, relatedly, I have seen stuff mentioning jhana addicts a few times.
Indeed, from what I see there is consensus that academic standards on elite campuses are dramatically down; likely this has a lot to do with the need to sustain holistic admissions.
As in, the academic requirements, the ‘being smarter’ requirement, has actually weakened substantially. You need to be less smart, because the process does not care so much if you are smart, past a minimum. The process cares about… other things.
So, the signalling value of their degrees should be decreasing accordingly, unless one mainly intends to take advantage of the process. Has some tangible evidence of that appeared already, and are alternative signalling opportunities emerging?
I think Scott’s name is not newsworthy either.
Metz/NYT disagree. He doesn’t completely spell out why (it’s not his style), but, luckily, Scott himself did:
If someone thinks I am so egregious that I don’t deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists.
Metz/NYT considered Scott to be bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name, is the long and short of it.
Right, the modern civilization point is more about the “green” archetype. The “yin” thing is of course much more ancient and subtle, but even so I doubt that it (and philosophy in general) was a major consideration before the advent of agriculture leading to greater stability, especially for the higher classes.
and another to actually experience the insights from the inside in a way that shifts your unconscious predictions.
Right, so my experience around this is that I’m probably one of the lucky ones in that I’ve never really had those sorts of internal conflicts that make people claim that they suffer from akrasia, or excessive shame/guilt/regret. I’ve always been at peace with myself in this sense, and so reading people trying to explain their therapy/spirituality insights usually makes me go “Huh, so apparently this stuff doesn’t come naturally to most people; shame that they have to bend over backwards to get to where I have always been. Cool that they have developed all these neat theoretical constructions meanwhile though.”
Maybe give some of it a try if you haven’t already, see if you feel motivated to continue doing it for the immediate benefits, and then just stick to reading about it out of curiosity if not?
Trying to dismiss the content of my thoughts does seem to help me fall asleep faster (sometimes), so there’s that at least :)
Thanks for such a thorough response! I have enjoyed reading your stuff over the years; of all the spirituality-positive people, I find your approach especially lucid and reasonable, up there with David Chapman’s.
I also agree with many of the object-level claims that you say spiritual practices helped you reach, like the multi-agent model of mind, cognitive fusion, etc. But, since I seem to be able to make sense of them without having to meditate myself, it has always left me bemused as to whether meditation really is the “royal road” to these kinds of insight, and if whatever extra it might offer is worth the effort. Like, for example, I already rate my life satisfaction at around 7, and this seems adequate given my objective circumstances.
So, I guess, my real question for the therapy and spirituality-positive people is why they think that their evidence for believing what they believe is stronger than that of other people in that field who have different models/practices/approaches but about the same amount of evidence for its effectiveness. Granted that RCTs aren’t always, or even often, easy, but it seems to me that the default response to lack of strong evidence of that sort, or particularly reliable models of reality like those that justify trusting parachutes even in the absence of RCTs, is to be less sure that you have grasped the real thing. I have no reason to doubt that plenty of therapists/coaches etc. have good evidence that something that they do works, but having a good, complete explanation of what exactly works or why is orders of magnitude harder, and I don’t think that anybody in the world could reasonably claim to have the complete picture, or anything close to it.
I think western psychotherapies are predicated on incorrect models of human psychology.
Yet they all seem to have positive effects of similar magnitude. This suggests that we don’t understand the mechanism through which they actually work, and it seems straightforward to expect that this extends to less orthodox practices.
RCTs mostly can’t capture the effects of serious practice over a long period of time
But my understanding is that the benefits of (good) spiritual practices are supposed to be continuous, if not entirely linear: the effort you invest correlates with the benefits you get, until enlightenment
and becoming as gods.
Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)
What is your take on the Dodo bird verdict, in relation to both therapy and Buddhism-adjacent things? All this stuff seems to be very heavy on personal anecdotes and just-so stories, and light on RCT-type things. Maybe there’s a there there, but it doesn’t seem like serious systematic study of this whole field has even begun, and there’s plenty of suspicious resistance to even the idea of that from certain quarters.
For whatever reason, it looks like when these kinds of delusions are removed, people gravitate towards being compassionate, loving, etc.
This is also a big if true type claim which from the outside doesn’t seem remotely clear, and to the extent that it is true causation may well be reversed.
That is, for all its associations with blue (and to a lesser extent, black), rationality (according to Yudkowsky) is actually, ultimately, a project of red. The explanatory structure is really: red (that is, your desires), therefore black (that is, realizing your desires), therefore blue (knowledge being useful for this purpose; knowledge as a form of power).
Almost. The explanation structure is: green (thou art godshatter), therefore red, therefore black, therefore blue. Yudkowsky may not have a green vibe, as you describe it in this series, but he certainly doesn’t shy from acknowledging that there’s no ultimate escaping from the substrate.
Green is the idea that you don’t have to strive towards anything.
Can only be said by somebody not currently starving, freezing/parched or chased by a tiger. Modern civilization has insulated us from those “green” delights so thoroughly that we have an idealized conception far removed from how things routinely are in the natural world. Self-preservation is the first thing that any living being strives towards, the greenest thing there is, any “yin” can be entertained only when that’s sorted out.
But some of them don’t immediately discount the Spokesperson’s false-empiricism argument publicly
Most likely as a part of the usual arguments-as-soldiers political dynamic.
I do think that there’s an actual argument to be made that we have much less empirical evidence regarding AIs compared to Ponzis, and plenty of people on both sides of this debate are far too overconfident in their grand theories, EY very much included.
Sure, there is common sense, available to plenty of people, about which reference classes apply to Ponzi schemes (but, somehow, not to everybody, far from it). Yudkowsky’s point, however, is that the issue of future AIs is entirely analogous, so people who disagree with him on this are as dumb as those taken in by Bernies and Bankmans. Which just seems empirically false—I’m sure that the proportion of AI doom skeptics among ML experts is much higher than that of Ponzi believers among professional economists. So, if there is progress to be made here, it probably lies in grappling with whatever asymmetries exist between these situations. Telling skeptics for the hundredth time that they’re just dumb doesn’t look promising.
And due to obvious selection effects, such people are most likely to end up in need of one. Must be a delightful job...
The standard excuse is that the possibility to ruin everything was a necessary cost of our freedom, which doesn’t make much sense
There’s one further objection to this, to which I’ve never seen a theist responding.
Suppose it’s true that freedom is important enough to justify the existence of evil. What’s up with heaven then? Either there’s no evil there and therefore no freedom (which is still somehow fine, but if so, why the non-heaven rigmarole then?), or both are there and the whole concept is incoherent.
That’s probably Kevin’s touch. Robin has this almost inhuman detachment, which on the one hand allows him to see things most others don’t, but on the other makes communicating them hard, whereas Kevin managed to translate those insights into engaging humanese.
Any prospective “rationality” training has to comprehensively grapple with the issues raised there, and as far as I can tell, they don’t usually take center stage in the publicized agendas.
I do agree that there are some valuable Eastern insights that haven’t yet penetrated the Western mainstream, so work in this direction is worth a try.
Also reasonable.
Here I disagree. I think that much of “what is good” is contingent on our material circumstances, which are changing ever faster these days, so it’s no surprise that old answers no longer work as well as they did in their time. Unfortunately, nobody has discovered a reliable way to timely update them yet, and very few seem to even acknowledge this problem.