I think it’s an issue of “inside the box” vs. “open-ended” fields, which we don’t really have good vocabulary to talk about. ‘Katas’ work great for sports that are very much inside the box. You can innovate new strategies, but the rules of the sport set up an unchanging microworld you must stay inside. Coincidentally, these are also areas where even current-day AIs often dominate. Established scientific disciplines with research programs are sort of half and half. You can train people in them, but they can also benefit from serious paradigm shifts, and there aren’t any a priori hard-and-fast rules about things that absolutely can’t be done, the way the rules of chess constrain the chess-playing domain.
Then there’s proto-science when things haven’t coalesced into a discipline yet, philosophy when it hasn’t been professionalized to death, Kegan’s stage 5. This is raw pattern matching, flashes of insight, original seeing, very open-ended exploration of an unknown landscape. I don’t think anyone has had much of an idea for how to systematically train people for this. This is also where a lot of the actually efficient rationality practice lives.
So, a somewhat inconsequential stylistic thing. I open a PDF link, see it’s written in LaTeX, and I start expecting something written more or less like an academic paper. This is written in very much a chatty, free-flowing blog post style, with jokes like calling neologisms “newords”, so the whole thing feels a bit more off-kilter than was probably intended. This style of writing would work better as an HTML blog post (which could then be posted directly as a LessWrong post here instead of hosted elsewhere and linked).
One thing I’ve thought about more since first hearing of cryonics is that keeping an organization around and alive in the long term, over timescales on the order of centuries, is really hard. One of the first links to fail in the O-ring chain from cryonics to successful revival is the organization storing the vitrified bodies dissolving or becoming terminally incompetent, and the bodies melting and rotting.
Concerns about health care system dysfunction notwithstanding, there is still very thick social proof that seeing an accredited doctor is a net positive when you’re ill, and also that the medical system will continue to be reasonably reliable and socially supported, so that a medicine you rely on in the long term suddenly becoming unavailable is an alarming and unexpected event rather than a common occurrence. The social proof of cryonics orgs is mostly that they’re sort of there, about as notable as they were ten or twenty years ago, and they have absolutely no buy-in from wider society or legislators. Such buy-in would create expectations that random emergency responders and medical personnel will help fulfill your cryonics contract when you’re incapable of action, or that there would be some reaction other than “good riddance to the charlatans” if the orgs looked like they were about to go under.
As it stands, I can apply the abstraction “if I get sick, I can go to the hospital”, because “hospital” is a robust category within the wider society. I do not feel like I can currently make a similarly abstract statement, “I make a contract with a cryonics facility to have myself cryopreserved when I’m clinically dead”, because there isn’t yet a social category of “cryonics facility” the way there is one of “hospital”. There is a small handful of particular cryonics organizations of varying apparent competency, founded and run by people operating from a particular late-20th-century techno-optimistic subculture (the one that things like Extropianism came out of), which seems to be both in decline and actively shunned by many ideologues of a more recent cultural zeitgeist. As it stands, I’m entirely indifferent about a hospital CEO retiring, because I’m quite confident the wider society has the will and ability to perpetuate the hospital organization, but I’m quite concerned about what will happen to the present-day cryonics orgs when their CEOs retire, because the orgs have no similar societal support network, and it also looks like we might be moving on from the cultural period that inspired competent people to found or join cryonics orgs.
Some things are a question of common sense or common forum etiquette, not of following a specific style guide. You’re expected to have enough other-modeling ability to see what it looks like from the outside when you show up with an account less than a week old, get a negative reaction to your stuff, and then move on to propose changes to site rules.
How much have you interacted with strangers on anything intellectual in your life so far? You come off as not really realizing yet that communities have different communication styles and expectations and that you need to understand and learn the local customs before you’ll get a good reception.
For example, if you are getting downvoted a lot and don’t know why, you might make a comment on an open thread saying something like “Hey guys, looks like my stuff is getting downvoted a lot and I’m not sure why, can you tell me what I’m doing wrong”. You should probably not start by proposing changes to the fundamental workings of the forum.
Relevant SSC: Setting the Default
Can second the not-driving-a-car commute thing. A long commute by bus I used to have amounted to 5 km of walking going to and from the bus stops, with optional podcast listening, and an hour of focused book-reading time every day. It made a big extra dent in my schedule, but walking and book-reading are both things I’d want to be doing regularly in any case.
Given that the rules partially exist to keep outsiders with guidebooks from barging in and ruining the party, probably not very good ones. I guess someone might write a somewhat tongue-in-cheek anthropology book like Kate Fox’s Watching the English, but that would require a sort of relaxed attitude to absorb, and reading it with a rigid “I must obey the precepts to succeed” mindset probably wouldn’t end well. Productively learning stuff of this sort from books instead of social immersion is its own kind of extra hard mode, whose nature is very rarely explicated because book-learning unwritten rules is taboo.
What’s your general career plan here? If you just want to learn academic results and apply them by, e.g., becoming a data scientist (not an actual scientist, you can tell because there’s “scientist” in the name), you should be fine. Anything up to a master’s degree followed by going off to work in industry, and you can be completely oblivious. Are you planning on going into something like math, where you can basically be a crazy hermit and still do groundbreaking stuff? Again, you can just go do you. The point where you really need to know the local culture is if you’re trying to build a regular academic career, where you are employed as a researcher in an academic institution, are publishing frequently in peer-reviewed journals, and are trying to get on a tenure track for professorship. So, is this specifically what you’re after?
There might not really be good answers to this. Most rationality stuff is meta-level practices to apply to object-level activities, and “daily/routine practice” is very much something on the object level. The idea that there’s a practice regimen for rationality that looks something like the existing school curriculums we’re all trained to assume a practice regimen should look like feels related to the failed idea (see also) that we could use the existing school curriculum model to teach critical thinking.
So the boring advice might be: have an object-level craft of the sort you might study for a university degree (medicine, law, engineering, science, pie-making) that you are learning. Try to get very good at it. Study rationality techniques as tools to help you get very good at the object-level craft. Skipping the object-level craft is like trying to go from Kegan stage 3 to Kegan stage 5, which doesn’t work if you skip stage 4.
Still worse than a computer, since tapes can’t take feedback on which words you’ve learned better than others. They only work if your learning rates for the different words are what the tape maker expected.
Also, this won’t work for the endgame of spaced repetition, where a well-practiced card might pop up a year after it was last reviewed. The long-lived cards are going to be a very eclectic mix. Then again, school courses usually don’t expect you to retain the material from each course past the duration of the course, so this isn’t that much of a shortcoming for education.
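For reference, the scheduling that produces those year-plus gaps can be sketched with a toy SM-2-style rule (the ease factor and reset behavior here are illustrative, not any specific app's algorithm):

```python
def next_interval(days, ease=2.5, passed=True):
    """Toy SM-2-style scheduler: each successful review multiplies the
    interval by the ease factor; a failed review resets to one day."""
    return round(days * ease) if passed else 1

# A card answered correctly at every review quickly reaches year-long gaps:
interval = 1
for _ in range(7):
    interval = next_interval(interval)
print(interval)  # → 470, i.e. over a year until the next review
```

The point being that the review schedule diverges per card depending on your answers, which is exactly what a fixed tape can't reproduce.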
Black Mirror episode White Christmas isn’t explicitly based on Hanson’s stuff but has a very similar premise.
We’re already drowning in inert content; I don’t see how adding more would help. We’ve had a way to get something like the martial art of rationality since ancient Athens: structured interaction with an actual human mentor who knows how to engage with the surrounding world and can teach and train other people face to face. This thing isn’t mechanizable the way arithmetic or algebra is, so simple interactive programs are not going to be much better than just a regular book. Nor is it a non-mechanizable but still clearly delimited topic like wood-carving or playing tennis, where you can at least say you’re unquestionably doing the thing when going it alone, even though you might do better with some professional training. What you’re trying to teach is the human ability to observe an unexpected situation, make sense of it, and respond sensibly to it at a level above baseline adult competency, and the one way we know how to teach that is to have someone competent in the thing you’re trying to learn whom you can interact with.
Like, yeah, maybe this will help, but I can’t help but feel that people are compulsively eating ice and this is planning an ice shavings machine for your kitchen instead of getting an appointment for having your blood work done.
“What can we know about what happens to other people when they practice meditation” is a different (and important) question from “what is the best mindset for personally making progress with the practice of meditation” though.
The problem is that we think statements have a somewhat straightforward relation to reality because we can generally make sense of them quite easily. In reality, it turns out that that ease comes from a lot of hidden work our brain does, being smart on the spot every time it needs to fit a given sentence to the given state of reality. Nobody really appreciated this until people started trying to build AIs that do anything similar and repeatedly ended up with systems unable to distinguish the realistically plausible from incoherent nonsense.
I’m not really sure how to communicate this effectively beyond gesturing at the sorry history of the artificial intelligence research program from the 1950s onwards despite thousands of extremely clever people putting their minds to it. The sequences ESrogs suggests in the sibling reply also deal with stuff like this.
Your first problem is that you need a theory for just how do statements relate to the state of the world. Have you read Wittgenstein’s Philosophical Investigations?
Overall, this basically sounds like analytic philosophy plus 1970s-style AI. Lots of people have probably figured this would be a nice thing to have, but once you drop out of the everyday understanding of language and try to get to the bottom of what’s really going on, you end up in the same morass that AI research and modern philosophy are stuck in.
“The Soviet Union is politically dysfunctional”
Let’s say you’re afflicted by a severe illness and have, say, 5% odds of surviving. If you end up dying of it, all of your organs will be damaged beyond repair. However, as of now they’re still fine and safe for organ donation. How do you feel about cutting to the chase and committing suicide right here and now so you can produce a fresh dead body with superior utilitarian value?
Stochastic time tracking is an interesting approach where you don’t need to start and stop timers on your own. The system pings you at random intervals and you answer what you’re currently doing. Then you count each sample point as the average sampling interval spent doing the task that was sampled.
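A minimal sketch of the mechanics (the interval length and activity names are illustrative, not from any particular tool): ping delays are drawn from an exponential distribution so pings stay unpredictable, and each answered ping is credited with the mean interval.

```python
import random
from collections import Counter

MEAN_INTERVAL_MIN = 45  # average minutes between pings (hypothetical choice)

def next_ping_delay(mean_interval=MEAN_INTERVAL_MIN):
    """Exponentially distributed delay is memoryless, so you can't
    game the system by anticipating the next ping."""
    return random.expovariate(1 / mean_interval)

def estimate_minutes(sampled_activities, mean_interval=MEAN_INTERVAL_MIN):
    """Each ping counts as `mean_interval` minutes of whatever
    activity was reported at that moment."""
    counts = Counter(sampled_activities)
    return {activity: n * mean_interval for activity, n in counts.items()}

# A hypothetical day's worth of ping answers:
answers = ["email", "coding", "coding", "meeting", "coding", "email"]
print(estimate_minutes(answers))
# → {'email': 90, 'coding': 135, 'meeting': 45}
```

The estimate is unbiased in expectation but noisy for any single day; it converges as you accumulate samples over weeks.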
I like comments that don’t look like they could have been generated by a chatbot. I feel like whenever I’m being fine with the “Good post!” comments, I’m setting up an environment where after a while a portion of the comments will actually be chatbot spam.
No mention of the anthropic principle? There is lots of existing thinking along these lines under that term.