This is the kind of content I’ve missed from LW in the past couple of years. It reminded me of something from old LW a while back that is a nice object-level complement to this post. I saved it and look at it occasionally for inspiration (I don’t really think it’s a definitive list of ‘things to do as a superhuman’, or even a good list of things to do at all, but it’s a nice reminder that ambitious people are interesting and fun):
Become awesome at mental math
Learn mnemonics. Practise by memorizing and rehearsing something, like the periodic table or the capitals of all nations or your multiplication tables up to 30x30.
Practise visualization, i.e. seeing things that aren’t there. Try inventing massive palaces mentally and walking through them mentally when bored. This can be used for memorization (method of loci).
Research n-back and start doing it regularly.
Learn to do lucid dreaming
Learn symbolic shorthand. I recommend Gregg.
Look at the structure of conlangs like Esperanto, Lojban, and Ilaksh. I feel like this is mind-expanding, like I have a better sense of how language and communication and thought work after being exposed to this.
Learn to stay absolutely still for extended periods of time; convince onlookers that you are dead.
Learn to teach yourself stuff.
Live out of your car for a while, or go homeless by choice
Can you learn to have perfect pitch? Anyway, generally learn more about music.
Exercise. Consider ‘cheating’ with creatine or something. Creatine is also good for mental function for vegetarians. If you want to jump over cars, try plyometrics.
Eat healthily. This has become a habit for me. Forbid yourself from eating anything for which a more healthy alternative exists (e.g., no more white rice (wild rice is better), no more white bread, no more soda, etc.). Look into alternative diets; learn to fast.
Self-discipline in general. Apparently this is practisable. Eliminate comforting lies like that giving in just this once will make it easier to carry on working. Tell yourself that you never ‘deserve’ a long-term-destructive reward for doing what you must, that doing what you must is just business as usual. Realize that the part of your brain that wants you to fall to temptation can’t think long-term—so use the disciplined part of your brain to keep a temporal distance between yourself and short-term-gain-long-term-loss things. In other words, set stuff up so you’re not easy prey to hyperbolic discounting.
Learn not just to cope socially, but to be the life of the party. Maybe learn the PUA stuff.
That said, learn to not care what other people think when it’s not for your long-term benefit. Much of social interaction is mental masturbation, it feels nice and conforming so you do it. From HP and the MOR:
For now I’ll just note that it’s dangerous to worry about what other people think on instinct, because you actually care, not as a matter of cold-blooded calculation. Remember, I was beaten and bullied by older Slytherins for fifteen minutes, and afterward I stood up and graciously forgave them. Just like the good and virtuous Boy-Who-Lived ought to do. But my cold-blooded calculations, Draco, tell me that I have no use for the dumbest idiots in Slytherin, since I don’t own a pet snake. So I have no reason to care what they think about how I conduct my duel with Hermione Granger.
Learn to pick locks. If you want to seem awesome, bring padlocks with you and practise this in public.
Learn how to walk without making a sound.
Learn to control your voice. Learn to project like an actress. PUAs have also written on this.
Do you know what a wombat looks like, or where your pancreas is? Learn basic biology, chemistry, physics, programming, etc. There’s so much low-hanging fruit.
Learn to count cards, like for blackjack. Because what-would-James-Bond-do, that’s why! (Actually, in the books Bond is stupidly superstitious about, for example, roulette rolls.)
Learn to play lots of games (well?). There are lots of interesting things out there, including modern inventions like Y and Hive that you can play online.
Learn magic. There are lots of books about this.
Learn to write well, as someone else here said.
Get interesting quotes, pictures etc. and expose yourself to them with spaced repetition. After a while, will you start to see the patterns, to become more ‘used to reality’?
Learn to type faster. Try alternate keyboard layouts, like Dvorak.
Try to make your senses funky. Wear a blindfold for a week straight, or wear goggles that turn everything a shade of red or turn everything upside-down or an eye patch that takes away your depth-sense. Do this for six months, or however long it takes to get used to them. Then, of course, take them off. Then, when you’re used to not having your goggles on, put them on again. You can also do this on a smaller scale, by flipping your screen orientation or putting your mouse on the other side or whatnot.
Become ambidextrous. Commit to tying your dominant hand to your back for a week.
Humans have magnetite deposits in the ethmoid bone of their noses. Other animals use this for sensing direction; can humans learn it?
Some blind people have learned to echolocate. [Seriously](http://en.wikipedia.org/wiki/Human_echolocation)
Learn how to tie various knots. This is useless but awesome.
Wear one of those belts that tells you which way north is. Keep it on until you are a homing pigeon.
Learn wilderness survival. Plenty of books on the net about this.
Learn first aid. This is one of those things that’s best not self-taught from a textbook.
Learn more computer stuff. Learn to program, then learn more programming languages and how to use e.g. the Linux coreutils. Use dwm. Learn to hack. Learn some weird programming languages. If you’re actually using programming in your job, though, make sure you’re scarily awesome at at least one language.
Learn basic physical feats like handstands, somersaults, etc.
Use all the dead time you have lying around. Constantly do mental math in your head, or flex all your muscles all the time, or whatever.
All that limits you is your own weakness of will.
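The hyperbolic-discounting point in the self-discipline item above can be made concrete with a small sketch. The discount parameter `k = 0.1` and the specific amounts and delays below are illustrative assumptions, not empirical values:

```python
# Hyperbolic discounting: perceived value V = A / (1 + k * D),
# where A is the reward amount and D is the delay until receipt.
# k is an illustrative parameter, not fitted to real behavioral data.

def hyperbolic(amount, delay, k=0.1):
    """Perceived present value of `amount` received after `delay` days."""
    return amount / (1 + k * delay)

# A small temptation now vs. a larger reward a month out:
small_now = hyperbolic(10, 0)      # 10.0 -> the temptation wins up close
large_later = hyperbolic(30, 30)   # 7.5

# The same pair of options viewed from 60 days away:
small_far = hyperbolic(10, 60)     # ~1.43
large_far = hyperbolic(30, 90)     # 3.0 -> patience wins at a distance
```

The preference reversal is the point: up close, the small immediate reward outranks the larger delayed one, but viewed from a distance the ordering flips. That is why keeping temporal distance between yourself and short-term-gain-long-term-loss things works.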
(Not sure who the author is, if anyone finds the original post please link to it! I’ll try to find it when I get the time)
Looks like it’s from here:
I also distinctly remember that post.
I exhaled shortly through my nose at the irony in this one:
Learn wilderness survival. Plenty of books on the net about this.
I do strongly recommend at least visiting the wilderness, and spending time moving around in it. Particularly at night. Walking around in the woods is one of the most impactful experiences I have had of noticing new details, while having a clear memory of not noticing those details before, in a way which was immediately useful.
Hello my values a decade ago, it’s so nice to see you publicly documented! In retrospect, and in particular, the level of paranoia imbued here will serve you well against incentive hijacking, and will serve as a foundation stone for goal stability.
There is one particular policy here where my thinking has changed significantly since then; and I’d love to check against Time whether it makes sense, or whether my values have shifted:
| Reject invest-y power. Some kinds of power increase your freedom. Some other kinds require an ongoing investment of your time and energy, and explode if you fail to provide it. The second kind binds you, and ultimately forces you to give up your values. The second kind is also easier, and you’ll be tempted all the time.
| Optimization never stops. Avoid one-time effort if at all possible. Aim for long-term stability of the process that generates improvements. There is no room for the psychological comfort of certainty.
So, the operative word above is “freedom” (personally, I’ve used “possibility space maximization”), and it’s super useful to run a conceptually exhaustive search across surface-y options. But.
You probably have goals of interest that you wish to achieve (e.g. “long-term future of humanity”). Some of these might require banging at stuff for an extended period of time. You have behaviours (e.g. your meta-policies), which you keep up for an extended period of time. Whether you recognize it as such or not, you are also vesting into these; and by way of the forgetting curve, and blog readership, they also require ongoing maintenance. And yes, there might come future technological change which will make them obsolete, and put you into a decision between “your values” and “rolling with the changes”.
So, my counter to this is: _Anything which does not take into consideration the passage of time gets eaten by it._ Your Time is a super scarce resource, probably the scarcest of them all. One way to turn this liability into an asset is by vesting into stuff (projects, startups, skills, people, ideas, what have you), and riding the compounding interest across time. This is, to my knowledge, the only way one can scale scarce resources into epic levels of task-specific utility.
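To put rough numbers on the linear-vs-compounding contrast: here is a minimal sketch, where the 5% yearly “appreciation” rate and the 40-year horizon are arbitrary assumptions chosen only for illustration:

```python
# Linear vs. compounding payoff from the same one unit of effort per year.
# The 5% rate and 40-year horizon are arbitrary illustrative choices.

def linear_payoff(effort_per_year, years):
    # Each year's effort pays off once and never grows afterward.
    return effort_per_year * years

def compounding_payoff(effort_per_year, years, rate=0.05):
    # Each year's effort keeps appreciating in every subsequent year.
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + effort_per_year
    return total

linear = linear_payoff(1, 40)            # 40.0
compound = compounding_payoff(1, 40)     # ~120.8
```

With these made-up numbers, the compounding strategy ends up roughly 3x ahead of the linear one after 40 years, and the gap widens with the horizon, which is the whole argument for vesting into things that ride compound interest.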
(Relatedly, it seems to me, that there is a sliding scale between the need for change in the face of future changes and vesting into things, that most people tend to shift through as they age. Obvious problem here is simulated annealing being susceptible to fixation on phantom (local) maxima by way of changing environment.)
So, unpacking the desiderata from above, the model I’d offer for consideration is the Affordable Loss Principle, with a side dish of Avoiding Infinite Optimizers:
* The affordable-loss principle prescribes committing in advance to what one is willing to lose, rather than investing in calculations about expected returns to the project. Key to affordable-loss policies is the generation of next-best alternatives, so that when it comes time to move, there is something to seamlessly move forward to.
Or, in the wise words of Zvi: https://www.lesserwrong.com/posts/ENBzEkoyvdakz4w5d/out-to-get-you
Get Got when the deal is Worth It.
When you Get Got, do it on purpose.
But, you cannot afford to Get Got if the price is not compact. (Sufficiently advanced optimizers will eat your time, attention, and resources for breakfast _if you let them_. Don’t.)
In conclusion, I’d suggest that yes, run a freedom-maximizing circle, because it eliminates conceptual blindsight, and there is a lot of low-hanging fruit you can pick up on your way. But additionally, be on the lookout for opportunities that are compact, low-hanging, and compounding across time, so that linear investments today lead to incremental, compounding utility tomorrow.
This is very good. I don’t think I disagree with anything you wrote. In practice, I recognize that most things which are dropped explode at least a little bit, and my implementation of “reject invest-y power” attempts to make sure these explosions are small enough that I can take them without significant damage (not literally zero damage).
Indeed, compounding interest is juicy, and I have also noticed biologically programmed annealing in myself.
I really like the general idea of this. Would have loved to see an example with for instance, making decisions over a second, day, week, month, and year, to get a better concrete idea of how this actually cashes out in terms of decision making, planning, and motivational processes.
That would be a very cool post to write, if I ever got around to writing it :)
One quick remark is that because the process is implemented by updating the way I think, it feels completely transparent from the inside (until I go to the meta level to check what’s on track). Mostly I don’t notice what the system is doing until I reflect on it later. Meanwhile, any new metacognitive content which I’m importing goes through explicit channels and gets lots of attention.
This isn’t really a “process”. Maybe it could be “guidelines”? Either way, some of them are pretty good. Some of them are pretty bad (No falling in love). Some are just weird (Beware of consumer electronic devices).
(Beware of consumer electronic devices).
This seemed straightforward to me: if you are serious about security, most consumer electronics are not going to be secure enough for your purposes. Don’t write up anything on a computer that would be bad if the wrong people knew, eventually.
Two issues. First, are you serious about security? Should you be? What is the bad outcome you’re trying to protect yourself from? It’s possible that OP has good reasons to want security, but it’s also possible that they are paranoid. Note, OP didn’t say “if”. Presumably they think that everyone always needs security.
Second, what is better than a computer? Surely not paper. Don’t post your secrets to facebook in plain text. Anything smarter than that is probably going to work fine for you.
The point is well taken, but I disagree with your default position. It is important to at least understand enough about security to make an informed choice—if you don’t have any methods available, by the time you know you need them it will be irrevocably too late. Some common activities in this community which have strong security implications:
Running a start-up
Running a website
The don’t-post-everything-on-Facebook heuristic is not satisfactory in any of those cases.
The “Don’t write up anything on a computer that would be bad if the wrong people knew, eventually.” heuristic is pretty impractical for any of your three cases too, though.
I agree that some of them are pretty good. I find the whole thing both inspiring and intriguing.
Not falling in love was shocking to see. I find it interesting… would be curious to hear other people’s thoughts on it.
I figured “someone else would express shock and confusion at not falling in love for the obvious reasons”, and I figure the obvious counters were pretty obvious. Falling in love is a serious distraction that can compromise your values if your core values don’t actually include falling in love.
If your core values do include falling in love I assume you would develop a different set of meta principles. I wouldn’t want SquirrelInHell’s own system for myself but I do respect it.
The thing that intrigued me, from SquirrelInHell’s own value system, was:
Being attracted to someone is a sign that your mental security is compromised, and that they are more adequate than you in some respect.
This seemed odd to me—it does seem like an obvious security vulnerability, but the specific mechanism of “a sign they are more adequate than you in some respect” does not seem obvious, although it is plausibly an artifact of either Squirrel’s particular psychology, or the effects of employing the rest of the meta system.
Yeah, the reasons are obvious.
I think what goes on in my head when I hear that is how it doesn’t seem to go along with rationalist discourse. Total self-sacrifice isn’t actually popular; rather, I see a lot of trying to be reasonable, optimizing everything persistently without being extreme. That, and people have posted about how to optimize dating as well. This is particularly true on SSC, but SSC also seems to be functioning as a bridge between rationalists and other very smart people, so I guess that’s to be expected.
In any case, calling love “a sign that your mental security is compromised” is exactly the kind of extreme statement that most rationalists seem to want to avoid, and that would immediately turn off any normal person. Hence why I’m curious about reactions, particularly on LW.
But none of this necessarily means anything. I am actually sympathetic to this view. Falling in love does take away resources, and any happiness anyone experiences before something goes foom can probably be rounded to zero.
I would worry if it were taken as the default value on LW that you’re not supposed to fall in love, but I think a lot of the value of the site is being able to seriously entertain counterintuitive ideas.
That said, I also worry that this concept is going to be The Concept That Gets Talked about in a gigantic sprawling thread, ignoring a lot of important substance in the rest of Squirrel’s post. (I’m committing to not further discussing this particular subtopic to avoid contributing to that)
Halfway through I started to wonder whether it was satire.
I think I would have wondered that myself if I hadn’t had serious conversations with people who use similar thinking systems, which initially I parsed as very outlandish, but after a lot of in depth discussion came to respect. I don’t think the current post is optimized for persuading anyone who’s not generally on board with the project, but it wasn’t trying to be.
(I currently classify things along a broader distinction of “self improvement that’s attempting to involve rigorous integration of your goals” and “self improvement that’s just trying to hack together something that works.” The former seems harder to do. The people who do advocate for it say that the effort is worth it. I’m currently mulling it over a bit and seeing if it’s worth it to me)