Also discovered bone conduction headphones and I am impressed with the quality.
Do you have a recommendation? I’m constantly on the lookout for new headphone styles, since I have weird ear holes that nothing fits in.
Taking my place in history—one of my first tasks as an intern at MIRI was to write some ruby scripts that dealt with some aspects of that donation.
Not only did that experience land me my first programming job, but I’m just now realizing that it was also the impetus that led me to grab more bitcoin (I had sold mine at the first peak in 2013) AND look into Stellar. Probably the most lucrative internship ever.
(Shoutout to Malo/Alex if you guys are still lurking LW)
I’m feeling nostalgic.
Is there any interest in having a monthly thread where we re-post links to old posts/comments from LW? Possibly scoped to that month in previous years? That is, each comment would look like
(2013) link
brief description / thoughts
or something.
It’s pretty easy to go back and look through some of the older, more popular posts—but I think there are many open thread comments and frontpage posts not by Yvain / Eliezer that are starting to slip through the cracks of time. Would be nice to see what we all remember.
This is the kind of content I’ve missed from LW in the past couple of years. It reminded me of something from old LW a while back that is a nice object-level complement to this post. I saved it and look at it occasionally for inspiration (I don’t really think it’s a definitive list of ‘things to do as a superhuman’, or even a good list of things to do at all, but it’s a nice reminder that ambitious people are interesting and fun):
Become awesome at mental math
Learn mnemonics. Practise by memorizing and rehearsing something, like the periodic table or the capitals of all nations or your multiplication tables up to 30x30.
Practise visualization, i.e. seeing things that aren’t there. Try inventing massive palaces mentally and walking through them mentally when bored. This can be used for memorization (method of loci).
Research n-back and start doing it regularly.
Learn to do lucid dreaming
Learn symbolic shorthand. I recommend Gregg.
Look at the structure of conlangs like Esperanto, Lojban, and Ilaksh. I feel like this is mind-expanding, like I have a better sense of how language, communication, and thought work after being exposed to this.
Learn to stay absolutely still for extended periods of time; convince onlookers that you are dead.
Learn to teach yourself stuff.
Live out of your car for a while, or go homeless by choice
Can you learn to be pitch-perfect? Anyway, generally learn more about music.
Exercise. Consider ‘cheating’ with creatine or something. Creatine is also good for mental function for vegetarians. If you want to jump over cars, try plyometrics.
Eat healthily. This has become a habit for me. Forbid yourself from eating anything for which a more healthy alternative exists (e.g., no more white rice (wild rice is better), no more white bread, no more soda, etc.). Look into alternative diets; learn to fast.
Self-discipline in general. Apparently this is practisable. Eliminate comforting lies like that giving in just this once will make it easier to carry on working. Tell yourself that you never ‘deserve’ a long-term-destructive reward for doing what you must, that doing what you must is just business as usual. Realize that the part of your brain that wants you to fall to temptation can’t think long-term—so use the disciplined part of your brain to keep a temporal distance between yourself and short-term-gain-long-term-loss things. In other words, set stuff up so you’re not easy prey to hyperbolic discounting.
Learn not just to cope socially, but to be the life of the party. Maybe learn the PUA stuff.
That said, learn to not care what other people think when it’s not for your long-term benefit. Much of social interaction is mental masturbation, it feels nice and conforming so you do it. From HP and the MOR:
For now I’ll just note that it’s dangerous to worry about what other people think on instinct, because you actually care, not as a matter of cold-blooded calculation. Remember, I was beaten and bullied by older Slytherins for fifteen minutes, and afterward I stood up and graciously forgave them. Just like the good and virtuous Boy-Who-Lived ought to do. But my cold-blooded calculations, Draco, tell me that I have no use for the dumbest idiots in Slytherin, since I don’t own a pet snake. So I have no reason to care what they think about how I conduct my duel with Hermione Granger.
Learn to pick locks. If you want to seem awesome, bring padlocks with you and practise this in public
Learn how to walk without making a sound
Learn to control your voice. Learn to project like an actress. PUAs have also written on this.
Do you know what a wombat looks like, or where your pancreas is? Learn basic biology, chemistry, physics, programming, etc. There’s so much low-hanging fruit.
Learn to count cards, like for blackjack. Because what-would-James-Bond-do, that’s why! (Actually, in the books Bond is stupidly superstitious about, for example, roulette rolls.)
Learn to play lots of games (well?). There are lots of interesting things out there, including modern inventions like Y and Hive that you can play online.
Learn magic. There are lots of books about this.
Learn to write well, as someone else here said.
Get interesting quotes, pictures etc. and expose yourself to them with spaced repetition. After a while, will you start to see the patterns, to become more ‘used to reality’?
Learn to type faster. Try alternate keyboard layouts, like Dvorak.
Try to make your senses funky. Wear a blindfold for a week straight, or wear goggles that turn everything a shade of red or turn everything upside-down, or an eye patch that takes away your depth-sense. Do this for six months, or however long it takes to get used to them. Then, of course, take them off. Then, when you’re used to not having your goggles on, put them on again. You can also do this on a smaller scale, by flipping your screen orientation or putting your mouse on the other side or whatnot.
Become ambidextrous. Commit to tying your dominant hand to your back for a week.
Humans have magnetite deposits in the ethmoid bone of their noses. Other animals use this for sensing direction; can humans learn it?
Some blind people have learned to echolocate. [Seriously](http://en.wikipedia.org/wiki/Human_echolocation)
Learn how to tie various knots. This is useless but awesome.
Wear one of those belts that tells you which way north is. Keep it on until you are a homing pigeon.
Learn self-defence.
Learn wilderness survival. Plenty of books on the net about this.
Learn first aid. This is one of those things that’s best not self-taught from a textbook.
Learn more computer stuff. Learn to program, then learn more programming languages and how to use e.g. the Linux coreutils. Use dwm. Learn to hack. Learn some weird programming languages. If you’re actually using programming in your job, though, make sure you’re scarily awesome at at least one language.
Learn basic physical feats like handstands, somersaults, etc.
Use all the dead time you have lying around. Constantly do mental math in your head, or flex all your muscles all the time, or whatever.
All that limits you is your own weakness of will.
(Not sure who the author is; if anyone finds the original post, please link to it! I’ll try to find it when I get the time.)
For anyone interested in vipassana meditation, I would recommend checking out Shinzen Young. He takes a much more technical approach to the practice. This pdf by him is pretty good.
Oh my god, if we can get this working with org-mode and habitrpg, it will be the ultimate trifecta. And I’ve already got the first two (here).
Seriously, this could be amazing. Org-mode and habitrpg are great, but they don’t really solve the problem of what to do next. But with this, you get the data collection power of org-mode with the motivational power of habitrpg—then Familiar comes in, looks at your history (clock data, tags, agendas; all of the org-mode stuff will be a huge pool of information that it can interact with easily, because emacs) and does its thing.
It could tell habitrpg to give you more or less experience for things that are correlated with some emotion you’ve tagged an org-mode item with. Or for habits that are correlated with less clocked time on certain tasks. If you can tag it in org-mode, you can track it with Familiar, and Familiar will then control how habitrpg calculates your experience. Eventually you won’t have that nagging feeling in the back of your head that says “Wow, I’m really just defining my own rewards and difficulty levels, how is this going to actually help me if I can just cheat at any moment?”—maybe you can still cheat yourself, but Familiar will tell you exactly the extent of your bullshit. It basically solves the biggest problem of gamification! You’ll have to actually fight for your rewards, since Familiar won’t let you get away with getting tons of experience for tasks that are not correlated with anything useful. Sure, it won’t be perfectly automated, but it will be close enough.
It could sort your agenda by what you actually might get done vs shit that you keep there because you feel bad about not doing it—and org mode already has a priority system. It could tell you what habits (org-mode has these too) are useful and what you should get rid of.
It could work with magit to get detailed statistics about your commit history and programming patterns.
Or make it work with org-drill to analyze your spaced repetition activity! Imagine: you could have an org-drill file associated with a class you are taking and use it to compare test grades, homework scores, and the clocking data from homework tasks. Maybe there is a correlation between certain failing flashcards and your recent test score. Maybe you are spending too much time on SRS review when it’s not really helping. These are things that we usually suspect but won’t act on, and I think seeing some hard numbers, even if they aren’t completely right, will be incredibly liberating. You don’t have to waste cognitive resources worrying about your studying habits or wondering if you are actually stupid, because Familiar will tell you! Maybe it could even suggest flashcards at some point, based on commit history or wikipedia reading or google searches.
Maybe some of this is a little far-fetched, but god would it be fun to dig into.
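To make the data-collection side a bit more concrete, here’s a tiny, purely illustrative elisp sketch (the function name is hypothetical, and this is obviously not part of Familiar): it just sums the clocked minutes that org-clock records on CLOCK lines in a file, which is exactly the kind of raw material a tool like this would chew on.

```elisp
;; Illustrative only: total clocked minutes in one org file, summed from the
;; "=> H:MM" durations that org-clock writes on CLOCK lines.
(defun my/org-file-clocked-minutes (file)
  "Return the total number of clocked minutes recorded on CLOCK lines in FILE."
  (with-temp-buffer
    (insert-file-contents file)
    (let ((total 0))
      ;; Lines look like:
      ;; CLOCK: [2014-01-05 Sun 10:00]--[2014-01-05 Sun 11:30] =>  1:30
      (while (re-search-forward
              "^[ \t]*CLOCK:.*=>[ \t]*\\([0-9]+\\):\\([0-9]\\{2\\}\\)" nil t)
        (setq total (+ total
                       (* 60 (string-to-number (match-string 1)))
                       (string-to-number (match-string 2)))))
      total)))

;; e.g., total clocked minutes across all agenda files:
;; (apply #'+ (mapcar #'my/org-file-clocked-minutes (org-agenda-files)))
```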
I’ve been surprised by people’s ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
Why would a good AI policy be one which takes as a model a universe where world-destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don’t care about the far future will be motivated to prevent it.
This is assuming that people understand what makes an AI so dangerous—calling an AI a global catastrophic risk isn’t going to motivate anyone who thinks you can just unplug the thing (and it’s even worse if it does motivate them, since then you have someone running around thinking the AI problem is trivial).
The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanos, climate change tail risk, etc). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I’m blurring “near mode instrumental rationality” and “far mode instrumental rationality,” but I think there’s a fair amount of overlap between the two things. e.g. China is pushing hard on nuclear energy and on renewable energies, even though they won’t be needed for years.
I think you’re just blurring “rationality” here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don’t see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don’t know what to say), and especially not of the kind needed to properly handle AI—and claiming evidence for future good decisions related to AI risk because of domain expertise in entirely different fields is quite a stretch. Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don’t give two shits about AI risk—if they don’t think it worthy of attention, why would someone who has no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about? Obviously they aren’t thinking about it now—why are you confident this won’t be the case in the future? Thinking about AI requires a rather large conceptual leap—“rationality” is necessary but not sufficient, so even if all powerful people were “rational” it doesn’t follow that they can deal with these issues properly or even single them out as something to meditate on, unless we have a genius orator I’m not aware of. It’s hard enough explaining recursion to people who are actually interested in computers. And it’s not like we can drop a UFAI on a country to get people to pay attention.
Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it’s more salient, and in the future it will be still more salient.
In the Manhattan project, the “will bombs ignite the atmosphere?” question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
It seems like you are claiming that AI safety does not require a substantial shift in perspective (I’m taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event); rather, we can just keep chugging along because nice things can be “expected to increase over time”, and this somehow will result in the kind of society we need. These statements always confuse me; one usually expects to be in a better position to solve a problem 5 years down the road, but trying to describe that advantage in terms of out-of-thin-air claims about incremental changes in human behavior seems like a waste of space unless there is some substance behind it. They only seem useful when one has reached that 5-year checkpoint and can reflect on the current context in detail—for example, it’s not clear to me that the increasing availability of information is always a net positive for AI risk, since it could be the case that potential dangers become more salient as a result of unsafe AI research—the more dangers uncovered could even act as an incentive for more unsafe research, depending on the magnitude of positive results and the kind of press received. (But of course the researchers will make the right decision, since people are never overconfident...) So it comes off (to me) as a kind of sleight of hand where it feels like a point for optimism, a kind of “Yay Open Access Knowledge is Good!” applause light, but it could really go either way.
Also, I really don’t know where you got that last idea—I can’t imagine that most people would find AI safety more glamorous than, you know, actually building a robot. There’s a reason why it’s hard to get people to do unit tests, and why software projects get bloated and abandoned. Something like what Haskell is to software would be optimal. I don’t think it’s a great idea to rely on the conscientiousness of people in this case.
focus@will is pretty useful for me—I’ve never been into movie music, but the cinematic option was very inspiring for me. There is some science behind the project too.
For the GTD stuff, I use emacs + org-mode + a .emacs based on this configuration + MobileOrg.
Since I try to work exclusively in emacs, I can quickly capture notes and “things that need to get done” in their proper context, all of which is aggregated under an Agenda window. The Agenda window manages a collection of .org files, which store the specific details of everything. MobileOrg syncs all these .org files to my phone. Combined with the GTD philosophy of never having anything uncategorized bouncing around in my mind, this system works very well for me.
Example workflow (a better and more complete example is in the configuration I linked above):
At the end of class, the professor assigns a programming project due in a week. I pull out my phone and quickly capture a TODO item with a deadline in MobileOrg. MobileOrg syncs this to Google Calendar.
I get home and pull up the agenda in emacs. The item referencing the programming project shows up in my “Tasks to refile” category (equivalent to “Inbox” in GTD terms), along with any other TODOs I captured while I was at school.
I refile the project to an org file that contains all the information about my classes and define a NEXT item under it, which represents the next action I need to take on the project. When I start working on the project, I can attach any related files directly to the TODO item identifying the project.
The NEXT item shows up on a list of NEXT items on the agenda. I can filter these by project (defined in the GTD way) or by the tag system.
It all seems very complicated, but all of this is literally a couple of keystrokes. And this barely scratches the surface (take a look at the aforementioned configuration to see what I mean).
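For concreteness, here’s a minimal sketch of the sort of setup that workflow assumes. The paths and keybindings are made up for illustration, and the configuration linked above is far more elaborate:

```elisp
;; Minimal illustrative setup (hypothetical paths/keys, not the linked config):
;; one capture template that files quick TODOs into a "refile" inbox, plus the
;; TODO/NEXT keywords the agenda-driven workflow above relies on.
(require 'org)
(require 'org-capture)

(setq org-directory "~/org"
      org-default-notes-file "~/org/refile.org"   ; the "Tasks to refile" inbox
      org-agenda-files '("~/org"))                ; the agenda aggregates these files

;; C-c c t captures a TODO from anywhere in emacs, with a timestamp and a
;; link back to whatever you were looking at when you captured it.
(setq org-capture-templates
      '(("t" "todo" entry (file "~/org/refile.org")
         "* TODO %?\n  %U\n  %a")))

;; TODO -> NEXT -> DONE, so the agenda can list the NEXT action for each project.
(setq org-todo-keywords
      '((sequence "TODO(t)" "NEXT(n)" "|" "DONE(d)")))

(global-set-key (kbd "C-c c") #'org-capture)
(global-set-key (kbd "C-c a") #'org-agenda)
```

Refiling (C-c C-w), clocking, tagging, and the agenda views themselves are all stock org-mode on top of this.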
Pros:
Forces you to learn emacs.
Easily configurable and incredibly robust.
Optimized for functionality rather than prettiness (i.e if you end up liking it, you’ll know it wasn’t because of the nice UI, which is usually the main selling point for any computer based organizational system).
Cons:
Forces you to learn emacs.
Takes a huge amount of effort to set up. I would compare it to setting up an Arch Linux system.
Can get messy if you don’t know what you’re doing.
Getting the syncing functionality isn’t easy.
A spaced repetition package is also available for org-mode, which really ties the whole thing together for me.
EDIT: You can also overlay latex fragments directly in org-mode, which is really nice for notetaking. Whole .org files can be exported to latex as well.
Whether it is meant for entertainment or not, I think the usefulness of these hypothetical scenarios (in the context of a community blog) is directly proportional to the precision of their construction.
I understand, and I do think you gave good advice (I love pg’s writing).
On a related note, I just get a little worried when these threads come up. We like to hide behind computing jargon and Spock-like introspection; this does help with efficient communication, but it probably makes us look more resilient than we really are. These kinds of LW discussion posts are probably of very high social value to the OP, and the tone of the responses has more of an effect than we would like to admit.
So helping the OP to see hard truths is all well and good, but it seems to me that we could use a bit more finesse. It’s easier to understand the root of a problem when we have such precise words for everything, but it also means our pontifications must be just as precise or miss the mark completely—possibly hitting something we weren’t aiming for.
He is older than 23 per this comment. But reading his posts, either you have some extremely high standards for high school students or I am terrible at estimating someone’s level of education. (Unless you were measuring emotional maturity somehow).
In any case, I would find it pretty disheartening if someone asked me if I was in high school in a post about my own mental health. I’m sure you didn’t mean to be rude, but I find it hard to believe that this response would be anything but patronizing or insulting to anyone who isn’t a high school student.
Someone who doesn’t want to read science-y stuff because they have that kind of mindset is not going to suddenly become curious when someone tells them it’s based on science-y stuff from less than 30 years ago.
I like to think of it temporally: religion is much like rationalists facing the wrong direction. Both occasionally look over their shoulders to confirm their beliefs (although with theists it’s more like throwing a homunculus into the distant past and using that for eyes), while most of the time the things we really care about and find exciting are in front of us. Original vs. unoriginal with respect to modern thought is of no practical interest to someone with the “every innovation is heretical” mindset unless it is completely within their usual line of sight—heretical is code for “I don’t want to keep looking over my shoulder”, not “I hate the original on principle”. So unless you put that “original” encouragement thousands of years ago, where they can see it, where it’s a matter of one in front and one behind, the distinction between which is the greater turn-off is not going to matter, or bait anyone into turning around—there is nothing in their usually observed world to relate it to.
I think I just imagined HPMOR in the My Little Pony universe, which does not sound appealing at all (to me). This is much better.
With regard to the piracetam combo, yes, I still use that regularly. With modafinil, I wouldn’t say regularly, since it’s a little expensive to keep that up. But I didn’t actively stop using it. I pretty much use the same amount as I did when I was monophasic, i.e., when I have it, I take it on a semi-regular basis.
I’m still on the Everyman-3, and have been for about 7 months now.
The first couple times I tried it, I had the exact same experience, though it took me a little longer to give up. What really helped me finally adjust was using nootropics. I had a lot of success with piracetam + choline + l-theanine after each nap, sometimes adding coffee when I needed it. I also used modafinil every other day for the first two weeks (I wouldn’t recommend this though, since most people can’t sleep on it).
The coolest thing about the modafinil (and to a lesser extent piracetam, etc) use during this period was that I could really see the difference between my sleep deprived self and my normal self, since modafinil completely erases all of the effects of sleep deprivation. On my previous attempts I did feel very useless, but I didn’t realize the extent to which I just couldn’t do things until I took modafinil on a particularly difficult day—it felt like someone gave me an entirely new brain. So it’s really clear to me how much sleep dep actually impairs my ability to do things.
I wish I had that schedule calculator earlier—I must have spent a couple of hours googling (#1 failure of my rationality skills) for one because I was sure someone had to have made it, given that all these polyphasic sleepers have oodles of free time.
I re-read Atlas Shrugged once or twice a year. One of my first posts on LW was this (and you even commented on it!):
https://www.lesswrong.com/posts/7s5gYi7EagfkzvLp8/in-defense-of-ayn-rand
Not necessarily proud of it, but it’s interesting to re-read it after fully reconciling the book with my own internal principles. I can see how much I struggled with the fact that I really resonated with the idea of hero-worship while simultaneously feeling so fragile in my own judgments. It really is a wonderful book, and I no longer feel the need to defend anything about it—I just get a little sad when it gets brushed off (the Lord of the Rings comparison joke really gets me), as an honest reading will always reveal something fundamental, even in criticism.