I decided to start reading the Sequences today. I will read one post per day because I’m also learning math and programming at the same time. I also intend to use this link (all articles from Less Wrong, in chronological order) rather than this one.
Actually, I plan to alternate between the following activities in my spare time, either half an hour each per day or a certain amount (x videos, LW posts, etc.):
Mathematics: Khan Academy (~3 videos per day); probability.
Programming: Haskell.
One miscellaneous book: starting with Darwin’s Dangerous Idea by Daniel Dennett.
LessWrong Sequences: one post/article per day.
Thinking: taking some time to meditate on the big picture, what I want, what I should do, or some miscellaneous problem.
Sports: running and weightlifting, three times per week.
Relaxing: alternating between playing a video game and reading science fiction, 30 minutes each evening.
What do you think? Is it too much at the same time? Should I concentrate more on a certain activity, or maybe drop one altogether? Am I missing something important? My fear is that some of the activities might be more important than others for various reasons. For example, should I concentrate solely on math because I’ll be unable to learn much when I get older (I’m already 27)? Any suggestions are appreciated!
Yes, it is too much to start doing at the same time. It is a great-looking program to build up to, but building habits is really, really hard. A common rule of thumb is to limit yourself to one change per month. (More than that tends to require a significantly stressful external stimulus.)
No (particular) need to rush the math. You might be getting too old to make grand new exciting mathematical discoveries, but for learning the basics there is no dire limit, particularly if you execute the exercise habit consistently and keep your mind active learning other things (which is obviously the plan!). Heck, you’re learning Haskell. You are not going to lose basic-math-learning potential while studying Haskell.
Find something to do with the skills in the intermediate term. Make sure they are giving you real-world positive feedback. They can’t just be things you think you ‘should’ do to meet some vague abstract goal that isn’t immediately salient.
It’s not that I want to do those things because I think I ‘should’ do them; I’d love to do them. If I were living in Banks’s Culture or under Yudkowsky’s FAI oversight, I would want to do exactly those things. The reason I ask about them at all is that I fear I should be doing other things. If I accepted the line of reasoning that there is nothing more important than donating to the SIAI, then I would choose to become a road builder and work long hours. You can earn good money doing that here in Germany without needing any particular skills. I don’t expect the time I would need to acquire advanced knowledge to pay off, either by enabling me to get a better job or by letting me work towards FAI directly.
I suppose all my questions and submissions here tend towards the goal of allowing me to conclude that I can ignore all this. My nightmare is that reality is fucked up enough that trying to do the ‘right’ thing makes you eventually end up seriously considering three spins of a roulette wheel or tossing a quantum coin to gamble at a 10,000:1 ratio.
Doesn’t my desire to learn eventually result in knowing what is right and wrong? Yes, but making decisions under uncertainty would currently force me to take the risk of not pursuing any terminal goals directly, and instead to try to mitigate risks from AI. That sucks, and I try to ignore it, but I haven’t been able to yet.
I’m just looking for some justification to do what I want and ignore what I don’t want.
“Before enlightenment, mountains are mountains and rivers are rivers. At the moment one is enlightened, mountains are no longer mountains and rivers are no longer rivers. After enlightenment, mountains are again mountains and rivers are again rivers.”
When you learn about relativity, do you throw away your watch because time is relative anyway? No, because relativity must give the same results as Newtonian physics in the conditions where (and to the precision that) Newtonian mechanics has already been verified. If you have a theory that says strange things about what happens in strange conditions, it may be true. If you have a theory that says strange things about what happens in normal conditions, you know the theory must be false, however elegant the mathematics.
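The watch point can be made quantitative. Here is a back-of-the-envelope sketch (the highway speed is my illustrative assumption, not from the original comment) of how the relativistic correction vanishes at everyday speeds:

```latex
% Time dilation for a moving clock, expanded for v << c:
\[
\Delta t' \;=\; \frac{\Delta t}{\sqrt{1 - v^2/c^2}}
\;\approx\; \Delta t \left( 1 + \frac{v^2}{2c^2} \right).
\]
% At highway speed, v ~ 30 m/s, the fractional correction is
\[
\frac{v^2}{2c^2} \;=\; \frac{(30\,\mathrm{m/s})^2}{2\,(3\times 10^8\,\mathrm{m/s})^2}
\;\approx\; 5 \times 10^{-15}.
\]
```

That is well under a nanosecond of drift per day, far below anything a wristwatch can display, which is exactly the sense in which the new theory has to reduce to the old one in already-verified conditions.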
The take-home message is that it all has to reduce to normality. No, quantum mechanics really doesn’t have implications for everyday life. No, thought experiments involving copying of people really aren’t useful guides for what to expect in our world where we can’t copy people. No, AI really isn’t going to conquer the world. No, donating your income to charity really isn’t the best way to make a positive difference. No, dust specks really aren’t worse than torture, thought experiments about group sizes of 3^^^3 notwithstanding. No, treading on ants really isn’t a sin.
None of this is to say that you shouldn’t learn about intellectual topics. By all means do, if that’s what you want! But you seem to be suggesting you feel obliged to deal with some of these topics even if it’s not necessarily what you want. And I’m saying you don’t need to feel that way. You don’t actually need to learn tensor calculus to know that relativity doesn’t keep your watch from working normally. If what you want to do for a career is design GPS satellites, or if you just plain think tensor calculus is interesting, by all means go ahead and learn it. But if you want to ignore it and go do something else instead, then do that.
To rephrase my suggestion for clarity: make sure your instinctive reward system is getting positive feedback to associate with your learning habits, in a form that it can understand, such as a social component, feedback from peers or instructors, short-term goals, experiences of ‘flow’, etc.
Note that I am already systematically blogging through every single Eliezer post. Here is the latest.
What do you think about cross-posting them here on the main page?
I don’t think that is wanted...
And for anyone starting from the beginning, here is the index. I notice it’s fallen behind—the posts are up to #19 while the index has links up to 11.
Following links from Luke’s posts is probably a better way to read Eliezer’s posts than the Sequences page on the LW wiki, because of the summaries, and because many of the best articles are one-offs that didn’t fit into the “sequences” classification. (Though it does have the disadvantage of not including other authors, some of whom do feature on the Sequences page. Luke, perhaps when Reading Yudkowsky starts nearing the end, you’ll continue with Reading Yvain Et Al?)
Yeah, I only update the index every now and then.
The series won’t include the other posters. (I know this because I’ve actually finished writing the sequence, 62 posts total if I recall, but I haven’t published them all yet.)
You wrote them that far in advance of your publishing schedule? Whatever anti-akrasia drug you’re on, I’d like some...
Those all sound like valuable activities. The programming would probably be better to do in a smaller number of longer sessions—I find that programming gets more informative and productive the more consecutive hours I spend on it, because of all the context I need to remind myself of.
I’ve been watching some Khan Academy videos myself, and one thing I’ve found useful is downloading them and watching them with VLC player, which can adjust the speed to my preferred pacing (I speed it up by 1.5-2x, and use the time saved to watch the tricky parts a second time if necessary). Also, having a folder full of video files that I move into the ‘finished’ folder is somehow more motivating than having a page full of links that I turn purple.