What do you think? Is it too much at the same time?
Yes, it is too much to start all at the same time. It is a great-looking program to build up to, but building habits is really, really hard. A common rule of thumb is to limit yourself to one change per month. (More than that tends to require a significantly stressful external stimulus.)
For example, should I solely concentrate on math because I’ll be unable to learn much when I get older (I’m already 27)?
No (particular) need to rush the math. You might be getting too old to make grand new exciting mathematical discoveries, but for learning the basics there is no dire limit, particularly if you execute the exercise habit consistently and keep your mind active learning other things (which is obviously the plan!). Heck, you’re learning Haskell. You are not going to lose basic-math-learning potential while studying Haskell.
Any suggestions are appreciated!
Find something to do with the skills in the intermediate term. Make sure they are giving you real-world positive feedback. They can’t just be things you think you ‘should’ do to meet some vague abstract goal that isn’t immediately salient.
“They can’t just be things you think you ‘should’ do to meet some vague abstract goal that isn’t immediately salient.”
I don’t want to do those things because I think I ‘should’ do them; I want to do them because I’d love to do them. If I were living in Banks’s Culture or under Yudkowsky’s FAI oversight, I would want to do exactly those things. The reason I ask about them at all is that I fear I should be doing other things. If I just accepted the line of reasoning that there is nothing more important than donating to the SIAI, then I would choose to become a street builder and work long hours; you can earn good money doing that here in Germany, and you don’t need particular skills. I don’t expect the time I would need to acquire advanced knowledge to pay off, either by enabling me to get a better job or by letting me work towards FAI directly.
I suppose all my questions and submissions here tend towards the goal of allowing me to conclude that I can ignore all this. My nightmare is that reality is fucked up enough that trying to do the ‘right’ thing eventually leaves you seriously considering three spins of a roulette wheel, or tossing a quantum coin to gamble at a 10,000:1 ratio.
Doesn’t my desire to learn eventually result in knowing what is right and wrong? Yes, but making decisions under uncertainty would currently force me to take the risk of not pursuing any terminal goals directly and instead trying to mitigate risks from AI. That sucks, and I try to ignore it, but I haven’t been able to do so yet.
I’m just looking for some justification to do what I want and ignore what I don’t want.
“Before enlightenment, mountains are mountains and rivers are rivers. At the moment one is enlightened, mountains are no longer mountains and rivers are no longer rivers. After enlightenment, mountains are again mountains and rivers are again rivers.”
When you learn about relativity, do you throw away your watch because time is relative anyway? No, because relativity must give the same results as Newtonian physics in the conditions where (and to the precision that) Newtonian mechanics has already been verified. If you have a theory that says strange things about what happens in strange conditions, it may be true. If you have a theory that says strange things about what happens in normal conditions, you know the theory must be false, however elegant the mathematics.
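To put a number on “reduces to normality”, here is a minimal Haskell sketch (the speeds and the names `gamma` and `driftPerDay` are my own illustrative choices) that computes how far a moving watch drifts per day using the Lorentz factor γ = 1/√(1 − v²/c²). At everyday speeds the drift is a fraction of a nanosecond, far below anything a wristwatch can resolve.

```haskell
-- Back-of-the-envelope check that relativity reduces to normality
-- at everyday speeds.

-- Speed of light in metres per second.
c :: Double
c = 299792458

-- Lorentz factor: 1 / sqrt (1 - v^2 / c^2).
gamma :: Double -> Double
gamma v = 1 / sqrt (1 - (v / c) ^ 2)

-- Seconds of drift a watch moving at speed v (in m/s) accumulates
-- per day, relative to one at rest.
driftPerDay :: Double -> Double
driftPerDay v = 86400 * (gamma v - 1)

main :: IO ()
main = do
  print (driftPerDay 30)   -- highway speed: roughly 4e-10 seconds/day
  print (driftPerDay 250)  -- airliner:      roughly 3e-8  seconds/day
```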
The take-home message is that it all has to reduce to normality. No, quantum mechanics really doesn’t have implications for everyday life. No, thought experiments involving copying of people really aren’t useful guides for what to expect in our world where we can’t copy people. No, AI really isn’t going to conquer the world. No, donating your income to charity really isn’t the best way to make a positive difference. No, dust specks really aren’t worse than torture, thought experiments about group sizes of 3^^^3 notwithstanding. No, treading on ants really isn’t a sin.
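For concreteness, 3^^^3 is Knuth’s up-arrow notation. A minimal Haskell sketch of the definition (the function name `knuth` is my own) shows why nobody can even write the number down:

```haskell
-- knuth a n b evaluates "a, followed by n up-arrows, followed by b":
-- one arrow is ordinary exponentiation, and each extra arrow iterates
-- the operation below it.
knuth :: Integer -> Int -> Integer -> Integer
knuth a 1 b = a ^ b
knuth a _ 1 = a
knuth a n b = knuth a (n - 1) (knuth a n (b - 1))

main :: IO ()
main = do
  print (knuth 3 1 3)  -- 3^3         = 27
  print (knuth 3 2 3)  -- 3^^3 = 3^27 = 7625597484987
  -- 3^^^3 = knuth 3 3 3 is a tower of 3s of height 7625597484987.
  -- Don't try to evaluate it; its decimal expansion wouldn't fit
  -- in the observable universe.
```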
None of this is to say that you shouldn’t learn about intellectual topics. By all means do, if that’s what you want! But you seem to be suggesting you feel obliged to deal with some of these topics even if it’s not necessarily what you want. And I’m saying you don’t need to feel that way. You don’t actually need to learn tensor calculus to know that relativity doesn’t keep your watch from working normally. If what you want to do for a career is design GPS satellites, or if you just plain think tensor calculus is interesting, by all means go ahead and learn it. But if you want to ignore it and go do something else instead, then do that.
To rephrase my suggestion for clarity: make sure your instinctive reward system is getting positive feedback to associate with your learning habits, in a form it can understand, such as a social component, feedback from peers or instructors, short-term goals, or experiences of ‘flow’.