Ex post or ex ante, do you feel like this was ultimately a good use of your time starting in mid-2020? (I might have asked you this already.)
I think yes, given the following benefits, with the main costs being the opportunity cost and the risk of losing a bunch of money in an irrational way (e.g., being unable to quit if I turned out to be a bad trader). Am I missing anything, or did you have something in mind when asking this?
physical and psychic benefits of having greater wealth/security
social benefits (within my immediate family who know about it, and now among LW)
calibration about how much to trust my own judgment on various things
it’s a relatively enjoyable activity (comparable to playing computer games, which ironically I can’t seem to find the motivation to play anymore)
some small chance of eventually turning the money into a fraction of the lightcone
evidence about whether I’m in a simulation
some marginal increase in credibility for my ideas
I was thinking mostly along the lines of: it sounds like you made money, but not nearly as much money as you could have made if you had instead invested in or participated more directly in DL scaling (even excluding the Anthropic opportunity), when you didn’t particularly need any money and you don’t mention any major life improvements from it beyond the nebulous (and often purely positional/zero-sum); and in the meantime, you made little progress on past issues of importance to you like decision theory while not contributing to DL discourse or to more exotic opportunities that were available 2020–2025 (e.g. instilling particular decision theories into LLMs by writing online during their most malleable years).
Thanks for clarifying! I was pretty curious where you were coming from.
not nearly as much money as you could have made if you had instead invested in or participated more directly in DL scaling (even excluding the Anthropic opportunity)
Seems like these would all have similar ethical issues to investing in Anthropic, given that I’m pessimistic about AI safety and want to see an AI pause/stop.
when you didn’t particularly need any money and you don’t mention any major life improvements from it beyond the nebulous
To be a bit more concrete: the additional wealth allowed us to escape the political dysfunction of our previous locality and move halfway across the country (to a nicer house/location/school) with almost no stress, and it allows us not to worry much about, e.g., Trump craziness affecting us personally, since we can similarly buy our way out of most kinds of trouble (given some amount of warning).
(and often purely positional/zero-sum)
These are part of my moral parliament or provisional values. Do you think they shouldn’t be? (Or what is the relevance of pointing this out?)
you made little progress on past issues of importance to you like decision theory
By 2020 I had already moved away from decision theory, and my new area of interest (metaphilosophy) doesn’t have an apparent line of attack, so I mostly just kept it in the back of my mind as I did other things and waited for new insights to pop up. I don’t remember how I was spending my time before 2020, but looking at my LW post history, it was mostly worrying about wokeness, trying to find holes in Paul Christiano’s IDA, and engaging with AI safety research in general, none of which looks super high value in retrospect.
More generally, I often give up or move away from previous interests (crypto and programming being other examples), and this seems to work for me.
e.g. instilling particular decision theories into LLMs by writing online during their most malleable years
I would not endorse doing this.