Thanks for clarifying! I was pretty curious where you were coming from.
> not nearly as much money as you could have made if you had instead invested in or participated more directly in DL scaling (even excluding the Anthropic opportunity)
Seems like these would all have ethical issues similar to investing in Anthropic, given that I’m pessimistic about AI safety and want to see an AI pause/stop.
> when you didn’t particularly need any money and you don’t mention any major life improvements from it beyond the nebulous
To be a bit more concrete: the additional wealth allowed us to escape the political dysfunction of our previous locality and move halfway across the country (to a nicer house/location/school) with almost no stress. It also means we don’t have to worry much about e.g. Trump craziness affecting us personally, since we can similarly buy our way out of most kinds of trouble (given some amount of warning).
> (and often purely positional/zero-sum)
These are part of my moral parliament or provisional values. Do you think they shouldn’t be? (Or what is the relevance of pointing this out?)
> you made little progress on past issues of importance to you like decision theory
By 2020 I had already moved away from decision theory, and my new area of interest (metaphilosophy) doesn’t have an apparent line of attack, so I mostly just kept it in the back of my mind while I did other things and waited for new insights to pop up. I don’t remember how I was spending my time before 2020, but judging from my LW post history, it was mostly worrying about wokeness, trying to find holes in Paul Christiano’s IDA, and engaging with AI safety research in general, none of which looks super high value in retrospect.

More generally, I often give up on or move away from previous interests (crypto and programming being other examples), and this seems to work for me.
> eg. instill particular decision theories into LLMs by writing online during their most malleable years
I would not endorse doing this.