i think it’s quite valuable to go through your key beliefs and work out what the implications would be if each of them were false. this has several benefits:
picturing a possible world where your key belief is wrong makes that possibility feel more tangible, so you become more emotionally prepared to accept it.
if you ever do find out that the belief is wrong, you don’t flinch away as strongly, because it doesn’t feel like you will be completely epistemically lost the moment you remove the Key Belief.
you will have more productive conversations with people who disagree with you on the Key Belief.
you might discover strategies that are robustly good whether or not the Key Belief is true.
you will become better at designing experiments to test whether the Key Belief is true.
what are some of your key beliefs, and what would the implications be if they were false?
some concrete examples:
“agi almost certainly happens within the next few decades” → maybe ai progress just kind of plateaus for a few decades. maybe it turns out that gpqa/codeforces etc. are like chess: we only think they’re hard because the humans who can do them are smart, but they aren’t agi-complete. ai gets used in a bunch of places in the economy, but it’s more like smartphones or something. in this world i should be taking normie life advice a lot more seriously.
“agi doesn’t happen in the next 2 years” → maybe scaling current techniques actually is all you need, and gpqa/codeforces really do just measure intelligence. within something like half a year, ML researchers start being way more productive because lots of their job is automated. if i use current/near-future ai agents for my research, i will actually just be more productive.
“alignment is hard” → maybe basic techniques are all you need, because the natural abstractions hypothesis is true; or maybe the red car / blue car argument for why useful models are also competent at bad things is just wrong, because generalization can be made to suck. maybe all the capabilities people are just right, and it’s not reckless to be building agi so fast.
Making a list of your beliefs can be complicated. Recognizing a belief as a “belief” is the necessary first step, but the strongest beliefs (the ones it would be most useful to examine?) are probably transparent: they feel like “just how the world is”.
Then again, maybe listing all of your strong beliefs would actually be useless, because the list would contain tons of things like “I believe that 2+2=4”, and examining those would mostly be a waste of time. We want the beliefs that are strong but possibly wrong. But once you notice that a belief is “possibly wrong”, you have already taken the most difficult step; the question is how to get there.