Is it a crux for you? It feels to me like it shouldn’t be a crux for you.
You mean the AC thing? If I’m wrong, it wouldn’t be enough bits to flip all the relevant parts of my alignment views, but it would be enough bits that I’d be a lot less certain and invest more in finding other ways to gain bits.
(Though obviously that depends somewhat on how I turn out to be wrong.)
Yeah, sounds about right.
I feel like I’d think less of John if it weren’t a crux for him? Like, one of the troubles with worldviews like this is they lean both on your theories and on your evidence, and so you really need to grab at the examples that do shine through of “oh, my worldview found this belief of mine very confirming, but people disagree with it; we should figure out whether or not I’m right.”
I think it makes sense to have a loose probabilistic relationship. I do not think it makes sense for it to be a crux, in the sense of a thing which, if false, would make John abandon his view. There are just too many weak steps. The AI industry is not the AC industry. I happen to agree with John’s views about AC, but it’s not obvious to me that those views imply this particular test turning out as he’s predicting. (Is he averaging over the wrong points?) It’s more probable than not, but my point here is that the whole thing is made of fairly weak inferences.
To be clear, I am pro what John is doing and how he is engaging; it’s more John’s commenters who felt confusing to me.