Did you have some specific cases in mind when writing this? For example, HCH is interesting and not obviously going to fail in the ways that some other proposals I’ve seen would, and that proposal seems to have improved as more details have been fleshed out, even though some disagreements remain about things that can eventually be tested but haven’t been yet. Against this we’ve seen lots of things, like various oracle AI proposals, that to my mind have fatal flaws right from the start: they rest on a misunderstanding, so they can’t easily be salvaged.
I don’t want to disincentivize thinking about solving AI alignment directly when I criticize something, but I also don’t want to let pass things that have, to me, obvious problems the authors probably didn’t think about, or thought about under different assumptions that may be wrong (or maybe I’ll converse with them and learn that I was wrong!). It seems like an important part of learning in this space is proposing things and seeing why they don’t work, so you can better understand the constraints of the problem space and work within them to find solutions.