I may have experienced this. I was reading a recent discussion about AGI doom, where Eliezer Yudkowsky and others were debating whether one could use aligned human-level AGI to solve alignment before strong ASI is developed.
After reading this thread, I went for a walk and thought about it.
The "no" arguments seemed straightforward and elegant compared to the "yes" arguments, which seemed contingent on lots of little details.
Straightforward and elegant ideas often represent reality better, in my experience. Is that why "no" seems more convincing? Or perhaps it's just that the "no" arguments fit in my head better.
But didn’t I engage with the arguments? I read them, tried to understand, and remained unconvinced.
I still haven’t resolved this. Did I do the dumb thing?
Separately: does anyone recall a discussion from the last three months that leaned heavily on a metaphor about dragons? I looked briefly but couldn't find it.