I’m interested in doing in-depth dialogues to find cruxes. Message me if you’d like to try this.
I do alignment research, mostly in the vicinity of agent foundations. Currently doing independent research on ontology identification. Formerly on Vivek’s team at MIRI.
My argument was that there are several “risk factors” that stack. I agree that each one on its own isn’t overwhelmingly strong.
I prefer not to be rude. Are you sure it’s not just that I’m confidently wrong? If I were disagreeing in the same tone with, e.g., Yampolskiy’s argument for high-confidence AI doom, would that still come across as rude to you?