How many people understand your argument?
I think with the distilled version in this post people get the gist of what I’m hypothesising — that there is a reasonable optimistic AI alignment scenario under the conditions I describe.
Is that what you mean?
(I should have just said this; I didn’t mean to be leading, sorry.)
I’m going for: people who understand it as well as you do, or whom you’re confident could give a summary you’d be okay with others reading.
You said above that you’ve heard no strong counterarguments; it might be good to put that in proportion to the number of people who you’re confident have a good grasp of your idea.
Obviously it has to start at 0, but if I were keeping track of feedback on my idea, I’d be keenly interested in this number.
Oh I see — if I were to estimate, I’d say around 10-15 people, counting both people I’ve had 1hr+ conversations with about this and people who have provided feedback or questions tapping into the essence of the argument.