There are definitely answers that your model wants rationalists to give but that I think are incompatible with LW-style rationalism. For instance:
“People’s anecdotes about seeing ghosts aren’t real evidence for ghosts” (your model wants “agree strongly”): of course people’s anecdotes about seeing ghosts are evidence for ghosts; they are more probable if ghosts are real than if they aren’t. They’re just really weak evidence for ghosts (see the quick numerical sketch after this list), and there are plenty of other reasons to think there aren’t ghosts.
“We need more evidence that we would benefit before we charge ahead with futuristic technology that might irreversibly backfire” (your model wants “disagree” or “disagree strongly”): there’s this thing called the AI alignment problem that a few rationalists are slightly concerned about, you might have heard of it.
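To make the “weak evidence” point concrete, here is a minimal Bayes-factor sketch; all the numbers are made up purely for illustration, not taken from the survey or from anyone’s actual credences:

\[
\frac{P(\text{ghosts}\mid\text{anecdote})}{P(\text{no ghosts}\mid\text{anecdote})}
= \underbrace{\frac{P(\text{anecdote}\mid\text{ghosts})}{P(\text{anecdote}\mid\text{no ghosts})}}_{\text{likelihood ratio, say }2}
\times
\underbrace{\frac{P(\text{ghosts})}{P(\text{no ghosts})}}_{\text{prior odds, say }10^{-6}}
\approx 2\times 10^{-6}.
\]

The likelihood ratio is above 1, so the anecdote genuinely is evidence for ghosts; it just barely moves a posterior that starts from prior odds that low.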
And several others where I wouldn’t go so far as to say “incompatible” but where I confidently expect most LWers’ positions not to match your model’s predictions. For instance:
“It is morally important to avoid making people suffer emotionally”: your model wants not-agreement, but I think most LWers would agree with this.
“Workplaces should be dull to reflect the oppressiveness of work”: your model wants not-disagreement, but I think most LWers would disagree (though probably most would think “hmm, interesting idea” first).
“Religious people are very stupid”: your model wants agreement, but I think most LWers are aware that there are plenty of not-very-stupid religious people (indeed, plenty of very-not-stupid religious people) and I suspect “disagree strongly” might be the most common response from LWers.
I don’t claim that the above lists are complete. I got 11/24, and I am pretty sure I am nearer the median rationalist than that might suggest.
I agree with these points, but as I mentioned in the test:
Warning: this is not necessarily an accurate or useful test; it’s a test that arose through irresponsible statistics rather than careful thought.
The reason I made this survey is to get more direct data on how well the model extrapolates (and maybe also to improve the model so it extrapolates better).