Well, I think the similarity to real-life course evaluations is probably intentional; they likely modeled the questions on either a particular course evaluation questionnaire or a mixture of many. And this shows that course evaluations are pretty bad at picking out professors who cannot explain to people what they are talking about. Given how useful a little impenetrability can be in many fields of research, one wonders how intentional this might be...