This is one of the less-edited transcripts. I often try to break the long sentences, which are appropriate for talks, into many smaller sentences and paragraphs, which read better online, and to delete false starts and so on. I've been busier and putting less time into the editing, so some of the quality decrease from previous summaries is on me.
I’m also becoming less confident that his ‘reminders at the beginning of the next lecture’ are the right summaries to use; they’re much more “ok, here’s where we were, now let’s keep going” instead of “here’s the main change from the last lecture, now let’s look at the next topic in order.”
[There’s also a big inferential distance problem here, where he’s built up some jargon and summarizes his points in that jargon, which (of course) does not make the points any easier to transfer. Like, this really isn’t a substitute for the lectures yet!]
Yeah, I wanted to comment on that second paragraph being way overly complex, but didn't have much to say apart from that. Your description seems apt. I hope he at least knows what he's talking about with all these words, but in terms of communicating these ideas, it does not do the job. (And my memory is that I felt pretty much the same while watching the full lecture, even though I really like his idea of relevance realization.)
I cannot distinguish this from GPT-3. Is it just me?
If I edited it, I’m not sure there would be anything left. :)