Looking over the comments, some of the most upvoted express the sentiment that Yudkowsky is not the best communicator. Here is what people are saying.
I’m afraid the evolution analogy isn’t as convincing an argument for everyone as Eliezer seems to think. For me, for instance, it’s quite persuasive because evolution has long been a central part of my world model. However, I’m aware that for most “normal people”, this isn’t the case; evolution is a kind of dormant knowledge, not a part of the lens they see the world with. I think this is why they can’t intuitively grasp, like most rat and rat-adjacent people do, how powerful optimization processes (like gradient descent or evolution) can lead to mesa-optimization, and what the consequences of that might be: the inferential distance is simply too large.
I think Eliezer has made great strides recently in appealing to a broader audience. But if we want to convince more people, we need to find rhetorical tools other than the evolution analogy and assume less scientific intuition.
That’s a bummer. I’ve only listened partway, but so far I was actually impressed with how Eliezer presented things, and felt like whatever media prep has been done has been quite helpful.
Certainly he did a better job than he has in previous similar appearances. Things get pretty bad about halfway through, though: Ezra presents essentially an alignment-by-default case, and Eliezer seems to have so much disdain for that idea that he’s not willing to engage with it at all. (I of course don’t know what’s in his brain; this is how it reads to me, and I suspect how it reads to normies.)
Ah dang, yeah I haven’t gotten there yet, will keep an ear out
I am a fan of Yudkowsky and it was nice hearing him on Ezra Klein, but I would have to say that, for my part, the arguments didn’t feel very tight in this one. Less so than in IABED (which I thought was good, not great).
Ezra seems to contend that surely we have evidence that we can at least kind of align current systems to basically what we usually want, most of the time. I think this is reasonable. He contends that this level of “mostly works”, together with the opportunity to gradually give feedback and incrementally improve current systems, seems like it’ll get us pretty far. That also seems reasonable to me.
As I understand it, Yudkowsky probably sees LLMs as vaguely anthropomorphic at best, but not meaningfully aligned in a way that would be safe/okay if current systems were more “coherent” and powerful. Not even close. I think he contended that if you just gave loads of power to ~current LLMs, they would optimize for something considerably different from the “true moral law”. Because of the “fragility of value”, he also believes it is likely that most kinds of pseudo-alignment are not worthwhile. Honestly, that part felt undersubstantiated in a “why should I trust that this guy knows the personality of GPT-9” sort of way; I mean, Claude seems reasonably nice, right? And also, of course, there’s the “you can’t retrain a powerful superintelligence” problem / the stop-button problem / the anti-natural problems of corrigible agency, which undercut a lot of Ezra’s pitch but which they didn’t really get into.
So yeah, I gotta say, it was hardly a slam-dunk case / discussion for high p(doom | superintelligence).