One of my opinions on this stuff is that Yudkowsky does not understand politics at a very deep level, and Yudkowsky’s writings are one of the main attractors in this space, so LessWrong systematically attracts people who are bad at understanding politics (but may be good at some STEM subject).
[Edit: I wrote my whole reply thinking that you were talking about “organizational politics.” Skimming the OP again, I realize you probably meant politics politics. :) Anyway, I guess I’m leaving this up because it also touches on the track record question.]
I thought Eliezer was quite prescient on some of this stuff. For instance, I remember this 2017 dialogue (so less than 2y after OpenAI was founded), which on the surface talks about drones, but if you read the whole post, it’s clear that it’s meant as an analogy to building AGI:
AMBER: The thing is, I am a little worried that the head of the project, Mr. Topaz, isn’t concerned enough about the possibility of somebody fooling the drones into giving out money when they shouldn’t. I mean, I’ve tried to raise that concern, but he says that of course we’re not going to program the drones to give out money to just anyone. Can you maybe give him a few tips? For when it comes time to start thinking about security, I mean.
CORAL: Oh. Oh, my dear, sweet summer child, I’m sorry. There’s nothing I can do for you.
AMBER: Huh? But you haven’t even looked at our beautiful business model!
CORAL: I thought maybe your company merely had a hopeless case of underestimated difficulties and misplaced priorities. But now it sounds like your leader is not even using ordinary paranoia, and reacts with skepticism to it. Calling a case like that “hopeless” would be an understatement.
[...]
CORAL: I suppose you could modify your message into something Mr. Topaz doesn’t find so unpleasant to hear. Something that sounds related to the topic of drone security, but which doesn’t cost him much, and of course does not actually cause his drones to end up secure because that would be all unpleasant and expensive. You could slip a little sideways in reality, and convince yourself that you’ve gotten Mr. Topaz to ally with you, because he sounds agreeable now. Your instinctive desire for the high-status monkey to be on your political side will feel like its problem has been solved. You can substitute the feeling of having solved that problem for the unpleasant sense of not having secured the actual drones; you can tell yourself that the bigger monkey will take care of everything now that he seems to be on your pleasantly-modified political side. And so you will be happy. Until the merchant drones hit the market, of course, but that unpleasant experience should be brief.
These passages read to me a bit as though Eliezer called it in 2017 that EAs working at OpenAI as their ultimate path to impact (as opposed to for skill building or know-how acquisition) were wasting their time.
Maybe a critic would argue that this sequence of posts was more about Eliezer’s views on alignment difficulty than on organizational politics. True, but it still reads as prescient and contains thoughts on org dynamics that apply even if alignment is just hard rather than super duper hard.
I agree Yudkowsky is not incompetent at understanding politics. I’m saying he’s not exceptionally good at it. Basically, he’s average. Just like you and me (until proven otherwise).
I didn’t read the entire post, only skimmed it, but my understanding is that this post is Yudkowsky yet again claiming that alignment is difficult and that there are some secret insights inside Yudkowsky’s head as to why alignment is hard that can’t be shared in public.
I remember reading the Yudkowsky-versus-Christiano debates some years back, and they had this same theme of inexplicable insights inside Yudkowsky’s head. The reasoning about politics in the post you just linked mostly assumes there exist some inexplicable but true insights about alignment difficulty inside Yudkowsky’s head.
Can I double-click on what “does not understand politics at [a] very deep level” means? Can someone explain what they have in mind? I think Eliezer probably has better models than most people of what our political institutions are capable of, and probably isn’t very skilled at personally politicking. I’m not sure what other people have in mind.
Sorry for the delay in replying. I’m not sure if the two are separable. Let’s say you believe in the “great man” theory of history (i.e., that a few people disproportionately shape history, rather than institutions, market forces, etc.). Then your ability to predict what other great men could do automatically means you may have some of the powers of a great man yourself.
Also, yes, I mean he isn’t exceptionally skilled at either of the two. My bet is that there are people who could make significantly better predictions than him, if only they also understood the technical details of AI.
I really liked your quote and remarks. So much so that I made an edited version of them as a new post here: http://mflb.com/ai_alignment_1/d_250207_insufficient_paranoia_gld.html