Note that I played a part in convincing MIRI to create IAF, and wrote the only comment on the IAF post you linked, so rest assured that I’m watching you folks :-) My thinking has changed over time though, and probably diverged from yours. I’ll lay it out here, hopefully it won’t sound too harsh.
First of all, if your goal is explaining math using simpler math, I think there’s a better way to do it. In a good math explanation, you formulate an interesting problem at level n whose solution requires level n+1. (Ideally n should be as low as possible.) In a bad math explanation, you assume the reader understands level n, then write out the basic definitions of level n+1 and formulate a problem using those. That loses the reader, unless they are already interested in level n+1.
But that’s still underestimating the problem by a couple orders of magnitude. To jumpstart engagement, you need something as powerful as this old post by Eliezer. That’s a much more complicated beast. The technical content is pretty much readable to schoolchildren, yet somehow readers are convinced that something magical is going on and they can contribute, not just read and learn. Coming back to that post now, I’m still in awe of how the little gears work, from the opening sentence to the “win” mantra to the hint that he knows the solution but ain’t telling. It hits a tiny target in manipulation-space that people don’t see clearly even now, after living for a decade inside the research program that it created.
Apart from finding the right problem and distilling it in the right manner, I think the next hardest part is plain old writing style. For example, Eliezer uses lots of poetic language and sounds slightly overconfident, staying mostly in control but leaving dozens of openings for readers to react. But you can't reuse his style today; the audience has changed and you'd sound phony. You need to be in tune with readers in your own way. If I knew how to do it, I'd be doing it already. These comments of mine are more like meta-manipulation aimed at people like you, so I can avoid learning to write :-)
Note that I … wrote the only comment on the IAF post you linked
Yes, I replied to it :)
Unfortunately, I don’t expect to have more Eliezer-level explanations of these specific lines of work any time soon. Eliezer has a fairly large amount of content on Arbital that hasn’t seen LW levels of engagement either, though I know some people who are reading it and benefiting from it. I’m not sure how LW 2.0 is coming along, but it might be good to have a subreddit for content similar to your recent post on betting. There is an audience for it, as that post demonstrated.
I think Eliezer’s Arbital stuff would’ve been popular in blog form. (Converting it to a blog now won’t work; the intrigue is gone.) The Sequences had lots of similar quality material, like “Created already in motion”. I don’t like it much because it’s so far out, but it gets readers.
The technical content is pretty much readable to schoolchildren, yet somehow readers are convinced that something magical is going on and they can contribute, not just read and learn.
I don’t think that’s a matter of writing style. It’s a matter of whether the prospective “research area” is simple enough that all of its general prerequisites can be stated in a popular blogpost, and otherwise be assumed to be known to the reader. (For example, many OvercomingBias/LessWrong readers have enough of a background in rational-action theory to know what “precommitment” and “dynamic inconsistency” mean, and these notions are indeed necessary for a proper understanding of EY’s point.) At one point, that was true of the general area of timeless/updateless decision theory. It seems to be less true of the logical induction problem.
I think logical induction could’ve been popularized with just as much effort (that is, a lot). For example, the second problem from the post linked by endoself was discussed by Wei and me in 2012, with >40 comments on each post. If we’d been better at mass appeal, instead of coasting on the audience attracted by Eliezer, we could’ve had even more engagement. (Note the comment from thescoundrel in the second link; that’s the kind of good idea out of nowhere that mass appeal is all about.)
Does popularization produce the goods? Lots of people have the background and skill to contribute to this problem who aren’t currently in our community and don’t have day jobs.
Choosing the right problem is certainly important, but I don’t think it’s the bottleneck; there’s plenty of low-hanging fruit. Knowing how to play your audience seems like more of a bottleneck, and it takes a lot of effort to learn.