Posts I’d Like To Write (Includes Poll)

Summary: There are a bunch of posts I want to write; I’d like your help prioritizing them, and if you feel like writing one of them, that would be awesome too!

I haven’t been writing up as many of my ideas for Less Wrong as I’d like; I have excuses, but so does everyone. So I’m listing out my backlog, both for my own motivation and for feedback/​help. At the end, there’s a link to a poll on which ones you’d like to see. Comments would also be helpful, and if you’re interested in writing up one of the ideas from the third section yourself, say so!

(The idea was inspired by lukeprog’s request for post-writing help, and I think someone else did this a while ago as well.)

Posts I’m Going To Write (Barring Disaster)

These are posts that I currently have unfinished drafts of.

Decision Theories: A Semi-Formal Analysis, Part IV and Part V: Part IV concerns bargaining problems and introduces the tactic of playing chicken with the inference process; Part V discusses the benefits of UDT and perhaps wraps up the sequence. Part IV has been delayed by more than a month, partly by real life, and partly because bargaining problems are really difficult and the approach I was trying turned out not to work. I believe I have a fix now, but that’s no guarantee; if it turns out to be flawed, then Part IV will mainly consist of “bargaining problems are hard, you guys”.

Posts I Really Want To Write

These are posts that I feel I’ve already put substantial original work into, but I haven’t written a draft. If anyone else wants to write on the topic, I’d welcome that, but I’d probably still write up my views on it later (unless the other post covers all the bases that I’d wanted to discuss, most of which aren’t obvious from the capsule descriptions below).

An Error Theory of Qualia: My sequence last summer didn’t turn out as well as I’d hoped, but I still think it’s the right approach to a physically reductionist account of qualia (and that mere bullet-biting isn’t going to suffice), so I’d like to try again and see if I can find ways to simplify and test my theory. (In essence, I’m proposing that what we experience as qualia are something akin to error messages, caused when we try and consciously introspect on something that introspection can’t usefully break down. It’s rather like the modern understanding of déjà vu.)

Weak Solutions in Metaethics: I’ve been mulling over a certain approach to metaethics, which differs from Eliezer’s sequence and lukeprog’s sequence (although the conclusions may turn out to be close). In mathematics, there’s a concept of a weak solution to a differential equation: a function that has the most important properties but isn’t actually differentiable enough times to “count” in the original formulation. Sometimes these weak solutions can lead to “genuine” solutions, and other times it turns out that the weak solution is all you really need. The analogy is that there are a bunch of conditions humans want our ethical theories to satisfy (things like consistency, comprehensiveness, universality, objectivity, and practical approximability), and that something which demonstrably had all these properties would be a “strong” solution. But the failure of moral philosophers to find a strong solution doesn’t have to spell doom for metaethics; we can focus instead on the question of what sorts of weak solutions we can establish.
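(For readers who haven’t met the mathematical half of the analogy, here’s a minimal sketch of what “weak solution” means in the textbook setting. This is the standard definition for a toy equation u′ = f, not anything from the planned post.)

```latex
% Toy equation: u'(x) = f(x).  A classical ("strong") solution must be differentiable.
% A weak solution u only has to satisfy the integrated-by-parts version, tested
% against every smooth, compactly supported function \varphi:
\[
  \int_{\mathbb{R}} u(x)\,\varphi'(x)\,dx \;=\; -\int_{\mathbb{R}} f(x)\,\varphi(x)\,dx
  \qquad \text{for all } \varphi \in C_c^{\infty}(\mathbb{R}),
\]
% which makes sense even when u itself is too rough to differentiate.
```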

Posts I’d Really Love To See

And then we get to ideas that I’d like to write Less Wrong posts on, but that I haven’t really developed beyond the kernels below. If any of these strike your fancy, you have my atheist’s blessing to flesh them out. (Let me know in the comments if you want to publicly commit to doing so.)

Living with Rationality: Several people in real life criticize Less Wrong-style rationality on the grounds that “you couldn’t really benefit by living your life by Bayesian utility maximization, you have to go with intuition instead”. I think that’s a strawman attack, but none of the defenses on Less Wrong seem to answer this directly. What I’d like to see described is how it works to actually improve one’s life via rationality (which I’ve seen in my own life), and how it differs from the Straw Vulcan stereotype of decisionmaking. (That is, I usually apply conscious deliberation on the level of choosing habits rather than individual acts; I don’t take out a calculator when deciding who to sit next to on a bus; I leave room for the kind of uncertainty described as “my conscious model of the situation is vastly incomplete”, etc.)

An Explanation of the Born Probabilities in MWI: This topic might be even better suited to an actual physicist than to a know-it-all mathematician, but I don’t see why the Born probabilities should be regarded as mysterious at all within the Many-Worlds interpretation. The state space of the universe is naturally a Hilbert space, and the unitary evolution of the wavefunction conserves the L^2 norm. If you’re going to ask “how big” a chunk of the wavefunction is (which is the right way to compute the relative probabilities of being an observer that sees such-and-such), the only sane answer is going to be the squared L^2 norm (i.e. the Born probabilities).
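(As a sketch of the shape of that claim, using only standard quantum formalism; the orthonormal “observer sees outcome i” branch decomposition is an illustrative assumption, not a worked-out account of decoherence.)

```latex
% Decompose the state over an (assumed) orthonormal set of observer-branches:
\[
  |\psi\rangle = \sum_i c_i\,|i\rangle, \qquad \sum_i |c_i|^2 = 1 .
\]
% Unitary evolution U conserves the L^2 (Hilbert-space) norm:
\[
  \big\| U|\psi\rangle \big\|^2 = \langle\psi|\,U^\dagger U\,|\psi\rangle = \langle\psi|\psi\rangle = 1 .
\]
% So the dynamically conserved "size" of branch i is its squared amplitude,
% which is exactly the Born probability:
\[
  p_i = |c_i|^2 .
\]
```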

Are Mutual Funds To Blame For Stock Bubbles? My opinion about the incentives behind the financial crisis, in a nutshell: Financial institutions caused the latest crash by speculating in ways that were good for their quarterly returns but exposed them to far too much risk. The executives were incentivized to act in that short-sighted way because the investors wanted short-term returns and were willing to turn a blind eye to that kind of risk. But that’s a crazy preference for most investors (I expect it had seriously negative expected value), so why weren’t investors smarter (i.e. why didn’t they flee from any company that wasn’t clearly prioritizing longer-term expected value)? Well, there’s one large chunk of investors with precisely those incentives: the 20% of the stock market that’s composed of mutual funds. I’d like to test this theory and think about realistic ways to apply it to public policy. (It goes without saying that I think Less Wrong readers should, at minimum, invest in index funds rather than mutual funds.)

Strategies for Trustworthiness with the Singularity: I want to develop this comment into an article. Generally speaking, the usual methods of making the principal-agent problem work out aren’t available; the possible payoffs are too enormous when we’re discussing rapidly accelerating technological progress. I’m wondering if there’s any way of setting up a Singularity-affecting organization so that it will be transparent to the organization’s backers that the organization is doing precisely what it claims. I’d like to know in general, but there’s also an obvious application; I think highly of the idealism of SIAI’s people, but trusting people on their signaled idealism in the face of large incentives turns out to backfire in politics pretty regularly, so I’d like a better structure than that if possible.

On Adding Up To Normality: People have a strange block about certain concepts, like the existence of a deity or of contracausal free will, where it seems to them that the instant they stopped believing in it, everything else in their life would fall apart or be robbed of meaning, or they’d suddenly incur an obligation that horrifies them (like raw hedonism or total fatalism). That instinct is like being on an airplane, having someone explain to you that your current understanding of aerodynamic lift is wrong, and then suddenly becoming terrified that the plane will plummet out of the sky now that there’s no longer the kind of lift you expected. (That is, it’s a fascinating example of the Mind Projection Fallacy.) So I want a general elucidation of Egan’s Law to point people to.

The Subtle Difference Between Meta-Uncertainty and Uncertainty: If you’re discussing a single toss of a coin, then you should treat it the same (for decision purposes) whether you know that it’s a coin designed to land heads 3/4 of the time, or whether you know there’s a 50% chance it’s a fair coin and a 50% chance it’s a two-headed coin. Meta-uncertainty and uncertainty are indistinguishable in that sense. Where they differ is in how you update on new evidence, or how you’d make bets about three upcoming flips taken together, etc. This is a worthwhile topic that seems to confuse the hell out of newcomers to Bayesianism.
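(Here’s a minimal numerical sketch of that coin example in plain Python; the numbers are just the ones implied above.)

```python
# Two epistemic states that agree on a single flip but come apart on updating
# and on joint bets about several flips.

# Case A: a coin known to land heads 3/4 of the time, independently each flip.
p_known = 3 / 4

# Case B: 50% chance the coin is fair, 50% chance it's two-headed.
# Each hypothesis maps to (prior, P(heads | hypothesis)).
hypotheses = {"fair": (0.5, 0.5), "two-headed": (0.5, 1.0)}

# Single flip: both cases give P(heads) = 0.75, so one-shot decisions match.
p_mixture = sum(prior * p_heads for prior, p_heads in hypotheses.values())
print(p_known, p_mixture)          # 0.75 0.75

# Three flips, all heads: the answers now differ.
p_known_3 = p_known ** 3
p_mixture_3 = sum(prior * p_heads ** 3 for prior, p_heads in hypotheses.values())
print(p_known_3, p_mixture_3)      # 0.421875 vs 0.5625

# Updating on one observed head also differs.
# Known coin: next-flip P(heads) is still 0.75.
# Mixture: Bayes shifts weight toward "two-headed", raising the prediction.
weights = {h: prior * p_heads for h, (prior, p_heads) in hypotheses.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}
p_next_mixture = sum(posterior[h] * hypotheses[h][1] for h in hypotheses)
print(p_next_mixture)              # ~0.833
```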

(Originally, this was a link to a poll on these post ideas)

Thanks for your feedback!

UPDATE:

Thanks to everyone who gave me feedback; results are in this comment!