# philh

Karma: 6,414

2 Dec 2022 19:25 UTC
2 points
• 2 Dec 2022 0:09 UTC
2 points
0 ∶ 0

Kelly maximizes the expected growth rate, $\mathbb{E}\left[\lim_{n\to\infty}\frac{1}{n}\log\frac{W_n}{W_0}\right]$.

I… think this is wrong? It’s late and I should sleep so I’m not going to double check, but this sounds like you’re saying that you can take two sequences, one has a higher value at every element but the other has a higher limit.

If something similar to what you wrote is correct, I think it will be that Kelly maximizes $\lim_{n\to\infty}\mathbb{E}\left[\frac{1}{n}\log\frac{W_n}{W_0}\right]$. That feels about right to me, but I’m not confident.

• 1 Dec 2022 23:55 UTC
2 points
0 ∶ 0

I think the key thing to note here is that “maximizing expected growth” looks the same whether the thing you’re trying to grow is money or log-money or sqrt-money or what. It “just happens” that (at least in this framework) the way one maximizes expected growth is the same as the way one maximizes expected log-money.

I’ve recently written about this myself. My goal was partly to clarify this, though I don’t know if I succeeded.

I think the post confuses things by motivating the Kelly bet as the thing that maximizes expected log-money, and also has other neat properties. To my mind, if you want to maximize expected log-money, you just… do the arithmetic to figure out what that means. It’s not quite trivial, but it’s stats-101 stuff. I don’t think it seems more interesting to do the arithmetic that maximizes expected log-money compared to expected money or expected sqrt-money. Kelly certainly didn’t introduce the criterion as “hey guys, here’s a way to maximize expected log-money”. (Admittedly, I don’t much care about his framing either. The original paper is information-theoretic in a way that seems to be mostly forgotten about these days.)
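A minimal sketch of that “stats-101” arithmetic (illustrative only; the numbers p = 0.6 and even odds b = 1 are assumptions, not from the post): maximizing expected log-money over the fraction staked recovers the standard closed-form Kelly fraction f* = p − (1 − p)/b.

```python
import numpy as np

# Assumed example numbers: a bet paying b-to-1 that wins with probability p.
# Expected log-wealth per bet, as a function of the fraction f staked, is
#   p * log(1 + b*f) + (1 - p) * log(1 - f).
p, b = 0.6, 1.0

f = np.linspace(0.0, 0.99, 100_000)
expected_log = p * np.log1p(b * f) + (1 - p) * np.log1p(-f)

f_star = f[np.argmax(expected_log)]   # numerical maximizer
kelly = p - (1 - p) / b               # standard closed form: 0.2 here
```

The point being that nothing here is specific to log: substituting sqrt or the identity function into `expected_log` and re-running the argmax is the same exercise.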

To my mind, the important thing about the Kelly bet is the “almost certainly win more money than anyone using a different strategy, over a long enough time period” thing. (Which is the same as maximizing expected growth rate, when growth is exponential. If growth is linear you still might care if you’re earning $2/day or $1/day, but the “growth rate” of both is 0 as defined here.) So I prefer to motivate the Kelly bet as being the thing that does that, and then say “and incidentally, turns out this also maximizes expected log-wealth, which is neat because...”.
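The “almost certainly ahead over a long enough period” property is easy to see in simulation (a sketch with assumed numbers: win probability 0.6 at even odds, so the Kelly fraction is 0.2; the comparison fraction 0.4 is an arbitrary over-bet):

```python
import numpy as np

# Repeated double-or-nothing bets won with probability p = 0.6.
# Kelly fraction at even odds is f* = 2p - 1 = 0.2 (standard result).
rng = np.random.default_rng(0)
p, rounds, trials = 0.6, 10_000, 200
wins = rng.random((trials, rounds)) < p

def final_log_wealth(f):
    # Log-wealth after all rounds, starting from 1, staking fraction f each time.
    step = np.where(wins, np.log1p(f), np.log1p(-f))
    return step.sum(axis=1)

kelly = final_log_wealth(0.2)
other = final_log_wealth(0.4)          # over-betting
frac_ahead = np.mean(kelly > other)    # fraction of runs where Kelly ends richer
```

Over 10,000 rounds the Kelly bettor finishes ahead in essentially every run; over short horizons the gap is much less reliable.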

• https://manifold.markets/PhilipHazelden/by-2028-will-i-think-miri-has-been

By 2028, will I think MIRI has been net-good for the world?

Resolves according to my subjective judgement, but I’ll take opinions of those I respect at the time into account. As of market creation, people whose opinions I value highly include Eliezer Yudkowsky and Scott Alexander.

As of market creation, I consider that AI safety is important; making progress on it is good and making progress on AI capabilities is bad. If I change my mind by 2028, I’ll resolve according to my beliefs at the time.

I will take into account their output (e.g. papers, blog posts, people who’ve trained at them) but also their inputs (e.g. money and time). I consider counterfactuals valid, like “okay MIRI did X but maybe someone else would have done X anyway”; but currently I think those considerations tend to be weak and hard to evaluate.

If I’m unconfident I may resolve the market PROB.

If MIRI rebrands, the question will pass to them. If MIRI stops existing I’ll leave the market open.

I currently intend not to bet on this market until at least a week has passed, and to stop betting in 2027.

Resolution criteria subject to change; my current plan is to figure out what I’m doing with this market and then make similar ones for other orgs. Feel free to ask about edge cases. Feel free to ask for details about my opinions. If you think markets like this are a bad idea feel free to convince me to delete it.

(Sharing here because I’m interested in more eyes on the market and also in ways to make it better.)

• 29 Nov 2022 20:15 UTC
2 points
0 ∶ 0
in reply to: Samuel Hapák’s comment

The “I think” is filler because it is implied. Of course the author writes what he thinks.

I disagree with this. As a writer, I don’t mean the same thing by “I think it cost over $100” versus “it cost over $100”. The latter is more confident; I don’t intend to literally never be wrong when I say things like it, but I do intend to very rarely be wrong. The former suggests that I don’t remember very well and I didn’t look it up. And as a reader, I think I roughly by-default expect writers to be doing the same, and if they regularly say things unhedged that turn out to be false (or that I think they couldn’t possibly know) I lose respect for them.

I don’t know how common it is for other readers to read like me, or other writers to write like me. But I’d be surprised if either demographic was fewer than 10% on LW.

I weakly predict that if you compare the typical writer who doesn’t use “I think” to the typical writer who does, the one who doesn’t is less capable of distinguishing what-is from what-seems-to-be; and is less well-calibrated if you press them to put probabilities on their statements.

• 28 Nov 2022 22:28 UTC
12 points
2 ∶ 0
in reply to: Richard Korzekwa ’s comment

Agreed, but I’d also like examples from commenters who disagree with OP, of self-aware style that they consider bad. I wonder if my reaction would be “oh I didn’t even notice the things that distracted you so much” or “yeah that seems excessive to me too” or what.

• 28 Nov 2022 22:22 UTC
6 points
1 ∶ 0
in reply to: Samuel Hapák’s comment

Fwiw I think if I were rewriting the first paragraph to self-aware style I’d go for something like:

It feels like you’re taking this to the extreme. The goal as I see it is to make text succinct, to get rid of fillers. Which doesn’t mean [… no other changes].

And yeah, I do think that’s an improvement in terms of things I’d personally like to read. It doesn’t just acknowledge uncertainty, but subjectivity. E.g. I think the “I feel like” makes it easier for me to react like “interesting, I don’t feel like that, I wonder why you do” versus “what, no I’m not”.

(Or maybe my rewrite doesn’t actually reflect what you think? Like, maybe you’re confident that you’re speaking for Pinker as well as just yourself, in which case you could start with “Pinker would say”, or “I think Pinker would say” if you’re less confident.)

• 26 Nov 2022 11:08 UTC
2 points
0 ∶ 0

Endorsed. I wildly guess that in practice “counterparty might do better with the money than me” will rarely be a big consideration; but I could see “transaction costs plus externalities plus harm to counterparty, together burn more value than my charitable donations create” being a thing, especially if you’re doing low-margin high-volume.

• 25 Nov 2022 9:15 UTC
2 points
0 ∶ 0

I think this relies on “Val is not successfully communicating with the reader” being for reasons analogous to “Val is speaking English which the store clerk doesn’t, or only speaks it poorly”. But I suspect that if we unpacked what’s going on, I wouldn’t think that analogy held, and I would still think that what you’re doing seems bad.

(Also, I want to flag that “justify that we’re helping the clerk deepen their skill with interfacing with the modern world” doesn’t pattern match to anything I said. It hints at pattern matching with me saying something like “part of why we should speak with epistemic rigor is to help people hear things with epistemic rigor”, but I didn’t say that. You didn’t say that I did, and maybe the hint wasn’t intentional on your part, but I wanted to flag it anyway.)

• 25 Nov 2022 8:47 UTC
2 points
1 ∶ 0

Yes, endorsed. That should probably be mentioned explicitly. (edit: added to the post)

(Technically neither of the technical definitions I gave applies here. And this is a case where you can’t maximize every percentile simultaneously—maximizing your 11th percentile returns means betting nothing, and maximizing your 10th percentile means betting everything. But yes, for a single bet, maximizing “probability of ending up richer than I would have, if I had bet a different amount but the result was the same” is probably the natural way to extend the concept to cases like this, and it means betting nothing in this case.)
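The “can’t maximize every percentile simultaneously” point can be checked directly (my own sketch, not from the post: a hypothetical double-or-nothing bet with an assumed loss probability of 10.5%, chosen so the 10th and 11th percentiles straddle the losing mass; which of two adjacent percentiles favours which bet size depends on this number and on the quantile convention):

```python
import numpy as np

# Hypothetical single double-or-nothing bet that loses with probability 0.105.
P_LOSE = 0.105

def percentile_wealth(f, q, n=1_000_000, seed=0):
    """q-quantile of final wealth when staking fraction f of a bankroll of 1."""
    rng = np.random.default_rng(seed)
    wins = rng.random(n) >= P_LOSE
    wealth = np.where(wins, 1.0 + f, 1.0 - f)
    return np.quantile(wealth, q)

# Here the 10th percentile sits inside the losing mass, so it is maximized
# by betting nothing, while the 11th sits above it and is maximized by
# betting everything.
q10_nothing, q10_all = percentile_wealth(0.0, 0.10), percentile_wealth(1.0, 0.10)
q11_nothing, q11_all = percentile_wealth(0.0, 0.11), percentile_wealth(1.0, 0.11)
```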

# On Kelly and altruism

24 Nov 2022 23:40 UTC
12 points
(reasonableapproximation.net)
• 24 Nov 2022 17:32 UTC
8 points
3 ∶ 0

My experience is that folk who need support out of tough spots like this have a harder time hearing the deeper message when it’s delivered in carefully caveated epistemically rigorous language.

I kinda feel like my reaction to this is similar to your reaction to frames:

I refuse to comply with efforts to pave the world in leather. I advocate people learn to wear shoes instead. (Metaphorically speaking.)

To be more explicit, I feel like… sure, I can believe that sometimes epistemic rigor pushes people into thinky-mode and sometimes that’s bad; but epistemic rigor is good anyway. I would much prefer for people to get better at handling things said with epistemic rigor, than for epistemic rigor to get thrown aside.

And maybe that’s not realistic everywhere, but even then I feel like there should be spaces where we go to be epistemically rigorous even if there are people for whom less rigor would sometimes be better. And I feel like LessWrong should be such a space.

I think the thing I’m reacting to here isn’t so much the lack of epistemic rigor—there are lots of things on LW that aren’t rigorous and I don’t think that’s automatically bad. Sometimes you don’t know how to be rigorous. Sometimes it would take a lot of space and it’s not necessary. But strategic lack of epistemic rigor—“I want people to react like _ and they’re more likely to do that if I’m not rigorous”—feels bad.

• A question I have about the FTX thing: people keep saying that the LUNA crash was part of the thing that sparked it. Is this the same Luna that was a blockchain-related dating service that Scott reviewed the whitepaper of?

• 19 Nov 2022 23:19 UTC
4 points
0 ∶ 0

So like, these do seem related, but… I think I feel like you think they’re more closely related than I think they are? Like the kind of thing they’re using as a branching-off point is different from the kind of thing my comment was.

So I’d summarize those posts as saying: “if you’re going to say ‘let’s _’, it would be nice if you went into more detail about how to _ and what exactly _ looks like”.

But I’m not saying “let’s _”. I’m saying “we might think we can’t _ because [...], but that doesn’t hold because [...]. I currently think _ is possible.” And now I’m similarly being asked to go into detail about how to _ and what exactly _ looks like, and...

Yeah, there’s an implied “let’s _” in my comment, and it’s a perfectly fine question in general, but...

It feels like it’s missing the point of what I said; and in this context, and the way it’s been asked, it feels kind of aggressive and offputting to me.

(I would much less have this reaction, if my second comment in this thread had been my first one. The kind of thing my second comment is, feels much more the kind of thing those posts are reacting to. But I only made my second comment after being asked, and I explicitly said that it was a different question and I didn’t necessarily endorse my answers.)

• 19 Nov 2022 1:01 UTC
3 points
0 ∶ 0

This feels like an isolated demand for a thing that I’m not trying to do.

Yes, obviously if I have concrete suggestions that would be great, and likely those would involve looking inside EA at the people and organizations within it and identifying specific points of intervention that could have avoided this problem, or something.

But I’m not trying to identify a solution, I’m trying to identify a problem. A thing where I think EA could have done better. I think it’s ridiculous to suggest either that I can’t do that without also suggesting improvements, or that I can’t do it without looking inside EA.

Maybe you’re not intending to suggest anything like that? But it feels to me like you are, and I find it annoying.

• 18 Nov 2022 12:51 UTC
4 points
0 ∶ 0

Noting that that’s a separate question, possible answers that come to mind (which I’m not necessarily endorsing) include:

• Not holding up Sam as an exemplar of EA, as I gather kind of happened

• Declining to take more than $X from Sam, on the grounds that “a large amount of EA funding being dependent on someone with bad ethics seems bad”

• Noticing that the combination “bad ethics and bad capital controls” makes fraud both easy and likely, and explicitly warning people about that. (And taking the lack-of-ethics as a reason to look into capital controls, if they didn’t know about that.)

I do think “EA knows about SBF’s ethics and acts exactly as they did anyway” is not a story that’s flattering about EA.

• I think it’s worth being clear about what exactly “this” is.

My mainline story right now (admitting that I’m not fully caught up) is that prior to 2022:

• There was a lack of capital controls, that would have made fraud and large mistakes easier;

• There was plenty of reason to doubt SBF’s ethics;

• But there was no actual fraud.

Professional investors and EA would both have cared about the first point. But it’s not clear how investors would have felt about it; I could believe anything from “this is a dealbreaker” to “this is positive on net”. (Is Sam doing fraud bad in expectation for his investors? He might not get caught; and if he does, they’ll lose money but probably won’t take most of the flak.) Professional investors probably wouldn’t have cared about the second point much, though I could see it being a mild negative or mild positive.

So, “should EA have caught the fraud”? I think that might be asking too much.

“Should EA have noticed the lack of controls and reacted to that?” Or, “should EA have noticed Sam’s lack of ethics and reacted to that?” I currently think those would have been possible, and “but professional investors didn’t” isn’t much of a defense.

• 14 Nov 2022 9:36 UTC
4 points
0 ∶ 0