Self review: I really like this post. Combined with the previous one (from 2022), it feels to me like “lots of people are confused about Kelly betting and linear/log utility of money, and this deconfuses the issue using arguments I hadn’t seen before (and still haven’t seen elsewhere)”. It feels like small-but-real intellectual progress. It still feels right to me, and I still point people at this when I want to explain how I think about Kelly.
That’s my inside view. I don’t know how to square it with the relative lack of attention the post got, and it feels weird to be writing this review given that fact, but oh well. There are various stories I could tell: maybe people were less confused than I thought; maybe my explanation is unclear; maybe I’m still wrong on the object level; maybe people just don’t care very much; maybe it just happened not to get seen.
If I were writing this today, my guess is:
- It’s worth combining the two posts into one.
- The rank optimization stuff is fine to cut, given that I tentatively propose it in one post and then in the next say “probably not very useful”. Maybe have a separate post for exploring it. No need to go into depth on “extending Kelly outside its original domain”.
- The charity stuff might also be fine to cut. At any rate it’s not a focus.
- Someone sent me an example function satisfying the “I’m pretty sure yes” criteria, so that can be included.
- Not sure if this belongs in the same place, but I’d still like to explore more the “what if your utility function is such that maximizing expected utility at one time doesn’t maximize expected utility at a later time?” thing. (I thought I wrote this in the post somewhere, but can’t see it: the way I’d explore this is from the perspective of “a utility function is isomorphic to a description of betting preferences that satisfy certain constraints, so when we talk about a utility function like that, what betting preferences are we talking about?” Feels like the kind of thing someone’s likely already explored, but I haven’t seen it if so.)
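To gesture at what I mean, here’s a minimal numerical sketch. The setup (repeated even-odds bets at win probability 0.6, one fixed fraction staked every round, and these particular utility functions) is chosen purely for illustration and isn’t taken from the posts. With plain log utility, the fraction that maximizes expected utility after one bet also maximizes it after two; with a shifted log utility, the one-round and two-round optima come apart slightly.

```python
# A minimal numerical sketch of the "myopic vs multi-round expected utility" question.
# Assumptions (mine, for illustration only): repeated even-odds bets with win probability 0.6,
# starting wealth 1, and the same fraction f of current wealth staked every round.

import math

P_WIN = 0.6
FRACTIONS = [i / 1000 for i in range(1000)]  # candidate fractions f in [0, 0.999]

def expected_utility(u, f, rounds):
    """Expected utility of wealth after `rounds` independent bets at fixed fraction f."""
    total = 0.0
    for wins in range(rounds + 1):
        prob = math.comb(rounds, wins) * P_WIN**wins * (1 - P_WIN)**(rounds - wins)
        wealth = (1 + f)**wins * (1 - f)**(rounds - wins)
        total += prob * u(wealth)
    return total

def best_fraction(u, rounds):
    """Grid-search the fraction maximizing expected utility after `rounds` bets."""
    return max(FRACTIONS, key=lambda f: expected_utility(u, f, rounds))

log_utility = math.log                   # scale-invariant: the myopic optimum (Kelly, f = 0.2 here) stays optimal
shifted_log = lambda w: math.log(w + 1)  # not scale-invariant: the one-round and two-round optima differ slightly

for name, u in [("log(w)", log_utility), ("log(w+1)", shifted_log)]:
    print(f"{name}: best f for 1 round = {best_fraction(u, 1):.3f}, "
          f"for 2 rounds = {best_fraction(u, 2):.3f}")
```

Restricting to a single fixed fraction keeps the comparison simple; letting the second-round fraction depend on the first outcome is the more general version of the question.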
I think it’s good that this post was written, shared to LessWrong, and got a bunch of karma. And (though I haven’t fully re-read it) it seems like the author was careful to distinguish observation from inference and to include details in defense of Ziz when relevant. I appreciate that.
I don’t think it’s a good fit for the 2023 review. Unless Ziz gets back in the news, there’s not much reason for someone in 2025 or later to be reading this.
If I were going to recommend it, I think the reason would be some combination of:
- This is a good example of investigative journalism, and valuable to read as such.
- It’s a good case study of a certain type of person that it’s important to remember exists.
But I don’t think it stands out as a case study (it’s not trying to answer questions like “how did this person become Ziz?”), and I weakly guess it doesn’t stand out as investigative journalism either. For example, thinking along those axes, TracingWoodgrains on David Gerard feels like the kind of thing I’d recommend above this.
Which, to be clear, is not a slight on this post! I think it does what it set out to do very well, and what it set out to do is valuable; it’s just not the kind of thing I think the 2023 review is looking to reward.