I maintain a reading list on Goodreads.
I have a personal website with some blog posts, mostly technical stuff about math research.
I am also on GitHub.
There is also PredictionBook, which seems to be a similar sort of thing.
Of course, there's also Metaculus, but that's more of a collaborative prediction aggregator, not so much a personal tool for tracking your own predictions.
If anyone comes across this comment in the future: the CFAR Participant Handbook is now online,
which is more or less the answer to this question.
The Terra Ignota sci-fi series by Ada Palmer depicts a future world which is also driven by “slack transportation”.
The mechanism, rather than portals, is a super-cheap global network of autonomous flying cars (I think they’re supposed to run on nuclear engines? The technical details are not really developed).
It's a pretty interesting series, although it doesn't explore the practical implications so much as the political/sociological ones (and this is hardly the only thing driving the differences between the present world and the depicted future).
I think, rather than “category theory is about paths in graphs”, it would be more reasonable to say that category theory is about paths in graphs up to equivalence, and in particular about properties of paths which depend on their relations to other paths (more than on their relationship to the vertices)*. If your problem is most usefully conceptualized as a question about the paths themselves (finding the shortest path between two vertices, counting paths, or something in that genre), you should definitely look to the graph theory literature instead.
* I realize this is totally incomprehensible, and doesn’t make the case that there are any interesting problems like this. I’m not trying to argue that category theory is useful, just clarifying that your intuition that it’s not useful for problems that look like these examples is right.
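To make this slightly more concrete, here is a toy example (my own, not from the post): take the graph with vertices $x, y, z$ and edges $f: x \to y$, $g: y \to z$, $h: x \to z$, and impose a single equivalence between paths,

$$x \xrightarrow{f} y \xrightarrow{g} z \;\sim\; x \xrightarrow{h} z.$$

In the usual notation this is just the commutative triangle $g \circ f = h$; the interesting structure lives in which paths get identified, not in the vertices themselves.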
As an algebraic abstractologist, let me just say this is an absolutely great post. My comments:
Category theorists don't distinguish between a category with two objects and an edge between them, and a category with two objects and two identified edges between them (the latter doesn't really even make sense in the usual account). In general, the extra equivalence relation that you have to carry around makes certain things more complicated in this version.
I do tend to agree with you that thinking of categories as objects, edges and an equivalence relation on paths is a more intuitive perspective, but let me defend the traditional presentation. By far the most essential/prototypical examples are the categories of sets and functions, or types and functions. Here, it’s more natural to speak of functions from x to y, than to speak of “composable sequences of functions beginning at x and ending at y, up to the equivalence relation which identifies two sequences if they have the same composite”.
Again, I absolutely love this post. I am frankly a bit shocked that nobody seems to have written an introduction using this language—I think everyone is too enamored with sets as an example.
This is a reasonable way to resolve the paradox, but note that you’re required to fix the max number of people ahead of time—and it can’t change as you receive evidence (it must be a maximum across all possible worlds, and evidence just restricts the set of possible worlds). This essentially resolves Pascal’s mugging by fixing some large number X and assigning probability 0 to claims about more than X people.
Just to sketch out the contradiction between unbounded utilities and gambles involving infinitely many outcomes a bit more explicitly:
If your utility function is unbounded, we can consider the following wager:
You win 2 utils with probability 1⁄2, 4 utils with probability 1⁄4, and so on.
The expected utility of this wager is infinite.
(If there are no outcomes with utility exactly 2, 4, etc., we can award more instead; this is possible because utility is unbounded.)
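Written out, the expected utility of the wager above is

$$\mathbb{E}[U] = \sum_{k=1}^{\infty} \frac{1}{2^k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty.$$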
Now consider these wagers on a (fair) coinflip:
A: Play the above game if heads, pay out 0 utils if tails
B: Play the above game if heads, pay out 100000 utils if tails
(0 and 100000 can be any two unequal numbers.)
Both of these wagers have infinite expected utility, so we must be indifferent between them.
But since they agree on heads, and B is strictly preferred to A on tails, we must strictly prefer B (since tails occurs with positive probability), contradicting the indifference above.
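In symbols, just restating the argument: expected utility gives $\mathbb{E}[U(A)] = \mathbb{E}[U(B)] = \infty$, hence indifference, while comparing outcome-by-outcome gives

$$U(B \mid \text{tails}) = 100000 > 0 = U(A \mid \text{tails}),$$

with identical outcomes on heads, so dominance demands a strict preference for B. These two conclusions are inconsistent.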
Information about people behaving erratically/violently is better at grabbing your brain’s “important” sensor? (Noting that I had exactly the same instinctual reaction). This seems to be roughly what you’d expect from naive evopsych (which doesn’t mean it’s a good explanation, of course)
CFAR must have a lot of information about the efficacy of various rationality techniques and training methods (compared to any other org, at least). Is this information, or recommendations based on it, available somewhere? Say, as a list of techniques currently taught at CFAR—which are presumably the best ones in this sense. Or does one have to attend a workshop to find out?
There’s some recent work in the statistics literature exploring similar ideas. I don’t know if you’re aware of this, or if it’s really relevant to what you’re doing (I haven’t thought a lot about the comparisons yet), but here are some papers.
Beckers-Halpern, Abstracting Causal Models
Chalupka-Perona-Eberhardt, Visual Causal Feature Learning
Rubenstein et al, Causal consistency of structural equation models
A thought about productivity systems/workflow optimization:
One principle of good design is “make the thing you want people to do, the easy thing to do”. However, this idea is susceptible to the following form of Goodhart: often a lot of the value in some desirable action comes from the things that make it difficult.
For instance, sometimes I decide to migrate some notes from one note-taking system to another.
This is usually extremely useful, because it forces me to review the notes and think about how they relate to each other and to the new system. If I make this easier for myself by writing a script to do the work (as I have sometimes done), this important value is lost.
Or think about spaced repetition cards: You can save a ton of time by reusing cards made by other people covering the same material—but the mental work of breaking the material down into chunks that can go into the spaced-repetition system, which is usually very important, is lost.
This is a great list.
The main criticism I have is that this list overlaps way too much with my own internal list of high-quality sites, which makes it not very useful to me.
The example of associativity seems a little strange; I'm not sure what's going on there.
What are the three functions that are being composed?
Should there be an arrow going from n*f(n-1) to f (perhaps around n==0)?
The output of the system also depends on n*f(n-1), not just on whether or not n is zero.
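For reference, here is a minimal sketch of the recursion I assume the diagram is meant to depict (standard factorial with an n==0 base case; this is my reading, not something spelled out in the post). My questions above are about how the diagram's arrows map onto this data flow.

```python
def f(n):
    # Base case: the branch the diagram labels "n == 0".
    if n == 0:
        return 1
    # Recursive case: the "n * f(n - 1)" node; the overall output
    # depends on this value, not just on whether n is zero.
    return n * f(n - 1)
```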
A simple remark: we don’t have access to all of E, only E up until the current time.
So we have to make sure that we don’t get a degenerate pair which diverges wildly from the actual universe at some point in the future.
Maybe this is similar to the fact that we don’t want AIs to diverge from human values once we go off-distribution? But you’re definitely right that there’s a difference: we do want AIs to diverge from human behaviour (even in common situations).
I’m curious about the remaining 3% of people in the 97% program, who apparently both managed to smuggle some booze into rehab, and then admitted this to the staff while they were checking out. Lizardman’s constant?
I’ve noticed a sort of tradeoff in how I use planning/todo systems (having experimented with several such systems recently). This mainly applies to planning things with no immediate deadline, where it’s more about how to split a large amount of available time between a large number of tasks, rather than about remembering which things to do when. For instance, think of a personal reading list—there is no hurry to read any particular things on it, but you do want to be spending your reading time effectively.
On one extreme, I make a commitment to myself to do all the things on the list eventually. At first, this has the desired effect of making me get things done. But eventually, things that I don’t want to do start to accumulate. I procrastinate on these things by working on more attractive items on the list. This makes the list much less useful from a planning perspective, since it’s cluttered with a bunch of old things I no longer want to spend time on (which make me feel bad about not doing them whenever I’m looking at the list).
On the other extreme, I make no commitment like that, and remove things from the list whenever I feel like it. This avoids the problem of accumulating things I don’t want to do, but makes the list completely useless as a tool for getting me to do boring tasks.
I have a hard time balancing these issues. I’m currently trying an approach to my academic reading list where I keep a mostly unsorted list, and whenever I look at it to find something to read, I have to work on the top item, or remove it from the list. This is hardly ideal, but it mitigates the “stale items” problem, and still manages to provide some motivation, since it feels bad to take items off the list.
I found Predictably Irrational, Superforecasting, and Influence to be good.