# Nisan

Karma: 5,908
• 1 Dec 2021 7:23 UTC
LW: 2 AF: 1
in reply to: Charlie Steiner’s comment

I think you’re saying , right? In that case, since embeds into , we’d have embedding into . So not really a step up.

If you want to play ordinal games, you could drop the requirement that agents are computable / Scott-continuous. Then you get the whole ordinal hierarchy. But then we aren’t guaranteed equilibria in games between agents of the same order.

I suppose you could have a hybrid approach: Order is allowed to be discontinuous in its order- beliefs, but higher orders have to be continuous? Maybe that would get you to .

• And as a matter of scope, your reaction here is incorrect. [...] Reacting to it as a synecdoche of the agricultural system does not seem useful.

On my reading, the OP is legit saddened by that individual turkey. One could argue that scope demands she be a billion times sadder all the time about poultry farming in general, but that’s infeasible. And I don’t think that’s a reductio against feeling sad about an individual turkey.

Sometimes, sadness and crying are about integrating one’s beliefs. There’s an intuitive part of your mind that doesn’t understand your models of big, global problems. But, like a child, it understands the small tragedies you encounter up close. If it’s shocked and surprised, then it is still learning what the rest of you knows about the troubles of the world. If it’s angry and outraged, then there’s a sense in which those feelings are “about” the big, global problems too.

# My take on higher-or­der game theory

30 Nov 2021 5:56 UTC
34 points
• it legitimately takes the whole 4 years after that to develop real AGI that ends the world. FINE. SO WHAT. EVERYONE STILL DIES.

By Gricean implicature, “everyone still dies” is relevant to the post’s thesis. Which implies that the post’s thesis is that humanity will not go extinct. But the post is about the rate of AI progress, not human extinction.

This seems like a bucket error, where “will takeoff be fast or slow?” and “will AI cause human extinction?” are put in the same bucket.

• The central hypothesis of “takeoff speeds” is that at the time of serious AGI being developed, it is perfectly anti-Thielian in that it is devoid of secrets

No, the slow takeoff model just precludes there being one big secret that unlocks both 30%/year growth and Dyson spheres. It’s totally compatible with a bunch of medium-sized $1B secrets that different actors discover, adding up to hyperbolic economic growth in the years leading up to “rising out of the atmosphere”. Rounding off the slow takeoff hypothesis to “lots and lots of little innovations adding up to every key AGI threshold, which lots of actors are investing $10 million in at a time” seems like black-and-white thinking, demanding that the future either be perfectly Thielian or perfectly anti-Thielian. The real question is a quantitative one: how lumpy will takeoff be?

• I don’t think “viciousness” is the word you want to use here.

• Ah, great! To fill in some of the details:

• Given agents A and B and numbers p and q such that p + q = 1, there is an aggregate agent called pA + qB, which means “agents A and B acting together as a group, in which the relative power of A versus B is the ratio of p to q”. The group does not make decisions by combining their utility functions, but instead by negotiating or fighting or something.

• Aggregation should be associative, so p(qA + rB) + sC = pqA + prB + sC (where p + s = 1 and q + r = 1).

• If you spell out all the associativity relations, you’ll find that aggregation of agents is an algebra over the operad of topological simplices. (See Example 2 of https://arxiv.org/abs/2107.09581.)

• Of course we still have the old VNM-rational utility-maximizing agents. But now we also have aggregates of such agents, which are “less Law-aspiring” than their parts.

• In order to specify the behavior of an aggregate, we might need more data than the component agents and their relative power. In that case we’d use some other operad.
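The associativity condition above can be sketched in code. This is a minimal illustration, not the parent construction: it assumes an aggregate is just a formal weighted combination of agents (all names here are my own), and shows that flattening nested aggregates by multiplying weights makes nesting order irrelevant, which is the simplex-operad composition law.

```python
def flatten(agent):
    """Return a dict mapping primitive agent names to total weight.

    An agent is either a string (a primitive agent) or a list of
    (weight, subagent) pairs whose weights sum to 1.  Nested aggregates
    flatten by multiplying weights along each path, e.g.
    p*(q*A + r*B) + s*C  ->  {A: p*q, B: p*r, C: s}.
    """
    if isinstance(agent, str):          # primitive agent
        return {agent: 1.0}
    weights = {}
    for p, sub in agent:                # aggregate: list of (weight, subagent)
        for name, w in flatten(sub).items():
            weights[name] = weights.get(name, 0.0) + p * w
    return weights

# (0.5*A + 0.5*B) aggregated with C at relative powers 0.4 and 0.6:
left = [(0.4, [(0.5, "A"), (0.5, "B")]), (0.6, "C")]
# the same group assembled in a single step:
right = [(0.2, "A"), (0.2, "B"), (0.6, "C")]

assert flatten(left) == flatten(right)
```

Of course this only captures the bookkeeping of relative power, not how the aggregate actually behaves; per the last bullet, a richer operad would carry that extra data.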

• I like that you glossed the phrase “have your cake and eat it too”:

It’s like a toddler thinking that they can eat their slice of cake, and still have that very same slice of cake available to eat again the next morning.

I also like that you explained the snowclone “lies, damned lies, and statistics”. I’m familiar with both of these cliches, but they’re generally overused to the point of meaninglessness. It’s clear you used them with purpose.

• The psychotic break you describe sounds very scary and unpleasant, and I’m sorry you experienced that.

• Typo: “common, share, agreed-on” should be “...shared...”.

• People are fond of using the neologism “cruxy”, but there’s already a word for that: “crucial”. Apparently this sense of “crucial” can be traced back to Francis Bacon.

# Nisan’s Shortform

12 Sep 2021 6:05 UTC
8 points