# Nisan

Karma: 5,978
• Also, chess usually ends in a draw, which is lame. Go rarely if ever ends in a draw.

# Inflection AI: New startup related to language models

2 Apr 2022 5:35 UTC
20 points
• 1 Mar 2022 1:24 UTC
LW: 21 AF: 9

CFAR used to have an awesome class called “Be specific!” that was mostly about concreteness. Exercises included:

• Rationalist taboo

• A group version of rationalist taboo where an instructor holds an everyday object and asks the class to describe it in concrete terms.

• A role-playing game where the instructor plays a management consultant whose advice is impressive-sounding but contentless bullshit, and where the class has to force the consultant to be specific and concrete enough to be either wrong or trivial.

• People were encouraged to make a habit of saying “can you give an example?” in everyday conversation. I practiced it a lot.

IIRC, Eliezer taught the class in May 2012? He talks about the relevant skills here and here. And then I ran it a few times, and then CFAR dropped it; I don’t remember why.

• Agents who model each other can be modeled as programs with access to reflective oracles. I used to think the agents have to use the same oracle. But actually the agents can use different oracles, as long as each oracle can predict all the other oracles. This feels more realistic somehow.
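A toy numerical sketch of the fixed-point flavor of this claim (not real reflective oracles, which answer queries about arbitrary probabilistic machines): in matching pennies, model each agent's "oracle" as a calibrated prediction of the other's mixed strategy, and iterate smoothed best responses until the two predictions are mutually consistent. The payoffs, the `beta` smoothing parameter, and the iteration scheme are all invented for illustration.

```python
import math

# Toy fixed-point sketch of oracle-mediated play in matching pennies.
# Agent 1 wants to match the opponent's coin; agent 2 wants to mismatch.
# Each agent plays a smoothed (logistic) best response to a prediction
# of the other's strategy; a mutual fixed point plays the role of the
# equilibrium that reflective oracles guarantee.

def smoothed_best_response(p_opponent, wants_match, beta=1.0):
    # Expected payoff of heads minus tails, given opponent plays heads
    # with probability p_opponent.
    edge = (2 * p_opponent - 1) if wants_match else (1 - 2 * p_opponent)
    return 1 / (1 + math.exp(-beta * edge))  # prob. of playing heads

p1, p2 = 0.9, 0.1  # arbitrary initial strategies (prob. of heads)
for _ in range(1000):
    p1 = smoothed_best_response(p2, wants_match=True)
    p2 = smoothed_best_response(p1, wants_match=False)

# The unique mutual fixed point is the Nash equilibrium (0.5, 0.5).
assert abs(p1 - 0.5) < 1e-6 and abs(p2 - 0.5) < 1e-6
```

Note that nothing here requires the two agents to consult the same predictor, only that each prediction is consistent with the other's behavior — the point the comment makes about distinct oracles that can predict each other.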

• Ok, I think in the OP you were using the word “secrecy” to refer to a narrower concept than I realized. If I understand correctly, if Alice tells Carol “please don’t tell Bob”, and then five years later when Alice is dead or definitely no longer interested or it’s otherwise clear that there won’t be negative consequences, Carol tells Bob, and Alice finds out and doesn’t feel betrayed — then you wouldn’t call that a “secret”. I guess for it to be a “secret” Carol would have to promise to carry it to her grave, even if circumstances changed, or something.

In that case I don’t have strong opinions about the OP.

• Become unpersuadable by bad arguments. Seek the best arguments both for and against a proposition. And accept that you’ll never be epistemically self-sufficient in all domains.

• Suppose Alice has a crush on Bob and wants to sort out her feelings with Carol’s help. Is it bad for Alice to inform Carol about the crush on condition of confidentiality?

• Your Boycott-itarianism could work just through market signals. As long as your diet makes you purchase less high-cruelty food and more low-cruelty food, you’ll increase the average welfare of farm animals, right? Choosing a simple threshold and telling everyone about it is additionally useful for coordination and maybe sending farmers non-market signals, if you believe those work.

If you really want the diet to be robustly good with respect to the question of whether farm animals’ lives are net-positive, you’d want to tune the threshold so as not to change the number of animals consumed (per person per year, compared to a default diet, over the whole community). One would have to estimate price elasticities and dig into the details of “cage-free”, etc.
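A minimal sketch of the market-signal arithmetic, with all quantities invented for illustration: a one-for-one substitution from high-cruelty to low-cruelty products holds the total number of animals consumed fixed while raising average welfare. (With unequal price elasticities the substitution would not be one-for-one, which is exactly why the threshold would need tuning.)

```python
# Toy model: a boycott-style diet that substitutes low-cruelty for
# high-cruelty animal products one-for-one keeps the number of animals
# consumed constant while raising their average welfare.
# All numbers below are made up for illustration.

baseline = {"high_cruelty": 10.0, "low_cruelty": 2.0}   # animals/person/year
welfare  = {"high_cruelty": -5.0, "low_cruelty": -1.0}  # welfare per animal

def total(diet):
    return sum(diet.values())

def avg_welfare(diet):
    return sum(diet[k] * welfare[k] for k in diet) / total(diet)

# Substitute 4 animals/year of consumption, one-for-one.
shift = 4.0
boycott = {"high_cruelty": baseline["high_cruelty"] - shift,
           "low_cruelty":  baseline["low_cruelty"] + shift}

assert total(boycott) == total(baseline)             # same number of animals
assert avg_welfare(boycott) > avg_welfare(baseline)  # higher average welfare
```

The robustness point in the comment corresponds to the first assertion: if the diet also changed `total`, its sign of effect would depend on whether farm animals' lives are net-positive.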

• 1 Dec 2021 7:23 UTC
LW: 2 AF: 1
in reply to: Charlie Steiner’s comment

I think you’re saying , right? In that case, since embeds into , we’d have embedding into . So not really a step up.

If you want to play ordinal games, you could drop the requirement that agents are computable / Scott-continuous. Then you get the whole ordinal hierarchy. But then we aren’t guaranteed equilibria in games between agents of the same order.

I suppose you could have a hybrid approach: Order is allowed to be discontinuous in its order- beliefs, but higher orders have to be continuous? Maybe that would get you to .

• And as a matter of scope, your reaction here is incorrect. [...] Reacting to it as a synecdoche of the agricultural system does not seem useful.

On my reading, the OP is legit saddened by that individual turkey. One could argue that scope demands she be a billion times sadder all the time about poultry farming in general, but that’s infeasible. And I don’t think that’s a reductio against feeling sad about an individual turkey.

Sometimes, sadness and crying are about integrating one’s beliefs. There’s an intuitive part of your mind that doesn’t understand your models of big, global problems. But, like a child, it understands the small tragedies you encounter up close. If it’s shocked and surprised, then it is still learning what the rest of you knows about the troubles of the world. If it’s angry and outraged, then there’s a sense in which those feelings are “about” the big, global problems too.

# My take on higher-order game theory

30 Nov 2021 5:56 UTC
34 points
• it legitimately takes the whole 4 years after that to develop real AGI that ends the world. FINE. SO WHAT. EVERYONE STILL DIES.

By Gricean implicature, “everyone still dies” is relevant to the post’s thesis, which would imply that the post’s thesis is that humanity will not go extinct. But the post is about the rate of AI progress, not human extinction.

This seems like a bucket error, where “will takeoff be fast or slow?” and “will AI cause human extinction?” are put in the same bucket.

• The central hypothesis of “takeoff speeds” is that at the time of serious AGI being developed, it is perfectly anti-Thielian in that it is devoid of secrets

No, the slow takeoff model just precludes there being one big secret that unlocks both 30%/year growth and Dyson spheres. It’s totally compatible with a bunch of medium-sized $1B secrets that different actors discover, adding up to hyperbolic economic growth in the years leading up to “rising out of the atmosphere”. Rounding off the slow takeoff hypothesis to “lots and lots of little innovations adding up to every key AGI threshold, which lots of actors are investing $10 million in at a time” seems like black-and-white thinking, demanding that the future either be perfectly Thielian or perfectly anti-Thielian. The real question is a quantitative one: how lumpy will takeoff be?

• I don’t think “viciousness” is the word you want to use here.

• Ah, great! To fill in some of the details:

• Given agents and numbers such that , there is an aggregate agent called which means “agents and acting together as a group, in which the relative power of versus is the ratio of to ”. The group does not make decisions by combining their utility functions, but instead by negotiating or fighting or something.

• Aggregation should be associative, so .

• If you spell out all the associativity relations, you’ll find that aggregation of agents is an algebra over the operad of topological simplices. (See Example 2 in https://arxiv.org/abs/2107.09581.)

• Of course we still have the old VNM-rational utility-maximizing agents. But now we also have aggregates of such agents, which are “less Law-aspiring” than their parts.

• In order to specify the behavior of an aggregate, we might need more data than the component agents and their relative power . In that case we’d use some other operad.
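A minimal sketch of the associativity point at the level of power-sharing weights, which is where the simplex-operad structure lives. Here an aggregate is represented only by its normalized weights over the underlying agents (a point of a simplex); a real aggregate would also carry a negotiation or bargaining rule, as the comment notes. The representation and the `aggregate` helper are assumptions for illustration.

```python
# Toy check that weighted aggregation of agents is associative: forming
# a sub-group first and then aggregating it with a third agent gives the
# same normalized power-sharing weights as aggregating all three at once.
# This is the composition law of barycentric coordinates on simplices.

def aggregate(parts):
    """parts: list of (weights, power) pairs, where weights maps
    underlying agents to their normalized share; returns combined weights."""
    total_power = sum(power for _, power in parts)
    combined = {}
    for weights, power in parts:
        for agent, w in weights.items():
            combined[agent] = combined.get(agent, 0.0) + w * power / total_power
    return combined

# Three primitive agents, each its own trivial aggregate.
A, B, C = {"A": 1.0}, {"B": 1.0}, {"C": 1.0}

# Group A and B with relative powers 1:2, then combine that group
# (power 3) with C (power 1) — versus aggregating all three at once.
nested = aggregate([(aggregate([(A, 1), (B, 2)]), 3), (C, 1)])
flat   = aggregate([(A, 1), (B, 2), (C, 1)])

assert all(abs(nested[k] - flat[k]) < 1e-9 for k in "ABC")
```

Both groupings yield weights A: 1/4, B: 1/2, C: 1/4, which is the coherence the associativity relations demand.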

• I like that you glossed the phrase “have your cake and eat it too”:

It’s like a toddler thinking that they can eat their slice of cake, and still have that very same slice of cake available to eat again the next morning.

I also like that you explained the snowclone “lies, damned lies, and statistics”. I’m familiar with both of these cliches, but they’re generally overused to the point of meaninglessness. It’s clear you used them with purpose.