A note: before I read this, I had played with asking questions about jokes and their explanations. I saw maybe half a dozen jokes that the AI spat out.
Human: “Can you tell me a joke that you have never told anyone before?”
AI: “Sure, here’s one: Why was the math book sad? Because it had too many problems.”
One of the jokes I saw was exactly this one. I didn’t save the prompts, but I believe it was something like “Give me another pun and explain why it’s funny”.
In dark mode, the comment icon is very hard to notice.
You: My initial thoughts were Charity X or Y, for maybe-naive reasons x and y. I ended up going with Z instead because Person P thought it was better. [If P described any reasoning you may summarize it here]. My reasons for going with P’s decision are complicated and weird and not very applicable to your own choice of charity; if you’ve got a bunch of time we can discuss those, but otherwise it’s not worth it IMO.
I tried this. Overall interesting. I had some good, novel thoughts. I’m not sure I had more or different good, novel thoughts about my abstract object than I would have if I’d just said “I’m gonna think about this for an hour”. I do get the sense that if I were practiced at this, I would have had novel thoughts in 10-15 minutes rather than in an hour, and that if I regularly practiced this I’d naturally have some of those novel thoughts without really reaching for them. The whole thing felt brain-stretchy in a nice way—not like “oh this is a hard math problem”, more like “oh I haven’t been to this city before apart from driving through”.
Usually 5 minutes is a great amount of time for me; here, I kept finding myself thinking “am I done?” at about 4 minutes.
Phase 2 Exercise 4 I skipped because I must have skimmed or something? and thought I was supposed to do the abstract object rather than the concrete one. So I did E5 instead, then read the instructions for E5 and was like “wait, didn’t I just do that?”
P3E1 I don’t know why, but it took me a while to believe/confirm that “the previous exercise” referred to P2E6.
When looking at my felt sense for my abstract object, I noticed that it was slippery—I was feeling a specific hypothetical instance of the object, and I was feeling the generalization of a subset of the object, but rarely The Intended Object.
I guess I’m still assuming the only reason to timestamp a statement is for the prediction-y qualities. “I was just giving myself the opportunity to prove that I wrote something in advance.” Why would this matter at all, if not for the prediction-y qualities of what you wrote? Could be a failure of imagination on my part. Can you give me a concrete example of something someone might want to write down, not share, and later prove they thought of in advance, not for the prediction-y qualities? I guess there’s “I was first so I get the patent”, and in a world where the idea doesn’t work but does contain a trade secret, you wouldn’t want to reveal it, to preserve the secret? Too convoluted—sorry, currently, due to my failure of imagination, despite your statements to the contrary, I think it’s very likely that the message was written for its prediction-y qualities.
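(To make sure we’re talking about the same mechanism: what I picture when you say “prove that I wrote something in advance” is a plain hash commitment. A minimal sketch; everything in it is made up by me:)

```python
import hashlib

# Commit: publish only the hash now; the message stays private.
# (A real scheme would append a random salt so short messages
# can't be brute-forced from the hash.)
message = b"my private claim, written in advance"
commitment = hashlib.sha256(message).hexdigest()
print("publish now:", commitment)

# Reveal later, only if you choose to: publish the message itself,
# and anyone can check it against the earlier hash.
assert hashlib.sha256(message).hexdigest() == commitment
```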
(Also—is there a reason I should believe that if all did go according to plan, when you revealed your message, you would also have said “if all had not gone according to plan, I would not have revealed this message”? ’cause I currently think there’s a very low chance you would have said that. There would be at least a 1% chance it would have been advantageous to avoid saying that, for sure.)
there’s little reason to share
I now believe I should treat any supposed information coming from you as much more likely to be filtered evidence than I would usually suspect. :(
I chose exactly the wrong D&D.Sci to decide to not try building a model on, and instead try to solve just by eyeballing simple scatterplots.
Despite coming in “last place” I’m pretty happy with my results!
I think this was a perfectly reasonable setup. Even more so given that without any straightforward scenarios people won’t think “what if it’s just straightforward though”.
I thought many times during eyeballing “look, probably no one else has tried to build a model using all combos of mins and maxes of stats as features, just do it” but I stuck to my guns. For, uh, reasons. Presumably.
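(For concreteness, the kind of feature construction I mean; a sketch with made-up column names and toy data, doing pairs only rather than all combos:)

```python
from itertools import combinations

import pandas as pd

def min_max_pair_features(df, stats):
    """Add min and max of every pair of stat columns as new features."""
    out = df.copy()
    for a, b in combinations(stats, 2):
        out[f"min({a},{b})"] = df[[a, b]].min(axis=1)
        out[f"max({a},{b})"] = df[[a, b]].max(axis=1)
    return out

stats = ["Courage", "Integrity", "Intellect", "Patience", "Reflexes"]
df = pd.DataFrame([[40, 55, 62, 30, 48]], columns=stats)  # toy row
print(min_max_pair_features(df, stats))
```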
My biggest mistake, as I see it, was failing to generalize from “I have several max-of-two-formulas and a Dragonslayer distribution that is obviously a mix of two which I can’t seem to resolve into something nice, probably all of them are max-of-two-formulas, let’s see if I can refactor things to look like that and get a better idea of what Dragonslayer’s two might look like if the rest do factor well”.
I wonder how it would change things if there was an additional rule: “the button will be taken offline after X hours, pulled from [publish the distribution], unknown to anyone but Ruby in advance”.
The Ofstev rating of someone sorted into Thought-Talon can be modeled as follows:
lower = 1/2 × min(Intellect, Patience)
upper = 3/2 × min(Intellect, Patience)
~ triangular distribution with min = lower, max = upper, mode = 30% of the way from lower to upper
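(Sketching that in Python, purely to pin down the shape I mean; the stat numbers are made up:)

```python
import random

def thought_talon_ofstev(intellect, patience):
    """Sample a hypothesized Ofstev rating for a Thought-Talon student."""
    base = min(intellect, patience)
    lower, upper = 0.5 * base, 1.5 * base
    mode = lower + 0.3 * (upper - lower)  # 30% of the way from lower to upper
    return random.triangular(lower, upper, mode)

print(thought_talon_ofstev(60, 45))  # made-up stats
```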
Each other house can be modeled similarly. …not that I fully succeeded at doing so. Just a guess. But sketching:
Serpentyne is between 3/4 × [min(Intellect, Reflexes, Patience) − 10] and max(Reflexes, Patience) − 5.

Humblescrumble is between, uh,

max(max(8, 3/4 × (min(Integrity, Intellect) − 15)), 1/4 × (max(Integrity, Patience) + 5))

and

min(max(30, 3/4 × (min(Integrity, Intellect) + 15)), 3/4 × (max(Integrity, Patience) + 5))

which is definitely 100% accurate.

Dragonslayer is between max(5/6 × min(everything) − 4, 3/2 × min(everything) − 20) and 3/2 × min(Courage, max(everything else)), and also this one doesn’t yield something that looks triangular, so yeah, probably not that.
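(Same guesses as code, in case anyone wants to poke at them; the functions and the sample student are mine, and exactly as trustworthy as the formulas above:)

```python
def serpentyne_bounds(s):
    lo = 0.75 * (min(s["Intellect"], s["Reflexes"], s["Patience"]) - 10)
    hi = max(s["Reflexes"], s["Patience"]) - 5
    return lo, hi

def humblescrumble_bounds(s):
    lo = max(8,
             0.75 * (min(s["Integrity"], s["Intellect"]) - 15),
             0.25 * (max(s["Integrity"], s["Patience"]) + 5))
    hi = min(max(30, 0.75 * (min(s["Integrity"], s["Intellect"]) + 15)),
             0.75 * (max(s["Integrity"], s["Patience"]) + 5))
    return lo, hi

def dragonslayer_bounds(s):
    m = min(s.values())
    lo = max(5 / 6 * m - 4, 1.5 * m - 20)
    hi = 1.5 * min(s["Courage"], max(v for k, v in s.items() if k != "Courage"))
    return lo, hi

stats = {"Courage": 50, "Integrity": 40, "Intellect": 60,
         "Patience": 45, "Reflexes": 35}  # made-up student
for f in (serpentyne_bounds, humblescrumble_bounds, dragonslayer_bounds):
    print(f.__name__, f(stats))
```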
In any case, trying to maximize EV assuming those are right yields my new submission:
Dragonslayer [A, E, H]
Humblescrumble [D, I, R]
Serpentyne [G, L, M, N, P]
Thought-Talon [B, C, F, J, K, O, Q, S, T]
Students may reach their potential in many ways, as long as they are not actively prevented.
Through sophisticated techniques (eyeballing), my own hat has recommended:
Dragonslayer [G, K, N]
Humblescrumble [A, E, R, T]
Humblescrumble? [L, M]
Serpentyne [C, F, H, O, S]
Serpentyne :( [B, D]
Serpentyne/Humblescrumble [Q]
Serpentyne? [P]
Thought-Talon [J]
Thought-Talon :( :( [I]
Otherwise known as:
Dragonslayer: [G, K, N]
Thought-Talon: [I, J]
Serpentyne: [B, C, D, F, H, O, P, Q, S]
Humblescrumble: [A, E, L, M, R, T]
(Completely revised in followup comment)
My point was that (0.25)^n for large n is very small, so no, it would not be easy.
How many times do you think he has changed his expected time to disaster to 25% of what it was?
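(For scale, a quick check of how fast that shrinks, in Python:)

```python
for n in (1, 2, 5, 10):
    print(n, 0.25 ** n)
# 1 0.25
# 2 0.0625
# 5 0.0009765625
# 10 9.5367431640625e-07
```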
Brandon has been a professional game developer since 1998, starting his career at Epic Games with engineering and design on Unreal Tournament and Unreal Engine 1.0. More recently, Brandon spent 12 years at Valve wearing (and inventing) hats. Many, many hats… Brandon has spent considerable amounts of time in development and leadership on Team Fortress 2 and Dota 2 where he wrote mountains of code and pioneered modern approaches to game development. Also an advisor for the Makers Fund family of companies, Brandon offers his expertise to game startups at all stages of growth.
“The previously observed drop off in the value of additional miners after 5 seem to occur because it makes it less likely for other valuable types to be present, not because it is intrinsically bad.”

My go-to check when there’s decent data is to compare P(something | N miners, M dwarves) to P(something | N−1 miners, M−1 dwarves).
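(A minimal sketch of that check in pandas; the column names and toy numbers are mine, not the scenario’s:)

```python
import pandas as pd

# Toy data standing in for the real expedition log (all values made up):
df = pd.DataFrame({
    "miners":   [5, 6, 5, 6],
    "dwarves":  [12, 13, 12, 13],
    "survived": [1, 1, 0, 1],
})

def p_outcome(df, n_miners, m_dwarves, outcome="survived"):
    """P(outcome | exactly n_miners miners on a team of m_dwarves dwarves)."""
    subset = df[(df["miners"] == n_miners) & (df["dwarves"] == m_dwarves)]
    return subset[outcome].mean() if len(subset) else float("nan")

# The marginal 6th miner: same team minus one miner.
print(p_outcome(df, 6, 13), "vs", p_outcome(df, 5, 12))
```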
Miners: 5
Smiths: 1
Woodcutters: 1
Farmers: 2
Brewers: 1
Warriors: 2
Crafters: 1
I expect to survive: in the Light Forest, 2 Farmers and 2 Warriors seem necessary for good odds and also sufficient for great odds. I suspect the Brewer is not needed, except that obviously the Brewer is needed. I expect my profits are not maximized without some rearrangement; I didn’t try to account for which resources were present much at all.
I did not know any specifics. I did think it was worth my time to start skimming because I have another interesting, vaguely related problem; then I thought it was worth my time to understand what the objection buried under the word salad (yes, Benjamin, it’s word salad) might be, because it seemed like there might actually be one. And there was! Standard lambda calculus at face value doesn’t work with nonconstructive proofs. That’s interesting and I didn’t know it. Then:
as expected, looks like there’s plenty of work on this, and there’s nothing actually surprising here. My standard practice after doing something like this is to leave a perfectly-reasonable-if-they-were-reasonable question getting to the heart of what’s up, as I did; I can afford this of course because I’m much less high profile than you, or, y’know, any physicist. :D
Interestingly and a complete aside: I grew up with a close relative who wrote in That Distinctive Style and only later encountered it on the wider internet, and wasn’t that a revelation.
[Googles] Why does something like https://arxiv.org/pdf/2006.05433.pdf not resolve things? Is it simply wrong? Is it not actually applicable?
A program for the full axiom of choice
The theory of classical realizability is a framework for the Curry-Howard correspondence which enables to associate a program with each proof in Zermelo-Fraenkel set theory. But, almost all the applications of mathematics in physics, probability, statistics, etc. use Analysis i.e. the axiom of dependent choice (DC) or even the (full) axiom of choice (AC). It is therefore important to find explicit programs for these axioms. Various solutions have been found for DC, for instance the lambda-term called “bar recursion” or the instruction “quote” of LISP. We present here the first program for AC.
I now agree with you. Or possibly with a steelmanned you, who can say. ;)
This is why I was stressing that “chaa” and “fair” are very different concepts, and that this equilibrium notion is very much based on threats. They just need to be asymmetric threats that the opponent can’t defuse in order to work (or ways of asymmetrically benefiting yourself that your opponent can’t ruin, that’ll work just as well).
(from the next post in this sequence https://www.lesswrong.com/posts/RZNmNwc9SxdKayeQh/unifying-bargaining-notions-2-2)
in physical reality, payoffs outside of negotiations can depend very much on the players’ behavior inside the negotiations, and thus is not a constant. Nash himself wrote about this limitation (Nash, 1953) just three years after originally proposing the Nash bargaining solution. For instance, if someone makes an unacceptable threat against you during a business negotiation
(from Critch’s first boundary post https://www.lesswrong.com/posts/8oMF8Lv5jiGaQSFvo/boundaries-part-1-a-key-missing-concept-from-utility-theory)
I’m not really concerned about saying “but reputation matters; the solution you land on here affects your reputation later” since that should be baked into the payoffs.
But I do think it’s important to note that what happens during negotiation can affect the payoffs even of the current game, which this analysis otherwise treats as constant.
A better example might be literally paying for something in a marketplace you’re not going to visit again. You don’t have much cash, but you do have barter items. Barter what you’ve got, compensate for the difference. The cooperative part is “yes, a trade is good”; the competitive part is “but where on the list of acceptable barters will we land?”
I guess the difficulty is that the example really does want to say “all games can be decomposed like this if they’re denominated, not just games that sound kind of like cash”, but any game without significant reputational/relationship effects is gonna sound kind of like cash.
Maybe a side note to not forget outside-of-game considerations? But I’m perfectly fine reading about 4⁄3 pi r^3 without “don’t forget that actually things have densities that are never uniform and probably hard to measure and also gravity differs in different locations and in fact you almost certainly have an ellipsoid or something even more complicated instead”, and definitely prefer a world that can present it simply without having to take into account everything in the real world you’d actually have to account for when using the formula in a broader context.