Hello! I work at Lightcone and like LessWrong :-)
It feels a lot like “Person Do Thing: the language”. In fact, the 49 words are close to a subset of toki pona’s. But toki pona is more expressive. Obviously there are a bunch more words, but also every word can be used as every part of speech, and the grammar disambiguates which part of speech it is. That makes it surprisingly usable. Still, toki pona sentences do feel like puzzles to me.
FWIW, “powe” has been removed from “official” toki pona. A more standard translation might be “sona ike lili”.
Smaller class sizes sound pretty good! Maybe worth paying for? But I am reminded of the claim that most flights are nearly empty, even though most people find themselves on full flights. Similarly, most person-class-hours might be spent in the biggest classes (cf. the inspection paradox).
The pirates win because they don’t have to fight you.
Only if you buy the shares second, right? If they would have fought anyway, without your manipulation, then they think they’re better off getting paid and fighting you.
This post made me feel confusion about how money keeps its value over time. So, uh … thanks!
The retirement savings/oven example gave me a giddy moment of thinking that the value of money shouldn’t be stable. And, y’know, there is in fact inflation, deflation and stuff!
Now, money’s value does in fact stay pretty stable, but that feels like something that needs a mechanism to make it true, rather than being the default.
Hm. This is the most important question for how much utility the pirates get? I agree it’s the most important for deciding whether the pirates attack you or not. I feel like it’s not surprising if the order affects which point on the Pareto frontier we end up at.
A delightfully non-distortionary resolution. As they say on the billboards, “Everybody works but the vacant [leaderboard s]lot”.
I would buy the Leg Cones but am holding off for the sake of the epistemic commons. If my butt cheek were wrong, no one would want to be right.
Wonderful method! I am a poop brain. Manifold rules ~~~
Thank you.
And have fun!
Full flights have more people on them. If you have 100 flights with one person each and 1 flight with 200 people, most of the people across those flights are on the 200-person flight.
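For concreteness, a minimal sketch of the two averages with those numbers (Python, purely illustrative):

```python
# Inspection paradox: flight-weighted vs. passenger-weighted average load.
flights = [1] * 100 + [200]  # 100 one-person flights, 1 full flight

# Average load per flight (what the airline's schedule sees):
avg_per_flight = sum(flights) / len(flights)

# Average load per passenger (each passenger experiences
# the load of the flight they themselves are on):
avg_per_passenger = sum(n * n for n in flights) / sum(flights)

print(avg_per_flight)     # ~2.97 people per flight
print(avg_per_passenger)  # ~133.67 people, as the typical passenger sees it
```

So almost every flight is near-empty, but the typical passenger is on the big one.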
Not quite! If there were no central bank, money’s value would not jump around aggressively and discontinuously.
I agree on the “reference” distribution in Daniel’s example. I think it generally means “the distribution over the random variables that would obtain without the optimiser”. What exactly that distribution is / where it comes from is, I think, out of scope for John’s (current) work, and I think it’s kind of the same question as where the probabilities come from in statistical mechanics.
Nitpick: to the extent you want to talk about the classic example, paperclip maximisers are as much meant to illustrate (what we would now call) inner alignment failure.
See Arbital on Paperclip (“The popular press has sometimes distorted the notion of a paperclip maximizer into a story about an AI running a paperclip factory that takes over the universe. [...] The concept of a ‘paperclip’ is not that it’s an explicit goal somebody foolishly gave an AI, or even a goal comprehensible in human terms at all.”) or a couple of EY tweet threads about it: 1, 2
Good point!
It seems like it would be nice in Daniel’s example for P(A|ref) to be the action distribution of an “instinctual” or “non-optimising” player. I don’t know how to recover that. You could imagine something like an n-gram model of player inputs across the MMO.
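A minimal sketch of the kind of thing I mean, with made-up action tokens (the log and the bigram choice are just illustrative; a real reference model would condition on far more context):

```python
from collections import Counter, defaultdict

def ref_bigram(history):
    """Estimate a reference distribution P(A | previous action) from logged inputs.

    `history` is a flat list of discrete action tokens. The idea is that
    n-gram statistics pooled across the MMO's players stand in for a
    "non-optimising" player's action distribution P(A|ref).
    """
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {a: c / sum(nxts.values()) for a, c in nxts.items()}
        for prev, nxts in counts.items()
    }

# Hypothetical pooled inputs from many players:
history = ["walk", "walk", "attack", "walk", "trade", "walk", "attack"]
print(ref_bigram(history)["walk"])
# {'walk': 0.25, 'attack': 0.5, 'trade': 0.25}
```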
The Shannon entropy of a random variable, conditional on the value of another random variable, can be written as H(X|C) = Σ_c p(c) H(X | C = c).
If X and C are which face is up for two different fair coins, H(X) = H(C) = 1 bit. But H(X,C) = 2 bits ≠ H(X)? I think this works out fine for your case because (a) I(X,C) = H(C): the mutual information between C (which well you’re in) and X (where you are) is the entropy of C; (b) H(C|X) = 0: once you know where you are, you know which well you’re in; and, relatedly, (c) H(X,C) = H(X): the entropy of the joint distribution just is the entropy over X.
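To spell out why those hold (a sketch, assuming, as in the wells setup, that C is a deterministic function of X; all three then follow from standard identities):

```latex
% Assume C = f(X): your position determines which well you're in.
\begin{align*}
  H(C \mid X) &= 0                         && \text{(b): } C \text{ is determined by } X \\
  I(X, C)     &= H(C) - H(C \mid X) = H(C) && \text{(a)} \\
  H(X, C)     &= H(X) + H(C \mid X) = H(X) && \text{(c), by the chain rule}
\end{align*}
```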
stenographically
steganographically?
Future perfect (hey, that’s the name of the show!) seems like a reasonable hack for this in English.
My impression was that “zero-sum” was not used in quite the standard way. I think the idea is that the AI will cause a big reassignment of Earth’s capabilities to its own control, and that this is contrasted with the AI massively increasing its own capabilities, and thus Earth’s overall capabilities.
Heartbreaking CDT. I’ve got a Transparent Newcomb’s I’d like to sell you.