Wow, this sure is a much clearer way to look at the self-pseudo-prediction/action-plan thingy than any I’ve seen laid out before.
Do we mean active LessWrong users? <10% would shock me, if you use a filter or weighting that handles the “a lot of people have looked at LessWrong at some point without being ‘real’ LessWrongers” problem.
Maybe it’s less than half though. There might be a large contingent that has only read like, HPMOR and the Sequences Highlights.
Point out that most people’s clothes don’t really manage to signal what they intended even when they’re trying, and someone will say something like “well, it’s largely about signalling to oneself, e.g. to build confidence, so it doesn’t matter if other people get the signal”. And, like… I roll to disbelieve?
I think having a good model of the vibe people get from clothing is important, and I find it plausible that there is some rationalization going on with this… but also, the self-signaling thing does seem to me like a large enough aspect to be the most important part, even if the other-signaling aspect isn’t entirely unimportant.
I think part of this is probably that beauty is much less one-dimensional?
I agree Tessa’s explanation isn’t especially good, though it’s maybe more “incomplete” than “bogus”.
I don’t think the minimax theorem comes anywhere close to implying the existence of some sort of true optimal strategy, though, which I think becomes clear if you consider two types of chess bots. Bot A plays the same move as (something like) LeelaPieceOdds—unless that move would take it from a game theoretically won position to a draw or loss, or from a draw to a loss, in which case it, say, randomly selects from all the moves that don’t do that, or ideally picks a move that humans are inclined to blunder against (maybe LeelaPieceOdds’s second choice, or something).
On the other hand, Bot B immediately resigns whenever the position is a game theoretic loss, and immediately offers a draw whenever the position is a game theoretic draw. If its opponent rejects the draw offer, Bot B prefers to stay in states with the fewest opportunities for its opponent to blunder.
While both are “inexploitable”, Bot A beats humans every time, and Bot B draws them every time. (Unless chess is a game theoretic win for (WLOG) white, in which case Bot B wins as white but immediately resigns as black.) If you made Bot B a chess.com account, it very well might literally never break 1000 Elo.
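In pseudo-Python, with value() standing in for a hypothetical perfect game-theoretic oracle (nothing tractable like that exists for chess; it’s just what the thought experiment grants both bots) and leela_ranked_moves() standing in for something like LeelaPieceOdds’s move ranking. All the names here are made up for illustration:

```python
import random

RESIGN, OFFER_DRAW = "resign", "offer draw"

def value(pos):
    """Hypothetical oracle: game-theoretic value for the side to move
    (+1 win, 0 draw, -1 loss). Intractable for real chess."""
    raise NotImplementedError

def legal_moves(pos): raise NotImplementedError       # stub
def apply_move(pos, move): raise NotImplementedError  # stub

def leela_ranked_moves(pos):
    """Stand-in for a strong, humanly-tricky move ranking."""
    raise NotImplementedError

def bot_a_move(pos):
    # After a value-preserving move the side to move flips, so the child
    # position is worth -value(pos) to the opponent.
    safe = [m for m in legal_moves(pos)
            if value(apply_move(pos, m)) == -value(pos)]
    for m in leela_ranked_moves(pos):  # prefer moves humans blunder against
        if m in safe:
            return m
    return random.choice(safe)  # nonempty by backward induction

def bot_b_move(pos):
    v = value(pos)
    if v == -1:
        return RESIGN
    if v == 0:
        return OFFER_DRAW
    # Winning (or a declined draw offer): any value-preserving move will do;
    # picking the one with the fewest opponent blunder chances is elided here.
    return next(m for m in legal_moves(pos)
                if value(apply_move(pos, m)) == -v)
```

Neither bot ever steps outside the value-preserving set, which is all “inexploitable” demands; everything that separates their results against humans lives in which safe move they pick.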
So in pathological cases the non-transitivity can get pretty bad. The tops result sounds really neat, but I haven’t read it yet and don’t know exactly how Bot B would fit into it—obviously “<1000 Elo” isn’t a description of Bot B that captures very well how it fits into such a structure.
I swear I don’t laugh about janus generally; it’s just that the way you wrote that paragraph was really funny.
You do get one guarantee, though: All the experiments are Bernoulli processes. In particular, the order of the trials is irrelevant.
I think those aren’t quite equivalent statements? If I pick my favorite string of bits, and shuffle it by a random permutation, then every position is equally likely to hold a 1, and the order is totally irrelevant (it was chosen at random), but it’s not Bernoulli, because the trials aren’t independent of each other (if you know what my favorite string of bits is, you can learn the final bit as soon as you’ve observed all the rest).
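A quick toy check (the “favorite string” 1101 is just made up for illustration):

```python
import random
from collections import Counter

favorite = [1, 1, 0, 1]  # any fixed string with a lopsided mix works

def shuffled():
    s = favorite[:]
    random.shuffle(s)  # uniform random permutation
    return s

samples = [shuffled() for _ in range(100_000)]

# Exchangeable: every position has the same marginal P(bit = 1) = 3/4.
for i in range(4):
    print(i, sum(s[i] for s in samples) / len(samples))

# Not independent: the multiset of bits is fixed (three 1s, one 0), so the
# first three bits pin the last one down exactly.
print(Counter((tuple(s[:3]), s[3]) for s in samples))
```

Every position shows the same ~0.75 frequency, but each three-bit prefix only ever occurs with one particular final bit: exchangeable, not Bernoulli.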
I think my lease might not let me paint my room green? Then again, maybe it does, or maybe if I painted it back before leaving I could get away with it...
Inducing trance isn’t superhuman!
You might think it’s “p of doom” like how “f(x)” is “f of x”, but actually I think people usually say just “p doom”.
The most elite groups (like billionaires or jews)
This quote looks pretty bad, but...
The most elite groups (like billionaires or Jews) are often the ones it’s most socially acceptable to blame for problems, or even call for violence against.
Now, you could maybe still critique this quote, but it reads very differently than when you cut the sentence off immediately!
That’s the picture that someone would come away with, after reading your characterization. And, of course, it would be completely inaccurate.
I’m not sure the more accurate picture is one of flawless behavior or anything, but I do think I definitely had an inaccurate picture in the way Said describes.
It definitely seems worth knowing about and understanding, but stuff like needing to specify a universal Turing machine does still give me pause. That doesn’t make it uninsightful, but I do still think there is more work to do to really understand induction.
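For concreteness, the standard invariance theorem (a textbook fact, not anything from the post): for any two universal machines $U$ and $V$ there’s a constant $c_{U,V}$, independent of the string $x$, with

$$K_U(x) \le K_V(x) + c_{U,V}, \qquad \text{hence} \qquad M_U(x) \ge 2^{-c_{U,V}} \, M_V(x)$$

for the corresponding priors. “Only a constant” sounds reassuring, but $c_{U,V}$ is roughly the length of an interpreter for $V$ written for $U$, and nothing stops that from dwarfing the evidence you’ll ever actually collect.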
On the contrary, I think the authorship of a text is often relevant in some way! My enjoyment of this blog post was much furthered by my acquaintance with its author.
I suppose maybe the relevance isn’t logical, but one cares about much beyond logic.
The recommendations I hear for MDMA usage are like, not much more than 200mg, wait at least a month and preferably three months between rolls, take the supplements recommended by https://rollsafe.org, have lots of water and electrolytes, don’t overheat.
Following this advice, I’ve maybe gotten a headache, but definitely not the terrible crashes some people report—though I may be unusually lucky in that regard.
Of the drugs I’ve tried, it’s definitely the one where I’d be most concerned about disregarding cautious safety procedures.
...You do not appear to me to have very much regard for the truth, given the whole thing where you declared that someone had not updated when they obviously had, based only on their refusing to talk to you while you were being kind of rude.
I think it’s obvious that you should not pursue 3D chess without investing serious effort in making sure that you play 3D chess correctly. I think there is something to be said for ignoring the shiny clever ideas and playing simple virtue ethics.
But if a clever scheme is in fact better, and you have accounted for all of the problems inherent to clever schemery, of which there are very many, then… the burden of proof isn’t literally insurmountable; you’re just unlikely to end up surmounting it in practice.
(Unless it’s 3D chess where the only thing you might end up wasting is your own time. That has a lower burden of proof. Though still probably don’t waste all your time.)
Why do I have dozens of points of strong-upvote and strong-downvote strength, but no more agreement strength than before I began my strength training? Does EA not think agreement is important?
While I would hate to besmirch the good name of the fewerstupidmistakesist community, I cannot help but feel that misunderstanding morality and decision theory enough to end up doing a murder is a stupider mistake than drawing a gun once a firefight has started, though perhaps not quite as stupid as beginning the fight in the first place.
Yeah, I’m pretty sure it’s an idiosyncratic mental technique / human psychology observation; there isn’t technical agent foundations progress here.