nods I do agree with this to a significant degree. Note that one of the reasons for the frontpage/personal distinction is to allow people to opt-out of a lot of social-drama stuff, and generally create a space (the frontpage) in which you don’t have to keep track of a lot of this social stuff, and can focus on the epistemic content of the posts.
I agree with most of this, and do think that it’s very clearly worth it for us to continue announcing and publicly communicating anything in the reference class of the OP (as well as the vast majority of things less large than that).
I think you are misunderstanding the comment above. As moridinamael says, this is about the counterfactual in which the moderation team goes crazy for some reason, which I think mostly bottoms out in where the actual power lies. If Eliezer decides to ban everyone tomorrow, he was always able to do that, and I don’t think anyone would really have the ability to stop him now (since MIRI still owns the URL and a lot of the data). This has always been the case, and if anything is less the case now, but in either case is a counterfactual I don’t think we should optimize too much for.
Edit Note: I fixed some of the formatting in this post. Feel free to revert it.
Yeah, my current commenting guidelines are empty. Other users have non-empty commenting guidelines.
The FAQ covers almost all the site-functionality, including karma. Here is the relevant section:
You can also link to subsections, if you just right-click on the relevant section in the ToC and select “Copy Link Address”.
I’ve also spent 30 minutes looking for anything in this space and didn’t find anything. The closest that I could find was Neuroeconomics.
Promoted to curated: It’s been a while since this post came out, but I’ve been thinking of the “credit assignment” abstraction a lot since then, and found it quite useful. I also really like the way the post made me curious about a lot of different aspects of the world, and I liked the way it invited me to boggle at the world together with you.
I also really appreciated your long responses to questions in the comments, which clarified a lot of things for me.
One thing comes to mind for maybe improving the post, though I think that’s mostly a difference of competing audiences:
I think some sections of the post end up referencing a lot of really high-level concepts, in a way that I think is valuable as a reference, but also in a way that might cause a lot of people to bounce off of it (even people with a pretty strong AI Alignment background). I can imagine a post that includes very short explanations of those concepts, or moves them into a context where they are more clearly marked as optional (since I think the post stands well without at least some of those high-level concepts).
nods Seems good. I agree that there are much more interesting things to discuss.
nods You did say the following:
I honestly don’t see how they could sensibly be aggregated into anything at all resembling a natural category
I interpreted that as saying “there is no resemblance between attending a CFAR workshop and reading the sequences”, which seems to me to include the natural categories of “they both include reading/listening to largely overlapping concepts” and “their creators largely shared the same aim in the effects it tried to produce in people”.
I think there is a valuable and useful argument to be made here that in the context of trying to analyze the impact of these interventions, you want to be careful to account for the important differences between reading a many-book-length set of explanations and going to an in-person workshop with in-person instructors, but that doesn’t seem to me what you said in your original comment. You just said that there is no sensible way to put these things into the same category, which just seems obviously wrong to me, since there clearly is a lot of shared structure to analyze between these interventions.
I mean, a lot of the CFAR curriculum is based on content in the sequences, the handbook covers a lot of the same declarative content, and they are setting out with highly related goals (with Eliezer helping with early curriculum development, though much less so in recent years). The beginning of R:A-Z even explicitly highlights how he thinks CFAR is filling in many of the gaps he left in the sequences, clearly implying that they are part of the same aim.
Sure, there are differences, but overall they are highly related and I think can meaningfully be judged to be in a natural category. Similar to how a textbook and a university-class or workshop on the same subject are obviously related, even though they will differ on many relevant dimensions.
Note that all three of the linked papers are about “boundedly rational agents with perfectly rational principals” or about “equally boundedly rational agents and principals”. I have been so far unable to find any papers that follow the described pattern of “boundedly rational principals and perfectly rational agents”.
I am confused. If MWI is true, we are all already immortal, and every living mind is instantiated a very large number of times, probably literally forever (since entropy doesn’t actually increase in the full multiverse, and is just a result of statistical correlation, but if you buy the quantum immortality argument you no longer care about this).
Bayesian agents are logically omniscient, and I think a large fraction of deceptive practices rely on asymmetries in computation time between two agents with access to slightly different information (like generating a lie, and then checking the consistency of this new statement against all my previous statements).
My sense is also that two-player games with Bayesian agents are actually underspecified and give rise to all kinds of weird things due to the necessity for infinite regress (i.e. an agent modeling the other agent modeling themselves modeling the other agent, etc.), which doesn’t actually reliably converge, though I am not confident. A lot of decision theory seems to do weird things with Bayesian agents.
So overall, I'm not sure how well you can prove theorems in this space without having made a lot of progress in decision theory, and I expect a lot of our confusions in decision theory to be resolved by moving away from Bayesianism.
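The computation asymmetry mentioned above can be illustrated with a toy sketch (my own construction, not something from the comments: the `WORLD`, `truthful`, and `liar` names are all hypothetical). A truthful agent answers queries straight from the world state at constant cost, while a lying agent has to keep each new answer consistent with every claim it has already made, so its per-answer cost scales with the length of its own history:

```python
import random

random.seed(0)

# Hypothetical "world state": 50 queries, each with a true numeric answer.
WORLD = {f"q{i}": random.randint(0, 9) for i in range(50)}


def truthful(query, history):
    """O(1) per answer: just report the truth. Returns (answer, comparisons)."""
    return WORLD[query], 0


def liar(query, history):
    """Scan the full claim history so the new answer matches any earlier lie.

    Returns (answer, comparisons). For a query never seen before, the number
    of comparisons equals len(history) -- the liar's cost grows with its past.
    """
    comparisons = 0
    for past_query, past_answer in history:
        comparisons += 1
        if past_query == query:
            return past_answer, comparisons  # repeat the earlier lie
    # First time seeing this query: invent a false answer.
    fake = (WORLD[query] + 1) % 10
    return fake, comparisons
```

Of course, real consistency checking is much harder than matching repeated queries by equality (it is closer to general constraint satisfaction), which only strengthens the asymmetry the toy model shows.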
Yep, that’s correct. We experimented with some other indicators, but this was the one that seemed least intrusive.
I am also interested in this, and would give around $50 for some good sources on this (this is not a commitment that I will pay the best answer to this question, just that if an answer is good enough, I will send the person $50).
I mean, I agree that Coca-Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that’s very different from the thing with Theranos.
I model Coca-Cola mostly as damaging for my health, and model its short-term positive performance effects to be basically fully mediated via caffeine, but I still think it’s providing me value above and beyond those benefits, and outweighing the costs in certain situations.
Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos’ capabilities, and had accurate beliefs about its technologies, would give money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and would be really highly surprised if there turns out to be a fact about coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me, and it doesn’t seem like that’s what you are arguing for.
Somewhat confused by the Coca-Cola example. I don’t buy coke very often, but it seems usually worth it to me when I do buy it (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition?
Yeah, I agree with this. I’ve been more annoyed by performance as well lately, and we are pretty close to shipping a variety of performance improvements that I expect will make a significant difference here (and have a few more in the works afterwards, though I think it will be quite a while until we are competitive with GreaterWrong performance-wise, in large part due to just fundamentally different architectures).
Promoted to curated: I think this post captured some core ideas in predictions and modeling in a really clear way, and I particularly liked how it used a lot of examples and was just generally very concrete in how it explained things.
I really like this concept. It currently feels to me like a mixture between a fact post and an essay.
From the fact-post post:
You explicitly do not look for opinion, even expert opinion. You avoid news, and you’re wary of think-tank white papers. You’re looking for raw information. You are taking a sola scriptura approach, for better and for worse.
And then you start letting the data show you things.
You see things that are surprising or odd, and you note that.
You see facts that seem to be inconsistent with each other, and you look into the data sources and methodology until you clear up the mystery.
You orient towards the random, the unfamiliar, the things that are totally unfamiliar to your experience. One of the major exports of Germany is valves? When was the last time I even thought about valves? Why valves, what do you use valves in? OK, show me a list of all the different kinds of machine parts, by percent of total exports.
From Paul Graham’s essay post:
Figure out what? You don’t know yet. And so you can’t begin with a thesis, because you don’t have one, and may never have one. An essay doesn’t begin with a statement, but with a question. In a real essay, you don’t take a position and defend it. You notice a door that’s ajar, and you open it and walk in to see what’s inside.

If all you want to do is figure things out, why do you need to write anything, though? Why not just sit and think? Well, there precisely is Montaigne’s great discovery. Expressing ideas helps to form them. Indeed, helps is far too weak a word. Most of what ends up in my essays I only thought of when I sat down to write them. That’s why I write them.