# lunatic_at_large

Karma: 19
• Hmmm, I’m still thinking about this. I’m kinda unconvinced that you even need an algorithm-heavy approach here. Let’s say you want to apply logit, add some small amount of noise, apply logistic, then score. Consider the function on R^n defined as (score function) composed with (coordinate-wise logistic function). We care about the expected value of this function with respect to the probability measure induced by our noise. For very small noise, you can approximate this function by its power series expansion. For example, if we’re adding iid Gaussian noise, look at the second-order approximation: in the limit as the standard deviation of the noise goes to zero, the expected change is a constant (1/2, which falls out of a Gaussian integral) times the Laplacian of our function on R^n times the square of the standard deviation. Thus the Laplacian is very closely related to this precision we care about (it basically determines it for small noise). For most reasonable scoring functions, the Laplacian should have a closed-form expression. I think that gets you out of having to simulate anything. Let me know if I messed anything up! Cheers!
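To make the claim above concrete, here’s a quick numerical sanity check that E[f(x + σZ)] − f(x) ≈ (σ²/2)·Δf(x) for iid Gaussian noise. The function `f` below is a hypothetical stand-in for (score ∘ coordinate-wise logistic), chosen only because its Laplacian has an easy closed form:

```python
import numpy as np

# Sanity check: for small iid N(0, sigma^2) noise Z,
#   E[f(x + sigma*Z)] - f(x) ≈ (sigma^2 / 2) * Laplacian(f)(x).
# f is a made-up smooth stand-in for (score ∘ coordinate-wise logistic).

def f(x):
    return np.sin(x).sum(axis=-1)

def laplacian_f(x):
    # sum of second partials: d^2/dx_i^2 sin(x_i) = -sin(x_i)
    return -np.sin(x).sum()

rng = np.random.default_rng(0)
x = np.array([0.3, 1.1, -0.7])
sigma = 0.05

# Antithetic pairs (z, -z) cancel the first-order term, cutting MC variance.
z = rng.standard_normal((200_000, x.size))
mc = 0.5 * (f(x + sigma * z) + f(x - sigma * z)).mean() - f(x)

# Second-order prediction from the Laplacian alone: no simulation needed.
pred = 0.5 * sigma**2 * laplacian_f(x)
```

The Monte Carlo estimate and the closed-form prediction agree to within sampling error, which is the point: for small noise the Laplacian alone tells you the expected change.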

• If my interpretation of precision function is correct then I guess my main concern is this: how are we reaching inside the minds of the predictors to see what their distribution over the true probability is? Like, imagine we have an urn with black and red marbles in it and we have a prediction market on the probability that a uniformly randomly chosen marble will be red. Let’s say that two people participated in this prediction market: Alice and Bob. Alice estimated there to be a 0.3269230769 (i.e., 17/52) chance of the marble being red because she saw the marbles being put in and there were 17 red marbles out of 52 marbles total. Bob estimated there to be a 0.3269230769 chance of the marble being red because he felt like it. Bob is clearly providing false precision while Alice is providing entirely justified precision. However, no matter which way the urn draw goes, the input tuple (0.3269230769, 0) or (0.3269230769, 1) will be the same for both participants, and thus the precision returned by any precision function will be the same. This feels to me like a fundamental disconnect between what we want to measure and what we are measuring. Am I mistaken in my understanding? Thanks!

• Awesome post! I’m very ignorant of the precision-estimation literature so I’m going to be asking dumb questions here.

First of all, I feel like a precision function should take some kind of “acceptable loss” parameter. From what I gather, to specify the precision you need some threshold in your algorithm(s) for how much accuracy loss you’re willing to tolerate.

More fundamentally, though, I’m trying to understand what exactly we want to measure. The list of desired properties of a precision function feels somewhat pulled out of thin air, and I’d feel more comfortable with a philosophical understanding of where these properties come from. So let’s say we have a set Ω of possible states/​trajectories of the world, the world provides us with some evidence E, and we’re interested in P(A) for some event A. Maybe reality has some fixed P out there, but we’re not privy to that, so we’re forced to use some “hyperprior” (am I using that word right?) on probability measures over Ω. After conditioning on E, we get some probability distribution on P(A), which participants in a prediction market will take the expected value of as their answer. The precision is trying to quantify something like the standard deviation of this probability distribution on values of P(A), right?
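One concrete toy instance of the picture I have in mind, with entirely made-up numbers: take the hyperprior on p = P(A) to be a Beta distribution and the evidence E to be k successes out of n observed trials. The posterior mean is then what the market reports, and the posterior standard deviation is the sort of quantity I’m guessing the precision is tracking:

```python
import math

# Toy version of the setup above: unknown quantity p = P(A),
# "hyperprior" Beta(a0, b0) over p, evidence E = k successes in n trials.
# All numbers are hypothetical.
a0, b0 = 1.0, 1.0        # uniform hyperprior over p
k, n = 17, 52            # hypothetical evidence

a, b = a0 + k, b0 + (n - k)          # posterior over p is Beta(a, b)
market_answer = a / (a + b)          # posterior mean: the reported probability
posterior_std = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
# posterior_std is the "spread on P(A)" that precision seems to be gesturing at
```

If that’s the right picture, then a precision function is essentially trying to recover `posterior_std` from the reported `market_answer` and the outcome alone.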

P.S. This is entirely a skill issue on my part but I’m not sure what symbols you’re using for precision function and perturbation function. Detexify was of no use. Feel free to enlighten me!

# [Question] Lesswrong’s opinion on infinite epistemic regress and Bayesianism

17 Sep 2023 2:03 UTC
4 points
• I’m probably the least qualified person imaginable to represent “the Lesswrong community” given that I literally made my first post this weekend, but I did get into EA between high school and college and I have some thoughts on the topic.

My gut reaction is that it depends a lot on the kind of person this high schooler is. I was very interested in math and theoretical physics when I started thinking about EA. I don’t think I’m ever going to be satisfied with my life unless I’m doing work that’s math-heavy. I applied to schools with good AI programs with the intent of upskilling on AI/ML during college and then going into AI Safety.

When I started college I waved away the honors math classes with the intent of getting into theoretical machine learning research as fast as possible. Before the end of freshman year, I realized that I was miserable, the courses felt dumb, and I was finding it very hard to relate to any of the other people in the AI program—most of them were practically-minded and non-math-y. I begged to be let back into the honors math courses and thankfully the department allowed me to do so. I proceeded to co-found the AI Safety club at my college and have been thinking somewhat independently on questions adjacent to AI Safety that interest me. In retrospect, I think that I was too gung-ho about upskilling on ML to stop and pay attention to where my skills and my passion were. This nearly resulted in me having no friend group in college and not being productive at anything.

So yeah, I don’t know what exactly I would recommend. If I had been a more practically-minded person then my actions would probably have been pretty perfect. I guess the only advice I can give is cliches: think independently, explore, talk to people, listen to yourself. Sorry I can’t say anything more concrete!

• You raise an excellent point! In hindsight I’m realizing that I should have chosen a different example, but I’ll stick with it for now. Yes, I agree that “What states of the universe are likely to result from me killing vs not killing lanternflies?” and “Which states of the universe do I prefer?” are both questions grounded in the state of the universe where Bayes’ rule applies very well. However, I feel like there’s a third question floating around in the background: “Which states of the universe ‘should’ I prefer?” Based on my inner experiences, I feel that I can change my values at will. I specifically remember a moment after high school when I first formalized an objective function over states of the world, and this was a conscious thing I had to do. It didn’t come by default. You could argue that “Which states of the universe would I decide I should prefer after thinking about it for 10 years?” is a question grounded in the state of the universe, so that Bayes’ Rule makes sense. However, trying to answer this question basically reduces to thinking about my values for 10 years; I don’t know of a way to short-circuit that computation. I’m reminded of the problem of how an agent can reason about a world it’s embedded inside, where its thought processes could change the answers it seeks.

If I may propose another example and take this conversation to the meta-level, consider the question “Can Bayes’ Rule alone answer the question ‘Should I kill lanternflies?’?” When I think about this meta-question, I think you need a little more than just Bayes’ Rule to reason. You could start by trying to estimate P(Bayes Rule alone solves the lanternfly question), P(Bayes Rule alone solves the lanternfly question | the lanternfly question can be decomposed into two separate questions), etc. The problem is that I don’t see how to ground these probabilities in the real world. How can you go outside and collect data and arrive at the conclusion “P(Bayes Rule alone solves the lanternfly question | the lanternfly question can be decomposed into two separate questions) = 0.734”?

In fact, that’s basically the issue that my post is trying to address! I love Bayes’ rule! I love it so much that the punchline of my post, the dismissive growth-consistent ideology weighting, is my attempt to throw probability theory at abstract arguments that really didn’t ask for probability theory to be thrown at them. “Growth-consistency” is a fancy word I made up that basically means “you can apply probability theory (including Bayes’ Rule) in the way you expect.” I want to be able to reason with probability theory in places where we don’t get “real probabilities” inherited from the world around us.

• Hey, thanks for the response! Yes, I’ve also read about Bayes’ Theorem. However, I’m unconvinced that it is applicable in all the circumstances that I care about. For example, suppose I’m interested in the question “Should I kill lanternflies whenever I can?” That’s not really an objective question about the universe that you could, for example, put on a prediction market. There doesn’t exist a natural function from (states of the universe) to (answers to that question). There’s interpretation involved. Let’s even say that we get some new evidence (my post wasn’t really centered on that context, but still). Suppose I see the news headline “Arkansas Department of Stuff says that you should kill lanternflies whenever you can.” How am I supposed to apply Bayes’ rule in this context? How do I estimate P(I should kill lanternflies whenever I can | Arkansas Department of Stuff says I should kill lanternflies whenever I can)? It would be nice to be able to dismiss these kinds of questions as ill-posed, but in practice I spend a sizeable fraction of my time thinking about them. Am I incorrect here? Is Bayes’ theorem more powerful than I’m realizing?

(1) Yeah, I’m intentionally inserting a requirement that’s trivially true. Some claims will make object-level statements that don’t directly impose restrictions on other claims. Since these object-level claims aren’t directly responsible for putting restrictions on the structure of the argument, they induce trivial clauses in the formula.

(2) Absolutely, you can’t provide concrete predictions on how beliefs will evolve over time. But I think you can still reason statistically. For example, I think it’s valid to ask “You put ten philosophers in a room and ask them whether God exists. At the start, you present them with five questions related to the existence of God and ask them to assign probabilities to combinations of answers to these questions. After seven years, you let the philosophers out and again ask them to assign probabilities to combinations of answers. What is the expected value of the shift (say, the KL divergence) between the original probabilities and the final probabilities?” I obviously cannot hope to predict which direction the beliefs will evolve, but the degree to which we expect them to evolve seems more doable. Even if we’ve updated so that our current probabilities equal the expected value of our future probabilities, we can still ask about the variance of our future probabilities. Is that correct or am I misunderstanding something?
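A minimal sketch of the shift measurement I have in mind, with made-up probability assignments over the combinations of answers:

```python
import math

def kl_divergence(p, q):
    # D_KL(p || q) between two discrete distributions on the same outcomes
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical probabilities over combinations of answers, before and
# after the philosophers' seven years in the room.
before = [0.50, 0.30, 0.20]
after  = [0.60, 0.25, 0.15]

shift = kl_divergence(before, after)
# The question above asks about the *expectation* of this shift over
# possible seven-year trajectories, not its value on any one of them.
```

Even without knowing which direction beliefs move, one could in principle ask what the expected value of `shift` is under a distribution over trajectories.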

Thanks again, by the way!

# Probabilistic argument relationships and an invitation to the argument mapping community

9 Sep 2023 18:45 UTC
13 points