Maybe it could be FLCI to avoid collision with the existing FLI.
Alex_Altair
I also think the name is off, but for a different reason. When I hear “the west” with no other context, I assume it means this, which doesn’t make sense here, because the UK and FHI are very solidly part of The West. (I have not heard the “Harvard of the west” phrase and I’m guessing it’s pretty darn obscure, especially to the international audience of LW.)
Feedback on the website: it’s not clear to me what the difference is between LessOnline and the summer camp right after. Is the summer camp only something you go to if you’re also going to Manifest? Is it the same as LessOnline but longer?
Oh, no, I’m saying it’s more like 2^8 afterwards. (Obviously it’s more than that but I think closer to 8 than a million.) I think having functioning vision at all brings it down to, I dunno, 2^10000. I think you would be hard pressed to name 500 attributes of mammals that you need to pay attention to to learn a new species.
We then get around the 2^8000000 problem by having only a relatively very very small set of candidate “things” to which words might be attached.
A major way that we get around this is by having hierarchical abstractions. By the time I’m learning “dog” from 1-5 examples, I’ve already done enormous work in learning about objects, animals, something-like-mammals, heads, eyes, legs, etc. So when you point at five dogs and say “those form a group” I’ve already forged abstractions that handle almost all the information that makes them worth paying attention to, and now I’m just paying attention to a few differences from other mammals, like size, fur color, ear shape, etc.
I’m not sure how the rest of this post relates to this, but it didn’t feel present; maybe it’s one of the umpteenth things you left out for the sake of introductory exposition.
I’ve noticed you using the word “chaos” a few times across your posts. I think you’re using it colloquially to mean something like “rapidly unpredictable”, but it does have a technical meaning that doesn’t always line up with how you use it, so it might be useful to distinguish it from a couple of other things. Here’s my current understanding of what some of these terms mean. (All of these definitions and implications depend on a pile of finicky math and tend to have surprising counter-examples if you don’t define things just right, and definitions vary across sources.)
Sensitive to initial conditions. A system is sensitive to initial conditions if two arbitrarily close points in its phase space will eventually diverge exponentially (at least) over time. This is one way to say that you’ll rapidly lose information about a system, but it doesn’t have to look chaotic. For example, say you have a system whose phase space is just the real line, and its dynamics over time are just that points get 10x farther from the origin every time step. Then, if you know the value of a point to ten decimal places of precision, each time step costs you one decimal place, and after ten time steps you have none left. (Although there are regions of the real line where you’re still sure it doesn’t reside; for example, you’re sure it’s not closer to the origin.)
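A quick Python sketch of that example, just to make the information loss concrete:

```python
# Dynamics: every time step, each point gets 10x farther from the origin.
# Two points that agree to ten decimal places diverge by a factor of 10
# per step, so the initial precision is gone after ten steps.
x = 0.1234567890
y = 0.1234567891  # agrees with x to ten decimal places
for _ in range(10):
    x, y = 10 * x, 10 * y
print(abs(x - y))  # the ~1e-10 gap has grown to roughly 1.0
```

Note there’s nothing chaotic-looking here: both trajectories just shoot off to infinity in an entirely predictable way, but your knowledge of where exactly the point is degrades exponentially.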
Ergodic. A system is ergodic if (almost) every point in phase space will trace out a trajectory that gets arbitrarily close to every other point. This means that each point is, in a sense, chaotically unpredictable, because if it’s been going for a while and you’re not tracking it, you’ll eventually end up with maximum uncertainty about where it is. But this doesn’t imply sensitivity to initial conditions; there are systems that are ergodic, but where any pair of points will stay the same distance from each other. A simple example is where phase space is a circle, and the dynamics are that on each time step, you rotate each point around the circle by an irrational angle.
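Here’s a minimal Python sketch of that rotation example (just an illustration, not anything from the formal literature): the trajectory of one point gets close to any target you pick, while the distance between two points never changes.

```python
import math

alpha = math.sqrt(2) % 1   # rotate by an irrational fraction of a full turn
x, y = 0.0, 0.3            # two points on the circle, represented as [0, 1)
target = 0.75              # an arbitrary point to get close to

def circle_dist(a, b):
    """Distance between two points on the unit circle [0, 1)."""
    d = abs(a - b) % 1
    return min(d, 1 - d)

closest = 1.0
for _ in range(10_000):
    x = (x + alpha) % 1
    y = (y + alpha) % 1
    closest = min(closest, circle_dist(x, target))

print(closest)            # tiny, and it shrinks further the longer you run
print(circle_dist(x, y))  # still 0.3: no sensitivity to initial conditions
```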
Chaos. The formal characterization that people assign to this word was an active research topic for decades, but I think it’s mostly settled now. My understanding is that it essentially means this:
Your system has at least one point whose trajectory is ergodic, that is, it will get arbitrarily close to every other point in the phase space
For every natural number n, there is a point in the phase space whose trajectory is periodic with period n. That is, after n time steps (and not before), it will return back exactly where it started. (Further, these periodic points are “dense”, that is, every point in phase space has periodic points arbitrarily close to it).
The reason these two criteria yield (colloquially) chaotic behavior is, I think, reasonably intuitively understandable. Take a random point in its phase space. Assume it isn’t one with a periodic trajectory (which will be true with “probability 1”). Instead it will be ergodic. That means it will eventually get arbitrarily close to all other points. But consider what happens when it gets close to one of the periodic trajectories; it will, at least for a while, act almost as though it has that period, until it drifts sufficiently far away. (This is using an unstated assumption that the dynamics of the systems have a property where nearby points act similarly.) But it will eventually do this for every periodic trajectory. Therefore, there will be times when it’s periodic very briefly, and times when it’s periodic for a long time, et cetera. This makes it pretty unpredictable.
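To make the shadowing part concrete, here’s a small Python illustration using the logistic map x → 4x(1−x), a standard example of a chaotic system. Start a point a tiny distance from the fixed point at 0.75 (a period-1 orbit): it hugs that orbit for a while, then escapes.

```python
f = lambda x: 4 * x * (1 - x)   # the logistic map at r = 4

p = 0.75            # fixed point: f(0.75) = 0.75, i.e. a period-1 orbit
x = p + 1e-8        # a nearby, non-periodic starting point
dists = []
for _ in range(40):
    x = f(x)
    dists.append(abs(x - p))

# For a while the trajectory acts almost as though it has period 1, with
# its distance from the orbit only roughly doubling each step...
print(dists[9])     # after 10 steps, still within ~1e-5 of the fixed point
# ...until it drifts out of the neighborhood entirely:
print(max(dists))   # well over 0.1 by the end of the run
```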
There are also connections between the above. You might have noticed that my example of a system that was sensitive to initial conditions but not ergodic or chaotic relied on having an unbounded phase space, where the two points both shot off to infinity. I think that if you have sensitivity to initial conditions and a bounded phase space, then you generally also have ergodic and chaotic behavior.
Anyway, I think “chaos” is a sexy/popular term to use to describe vaguely unpredictable systems, but almost all of the time you don’t actually need to rely on the full technical criteria of it. I think this could be important for not leading readers into red-herring trails of investigation. For example, all of standard statistical mechanics only needs ergodicity.
Has anyone checked out Nassim Nicholas Taleb’s book Statistical Consequences of Fat Tails? I’m wondering where it lies on the spectrum from textbook to prolonged opinion piece. I’d love to read a textbook about the title.
Just noticing that every post has at least one negative vote, which feels interesting for some reason.
The e-ink tablet market has really diversified recently. I’d recommend that anyone interested look around at the options. My impression is that the Kindle Scribe is one of the least good ones (which doesn’t mean it’s bad).
Here’s the arxiv version of the paper, with a bunch more content in appendices.
And, since I can’t do everything: what popular platforms shouldn’t I prioritize?
I think cross-posting between Twitter, Mastodon, and Bluesky would be pretty easy. And it would let you gather your own data on which platforms are worth continuing with.
I looked at these several months ago and unfortunately recommend neither. Pearl’s Causality is very dense, and not really a good introduction. The Primer is really egregiously riddled with errors; there seems to have been some problem with the publisher. And on top of that, I just found it not very well written.
I don’t have a specific recommendation, but I believe that at this point there are a bunch of statistics textbooks that competently discuss the essential content of causal modelling; maybe check the reviews for some of those on Amazon.
One way that the analogy with code doesn’t carry over is that in math, you often can’t even begin to use a theorem if you don’t know a lot of detail about what the objects in the theorem mean, and often knowing what they mean is pretty close to knowing why the theorems you’re building on are true. Being handed a theorem is less like being handed an API and more like being handed a sentence in a foreign language. I can’t begin to make use of the information content in the sentence until I learn what every symbol means and how the grammar works, and at that point I could have written the sentence myself.
I’d recommend porting it over as a sequence instead of one big post (or maybe just port the first chunk as an intro post?). LW doesn’t have a citation format, but you can use footnotes for it (and you can use the same footnote number in multiple places).
I had a side project to get better at research in 2023. I found very few resources that were actually helpful to me. But here are some that I liked.
A few posts by Holden Karnofsky on Cold Takes, especially Useful Vices for Wicked Problems and Learning By Writing.
Diving into deliberate practice. Most easily read is the popsci book Peak. This book emphasizes “mental representations”, which I find the most useful part of the method, though I think it’s also the least supported by the science.
The popsci book Grit.
The book Ultralearning. Extremely skimmable; a large collection of heuristics that I find essential for the “lean” style of research.
Reading a scattering of historical accounts of how researchers did their research, and how it came to be useful. (E.g. Newton, Einstein, Erdős, Shannon, Kolmogorov, and a long tail of less big names.)
(Many resources were not helpful for me for reasons that might not apply to others; I was already doing what they advised, or they were about how to succeed inside academia, or they were about emotional problems like lack of confidence or burnout. But, I think mostly I failed to find good resources because no one knows how to do good research.)
Finally, I want to note an aspect of the discussion in the report that makes me quite uncomfortable: namely, it seems plausible to me that in addition to potentially posing existential risks to humanity, the sorts of AIs discussed in the report might well be moral patients in their own right.
I strongly appreciate this paragraph for stating this concern so emphatically. I think this possibility is strongly under-represented in the AI safety discussion as a whole.
I agree there’s a core principle somewhere around the idea of “controllable implies understandable”. But when I think about this with respect to humans studying biology, another thought comes to mind: the things we want to control are not necessarily the things the system itself is controlling. For example, we would like to control the obesity crisis (and weight loss in general), but it’s not clear that the biological system itself is controlling that. It almost certainly was successfully controlling it in the ancestral environment (and therefore it was understandable within that environment), but perhaps the environment has changed enough that it is now uncontrollable (and potentially not understandable). Cancer manages to successfully control the system in the sense of causing itself to happen, but that doesn’t mean that our goal, “reliably stopping cancer”, is understandable, since it is not a way that the system is controlling itself.
This mismatch seems pretty evidently applicable to AI alignment.
And perhaps the “environment” part is critical. A system being controllable in one environment doesn’t imply it being controllable in a different (or broader) environment, and thus guaranteed understandability is also lost. This feels like an expression of misgeneralization.
Looking back at Flint’s work, I don’t agree with this summary.
Ah, sorry, I wasn’t intending for that to be a summary. I found Flint’s framework very insightful, but after reading it I sort of just melded it into my own overall beliefs and understanding around optimization. I don’t think he intended it to be a coherent or finished framework on its own, so I don’t generally try to think “what does Flint’s framework say about X?”. I think its main influence on me was the whole idea of using dynamical systems and phase space as the basis for optimization. So, for example:
In any case, I agree that Flint’s work also eliminates the need for an unnatural baseline in which we have to remove the agent.
I would say that working in the framework of dynamical systems is what lets one get a natural baseline against which to measure optimization, by comparing a given trajectory with all possible trajectories.
I think I could have some more response/commentary about each of your bullet points, but there’s a background overarching thing that may be more useful to prod at. I have a clear (-feeling-to-me) distinction between “optimization” and “agent”, which doesn’t seem to be how you’re using the words. The dynamical systems + Yudkowsky measure perspective is a great start on capturing the optimization concept, but it is agnostic about (my version of) the agent concept (except insofar as agents are a type of optimizer). It feels to me like the idea of endorsement you’re developing here is cool and useful and is… related to optimization, but isn’t the basis of optimization. So I agree that e.g. “endorsement” is closer to alignment, but also I don’t think that “optimization” is supposed to be all that close to alignment; I’d reserve that for “agent”. I think we’ll need a few levels of formalization in agent foundations, and you’re working toward a different level than those, and so these ideas aren’t in conflict.
Breaking that down just a bit more: let’s say that “alignment” refers to aligning the intentional goals of agents. I’d say that “optimization” is a more general phenomenon where some types of systems tend to move their state up an ordering, but that doesn’t mean that the ordering is “intentional”, nor that a goal is cleanly encoded somewhere inside the system. So while you could say that two optimizing systems “are more aligned” if they move up similar state orderings, it would be awkward to talk about aligning them.
(My notion of) optimization has its own version of the thing you’re calling “Vingean”, which is that if I believe a process optimizes along a certain state ordering, but I have no beliefs about how it works on the inside, then I can still at least predict that the state will go up the ordering. I can predict that the car will arrive at the airport even though I don’t know the turns. But this has nothing to do with the (optimization) process having beliefs or doing reasoning of any kind (which I think of as agent properties). For example I believe that there exists an optimization process such that mountains get worn down, and so I will predict it to happen, even though I know very little about the chemistry of erosion or rocks. And this is kinda like “endorsement”, but it’s not that the mountain has probability assignments or anything.
In fact I think it’s just a version of what makes something a good abstraction; an abstraction is a compact model that allows you to make accurate predictions about outcomes without having to predict all intermediate steps. And all abstractions also have the property that if you have enough compute/etc. then you can just directly calculate the outcome based on lower-level physics, and don’t need the abstraction to predict the outcome accurately.
I think that was a longer-winded way to say that I don’t think your concepts in this post are replacements for the Yudkowsky/Flint optimization ideas; instead it sounds like you’re saying “Assume the optimization process is of the kind that has beliefs and takes actions. Then we can define ‘endorsement’ as follows: …”
What’s your preferred response/solution to ~”problems”(?) of events that have probability zero but occur nevertheless
My impression is that people have generally agreed that this paradox is resolved (=formally grounded) by measure theory. I know enough measure theory to know what it is but haven’t gone out of my way to explore the corners of said paradoxes.
But you might be asking me about it in the framework of Yudkowsky’s measure of optimization. Let’s say the states are the real numbers in [0, 1] and the relevant ordering is the same as the one on the real numbers, and we’re using the uniform measure over it. Then, even though the probability of getting any specific real number is zero, the probability mass we use to calculate bits of optimization power is all the probability mass below that number. In that case, all the resulting numbers would imply finite optimization power… except if we got the result that was exactly the number 0. But in that case, that would actually be infinitely surprising! And so the fact that the measure of optimization returns infinity bits matches intuition.
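A tiny Python sketch of that calculation (following the setup above, where the mass used is everything below the achieved number; the function name is just mine):

```python
import math

def optimization_bits(x):
    # Uniform measure on [0, 1]. Following the setup above, the
    # probability mass that goes into the calculation for hitting
    # state x is the mass of [0, x], which is just x itself.
    return math.inf if x == 0 else -math.log2(x)

print(optimization_bits(0.5))    # 1.0 bit
print(optimization_bits(1/256))  # 8.0 bits
print(optimization_bits(0.0))    # inf: the probability-zero outcome
```

Every nonzero result gives a finite number of bits; only the measure-zero outcome at exactly 0 returns infinity.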
It’s (probably) true that our physical reality has only finite precision
I’m also not a physicist but my impression is that physicists generally believe that the world does actually have infinite precision.
I’d also guess that the description length of (a computable version of) the standard model as-is (which includes infinite precision because it uses the real number system) has lower K-complexity than whatever comparable version of physics where you further specify a finite precision.
My current understanding is that QM is not-at-all needed to make sense of stat mech. Instead, the thing where energy is equally likely to be in any of the degrees of freedom just comes from using a measure over your phase space such that the dynamical law of your system preserves that measure!