Has anyone checked out Nassim Nicholas Taleb’s book Statistical Consequences of Fat Tails? I’m wondering where it lies on the spectrum from textbook to prolonged opinion piece. I’d love to read a textbook about the title.
Taleb has made available a technical monograph that parallels that book, and all of his books. You can find it here: https://arxiv.org/abs/2001.10488
The pdf linked by @CstineSublime is definitely toward the textbook end of the spectrum. I’ve started reading it and it has been an excellent read so far. Will probably write a review later.
Here’s my guess as to how the universality hypothesis a.k.a. natural abstractions will turn out. (This is not written to be particularly understandable.)
At the very “bottom”, or perceptual level of the conceptual hierarchy, there will be a pretty straightforward, objective set of concepts. Think the first layer of CNNs in image processing, the neurons in the retina/V1, letter frequencies, how to break text strings into words. There’s some parameterization here, but the functional form will be clear (like having a basis of n vectors in R^n, where it (almost) doesn’t matter which vectors you pick).
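(A toy numerical illustration of that parenthetical, added here for concreteness rather than taken from the original point: any two orthonormal bases of R^n carry exactly the same information, since each is just a rotation of the other.)

```python
import numpy as np

# Toy illustration: two different orthonormal bases of R^n encode the same
# information -- each is a rotation of the other, so coordinates in one
# basis convert losslessly into coordinates in the other.
rng = np.random.default_rng(0)
n = 8

# Two random orthonormal bases, built via QR decomposition.
A, _ = np.linalg.qr(rng.normal(size=(n, n)))
B, _ = np.linalg.qr(rng.normal(size=(n, n)))

x = rng.normal(size=n)    # some arbitrary "percept" in R^n
coords_A = A.T @ x        # coordinates of x in basis A (columns of A)
coords_B = B.T @ x        # coordinates of x in basis B

# The change-of-basis map B.T @ A converts A-coordinates into B-coordinates.
print(np.allclose((B.T @ A) @ coords_A, coords_B))  # True
```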
For a few levels above that, it’s much less clear to me that the concepts will be objective. Curve detectors may be universal, but the way they get combined is less obviously objective to me.
This continues until we get to a middle level that I’d call “objects”. I think it’s clear that things like cats and trees are objective concepts. Sufficiently good language models will all share concepts that correspond to a bunch of words. This level exists largely because we live in this universe, which tends to create objects, and on Earth, which has a biosphere with a bunch of mid-level complexity going on.
Then there will be another series of layers that are less obvious. Partly these levels are filled with whatever content is relevant to the system. If you study cats a lot, then there is a bunch of objectively discernible cat behavior. But it’s not necessary to know that to operate in the world competently. Rivers and waterfalls will be level-3 concepts, but the details of fluid dynamics live at this level.
Somewhere around the top level of the conceptual hierarchy, I think there will be kind of a weird split. Some of the concepts up here will be profoundly objective; things like “and”, mathematics, and the abstract concept of “object”. Absolutely every competent system will have these. But then there will also be this other set of concepts that I would map onto “philosophy” or “worldview”. Humans demonstrate that you can have vastly different versions of these very high-level concepts, given very similar data, each of which is in some sense a functional local optimum. If this also holds for AIs, then that seems very tricky.
Actually my guess is that there is also a basically objective top-level of the conceptual hierarchy. Humans are capable of figuring it out but most of them get it wrong. So sufficiently advanced AIs will converge on this, but it may be hard to interact with humans about it. Also, some humans’ values may be defined in terms of their incorrect worldviews, leading to ontological crises with what the AIs are trying to do.
Totally baseless conjecture that I have not thought about for very long: chaos is identical to Turing completeness. All dynamical systems that demonstrate chaotic behavior are Turing complete (or at least implement an undecidable procedure).
Has anyone heard of an established connection here?
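For concreteness, here is a toy sketch of the “chaos” half of the conjecture: sensitive dependence on initial conditions in the logistic map at r = 4. This only pins down what chaotic behavior means; it makes no claim about Turing completeness.

```python
# Toy sketch of chaos as sensitive dependence on initial conditions,
# using the logistic map x -> r*x*(1-x) at r = 4 (a standard chaotic map).
# This only illustrates the "chaos" side of the conjecture; it says
# nothing about Turing completeness.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two nearly identical starting points
for step in range(60):
    x, y = logistic(x), logistic(y)
    if (step + 1) % 10 == 0:
        print(f"step {step + 1:2d}: separation = {abs(x - y):.3e}")
# The separation grows roughly exponentially (Lyapunov exponent ~ ln 2)
# until it saturates at order 1 -- the trajectories become effectively unrelated.
```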
You might look at Wolfram’s work. One of the major themes of his CA classification project is that chaotic rulesets (in some sense, possibly not the rigorous ergodic-dynamics definition) are not Turing-complete; only CAs in an intermediate region of complexity/simplicity have ever been shown to be TC.
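A rough way to see the distinction (my own sketch, not Wolfram’s code): evolve elementary cellular automata from a single live cell and compare Rule 30, his canonical chaotic-looking Class 3 rule, against Rule 110, the Class 4 rule that has actually been proven Turing-complete.

```python
import numpy as np

# Toy elementary-CA runner: compare Rule 30 (Wolfram Class 3, "chaotic")
# with Rule 110 (Class 4, the rule actually proven Turing-complete).

def step(cells, rule):
    """One synchronous update with periodic boundary conditions."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    neighborhood = 4 * left + 2 * cells + right        # values 0..7
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    return table[neighborhood]

def run(rule, width=64, steps=24):
    cells = np.zeros(width, dtype=np.uint8)
    cells[width // 2] = 1                              # single live cell
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

for rule in (30, 110):
    print(f"Rule {rule}:")
    for row in run(rule):
        print("".join("#" if c else "." for c in row))
```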
Is it just me, or did the table of contents for posts disappear? The left sidebar just has lines and dots now.
Does it reappear when you hover your cursor over it?
It does not! At least, not anywhere that I’ve tried hovering.
Huh, want to post your browser and version number? Could be a bug related to that (it definitely works fine in Chrome, FF and Safari for me)
It turns out I have the ESR version of firefox on this particular computer: Firefox 115.14.0esr (64-bit). Also tried it in incognito, and with all browser extensions turned off, and checked multiple posts that used sections.
Yeah, I just replicated this with the mac version of the ESR version.
It definitely should appear if you hover over it – double-checking that on the posts you’re trying it on, there are actual headings such that there’d be a ToC?
It does for me
Maybe you already thought of this, but it might be a nice project for someone to take the unfinished drafts you’ve published, talk to you, and then clean them up for you. Apprentice/student kind of thing. (I’m not personally interested in this, though.)
I like that idea! I definitely welcome people to do that as practice in distillation/research, and to make their own polished posts of the content. (Although I’m not sure how interested I would be in having said person be mostly helping me get the posts “over the finish line”.)