Yes. My bad, I shouldn’t have implied all hidden-variables interpretations.
Every non-deterministic interpretation has virtually infinite Kolmogorov complexity, because it has to hardcode the outcome of every random event.
Hidden-variables interpretations are uncomputable because they are incomplete.
The MWI is the simplest explanation (in terms of Kolmogorov complexity).
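A rough way to make that concrete (my own sketch of the argument, not a formal result): the shortest program that reproduces a deterministic many-worlds universe needs only the laws and the initial state, while a collapse-style interpretation must additionally encode every random outcome:

```latex
\begin{align*}
K(\text{MWI})      &\approx K(\text{laws}) + K(\text{initial state}) \\
K(\text{collapse}) &\approx K(\text{laws}) + K(\text{initial state}) + n
\end{align*}
% n = the number of random binary outcomes that must be hardcoded (one
% bit each); n grows with every measurement event, so the description
% length of the collapse picture grows without bound while the MWI's
% stays fixed.
```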
It’s also the interpretation with by far the most elegant explanation for the apparent randomness of reality. Most interpretations provide no mechanism for the selection of a specific outcome, which is absurd. Under the MWI, randomness emerges from determinism through indexical uncertainty, i.e., not knowing which branch you’re in.

Some people, such as Sabine Hossenfelder, get confused by this and ask, “then why am I this version of me?”, which implicitly assumes dualism, as if there were a free-floating consciousness that could in principle inhabit any branch. This is patently untrue, because you are by definition this “version” of you. If you were someone else (including someone in a different branch where one of your atoms is moved by one Planck length), then you wouldn’t be you; you would literally be someone else.
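Here’s a toy sketch of that mechanism (equal-weight branches only, so it ignores Born-rule amplitudes; it illustrates indexical uncertainty, not quantum mechanics itself):

```python
from itertools import product
from collections import Counter

# Deterministically enumerate every branch of 10 binary "measurements".
# No random number generator appears anywhere in this program.
n = 10
branches = list(product([0, 1], repeat=n))  # all 2**n observer-histories

# An observer who has seen the outcomes (1, 0, 1) so far exists in every
# branch sharing that prefix. Among those branches, the next outcome is
# 0 in exactly half and 1 in the other half, so from the inside the next
# measurement is maximally unpredictable despite global determinism.
prefix = (1, 0, 1)
next_outcome = Counter(
    b[len(prefix)] for b in branches if b[: len(prefix)] == prefix
)
print(next_outcome)  # Counter({0: 64, 1: 64})
```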
Note that the Copenhagen interpretation is also a many-worlds explanation, but with the added assumption that all worlds except one, randomly chosen, disappear when an “observation” is made, i.e., when entanglement with your branch takes place.
It’s just a matter of definition. We say that “you” and “I” are the things that are entangled with a specific observed state. Different versions of you are entangled with different observations. Nothing is stopping you from defining a new kind of person which is a superposition of different entanglements. The reason it doesn’t “look” that way from your perspective is because of entanglement and the law of the excluded middle. What would you expect to see if you were a superposition?
Have you read Joseph Henrich’s book The Secret of Our Success and its sequel, The WEIRDest People in the World? If not, they provide a pretty comprehensive view of how humanity, and particularly the Western world, innovates, which is roughly in line with what you wrote here.
I kind of agree that most knowledge is useless, but the utility of the knowledge and experience that people accrue is probably distributed like a bell curve, which means you can’t just have more of the good knowledge without also accruing lots of useless knowledge. In addition, stuff that seems totally useless very often turns out to be very useful; you can’t always tell which is which in advance.
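As a toy sketch of that point (the bell-curve usefulness distribution here is my own stand-in assumption): if an item’s usefulness can’t be known before you learn it, the amount of very useful knowledge you hold scales with the total amount you accrue, useless bulk included:

```python
import random

random.seed(0)

# Assume the usefulness of each learned item is drawn from a bell curve
# and is unknowable in advance, so items can only be accrued blindly.
def accrue(n_items):
    return [random.gauss(0, 1) for _ in range(n_items)]

for n in (1_000, 10_000, 100_000):
    knowledge = accrue(n)
    very_useful = sum(1 for u in knowledge if u > 2)  # top ~2.3% of items
    print(f"{n:>7} items learned -> {very_useful:>5} very useful ones")

# The count of very useful items scales linearly with the total accrued;
# there is no way to collect the right tail without the bulk.
```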
I completely agree. In Joseph Henrich’s book The Secret of Our Success, he shows that the amount of knowledge possessed by a society is proportional to the number of people in that society. Dwindling population leads to dwindling technology and dwindling quality of life.
Those who advocate for population decline are unwittingly advocating for the disappearance of the knowledge, experience, and frankly the wisdom required to sustain the comfortable life they take for granted.
Keeping all that knowledge in books is not enough; otherwise, our long years of education would be unnecessary. Knowing how to apply knowledge is its own form of knowledge.
If causality is everywhere, it is nowhere; declaring that “causality is involved” will have no meaning. This raises the question of whether an ontology containing the concept of causality is the best one to wield for what you’re trying to achieve. Consider that causality is not axiomatic, since the laws of physics are time-reversible.
I respect Sutskever a lot, but if he believed that he could get an equivalent world model by spending an equivalent amount of compute on next-token prediction over any other set of real-world data samples, why go to such lengths to obtain specifically human-generated text for training? You might as well just make lots of random recordings (e.g., video, audio, radio signals) and pump them all into the model. In principle that could probably work, but it’s very inefficient.
Human language is a very high-density encoding of world models, so by training on human language, models get much of their world model “for free”, because humanity has already done a lot of pre-work by sampling reality in a wide variety of ways and compressing it into the structure of language. However, our use of language still doesn’t capture all of reality exactly, and I would argue it’s not even close. (Saying otherwise is equivalent to saying we’ve already discovered almost all possible capabilities, which would entail that AI actually has a hard cap at roughly human ability.)
In order to expand its world model beyond human ability, AI has to sample reality itself, which is much less sample-efficient than sampling human behavior, hence the “soft cap”.
In theory, yes, but that’s obviously a lot more costly than running just one instance. And you’ll need to keep these virtual researchers running in order to keep the new capabilities coming. At some point this will probably happen and totally eclipse human ability, but I think the soft cap will slow things down by a lot (i.e., no foom). That’s assuming compute and the number of researchers are even the bottleneck to new discoveries; it could also be empirical data.
If you accept the premise that AI will remain within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal (essentially the AI-Fizzle world), then we have disagreements.
There is good reason to believe that AI will have a soft cap at roughly human ability (and by “soft cap” I mean that anything beyond the cap will be much harder to achieve) for the same reason that humans have a soft cap at human ability: copying existing capabilities is much easier than discovering new capabilities.
A human being born today can relatively easily achieve abilities that other humans have already achieved, simply by copying them; lots of 12-year-olds can learn calculus, which is much easier than inventing it. AI will have the same issue.
The European socket map is deceptive. My charger will work anywhere on mainland Europe. Looking at the sockets, can you tell why?
Does this count as “rational, deliberate design”? I think a case could be made for both yes and no, but I lean towards no. Humans who have studied a certain subject often develop a good intuition for what will work and what won’t, and I think deep learning captures that: you can get right answers at an acceptable rate without knowing why. This is not quite rational deliberation based on theory.
I think that “rational, deliberate design”, as you put it, is simply far less common, relative to random chance, than you think; that the vast majority of human knowledge is a result of induction rather than deduction; and that theory is overrated and experimentalism underrated.
This is also why I highly doubt that anything but prosaic AI alignment will happen.
I don’t think I disagree with what you’re saying here, though we may be using different terms to say the same thing.
How does what you say here inform your thoughts about the Hard Problem?
Regarding taking hints, the other gender typically does not see all the false positives one has to deal with. What seems obvious is usually not obvious at all. In fact, a socially skilled person will always try to use plausibly deniable (i.e., non-obvious) signals and will consider anything more explicit a gauche faux pas. Acting on such signals is therefore inherently risky, and is nowadays perhaps riskier than it used to be, especially at work and around close friends.
For example, a few years ago, a woman I had great rapport with called me her Valentine in a very charming way. You might say that’s an obvious signal, but when I tried to make plans with her she said that’s just a thing she does for friends and family and there was no special meaning to it. Some people are out to get your romantic attention, but ultimately want to keep you at arm’s length.
All I’m asking for is a way for other people to determine whether a given explanation will satisfy you. You haven’t given enough information to do that. Until that changes we can’t know that we even agree on the meaning of the Hard Problem.
Also, the existence of a problem does not depend on the existence of a solution.
Agreed, but even if no possible solution could ultimately satisfy those objective properties, until the properties are defined the problem itself remains undefined. Can you define these objective properties?
I know. Like I said, neither Chalmers nor you nor anyone else has shown it plausible that subjective experience is non-physical. Moreover, you repeatedly avoid giving an objective description of what you’re looking for.
Until either of the above change, there is no reason to think there is a Hard Problem.
My introduction to Dennett, half a lifetime ago, was this talk:
That was the start of his profound influence on my thinking. I especially appreciated his continuous and unapologetic defense of the meme as a useful concept, despite the many detractors of memetics.
Sad to know that we won’t be hearing from him anymore.