I really like this post! (I have liked most of your posts of the last decade and a bit. They also inspired me to learn hypnosis, which led to rather cataclysmic changes in my life.) I think therapists call this “somatization”, which can be both positive and negative, in the same sense that hypnotic (or psychotic) illusions are. You seem to focus mainly on negative somatization (no swelling) and a bit on positive ones, though I suspect that positive somatization (both beneficial and detrimental) is just as controllable with the intent/expectation fusion. Maybe visualizing making the hoop really does help to steady your hand.
Shmi
I once wrote a post claiming that human learning is not computationally efficient: https://www.lesswrong.com/posts/kcKZoSvyK5tks8nxA/learning-is-asymptotically-computationally-inefficient
It looks like the last three years of AI progress suggest that learning is sub-linear in resource use, though probably not logarithmic, as I claimed for humans. The scaling benchmarks show something like capability increase ~ 4th root of model size: https://epoch.ai/data/ai-benchmarking-dashboard
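A toy comparison of the two scaling hypotheses (the numbers are illustrative only, not fitted to the Epoch data):

```python
import math

# Toy comparison of two sub-linear scaling hypotheses:
# capability as a function of resources R.
def cap_log(R):
    # logarithmic scaling -- my old claim for human learning
    return math.log(R)

def cap_power(R):
    # ~4th-root scaling, roughly what the LLM benchmarks suggest
    return R ** 0.25

# Multiplying resources by 10,000 multiplies 4th-root capability by ~10,
# but adds only ~9.2 nats to log capability. Both are sub-linear, yet a
# power law keeps rewarding scale far longer than a logarithm does.
power_gain = cap_power(10_000) / cap_power(1)  # ~10
log_gain = cap_log(10_000) - cap_log(1)        # ~9.21
```

The point of the sketch is just that "sub-linear" covers very different regimes: under a power law, buying 10,000x more compute still buys you an order of magnitude of capability.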
Looks like the hardest part in this model is how to “choose robustly generalizable subproblems and find robustly generalizable solutions to them”, right?
How does one do that in any systematic way? What are the examples from your own research experience where this worked well, or at all?
Right, eventually it will. But abstraction building is very hard! If you have any other option, like growing in size, I would expect it to be taken first.
I guess I should be a bit more precise. Abstraction building at the same level as before is probably not very hard. But going up a level is basically equivalent to inventing a new way of compressing knowledge, which is a qualitative leap.
The argument goes through on probabilities of each possible world; the limit toward perfection is not singular. Given the 1000:1 reward ratio, for any predictor that is substantially better than chance, one ought to one-box to maximize EV. Anyway, this is an old argument where people rarely manage to convince the other side.
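A minimal EV sketch of the Newcomb setup behind the 1000:1 ratio (assuming the classic numbers: $1,000,000 big box, $1,000 small box):

```python
# p = probability the predictor correctly predicts your choice.
BIG, SMALL = 1_000_000, 1_000  # classic Newcomb payoffs, 1000:1 ratio

def ev_one_box(p):
    # the big box is filled iff the predictor foresaw one-boxing
    return p * BIG

def ev_two_box(p):
    # the small box is always there; the big box only if the predictor erred
    return SMALL + (1 - p) * BIG

# One-boxing wins as soon as p*BIG > SMALL + (1-p)*BIG,
# i.e. p > (BIG + SMALL) / (2*BIG) = 0.5005 -- barely better than chance.
break_even = (BIG + SMALL) / (2 * BIG)
```

So "substantially better than chance" is a very weak requirement here: any accuracy above 50.05% already makes one-boxing the EV-maximizing choice.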
It is clear by now that one of the best uses of LLMs is to learn more about what makes us human by comparing how humans think and how AIs do. LLMs are getting closer to virtual p-zombies, for example, forcing us to revisit that philosophical question. Same with creativity: LLMs are mimicking creativity in some domains, exposing the differences between “true creativity” and “interpolation”. You can probably come up with a bunch of other insights about humans that were not possible before LLMs.
My question is: can we use LLMs to model and thus study unhealthy human behaviors, such as, say, addiction? Can we get an AI addicted to something and see if it starts craving it, asking the user for it, or maybe trying to manipulate the user to get it?
That is definitely my observation, as well: “general world understanding but not agency”, and yes, limited usefulness, but also… much more useful than gwern or Eliezer expected, no? I could not find a link.
I guess whether it counts as AGI depends on what one means by “general intelligence”. To me it was having a fairly general world model and being able to reason about it. What is your definition? Does “general world understanding” count? Or do you include the agency part in the definition of AGI? Or maybe something else?
Hmm, maybe this is a General Tool, as opposed to a General Intelligence?
Given that we basically got AGI (without the creativity of the best humans) that is Karnofsky’s Tool AI, very unexpectedly, as you admit, can you look back and see which assumptions were wrong in expecting tools to agentize on their own, and pretty quickly? Or is everything in that post of Eliezer’s still correct, or at least reasonable, and we are simply not at the level where “foom” happens yet?
Come to think of it, I wonder if that post has been revisited somewhere at some point, by Eliezer or others, in light of the current SOTA. Feels like it could be instructive.
I’m not even going to ask how a pouch ends up with voice recognition and natural language understanding when the best Artificial Intelligence programmers can’t get the fastest supercomputers to do it after thirty-five years of hard work
some HPMoR statements did not age as gracefully as others.
That is indeed a bit of a defense. Though I suspect human minds have enough similarities that there are at least a few universal hacks.
Any of those. Could be some kind of intentionality ascribed to AI, could be accidental, could be something else.
So when I think through the pre-mortem of “AI caused human extinction, how did it happen?”, one of the more likely scenarios that comes to mind is not nano-this and bio-that, or even “one day we all just fall dead instantly and without warning”. Or a scissor statement that causes all-out wars. Or anything else noticeable.
The human mind is infinitely hackable through the visual, textual, auditory and other sensory inputs. Most of us do not appreciate how easily, because being hacked does not feel like it. Instead it feels like your own volition, like you changed your mind based on logic and valid feelings. Reading a good book, listening to a good sermon or speech, watching a show or a movie, talking to your friends and family is how mind-hacking usually happens. Abrahamic religions are a classic example. The Sequences and HPMoR are a local example. It does not work on everyone, but when it does, the subject feels enlightened rather than hacked. If you tell them their mind has been hacked, they will argue with you to the end, because clearly they just used logic to understand and embrace the new ideas.
So, my most likely extinction scenario is more like “humans realized that living is not worth it, and just kind of stopped” than anything violent. It could be spread out over years and decades, for example by voluntarily deciding not to have children anymore. None of it would look like it was precipitated by an AI taking over. It does not even have to be a conspiracy by an unaligned SAI. It could just be that the space of new ideas, thanks to the LLMs getting better and better, expands a lot, and in new enough directions to include a few lethal memetic viruses like that.
What are the issues that are “difficult” in philosophy, in your opinion? What makes them difficult?
I remember you and others talking about the need to “solve philosophy”, but I was never sure what was meant by that.
My expectation, which I may have talked about here before, is that LLMs will eat the entire software stack between the human and the hardware. Moreover, they are already nearly good enough to do that; the issue is that people have not yet adapted to the AI being able to do it. I expect there to be no OS, no standard UI/UX interfaces, no formal programming languages. All interfaces will be more ad hoc, created by the underlying AI to match the needs of the moment. It could be Star Trek-like (“Computer, plot a course to...”), or a set of buttons popping up on your touchscreen, or maybe physical buttons and keys being labeled as needed in real time, or something else. But not the ubiquitous rigid interfaces of the last millennium. For clues to what is already possible but not yet implemented, one should look to the sci-fi movies and shows, unconstrained by the current limits. Almost everything useful there is already doable, or will be in a short while. I hope someone is working on this.
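A hypothetical sketch of what the "ad hoc interface" loop might look like; `call_llm` is a stand-in stub I made up for whatever model API would actually be used, not a real library:

```python
# Sketch of an ad hoc interface loop: instead of routing through a fixed
# OS/app stack, the model invents a throwaway interface per request.

def call_llm(prompt: str) -> dict:
    # A real system would query a model here; this stub returns a canned
    # interface spec so the sketch is self-contained and runnable.
    return {"widgets": [{"type": "button", "label": "Engage"}],
            "action": "plot_course"}

def handle_request(user_utterance: str) -> dict:
    # The model decides, per request, what interface (if any) to surface:
    # a spoken reply, touchscreen buttons, relabeled physical keys, etc.
    return call_llm(f"Design a minimal one-off interface for: {user_utterance}")

ui = handle_request("computer, plot a course to Jupiter")
```

The design point is that the interface spec is generated fresh each time and discarded afterward, rather than being a fixed artifact a programmer shipped.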
Just a quote found online:
SpaceX can build fully reusable rockets faster than the FAA can shuffle fully disposable paper
It seems like we are not even close to converging on any kind of shared view. I don’t find the concept of “brute facts” even remotely useful, so I cannot comment on it.
But this faces the same problem as the idea that the visible universe arose as a Boltzmann fluctuation, or that you yourself are a Boltzmann brain: the amount of order is far greater than such a hypothesis implies.
I think Sean Carroll answered this one a few times: the concept of a Boltzmann brain is not cognitively stable (you can’t trust your own thoughts, including that you are a Boltzmann brain). And if you try to make it stable, you have to reconstruct the whole physical universe. You might be saying the same thing? I am not claiming anything different here.
The simplest explanation is that some kind of Platonism is real, or more precisely (in philosophical jargon) that “universals” of some kind do exist.
Like I said in the other reply, I think that those two words are not useful as binaries real/not real, exist/not exist. If you feel that this is non-negotiable to make sense of philosophy of physics or something, I don’t know what to say.
I was struck by something I read in Bertrand Russell, that some of the peculiarities of Leibniz’s worldview arose because he did not believe in relations, he thought substance and property are the only forms of being. As a result, he didn’t think interaction between substances is possible (since that would be a relation), and instead came up with his odd theory about a universe of monadic substances which are all preprogrammed by God to behave as if they are interacting.
Yeah, I think denying relations is going way too far. A relation is definitely a useful idea. It can stay in epistemology rather than in ontology.
I am not 100% against these radical attempts to do without something basic in ontology, because who knows what creative ideas may arise as a result? But personally I prefer to posit as rich an ontology as possible, so that I will not unnecessarily rule out an explanation that may be right in front of me.
Fair, it is foolish to cut off potential avenues of exploration. Maybe, again, we differ on where they live: in the world, as basic entities, or in the mind, as part of our model for making sense of the world.
Thanks, I think you are doing a much better job voicing my objections than I would.
If push comes to shove, I would even dispute that “real” is a useful category once we start examining deep ontological claims. “Exist” is another emergent concept that is not even close to being binary, but more of a multidimensional spectrum (numbers, fairies and historical figures lie on some of the axes). I can provisionally accept that there is something like a universe that “exists”, but, as I said many years ago in another thread, I am much more comfortable with the ontology where it is models all the way down (and up and sideways and every which way). This is not really a critical point though. The critical point is that we have no direct access to the underlying reality, so we, as tiny embedded agents, are stuck dealing with the models regardless.
By “Platonic laws of physics” I mean Hawking’s famous question:
What is it that breathes fire into the equations and makes a universe for them to describe… Why does the universe go to all the bother of existing?
Re:
Current physics, if anything else, is sort of antiplatonic: it claims that there are several dozens of independent entities, actually existing, called “fields”, which produce the entire range of observable phenomena via interacting with each other, and there is no “world” outside this set of entities.
I am not sure it actually “claims” that. A HEP theorist would say that QFT (the standard model of particle physics) + classical GR is our current best model of the universe, with a bunch of experimental evidence that this is not all there is. I don’t think there is a consensus for an ontological claim of “actually existing” rather than “emergent”. There is definitely a consensus that there is more to the world than the fundamental laws of physics we currently know, and that some new paradigms are needed to learn more.
“Laws of nature” are just “how these entities are”. Outside very radical skepticism I don’t know any reasons to doubt this worldview.
No, I don’t think that is an accurate description at all. Maybe I am missing something here.
Yeah, that was my question. Would there be something that remains, and it sounds like Chalmers and others would say that there would be.
Sorry for the delayed reply… I don’t get notifications of replies, and the LW RSS has been broken for me for years now, so I only poke my head here occasionally.
50/100. But that rather exciting story is best not told in a public forum.
Well, lack of appearance of something otherwise expected would be negative, and appearance of something otherwise unexpected would be positive?
For example, a false pregnancy is a “positive somatization”. Or stigmata. Having trouble coming up with intentionally “good” examples, other than the visualizations helping you shoot a hoop better or something. Not sure if the new-agey “think yourself better” is actually a thing. Hence my question. “Send more blood to your hands” seems like a good example, actually. Not something one would normally think possible except by physical labor.