One particular chart in this post would be much easier to look at if it didn’t have an uncanny abomination wrapped in human skin stapled to it.
In futures where we survive but our plans all get derailed, why do you expect a utopia? Or, equivalently: In futures where we survive and get a utopia, why do you expect our plans to all get derailed?
What this means in practice is that “entry-level” positions are all but impossible for “entry-level” people to enter.
I recently applied to an ML/AI upskilling program. My rejection letter pointed out my rather sparse GitHub (fair!) and suggested that I fill it with AI safety-relevant projects… politely skimming over the fact that I can’t complete those projects until I learn ML/AI engineering. The cynic in me raises an eyebrow and asks what, other than credentials, the program actually has to offer, if they only accept people who already understand the material. (The gentler side of me says that maybe it’s okay if they only offer credentials.)
For what it’s worth, I’m glad to see that people more qualified than me are applying to these programs and jobs.
but even then there’s handwaving around why we’ll suddenly start producing stuff that nobody wants.
“Stuff that nobody wants”? Like what? If you’re referring to AI itself… Well, a lot of people want AI to solve medicine. All of it. Quickly. Usually, this involves a cure for aging. Maybe that could be done by an AI that poses no threat… but there are also people who want a superintelligence to take over the world and micromanage it into a utopia, or who are at least okay with that outcome. So “stuff that nobody wants” doesn’t refer to takeover-capable AI.
If you’re referring to goods and services that AIs could provide for us… Is there an upper limit to the amount of stuff people would want, if it were cheap? If there is one, it’s probably very high.
Yep, that all sounds right. In fact, a directed graph can be called transitive if… well, take a guess. And k-uniform hypergraphs (edit: not k-regular, that’s different) correspond to k-ary relations.
Here’s another thought for you: Adjacency matrices. There’s a one-to-one correspondence between square matrices and edge-weighted directed graphs (on a fixed, ordered vertex set). So large chunks of graph theory could, in principle, be described using matrices alone. We only choose not to do that out of pragmatism.
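To make that concrete, here’s a minimal sketch (the graph and its weights are made up for illustration): the adjacency matrix carries the whole edge-weighted digraph, and the transitivity condition from above becomes a statement about boolean matrix multiplication.

```python
import numpy as np

# A weighted directed graph on vertices {0, 1, 2}.
# Entry (i, j) is the weight of the edge i -> j; 0 means "no edge".
A = np.array([
    [0.0, 2.5, 0.0],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],
])

# Drop the weights to recover the bare edge relation.
E = (A != 0).astype(int)

# Transitive: every two-step path i -> k -> j is shadowed by a direct
# edge i -> j. Matrix multiplication counts the two-step paths.
two_step = (E @ E) > 0
is_transitive = not np.any(two_step & (E == 0))

print(is_transitive)  # False: 0 -> 1 -> 2 exists, but 0 -> 2 doesn't
```

Same trick behind computing transitive closures à la Warshall, which is more or less repeated boolean matrix multiplication.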
(I’ve also heard of something even more general called matroid theory. Sadly, I never took the time to learn about it.)
such as the Continuum Hypothesis, which is conjectured to be independent of ZFC.
It’s in fact known to be independent of ZFC: Gödel showed that ZFC (if consistent) can’t refute CH, and Cohen later showed that it can’t prove CH either. Sources: Devlin, The Joy of Sets; Folland, Real Analysis; Wikipedia.
Not the way I’d use those words, nope. The first is a low bar; the second is extremely high, and it includes a specific emotional reaction. I haven’t seen any plausible vision of 2040 that I’d enthusiastically endorse, whether it’s business-as-usual or dismantling stars, but it’s not hard to come up with futures that are preferable to the end of love in the universe.
If you can’t enthusiastically endorse that outcome, were it to happen, then you should be yelling at us to stop.
I don’t think it’s that simple. I’m not enthusiastic about transhumanism, so I can’t enthusiastically endorse that outcome, but I can’t bring myself to say, “Don’t build AI because it’ll make transhumanism possible sooner.” If anything, I expect that having a friendly-to-everyone ASI would make it a lot easier to transition into a world where some people are Jupiter-brained.
I am quite willing to say, “Don’t build AI until you can make sure it’s friendly-to-everyone,” of course.
This comment has been tumbling around in my head for a few days now. It seems to be both true and bad. Is there any hope at all that the Singularity could be a pleasant event to live through?
now a bunch of robots can do it. as someone who has a lot of their identity and their actual life built around “is good at math,” it’s a gut punch. it’s a kind of dying. [...] multiply that grief out by *every* mathematician, by every coder, maybe every knowledge worker, every artist… over the next few years… it’s a slightly bigger story
Have there been any rationalist writings on this topic? This cluster of social dynamics, this cluster of emotions? Dealing with human obsolescence, the end of human ability to contribute, probably the end of humans being able to teach each other things, probably the end of humans thinking of each other as “cool”? I’ve read Amputation of Destiny. Any others?
Let’s not forget that the AI action plan will be on the President’s desk by Tuesday, if it isn’t already.
I have to wake up to that every morning. Now you do, too.
I don’t understand the concept of “internal monologue”.
I have a hypothesis about this. Most people, most of the time, are automatically preparing to describe, just in case someone asks. You ask them what they’re imagining, doing, or sensing, and they can just tell you. The description was ready to go before you asked the question. Sometimes, these prepared descriptions get rehearsed; people imagine saying things out loud. That’s internal monologue.
There are some people who do not automatically prepare to describe, and hence have less internal monologue, or none. Those people end up having difficulty describing things. They might even get annoyed (frustrated?) if you ask them too many questions, because answering can be hard.
(I wonder how one might test whether or not a person automatically prepares to describe. The ability to describe things quickly is probably measurable, and one could compare that to self-reports about internal monologue. If there were no correlation, that’d be evidence against this hypothesis.)
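If I wanted to sketch that test, it might look something like this (all the numbers, and the choice of response latency as a stand-in for “how quickly people can describe things,” are made up for illustration):

```python
import numpy as np

# Hypothetical data, one entry per participant.
# describe_latency_s: seconds taken to start describing a surprise prompt
#                     ("What are you imagining right now?").
# monologue_score:    self-reported internal-monologue frequency,
#                     from 1 (none) to 7 (near-constant).
describe_latency_s = np.array([1.2, 0.8, 3.5, 0.9, 2.7, 1.1, 4.0, 0.7])
monologue_score    = np.array([6.0, 7.0, 2.0, 6.0, 3.0, 5.0, 1.0, 7.0])

# The prepare-to-describe hypothesis predicts a negative correlation:
# more internal monologue, shorter time to produce a description.
r = np.corrcoef(describe_latency_s, monologue_score)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative for this toy data

# A correlation near zero, in a properly powered study, would count as
# evidence against the hypothesis; this toy sample obviously isn't that.
```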
It would be consistent with the pattern that short-timeline doomy predictions get signal-boosted here, and wouldn’t rule out the something-about-trauma-on-LessWrong hypothesis for that signal-boosting. No doubt about that! But I wasn’t talking about which predictions get signal-boosted; I was talking about which predictions get made, and in particular why the predictions in AI 2027 were made.
Consider Jane McKindaNormal, who has never heard of LessWrong and isn’t really part of the cluster at all. I wouldn’t guess that a widespread pattern among LessWrong users had affected Jane’s predictions regarding AI progress. (Eh, not directly, at least...) If Jane were the sole author of AI 2027, I wouldn’t guess that she’s making short-timeline doomy predictions because people are doing so on LessWrong. If all of her predictions were wrong, I wouldn’t guess that she mispredicted because of something-about-trauma-on-LessWrong. Perhaps she could have mispredicted because of something-about-trauma-by-herself, but there are a lot of other hypotheses hanging around, and I wouldn’t start with the hard-to-falsify ones about her upbringing.
I realized, after some thought, that the AI 2027 authors are part of the cluster, and I hadn’t taken that into account. “Oh, that might be it,” I thought. “OP is saying that we should (prepare to) ask if Kokotajlo et al, specifically, have preverbal trauma that influenced their timeline forecasts. That seemed bizarre to me at first, but it makes some sense to ask that because they’re part of the LW neighborhood, where other people are showing signs of the same thing. We wouldn’t ask this about Jane McKindaNormal.” Hence the question, to make sure that I had figured out my mistake. But it looks like I was still wrong. Now my thoughts are more like, “Eh, looks like I was focusing too hard on a few sentences and misinterpreting them. The OP is less focused on why some people have short timelines, and more on how those timelines get signal-boosted while others don’t.” (Maybe that’s still not exactly right, though.)
Or maybe you’re saying that human hallucinations involve “the magical transmutation of observations into behavior”?
Right! Eh, maybe “observations into predictions into sensations” rather than “observations into behavior;” and “asking if you think” rather than “saying;” and really I’m thinking more about dreams than hallucinations, and just hoping that my understanding of one carries over to the other. (I acknowledge that my understanding of dreams, hallucinations, or both could be way off!) Joey Marcellino’s comment said it better, and you left a good response there.
whereas the competent behavior we see in LLMs today is instead determined largely by imitative learning, which I re-dub “the magical transmutation of observations into behavior” to remind us that it is a strange algorithmic mechanism, quite unlike anything in human brains and behavior.
And yet...
Well, I don’t know the history, but I think calling it “hallucination” is reasonable in light of the fact that “LLM pretraining magically transmutes observations into behavior”. Thus, you can interpret LLM base model outputs as kinda “what the LLM thinks that the input distribution is”. And from that perspective, it really is more “hallucination” than “confabulation”!
But hallucination is something “in human brains,” isn’t it?
I’m afraid I don’t know what “the AI Futures team” is.
I mean the authors of AI 2027. The AI Futures Project, I should have said.
I’m not that picky about which subset of the community is making dire predictions and ringing the doom bell.
My question was more like this: What if the authors weren’t a subset of the community at all? What if they’d never heard of LessWrong, somehow? Then, if the authors’ predictions turned out to be all wrong, any pattern in this community wouldn’t be the reason why; at least, that would seem pretty arbitrary to me. In reality, the authors are part of this community, and that is relevant (if I’m understanding correctly). I didn’t think about that at first; hence the question, to confirm.
Is that a good-faith response to your question?
Definitely good-faith, and the post as a whole answers more than enough.
Thanks for the thorough explanation.
Now, you mention that you “can’t think of a reason” to privilege the trauma hypothesis. And yet I gave a post full of reasons.
You describe a pattern in the community, and then you ask if the dire predictions of AI 2027, specifically, are distorted by that pattern. I think that’s what I’m getting stuck on.
Imagine if the AI Futures team had nothing to do with the rationality community, but they’d made the same dire predictions. Then 2028 rolls around, and we don’t seem to be any closer to doom. People ask why AI 2027 was wrong. Would you still point to trauma-leading-to-doom-fixation as a hypothesis? (Or, rather, would you point to it just as readily?)
Why do you say this?