Preregistering predictions:
The world will enter a golden age
The Republican party will soon abandon Trumpism and become much better
The Republican party will soon come out with a much more pro-trans policy
The Republican party will double down on opposition to artificial meat, but adopt a pro-animal-welfare attitude too
In the medium term, excess bureaucracy will become a much smaller problem, essentially solved
Spirituality will make a big comeback, with young people talking about karma and God(s) and sin and such
AI will be abandoned due to bad karma
There will be a lot of “retvrn” (to farming, to handmade craftsmanship, etc.)
Medical treatment will improve a lot, but not due to any particular technical innovation
Architecture will become a lot more elaborate and housing will become a lot more communal
No, I’m not going to put probabilities on them, and no, I’m not going to formalize these well enough that they can be easily scored. Plus, they’re not independent, so it doesn’t make sense to score them independently.
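On the non-independence point: scoring the predictions one by one implicitly treats the probability of the whole bundle as the product of the marginals, which badly understates it when the predictions share a common cause. A toy illustration with made-up numbers:

```python
# Toy illustration: two yes/no predictions driven by one common cause.
# Each is assigned p = 0.6 on its own; they come true together or not at all.
p = 0.6

joint_if_independent = p * p  # 0.36: what item-by-item scoring implies
joint_if_correlated = p       # 0.60: the actual chance the whole bundle comes true

print(joint_if_independent, joint_if_correlated)
```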
Reading this feels like how a normie might feel reading Kokotajlo’s prediction that energy use might increase 1000x in the next two decades; like, you hope there’s a model behind it, but you don’t know what it is, and you’re feeling pretty damn skeptical in the meantime.
What’s the crux? Or what’s the most significant piece of evidence you could imagine coming across that would update you against these predictions?
Please explain. This part seems even less likely than the golden age or the return to farming.
It’s not exactly that AI won’t be used, but it will basically just be used as a more flexible interface to text. Any capabilities it develops will be in a “bag of heuristics” sense, and the bag of heuristics will lag behind on more weighty matters because people with a clue decide not to offer more heuristics to it. More flexible interfaces to text are of limited interest.
Which of the following do you additionally predict?
Sleep time will desynchronize from local day/night cycles
Investment strategies based on energy return on energy invested (EROEI) will dramatically outperform traditional financial metrics (see the sketch after this exchange)
None of raw compute, data, or bandwidth constraints will turn out to be the reason AI has not reached human capability levels
Supply chains will deglobalize
People will adopt a more heliocentric view
Sleep time will synchronize more closely to local day/night cycles.
No strong opinion. Finance will lose its relevance.
Lack of AI consciousness and preference not to use AI will turn out to be the reason AI will never reach human level.
Quite likely partially, but probably there will also be a growth in esoteric products, which might actually lead to more international trade on a quantitative level.
We are currently in a high-leverage situation where the way the moderate-term future sees our position in the universe is especially sensitive to perturbations. But rationalist-empiricist-reductionists opt out of the ability to influence this, and instead the results of future measurement instruments will depend on what certain non-rationalist-empiricist-reductionists do.
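For concreteness on the EROEI item above: EROEI is just lifetime energy delivered divided by lifetime energy invested, and the prediction is that ranking assets by it will diverge from, and eventually beat, rankings by financial metrics. A minimal sketch in Python; the asset names and figures are hypothetical, purely for illustration:

```python
# Minimal sketch: EROEI ranking vs. a financial (ROI) ranking.
# All names and figures are made up for illustration only.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    energy_out_gj: float  # lifetime energy delivered (gigajoules)
    energy_in_gj: float   # lifetime energy invested (gigajoules)
    profit_usd: float     # lifetime profit
    cost_usd: float       # lifetime cost

    @property
    def eroei(self) -> float:
        return self.energy_out_gj / self.energy_in_gj

    @property
    def roi(self) -> float:
        return self.profit_usd / self.cost_usd

assets = [
    Asset("hydro", 5000, 100, 40, 100),  # high EROEI, modest ROI
    Asset("shale", 800, 200, 90, 100),   # low EROEI, high ROI
    Asset("solar", 1200, 120, 30, 100),
]

# The two orderings diverge: hydro leads on EROEI, shale leads on ROI.
print("by EROEI:", [a.name for a in sorted(assets, key=lambda a: a.eroei, reverse=True)])
print("by ROI:  ", [a.name for a in sorted(assets, key=lambda a: a.roi, reverse=True)])
```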
Telepathy?
For most practical purposes we already have that. What would you do with telepathy that you can’t do with internet text messaging?
Any protocol can be serialized, so in principle if you had the hardware and software necessary to translate from and to the “neuralese” dialect of the sender and recipient, you could serialize that as text over the wire. But I think the load-bearing part is the ability to read, write, and translate the experiences that are upstream of language.
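To make the serialization half of that concrete (the easy half, not the load-bearing one): any byte-level payload can ride a text channel, e.g. via base64. A minimal sketch; `encode_neuralese` and `decode_neuralese` are hypothetical stand-ins for translation hardware/software that doesn’t exist:

```python
# Minimal sketch: serializing an arbitrary binary protocol as text.
# The neuralese codec below is a hypothetical placeholder; the actual
# translation hardware/software is the hard, missing part.
import base64

def encode_neuralese(experience: str) -> bytes:
    # Hypothetical: render an experience into the sender's neuralese bytes.
    return experience.encode("utf-8")

def decode_neuralese(payload: bytes) -> str:
    # Hypothetical: translate neuralese bytes into the recipient's dialect.
    return payload.decode("utf-8")

payload = encode_neuralese("warm sun on a cold day")
wire_text = base64.b64encode(payload).decode("ascii")  # plain text over the wire
received = decode_neuralese(base64.b64decode(wire_text))

assert received == "warm sun on a cold day"
```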
One could expect “everyone can viscerally understand the lived experiences of others” to lead to a golden age as you describe, though it doesn’t really feel like your world model. But conditioning on it not being something about the flows of energy that come from the sun and the ecological systems those flows of energy flow through, it’s still my guess for what generated those predictions (under the assumption that the predictions were generated by “find something I think is true and underappreciated about the world, come up with the wildest implications according to the lesswrong worldview, phrase them narrowly enough to seem crackpottish, don’t elaborate”).
Ah. Not quite what you’re asking about, but omniscience through higher consciousness is likely under my scenario.
Not sure what you mean by “phrase them narrowly enough to seem crackpottish”. I would seem much more crackpottish if I gave the underlying logic behind it, unless maybe I bring in a lot of context.
Can you give some reasons why you think all that, or at least some of it?
So there’s like an ultimate thing that your set of predictions is about, and you’re holding off on saying what is to be vindicated until some time when you can say “this is exactly/approximately what I was saying would happen”?
I’m not trying to be negative; I can still see utility in that, if that’s a fair assessment, but I want to know why, when you say you called it, this was the thing you wanted to have called.
fwiw I prefer that people write posts like this rather than not, on the margin. I think operationalizing things is quite hard; I think the right norm is “well, you get a lot less credit for vague predictions with a lot of degrees of freedom”, but it’s still good practice IMO to be in the habit of concretely predicting things.
Who’s gonna do that? It’s not like we have enough young people for rapid cultural evolution.
“Disappointed” as in disappointed in me for making such predictions or disappointed in the world if the predictions turn out true?
At a guess, disappointment at the final paragraph. Without a timeline, specificity, or justification, what’s the point of calling this “preregistered predictions”?
Can you give some time horizon on this? Like, 5 years, 10 years, 20 years?
Recently I’ve been starting to think it could go many other ways than my predictions above suggest. So it’s probably safer to say that the futurist/rationalist predictions are all wrong than that any particular prediction I can make is right.
I’m still mostly optimistic though.
The main difference being the “NNs fail to work in many ways, no digital human analog for sure, agents stay at the same ‘plays this one game very well’ stage, but a lot of tech progress in other ways” scenario?