This feels like a post that was made with a significant amount of AI...
My guess would be that unstructured play develops more material skills and structured play develops more social skills.
I’ve personally gone from thinking that Glinda-like characters are insanely psychologically unrealistic (out of typical-mind fallacy) to thinking that they are much more common in real life than Elphaba-like characters are.
I’ve come to know consciousness is about rocks because I’ve looked at rocks and become conscious of them. I’ve come to know rocks themselves aren’t conscious because they are composed of crystal grains that are too simple. I’ve come to believe that sin interferes with consciousness because that holds for all the sins I’ve been able to think of (e.g. murder). I’ve come to believe that consciousness manifests spontaneously because humanity’s collective consciousness includes most of the observable universe.
No, the propofol is epiphenomenal; the sin is the surgery afterwards. (And the vice causing the sin is whatever disease the surgery is supposed to cure.)
Consciousness doesn’t manifest in the material, but around the material. So because a rock is not sinful, it is easy to become conscious of it.
Solution to the hard problem of consciousness: Consciousness manifests spontaneously in the absence of sin.
It’s not exactly that AI won’t be used, but it will basically just be used as a more flexible interface to text. Any capabilities it develops will be in a “bag of heuristics” sense, and the bag of heuristics will lag behind on weightier matters because people with a clue will decide not to offer more heuristics to it. More flexible interfaces to text are of limited interest.
You might be underestimating the strength of evidence that looking vaguely Slavic gives.
Ah. Not quite what you’re asking about, but omniscience through higher consciousness is likely under my scenario.
> find something I think is true and underappreciated about the world, come up with the wildest implications according to the lesswrong worldview, phrase them narrowly enough to seem crackpottish, don’t elaborate
Not sure what you mean by “phrase them narrowly enough to seem crackpottish”. I would seem much more crackpottish if I gave the underlying logic behind it, unless maybe I bring in a lot of context.
For most practical purposes we already have that. What would you do with telepathy that you can’t do with internet text messaging?
> Sleep time will desynchronize from local day/night cycles
Sleep time will synchronize more closely to local day/night cycles.
> Investment strategies based on energy return on energy invested (EROEI) will dramatically outperform traditional financial metrics
No strong opinion. Finance will lose its relevance.
> none of raw compute, data, or bandwidth constraints will turn out to be the reason AI has not reached human capability levels
Lack of AI consciousness, and a preference not to use AI, will turn out to be the reason AI never reaches human level.
> Supply chains will deglobalize
Quite likely partially, but probably there will also be a growth in esoteric products, which might actually lead to more international trade on a quantitative level.
> People will adopt a more heliocentric view
We are currently in a high-leverage situation where the way the moderate-term future sees our position in the universe is especially sensitive to perturbations. But rationalist-empiricist-reductionists opt out of the ability to influence this, and instead the results of future measurement instruments will depend on what certain non-rationalist-empiricist-reductionists do.
“Disappointed” as in disappointed in me for making such predictions or disappointed in the world if the predictions turn out true?
Preregistering predictions:

- The world will enter a golden age
- The Republican party will soon abandon Trumpism and become much better
- The Republican party will soon adopt a much more pro-trans policy
- The Republican party will double down on opposition to artificial meat, but adopt a pro-animal-welfare attitude too
- In the medium term, excess bureaucracy will become a much smaller problem, essentially solved
- Spirituality will make a big comeback, with young people talking about karma and God(s) and sin and such
- AI will be abandoned due to bad karma
- There will be a lot of “retvrn” (to farming, to handmade craftsmanship, etc.)
- Medical treatment will improve a lot, but not due to any particular technical innovation
- Architecture will become a lot more elaborate and housing will become a lot more communal
No, I’m not going to put probabilities on them, and no, I’m not going to formalize them well enough to be easily scored. Besides, they’re not independent, so it wouldn’t make sense to score them independently.
I thought fractional derivatives were dependent on global information?
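For reference, the standard Riemann–Liouville formulation (assuming that’s the definition at issue) makes the non-locality explicit:

$$
D^{\alpha}_{a} f(t) = \frac{1}{\Gamma(n-\alpha)} \, \frac{d^{n}}{dt^{n}} \int_{a}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau, \qquad n = \lceil \alpha \rceil
$$

The integral runs over the whole history $[a, t]$, so the value at $t$ depends on $f$ everywhere on that interval, unlike an integer-order derivative, which depends only on local behavior around $t$.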
Diffusion LLMs and autoregressive LLMs seem like basically the same technology to me.
Text diffusion models are still LLMs, just not autoregressive.
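To make the distinction concrete, here’s a toy sketch (not any real model’s API; the hypothetical `sample_token` stands in for a trained network’s conditional distribution) contrasting the two decoding loops:

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]
MASK = "<mask>"

def sample_token(context):
    # Stand-in for a trained network's next/masked-token distribution.
    return random.choice(VOCAB)

def autoregressive_decode(length):
    # Left-to-right: each token is sampled conditioned only on the prefix.
    seq = []
    for _ in range(length):
        seq.append(sample_token(seq))
    return seq

def diffusion_decode(length, steps=4):
    # Masked diffusion: start fully masked and iteratively unmask positions
    # (here chosen at random), conditioning each step on the whole
    # partially-filled sequence rather than just a prefix.
    seq = [MASK] * length
    for _ in range(steps):
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        if not masked:
            break
        for i in random.sample(masked, max(1, len(masked) // 2)):
            seq[i] = sample_token(seq)
    # Fill any positions still masked after the fixed number of steps.
    return [tok if tok != MASK else sample_token(seq) for tok in seq]
```

The learned model (`sample_token` here) is the same kind of object in both; only the order in which positions get filled differs.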
One time in my sexology Discord, some people were arguing that incest porn was super popular, and I was skeptical because this conflicted with my survey data. I tried scraping data from some porn site (I think PornHub?), and when I sorted videos by number of views, the top-viewed videos were often incest-themed; but as I looked at the cumulative view counts, the fraction of views going to incest-themed porn dropped as I increased the sample size.
I don’t know for sure, but my guess is that there was a supply/demand imbalance in the data: fans of incest had their views concentrated into a smaller number of videos (which were thus more likely to have extraordinarily high view counts) because people weren’t producing “enough” incest videos to meet demand, while the overall preference for incest porn was lower than one would guess from the top-viewed videos.
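If it helps compare methods, here’s a minimal sketch of the check I described (hypothetical data layout, not my actual scraping code; `videos` is just `(view_count, tags)` pairs):

```python
def cumulative_theme_share(videos, theme="incest"):
    # videos: iterable of (view_count, tags) pairs.
    # Walk from most- to least-viewed and return the running fraction of
    # cumulative views that goes to theme-tagged videos.
    shares, theme_views, total_views = [], 0, 0
    for views, tags in sorted(videos, key=lambda v: v[0], reverse=True):
        total_views += views
        if theme in tags:
            theme_views += views
        shares.append(theme_views / total_views)
    return shares

# Toy data: a couple of very popular theme videos, many mid-sized others.
videos = [(900, {"incest"}), (800, {"incest"})] + [(100, {"other"})] * 50
print(cumulative_theme_share(videos)[::10])  # share falls as the sample grows
```

The pattern I saw looks like this curve: near the top of the view ranking the theme’s share is high, and it shrinks as the sample widens.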
I haven’t looked through your materials, so I don’t know how my method of scraping in order of decreasing view count compares to yours. Did you get a comprehensive dataset somehow?
If you want to upskill in coding, I’m open to tutoring you for money.
Point is they’re still LLMs.