> In this piece, we want to paint a picture of the possible benefits of AI, without ignoring the risks or shying away from radical visions.
Thanks for this piece! In my opinion, you are still shying away from discussing radical (though quite plausible) visions. I expect the median good outcome from superintelligence involves everyone being mind-uploaded / living in simulations, experiencing things that are hard to imagine currently.
Even short of that, in the first year after a singularity, I would want to:
- Use brain-computer interfaces to play video games / simulations that feel 100% real to all senses but are not constrained by physics.
- Go to Hogwarts (in a 100% realistic simulation), learn magic, and make real (AI) friends with Ron and Hermione.
- Visit ancient Greece, or view all the most important events of history based on superhuman AI archaeology and historical reconstruction.
- Take medication that makes you always feel wide awake and focused, with no side effects.
- Engineer your body / use cybernetics so that you never have to eat, sleep, wash, etc., and can jump very high, run very fast, climb up walls, and so on.
- Use AI as the best teacher ever to learn maths, physics, and every subject, language, and musical instrument to super-expert level.
- Visit other planets. Geoengineer them to have crazy landscapes and climates.
- Play God and oversee the evolution of life on other planets.
- Design buildings in new architectural styles and have AI build them.
- Genetically modify cats to play catch.
- Listen to new types of music, perfectly designed to sound good to you.
- Design the biggest roller coaster ever and have AI build it.
- Modify your brain to have better short-term memory, eidetic memory, the ability to do any arithmetic super fast, and super charisma.
- Bring back dinosaurs and create new creatures.
- Ask AI for way better ideas for this list.
I expect UBI, curing aging, etc. to be solved within a few days of a friendly intelligence explosion.
Although I think we will also plausibly see a new type of scarcity. There is a limited amount of compute you can create from the materials and energy in the universe, and if most humans are in fact mind-uploaded / brains in vats living in simulations, we will have to divide that compute among ourselves in order to run the simulations. If you have twice as much compute, you can simulate your brain twice as fast (or run two copies of yourself in parallel?), and thus experience twice as much subjective time, and so live twice as long until the heat death of the universe.
Computing the exact layer-truncated residual streams on GPT-2 Small suggests that the effective layer horizon is quite large:
I mean-ablate every edge whose source node is more than n layers back and calculate the loss on 100 samples from The Pile.
Source code: https://gist.github.com/UFO-101/7b5e27291424029d092d8798ee1a1161
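To illustrate the idea (this is a minimal NumPy toy, not the linked gist's GPT-2 setup; the model, sizes, and mean-ablation details here are my own assumptions): each "layer" writes into a residual stream, and under a horizon of n, a layer reads only the contributions of the previous n layers, with older contributions replaced by their dataset mean.

```python
import numpy as np

rng = np.random.default_rng(0)
D, L = 16, 8  # residual stream width, number of toy layers
Ws = [rng.normal(scale=0.3 / np.sqrt(D), size=(D, D)) for _ in range(L)]

def layer_out(l, x):
    # Toy "layer": a nonlinear map whose output is written into the residual stream.
    return np.tanh(x @ Ws[l])

def run(x0, horizon, means=None):
    """Run the residual stack. Layer l reads the embedding plus the outputs of
    at most `horizon` preceding layers; older contributions are mean-ablated
    (replaced by their dataset mean) when `means` is given."""
    outs = []
    for l in range(L):
        x = x0.copy()
        for j in range(l):
            if l - j <= horizon:
                x += outs[j]        # edge within the horizon: keep the real output
            elif means is not None:
                x += means[j]       # edge beyond the horizon: mean-ablate it
        outs.append(layer_out(l, x))
    return x0 + sum(outs)

# A small dataset of toy "embeddings".
X = rng.normal(size=(100, D))

# Per-layer mean outputs under the exact (untruncated) forward pass.
exact_outs = []
for x0 in X:
    outs, acc = [], x0.copy()
    for l in range(L):
        o = layer_out(l, acc)
        outs.append(o)
        acc = acc + o
    exact_outs.append(outs)
means = [np.mean([eo[l] for eo in exact_outs], axis=0) for l in range(L)]

def err(horizon):
    # Mean L2 distance between the truncated and exact outputs over the dataset.
    return np.mean([np.linalg.norm(run(x, horizon, means) - run(x, L)) for x in X])

for n in [1, 2, 4, L]:
    print(f"horizon {n}: error {err(n):.4f}")
```

With horizon L the truncation is a no-op, so the error is exactly zero; shrinking the horizon introduces approximation error that compounds through the stack, which is the intuition behind measuring loss as a function of n.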
I believe the horizon may be large because, even if the approximation is fairly good at any particular layer, the errors compound as you go through the layers. If we instead apply the truncation only at the final output, the horizon is smaller.
However, if we apply it at just the middle layer (6), the horizon is surprisingly small, so we would expect relatively little error to propagate. But this appears to be an outlier; compare layers 5 and 7.
Source: https://gist.github.com/UFO-101/5ba35d88428beb1dab0a254dec07c33b