very powerful and sample-efficient learning algorithm
I would add that people overestimate their ability to guess others’ preferences. “He just wants money” or “She just wants to marry him”. Such oversimplified models may not be merely useful simplifications; they can be blatantly wrong.
To avoid creating merely random minds, the future AI has to create a simulation of the whole history of humanity, and that simulation would still be running, not just stored. I explored the topic of resurrectional simulations here: https://philpapers.org/rec/TURYOL
How would measure affect this? If you’re forced to follow certain paths because you don’t exist in any others, then why does it matter how much measure those paths have?
Agree, but some don’t.
We could be (and probably are) in an AI-created simulation; maybe it is a “resurrectional simulation”. But if friendly AIs dominate, there will be no drastic changes.
QI works only if at least three main assumptions hold, and we don’t know for sure whether they are true. The first is the very large size of the universe; the second is the “unification of identical experiences”; and the third is that we can ignore the decline of measure corresponding to survival in MWI. So the validity of QI is uncertain. Personally, I think it is more likely true than not.
It was just a toy example of a rare but stable world. If friendly AIs dominate the measure, you will most likely be resurrected by a friendly AI. Moreover, a friendly AI may try to dominate the total measure precisely to increase the chance that humans are resurrected by it, and it could try to rescue humans from evil AIs.
A world where someone wants to revive you has low measure (maybe not, but let’s assume so), but if they do revive you, they will preserve you there for a very long time. For example, some semi-evil AI may want to revive you only to show you red fish for the next 10 billion years. It is a very unlikely world, but still possible. And once you are in it, it is very stable.
If QI is true, then no matter how small the share of worlds where radical life extension is possible, I will eventually find myself in one of them; if not in 100 years, then maybe in 1000.
I wrote the article quoted above. I think I understand your feelings: when I first came to the idea of QI, I realised, after the initial period of excitement, that it implies the possibility of eternal suffering. However, in the current situation of rapid technological progress, such eternal suffering is unlikely, as within 100 years life-extending and pain-reducing technologies will appear. Or, if our civilization crashes, some aliens (or the owners of the simulation) will eventually bring pain-reduction techniques.
If you have thoughts about non-existence, it may be some form of suicidal ideation, which could be a side effect of antidepressants or of bad circumstances. I had it, and I am happy that it is in the past. If such ideation persists, seek professional help.
While death is impossible in the QI setup, a partial death is still possible: a person may forget those parts of themselves which want to die. Partial death has already happened many times to the average adult, who has forgotten their childhood personality.
It will appear at a random moment in time, when someone guesses it. However, this “randomness” is not evenly distributed. The probability of guessing the correct algorithm grows with time (as more people are trying), and it is higher in a DeepMind-like company than in a random basement, since DeepMind (or a similar company) has already hired the best minds. A larger company also has a greater capability to test ideas, as it has more computational capacity and other resources.
One possible route to a decisive strategic advantage is to combine a rather mediocre AI with some equally mediocre but rare real-world capability.
Toy example: an AI is created which is capable of winning a nuclear war by choosing the right targets and other elements of nuclear strategy. The AI itself is not a superintelligence; it may be something like AlphaZero for nukes. Many companies and people are capable of creating such an AI. However, only a nuclear power with a large nuclear arsenal could actually gain any advantage from it, which means only the US, Russia, or China. Let’s assume that such an AI gives a +1000 Elo advantage in nuclear strategy over the other nuclear superpowers. The first of the three countries to get it would then have a temporary decisive strategic advantage. This is a toy example, as it is unlikely that the first country to get such a “nuclear AI decisive advantage” would take the risk of a first strike.
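For scale, the standard Elo formula (a general property of Elo ratings, not anything specific to this nuclear scenario) gives the expected score of the stronger side as E = 1 / (1 + 10^(−D/400)). For a gap of D = 1000 this is E = 1 / (1 + 10^(−2.5)) ≈ 0.997, i.e. the AI-equipped superpower would be expected to “win” about 997 out of 1000 confrontations.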
There are several other real-world capabilities which could be combined with a mediocre AI to get a decisive strategic advantage: access to very large training data, access to large surveillance capabilities like PRISM, access to large untapped computing power, to funds, to a pool of scientists, to other secret military capabilities, or to drone-manufacturing capacity.
All these capabilities are concentrated in the largest military powers and their intelligence and military services. Thus, combining a rather mediocre AI with the whole set of capabilities of a nuclear superpower could create a temporary strategic advantage. Assuming there are around three nuclear superpowers, one of them could gain a temporary strategic advantage via AI. But each of them has internal problems in implementing such a project.
There are two interesting developments this year.
First are very large wafer-scale chips with 1.2 trillion transistors, well above trend.
Second are “chiplets”: small silicon chips which are manufactured independently but are stacked on top of each other for higher connectivity.
They also claim increased energy efficiency, as they eliminate useless multiplications by zero, which are frequent in matrix multiplication.
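A minimal sketch of the zero-skipping idea (illustrative Python under my own assumptions, not the actual hardware logic or any vendor’s API): every zero operand lets you skip a whole run of multiply-accumulates, so with sparse matrices most of the multiplication work disappears.

```python
import numpy as np

def zero_skipping_matmul(A, B):
    """Naive matrix product that skips multiplications by zero operands.

    Illustrative only: real accelerators do this per multiply-accumulate
    unit in hardware, not in Python loops.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    skipped = 0
    for i in range(n):
        for j in range(k):
            a = A[i, j]
            if a == 0.0:        # zero operand: all m products would be zero
                skipped += m    # count the multiplications we avoided
                continue
            C[i, :] += a * B[j, :]
    return C, skipped

# With ~90% zeros in A, ~90% of the scalar multiplications are skipped.
A = np.random.rand(64, 64) * (np.random.rand(64, 64) > 0.9)
B = np.random.rand(64, 64)
C, skipped = zero_skipping_matmul(A, B)
assert np.allclose(C, A @ B)
print(f"skipped {skipped} of {64 * 64 * 64} multiplications")
```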
Kardashev, the creator of the Kardashev scale of civilizations, has died at 87. Here is his last video, which I recorded in May 2019. He spoke about the possibility of SETI via wormholes.
it is perfectly valid for programmers to use their own assumptions
Looks like the “humans consulting HCH” procedure: programmers query their own intuition, consult each other, read books, etc. This is why a jury is often used in criminal cases: written law is just an approximation of human opinion, so why not ask humans directly?
My friend used to say, “I don’t believe in missed opportunities.” She probably meant that some people think they had an opportunity and missed it, when in fact there was never any chance of getting what they wanted.
The best argument for the existence of qualia is the existence of pain.