I am now coping a 4 TB HDD and it is taking 50 hours. Blu rays are more time consuming as one need to change the disks, and it may be around 80 disks 50GB each to record the same hard drive. So it could take more than day of work.
Looks like reverse stigmata effect.
Yes, but all what I said could be just a convergent prediction of M. Not the real human runs out of the room, but M predicted that its model human of H’ will leave the room.
On possible way how it could go wrong:
M to H: “Run out of the room!”
H runs out.
Adv prints something, but H never reads it. So M reached stable output.
In this sense, Stalinist purges is a way of the institutional regeneration. Evert few years, a king replace and kill all his ministers and other officials, and put new people on their places, thus cleaning all Nash equilibriums. But one day they replace the king.
Castration seems to increase human lifespan, but not make us immortal. It is interesting how it affects cancer rates in humans.
Is telomerasa active in all stem cells?
So, GPT-3 is something like Giant look-up table? Which approximate the answer between a few nearest recorded answers, but the whole actual intellectual work was performed by those who created the training dataset?
When pain is unbearable it destroys us; when it does not it is bearable. Marcus Aurelius
The goal of increasing the suffering contradicts the need to preserve an individual, who is experience the pain as the same person, which may be a natural limitation for intensity.
:) Don’t remember where I wrote about it.
Yes, I know your position from your previous comments on the topic, but it seems that GPT-like systems are winning median term and we can’t stops this. Even if they can’t be scaled to superintelligence, they may need some safety features.
I had an idea similar to yours “badness” algorithm: It will be interesting to add to the GPT a truth discriminator: another neural net which predicts the truth values of GPT’s statement relative to the real world and is trained on a database of true statements (there are several). The whole thing then is trained in GAN-style, and the GPT thus trained to produce statements with highest true score.
Even GPT-2 could be calibrated by some resent events, called “examples”—so it has some form of memory. GPT-7 robot has access to all data it observed before, so if it said “I want to kill Bill”, it will act in the future as if it has such desire. In other words, it behave as if it has memory.
It doesn’t have build-in ability to rewrite its architecture, but it can write code on a laptop or order things in the internet. But it doesn’t know much about its own internal structure except that it is very large GPT model.
It works the same way as GPT makes TL;DR summaries. There is no any reward for correct TL;DR or any training—it just completes sequence in the most probable way. The same way some self-driving cars work: there is a neural net from end to end, without any internal world models, an it just predicts what a normal car will do in this situation. I heard from a ML friend that they could achieve some reasonably good driving with such models.
Here Rolf-attack could help, which is basically something like reverse Roco Basilisk. The idea is that we now take a precomitment to create many simulations with different non-friendly AI in the future. Any future AI knows it as a fact about real world in the past, and increase its own estimation for being in a testing simulation. We don’t need significant increase. Even if it estimates this at 1 per cent, then the expected utility of killing humans is lower than the one of non-killing them, as human atoms and other resources are very cheap.
This is, -U(kill humans and being punished) > U(use of human atoms and habitats), so it is better to preserve humans for any future AI.
Rolf-attack doesn’t provide any clues for AI if it is in real world or simulation, but only installs uncertainty.
There should be only very subtle hints which are enough to install uncertainty in the AI about the nature of its reality, but not to solve it in any direction.
In my “message to any future AI” I tried to create “deliberate epistemic uncertainty” in any future AI.
Interesting, Singapore has extremely low CFR: 37.900 cases and only 25 deaths. Mostly because overtesting and young patients (migrant workers)