I’m reminded of Thomas Metzinger’s book “Being No One,” which argues that the self is essentially an illusion.
This statement is self-contradictory, as its opening ‘I’m reminded’ implies that ‘I exist’.
ChatGPT gives the wrong answer for 500 sisters:
Prompt: Sally (a girl) has 3 brothers. Each brother has 500 sisters. How many sisters does Sally have?
The problem states that Sally has 3 brothers. It also says that each brother has 500 sisters. This might seem like a large number, but remember that all the brothers share the same sisters.
So, if one brother has 500 sisters, then all of them have the same 500 sisters, including Sally. Therefore, Sally has 500 sisters.
But it gives the right answer for 100 sisters:
Let’s break it down step by step:
Sally has 3 brothers.
Each brother has 100 sisters.
Given that each brother has 100 sisters, and we know Sally is one of those sisters, it means that Sally and her other sisters make up the 100 sisters for each brother.
However, this doesn’t mean Sally has 300 sisters (3 brothers x 100 sisters). All the brothers are counting the same set of sisters.
So, Sally has 99 other sisters (100 sisters in total including Sally) and 3 brothers.
https://chat.openai.com/share/589777f0-1edf-4f7a-9435-28084510a9a8
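For anyone who wants to reproduce this inconsistency, here is a minimal sketch using the OpenAI Python SDK; the model name and the exact wording are illustrative assumptions, not what was used in the linked chat:

```python
# Minimal sketch: compare the model's answers for the 500- and 100-sister variants.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

TEMPLATE = (
    "Sally (a girl) has 3 brothers. Each brother has {n} sisters. "
    "How many sisters does Sally have?"
)

for n in (500, 100):
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; substitute any chat model
        messages=[{"role": "user", "content": TEMPLATE.format(n=n)}],
    )
    print(f"n = {n}: {response.choices[0].message.content}\n")
```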
The unreasonable effectiveness of philosophy could be explained by anthropics: I, as the writer of this comment, am selected only from minds capable of complex reflective thinking on abstract topics.
As I remember, the Russian tax system works like this: the income tax is small and flat, 13 per cent, but your employer also pays tax on your wages, and every shop pays tax on every sale. Summing it all up gives a lot.
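A back-of-the-envelope sketch of how these taxes stack up; the 13 per cent income tax is from the comment above, while the payroll and sales tax rates are illustrative assumptions, not official figures:

```python
# Back-of-the-envelope estimate of the total tax burden on one unit of gross pay.
# The 13% income tax is from the comment above; the other rates are illustrative
# assumptions only.
income_tax = 0.13    # flat personal income tax
employer_tax = 0.30  # assumed employer payroll contributions on top of the wage
sales_tax = 0.20     # assumed tax on sales when the net wage is spent

gross_wage = 100.0
employer_cost = gross_wage * (1 + employer_tax)   # what the employer actually pays
net_wage = gross_wage * (1 - income_tax)          # what the worker receives
spendable_goods = net_wage / (1 + sales_tax)      # value of goods after sales tax

total_tax = employer_cost - spendable_goods
print(f"Taxes take roughly {total_tax / employer_cost:.0%} of the employer's total cost")
```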
I read that people are using meta-level commands to reach this, like ‘align these two parts of me’ or ‘cure this problem’, without specifying how exactly this will happen, and it works for some.
Could the same technique be used to increase sexual desire if needed, say by imagining that your penis is 10 times larger than usual?
Could it permanently damage sexual desire? Is it better than fapping?
I think I can’t do it alone.
But I actually submitted a grant proposal which includes exploration of this idea.
There are several other use cases where cooperation across multiple universes could be beneficial:
1. Resurrection of the Dead: When generating a random mind file, the likelihood of it being a copy of a deceased person is extremely low. More often than not, it will either be noise or a copy of some other individual. However, for any given person, there exists a universe where they have just passed away. Therefore, we could collaborate with these alternate universes: we resurrect their deceased, and they resurrect ours.
2. Alleviating Past Suffering and S Risks: This concept is similar to the one above but is applied to any moment of suffering experienced by an observer. This is contingent on certain assumptions about the nature of personal identity and theories of indexical uncertainty.
I think that AI safety may be a service (as in: we will make your model safe, controllable, and aligned with regulations), and such a service can be sold.
Moreover, it is better to have AI safety as a paid service than AI safety based only on forceful regulation adopted out of fear: more people will want to get it, as it will save them money.
Note that AI safety as a commercial service does not exclude fines and the bombing of data centers for those who decide not to have any certified AI safety. Such threats only increase the motivation to subscribe. But many will buy such services just to save money, not out of fear.
However, it looks like AI safety is for now presented as a purely altruistic, non-commercial endeavor, and this can actually preclude its wider adoption. But eventually someone will earn a lot of money and become a billionaire selling AI safety as a service.
Yes, I see it as a risk that AI will eventually be aligned with mean human values, excluding some outliers, and this will result in a boring, almost senseless world.
I also tried it with drawing the board and adding explanations to the moves, and there are some errors in the drawings. But maybe there is a way to make the drawings more coherent?
In some sense it already happens: as we train AI on more and more human-generated texts, it gets both more capabilities and more alignment.
Could illegal moves be explicitly prohibited in the prompt, or could some other changes to the prompt prevent them?
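As a complement to prompt-level prohibitions, the moves could also be checked outside the model; here is a minimal sketch using the python-chess library, where get_move_from_model is a hypothetical placeholder for whatever prompt or API call produces the model's move:

```python
# Minimal sketch: reject illegal moves from the model using the python-chess library.
# get_move_from_model() is hypothetical and stands for whatever prompt/API call
# produces the model's next move in UCI notation (e.g. "e2e4").
import chess

def get_move_from_model(board: chess.Board) -> str:
    raise NotImplementedError  # hypothetical: query the LLM with the current position

board = chess.Board()
while not board.is_game_over():
    uci = get_move_from_model(board)
    move = chess.Move.from_uci(uci)
    if move in board.legal_moves:
        board.push(move)
    else:
        # Illegal move: here one could reprompt the model with an explicit
        # "that move is illegal" message instead of stopping.
        print(f"Model proposed illegal move: {uci}")
        break
```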
We could push the analogy even further:
The mathematical universe is God the Father.
Artificial intelligence is its son, as an agent built on the same computational principles.
But we should be careful with such analogies.
In some Hegelian sense, Superintelligence is God which self-evolves from matter.
What bridges Nietzsche and infinite ethics is the idea of eternal return. Nietzsche was the first infinite ethicist.
More generally, infinite ethics implies big-world immortality, and most of the timelines where I am immortal are those where I am supported by some advanced AI. Now the question is whether it will be a friendly entity or a hostile one that creates s-risks.
Thus the difference between hell and paradise in infinite ethics is one bit in AI’s value function.
Also, visual art often consists of such adversarial examples, but it creates not visual illusions but emotions, e.g. a sad landscape or a religious epiphany.
One more reason is that humans have a “pleasure to kill” drive, which can’t be acted on in real life but is easily satisfied in fiction and games. From the point of view of this drive, DOOM is a utopia.
It would be interesting to add to this model that AI has a constantly increasing effect on human life expectancy: even ChatGPT already helps with diagnostics, companies use AI for drug development, etc.
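A toy sketch of what such an extension might look like; the baseline, the growth rate, and the functional form are all illustrative assumptions, not estimates:

```python
# Toy sketch: life expectancy with a growing AI-driven contribution.
# All numbers and the functional form are illustrative assumptions, not estimates.
baseline_life_expectancy = 80.0  # assumed baseline life expectancy in years
ai_gain = 0.2                    # assumed extra years added by AI in the first year
ai_growth = 1.1                  # assumed annual growth factor of the AI contribution

life_expectancy = baseline_life_expectancy
for year in range(2024, 2040):
    life_expectancy += ai_gain
    ai_gain *= ai_growth         # AI's effect keeps increasing over time
    print(year, round(life_expectancy, 1))
```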