“The same goes for LLMs and AGIs, and I hope this forum realizes so in time.”
But it totally does, that's like one of this forum's main concerns? Also, "realizing it in time" sort of carries the implication that by realizing it, we might solve the problem "in time," while evidently that is not sufficient.
I’m under the impression that many people are placing their hopes in AGI. They expect it to solve all our problems for us, to create abundance, to cure cancer, to eliminate suffering and poverty.
Many seem to think that the alternative to this is that we fail at alignment and everyone dies, that these are the two main outcomes, and that we should try to steer towards the former.
I don't disagree that the second outcome may occur; I disagree that the first outcome is realistic. I also don't think that the second outcome requires artificial intelligence. Companies optimizing for "growth" are already harmful in ways that resemble paperclip maximization. The nature of optimization, game theory, social dilemmas, and Darwinism is sufficient for failure; AGI is merely one path to such a failure.
If Moloch is something emergent, and if subjective quality of life doesn't go up every decade, then I don't see any benefit in technological advancement, since negative consequences do seem to accumulate over time (random examples include microplastics, decreasing agency, declining birth rates, and emerging dystopian elements).