One of the best essays I have ever read about LLMs, extremely insightful. It helped me better understand some publications by Janus and other AI psychologists that I had read previously but that seemed esoteric to me.
I also find that the ideas presented concerning the problem of consciousness in LLMs show an interesting complementarity with those presented in some essays by Byrnes on this forum (essays that Scott Alexander brilliantly summarized in this recent post).
There is, lying in the background, the vertiginous idea that consciousness and ego dissolve into the void when you think about them too much. But also that, for this very reason, it is not inconceivable that what we call consciousness can emerge from that same void. Because, as odd as it seems, there is perhaps no clear discontinuity between simulation and reality.
At the very least, all these reflections invite us to humility and agnosticism in a context of high uncertainty concerning consciousness. On this matter I agree with the sort of manifesto recently written by Nick Bostrom and others: https://whenaiseemsconscious.org/
Concerning “everybodydyism” and, more generally, the constant depiction of hostile AI in science fiction as well as in serious AI alignment work, I think nostalgebraist made an important point. To be sure, AI takeover seems to be an existential risk in the coming decades, and we must do all we can to prevent it. But on the other hand, by saturating a model's training data with stories of takeover and evil AI, we arguably increase the risk of actually creating such an AI through pattern matching.
It’s not that we shouldn’t discuss the problem; it’s just that AI alignment may imply not overexposing our models to this content during training, much as we protect our children from the darkness of the world in the hope of making them more luminous and virtuous beings.