I’ve always wondered: how do you retain agency if you embed a vastly smarter and faster mind into your own, when it would theoretically make better decisions than you would 90% of the time? Scaling this intuition up, turning humanity into a hive mind does not strike me as a valuable future world.
Edit: I’ve also given more thought to the idea of BCIs allowing us to ‘download’ skills, and I’d really like someone to engage with the following. If we agree that we derive value and meaning from the effort we put into learning and the satisfaction we get from it, essentially specializing ourselves according to our tastes, how do we find value in a world where anyone can instantly know anything? It’s a question I’ve been ruminating on for a few hours.
I have weighed the loss of humanity from being in a hive mind against the loss of humanity from going extinct entirely or being emulated on digital processes, and concluded that, as bad as it might be to become much more akin to truly eusocial insects like ants, you still retain more humanity by keeping some biology and individual bodies.
If we agree we derive value and meaning from effort we put into learning and the satisfaction we get from it
People get the feeling of meaning when they “have a hold on reality”, which corresponds to finding coherent representations of their situation in the moment. See John Vervaeke’s work. During learning, this mostly happens during “Aha” moments (only a small fraction of the time we generally spend “learning”), or when you exercise a skill in the “goldilocks zone”, or “stay in the flow”. Perfecting skills through model learning (i.e., “energy landscape formation”) also happens during these moments (or after them, when neurodevelopmental processes in the brain kick in that are slower than those responsible for momentary inference), but it seems possible to decouple the two. The human brain may play the role of an integrative or orchestration layer that continually finds coherent representations (and thus “meaning”) while itself “going in circles” in terms of the energy landscape, or simply exploiting a stable, highly optimised landscape, while the bulk of learning happens externally (a repertoire of modular skills and representations learnt with DL).
Post-hoc “satisfaction from learning” is a rather different story from “meaning”. It is psychological and dopamine-mediated. It helps people stick with evolutionarily beneficial activities that improve their prospects in the environment. There is no fundamental metaphysical substance to the fact that humans value hard work (such as hard learning) in today’s environment. It’s also worth noting that extremely few people find hard learning satisfying today: humans really don’t like to put their brains to work.