It might be that I’m just unintentionally misconstruing your argument, but I think you’re limiting “wizard power” to STEM fields, which is a mistake.
Napoleon was most definitely a “king” (or an emperor, if you want to be literal), but he was also very much directing the parade he was in front of. In a sense, he was a sociological engineer, having turned his country into a type of machine which he could direct towards securing his own vision of the world.
In contrast, consider “Dave”. Dave has mastery over the various methods of creation you listed: he knows CAD, can program, etc. But he works for Apple as the team lead for creating the newest iPhone. What he creates is not up to him. Despite having wizard skills, Dave is more like a bureaucrat, high in “king power”.
The STEM-type wizard is really good at solving very specific problems, like killing a disease or making crops grow better, but the Napoleon-type wizard probably operates more in the abstract, wrestling with bigger ideas, albeit with less direct control over them.
Generally, hypothetical hostile AGI is assumed to be built on software/hardware more advanced than what we have now. This makes sense, as ChatGPT is very stupid in a lot of ways.
Has anyone considered purposefully creating a hostile AGI on this “stupid” software so we can wargame how a highly advanced, hostile AGI would act? Obviously the gap between what we have now and what we may have later will be quite large. But I think we could create a project where we “fight” stupid AIs, then slowly move up the intelligence ladder as new models come out, using our accumulated knowledge of fighting hostile intelligence to mitigate the risk that comes with creating hostile AIs.
Has anyone ever thought of this? And what are your thoughts on it? Alignment and AI are not my specialties, but I thought this idea sounded interesting enough to share.
Very interesting. Question: How does putting humans into cryonic suspension relate or contribute to the metaphor, if at all?
The idea that sleep evolved to establish or avoid certain hunting patterns doesn’t seem totally complete to me. There are very harsh penalties for not sleeping: you become less intelligent, less amiable, and you might randomly lose consciousness. I would think that if sleep were merely meant to provide a schedule to our lives, we would’ve evolved incentives that don’t penalize us in other areas. For example, a pack of early humans who enjoy sleep but can have individuals stay up all night to watch for predators with no penalty would out-compete packs of early humans who can’t. My guess is that sleep fulfills some unknown function, which would explain why the short-sleeper gene(s) haven’t already spread throughout the population.
This isn’t to say I don’t support the research. I’m pretty sure I suffer from sleep apnea myself, and I work a job where falling asleep is both common and a hazard (yeah, maybe I didn’t think that one all the way through, given the daytime sleepiness the apnea causes). But I’m worried about what sort of long-term effects will crop up, as any such drug or treatment seems a little too good to be true.