Ya, I think the set of goals is very narrow. The AI here starts off as a Descartes-level genius and proceeds to self-preserve, understand the map-territory distinction so as not to wirehead, foresee the possibility that instrumental goals which look good may destroy the terminal goal, and so on.
The AI I imagine starts off stupid and has some really narrow (edit: or should I say, short-foresighted) self-improving, non-self-destructive goal, likely having to do with maximization of complexity in some way. Think evolution, don't think a fully grown Descartes waking up after amnesia. It ain't easy to reinvent the 'self'. It's also not easy to look at an agent (yourself) and say "wow, this agent works to maximize G" without entering infinite recursion. We humans, if we escaped out of our universe into some super-universe, might wreak some havoc, but we'd sacrifice a bit of utility to preserve anything resembling life. Why? Well, we started stupid, and that's how we got our goals.