(status: i'm new here, this is a random thought i had, could be obvious to others, might also help when talking to outsiders about ai risk)
humans seem like a good example of an intelligence takeoff. for most of prehistory, species kept following the same basic patterns (eating each other, trying to survive, etc.).
then at some arbitrary point, one species either passed some threshold in intelligence, or maybe it just gained a pivotal intelligence-unrelated ability (such as opposable thumbs), or maybe it just found itself in the right situation (e.g. the agricultural revolution is commonly explained by humans ending up in an environment better suited for plant growth).
and then it spiraled out of control to where we are now.
and in the future, this species is gonna create an even more powerful intelligence. this mirrors our own worries about AI creating a more powerful AI.
sometimes people say that there's no evidence for AI doom because it has never been tested. framed this way, humans might count as evidence that moves such people.
this might also have implications for how AI takeoff could go. maybe there won't be some surprising jump in intelligence compared to earlier AIs; it could be more like the biological intelligence takeoff, where it happens after some arbitrary-seeming conditions are met.
Welcome! And yes, this is a thing people have talked about a lot, particularly in the context of outer versus inner alignment (the outer optimizer, evolution, designed an inner optimizer, humans, who optimize for different things than evolution does, like pleasure, and ended up effectively becoming a "singularity" from evolution's point of view). It's cool that you noticed this on your own!
thanks for the reply btw, i’d upvote you but the site won’t let me yet :p
eta: now i can :3