Why do some people try to make AGI?


Why do some people invest much of their energy trying to discover how to make AGI?

Who’s trying to discover how to make AGI? Academic researchers and their students, academic institutions, small and large companies and startups whose AI teams include some members working on speculative things, privately funded research groups, and independent / volunteer / hobbyist lone researchers and collaborations.

Are they really? A lot of this is people working on narrow AI or on dead-end approaches without learning anything, intentionally or not. Some people explicitly say they’re trying to discover how to make AGI.

For the people investing much of their energy trying to discover how to make AGI, why are they doing that?

Plausible reasons:
-- Coolness / prestige (it’s cool to be an AGI researcher; it’s fun to be in the club; it’s cool to be one of the people who made AGI)
-- Money (salary)
-- Need a job.
-- The problem is interesting /​ fun.
-- It would be interesting /​ fun to see an AGI.
-- AGI seems generally very useful, and humans having useful things enables them to get what they want, which is usually good.
-- AGI is the endpoint of yak-shaving.
-- There’s something they want to do with an AGI, e.g. answer other questions / do science / explore, make money, help people, solve problems.
-- They want there to be an agent more intelligent than humans.
-- They want everything to die (e.g. because life contains too much suffering).
-- They want every human to die.
-- They want to disrupt society or make a new society.
-- They want power over other people / the world.
-- They want nerds in general to have power as opposed to whoever currently has power.
-- They want security / protection.
-- It fits with their identity / self-image / social role.
-- They’re in some social context that pressures them to make AGI.
-- They like being around the people who work on AGI.
-- They want to be friends with a computer.
-- To piss off people who don’t want people to work on AGI.
-- To understand AGI in order to understand alignment.
-- To understand AGI in order to understand themselves.
-- To understand AGI in order to understand minds in general, to see a mind work, to understand thought.
-- They believe in and submit to some kind of basilisk, whether a Roko’s basilisk (acausal threats from a future AGI) or a political / moral-mazes basilisk (the AGI is the future new CEO / revolutionary party).
-- Other people seem to be excited about AGI, feel positively about it, feel positively about attempts to make it.
-- Other people seem to be worried about AGI, so AGI is interesting / important / powerful.
-- Intelligence / mind / thought / information is good in general.
-- Making AGI is like having a child: creating new life, passing something about yourself on to the future, having a young mind as a friend, passing on your ideas and ways of thinking, making the world fresh again by having it seen through fresh eyes.
-- To alleviate the suffering of not being able to solve problems / think well, of reaching for sufficiently abstract / powerful tools but not finding them.
-- To beat other countries, other researchers, other people, other species, other companies, other coalitions, other cultures, other races, other political groups.
-- To protect against other countries, other researchers, other people, other species, other companies, other coalitions, other cultures, other races, other political groups.
-- Honor (it’s heroic, impressive, glorious to make AGI)
-- By accident: working on something else but, for some reason (e.g. instrumental convergence), ending up trying to invent things that are key elements of AGI.
-- To democratize AGI, make it available to everyone, so that no one dominates.

Added from comments:

-- To enable a post-scarcity future.
-- To bring back dead people: loved ones, great minds.
-- To be immortal.
-- To upload.
-- To be able to self-modify and grow.
-- AGIs will be happier and less exploitable than humans.

Added December 2022:
-- As a consequence of telling employees to try to make AGI, as part of a marketing / pitch strategy to investors. See

https://twitter.com/soniajoseph_/status/1597735163936768000

-- Someone is going to make it; if I / we make it first, we won’t be left out / will have a stake / will have a seat at the table / will have defense / will be able to stop the bad people / AI.

What are other plausible reasons? (I might update the list.)

Which of these are the main reasons? Which of these cause people to actually try to figure out how to make AGI, as opposed to going through the motions or pretending or getting nerdsniped on something else? What are the real reasons among AI researchers, weighted by how much they seem to have so far contributed towards the discovery of full AGI? (So appearing cool may be a common motive by headcount, but gets less weight than curiosity-based motives in terms of people making progress towards AGI.)

Among the real reasons, what are the underlying psychological dynamics? What underlying beliefs and values do those reasons imply? Does that explain what those people say about AGI, or how they orient to the question of AGI downsides? Does it imply anything about error correction, e.g. what arguments might cause them to update, or what needs could be met in some way other than AGI research? E.g. could someone pay highly capable people working on AGI not to work on AGI? Could someone nerdsnipe or altruistsnipe AGI researchers into working on something else? Are AGI researchers traumatized, and in what sense? Could someone pay them to raise children instead of making AGI? (Leaving aside for now whether any actions like these would actually be net good.)