some thoughts fmpov
it seems pretty true that there is substantial cultural overlap between the different labs. people come and go all the time, and information flows freely. nobody really thinks of the individual people at competitor labs as enemies, though people obviously do want their own lab to win. EA is not really a major value of the uniparty. also, at least at openai, one major value missing from the list is a strong belief in empiricism over philosophical argument.
the capabilities cluster is pretty socially distinct from the safety cluster: they mostly don't go to the same parties, live in the same apartments, etc.
I've received a lot of pushback for arguing that AGI timelines might be longer than 3 years, and for arguing that slower capabilities development would be a good thing. obviously, some people dislike these views enough to not want to talk to me. but I don't feel like they are "unthinkable" propositions. perhaps I've memed so hard that I've found myself on a mystical island of stability where I have jester's privilege to say such things, but the more likely explanation imo is that people simply treat this like any other normal disagreement.
from observation, I do think people are heavily motivated by their stonks. but there is also a sizeable number of people, especially the more senior ones, whose actions are hard to explain as financially motivated.