Then, what should those people actually do with that knowledge?
Focus a mixture of stigma, regulation, and financial pressure on the people who are responsible for building AGI/ASI. Importantly, “responsible” is very different from “associated with”.
If AI devs are making fortunes endangering humanity, and we can’t negate their salaries or equity stakes, we can at least undercut the social status and moral prestige of the jobs that they’re doing.
Yep, I am in favor of such stigmas for people working on frontier development. I am not in favor of e.g. such a stigma for people who are developing self-driving cars, or are working on stopping AI themselves (and as such are “associated with building AGI/ASI”).
I think we both agree pretty strongly that there should be a lot of negative social consequences for people responsible for building AGI/ASI. My sense is you want to extend this beyond “responsible” and into “associated with”, and I think this is bad. Yes, we can’t expect perfect causal models from the public and the forces behind social pressures, but we can help make them more sane and directed towards the things that help, as opposed to the things that are just collateral damage or actively anti-helpful. That’s all I am really asking for.
Oliver—that’s all very reasonable, and I largely agree.
I’ve got no problem with people developing narrow, domain-specific AI such as self-driving cars, or smarter matchmaking apps, or suchlike.
I wish there were better terms that could split the AI industry into ‘those focused on safe, narrow, non-agentic AI’ versus ‘those trying to build a Sand God’. It’s only the latter who need to be highly stigmatized.
Peace out :)