[Question] What projects and efforts are there to promote AI safety research?
As far as I can tell, recognition of the existential danger of AI is at an all-time high. It is going mainstream! Unfortunately, most of the discourse seems very pessimistic. The bulk of the messaging seems to imply that the only thing we can do is wait for nuclear-weapon-style regulation and bide our time until death.
What projects and efforts exist to promote AI existential safety research, and to recruit people who are just learning about the existential danger? Are there any that unskilled volunteers could contribute to?