I agree it’s a very significant risk which is possibly somewhat underappreciated in the LW community.
I think all three situations are very possible and potentially catastrophic:
1. Evil people do evil with AI
2. Moloch goes Moloch with AI
3. ASI goes ASI (FOOM etc.)
Arguments against (1) could be “evil people are stupid” and “terrorism is not about terror”.
Arguments against (1) and (2) could be “timelines are short” and “AI power is likely to be very concentrated”.
See reply above, I don’t think I’m bringing Moloch up here at all, but rather individuals being evil in ways that lead to both self and systemic harm, which is an easier problem to mitigate, even if not fully solvable.
“Evil people are stupid” is actually an argument for (1): it means we’re equalising the field. If an AGI model leaks the way LLaMA did, we’re giving the most idiotic and deranged members of our species a chance to simply download more brains from the Internet and use them for whatever stupid thing they wanted in the first place.