Individuals angry with humanity as a possible existential risk?

As technology improves, it increases the ability of any individual human to change the world, and by extension, the ability to inflict more significant damage on it, if they so desire. This matters most in the case of individuals who are especially angry with the world and who want to take others down with them (e.g. the Columbine shooters, and to an extent the Unabomber).

So the question is this: what if someone angry at the world eventually developed the means to annihilate it, or at least to cause massive destruction, at will? Certainly this has not happened yet, but it becomes more plausible as technology improves, especially with an improved ability to bioengineer viruses and various nanoparticles.

Right now, one of the biggest constraints on such an individual is lack of resources. But with the development of nanotechnology (which reduces the resources needed to construct certain things, along with substitutions such as carbon nanotubes for conventional materials), this may not remain as much of a constraint as it is now. We could improve monitoring, but that would obviously present a threat to civil liberties. (This is not an argument against technology. I'm a transhumanist, after all, and I completely embrace technological development. But this is a problem I've never seen a good solution to.)

Of course, reducing the number of angry individuals would also reduce the probability of this happening. That demands an understanding of psychology, especially the psychology of people who are self-centered, resent having to compromise, and collect grudges very easily, and then a creative way to make them less angry. This is quite difficult (which is why the creativity is needed), especially since many such people get angry at the very thought of compromise.

So, has anyone else thought about this? And about possible solutions?