Information hazards are risks of harm, not necessarily net harm—and the audience matters
You write:
Also, antibiotics might be useful for creating antibiotic-resistant bacteria. (Not sure if such bacteria are more deadly to humans, all else equal. This makes categorization difficult: how can an inventor tell if their invention can be used for ill?)
From memory, I don’t believe the following point is explicit in Bostrom’s paper, but I’d say that information hazards are just risks that some (notable level of) harm will occur, not necessarily that the net impact of disseminating the information will be negative. (It’s possible that this is contrary to standard usage, but I think standard usage doesn’t take a clear position here, and this usage seems useful to me.)
Note that Bostrom defines an information hazard as “A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” He doesn’t say “A risk that the dissemination or potential dissemination of (true) information may make the world worse overall than if the information hadn’t been disseminated, or enable some agent to do so.”
In fact, our upcoming post will note that some information may often pose risks of harm yet still be worth developing or sharing on balance, because the potential benefits are sufficiently high. Often this will be because the harms may not actually occur (they’re currently just risks). But it could also be because, even if both the harms and the benefits do occur, the benefits would outweigh the harms.
(There are also cases in which that isn’t true, where one should be very careful about developing or sharing some information, or simply not do so at all. That’ll be explored in that post.)
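To put that in rough expected-value terms (my own sketch, not anything explicit in Bostrom’s paper): letting $p$ be the probability that the harm occurs, $H$ its magnitude, and $B$ the expected benefit of sharing, the information poses an information hazard whenever $p \cdot H$ is non-trivial, but sharing can still be worthwhile whenever

$$B > p \cdot H$$

This captures both cases above: sharing can come out ahead either because $p$ is low (the harms may never materialise) or because $B$ exceeds $H$ even if both the harms and the benefits occur.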
But I hadn’t thought about that point very explicitly when writing this post, and perhaps should’ve made it clearer here, so thanks for bringing it to my attention.
Relatedly, you write:
(The key point here is that people vary, which could be important to ‘infohazards in general’. Perhaps some people acquiring the blueprints for a nuclear reactor wouldn’t be dangerous, because they wouldn’t use them. Someone with the right knowledge (or at the right time and place) might be able to do more good with those blueprints, or even pose less risk of harm: “I didn’t think of doing that, but I see how it’d make the reactor safer.”)
[...]
What counts as an infohazard seems relative. If information about how to improve health can also be used to harm it, then whether or not something is an infohazard seems to depend on the audience: are they benign or malign?
As noted above, developing and sharing information will often have some positive consequences and some negative ones. And as you note here, information will often have mostly positive consequences when received by certain people, and mostly negative ones when received by others.
I would say that the risk that the information you share ultimately reaches people who may use it badly is part of what makes it an information hazard. If it might not reach those people, the risk is lower. If it will also reach people who’ll use the information in beneficial ways, then that’s a benefit of sharing it, and a reason sharing may be worth doing even if there are also risks.
In our upcoming post, we’ll note that one strategy for handling potential information hazards is to be deliberate about how the info is framed and explained, and about who you share it with, in order to influence who receives it and how they use it. This is one of the “middle paths” between sharing as if there were no risk and not sharing as if there were only risk and no potential benefits.
Somewhat related to your point, I also think that, if we had a world where no one was malign or careless, then most things that currently pose information hazards would not. (Though I think some of Bostrom’s types would remain.) And if we had a world where the vast majority of people were very malign or careless, then the benefits of sharing info would go down and the risks would go up. We should judge how risky (vs potentially beneficial) developing and sharing info is based on our best knowledge of how the world actually is, including the people in it, and especially those who are likely to receive the info.
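One crude way to formalise that audience-dependence (again my own sketch, not from Bostrom): if $q_i$ is the probability that the info reaches recipient $i$, and $v_i$ is the expected value, positive or negative, of that recipient having it, then the expected value of sharing is roughly

$$\sum_i q_i \, v_i$$

The same piece of information can make this sum positive in one world and negative in another, depending entirely on who the likely recipients are. Careful framing and selective sharing, as above, are ways of shifting the $q_i$ toward recipients with positive $v_i$.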