
Information Hazards


An Information Hazard is true information that could harm people, or other sentient beings, if known. Setting policies on information hazards is tricky: some information might genuinely be dangerous, but excessive controls on information have their own perils.

This tag is for discussing the phenomenon of Information Hazards and what to do about them, not for sharing actual Information Hazards themselves.

An example might be a formula for easily producing cold fusion in one's garage, which would be very dangerous if widely known. Alternatively, it might be an idea that causes great mental harm to the people who learn it.

Bostrom’s Typology of Information Hazards

Nick Bostrom coined the term information hazard in a 2011 paper [1] in Review of Contemporary Philosophy. He defines it as follows:

Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.

Bostrom points out that this stands in contrast to the generally accepted principle of information freedom, and that, while information hazards are rare, their possibility needs to be considered when making information policies. He proceeds to categorize and define a large number of sub-types of information hazard. For example, he defines artificial intelligence hazard as:

Artificial intelligence hazard: There could be computer-related risks in which the threat would derive primarily from the cognitive sophistication of the program rather than the specific properties of any actuators to which the system initially has access.

The following typology is reproduced from Bostrom 2011 [1]. Under “By effect”, each type is followed by its subtypes, where applicable.

TYPOLOGY OF INFORMATION HAZARDS

I. By information transfer mode
    Data hazard
    Idea hazard
    Attention hazard
    Template hazard
    Signaling hazard
    Evocation hazard

II. By effect
    ADVERSARIAL RISKS
        Competitiveness hazard: Enemy hazard; Intellectual property hazard; Commitment hazard; Knowing-too-much hazard
    RISKS TO SOCIAL ORGANIZATION AND MARKETS
        Norm hazard: Information asymmetry hazard; Unveiling hazard; Recognition hazard
    RISKS OF IRRATIONALITY AND ERROR
        Ideological hazard
        Distraction and temptation hazard
        Role model hazard
        Biasing hazard
        De-biasing hazard
        Neuropsychological hazard
        Information-burying hazard
    RISKS TO VALUABLE STATES AND ACTIVITIES
        Psychological reaction hazard: Disappointment hazard; Spoiler hazard; Mindset hazard
        Belief-constituted value hazard
        (mixed): Embarrassment hazard
    RISKS FROM INFORMATION TECHNOLOGY SYSTEMS
        Information system hazard: Information infrastructure failure hazard; Information infrastructure misuse hazard; Artificial intelligence hazard
    RISKS FROM DEVELOPMENT
        Development hazard

See Also

References

  1. Bostrom, N. (2011). “Information Hazards: A Typology of Potential Harms from Knowledge”. Review of Contemporary Philosophy 10: 44-79.

Terrorism, Tylenol, and dangerous information

Davis_Kingsley, 12 May 2018 10:20 UTC
100 points
46 comments, 3 min read, LW link

What are information hazards?

MichaelA, 18 Feb 2020 19:34 UTC
28 points
15 comments, 4 min read, LW link

Information hazards: Why you should care and what you can do

23 Feb 2020 20:47 UTC
15 points
4 comments, 15 min read, LW link

Mapping downside risks and information hazards

20 Feb 2020 14:46 UTC
14 points
0 comments, 9 min read, LW link

Thoughts on the Scope of LessWrong’s Infohazard Policies

Ben Pace, 9 Mar 2020 7:44 UTC
46 points
5 comments, 8 min read, LW link

Needed: AI infohazard policy

Vanessa Kosoy, 21 Sep 2020 15:26 UTC
49 points
17 comments, 2 min read, LW link

Knowing About Biases Can Hurt People

Eliezer Yudkowsky, 4 Apr 2007 18:01 UTC
135 points
80 comments, 2 min read, LW link

Memetic Hazards in Videogames

jimrandomh, 10 Sep 2010 2:22 UTC
115 points
160 comments, 3 min read, LW link

The Fusion Power Generator Scenario

johnswentworth, 8 Aug 2020 18:31 UTC
105 points
25 comments, 3 min read, LW link

Memetic downside risks: How ideas can evolve and cause harm

25 Feb 2020 19:47 UTC
15 points
3 comments, 15 min read, LW link

Good and bad ways to think about downside risks

11 Jun 2020 1:38 UTC
16 points
11 comments, 11 min read, LW link

A brief history of ethically concerned scientists

Kaj_Sotala, 9 Feb 2013 5:50 UTC
99 points
150 comments, 14 min read, LW link

A few misconceptions surrounding Roko’s basilisk

Rob Bensinger, 5 Oct 2015 21:23 UTC
77 points
133 comments, 5 min read, LW link

Bioinfohazards

Spiracular, 17 Sep 2019 2:41 UTC
76 points
15 comments, 18 min read, LW link, 2 nominations, 2 reviews

A point of clarification on infohazard terminology

eukaryote, 2 Feb 2020 17:43 UTC
49 points
21 comments, 2 min read, LW link
(eukaryotewritesblog.com)

Winning vs Truth – Infohazard Trade-Offs

eapache, 7 Mar 2020 22:49 UTC
12 points
11 comments, 2 min read, LW link

[Link and commentary] The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?

MichaelA, 16 Feb 2020 19:56 UTC
24 points
4 comments, 3 min read, LW link

SlateStarCodex deleted because NYT wants to dox Scott

Rudi C, 23 Jun 2020 7:51 UTC
89 points
95 comments, 1 min read, LW link

[META] Building a rationalist communication system to avoid censorship

Donald Hobson, 23 Jun 2020 14:12 UTC
36 points
33 comments, 2 min read, LW link

Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical

Gentzel, 24 Feb 2018 23:34 UTC
47 points
10 comments, 4 min read, LW link

USA v Progressive 1979 excerpt

RyanCarey, 27 Nov 2017 17:32 UTC
22 points
2 comments, 2 min read, LW link

Shock Level 5: Big Worlds and Modal Realism

Roko, 25 May 2010 23:19 UTC
34 points
158 comments, 4 min read, LW link