Last edit: 15 Oct 2020 0:02 UTC by Chris Leong

Many philosophy problems involve imagining hypothetical scenarios, and there has been significant debate over the validity of this approach.

Unrealistic hypotheticals:

There has been significant debate on Less Wrong about the relevance and value of discussing hypotheticals that are unrealistic. In The Least Convenient Possible World, Scott Alexander suggests that fighting a hypothetical often means being technically correct at the cost of “missing the point and losing a valuable effort to examine the nature of morality”. He argues that considering the least convenient possible world is often vital for discovering our true motivations, since more convenient scenarios leave us too much “wiggle room”.

In Please Don’t Fight the Hypothetical, TimS suggests that fighting the hypothetical is equivalent to saying, “I don’t find this topic interesting for whatever reason, and wish to talk about something I am interested in.” He says that this is fine, but suggests it is important to be aware that you are changing the subject when you do so.

Hypotheticals: The Direct Application Fallacy argues that it is a mistake to assume that the only reason for studying a hypothetical situation is to understand what to do in that exact situation. Practice exercises don’t need to be real, and insisting that they be can make teaching nearly impossible. Furthermore, examining degenerate cases of a theory often provides a useful sanity check and can make the limitations of a heuristic more explicit.

Related terms:

Hypotheticals are essentially the same as counterfactuals, although a) the term “counterfactual” is preferred when imagining someone making different decisions, and b) strictly speaking, what actually happened (the factual) isn’t a counterfactual, though it is common to say something like “iterate over all the counterfactuals and pick the one with the highest utility”, where the factual is treated as one of the counterfactuals.

The Least Convenient Possible World

Scott Alexander · 14 Mar 2009 2:11 UTC
270 points
205 comments · 5 min read · LW link

[Question] What could one do with truly unlimited computational power?

Yitz · 11 Nov 2020 10:03 UTC
30 points
22 comments · 2 min read · LW link

Hypotheticals: The Direct Application Fallacy

Chris_Leong · 9 May 2018 14:23 UTC
21 points
19 comments · 4 min read · LW link

A note on hypotheticals

PhilGoetz · 7 Aug 2009 18:56 UTC
23 points
18 comments · 3 min read · LW link

Please Don’t Fight the Hypothetical

TimS · 20 Apr 2012 14:29 UTC
40 points
65 comments · 2 min read · LW link

Hypothetical situations are not meant to exist

casebash · 27 Sep 2015 10:58 UTC
3 points
22 comments · 1 min read · LW link

Updating on hypotheticals

casebash · 6 Nov 2015 11:49 UTC
6 points
22 comments · 2 min read · LW link

[Question] What’s the contingency plan if we get AGI tomorrow?

Yitz · 23 Jun 2022 3:10 UTC
61 points
24 comments · 1 min read · LW link

The Anti-Carter Basilisk

Jon Gilbert · 26 May 2021 22:56 UTC
0 points
0 comments · 2 min read · LW link

[Question] What would make you confident that AGI has been achieved?

Yitz · 29 Mar 2022 23:02 UTC
17 points
6 comments · 1 min read · LW link

[Question] List of concrete hypotheticals for AI takeover?

Yitz · 7 Apr 2022 16:54 UTC
7 points
5 comments · 1 min read · LW link

Inward and outward steelmanning

Q Home · 14 Jul 2022 23:32 UTC
11 points
5 comments · 18 min read · LW link

“Infohazards” The ML Field’s Greatest Excuse.

Puffy Bird · 21 Sep 2022 3:19 UTC
−3 points
1 comment · 3 min read · LW link

Why Weren’t Hot Air Balloons Invented Sooner?

Lost Futures · 18 Oct 2022 0:41 UTC
107 points
52 comments · 6 min read · LW link