# Falsifiable and Non-Falsifiable Ideas

I have been talking to some people in my dorm (a few specific people I thought would benefit from and appreciate it) and teaching them rationality. I have been thinking about which skills should be taught first, and that made me think about what skill is most important to me as a rationalist.

I decided to start with the question “What does it mean to be able to test something with an experiment?”, which could also be phrased “What does it mean to be falsifiable?”

To help my point I brought up the thought experiment with the dragon in Carl Sagan’s garage, which goes as follows:

Carl: There is a dragon in my garage.
Me: I thought dragons only existed in legends, and I want to see for myself.
Carl: Sure, follow me and have a look.
Me: I don’t see a dragon in there.
Carl: My dragon is invisible.
Me: Let me throw some flour in so I can see where the dragon is by the disruption of the flour.
Carl: My dragon is incorporeal.

And so on.

The answer that I was trying to bring about was along these lines: if something can be tested by an experiment, then it must have at least one effect that differs depending on whether it is true or false. Conversely, if something has at least one effect that differs depending on whether it is true or false, then I could, at least in theory, test it with an experiment.

This led me to the statement:
If something cannot, at least in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.

Anthony (the person I was talking to at the time) started his counterargument with the claim that any object in a thought experiment cannot be tested for but still has a meaning.

So I revised my statement: any object that, if brought into the real world, cannot be tested for has no meaning. This is under the assumption that if an object could not be tested for in the real world, it also has no effect on anything in the thought experiment; i.e., the story with the dragon would have gone the same way regardless of its truth value if it were in the real world.

Then the discussion continued into whether it could be rational to hold a belief that could not, even in theory, be tested. It became interesting when Anthony gave the argument that if believing in a dragon in your garage gave you happiness, and the world would be the same either way besides the happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.

I responded with truth trumps happiness, and that believing in the dragon would force you to believe a false belief, which is not worth the amount of happiness received by believing it. Even further, I argued that it would in fact be a false belief, because p(world) > p(world)·p(impermeable invisible dragon), which is a simple Occam’s razor argument.
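
The Occam’s razor step here is just the conjunction rule of probability: tacking an extra detail onto a hypothesis can never make it more probable than the bare hypothesis. A minimal sketch in Python, where both priors are made-up numbers purely for illustration:

```python
# Conjunction rule: P(A and B) = P(A) * P(B given A) <= P(A).
# Both priors below are hypothetical, chosen only to illustrate the point.

p_world = 0.99               # prior that the mundane world-model is right
p_dragon_given_world = 1e-6  # prior for an invisible, incorporeal dragon
                             # existing on top of that world-model

p_world_and_dragon = p_world * p_dragon_given_world

# Adding the dragon detail can only shrink the probability:
assert p_world_and_dragon <= p_world
```

Whatever numbers you plug in, the product can never exceed either factor, which is all the argument needs.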

My intended direction for this argument with Anthony was to apply these points to theology, but we ran out of time and have not had a chance to talk again, so that may be a future post.

Today, however, Shminux pointed out to me that I held beliefs that were themselves non-falsifiable. I realized then that it might be rational to believe non-falsifiable things for two reasons (I’m sure there are more, but these are the main ones I can think of; please comment with your own):

1) The belief has a beauty to it that flows with falsifiable beliefs and makes known facts fit more perfectly. (This is very dangerous and should not be used lightly, because it focuses too closely on opinion.)

2) You believe that the belief will someday allow you to make an original theory which will be falsifiable.

Both of these reasons, if not used very carefully, will allow false beliefs. As such, I decided that if a belief or new theory meets these conditions well enough to make me want to believe it, I should put it into a special category of my thoughts (perhaps “conjectures”). This category should be below beliefs in power but still held as how the world works, and anything in this category should always strive to leave it, meaning that I should always strive to make any non-falsifiable conjecture no longer a conjecture, either by making it a belief or by disproving it.

Note: This is my first post, so as well as discussing the post itself, critiques simply of the writing are deeply welcomed by PM.

• If something cannot, at least in theory, be tested by experiment, then it has no effect on the world and lacks meaning from a truth standpoint, and therefore from a rational standpoint.

Better version: …then it has no effect on the world and therefore is not useful to have information about.

As to the rest of your post, I will make a general observation: you are speaking as if epistemic rationality is a terminal value. There’s nothing wrong with that (insofar as nobody can say someone else’s utility function is wrong), but you might want to think about whether that is what you really want.

The alternative is to allow epistemic rationality to arise from instrumental rationality: obtaining truth is useful insofar as it improves your plans to obtain what you actually want.

• Good point; I often find myself torn between epistemic rationality as a terminal value and its alternative. My thoughts are that learning how to treat truth as the highest goal would be more useful to my career in physics and would be better for the world than if I currently steered toward my closer, less important values.

• So treating truth as the highest goal serves your other, even higher goals?

What behaviors are encapsulated by the statement that you’re treating truth as the highest goal, and why can’t you just execute those behaviors anyway?

• Truth is the highest goal; being ‘right’ is a lower goal; improving a career in physics is the lowest goal.

Seeking truth is opposed to being ‘right’, but aligned with the career in physics.

• If you’re justifying a terminal value, it’s not your real terminal value.

• If you’re justifying a terminal value, it’s not your real terminal value.

(Doesn’t strictly follow. People can and do justify things that they don’t need to justify all the time. It does imply confusion.)

• In the given situation I would be more curious about something else: if the dragon is invisible, incorporeal, immune to all tests … then what exactly made Carl believe there is a dragon?

Yeah, I know, there are many possible explanations. Perhaps the dragon can become visible and corporeal when it wants to, and it happened when only Carl was near, because unlike me he is a nice and humble person or whatever, so the dragon likes him. Alternatively, Carl derived the dragon’s existence from philosophical first principles, and if I disagree with him, here are a hundred books with a thousand pages each, and I am welcome to show exactly where the authors made a mistake. :(

• This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.

any object in a thought experiment cannot be tested for but still has a meaning.

One way to think about what is accomplished when you perform a thought experiment is that you are performing an experiment where the subject is your brain. The goal is to figure out what your brain thinks will happen, and statements about such things are falsifiable statements about brains.

Anthony gave the argument that if believing in a dragon in your garage gave you happiness, and the world would be the same either way besides the happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.

The world is not the same either way, because the dragon-believer is not the same either way. If the dragon-believer actually believes that there’s a dragon in her garage (as opposed to believing in her belief that she has a dragon in her garage), that belief can affect how she makes other decisions. Truths are entangled and lies are contagious.

I responded with truth trumps happiness

Why?

The belief has a beauty to it that flows with falsifiable beliefs and makes known facts fit more perfectly. (This is very dangerous and should not be used lightly, because it focuses too closely on opinion.)

Can you give some examples of beliefs with this property?

You believe that the belief will someday allow you to make an original theory which will be falsifiable.

Why call it a belief instead of an idea, then? (And why the emphasis on originality?)

• This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.

Or at the very least, read Eliezer’s new epistemology sequence, which directly addresses the questions at the heart of the OP.

• The purpose of a thought experiment is to make a prediction about a real experiment. The thought experiment is as real as any other abstract object or mental process, and the prediction it makes is as real as a prediction made by any means.

And if believing a belief which is known to be false results in a higher output on your utility function, you have a nonstandard utility function. Rationalists who have radically different utility functions are very dangerous things.

• This post is somewhat confused. I would recommend that you finish reading the Sequences before making a future post.

I agree that I am putting a post here prematurely, but I thought the criticism of some of my ideas would be worth it so I could fix things before they became ingrained. So thanks for the criticism.

I responded with truth trumps happiness

Why?

Break of quotes

I often find myself torn between epistemic rationality as a terminal value and its alternative. My thoughts are that learning how to treat truth as the highest goal would be more useful to my career in physics and would be better for the world than if I currently steered toward my closer, less important values.

^from the comment below

I believe that putting truth first will help me as a Physicist

Can you give some examples of beliefs with this property?

Most of the beautiful theories I know of at this point are those found in mathematics and not physics (this is due to my math education being much greater than my physics one, even though my intended career is in physics), and I don’t think they qualify as proper examples in this circumstance. The best I could come up with in five minutes on the clock is Einstein’s theory of relativity, which, before experimental predictions were obtained, was held as beautiful and correct.

Why call it a belief instead of an idea, then?

I wanted to call it a belief instead of an idea because when I think of examples such as Timeless physics, I believe it’s actually how the world works, and it seems much more meaningful than an idea that does not color my perspective on the world. This, however, may simply be over-definition of “idea”.

And why the emphasis on originality?

You’re right, it doesn’t necessarily have to be original. I was thinking along the lines that it is much harder to think of an original theory, and this is a goal of mine, so I had it in mind while writing this.

• I’d quite seriously like to know if you’ve read Making Beliefs Pay Rent; it, and the Mysterious Answers to Mysterious Questions sequence in general, seem quite relevant. I wouldn’t expect you to write your post as it is now if you’d read them.

• I have read them, but it was a long time ago and I was not practicing using the knowledge at the time, so it may not have sunk in as it was supposed to. I will go back now and reread them, thank you.

• I believe that putting truth first will help me as a Physicist

Why do you want to be a physicist? (Also, first relative to what?)

The best I could come up with in five minutes on the clock is Einstein’s theory of relativity, which, before experimental predictions were obtained, was held as beautiful and correct.

In what sense was relativity non-falsifiable at the time that Einstein described it?

• Why do you want to be a physicist?

I learned of Quantum mechanics when I was younger and I grew curious about it because it was mysterious. Now Quantum mechanics is not mysterious, but the way the world works is, and I am still deeply curious about it.

In what sense was relativity non-falsifiable at the time that Einstein described it?

It was falsifiable, but I was thinking that it was still extraordinarily beautiful.

also the quote

In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe solar eclipses and thereby test an experimental prediction of Einstein’s novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington’s observations failed to match his theory. Einstein famously replied: “Then I would feel sorry for the good Lord. The theory is correct.”

Einstein’s Arrogance

• I learned of Quantum mechanics when I was younger and I grew curious about it because it was mysterious. Now Quantum mechanics is not mysterious, but the way the world works is, and I am still deeply curious about it.

So… would you say that it makes you happy when your curiosity is satisfied?

When you said that “truth trumps happiness,” it sounded like you were saying “in general, truth trumps happiness.” If the reason you personally value truth is because you think it will help you as a physicist, and the reason you want to be a physicist is because you are curious about physics, then you don’t have a reason to value truth which applies in general. Why should other people, who are not necessarily interested in physics or particularly curious about things, value truth above happiness?

It was falsifiable, but I was thinking that it was still extraordinarily beautiful.

Right, but you were giving a reason why you would have a belief that is non-falsifiable, and this is not an example of such a belief. Einstein defying the data is not Einstein thinking that relativity wasn’t falsifiable; it’s Einstein thinking that relativity wasn’t falsifiable by just one experimental result.

• Thinking about truth vs. happiness, I believe that if given a decision of truth or happiness it is already too late for me to fully accept happiness. In short, thinking about the decision made the decision. On top of this, I am too curious to avoid thinking about certain topics in which I would be faced with this (not) decision, so I will always embrace truth over happiness.

What I will now have to think on is: given a friend who aspires to be more rational yet is not a scientist or somebody similar, and I find a thought pattern that is giving them false but enjoyable results, should I intervene?

As to Einstein, I was not saying that his belief was unfalsifiable, but my thought process, without my conscious knowledge, probably took Einstein’s theory as evidence for p(truth | beauty) being higher. If so, I realize that this is only weak evidence.

• I believe that if given a decision of truth or happiness it is already too late for me to fully accept happiness.

Why do you believe that? Even given that you believe this is currently true, do you think this is something you should change about yourself, and if not, why?

(I’m teasing you to some extent. What I regard to be the answers to many of the questions I’m asking can be found in the Sequences.)

As to Einstein I was not saying that his belief was unfalsifiable

I think you’ve lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn’t sufficiently clear that I was asking for an example which wasn’t falsifiable, in which case I apologize, but I was (after all, that’s why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.

• (I’m teasing you to some extent. What I regard to be the answers to many of the questions I’m asking can be found in the Sequences.)

I know the answers to most of these questions can be found in the Sequences because I read them. However, the Sequences include quite a bit of information, and it is clear that not all, or probably even most, of it made it into the way I think. You asking me these questions is extremely helpful to me in filling in those gaps, and I appreciate it.

Why do you believe that? Even given that you believe this is currently true, do you think this is something you should change about yourself, and if not, why?

I believe that because I do not have the mental discipline required to both know a belief is false and still gain happiness from that belief. It is possible that I could change this about myself, but I don’t see myself ever learning the discipline required to lie to myself (if doublethink is actually possible). It’s also possible to go the other way and say that something injured my brain and brought my intelligence to a level where I could no longer see why I should think one way instead of another, or could no longer see the truth vs. happiness decision, which would let me pick happiness without lying to myself.

I think that most of two is based off of that heuristic, which allows you to gain evidence for the claim even though it remains unfalsifiable and only weak evidence.

• You asking me these questions is extremely helpful to me in filling in those gaps, and I appreciate it.

Glad to hear that. I was afraid I might be being a little too harsh.

I believe that because I do not have the mental discipline required to both know a belief is false and still gain happiness from that belief.

I guess I should clarify what I was trying to say. If you optimize for truth and not happiness, you will seek out a whole bunch of truths whether or not you expect that knowing those truths will make you happier. If you optimize for happiness and not truth, you’ll only seek truths that will help make you happier. I’m not asking you to consider explicitly lying to yourself, which is in some sense hard, but I’m asking you to consider the implications of optimizing for truth vs. optimizing for happiness.

Whether or not you do, most people do not optimize for truth. Do you think this is a good thing or a bad thing, and in either case, why?

I think that most of two is based off of that heuristic, which allows you to gain evidence for the claim even though it remains unfalsifiable and only weak evidence.

What is this referring to?

• What is this referring to?

I think you’ve lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn’t sufficiently clear that I was asking for an example which wasn’t falsifiable, in which case I apologize, but I was (after all, that’s why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.

quote break

Whether or not you do, most people do not optimize for truth. Do you think this is a good thing or a bad thing, and in either case, why?

Perhaps it would be easier for me to replace the word happiness with awesomeness, in which case I could see the argument that optimizing for awesomeness would let me seek out ways to make the world more awesome, and would allow specific circumstances of what I consider awesome to govern which truths to seek out. In this way I can understand optimizing for awesomeness.

I think it is a good thing most people do not optimize for truth, because if it were so I don’t think the resulting world would be awesome. It would be a world where many people were less happy, even though it would also probably be a world with more scientific advances.

I suppose that if anyone were to optimize for truth, it would be a minority who wanted to advance science further to make the general population happier, even while the scientists themselves were not always happy. Even in this case I could understand the argument that they were optimizing for awesomeness, not truth, because they thought the resulting world would be more awesome.

• I still don’t see how anything you’ve said about Einstein is relevant to the original question I asked, which was for an example of a belief that you thought was beautiful, non-falsifiable, and worth holding.

I think it is a good thing most people do not optimize for truth, because if it were so I don’t think the resulting world would be awesome. It would be a world where many people were less happy, even though it would also probably be a world with more scientific advances.

Cool. So we agree now that truth does not trump awesomeness? (Somewhat tangential comment: science is not the only way to seek out truth. I also have in mind things like finding out whether you were adopted.)

• You’re right, Einstein was not relevant to your original question. I brought him up because I did not understand the question until

I think you’ve lost track of why we were talking about Einstein. In the original post, you listed two reasons to believe non-falsifiable things. I asked you to give an example of the first one. Maybe it wasn’t sufficiently clear that I was asking for an example which wasn’t falsifiable, in which case I apologize, but I was (after all, that’s why it came up in the first place). Relativity is falsifiable. A heuristic that beautiful things tend to be true is also falsifiable.

Thanks for leading me to the conclusion that truth does not trump awesomeness, and yes, I now agree with this.

I also have in mind things like finding out whether you were adopted

Good point

• I believe that putting truth first will help me as a Physicist

Do you think that there are some professional physicists who put truth first and others who don’t? Do you believe that those who put truth first perform better?

What evidence do you see in the world that this is true?

• I responded with truth trumps happiness, and that believing in the dragon would force you to believe a false belief, which is not worth the amount of happiness received by believing it

In the future, I hope you notice this sort of situation and respond by getting curious and engaging with the other person, rather than attempting to win the argument.

Today, however, Shminux pointed out to me that I held beliefs that were themselves non-falsifiable.

In fact, it’s rather worse :) The negation of an unfalsifiable belief is also unfalsifiable: you unfalsifiably believe that Carl’s garage does not have an immaterial dragon in it. Even if you make an observation, e.g. you throw a ball to measure the gravitational acceleration, you have an unfalsifiable belief that you have not just hallucinated the whole thing.

• At some point you devolve into declaring all beliefs unfalsifiable, because of the unfalsifiable belief that you exist (what different observations would you expect if you didn’t exist?) and the complementary unfalsifiable belief that you don’t exist (suppose you existed; what observations would be different?)

• In fact, it’s rather worse :) The negation of an unfalsifiable belief is also unfalsifiable: you unfalsifiably believe that Carl’s garage does not have an immaterial dragon in it. Even if you make an observation, e.g. you throw a ball to measure the gravitational acceleration, you have an unfalsifiable belief that you have not just hallucinated the whole thing.

As a general principle, it would seem that the negation of an unfalsifiable belief is better than the belief itself, meaning that the negation is true in a much larger number of possible worlds than the belief is.

For example: there are many more possible ways that Carl does not have an immaterial dragon in his garage than possible ways that he does.

I think a good way to think about which unfalsifiable belief to hold is the evidence that brought it out of the original hypothesis space. In this way Timeless physics has a higher ratio of probability (p(timeless physics)/p(not timeless physics)) than an immaterial dragon.
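
This ratio-of-probability idea can be phrased as odds updated by a Bayes factor: prior odds times the likelihood ratio of whatever evidence promoted the hypothesis out of the hypothesis space. A toy sketch, where every number is hypothetical and chosen only to illustrate the comparison:

```python
# Posterior odds = prior odds * likelihood ratio (Bayes factor).
# All numbers below are hypothetical illustrations, not real estimates.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update the odds p(H) / p(not H) by the evidence's Bayes factor."""
    return prior_odds * likelihood_ratio

# A hypothesis promoted by substantial theoretical evidence:
timeless_physics = posterior_odds(prior_odds=0.01, likelihood_ratio=50.0)

# A hypothesis with no supporting evidence at all:
immaterial_dragon = posterior_odds(prior_odds=1e-9, likelihood_ratio=1.0)

# Evidence, not mere unfalsifiability, is what separates the two:
assert timeless_physics > immaterial_dragon
```

With a likelihood ratio of 1 (no evidence either way), the dragon’s odds never move from its tiny prior, which is the point being made.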

However, it is a warning flag to me when someone brings up that

you have an unfalsifiable belief that you have not just hallucinated the whole thing.

because of the negligible probability of this belief; giving it power in an argument would both be an example of Scope Insensitivity and prevent any useful work being done.

Nevertheless, it reminded me that I should be thinking in terms of probability about unfalsifiable beliefs, rather than simply the fact that they’re unfalsifiable. Maybe I should revise conjectures to be unfalsifiable beliefs that are within a certain probability margin, say p = .8 to p = .2. I would still separate them from higher beliefs because simply labeling them with a probability is still not intuitive enough for me not to confuse them with scope insensitivity.
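
The triage described above is simple enough to write down. A sketch of that rule, where the .2 and .8 cutoffs are the margins proposed in the paragraph, not an established convention:

```python
# Sort a claim by the probability assigned to it, using the
# p = .2 to p = .8 conjecture margin proposed above.

def categorize(p):
    """Label a claim with probability estimate p (0 <= p <= 1)."""
    if p > 0.8:
        return "belief"       # confident enough to hold as how the world works
    if p < 0.2:
        return "disbelief"    # confident enough to reject
    return "conjecture"       # held tentatively; strive to promote or disprove it

assert categorize(0.95) == "belief"
assert categorize(0.50) == "conjecture"
assert categorize(0.05) == "disbelief"
```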

• It became interesting when Anthony gave the argument that if believing in a dragon in your garage gave you happiness, and the world would be the same either way besides the happiness, then, combined with the principle that rationality is the art of systematized winning, it is clearly rational to believe in the dragon.

It would indeed make sense to believe in the dragon, if you had examined all other alternatives you could think of and found that this belief gives you the most happiness, and if happiness is your goal (it’s the issue of hedons vs. utilons).

You may also want to consider the reasons why the +dragon world makes you happier. For example, if you dig deeper, you might find that there is an experience somewhere you are not at all fond of, like the unresolved pain of learning years ago that God/Santa/Tooth Fairy is not real after all, and consider a visit to your therapist. Maybe once this issue is resolved, you will no longer derive happiness from imagining that a +dragon world is real. Whether killing cheap happiness this way is an instrumentally rational thing to do is a different story, and there is a fair bit about it in the Sequences and in HPMOR (poor Draco, I feel for him).

• Keep in mind that “falsifiability” is not a scientific concept; it is a philosophy-of-science concept. Specifically, Popper articulated the concept in order to divide Science from pseudo-scientific theories masquerading as scientific. In other words, Popper was worried that theories like Marxist History and Freudian Psychology were latching on to the halo effect and portraying themselves as worthy of the same serious consideration as Science without actually being scientific.

Thus, there’s no particular reason to desire that a belief be falsifiable. Popper’s project was simply to define Science such that only falsifiable statements and theories qualified. It turns out that scientific theories are much better at making future predictions than non-scientific theories, and we have philosophy-of-science reasons why we think this is so. But falsifiability is a definition to clarify thought, not a virtue to be aspired towards.

• An unfalsifiable idea can still be true. To reject it outright just because it’s unfalsifiable would require infinite certainty.

• I would be interested to hear your arguments that Truth > Happiness. I think it is kind of hard to simply state that without backing it up with reasons.

• shaih and I discussed this in the responses to my comment here, and he updated away from this conclusion. (It is very easy to simply state conclusions without backing them up with reasons. I think you meant to say that it is not particularly persuasive.)

• Thank you.

Yes, that is what I was implying, and you seem to have been successful in deriving that, so I don’t really see the point in suggesting that I should have explained it differently.

• I also would like to point out that Anthony didn’t disagree with me when I said it and accepted that assumption. When I can, I’m going to use the arguments that Qiaochu_Yuan made and go back to talk with him to see if he will update as well.

• I responded with truth trumps happiness, and that believing in the dragon would force you to believe a false belief, which is not worth the amount of happiness received by believing it.

You miss the point. If you say that the belief in the dragon is false, then you are saying that it’s falsifiable. It is bad to confuse claims that aren’t falsifiable with claims that are false. The two are very different.

The important thing isn’t to shun non-falsifiable beliefs. The important thing is to know which of your beliefs are falsifiable and which aren’t.

• The important thing isn’t to shun non-falsifiable beliefs. The important thing is to know which of your beliefs are falsifiable and which aren’t.

I thought a belief that isn’t even in-principle falsifiable was essentially a floating belief not entangled with reality, about something epiphenomenal that you couldn’t statistically ever have correctly guessed? Like, say, zombies or dragons in garages?

• The issue is with the mode of “shunning”: a meaningless belief shouldn’t be seen as false, it should be seen as meaningless. The opposite of a meaningless belief is not true.

(Also, “unfalsifiable”, narrowly construed, is not the same thing as meaningless. There might be theoretical conclusions that are morally relevant, but can’t be tested other than by examining the theoretical argument.)

• Ah, thanks, all good points. Guess I was lumping together the whole unfalsifiability + meaninglessness cluster/region.

Likewise, when I thought “the opposite of a meaningless belief”, it turns out I was really thinking “the opposite of the implied assumption that this belief is meaningful”, which is obviously true if the belief is known to be meaningless… (because IME that’s what arguments usually end up being about)

• I thought a belief that isn’t even in-principle falsifiable was essentially a floating belief not entangled with reality, about something epiphenomenal that you couldn’t statistically ever have correctly guessed?

There are statements that are neither correct nor incorrect. “A: This statement is false” would be one example.

Another statement would be “B: I know that this statement is false.” From my own perspective, A and B are both statements to which I can’t attach the label true or false. For me it would be a mistake to believe that A or B are true, or that they are false.

There’s also another class of beliefs: you have a bunch of beliefs that you learned when you were three years old and younger, about how you have a mind and how others have minds. You believe that there’s something that can be meaningfully called “you” that can be happy or sad. You believe that you are a worthwhile human being whose life has meaning.

Those beliefs are central to acting as a sane human being in the world, but they might not be falsifiably true. It is very difficult to go after the beliefs that you learned in your first three years of life, as they are deeply ingrained in the way you deal with the world.

Someone who believes that their life has meaning usually can’t give you a p-value for that claim. It’s a belief that they hold for their own emotional health. They don’t really need to examine that belief critically. It’s okay to hold beliefs that way if you know that you do.

• You can generally throw unfalsifiable beliefs into your utility function, but you might consider this intellectually dishonest.

As a quick analogy, a solipsist can still care about other people.