I’m not sure I get it (nor the previous post). Can you steelman (or point out a comment that shows) the point of view you’re arguing against? Even for a perfect agent with well-defined goals, it seems obvious that true knowledge which does not approach completeness (that is, there are still lots more unknown things about the universe) can lead to severely sub-optimal choices and future experiences. It seems even clearer that perverse subsets of the truth can be found which will lead an otherwise-rational agent to incorrect action. For such imperfect agents as we are, with such small access to truth, I just don’t understand the debate.
Your examples don’t help me understand, as it’s not clear what updates the participants are making toward true beliefs. Specifically:
1 - There’s no truth value in the description of Roko’s Basilisk. I’d argue that learning some actual truth (about whether such a thing exists in what percentage of her future timelines) could benefit her.
2 - Wow. Research does not always lead to truth, especially if you don’t trust your researchers. Seems likely that saving the money for future use is best for all purposes. Spending some of it to investigate your research team might be rational, though.
3 - This is a good example, and it can help narrow down the claim, if there is a claim to narrow down.
4 - I’d almost certainly open it. I don’t think this generalizes to all humans, but I’m pretty smart and I think I’d benefit by understanding a little more about “psychological damage” and infohazard mechanisms.
5 - Vaco’s beliefs about his discovery are outlandish enough that he’s simply wrong to be confident. Demonstrating it to a respected physicist and delegating the decision to someone more sane seems the obvious answer.
6 - I think there are a lot of ways to get information that I’d try before pressing the button, but I’d likely press it before I die of thirst.
I want to express schadenfreude that Dagon would open the folder in #4. I would also like to note that many people have pressed the button in #6 out of sheer boredom.
I should probably admit that my planned behavior for #4 is mostly hubris, and as such I forgive you your schadenfreude. I deserve what I get (including the sweet, sweet knowledge, of course).
But this hubris _is_ based on additional assumptions and knowledge about the universe, including a fair bit about infohazards and “psychological damage”, which make me believe the threat is much less than stated. This points to a different aspect of the problematic thesis under consideration: these examples badly lack the information that would let one judge how much truth is achievable for what amount of risk.
Thank you for writing this. Yeah, it appears that we are talking past each other (particularly in my debate with Jimmy). I was going to write another post to try and clarify this debate, but decided against it for five reasons:
1. I already promised Isusr I would make a different (and hard-to-write) post this week, and I don’t want to spam out posts.
2. The downvotes are telling me this is clearly a controversial, community-splitting topic, and I’ve been making too many of those lately.
3. I’ve already made two posts on this topic, so I’m starting to get sick of it.
4. While my English has improved remarkably these past few years, it is apparently not yet at the level where I can discuss contemporary philosophy without risking a failure to communicate effectively with my interlocutor (this might sometimes be the interlocutor’s fault, but it has happened twice in two weeks, so I’m giving myself at the very least partial blame).
5. I have exams right now, so I should really be studying more for those.
Maybe I’ll write one in a couple of months if no one else has made a post on it by then.