For curious people who know a bit of chess: I played a version of bughouse (chess) with 3 boards. The broken rule was that you had to win 2 of the 3 games to win the match. I got annoyed that the middle board had way too much influence, and a lot of the time your middle board stalling its opponent was bad because you couldn’t get pieces. Once I spotted the problem, I played the middle board (as I was the worst chess player) and instructed my teammates to play normal chess while their opponents thought they were playing bughouse (we still lost somehow). After that, they talked about forcing everyone to make a move every minute. This is not what you do. You do not patch holes, you build better dams. Just give the middle board a weight of 2 and make the win condition first to 2 points, like in regular bughouse.
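A minimal sketch of that fix, under my reading of the rule (side boards worth 1 point, the middle board worth 2, the match ending the moment either team reaches 2 points); the board labels and the match_winner helper are my own hypothetical names, not part of any established variant:

```python
# Hypothetical sketch of the proposed 3-board scoring (my reading of the fix):
# side boards score 1 point, the middle board scores 2, and the first team to
# reach 2 points wins the match outright.
BOARD_WEIGHTS = {"left": 1, "middle": 2, "right": 1}
WIN_THRESHOLD = 2

def match_winner(finished_games):
    """finished_games: (board, winning_team) pairs in the order the games end."""
    scores = {"A": 0, "B": 0}
    for board, team in finished_games:
        scores[team] += BOARD_WEIGHTS[board]
        if scores[team] >= WIN_THRESHOLD:
            return team  # the match ends immediately, like regular bughouse
    return None  # nobody has reached 2 points yet

# A single middle-board win decides the match, and so do two side-board wins.
print(match_winner([("left", "A"), ("middle", "B")]))  # -> B
```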
I, for one, do not enjoy playing games like calibrated trivia when the rules are broken. I am a person who often 0%s. The fun in a game, from my point of view, is maximizing the chance of winning (or some other goal, like EV(score)). When you discourage 0%ing, you are saying “we are just doing random actions without necessarily trying to maximize our score”. This ruins the original point of the game, which is to get the players to be as well calibrated as possible.
Additionally, there is a very easy fix: just give 2p points if the player is correct and deduct p^2 points if the player is incorrect. This gives the score an expected value of p^3 + p^2, and gives more meaning to certainty even when it’s under 50%. It doesn’t put enough weight on the calibration part, but a necessary part of trivia IS putting value on being correct. If you really don’t like it, just require players to put at least 50%. If a ruleset has a problem, fixing it will often result in a better ruleset.
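Spelling out that expected value (my own arithmetic, assuming the stated probability p is exactly the player’s chance of being correct):

E[score] = p · 2p + (1 − p) · (−p^2) = 2p^2 − p^2 + p^3 = p^2 + p^3

Under that assumption the expected score is increasing in p, so reporting higher confidence only pays off when it goes together with actually being right more often.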
Please make it clear you are talking about weight. When I finished reading, I thought, “what about the part where he LOSES money?”
I actually thought of this in the sense of statements being partially true: we know Gödel’s incompleteness theorem (most likely you know it better than I do). I’m pretty sure the exact value of BB(10^10) is unprovable (e.g., independent of ZFC). However, if you simulate minds/civilizations/AIs/something and ask them to bet on mathematical theorems (at first with fewer resources, so they don’t just solve them), and then ask them whether they think a certain unprovable statement is true and let them bet on it, you might somehow learn how true an unprovable statement is? I realize this comment is poorly written, but I hope you understand my intuition.
I still think this should not be assumed to be true and used as an argument. If there is a reason that that which can be destroyed by the truth should be, use the reason as an argument instead.
“If you’re not allowed to ask a job candidate whether they’re gay, you’re not allowed to ask them whether they’re a college graduate or not. You can give them all sorts of examinations, you can ask them their high school grades and SAT scores.”
If we’re popping bubbles, I see no reason to keep high school scores, and maybe not even the SAT. There is no reason for your history grade to affect your acceptance to programming-related work, and there is definitely no reason to prefer the people who were liked most by their history teacher. Places of work should test applicants on their own.
“Why did you give our enemies the nuclear codes*? Now our country is going up in flames!”
“Well, the truth is that the code is 50285193, and the code destroyed our country, and that which can be destroyed by the truth should be”
*I don’t know how nuclear codes actually work. I’m giving a counterargument to “that which can be destroyed by the truth should be”
As I see the world, all current meaningful terminal goals are three-dimensional, and saving a life that will not affect the world has zero value compared to the good of preventing x-risks and s-risks (maybe there is a risk I don’t know of).
So saving an American child has more value than saving an African child, and saving a North Korean child has negative value.
If I save two kids from malaria, am I entitled to murder someone?
So no, not close to the same scale as a murderer. When you say “murderer”, the emotional meaning comes to mind, and as a civilization we cannot possibly not hate murderers. I think most readers will get the point by now, so I will stop writing.
“[insert supposedly famous person who may or may not actually be famous here] said [insert something along the lines of “AI is dangerous, how did I not notice until now”]” sounds VERY cherry-picked.
This makes me think that Santa Claus is a good model of religion. In some areas, it is really accurate (I believed that Santa probably existed but is now long dead). Except, for some reason, children stop believing in Santa Claus. Perhaps it is because a child is never “punished” for disbelieving in Santa (in the sense that a tsk at a child about to commit a religious sin is a punishment). It might be because of the way parents “look down” on children who believe in Santa. If someone thinks they know why, please share.
Perhaps there can be a separate vote meaning “ok/nice”, so that there is both an upvote and an “ok/nice” vote.
I think “[insert negative trait here] makes us human” is the more general form.
I had a hard time understanding the metaphor, and still do. I think it’s a valid complaint. Additionally, I think the term “dark side” causes more confusion than it resolves.
When I was 10 years old, I estimated P(my birth religion’s god) at 1/10k. That number has actually gone down (to 1/billion), and it touched 0 at some point because I started believing it DIDN’T make sense to assign probabilities to “impossible” things. P(god) should be higher, as there is either a god or laws of physics. I don’t believe god has much incentive to simulate, or to do anything (intuitively god just does not make sense). So I’d say 1/1000.
Psychic: 1/1k
Global warming: I don’t understand the subject AT ALL so I’m going to go with the average here (80%)
Pandemic: There is a chance the population will grow (so 1 billion would be a smaller fraction of it), and an engineered pandemic seems likely, so 40%
Mars: 2050 is a bit soon, so 30%
AI: Seems VERY likely, 90%.
You are assuming a simulation does not want to die, and this is unclear. The fact that $100 is better than $0 is taken as an axiom because it is part of the problem statement. However, “death is worse than life” (for a simulation!) is not trivial. “Rationality should not be done for rationality’s sake or it ends in a loop”, so posts use money as the thing the rational agents want. You have to assign a financial value to life before saying it is less valuable than $100.
Perhaps I do not understand the meaning of infohazard, but in this scenario, it seems like what you are trying to avoid is not information, but rather player B knowing you have information. I think this can be solved if you take one of the “Omegas” who can predict you, and then the information itself may be seen as harmful.
This can be fixed by changing it to “one who can reliably be expected to speak the truth to people on their side”. Now that I think of it, this should be highlighted.