Interested in math, Game Theory, etc.
Human values, or things humans pretend are values?
after each time?
for example, STEM AI[.]
Assuming that was the end of that sentence.
I am sympathetic to the view that agents that have no human models will find it very difficult to be deceptive.
Hiding information seems like an effective general move. How effective it is without such a model remains to be seen.
I’d much prefer to [not] have to do things like that.
Why not do polls?
Want to increase completion rates of your survey? Make it shorter.
Why not send out ‘surveys’ to a large audience which have a random subset of questions included, randomized separately for each person?
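A minimal sketch of that idea (the question pool, subset size, and function names here are all illustrative assumptions): each respondent gets their own independently randomized subset, so every question still gets coverage across a large audience while each individual survey stays short.

```python
import random

ALL_QUESTIONS = [f"Q{i}" for i in range(1, 21)]  # hypothetical pool of 20 questions
SUBSET_SIZE = 5                                  # each respondent sees only 5

def survey_for(respondent_id):
    """Return an independently randomized question subset for one respondent."""
    # Seeding on the respondent id keeps each person's subset stable
    # if the survey is regenerated, while staying independent across people.
    rng = random.Random(respondent_id)
    return rng.sample(ALL_QUESTIONS, SUBSET_SIZE)
```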
Closure of schools. There’s a mountain of evidence that taking kids out of school is harmful. It’s not just the loss of education—although that doesn’t help—but also the loss of socialisation.
What if socialization is relative?
(One of those weirdos and misfits who got hired turned out to have said some
The place-of macro is not allowed to just compute where a place would be if it existed. The macro must also save our data to the place [if] the place is not already populated.
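As a rough analogy in Python (the thread concerns a Lisp-style place-of macro; this dict-based sketch and its names are purely illustrative): merely computing where the place would be is not enough, because the place has to actually exist for later reads and writes to hit it.

```python
def locate_only(table, key):
    # Only computes what's at the place *if it existed*; doesn't create it.
    return table.get(key)

def place_of(table, key, default=None):
    # Also saves our data to the place if it is not already populated,
    # so the place genuinely exists afterward (cf. dict.setdefault).
    if key not in table:
        table[key] = default
    return table[key]
```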
If you pretend to be a good person so others will reward you, and you don’t get rewarded, then [you] will become cynical.
Where are the footnotes?
As an example, I think it should be possible to learn to use a source of randomness in rock-paper-scissors against someone who can perfectly predict your decision, but not the extra randomness.
In order to do that, you have to think of doing that. (Seeing randomness might be hard—seeing ‘I have information I don’t think they have, and I don’t think they can read minds, so they can’t predict this’ makes more sense intuitively.) In practice, I don’t think people conduct exploration like this.
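A toy simulation of that setup (the opponent's strategy here is my own illustrative assumption): the opponent can model my entire move history, which would crush any deterministic habit, but it cannot see my random source, so uniformly random play still wins about a third of the rounds.

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER_MOVE = {v: k for k, v in BEATS.items()}  # move -> what beats it

def simulate(rounds=30_000, seed=0):
    """I play uniformly at random; the opponent sees my history but not my dice."""
    rng = random.Random(seed)
    seen = Counter()   # opponent's running model of my play
    tally = Counter()
    for _ in range(rounds):
        mine = rng.choice(MOVES)  # hidden random source
        # Opponent counters my historically most common move.
        guess = seen.most_common(1)[0][0] if seen else "rock"
        theirs = COUNTER_MOVE[guess]
        if mine == theirs:
            tally["tie"] += 1
        elif BEATS[mine] == theirs:
            tally["win"] += 1
        else:
            tally["loss"] += 1
        seen[mine] += 1
    return tally
```

Since my move is independent of the opponent's, each outcome lands near 1/3, which is the best any strategy can guarantee here.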
PCDT, faced with the possibility of encountering a Newcomblike problem at some point,
Similarly, I think a lot of agents consider possibilities after they encounter them at least once. This might help solve the cost of simulation/computation.
Radical Probabilism and InfraBayes are plausibly two orthogonal dimensions of generalization for rationality. Ultimately we want to generalize in both directions,
I’m glad this was highlighted.
(The point of the “kingmaker” mechanism is to incentivize rhetoric from both sides to be less extreme.)
What do you do if both defect?
One party wants basic health insurance to be governed by legislation, the other by the free market, but they’re both pretty similar ideas. There is no room for a center party because there is no space between the two parties, regardless of how angry they are at each other.
Enable or enforce price transparency in healthcare. Seems easy to appeal to both sides (whether or not implementation is simple).
Do you require only that an argument exists, or do you require that the agent recognizes the argument, or something in-between?
The second one, I think. The epiphany is sometimes characterized by frustration: ‘Why didn’t I think of that sooner?’
The optimal chess game (assuming it’s unique) might proceed from the rules, but we might never know it. Even if I have the algorithm (say, in pseudocode):
If I don’t have it in code, I might not run it.
If I have it in code, but don’t have the compute (or sufficiently efficient techniques), I might not find out what happens when I run it for long enough.
If I have the code, and the compute, then it’s just a matter of running it.* But do I get around to it?
Understanding implication isn’t usually as simple as I made it out to be above. People can work hard on a problem and not find the answer, for a lot of reasons, even if they have everything they need to know to solve it: they also have a lot of other information, and before they have the answer, they don’t know what is, and what isn’t, relevant.
In other words, where implication is trivial and fast, reflection may be trivial and fast. If not...
The proof I never find does not move me.
*After getting the right version of the programming language downloaded, and working properly, just to do this one thing.
I think Open Threads are kind of meant as being somewhat lower stakes. If you’ve got an idea and want to gauge interest, or get some feedback, you can try posting here.* (Though keep in mind, some times get less traffic than others, and a lack of response might mean you hit a low traffic time.)
*If you don’t know how to do Shortform posts, or find something*, someone here will know how to do it.
*Like the wiki, or arbital, or posts that are referenced, but aren’t on the website.
Self-sacrifice is not a virtue.
(Might still have the same problems though.)
(Note: in aspects of life where you’re impulsive, don’t introspect enough, or have poor self discipline, this post is probably advice in the wrong direction.)
Go do something you wouldn’t normally do :)
This post could almost be read as ‘Make new habits’ as much as ‘Break your habits’, though it’s more focused on easing people into it.
The person who looks and says “I only wrote 100 words last hour?!??!” kind of reminds me of the investor checking their stock prices every day.
For this person three months or six months or a year might be a better time frame for checking how they’re doing.
If this is, like, established fact or something... I did not know this, and I understand why the hypothetical person was also unaware of it.
Also, I wanted to say that I know many people who really came into their own in their mid-to-late thirties. I think a lot of people just start getting their life into order by that time, so I’m also not sure how much weight to give your personal experiences in this area.
Yes. But since I don’t expect to see an RCT anytime soon*, if anyone—you (Dustin) or the OP (Gordon)**—wrote posts about ‘things that improved my life’, I’d be interested to see those posts, and read them while keeping in mind that they’re not (necessarily) literal laws of physics, and that different things might work for different people—especially when things are as vague as ‘keep your identity small’ and ‘don’t force that’. (How small is small? I don’t think I’ve seen ‘Make your identity big’ (and I won’t write it, because I don’t know how to make it bigger.))
*If you’ve heard of something, let me know.
Consequentialism is morally correct, but virtue ethics is what’s most effective, and deontology is what the virtuous person would use.
Consequentialism is right because it’s not about morality. (But also might be wrong, as a description, when people don’t do things for a reason, like habit.)
B: Why do you play chess?
A: To have fun. And to beat you.
If this were true, then simple belief in consequentialism would imply reflective belief in virtue ethics.
Truth aside, there are issues with the implication part. Will people reach the conclusion? There are a lot of math problems where the answer is a consequence of the properties of numbers. Does that mean you’ll know the answer some time before you die? You might be able to pick out a given one where you will find the answer before you die, if you take the time to solve it. Ethics, though, doesn’t seem to have the same guarantees, especially not around the correctness of general theories.
However, you can justifiably trust a probability distribution whose description includes running an accurate prime factorization algorithm.
That’s not a probability distribution, that’s a flowchart that terminates in “Yes” and “No”.
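To make the objection concrete (a toy sketch; the function names are mine): once the description says to actually run the algorithm, the ‘probability’ it assigns is always exactly 0 or 1, a deterministic yes/no rather than a spread of uncertainty.

```python
def trial_division(n):
    """Deterministic prime factorization by trial division (n >= 2)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def p_composite(n):
    """A 'distribution' whose description includes running the algorithm:
    it can only ever output 0.0 or 1.0 -- a flowchart ending in Yes/No."""
    return 1.0 if len(trial_division(n)) > 1 else 0.0
```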
For example, do you wish you had more of an affordance to lie? Probably not, right?
If something comes to mind as an option (an action to take), or as a possibility (to consider), maybe it’s in your mind for a reason, and it might be useful to understand why. For example:
An affordance is a lot like an “open loop” in the Getting Things Done sense. If this is true, then if you have your phone in your pocket, the possibility of taking it out to check something takes some sliver of subconscious attention. On this model, you can increase your attention by removing things like this.
Maybe your attention goes to the phone because you have a habit involving that phone. Theories involving dopamine/skinner boxes/etc. aside, if doing a thing ‘makes you more likely to do it’, doing a thing less may be required for you to ‘do it less’, and feel the urge to do it less.
Get too caught up in a climber’s personal story about why they climbed Mt. Everest, and you might forget that, statistically, a whole lot of whether-someone-climbs-Mt.-Everest is probably explained by whether they encountered situations which created the affordance.
The story may be useful to get an idea of what those situations (“which created the affordance”) are, or how they might be encountered.
Note to the future: All links are added automatically to the Internet Archive. In case of link rot, go there and input the dead link.
YES! Also, thanks for the newsletter/these posts.