Independent alignment researcher
Garrett Baker
Ah I see. Misinterpreted what you were saying in that last Note.
[Question] Why does “deep abstraction” lose its usefulness in the far past and future?
some method of incentivizing novelty / importance
Citation count clearly isn’t a good measure of accuracy, but it’s likely a good measure of importance in a field. So we could run some kind of expected value calculation where the usefulness of a paper is measured by
P(result is true) * (# of citations) - P(result is false) * (# of citations) = (# of citations) * [P(result is true) - P(result is false)]
Edit: where the probabilities are approximated by replication markets. I think this function gives us what we actually want, so optimizing institutions to maximize it seems like a good idea.
Edit: This doesn’t actually capture what we want, since journals could just force everyone to cite the same well-replicated study to maximize its citation count. It’s a good measurement of what we want, but not a great goal, so we shouldn’t optimize institutions to maximize it.
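The expected-value calculation above can be sketched in a few lines of Python. This is only an illustration of the formula, with made-up numbers; the function name and example values are mine, not from any real replication market.

```python
def expected_usefulness(citations: int, p_true: float) -> float:
    """Citation-weighted expected value of a paper's result.

    p_true is the (hypothetical) replication-market probability that
    the result is true; the probability it is false is 1 - p_true.
    Expands as: citations * p_true - citations * (1 - p_true)
              = citations * (p_true - (1 - p_true)).
    """
    p_false = 1.0 - p_true
    return citations * (p_true - p_false)

# A heavily cited but shaky result can score below a modestly cited,
# well-replicated one (illustrative numbers only):
shaky = expected_usefulness(citations=1000, p_true=0.55)   # about 100
solid = expected_usefulness(citations=200, p_true=0.95)    # about 180
print(shaky, solid)
```

Note that this also makes the Goodhart problem from the edit above concrete: the function is strictly increasing in citation count, so any institution graded on it is rewarded for inflating citations to already well-replicated work.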
I tried implementing Tell communication strategies, and the results were surprisingly effective. I have no idea how it never occurred to me to just tell people what I’m thinking, rather than hinting and having them guess what I was thinking, or me guess the answers to questions I have about what they’re thinking.
Edit: although, tbh, I’m assuming a lot less common conceptual knowledge between me and my conversation partners than the examples in the article do.
Could you give a few historical examples where you think the collective intelligence of the parties involved is underestimated as a factor in the outcomes of those parties? It seems to me that most history I’ve read does come right out and say “party X was smarter than party Y”. Examples such as Caesar and Genghis Khan come to mind, as well as Darius the Great, as counterexamples to the trend you describe, in their domestic political maneuvering, administrative skills, and war tactics alike.
Edit: moved to comments, as per grim’s suggestion.
D0TheMath’s answer (which maybe should really be a comment?)
Yeah, sorry. It is more of a comment. Moved to comments section.
I often feel like I have very little to contribute in a given discussion, so I typically don’t comment, but I will comment more, as this post both presents a cool community experiment & has caused me to update my estimate of all feedback’s value upwards. Also, I like posts like this which try to push the community of the forum in a new direction to see if it adds or subtracts value.
My comment challenge: I will comment on all front-page posts that I see & read that are not from AF, unless I see a disrupting decrease in my willingness to read posts.
This makes sense, and I’ve updated the comment to reflect what I meant more accurately. Though I think the improvement is very minor, and your time could likely be spent on more important things than providing marginal improvements to LessWrong comments, I thank you nonetheless.
This algorithm seems like it can be generalized for any human decision algorithm. For instance, I’m usually pretty indecisive while trying to order food, but I’d bet that implementing this algorithm would speed up my decision making immensely, while guaranteeing I’d be selecting the best option available.
Typo:
If you open the transparent envelope then one pound will be deducted from
youyour Muggle bank account and the opaque envelope will have contained nothing. If you never open the transparent
I’ve been vaguely grasping at this concept—that I give too little credence to people who think differently from me—and this was a great crystallization.
This is symbolic experimentation, and it’s worse than doing nothing at all. I can feel as though I’ve explored many ways to optimize my life, when in fact I’ve been accumulating failed attempts to change my habits. The anecdotal opinions I gather from these experiences are worse than sheer ignorance. They’re a bunch of fish stories.
Now that you mention it, this is definitely a problem, at least for me. The times I’ve tried something but haven’t given it a “good try” versus the times I’ve actually followed through with that thing seem to be weighted more similarly than they should be. It’s good to distinguish between these two types of exploration.
A corollary for this realization, assuming this bias is common in the population, is that you should probably ask others how long they tried doing something they’re recommending you do or don’t do.
I’m skeptical about how effective “good tries” can be as a substitute for lock-ins and habit formation. There’s something to be said for having a pre-defined exit condition & goal state you’re attempting to reach, though. In combination with TAPs, peer pressure, and monetary lock-in (using something like Beeminder, or a friend taking collateral and destroying it if you don’t follow through), the addition of a “good try” rule as an evaluation metric for how much you should update as a result of your experiments is probably a good idea.
This advice could be beneficial to a theoretical person who felt the need to talk & hear the points given by everyone they disagreed with, about every point of disagreement, and slightly less extreme versions of this person. I’m thinking about people like Joe Rogan here, who listen to everyone, and seemingly put very little effort into making sure the arguments given by such people are valid.
I, on the other hand, am very averse to discussing fundamental disagreements or reading about why I may be wrong. Such aversion makes it difficult for me to tell when the person I’m talking to is right about a particular topic, and makes me underestimate the benefits of knowing about their position. So I don’t think this advice—that is, the advice about not talking to people you disagree with—is helpful for me, or people like me. Many of the recommendations listed, like turning off background info dumps, using an ad blocker, and (to a lesser extent, admittedly) staying away from political discussions, I already do instinctively & automatically.
A good example of who we should strive to be like is Julia Galef, on her podcast Rationally Speaking. There, she’ll read several books about the topics to be discussed, then talk with her interviewees while keeping the epistemic bar very high: asking about predictions their hypotheses have made in the past, probing unnecessary complexities which don’t seem justified, and generally applying high-quality Bayesian rationality to the points given. She neither shies away from disagreement like I would, nor talks to people with niche ideas for the sake of talking to people with niche ideas like Joe Rogan would.
The Sequences are great, except in my area of expertise, where they are terrible
Can I get an example of a section of The Sequences where someone with the relevant area of expertise would say that it’s terrible?
[Question] Information on time-complexity prior?
The first two levels I am very familiar with in my own reading, but I’ve never consciously done the last compression level. However, when I go through my own Anki cards I will often give the answer in a much more compressed way than how I originally wrote it down, so it’s likely happening at some level during my memorization or reading process.
A minor improvement to the app would be the ability to move around using the mouse. But that’s only because my particular setup requires me to reach over my desk to access my video-conferencing computer’s keyboard, so I’m not sure how much you’d want to prioritize that.
It says something about the quality of Gather Town that that’s the only improvement I can think of.
Maybe it’d help to list a few concrete examples where you think you could’ve made a better decision by paying attention to the news more, places you believe you made a good decision based on news, and places where you made bad decisions based on news. Then figure out what possible strategies you could have used to preserve the good decisions you made, minimizing the bad decisions, and maximizing the good decisions.
To my knowledge, GPT-3 doesn’t store information about its “thought” process, so if GPT-3 is able to explain its own puns, it would necessarily be able to explain similar puns made by people.