Year 2 Computer Science student
find me anywhere in linktr.ee/saviomak
I just stumbled on this website: https://notes.andymatuschak.org/About_these_notes It has a similar UI but for Obsidian-like linked notes. The UI seems pretty good.
Sorry, I don’t feel like completely understanding your POV is worth the time. But I did read your reply 2-3 times, in roughly the same order as your writing.
Yes, so if you observe no sabotage, then you do update about the existence of a fifth column that would have, with some probability, sabotaged (one of infinitely many possibilities). But you don’t update about the existence of a fifth column that doesn’t sabotage, or wouldn’t have sabotaged YET, which are also infinitely many possibilities.
I’m not sure why infinity matters here; many things have infinite possibilities (like any continuous random variable) and you can still apply a rough estimate on the probability distribution.
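The asymmetry above can be sketched numerically. All the numbers here are made up for illustration: three hypotheses, where an "active" fifth column would probably have sabotaged by now, a "dormant" one would not have yet, and "none" means there is no fifth column.

```python
# Hypothetical priors and likelihoods (illustrative only).
priors = {"active": 0.1, "dormant": 0.1, "none": 0.8}
# P(no sabotage observed so far | hypothesis)
p_no_sabotage = {"active": 0.2, "dormant": 1.0, "none": 1.0}

# Bayes: P(H | no sabotage) is proportional to P(no sabotage | H) * P(H)
unnorm = {h: priors[h] * p_no_sabotage[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: unnorm[h] / z for h in unnorm}

# Observing no sabotage lowers P(active), but the ratio between
# "dormant" and "none" is untouched: the evidence cannot distinguish
# hypotheses that predict the same (non-)observation.
```

Running this, `posterior["active"]` drops below its prior, while `posterior["dormant"] / posterior["none"]` stays exactly at the prior ratio, which is the "you don’t update about the fifth column that wouldn’t have sabotaged yet" point.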
I guess it’s a general failure of Bayesian reasoning. You can’t update 1 confidence beliefs, you can’t update 0 confidence beliefs, and you can’t update undefined beliefs.
I think this is an argument similar to the infinite recursion of “where do priors come from?” But Bayesian updates usually produce a better estimate than your prior (and always better than your prior if you can do perfect updates, though that’s impossible), and you can use many methods to guesstimate a prior distribution.
You have a pretty good model of what might cause the sun to rise tomorrow, but no idea, complete uncertainty (not 0 with certainty, nor 1 with certainty, nor 50/50 uncertainty, just completely undefined certainty) about what would make the sun NOT rise tomorrow, so you can’t (rationally) Bayesian reason about it. You can bet on it, but you can’t rationally hold a belief about it.
Unknown unknowns are indeed a thing. You can’t completely rationally Bayesian reason about them, but that doesn’t mean you can’t try. Eliezer never said you can become a perfect Bayesian reasoner either; he said you can attempt to reason better, and strive to approach Bayesian reasoning.
Relatedly, in-line private feedback. I saw a really good design for reporting typos here.
To the four people who picked 37 and thought there was a 5% chance other people would also choose it, well played.
Wow, that’s really a replicable phenomenon
Threads are pretty good; most help channels should probably be a forum (or 1 forum + 1 channel). Discord threads do have a significant drawback: they lower visibility by a lot, and people don’t like to write things that nobody ever sees.
^ Forum
I didn’t read either link, but you can write whatever you want on LessWrong! While most posts you see are very high quality, this is because there is a distinction between frontpage posts (promoted by mods) and personal blogposts (the default). See Site Guide: Personal Blogposts vs Frontpage Posts.
And yes some people do publish blogposts on LessWrong, jefftk being one that I follow.
FAQ: What can I post on LessWrong?
Posts on practically any topic are welcomed on LessWrong. I (and others on the team) feel it is important that members are able to “bring their entire selves” to LessWrong and are able to share all their thoughts, ideas, and experiences without fearing whether they are “on topic” for LessWrong. Rationality is not restricted to only specific domains of one’s life and neither should LessWrong be. [...]
I tend to think of “keep my identity small” as “keep my attachments to identity dimensions weak”.
Very much agree.
suggestions:
Duplicate this to the open thread to increase visibility
I don’t know your exact implementation for forming the ranked list, but I worry that if you (for example) simply sort from low likelihood to high likelihood, it encourages people to only submit very low probability predictions.
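To make the worry concrete, here is a toy sketch (the data and the alternative ranking are my own illustration, not from the post): sorting purely by stated probability hands the top spots to whoever states the smallest numbers, whereas ranking resolved predictions by a proper scoring rule like the Brier score rewards calibration instead of extremity.

```python
# Toy predictions: (predictor, stated probability of the event).
predictions = [("A", 0.30), ("B", 0.05), ("C", 0.001)]  # C games the ranking

# Naive ranking: ascending stated probability. C wins by claiming 0.1%.
by_probability = sorted(predictions, key=lambda p: p[1])

def brier(stated_p, outcome):
    """Brier score: squared error vs outcome (1 = happened, 0 = didn't).
    Lower is better; it penalizes confident wrong predictions."""
    return (stated_p - outcome) ** 2

# Hypothetical resolutions: A's and C's events happened, B's didn't.
outcomes = {"A": 1, "B": 0, "C": 1}
by_score = sorted(predictions, key=lambda p: brier(p[1], outcomes[p[0]]))
```

Under the naive sort, C tops the list just for stating 0.001; once the events resolve, the Brier ranking puts well-calibrated B first and puts C last, since C’s “impossible” event actually happened.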
I often have an ugh feeling towards reading long comments.
Posts are usually well written, but long comments are usually rambly, even the highest-karma ones. It takes a lot of effort to read the comments on top of reading the post, and the payoff is often small.
But for multiple reasons, I still feel an obligation to read at least some comments, and ugh.
Possible solutions:
1. This is my problem and I should find a way to stop feeling ugh
2. Have some way to easily read a summary of long comments (AI- or author-generated)
3. People should write shorter comments on average
You’d need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.
Modelling humans as Bayesian agents seems wrong.
For humans, I think the problem usually isn’t the number of arguments, or the number of angles from which you attack the problem, but whether you have hit the few significant cruxes of that person. This is especially because humans are quite far from perfect Bayesians. For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn’t exist), usually people just have a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, then you can convince them. It is very, very hard to know which arguments will hit those cruxes, though, which is why one of the viable strategies is to keep throwing arguments until one of them works.
(Also, unlike convincing Bayesian agents, where you can argue for W->X, X->Y, Y->Z in any order, with humans you sometimes need to present arguments in the correct order.)
I found LW’s font size to be a little bit small but I have managed to get used to it. After reading your message I think I will try going to 110%, thanks. (170% is too large; I feel like I’m reading on my phone in landscape.)
I cannot bet in the preview multi-binary markets (also clicking “See 2 more answers” doesn’t work)
And then the popup closes itself and I am not logged in.
Why isn’t this kinda cool and worth participating in?
I wrote two comments about why people don’t read your post, but as I was betting I realized two more problems with the markets:
(Not your fault) The Manifold betting integration kind of sucks. Clicking “See 2 more answers” does nothing, and the options are ordered by percentage.
There isn’t enough liquidity in your markets. It makes betting difficult because even M5 increments move the probability too much. idk, maybe buy some mana to subsidize your markets? It would also make people who see your market from Manifold more interested in betting, since they would have more to gain from the prediction.
The title doesn’t set good expectations about the contents. If I am a person drawn in by “Please Bet On My Quantified Self Decision Markets”, I want to bet. I won’t expect to (and shouldn’t be expected to) read all your lengthy experimental details. It took a while for me to find the markets.
The quote was also so long that I would have stopped reading if I wasn’t trying to analyse your post.
The quote is also out of context: from the first paragraph I was very confused about what the author was trying to say. Because I was skimming, I didn’t really understand the quote until the market section.
> Fortunately, there’s a good (and well-known) alternative

alternative to what?

> , which is to randomize decisions sometimes, at random

yeah that makes sense, but how does randomization relate to prediction markets?

> . You tell people: “I will roll a 20-sided die. If it comes up 1-19, everyone gets their money back and I do what I want

what is “I do what I want”?

> . If it comes up 20, the bets activate and I decide what to do using a coinflip.”

ok, so this is about a bet, but then why a coinflip??
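For what it’s worth, the die-roll scheme in the quoted passage can be sketched as below (the function and option names are mine, not the author’s; the mechanism is as quoted: 19/20 of the time bets are refunded and the author chooses freely, 1/20 of the time bets pay out and the choice is made by coinflip, so bettors are predicting an outcome the author cannot steer):

```python
import random

def run_decision_market_round():
    """Sketch of the quoted d20 scheme. Names are hypothetical."""
    roll = random.randint(1, 20)
    if roll <= 19:
        # Bets are refunded; the author decides however they like.
        return {"bets": "refunded", "decision": "author's free choice"}
    # Bets activate; the decision is delegated to a fair coinflip,
    # so the market's payout cannot be gamed by the decision-maker.
    decision = "option A" if random.random() < 0.5 else "option B"
    return {"bets": "activated", "decision": decision}
```

As I read it, the coinflip on the activation branch is what keeps the bets about the world rather than about the author’s whims, though the quote alone doesn’t make that motivation clear.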
@Raemon Is this intentionally unpinned?
For Google Forms, if the question is not required, you can click the same radio button twice to cancel the selection
I love this post. I needed the reminder that sometimes I should spend more time explaining things to people.
GitHub is known to occasionally be unreachable in China. In general I think people on LW should be able to figure out VPNs