Year 2 Computer Science student
find me anywhere via linktr.ee/saviomak
You may be interested to hear that there are real pet owners doing this nowadays. https://www.lesswrong.com/posts/zbqLuTgTCu365MNu9/your-dog-is-even-smarter-than-you-think
Relatedly: in-line private feedback. I saw a really good design for reporting typos here.
Why isn’t this kinda cool and worth participating in?
I wrote two comments about why people don’t read your post, but as I was betting I noticed two more problems with the markets:
(Not your fault) The Manifold betting integration kind of sucks. Clicking “See 2 more answers” does nothing, and the options are ordered by percentage.
There isn’t enough liquidity in your markets. It makes betting difficult because even M5 increments change the probability too much. idk, maybe buy some mana to subsidize your markets? That would also make people who see your market from Manifold more interested in betting, since they would have more to gain from their predictions.
The title doesn’t set good expectations about the contents. If I am a person interested in “Please Bet On My Quantified Self Decision Markets”, I want to bet. I won’t expect to (and shouldn’t be expected to) read all your lengthy experimental details. It took a while for me to find the markets.
The actual quote was also so long that I would have stopped reading if I weren’t trying to analyse your post.
The quote is also out of context: from the first paragraph alone I was very confused about what the author was trying to say. Because I was skimming, I didn’t really understand the quote until the market section.
Quoted passage, with my reactions as I read it:

“Fortunately, there’s a good (and well-known) alternative” (alternative to what?)
“... which is to randomize decisions sometimes, at random” (yeah, that makes sense, but how does randomization relate to prediction markets?)
“... You tell people: ‘I will roll a 20-sided die. If it comes up 1-19, everyone gets their money back and I do what I want’” (what is “I do what I want”?)
“... ‘If it comes up 20, the bets activate and I decide what to do using a coinflip.’” (ok, so this is about a bet, but then why a coin flip?)
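To check my own reading of the quoted mechanism, here is a minimal sketch of the die-roll rule as I understand it; the function and option names are mine, not the author’s, and the comments reflect my guess at why the final decision is randomized.

```python
import random

def randomized_decision(options):
    """Toy model of the die-roll rule quoted above (my reading of it).

    Returns (whether the bets activate, how the decision is made).
    The function and option names are illustrative, not from the post.
    """
    roll = random.randint(1, 20)        # roll a 20-sided die
    if roll <= 19:
        # 19/20 of the time: everyone gets their money back and the
        # author decides however they personally prefer
        return False, "author decides freely"
    # 1/20 of the time: the bets stay active and the decision itself is
    # made at random (the "coinflip"), which I think is what lets the
    # market prices be read as predictions about each option
    return True, random.choice(options)

# e.g. two candidate decisions being compared by the market
print(randomized_decision(["option A", "option B"]))
```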
Yes, but some estimates are clearly false, while your examples are estimates that may be true or may be false.
I didn’t read either link, but you can write whatever you want on LessWrong! While most posts you see are very high quality, this is because there is a distinction between frontpage posts (promoted by mods) and personal blogposts (the default). See Site Guide: Personal Blogposts vs Frontpage Posts.
And yes some people do publish blogposts on LessWrong, jefftk being one that I follow.
FAQ: What can I post on LessWrong?
Posts on practically any topic are welcomed on LessWrong. I (and others on the team) feel it is important that members are able to “bring their entire selves” to LessWrong and are able to share all their thoughts, ideas, and experiences without fearing whether they are “on topic” for LessWrong. Rationality is not restricted to only specific domains of one’s life and neither should LessWrong be. [...]
For Google Forms, if the question is not required, you can click the same radio button twice to cancel the selection.
I love this post. I needed the reminder that sometimes I should spend more time explaining things to people.
Is there a way to find all the posts that you have strong upvoted? I bet there is a hidden query that I can use on the All Posts page but I can’t find it easily.
If it actually doesn’t exist then this comment acts as my vote on getting it implemented.
I disagree that “we need to weigh the cost of everything” is an important thing to mention in this post. Weighing the cost of everything is very important, but it is a topic of its own; it is a whole different skill to hone (I think Duncan actually wrote a post about this in the CFAR handbook).
osu! should be written in lowercase (see: a tweet from osu!).
(I am currently on the path of learning how values actually work and figuring out what I should really do.)
It has been a few days since I read this post, so I may be misrepresenting you, but I think this post makes a mistake similar to people who think that arguing with another person to change their mind is meaningless given that we don’t have free will, because given a deterministic future, that person will just automatically change their mind (or not). But it doesn’t work like that, because the act of arguing is part of the deterministic process that eventually causes the person to change their mind. (I could not find the exact EY post this appeared in: https://www.lesswrong.com/tag/free-will-solution) Similarly, even though we can let our values drift freely, controlling how the values drift is also part of the drifting process.
I think people (myself included) really underestimate this rather trivial statement: you don’t really learn about something if you don’t spend time doing it or thinking about it. People even measure mastery by hours practiced rather than years practiced, yet I still couldn’t engrave this idea deep enough into my mind.
I currently don’t have much written evidence for why I think people underestimate this fact, but I think it is true. Below are some things I have changed my mind about or realised after noticing this fact.
cached thoughts, on yourself
Personally, I am a huge procrastinator and I can really flinch away from doing something even when it is weeks overdue. I was trying out BaaS and Beeminder to build up some good habits, but even with the tools I still sometimes have procrastination episodes. Only after quite a lot of these procrastination cycles did I realise that I had basically completely wasted the time I spent procrastinating, and that I was overall actually worse off than before the procrastination episode started.
Therefore, what I concluded is that you should expect yourself to be exactly the same if you haven’t put in the time to think about a topic, especially high-level topics like math (I acknowledge that motor skills require less thinking). It is mere wishful thinking to expect to be a different person than you were last time; you don’t just learn a new theorem automatically.
cached thoughts, on modelling humans
There is a bias where you assume other people are at around the same level as you. It is obviously false, but quite hard to internalize. People really don’t automatically improve themselves either; there must be a push for that to happen. You can probably also see many people who have simply stopped changing themselves.
there’s a very limited amount you can learn by just reading a few summaries:
Some texts are better than others, but even if you only read the best text on the topic you are trying to learn, with the text paraphrased by a magical AI to maximize your learning efficiency, there is still a maximum bandwidth on learning. Don’t expect to replicate in just a few hours what other people are able to do. Though I should acknowledge that there are actually very short texts that can change your mind greatly; I suspect growth mindset is one of them, but I’m not sure.
Alternatively, if the questions you ask are specific enough, then you may be able to somewhat master that concept in a short amount of time. This seems to be what the “Learn in <very short amount of time>” courses out there do.
[Draft] It is really hard to communicate the level/strength of basically anything on a sliding scale, especially things that would not make intuitive sense even if you stated a percentage. One recent example I encountered is expressing what is, in my mind, the optimal tradeoff between reading quickly and thinking deeply to achieve the best learning efficiency.
I’m not sure what the best way is to deal with the above example, and with other situations where a percentage doesn’t make sense.
But where a percentage does make sense, there are still two annoying problems: 1. sometimes you don’t have (haven’t generated) a fixed percentage in your mind; 2. you still need to express the uncertainty of that percentage (vague example: 70% ± 20% vs 70% ± 5%).
I think it could be worth establishing common knowledge of which percentages the uncertainty-hinting keywords represent. Say, “seems like” = 75% ± 30% (a toy sketch of this kind of mapping is below).
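As a toy illustration of that kind of shared mapping (all the numbers are made up; “seems like” is the only phrase taken from the text above):

```python
# Hypothetical mapping from uncertainty-hinting keywords to
# (point estimate, uncertainty) pairs; the numbers are made up.
HEDGE_WORDS = {
    "seems like":    (0.75, 0.30),
    "probably":      (0.70, 0.20),
    "almost surely": (0.95, 0.05),
}

def read_hedge(phrase):
    """Render a hedging phrase as 'p% ± u%'."""
    p, u = HEDGE_WORDS[phrase]
    return f"{p:.0%} ± {u:.0%}"

print(read_hedge("seems like"))   # -> 75% ± 30%
```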
Unsorted Ideas Below
A lot of people don’t seem to realise this problem because their models are too black and white.
It is more difficult to communicate that the other person is being slightly too confident than to tell them that they are way too overconfident.
Clarifications:
What I had in mind when I said “people” was myself and the average non-LW friends around me.
“Worthless” is a bad word choice; I just mean that there are better things to read.
Additionally:
I also think I have a tendency to try to read everything in a textbook, even when it is quite low in information density, with many filler stories or sentences that serve only as connective tissue. I should probably be trying to skip sentences, paragraphs and sections where I have sufficient confidence that either 1. I have already learned the material and don’t need a refresher, or 2. it is not important for me (filler material or unimportant knowledge).
I will try to make a more quantitative metric, but I don’t have one right now, just intuitions.
I am extremely confused by your comment, probably due to my own lack of linguistic knowledge.
(This whole reply should be seen as a call for help)
What I got is that fabricated options come from people “playing with word salad to form propositions” without fully understanding the implications of the words involved.
(I tried to generate an example of “propositions derived using syllogisms over syntactic or semantic categories”, but I am way too confused to write anything that makes sense)
Here are 2 questions: how does your model differ from/relate to johnswentworth’s model? Is john’s model a superset of yours? My understanding is that johnswentworth’s model says our algorithm relaxes some constraints, while yours specifically says that we relax the “true meaning” of the words (so the word “water” no longer requires a specific electronic configuration, or the melting/boiling point to be exactly 0/100; “water” now just means something that feels like water and is transparent).
Only responding to this part.
Also, for more complicated problems such as following a distribution around in dynamic system: You also have to have a model of what the system is doing—that is also an assumption, not a certainty!
I’m sure you have multiple possible models of the system. If you have accounted for the possibility that your model is incorrect, then it is not an assumption; it is something that can be approximated as a distribution of confidence (rough sketch below).
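A minimal sketch of what I mean, with made-up models and weights: keep several candidate models of the system, weight them by how much you trust each, and the “assumption” becomes a confidence-weighted distribution over predictions.

```python
import random

def model_a(state):
    # hypothetical model 1: the quantity decays toward zero
    return 0.9 * state

def model_b(state):
    # hypothetical model 2: the quantity drifts upward
    return state + 1.0

models = [model_a, model_b]
confidence = [0.7, 0.3]   # how much I trust each model; sums to 1

def expected_prediction(state):
    """Confidence-weighted average prediction over the candidate models."""
    return sum(w * m(state) for w, m in zip(confidence, models))

def sampled_prediction(state):
    """Sample one model in proportion to confidence, giving a distribution of outcomes."""
    m = random.choices(models, weights=confidence, k=1)[0]
    return m(state)

print(expected_prediction(10.0))   # 0.7 * 9.0 + 0.3 * 11.0 = 9.6
print(sampled_prediction(10.0))
```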
I just stumbled on this website: https://notes.andymatuschak.org/About_these_notes It has a similar UI but for Obsidian-like linked notes. The UI seems pretty good.