I took the survey; apparently I get karma for that? :-)
mathnerd314
17 Rules to Make a Definition that Avoids the 37 Ways of Words Being Wrong
A prime number n is a number whose only factors are multiplicative units and n times a multiplicative unit (and these two sets are distinct). Typical examples include 2, 3, 5, 7, and 11. Less-typical examples include −2 and 1+i; these are often excluded from consideration in mathematics.
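As a sanity check of the definition, here is a minimal Python sketch over the ordinary integers, taking 1 and −1 as the multiplicative units (Gaussian integers like 1+i are out of scope for this sketch):

```python
def is_prime(n):
    """Check the definition over the integers, where the units are 1 and -1."""
    units = {1, -1}
    associates = {n, -n}           # n times a unit
    if units & associates:         # the two sets must be distinct
        return False
    divisors = {d for d in range(-abs(n), abs(n) + 1)
                if d != 0 and n % d == 0}
    # Prime iff every factor is a unit or an associate of n.
    return divisors <= units | associates

print([n for n in range(2, 12) if is_prime(n)])  # -> [2, 3, 5, 7, 11]
print(is_prime(-2))  # -> True: -2 is prime under this definition
print(is_prime(1))   # -> False: 1 is a unit, so the two sets coincide
```

Note how the "distinct sets" clause is what excludes units themselves from being prime.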
If you require every word you use to have a definition, and ensure the definitions follow these rules, and then consistently use the words according to their definitions, then it follows that you are using the words correctly and not wrongly.
So I guess those could be the maxims for writing:
know the definition of every word you use
ensure the definitions follow these 17 rules
use words according to their definition
Indeed, but one of Eliezer’s points was that mathematical objects, e.g. the set of prime numbers, don’t need labels. I can write {n : n is prime}
without giving it a name at all, or just call it P.
You can always be wrong. Even when it’s theoretically impossible to be wrong, you can still be wrong.
You missed the context, which is when someone claims “This can’t be wrong.” Rule #1 clearly states that the definition can be wrong. On the other hand, there are different levels of wrongness. Sure, these rules are most likely wrong and incomplete, but they are more correct than having no rules at all. And the reason definitions aren’t the best way to give semantics is that we already have a better semantics, namely the “similarity cluster”. (The map is not the territory, etc.) But forcing someone to give a definition that follows these 17 rules gives you the similarity cluster, and avoids pretty much all of Eliezer’s 37 ways of using words wrongly (see the superscripts!). There might be other ways of using words wrongly, but they’re going to be either obvious or so subtle that nobody can catch them anyway.
As for why I wrote this article, it’s simple: I need definitions of the things on my GTD list (in particular, I need a direct specification of what constitutes a “physical, visible action” for the next-actions list), and I recalled an EY post about definitions which was his 37 ways. But that was all about how to do it wrongly, and one of my tasks is “don’t think negatively”, so I rewrote it. It was and is sitting in my WhatIs:definition zim wiki page. I posted it here to get some commentary and maybe someone checking that I interpreted his points correctly, which I’ve been getting. (Thanks guys! :-))
Indeed, it’s very depressing. I doubt I’ll ever be able to understand other people, but I do have some hope for internal consistency in my usage (so mathnerd314_February2014 writes things that seem comprehensible to mathnerd314_July2020). I’ve collected my early 1990s writings and they all sort of “click” into place, in that I understand them well enough to rewrite them word-for-word. Perhaps by writing down definitions for my words I’ll be able to see how the concepts have evolved over time (or that they haven’t changed).
Well, there’s a tricky thing in mathematics called “the law of excluded middle”. Using the law, you can e.g. prove that a implies b is logically equivalent to (not a) or b. It also lets you do existence proofs by proving it isn’t possible for there to be no examples. So in classical logic every statement is confused with its double negation.
I generally try to use intuitionistic logic though, where a → b is not logically equivalent to anything else and double negations have to be written out. You do have a → ¬¬a, but that only goes one direction and results in a weaker statement. If you look at my other reply with an intuitionistic frame of mind, then you’ll see that the “only” is an implication, with no negation in sight.
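For what it’s worth, the one-directional implication here (double-negation introduction) can be checked constructively, e.g. in Lean, without invoking excluded middle; the converse direction is exactly what would require a classical axiom:

```lean
-- Double-negation introduction is intuitionistically valid:
theorem dni {a : Prop} (h : a) : ¬¬a :=
  fun hna => hna h

-- The converse, ¬¬a → a, is NOT provable constructively;
-- it needs classical reasoning (e.g. Classical.byContradiction).
```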
It sounds like we’re in violent agreement here. I’ve already verified experimentally that writings by mathnerd314_1998 are clear to mathnerd314_2009. My brain doesn’t change that much over time.
Instead, I have two other questions:
1. Can mathnerd314_2014 understand Gunnar_Zarncke_2014 on the same level he understands mathnerd314_1998?
2. If both mathnerd314_2014 and mathnerd314_2020 independently write down definitions, will they be textually different?
My hypothesis is that #1 is “no”, because internal organization of concepts varies dramatically from person to person, and that #2 is “yes”, because people do change over time.
Well, taxation has the threat of violence, in that if you don’t pay your taxes you will eventually be caught and sentenced to jail for tax evasion… hmm, maybe I should do a “The definition of X” series. They should really be wiki pages though, not posts...
So more recently I’ve been using a big 6000-line text file that holds all of my TODOs as well as some URLs. I randomized the order a while ago and now I just go through them. I’ve stalled on that (actually doing things is hard, particularly when they’re vague things like “post story”), so I might go back to feed reading; I experimented a bit with TinyTinyRSS, but Feedly is probably a better choice.
It’s already random; replacing randomness with more randomness doesn’t help except for mixing in new tasks. I went through ~50 tasks today, so it’s not really that bad; just that I feel like some tasks should have more time dedicated. “Is putting animals in captivity an improvement?” is not the sort of question you want to dash off in 2 minutes. (Final answer: list of various animal rights groups).
The real problem is the list keeps growing longer; I’m starting to run into O(n^2) behavior in my text editor. It’s not really designed for handling a FIFO queue. I’ve been staring at TaskWarrior, which might be adapted for doing the things I want.
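The O(n^2) behavior comes from repeatedly deleting tasks from the front of a flat buffer, which is O(n) per deletion; a proper FIFO structure makes each removal O(1). A minimal Python sketch (the task strings are made up):

```python
from collections import deque

# A plain list (or flat text file) makes pop-from-front O(n), so draining
# n tasks is O(n^2); collections.deque pops from the left in O(1).
tasks = deque()
tasks.append("post story")
tasks.append("is putting animals in captivity an improvement?")

while tasks:
    next_action = tasks.popleft()  # O(1) per task, O(n) total
    print(next_action)
```

The same idea applies to any task tool: appending new tasks at one end and consuming from the other should both be constant-time.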
The simple answer is to ask someone else, or better yet a group; if the probability D of any one person being deluded is small, then the probability D^2 or D^4 of two or four people independently sharing the delusion will be negligible. However, delusions are “infectious” (see mass hysteria), so this is not really a good method unless you’re mostly isolated from the main population.
The more complicated answer is to track your beliefs and the evidence for each belief, and then when you get new evidence for a belief, add it to the old evidence and re-evaluate. For example, replacing an old wives’ tale with a peer-reviewed study is (usually) a no-brainer. On the other hand, if you have conflicting peer-reviewed studies, then your confidence in both should decrease and you should go back to the old wives’ tale (which, being old, is probably useful as a belief, regardless of truth value).
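The re-evaluation step can be sketched as a single Bayesian update; the numbers below are made up purely for illustration:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: returns P(belief | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Start from an old wives' tale (a weak prior), then fold in a study
# that is more likely to report this result if the belief is true.
belief = 0.3
belief = update(belief, 0.8, 0.2)
print(round(belief, 3))  # -> 0.632
```

Conflicting studies would just be two updates pulling in opposite directions, which is why confidence in both should drop.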
Finally, the defeatist answer is that you can’t actually distinguish that you are delusional. With the film Shutter Island in mind, I hope you can see that almost nothing is going to shake delusions; you’ll just rationalize them away regardless. If you keep notes on your beliefs, you’ll dismiss them as being written by someone else. People will either pander to your fantasy or be dismissed as crooks. Every day will be a new one, starting over from your deluded beliefs. In such a situation there’s not much hope for change.
For the record, I disagree with “delusional disorders being quite rare”; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are “serious”, but I could fill a book with all of the ways people believe something that isn’t true.
I don’t have experience with those, but I’ll recommend Graphviz as a free (and useful) alternative. See e.g. http://k0s.org/mozilla/workflow.svg
Given replication rates of scientific studies a single study might not be enough.
Enough for what? My question is whether my hair stylist saying “Shaving makes the hair grow back thicker.” is more reliable than http://onlinelibrary.wiley.com/doi/10.1002/ar.1090370405/abstract. In general, the scientists have put more thought into their answer and have conducted actual experiments, so they are more reliable. I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for their product, but in my line of life such studies are rare.
Single studies that go against your intuition are not enough reason to update. Especially if you only read the abstract.
I find that in most cases I simply don’t have an intuition. What’s the population of India? I can’t tell you, I’d have to look it up. In the rare cases where I do have some idea of the answer, I can delve back into my memory and recreate the evidence for that idea, then combine it with the study; the update happens regardless of how much I trust the study. I suppose that a well-written anecdote might beat a low-powered statistical study, but again such cases are rare (more often than not they are studying two different phenomena).
No need to get people to wash their hands before you do a business deal with them.
I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)
Exhibiting symptoms often considered signs of mental illness. For example, this says 38.6% of the general population have hallucinations. This says 40% of the general population had paranoid thoughts. Presumably these groups aren’t exactly the same, so there you go: between 0.5 and 0.8 of the general population. You can probably pull together some more studies with similar results for other symptoms.
Humans are biased to overrate bad human behavior as a cause for mistakes.
If a crocodile bites off your hand, it’s generally your fault. If the hurricane hits your house and kills you, it’s your fault for not evacuating fast enough. In general, most causes are attributed to humans, because that allows actually considering alternatives. If you just attributed everything to, say, God, then it doesn’t give any ideas. I take this a step further: everything is my fault. So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it. My time and ability are limited in scope, so I usually conclude they were too far away to help (space-like separation), but this has given useful results on a few occasions (mostly when something I’m involved in goes wrong).
The decent thing is to orient yourself on whether similar studies replicate.
Not really, since the replication is more likely to fail than the original study (due to inexperience), and is subject to less peer-review scrutiny (because it’s a replication). See http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm. The correct thing to consider is followup work of any kind; for example, if a researcher has a long line of publications all saying the same thing in different experiments, or if it’s widely cited as a building block of someone’s theory, or if there’s a book on it.
Regardless every publish-or-perish paper has an inherent bias to find spectacular results.
Right, people only publish their successes. There are so many failures that it’s not worth mentioning or considering them. But they don’t need to be “spectacular”, just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in “prestigious” journals, which indeed only publish “spectacular” results; looking at only those would give you a biased view, certainly, but as soon as you expand your field of view to “all information everywhere” then that bias (mostly) goes away, and the real problem is finding anything at all.
Let’s say wearing red every day.
So the study there links red to aggression; I don’t want to be aggressive all the time, so why should I wear red all the time? For example, I don’t want a red car because I don’t want to get pulled over by the cops all the time. Similarly for most results; they’re very limited in scope, of the form “if X then Y” or even “X associate with Y”. Many times, Y is irrelevant, so I don’t need to even consider X.
Thinking that those Israeli judges don’t give people parole because they don’t have enough sugar in their blood right before mealtime. Going and giving every judge a candy before hearing every case to make it fair isn’t warranted.
Sure, but if I’m involved with a case then I’ll be sure to try to get it heard after lunchtime, and offer the judge some candy if I can get away with it.
That’s fixable by training Fermi estimates.
You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.
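To make the point concrete with the population-of-India example from earlier: a Fermi estimate still bottoms out in memorized anchor facts, combined by multiplication. The numbers below are rough, hypothetical anchors:

```python
# A Fermi estimate is just memorized anchors multiplied together --
# you still have to remember *something*.
world_population = 8e9   # memorized anchor (approximate)
india_share = 1 / 6      # memorized rough fraction of world population
estimate = world_population * india_share
print(f"{estimate:.1e}")  # roughly 1.3e9
```

Swapping which facts you memorize (totals and fractions instead of raw figures) changes what you store, not whether you store it.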
It’s a reference to the controversy about whether washing your hands primes you to be more moral. It’s an experimental social science result that failed to replicate.
Ah, social science. I need to take more courses in statistics before I can comment… so far I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).
Once upon a time I tried using what I might coin “quicklists”. I took a receipt, turned it over to the back (blank side), and jotted down 5-10 things that I wanted to believe. Then I set a timer for 24 hours and, before that time elapsed, acted as if I believed those things. My experiment was too successful; by the time 24 hours were up I had ended up in a different county, with little recollection of what I’d been doing, and some policemen asking me pointed questions. (I don’t believe any drugs were involved, just sleep deprivation, but I can’t say for certain.)
More recently, I rented and saw the film Memento, which explores these techniques in a fictional setting. The concept of short-term forgetting seemed reasonable, and the techniques the character uses to work around it are easily adapted to real life. My initial test involved printing out a pamphlet with some dentistry material in tiny type (7 12-pt pages shrunk to fit on the front and back of 1 page, folded in quarters) and carrying it with me to my dentist appointment. I was able to discuss most of the things from my pamphlet, and it did seem that the level of conversation was raised, but there were many other variables as well, so it’s hard to quantify the exact effect.
I’m not certain these techniques actually count as “doublethink”, since the contradiction is between my “internal” beliefs and the beliefs I wrote down, but it does allow some exploration of the possibilities beyond rationality. I can override my system 2 with a piece of paper, and then system 1 follows.
NB: Retrieving your original beliefs after you’ve been going off the ones from the paper is left as an exercise for the student.
I thought I had written all I could. What sort of things should I add?
I look at it in terms of efficiency; sites like reddit are simply inefficient ways to communicate. They are good at making random connections and exploring new subject areas, and that is what I use them for: if I have heard of a subject, but don’t know about it, I find a subreddit on the topic and subscribe.
As a tool for discourse, however, it leaves much to be desired; communication is lossy (many posts are simply not upvoted enough to be seen) and interspersed with noise (unrelated but “viral” posts). Google Reader is almost lossless; it maintains a buffer of all messages for 30 days and then archives them so that they are available in search results but not as unread items. If one reads every feed to its end at least once a month, then no data is lost.
Google Reader thus has the odd effect of making one commit; either you are subscribed to a feed, and read every post of it, or you are not, and never see it anywhere. I have not used Reader for more than a few years, and furthermore haven’t conducted a survey of its users, but I would theorize that Reader users as a whole are more productive/active than non-users as a result. Perhaps it could be a question on the next LessWrong survey.