Would anyone be interested if I did some posts on the rhetorical figures of speech? These posts would not be about simple or concise ways to improve your writing. I recommend lukeprog's post for that. I want the posts to be more oriented around a discussion of the rhetorical figures of speech and why/when they work. I have some rough notes where I have grouped them into some simple categories.
ScottL
Some more:
Shut up and multiply – the ability to trust the math even when it feels wrong
Most of science is actually done by induction—To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses which deserve attention. Examples of bad ways to find hypotheses include finding something interesting or surprising to believe in and then pinning all your hopes on that thing turning out to be true.
Curiosity – the burning itch
Relinquishment – “That which can be destroyed by the truth should be.” -P. C. Hodgell
Lightness – follow the evidence wherever it leads
Evenness – resist selective skepticism; use reason, not rationalization
Argument – do not avoid arguing; strive for exact honesty; fairness does not mean balancing yourself evenly between propositions
Empiricism – knowledge is rooted in empiricism and its fruit is prediction; argue what experiences to anticipate, not which beliefs to profess
Simplicity – is virtuous in belief, design, planning, and justification; ideally: nothing left to take away, not nothing left to add
Humility – take specific actions in anticipation of your own errors; do not boast of modesty; no one achieves perfection
Perfectionism – seek the answer that is perfectly right – do not settle for less
Precision – the narrowest statements slice deepest; don’t walk but dance to the truth
Scholarship – absorb the powers of science
[The void] (the nameless virtue) – “More than anything, you must think of carrying your map through to reflecting the territory.”
Oops—Theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms. It is better to say oops quickly when you realize a mistake. The alternative is stretching out the battle with yourself over years.
Explaining vs. explaining away – Explaining something does not subtract from its beauty; in fact, it heightens it. Through understanding it, you gain greater awareness of it. Through understanding it, you are more likely to notice its similarities and interrelationships with other things. Through understanding it, you become able to see it not only on one level, but on multiple levels. As for the delusions which people are emotionally attached to: that which can be destroyed by the truth should be.
Ugh field—Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have. We call it an “ugh field”. The ugh field forms a self-shadowing blind spot covering an area desperately in need of optimization.
Privileging the question—questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. Examples are: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed? The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like “should Congress pass stricter gun control laws?” and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
Something to protect—The Art must have a purpose other than itself, or it collapses into infinite recursion.
Take joy in the merely real – If you believe that science coming to know about something places it into the dull catalogue of common things, then you’re going to be disappointed in pretty much everything eventually – either it will turn out not to exist, or even worse, it will turn out to be real. Another way to think about it is that if the magical and mythical were commonplace, they would be merely real. If dragons were common, but zebras were a rare legendary creature, then there’s a certain sort of person who would ignore dragons, who would never bother to look at dragons, and chase after rumors of zebras. The grass is always greener on the other side of reality. If we cannot take joy in the merely real, our lives shall be empty indeed.
Complexity of value—the thesis that human values have high Kolmogorov complexity and so cannot be summed up or compressed into a few simple rules. It includes the idea of fragility of value which is the thesis that losing even a small part of the rules that make up our values could lead to results that most of us would now consider as unacceptable.
Egan’s law—“It all adds up to normality.” —Greg Egan. The purpose of a theory is to add up to observed reality, rather than something else. Science sets out to answer the question “What adds up to normality?” and the answer turns out to be that quantum mechanics adds up to normality. A weaker extension of this principle applies to ethical and meta-ethical debates, which generally ought to end up explaining why you shouldn’t eat babies, rather than why you should.
Emotion—Contrary to the stereotype, rationality doesn’t mean denying emotion. When emotion is appropriate to the reality of the situation, it should be embraced; only when emotion isn’t appropriate should it be suppressed.
Litany of Gendlin – “What is true is already so. Owning up to it doesn’t make it worse. Not being open about it doesn’t make it go away. And because it’s true, it is what is there to be interacted with. Anything untrue isn’t there to be lived. People can stand what is true, for they are already enduring it.” —Eugene Gendlin
Litany of Tarski – “If the box contains a diamond, I desire to believe that the box contains a diamond; If the box does not contain a diamond, I desire to believe that the box does not contain a diamond; Let me not become attached to beliefs I may not want.” —The Meditation on Curiosity
Magic—What seems to humans like a simple explanation sometimes isn’t one at all. In our own naturalistic, reductionist universe, there is always a simpler explanation. Any complicated thing that happens, happens because there is some physical mechanism behind it, even if you don’t know the mechanism yourself (which is most of the time). There is no magic.
Words can be wrong – There are many ways that words can be wrong; it is for this reason that we should avoid arguing by definition. Instead, to facilitate communication, we can taboo and reduce: we can replace the symbol with the substance and talk about facts and anticipations, not definitions.
I am not into politics at all, but I think that changing the following would improve the political process. Currently, politics seems to be based on arguing your position and gaining status for your team rather than seeking truth and the best policy.
Politics is the Mind-Killer – Politics is not a good area for rational debate. It is often about status and power plays where arguments are soldiers rather than tools to get closer to the truth.
Adversarial process—a form of truth-seeking or conflict resolution in which identifiable factions hold one-sided positions.
Color politics—the words “Blues” and “Greens” are often used to refer to two opposing political factions. Politics commonly involves an adversarial process, where factions usually identify with political positions, and use arguments as soldiers to defend their side. The dichotomies presented by the opposing sides are often false dilemmas, which can be shown by presenting third options.
Arguments as soldiers – is a problematic scenario where arguments are treated like war or battle. Arguments get treated as soldiers, weapons to be used to defend your side of the debate, and to attack the other side. They are no longer instruments of the truth.
A democracy is only as strong as the people in it. It seems to me that politicians too often dwell on inconsequential but politically important issues. They do this because voters care about these issues. The thing is, though, that voters are most often laymen, not experts. I therefore find it worrisome that politicians seem to discuss and change policies in order to placate voters. Obviously the opinions of voters should still matter, but where politicians spend their effort and time should be based on what will provide the most benefit. I see two ways to overcome this: somehow get the voters more informed, or change the political process (this is related to the first point) so that there is less showboating, sycophancy and placation of the whims of voters.
Privileging the question – see the description above; the stock political questions listed there are the canonical examples.
Error of crowds—the idea that under some scoring rules (squared error in particular), the error of the average is necessarily less than the average error, making the average belief tautologically better than the belief of a random person. Compare this to the ideas of the modesty argument and the wisdom of the crowd. A related idea is that a popular belief may still be wrong, because the less popular beliefs could not maintain any support at all unless they had something going for them.
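A quick numerical check of the squared-error version of this claim (the numbers are made up for illustration): the average squared error of the individuals decomposes exactly into the squared error of the average plus the variance of the estimates, so under squared error the average estimate can never do worse than a random individual on average.

```python
# Variance decomposition behind "error of crowds" (made-up numbers):
# mean squared error of individuals
#   = squared error of the mean estimate + variance of the estimates,
# so under squared error the mean estimate is tautologically no worse.

estimates = [12.0, 15.0, 9.0, 30.0, 14.0]   # hypothetical individual guesses
truth = 13.0

n = len(estimates)
mean = sum(estimates) / n
avg_sq_error = sum((x - truth) ** 2 for x in estimates) / n    # 62.2
sq_error_of_mean = (mean - truth) ** 2                         # 9.0
variance = sum((x - mean) ** 2 for x in estimates) / n         # 53.2

print(sq_error_of_mean <= avg_sq_error)                          # True
print(abs(avg_sq_error - (sq_error_of_mean + variance)) < 1e-9)  # True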
Most people’s beliefs aren’t worth considering—Sturgeon’s Law says that, as a general rule, 90% of everything is garbage. Even if it is the case that 90% of everything produced in any field is garbage, that does not mean one can dismiss the 10% that is quality work. Instead, it is important to engage with that 10%, and use that as the standard of quality.
Politicians need to become more aware of complexity and the feedback caused by their policies. Understanding system dynamics would help tremendously in this regard.
Policy resistance—Frequently, a nonlinear feedback system will respond to a policy change in the desired manner for a short period of time, but then return to its pre-policy-change state. This occurs when the system’s feedback structure works to defeat the policy change designed to improve it.
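A minimal sketch of this dynamic (the model and all parameters are invented for illustration): a one-time intervention improves the state, but a balancing feedback loop pulls it back toward the level the system’s own structure seeks.

```python
# Toy model (invented parameters) of policy resistance: a balancing
# feedback loop pulls the system back toward its implicit goal after a
# one-time policy intervention at t = 10.

implicit_goal = 100.0   # the state the feedback structure seeks
adjust_rate = 0.2       # fraction of the gap the loop closes each step

state = implicit_goal
history = []
for t in range(40):
    if t == 10:
        state -= 40.0   # policy change: a one-time improvement
    state += adjust_rate * (implicit_goal - state)   # balancing feedback
    history.append(round(state, 1))

print(history[8:16])    # dips to 68.0 at t=10, then climbs back toward 100
```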
Goodhart’s law—states that once a certain indicator of success is made a target of a social or economic policy, it loses the information content that would qualify it to play such a role. People and institutions try to achieve their explicitly stated targets in the easiest way possible, often obeying only the letter of the law, and often in ways that the designers of the law did not anticipate or want. For example, Soviet factories, when given targets based on the number of nails produced, made many tiny, useless nails; when given targets based on weight, they made a few giant nails.
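Here is a toy simulation of the nail example (all numbers are hypothetical): a factory with a fixed production budget picks whichever nail size maximizes the announced metric, and neither metric tracks what nail users actually need.

```python
# Toy Goodhart's-law simulation (hypothetical numbers). Each nail costs a
# fixed setup cost plus material proportional to its mass m; the factory
# picks m to maximize whatever metric the planner announces.

BUDGET = 1000.0
SETUP, MATERIAL = 1.0, 0.1                     # cost per nail = SETUP + MATERIAL * m

def count(m):                                  # metric 1: number of nails
    return BUDGET / (SETUP + MATERIAL * m)

def total_weight(m):                           # metric 2: total weight of output
    return m * count(m)

def value_to_users(m):                         # what was actually wanted:
    return count(m) if 2 <= m <= 20 else 0.0   # only hand-sized nails are usable

masses = [0.01, 2, 10, 100, 10_000]
print(max(masses, key=count))           # 0.01 -> a flood of tiny, useless nails
print(max(masses, key=total_weight))    # 10000 -> a few giant nails
print(max(masses, key=value_to_users))  # 2 -> usable nails, ignored by both metrics
```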
I think it would be good to put the points into subsections and then describe why each subsection is valuable. I have put some draft subsections below:
Have a growth mindset
Nearly anything can be learnt or improved on, aside from a few physical limits – e.g. being the best marathon runner is very difficult, but being a better marathon runner than you were yesterday is possible.
Clarify what X is
Make a list of what you think that X is. Break it down. Follow with what you know about X, and, if possible, what you think you are missing about X.
Do some research to confirm that your rough definition of X is actually correct. Confirm that what you already know is true; if not, replace that existing knowledge with true things about X. Do not jump into everything yet.
As you learn more about X, consider coming back to this point to confirm that this is still the original X, and not X2, or X3, etc. (If you find you were actually looking for X2 or X3, go back over the early steps for Xn again.)
Determine if it’s worth it to learn about X
Make sure your chosen X is aligned with your actual goals (are you doing it because you want to?). When you want to learn a thing, is X that thing? (Example: if you want to exercise, maybe skiing isn’t the best way to do it. Or maybe it is, because you live in snow country.)
Check that you want to learn X and that it will be progress towards a goal (or is a terminal goal – e.g. learning to draw faces can be your terminal goal, or can help you to paint a person’s portrait).
Get access to the best resources possible. Estimate how many resources (time, money) they will take to go over, and confirm you are okay with those investments.
Determine best practices/common mistakes
Figure out what experts in the area know (by topic area name), and try to find what strategies experts in the area use to go about improving themselves. (Expert people are usually a pretty good way to find things out.)
Find out what the common mistakes are when learning X, and see if you can avoid them. (Learn from other people’s mistakes where possible, as it can save time.)
Check if someone is teaching about X. Chances are that someone is, and someone has listed what relevant things they teach about X. We live in the information age; it’s probably all out there. If it’s not, reconsider whether you are learning the right thing. (If no learning material is out there, it might be hard to master X without trial and error the hard way.)
Figure out the best resources on X. If this is taking too long, spend 10 minutes and then pick the best one so far. These can be books, people, Wikipedia, or website repositories; if X is actually safe, consider making a small investment and learning via trial and error. (E.g. for frying an egg, the common mistakes probably won’t kill you; you could invest in 50 eggs and try several methods to do it at little cost.)
Consider writing to 5 experts and asking them for advice on X or on finding out about X.
While learning X, externalise what you have learnt.
Delve in; make notes as you go. If things change along the way, re-evaluate.
Write out the best things you needed to learn and publish them for others. (remembering you had foundations to go on – publish these as well)
Try to teach X to other people. You will be empowering their lives, teaching is a great way to learn, and making friends in the area of X is very helpful for keeping you on task and enjoying X.
Try to apply what you learnt and find ways to improve what you have learnt
Try to find experiments you can conduct on yourself to confirm you are on the right track towards X, or ways to measure yourself. (Measurement or testing is one of the most effective ways to learn.)
What, exactly, are the principles of good mental posture for the Art of Rationality?
I’m not sure if I can answer this, because I don’t understand what good mental posture is – or even what good physical posture is, for that matter. Can you please confirm whether my understanding of these, below, is correct?
Basically, posture refers to the body’s alignment and positioning with respect to the force of gravity. Good posture:
is efficient
allows movement within the posture
prepares for the next movement
allows you to react to unexpected forces
is structurally strong
Good posture refers to the removal of impediments to movement. It is about activating only the right muscles at the right time in order to achieve specific movements.
Good mental posture, on the other hand, seems to involve taking certain perspectives or entering certain frames of mind that are conducive to the achievement of your current goals.
From the article you linked:
I’ve been using a term for changing the overall quality of my thoughts and feelings to something more conducive to accomplishing my immediate goal. I call it “adopting a mental posture”.
If we view thought activation in a similar way to how we view muscle activation in regards to physical posture, then we can think of good mental posture as the undertaking of certain perspectives or mindsets that inhibit unhelpful thoughts and induce helpful thoughts, where what is helpful depends on the current task at hand.
A good mental posture will be:
Relaxed—there is no misattribution. That is, you are not carrying thoughts from previous interactions or arguments. You start the thought process with a relaxed mindset in which you are free from recurrent and intrusive thoughts.
Fluid—there is no stickiness in your perspectives. This means that you can easily change your perspective. You can think of what the opposites are or what the other person you’re arguing with thinks or what the situation would be like if certain variables were changed etc. The key point here is that you can move between perspectives with ease. There is no flinching.
Efficient and synchronous—you are activating only the thoughts that are pertinent to the task at hand. You are also thinking of the pertinent thoughts at the right time. That is, you don’t linger and dwell on certain thoughts.
Adaptable—if you receive new information that requires you to change perspective, if you are to keep good posture, then you do so. This means that you update your beliefs.
Normally in a broad perspective—we can think of broadness as similar to stability in physical posture. Stability is transient in physical posture: you are not stable during the transition to a new movement, but you default to being stable. In the same way, your mental posture should by default be broad, but you should be able to transition to a narrow perspective if this is going to be beneficial. You do need to be able to transition back to the broad perspective, though.
PS. Physical posture and mental posture may be entwined. People who are in pain or tired often have bad posture.
I think of identity as if it were a kind of ‘thought groove’, or as if it were similar to trampling a path in snow that others will naturally tend to follow. By this I mean that it tends to cause some types of thoughts to be activated and others to be attenuated. The stronger your identity, the stronger this effect.
What we perceive is largely a product of what we have been primed and conditioned to perceive. Our perception is shaped by our previous experiences and beliefs, for it is filled with assumptions and predictions – gaps which must be filled by drawing upon pre-existing information in our minds. Whether a certain argument feels right or whether a particular remark is funny to you will depend largely on who you are and what your identity is.
Identity can be a great way to get certain thoughts and ways of thinking down to the 5 second level. On the other hand, it is also a common way to embed and propagate harmful or unhelpful thoughts. The best strategy to deal with it in my opinion involves four things:
removing unhelpful identities, e.g. learned blankness.
embedding helpful and life affirming identities, e.g. growth mindset, trying new things, being a person who is compassionate and grateful
learning how to choose identities that can be adaptable. Retirees (especially men) commonly experience depression after giving up work because their identities were tied to it. The ones who avoid this trouble are the ones who are able to retain a sense of purpose after retirement. The identity: “I am a person who regularly exercises” is better than: “I am a runner” because it points to a larger class of possible activities. If you had a leg injury, you can still retain the first identity by weight lifting, for example, but there is no way for you to retain the second.
learning how to keep your identities fluid. It is much better in my opinion to allow your identities to remain in a state of flux rather than becoming cemented in your psyche. This is because there may come a time when you need to abandon an identity or amplify it or shrink it.
Thanks for this. Let me know if you have any others and I will add them to this wiki page I created: Less Wrong Canon on Rationality. Here are some more that I already had.
Fallacy of gray → Continuum fallacy
Motivated skepticism → disconfirmation bias
Marginally zero-sum game → arms race
I think that it would probably be a good idea to differentiate ‘simple explanations’ from ‘explanations that are based on simple rules’. See Fake Simplicity for a description of simple explanations. An example would be attributing all of the causality to some other entity, e.g. god. Explanations that are based on simple rules can sometimes also be easy to understand, but the way in which they are reached is rarely simple. They are grounded in extensive research and evidence.
Simple explanations can be dangerous because they are easy to believe. They are:
Easy to understand
Sometimes partly correct as they can be true some of the time even though they don’t describe the whole picture
Often overly broad so that they are hard to disprove
Etc.
I am just trying to say that we should be careful of simple explanations because they can be enticing. I would think that non-experts rarely have enough experience to reach explanations based on simple rules and will instead often just find simple explanations. This is because it is really hard, or perhaps even impossible, to find these simple rules without a lot of groundwork. We often have to understand something intimately and deeply before we even begin to sense the undercurrent from the operation of these simple rules.
Here’s an extract from Feynman which is related:
The world is strange. The whole universe is very strange, but you see when you look at the details that the rules of the game are very simple – the mechanical rules by which you can figure out exactly what is going to happen when the situation is simple. It is like a chess game. If you are in a corner with only a few pieces involved, you can work out exactly what is going to happen, and you can always do that when there are only a few pieces. And yet in the real game there are so many pieces that you can’t figure out what is going to happen – so there is a kind of hierarchy of different complexities. It is hard to believe. It is incredible! In fact, most people don’t believe that the behavior of, say, me is the result of lots and lots of atoms all obeying very simple rules and evolving into such a creature that a billion years of life has produced. There is such a lot in the world. There is so much distance between the fundamental rules and the final phenomena that it is almost unbelievable that the final variety of phenomena can come from such a steady operation of such simple rules.
This is really good. I think that a summary of what the presentation covers would be useful as well. I wrote a draft one below:
‘You Are A Brain’ is a presentation by Liron Shapira that is tailored for a general audience and provides an introduction to some of the core LessWrong concepts, including:
Map and territory – the map is your brain’s internal representation of reality; the territory is reality itself. The presentation covers the importance of accuracy in your map, the idea that the map is inside of you (i.e. that it is beliefs encoded as neuron structures in the brain), and the idea that maps are inherently imperfect because:
You can’t see the whole territory
You’re computationally bounded
You’re biased
Heuristics and biases—the presentation covers the idea that due to computational limitations the brain must make use of heuristics. Heuristics are mental shortcuts which require less time and energy to use, but sometimes go awry, producing bias. This presentation explains that colour vision is an example of a heuristic. The presentation also explains that illusions reveal your heuristics.
You’re biased—this presentation defines biases as deviations from good map-drawing procedures. The following example biases are covered:
Stereotyping—You draw a map that is skewed toward what you expected to see
Defensiveness—You don’t fix a mistake in your map because you don’t want to admit being wrong
Wishful thinking—You draw whatever makes you feel good on your map
The map is not the territory – this presentation covers the idea that the map is not the territory. If your brother were to die, you don’t react the moment he dies; you only react after you understand that your brother is dead. Reality (the territory) exists outside of our mind, but we construct models of the ‘territory’ based on what we glimpse through our senses.
Adaptation executors—this presentation covers the idea that individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers. This is done tangentially through discussing superstimuli (stimuli that mislead your desire heuristics).
Mind projection fallacy—this presentation talks about how ‘sexiness’ is not a property of a woman, but is instead a characteristic that you attribute to the woman. That is, sexiness is not in the territory, but is in the map.
Wrong Questions—A question about your map that wouldn’t make sense if you had a more accurate map.
Ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself.
Has anyone been working on the basics of rationality or summarizing the Sequences? I think it would be helpful if someone created a sequence in which they cover the Less Wrong core concepts concisely, as well as providing practical advice on how to apply rationality skills related to these concepts at the 5-second level.
A useful format for the posts might be: overview of a concept, an example in which people frequently fail at being rational because they innately don’t follow the concept, and then advice on how to apply the concept. Another format might be: a principle underlying multiple Less Wrong concepts, examples in which people fail at being rational because they don’t follow the concepts, and then advice on how to deal with the principle and become more rational.
I think that all these posts should be summed up with, or contain, practical methods on how to improve rationality skills, and ways to quantify and measure these improvements. The results of CFAR workshops could probably provide a basis for these methods.
Lots of links to the related Less Wrong posts or wikis would also be useful.
Yet how is a lichen ‘more than the sum of fungus and alga’?
I don’t know anything about lichen, but the below is what I assume “more than the sum of” in this context means:
“The symbiosis between the mycobiont and the photobiont creates an organism that is more than the sum of its parts, in other words, a lichen is an emergent property. Lets take a step back to examine this statement. On the one hand, neither the photobiont nor the mycobiont can withstand intense UV radiation, dessication, or extreme temperatures. But on the other hand, when the photobiont and mycobiont work together within the context of the lichen symbiosis, they create an organism that can withstand living in outer space – thats more extreme temperature and radiation (not to mention vacuum exposure) than is experienced on Earth! Lichen can even grow within rocks (endolithic lichen)! These are conditions that would kill a fungus or algae.”
Do you mean the article summaries?
The statement from Springer on this is here. I can think of some ways to fix the particular issue that caused the retractions:
Don’t allow author-suggested reviewers
Use a more stringent process for author-suggested reviewers than for non-author-suggested reviewers
Only allow author-suggested reviewers who are registered in some global database, so that people can’t create fake contact details for other people. There would need to be some method of authentication before a user can be registered. (A toy sketch of this check is below.)
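A toy sketch of that registry check (all names and structures here are invented for illustration): an author-suggested reviewer is accepted only if their contact details match an authenticated entry in the global database, so authors can’t supply fake email addresses that route reviews back to themselves.

```python
# Hypothetical reviewer-registry check: author-suggested reviewers are
# accepted only when their contact details match a verified entry in a
# global database, rather than trusting author-supplied addresses.

VERIFIED_REVIEWERS = {                  # stand-in for the global registry
    "j.smith@uni.edu": "J. Smith",
    "a.chen@lab.org": "A. Chen",
}

def accept_suggested_reviewer(name: str, email: str) -> bool:
    """Accept a suggested reviewer only via a registry lookup."""
    return VERIFIED_REVIEWERS.get(email.lower()) == name

print(accept_suggested_reviewer("J. Smith", "j.smith@uni.edu"))      # True
print(accept_suggested_reviewer("J. Smith", "fake@mailinator.com"))  # False: unregistered address
```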
I would guess that any solution to the larger issue of scientific misconduct would need to consider Goodhart’s law and work to eliminate opportunities for people to game the system. There is a site called Retraction Watch which has information on retractions that have occurred.
This from here seems pretty accurate for Usenet:
Binary groups being a great big cost sink would be the main thing.
The store and forward protocol required quite some disk space at the time.
The network relied on “control” messages to create/delete groups automatically (as opposed to manual subscription), which due to the lack of authentication/encryption in the protocol, were very easy to spoof. A gpg-signing mechanism was later put into place, so that nodes peering with each other could establish a chain of trust by themselves. This was pretty nice in retrospect (and awesome by today standards), but the main problem is that creating new groups was a slow and painful approval-based process: people often wanted small groups just for themselves, and mailing lists offered the “same” without any approval required.
Having a large open network started to become a big attractor for SPAM, and managing SPAM in a P2P network without authentication is a harder problem to solve than a locally managed mailing list.
running a local server became so easy and cheap, that running mailing list offered local control and almost zero overhead. People that had niche groups started to create mailing lists with open access, and people migrated in flock. Why share your discussions in comp.programming.functional where you could create a mailing list just for your new fancy language? (it’s pretty sad, because I loved the breadth of the discussions). Discussions on general groups became less frequent as most of the interesting ones were on dedicated mailing lists. The trend worsened significantly as forums started to appear, which lowered the barrier to entry to people that didn’t know how to use a mail client properly.
For NNTP for LessWrong, I would think that we also have to take into account that people want to control how their content is displayed/styled. Their own separate blogs easily allow this.
I moved the main posts into a separate chart. It should be less confusing now.
Pressing enter with the box focused didn’t start the scraping. I had to click ‘go’.
Fixed
I’d put main and discussion upvotes on the same scale. (So the ‘main’ y-axis is just 10 times the ‘discussion/comment’ y-axis.) Right now they don’t even have zero in the same place, which is really weird. Maybe also make the scale nonlinear.
I moved main into a separate graph. This should fix the issues.
It’s hard to click-and-drag over the whole width or height of the chart.
I could change it so that you can only zoom in on the y-axis, like it is here.
I’m not sure how easy this would be, but I’d appreciate a distinction between meetups and non-meetups. (But I think some meetups are in main and some are in discussion.)
Maybe I will look at that later.
I’d like more context on posts/comments without having to visit them. For posts, the title; for comments, maybe the title of the attached post plus a few words (like in the sidebar). It might be too noisy to put that in the hover box; if so, perhaps if I click, the hover box expands and stays there until the next click, and includes an actual link?
I have updated this. Try it out and let me know what you think.
You don’t actually display total karma anywhere. I had to get it from positive-negative on the pie chart.
I fixed the total chart to have a title that shows the total score.
The pie chart doesn’t have a slice for ‘neutral’.
The pie chart is meant to show the total score. Since neutral items have a score of 0, I don’t think they should be in the graph.
I’d also be interested in seeing cumulative karma as a time series
Would that be something like this, with the total score moving up and down over time? I would do this by ordering the comment/post scores by their dates.
and 30-day karma as a time series.
Would this be similar to the cumulative chart above, but just for 30 days?
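If it helps pin down what I mean, here is a rough sketch of both series (the (date, score) data format is assumed for illustration, not the scraper’s actual one): cumulative karma is a running sum over date-ordered scores, and 30-day karma is a trailing-window sum.

```python
# Sketch of the two proposed series (hypothetical (date, score) data):
# cumulative karma = running sum of date-ordered scores;
# 30-day karma     = sum of scores in the trailing 30-day window.

from datetime import date, timedelta
from itertools import accumulate

items = [(date(2015, 6, 1), 5), (date(2015, 6, 3), -1),
         (date(2015, 6, 20), 12), (date(2015, 7, 15), 3)]

items.sort(key=lambda it: it[0])                      # order scores by date
cumulative = list(accumulate(score for _, score in items))

def karma_30_day(day):
    """Total karma on items dated in the 30 days up to and including `day`."""
    window_start = day - timedelta(days=30)
    return sum(score for d, score in items if window_start <= d <= day)

print(list(zip((d for d, _ in items), cumulative)))   # running total per date
print(karma_30_day(date(2015, 7, 1)))                 # 16: all three June items
```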
I’m wondering what the optimal number of people on the leaderboard would be.
From a usability point of view, 15 sounds about right to me. There is already a lot of other stuff on the sidebar.
I suspect that if there were 20 people on the leaderboard, that would increase the motivation effect, without significantly devaluing being on the leaderboard itself.
Maybe, but if the point is to increase motivation there might be better ways. I don’t know if these are any good, but here are some example ideas:
Designated appreciators—people who try to highlight the good parts in posts and offer constructive advice on how to improve them.
Month in review post—this would be a post that contains the author’s favorite posts from the last month, with short abstracts of the posts.
A more flexible feedback system.
Some designated (nice) users who offer to give advice if people message them.
Some designated collaborators—if you have some ideas for a post, but don’t know enough about the subject you can message some users who have volunteered and have knowledge in that particular area. If they have time, they can help you find what areas to look into further and offer advice on how to improve your post.
Suggested articles post—a post which provides some ideas that look interesting and haven’t really been explored thoroughly yet. This would be good for people who don’t have any ideas on what to write for a post.
Perhaps integrate karma awards for the users who volunteer to take on certain roles.
I really like your posts. Can you please let me know if the below summaries are accurate and what you think of the below questions.
Second Chances (live each day as if you’re doing it over)
This is about taking a perspective that helps develop a pervading attitude that there is a purpose to your day. If you are reliving a day, then it means that there is a reason for this. This means that you are going to be:
More appreciative of beauty and excellence
More mindful, which means being in the moment and in control (you don’t need to do that silly thing that caused an accident last time).
More motivated
Questions
Do you think it has to be a whole day? What if you thought about a whole chunk of time in which you will be in a particular situation and then approached it with a specific purpose? If you are at the beach with your family, maybe you can take on the appreciative frame of mind. If you are driving, maybe you can take on the mindful frame of mind.
Is there any kind of thought pattern or ritual that would make the perspective you take more impactful and vivid?
Split Selves (You only need to worry about what you can do now. Trust that tomorrow you will be able to take the same attitude, and so the work will eventually get done.)
Bobbling (Essentially, it means that you allocate a period of time and then consider that time spent. In that time period you focus entirely on one particular task and ensure that there are no interruptions.)
Questions
What do you think is the best amount of time to use?
Do you think you should string together bobbled times with small breaks in between, like with the Pomodoro technique that you mentioned?
Do you ever extend the period of time? For example, if you are writing and you get a great idea, do you just keep writing or do you take a break?
The Past, Interrupted (Essentially, it means that you make a certain perspective or context vivid so that you are more likely to take actions appropriate for that context.)
I think that you can also relate mental or physical practice to this, although it is a bit more about training yourself so that specific actions or habits occur in specific contexts. For example, if you are having trouble getting up in the morning, you can practice hearing the alarm and getting up straight away. Then, when you are in the context of hearing the alarm, you will be more likely to get up straight away.
Toward a More Excellent Future (Successful time travel is all about bringing our past, present, and future selves into a cooperative alignment. They need to trust each other. They need to communicate.)
Notes:
“ug field” should be “ugh field”
I made this mistake. You should have a summary break so that people don’t need to scroll through the whole article when they look for new main articles.
I haven’t read the book. Here are some of my thoughts on the content in the above post. These thoughts may be invalidated by other parts of the book, as this is only a summary of its first quarter.
According to Korzybski, the unique quality of humans is what he calls “time-binding”, described as “the capacity of an individual or a generation to begin where the former left off”.
Robert Sussman here posits that there are three human behavioural traits not found in chimps or any other animal; they are unique and exemplify what it means to be human.
Symbolic behaviour—the ability to create alternative worlds, to ponder about the past and future, to imagine things that don’t exist.
Language—the unique communicative venue that enables humans to communicate not only in proximate contexts, but also about the past, the future, and things distant and imagined, allowing us to share and pass our symbols to future generations.
Culture—the ability found only in humans for different populations to create their own shared symbolic worlds and pass them on. Although chimpanzees can pass on learned behaviour, they cannot pass on shared and different world views.
Time binding seems to be the same as culture.
But religion is a ‘primitive science’
Aren’t religion and science disparate concepts? I get that they both provide theories about how the world is, but to refer to religion as scientific in any way seems strange to me.
The central question of Manhood of Humanity is: “What is a human?” Answering this question correctly could help us design a civilization allowing the fullest human development. Failure to answer this question correctly will repeat the cycle of revolutions and wars.
This seems to be a pre-1900 view of the world, i.e. before relativity, quantum mechanics, complexity theory, etc.
This book goes over what I mean. Below is part of the abstract that explains it.
“Early theorists believed that in science lay the promise of certainty. Built on a foundation of fact and constructed with objective and trustworthy tools, science produced knowledge. But science has also shown us that this knowledge will always be fundamentally incomplete and that a true understanding of the world is ultimately beyond our grasp. In this thoughtful and compelling book, physicist F. David Peat examines the basic philosophic difference between the certainty that characterized the thinking of humankind through the nineteenth century and contrasts it with the startling fall of certainty in the twentieth. The nineteenth century was marked by a boundless optimism and confidence in the power of progress and technology.”
So, basically, I disagree that knowing “what is a human” is all you need to build a utopia.
Second, I don’t think that there is necessarily any way to set up society so that everyone is perfectly satisfied. People are similar to each other, but divergent as well. We are individuals, but have an underlying human nature. There is no human template, no certain way that things can be arranged so that they are exactly the same, and perfect, for everyone. There is going to be conflict, and this, to an extent, is necessary. Perfection, like certainty, may be forever elusive. There are, of course, underlying common patterns or human universals. I take the view that non-teleological evolution means that human nature is not immutable or timeless. Human nature does not refer to an unchanging essence. Instead, it describes what the members of humanity currently happen to be like. People have common propensities, predispositions, norms and needs, and these make it more or less likely that humans will have certain traits. Another way of putting this is that there is ‘species-typical’ behaviour, but the resultant behaviour is going to be diverse. An example is laughing: we laugh because of our biology, but what we laugh at is extraordinarily variable. The cognitive ability of humans means that their behaviours are more diverse than those of other animals and that their thought patterns have a greater impact on their behaviour.
I do believe that understanding these common patterns or human needs is extremely helpful. An example is that infants who are touched gently on a regular basis gain weight and grow at better rates than babies who lack this contact. If you are designing a society or writing a policy, then understanding these needs can be immensely helpful. Some work I have found on human needs is listed below:
Maslow’s hierarchy of needs, but I don’t think the current research backs up the order or the idea of ranking.
Manfred Max-Neef talked about fundamental human needs
There’s also this book by Martin Seligman and others which classifies the character strengths and virtues: Peterson, C., & Seligman, M. E. P. (2004). Character Strengths and Virtues: A Handbook and Classification. New York: Oxford University Press and Washington, DC: American Psychological Association. Summary here: www.viacharacter.org/www/C...ths/VIA-Classification#nav
I think, though, that people intuitively know the human needs because they are human themselves. The greatest atrocities committed by humanity against other humans are not due to a lack of understanding of human needs, but because certain people are excluded from being considered deserving of meeting these needs. In the past, certain discriminating factors, for example melanin in skin, religion, and patriotic allegiance, have been used as indicators of beastliness, lowliness, inhumanity and other degrading qualities. In summary, I think the issue is more of a perspectival one, i.e. one with people’s maps. You need to not only ask what the fundamental human needs are, but also how you can get people to create maps that allow them to best fulfill these fundamental human needs. I am sure that there are also a multitude of other considerations that you would need to think about if you were to design a civilization that allowed the fullest human development possible.
That’s actually a tough question, because Eliezer and others tend to use new names for existing ideas – for example, ‘the fallacy of the gray’ instead of ‘the continuum fallacy’ – so I am not entirely sure which concepts have been covered elsewhere. Also, I think a lot of the value from Less Wrong posts comes from them getting you to think about an idea that you may not otherwise have thought to look deeply into, even though technically it may have been in the books you have read. For example, I would never have looked into Kent Berridge’s work on wanting and liking if I hadn’t read lukeprog’s post on it.
The below list contains some of the concepts that I don’t think are covered elsewhere. You can also go through the wikis, since you should know what topics you have already learnt:
Friendly artificial intelligence – is a superintelligence (i.e., a really powerful optimization process) that produces good, beneficial outcomes rather than harmful ones.
Decision Theories – theories invented by researchers associated with MIRI and LW: TDT (Timeless Decision Theory), UDT (Updateless Decision Theory) and ADT (Ambient Decision Theory, a variant of UDT)
Affective death spiral—positive attributes of a theory, person, or organization combine with the Halo effect in a feedback loop, resulting in the subject of the affective death spiral being held in higher and higher regard.
Chronophone – is a parable that is meant to convey the idea that it’s really hard to get somewhere when you don’t already know your destination. If there were some simple cognitive policy you could follow to spark moral and technological revolutions, without your home culture having advance knowledge of the destination, you could execute that cognitive policy today.
Free will (explanation) - means our algorithm’s ability to determine our actions. People often get confused over free will because they picture themselves as being restrained rather than part of physics. Yudkowsky calls this view Requiredism, but most people just view it essentially as Compatibilism.
Politics is the Mind-Killer – see the description above.
The map is not the territory – the idea that our perception of the world is being generated by our brain and can be considered as a ‘map’ of reality written in neural patterns. Reality exists outside our mind but we can construct models of this ‘territory’ based on what we glimpse through our senses.
Probability is in the Mind—Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
Adaptation executors—Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers. Our taste buds do not find lettuce delicious and cheeseburgers distasteful once we are fed a diet too high in calories and too low in micronutrients. Taste buds are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor. Evolution operates on too slow a timescale to re-adapt to new conditions (such as a modern diet).
Cached thought – is an answer that was arrived at by recalling a previously-computed conclusion, rather than performing the reasoning from scratch.
Applause light—is an empty statement which evokes positive affect without providing new information.
Belief as attire – is an example of an improper belief promoted by identification with a group or other signaling concerns, not by how well it reflects the territory.
Belief as cheering—People can bind themselves as a group by believing “crazy” things together. Then, among outsiders, they can show the same pride in their crazy belief as they would show in wearing “crazy” group clothes. The belief is more like a banner saying “GO BLUES”. It isn’t a statement of fact, or an attempt to persuade; it doesn’t have to be convincing—it’s a cheer.
Belief in belief (I believe this is Dennett’s idea)—Where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. Were you to really believe, and not just believe in belief, the consequences of error would be much more severe. When someone makes up excuses in advance, it would seem to require that belief and belief in belief have become unsynchronized.
Counter man syndrome—wherein a person behind a counter comes to believe that they know things they don’t know because, after all, they’re the person behind the counter. So they can’t just answer a question with “I don’t know”, and thus they make something up without really paying attention to the fact that they’re making it up. Pretty soon, they don’t know the difference between the facts and their made-up stories.
Crisis of faith—a combined technique for recognizing and eradicating whole systems of mutually-supporting false beliefs. The technique involves systematic application of introspection, with the express intent to check the reliability of beliefs independently of the other beliefs that support them in the mind. The technique might be useful for the victims of affective death spirals, or any other systematic confusions, especially those supported by anti-epistemology.
Making Beliefs Pay Rent—Every question of belief should flow from a question of anticipation, and that question of anticipation should be the centre of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.
Least convenient possible world – is a technique for enforcing intellectual honesty, to be used when arguing against an idea. The essence of the technique is to assume that all the specific details will align with the idea against which you are arguing, i.e. to consider the idea in the context of a least convenient possible world, where every circumstance is colluding against your objections and counterarguments. This approach ensures that your objections are strong enough, running minimal risk of being rationalizations for your position.
Rationalist taboo—a technique for fighting muddles in discussions. By prohibiting the use of a certain word and all the words synonymous to it, people are forced to elucidate the specific contextual meaning they want to express, thus removing ambiguity otherwise present in a single word. Mainstream philosophy has a parallel procedure called “unpacking” where doubtful terms need to be expanded out.
Semantic stopsign – is a meaningless generic explanation that creates an illusion of giving an answer, without actually explaining anything.