I used the words “brute force” in the cryptographic sense, meaning to argue with someone until you deplete his stack of objections: though possible, it is a frustrating and time-consuming effort...
I forgot to add something, which I gather you are already doing: substituting uncritical nodes (those with which you have to fight for acceptance) with nodes more aligned with your meta-beliefs… After all, the beauty of low-utility nodes is that they are easily detached. I think this also has the benefit of increasing the overall utility of your social network’s immediate neighbourhood...
Too bad this kind of shock can’t be produced systematically… it would be a wonderful way to detect ‘false’ friends.
Isn’t the objective of rationality to correctly align our beliefs with reality, so that they may pay rent when we try to achieve our goals?
Protecting oneself against manipulation, learning to argue correctly and getting used to being defeated are all byproducts of the fact that there is only one reality, independent of the mind.
Hello everybody, I’m Stefano from Italy. I’m 30, and my story of becoming a rationalist is quite tortuous… as a kid I was raised as a Christian, but not strictly so: my only obligation was to attend Mass every Sunday morning. At the same time, from a young age I was fond of esoteric and scientific literature… In hindsight, I was a strange kid: by the age of 13 I already knew quite a lot about such things as the Order of the Golden Dawn or General Relativity… My fascination with computers and artificial intelligence began at approximately the same age, when I met a teacher who first taught me how to program: I then realized that this would be one of my greatest passions. To cut a long story short, over the years I discarded all the esoteric nonsense (by means of… well, experiments) and proceeded to explore deeper and deeper into physics, math and AI.
I found this site some months ago, and after a reasonable reconnaissance and a fair amount of reading through the Sequences, I feel ready to contribute… so here I am.
This may very well be the case today, or in our society, but it’s not really difficult to imagine a society in which you have to ‘hold’ really crazy ideas in order to win. Also, believing true things is an endeavour that is never completed per se: it surely is not possible to have it sorted out simpliciter before attaining 2 (the third imperative I really see as a subgoal of the second).
After all, the thesis conflicts with basically the whole history of humanity: Homo sapiens has won more and more without attaining perfect accuracy. However, it seems to me that it has won more where it has accumulated a greater stock of truths.
So I wouldn’t really say that in order to win you have to be accurate, but I think a strong case can be made that accuracy increases the probability of winning.
What, then, is the real purpose of rationality? I’m perfectly fine if we accept the conjunction “truth ∧ winning”, with the proviso that P(winning | high degree of truth) > P(winning | low degree of truth). However, if Omega were to pop up and ask:
“You must choose between two alternatives. I can give you the real TOE and remove your cognitive biases, if you accept living a miserable life; or you can live a very comfortable and satisfying existence, provided that you let me implant the belief in the Flying Spaghetti Monster.”
I confess I would guiltily choose the second.
I think your experience deserves a write-up in the discussion section.
I don’t find this anti-Spock argument very convincing. If the stove is hot, you just shouldn’t touch it; there’s really no reason to be afraid. Emotions were useful because they elicited the appropriate behaviour in the hunter-gatherer environment, but now, barring extreme situations, we can simply manage to do the proper things.
You can clearly point at rational behaviours and distinguish them from irrational ones, and you can call ‘rational’ an emotion which induces rational behaviour. But that doesn’t mean that emotions, per se, are necessary to that effect.
A Spock can really function as a proper and winning rationalist… but we, of course, are no Vulcans.
I suspect a plain hair dryer can’t produce that much heat; a hot air gun, however, should work neatly (I routinely used a Bosch PHG 600-3, and I want to point out that you can seriously burn yourself if you don’t handle it properly).
Learned blankness about the possible uses of everyday appliances, beyond the explicitly stated ones?
Now you’re morally obliged to find 3 creative uses for a freezer...
This sequence predictor could potentially be really useful (for example, predict future SIAI publications from past SIAI publications, then proceed to read the article which gives a complete account of Friendliness theory...) and is not dangerous in itself.
I see a way in which a simple, super-intelligent sequence predictor can be dangerous: if it can predict an entire journal issue, it can surely simulate a human being well enough to build a persuasive argument for letting it out of the box.
However, you don’t need such a complicated predictor: you can just use the speed prior instead of the universal prior.
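For what it’s worth, here is a rough sketch in LaTeX of the distinction I have in mind (the Kt-based form below is only an approximation: Schmidhuber defines the actual speed prior through his FAST algorithm):

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}              % universal (Solomonoff) prior: weight by program length only
Kt(x) = \min_p \bigl[\, \ell(p) + \log_2 t(p) \,\bigr]    % Levin complexity: length plus log of running time
S(x) \approx 2^{-Kt(x)}                                   % speed-prior-style weighting: slow programs are exponentially penalized

Here U is a universal prefix machine, \ell(p) is the length of program p, t(p) its running time, and U(p) = x* means the output of p begins with x. Roughly: under M the shortest program dominates no matter how slowly it runs, while under S a slightly longer but much faster program can win out, which is what makes speed-prior-based prediction approximable in practice.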
I was referring not to the experience of your trip, but to the subsequent battle you fought to overcome the (almost) Absolute Bias...
Politics is the mind-killer because the category itself is difference-killing. You just don’t discuss politics: you discuss global problems and how to solve them, without even entering into categorization beforehand...
This warrants better judgement, not less sex.
I had to impose the exact same warning on myself. I was trying to use karma points to signal “rationalist status” instead of simply trying my best to write intelligent comments. There is apparently a little segment of my neurology that is constantly scanning for the median groupthink and prompting me in that direction...
That was exactly my thought… so you need to extract the problem that tax policy or monetary policy is trying to solve, contextualize it, and maybe even translate it into a metaphor… that should be enough for a rational mind to start discussing rationally...
Thanks for the link, cousin_it! I immediately started downloading it; it appeals to me a lot!
In all the stories I have read about an AI dystopia, the proposed solution is to kill it: from Disney to The Lawnmower Man to Rucker’s Postsingular, etc. While we know what General Relativity looks like, and so we can develop the story of a civilization which happens to discover it, we still have little clue what an FAI would look like, and I think we shouldn’t burden a poor writer with discovering the theory before writing a novel… From here a writer has two choices: use FAI (we can imagine how it looks) to solve some other existential risk, or restrict the UFAI existential risk to some subset where the Friendly part is solvable but not obvious. I think I’ll ponder the latter track for a while...
Hi Zetetic! Your curriculum is very interesting, in particular the reference to “the link between category theory and cognitive science”. Are you talking about this? Or something else? I’m quite fond of category theory but AFAICR I’ve never stumbled upon such a link… Any pointer would be really appreciated.
Yes, but Subset(x,y) is defined directly from membership, the only primitive relation of ZFC. I don’t really know what cousin_it means by an explanation, but assuming it’s something like a first-order definition formula, nothing exists in ZFC that doesn’t subsume the concept in the first place.
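For reference, the standard first-order definition in the language of ZFC is:

\text{Subset}(x, y) \;:\Longleftrightarrow\; \forall z \,( z \in x \rightarrow z \in y )   % every member of x is a member of y

which just re-expresses containment in terms of \in, so the “explanation” bottoms out immediately.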
It seems to me that you tried to renegotiate the entirety of your social contracts by brute-forcing others into rationality: it’s not surprising that you experienced a certain degree of frustration...
My suggestion is to invest effort only in the critical nodes: your wife, your closest friends, etc., and to leave the rest of your social network to react as it may, provided that you don’t seek direct confrontation. With very low-priority nodes, you can just pretend to be agnostic, a position which seems to elicit much less evangelization… You could just pretend that you lost your faith and are nowadays very confused.
OTOH it’s crucial that high-priority nodes get a precise picture of your beliefs, and that you ask them to accept you as you are. Again, it’s not necessary to be confrontational, but you must be firm in asserting your rationality and equally firm in demanding that others accept it (obviously, you have to offer the same degree of tolerance).
In domains where your beliefs intersect, such as raising your children, it’s not difficult to hack the religious memeplex with rationalist methods: expose your children to the Twelve Virtues in a context-safe environment, and see to it that they have as much space as they need to grow their own posteriors. Even if a mind has been buried under tons of religious conditioning, you can generally expect it to evolve toward a rational point of view, given the right attitudes. It is also my very personal opinion that it’s much more important for your children to grow up in a loving family than for them to become mini-Yudkowskys by the age of 10.