Kiva.org has the distinct honor of being the only charity that has ensured me maximum utilons for my money, with the unexpected bonus of the most fuzzies I have ever experienced. Seeing my money repaid, and knowing it was possible only because my charity dollars worked, that the recipient actually put the funds to use effectively enough to thrive and pay me back: well, goddamn it felt good.
haig
I just read a nice blog post at neurowhoa.blogspot.com/2009/03/believer-brains-different-from-non.html, covering research on brain differences between believers and non-believers. The takeaway from the recent study was that "religious conviction is associated with reduced neural responsivity to uncertainty and error". I'm hesitant to read too much into this particular study, but if there is something to it, then the best way to spread rational thought would be to try to correct for this deficiency. Practicing not to let uncertainty or errors slide by, no matter how small, would build a positive habit and develop one's rationality skills.
In my experience, the inability to be satisfied with a materialistic world-view comes down to simple ego preservation, meaning fear of death and the annihilation of our selves. The idea that everything we are and have ever known will be wiped out without a trace is literally inconceivable to many. The one common factor in all religions and spiritual ideologies is some sort of preservation of the 'soul', whether it be a fully platonic heaven as in the Christian belief, a more material resurrection as in the Jewish idea, or the more abstract notions found in Eastern and New Age ideologies. The root of 'spiritual' is 'spirit', a non-corporeal substance or entity whose main purpose is to contrast with the material body. Spirit is that which is not material, and so can survive the decay of the material body.
In my opinion, THIS IS the hard pill to swallow.
Well, from reading the comments it seems the most popular type of akrasia that hinders this group is procrastination. I'm sure other weaknesses of will are common, but procrastination seems to be an overwhelmingly common nuisance. This paper http://www.uni-konstanz.de/FuF/SozWiss/fg-psy/gollwitzer/PUBLICATIONS/McCreaetal.PsychSci09.pdf might hint at why this is so. The gist is that the more abstract your tasks, projects, and goals are, the more you will procrastinate. As the tasks become more concrete, procrastination diminishes. An example is the abstract goal 'write that essay' vs. the concrete task 'pick up pen and paper and begin mind-mapping', or whatever.
It is probably fair to assume that most people here are more abstract thinkers than the average population and thus might be especially prone to procrastination.
I think this post and the related ones are really hitting home why it is hard for our minds to function fully rationally at all times. Like Jon Haidt’s metaphor that our conscious awareness is akin to a person riding on top of an elephant, our conscious attempt at rational behavior is trying to tame this bundle of evolved mechanisms lying below ‘us’. Just think of the preposterous notion of ‘telling yourself’ to believe or not believe in something. Who are you telling this to? How is cognitive dissonance even possible?
I remember the point when I finally abandoned my religious beliefs as a kid. I had 'known' that belief in a personal god and the religious teachings were incompatible with rational thinking, yet I still maintained my irrational behavior. What did the trick was to actually practice, for a set period of time, living strictly in accordance with what my rational beliefs allowed. After some number of days, I was finally completely changed and could not revert back to my state of contradiction.
In relation to this, think about why you can’t just read a math book and suddenly just get it (at least for us non math geniuses). You may read an equation and superficially understand that it is true, but you can still convince yourself otherwise or hold conflicted beliefs about it. But then, after doing examples yourself and practicing, you come to ‘know’ the material deeply and you can hardly imagine what it is like not to know it.
For people like the girl Eliezer was talking to, I wonder what would happen if you told her, as an experiment, to force herself to totally abandon her belief in god for a week, only adhering to reason, and see how she feels.
I voted up robin hanson, but I would love either Cory Doctorow or Bruce Sterling because they are both smart scifi authors who are vocally skeptical of something like the singularity happening.
Whoever it is, in my opinion the best discussions would consist of people who share very similar worldviews yet strongly differ on some critical ideas. We don't need to see another religion debate, that is for sure.
‘cleaning my room’ is still abstract. If you decompose that into ‘pick up clothes off floor, then make my bed, then vacuum the carpet, …..’, then those are concrete tasks.
I may be overlooking something, but I'd certainly consider Robin's estimate of 1-2 week doublings a FOOM. Is that really a big difference compared with Eliezer's estimates? Maybe the point in contention is not the time it takes for super-intelligence to surpass human ability, but the local vs. global nature of the singularity event: the local event taking place in some lab, and the global event taking place in a distributed fashion among different corporations, hobbyists, and/or governments through market-mediated participation. Even this difference isn't that great, since some participants in the global scenario will make much greater contributions than others, which may look very similar to the local scenario; and vice versa, a lab may get help from a diffuse network of contributors over the internet. If the differences really are that marginal, then Robin's 'outside view' seems to approximately agree with Eliezer's 'inside view'.
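To see why even the "slow" end of those estimates still looks like a FOOM, here is a minimal sketch of the compounding arithmetic (the function name and the one-year horizon are my own illustration, not from either Hanson's or Yudkowsky's writing):

```python
def growth_factor(doubling_time_weeks: float, horizon_weeks: float) -> float:
    """Total multiplicative growth over the horizon, assuming steady exponential doubling."""
    return 2 ** (horizon_weeks / doubling_time_weeks)

# With 1-week doublings, one year (52 weeks) of growth:
print(growth_factor(1, 52))  # 2**52, roughly 4.5e15
# With 2-week doublings:
print(growth_factor(2, 52))  # 2**26, roughly 6.7e7
```

Either way, a year of sustained 1-2 week doublings multiplies capability by many orders of magnitude, which is why the disagreement seems to be more about locality than speed.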
There is a recent trend of ‘serious games’ which use video games to teach and train people in various capacities, including military, health care, management, as well as the traditional schooling. I see no reason why this couldn’t be applied to rationality training.
I always liked adventure style games as a kid, such as King’s Quest or Myst, and wondered why they aren’t around any more. They seemed to be testing rationality in that you would need to guide the character through many interconnected puzzles while figuring out the model of the world and how best to achieve the goals of the protagonist. It seems like the perfect video game genre for both developing and testing rationality skills.
Specifically, I’ve thought of a microcosm of the real world, taking place in a different setting yet similar enough to our real world that there would be analogues to religion, science, politics, etc. As you progress through the game, say from child to adult, you learn about the world and see how different beliefs and strategies affect the game. Players would encounter challenges similar to those of the real world but be disconnected enough not to put up a defense mechanism, yet involved enough to care about the outcome. Add MMO et al features to taste.
“It is useless to attempt to reason a man out of a thing he was never reasoned into.” (Jonathan Swift)
The post was supposed to be in the spirit of the many self-improvement posts regarding akrasia, rationality, etc. It seemed logical that managing your information is an important component, along with the rest of the mental hygiene practices discussed here. If I was mistaken, I apologize.
The Cynic’s Theory may in fact describe a true state of mind, but it is not describing akrasia. The Cynic’s Theory might better describe those minds whose preferences are shaped by exterior influences that conflict with their internal, consciously hidden preferences. An example may be someone who always thought they wanted to be a doctor but deep down knew they wanted to be an artist.
However, when I think of akrasia, I don’t think of incompatible goals or hidden preferences; I think of compatible goals but an inability to consciously exert willpower in achieving the agreed-upon goals. When you finally stop procrastinating and get going, you feel wonderful and wonder why you couldn’t have done it sooner, but then you go through the same problem again the next time. Akrasia is a problem of forming and eliminating automatic behaviors, a.k.a. habits. So, in my opinion, the Cynic’s Theory does not shed any light on the problem of akrasia.
“How an algorithm feels from inside” discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.
Warren Buffett seems to fit all the criteria of the counterexample Eliezer asked for. And if you doubt the fanaticism of his fandom, just look over some videos of his annual shareholders’ meeting/convention.
In group #2, where everybody at all levels understands all tactics and strategy, they would all understand the need for a coordinated, galvanized front, and so would figure out a way to designate who takes orders and who gives them, because that is the rational response. The maximally optimal response might be a self-organized system where the lines are blurred between those who give the orders and those who follow them, alternating in round-robin fashion or by some other protocol. That boils down to a technical problem in operations research or systems engineering.
On another note, sometimes the most rational response for ‘winning’ will conflict with our morality, or at least our emotions. Von Neumann advocated a first-strike response against the Soviets early on, and he might have been right. Even if his was the most rational decision, you can see the tangle of problems associated with it. What if winning means losing a part of you that was a huge part of the reason you were fighting in the first place?
We don’t want to create a new religion, but whatever we create to take its place needs to offer at least as much as that which it replaces, so we might end up actually needing a new ‘religion’ whether we like it or not. If indeed there is a biological predisposition for humans to want to engage in ‘worship’, then we might as well worship rationally. I hesitate to call this new organization a religion or the practice worship, since those are the very things being replaced, but those words get my idea across.
How about we create a church-like organization that has local congregations and meets weekly to listen to talks on rationality, the latest scientific discoveries, lectures on philosophy, the state of the world, etc.? These don’t need to lack beauty or awe. A weekly dose of the unimaginable beauty of biology, or astrophysics, or even economics, in a shared setting, would surely add value to my life. A ‘bible study’ about Fermi’s paradox would have made my day as a child. We can tug on the emotions as much as traditional religions do, without being irrational.
And the missionary arm would maintain the rationality of the ‘church’. If the Catholic Pope denounces condoms in Africa, then our ‘church’ goes one further and starts a viral campaign to not only spread the reasons why the Pope is wrong, but gets creative and sets up condom donations or incentive structures to promote their use, or whatever.
I know there are many organizations that promote skepticism, secular humanism, reason, enlightenment, etc. but don’t know if they are widespread, have local chapters that meet regularly, or have much of a following.
And yes, ‘canonizing’ the vast information to make it more accessible would help a lot.
UPDATE: In regard to the post wondering how this all would be different from the atheist groups and other such organizations that currently exist, well, that is the rub, isn’t it. Those have the right idea but aren’t successful... how can we make one succeed? Or, can we prove that one can’t succeed, so as to not waste any more time on it?
I like EY’s writings, but don’t hold them up as gospel. For instance, I think this guy’s summary of Bayes Theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY’s much longer (http://yudkowsky.net/rational/bayes) essay.
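For readers who want the formula itself rather than either essay's prose, here is a minimal sketch of Bayes' Theorem as code; the illustrative numbers are the classic mammography example (1% base rate, 80% true-positive rate, 9.6% false-positive rate), which I believe both write-ups use, though the function name is my own:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A positive test moves a 1% prior to only about a 7.8% posterior,
# which is the counterintuitive result both essays build on.
print(posterior(0.01, 0.8, 0.096))  # roughly 0.078
```

The whole insight of the "intuitive" explanations is visible in the denominator: the flood of false positives from the 99% healthy population swamps the true positives.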
How does an infovore manage information overload?
An alternative to making things fun is to make things unconscious and/or automatic. No healthy individual complains about insulin production because their pancreas does it for them unconsciously, but diabetic patients must actively intervene with unpleasant, routine injections. One option would be to make the injections less unpleasant (make the process fun and/or less painful), but a better option would be to bring them in line with non-diabetic people and make the process unconscious and automatic again.
Is your pursuit of a theory of FAI similar to, say, Hutter’s AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I’m still not 100% certain of your goals at SIAI.