moridinamael (Matt Freeman)
I strongly agree that this type of dynamic intelligence can be enhanced through training.
When humans are placed in a stressful situation for the first time, this is usually what happens by default:
Human enters stressful situation.
Human experiences a physiological stress response, e.g. sweating, stuttering.
Human says, “I am freaking out,” and loses all confidence in their ability to perform.
Loss of confidence leads to decreasing performance, and a vicious cycle of failure begins.
With practice / experience, a human can retrain themselves toward:
Human enters stressful situation.
Human experiences a physiological stress response, e.g. sweating, stuttering.
Human says, “I notice that I am experiencing a normal stress response. This is alright, and I will not let it affect my performance.”
Human performs well, and enters a cycle of increasing confidence.
For me, this is a hard-won observation. For example, there is a tendency to assume that some people are born “good public speakers.” I think it is more likely that there are simply people who are better at noticing their own physiological nervousness for what it is, and maintaining their mental composure despite it.
From my own subjective experience, once this ability is gained in one situational domain, it at least partially translates to other domains.
There is an organization at my university called Replant. Every year since 1991, students have participated in a massive campaign to plant trees. Last year, 1400 students were involved.
Like your suggestion of planting the seeds of rationality, this undertaking comes with pitfalls.
I’ve heard cynical/hilarious stories of Replant groups who go to the same location, several years in a row, dig up the dead trees they planted the year before, and plant new saplings in their place. The (rationalist) lesson here is that there are places where seeds won’t grow. Effort would be better spent elsewhere.
Also, as tends to happen with many in-groups, “Replant People” have acquired a reputation for being mildly self-righteous. I can see the same thing happening with rationalists trying to spread the dogma.
So, at the risk of straining the metaphor past the breaking point, planting the seeds of rationality is a great idea as long as you’ve found a nurturing environment in which to plant them, you can invest energy in guiding their maturation, and you don’t come off too smugly.
A friend and I thought of this a few months ago, and we actually recorded the first chapter. Then we calculated how much time the rest would take and concluded that we simply weren’t willing to make the investment.
That said, would you be open to collaboration, to defray the huge time requirements?
Towards an Algorithm for (Human) Self-Modification
Sure, alternating, or spreading the load among anybody who has a microphone and wants to contribute.
This would introduce issues of quality control—you might have to tell someone that their reading was just not very good.
The inconsistency between readers' tones and character voices might also be jarring.
On the other hand, a little variety might be a fun exercise. Has there ever been a “community audiobook?” (Cursory googling suggests not.) We could be the first. Anyway, I’m sure there are plenty of people who couldn’t stomach twenty-plus hours of reading into a microphone, but would be happy to do twenty minutes, or one hour.
No bridges burned—it’s your project, and if my friends and I had intended to do it, well, we would have done it. I agree with all of your objections. If you want this done “professionally” then you had better do it yourself.
If you change your mind and decide you want help, please do ask!
I agree. Actually, I do have at least two close friends who I would consider “very rational,” but we have known each other for so long that we can be blind even to one another’s irrationalities. You get used to your friends in the same way you get used to yourself. I think you need not just a community, you also need to meet new people who can look at things from new angles.
All this programming exercise really did was enable me to see various aspects of my life on paper, in a clinical and detached fashion, as if I were looking at the life of a stranger. From that perspective, what I needed to do seemed obvious, just as the solutions to other people’s problems are usually more obvious than the solutions to our own problems.
I was going to post Blindsight. I read a lot of sci-fi, and I have read no other work containing more genuinely mindblowing ideas per page.
This is one of the few fictional works that significantly and permanently changed my perspective.
I became aware of the elephant-and-rider metaphor a while ago, perhaps due to one of your posts. Since that time, I have attempted to take advantage of the insight by considering what else it could mean.
For example, the rider can “see farther,” but the elephant can perhaps “see more clearly what is nearby.” By this I mean only that feelings which have no obvious explanations often come from flashes of intuition about people, ideas or situations which your conscious mind would never have noticed.
In other words, the unconscious mind seems to be the seat of our various pattern-matching algorithms. That leads us to make logical errors at times, but it may also lead us to infer things about the motives or mental states of other humans, or give us a “gut feeling” that some situation is unsafe, when the conscious mind would otherwise have blissfully ignored the danger.
This isn’t really in contradiction to what you just wrote; the main idea is that half of training the elephant may be listening to the elephant.
If you play taboo with the word “goals” I think the argument may be dissolved.
My laptop doesn’t have a “goal” of satisfying my desire to read LessWrong. I simply open the web browser and type in the URL, initiating a basically deterministic process which the computer merely executes. No need to imbue it with goals at all.
Except now my browser is smart enough to auto-fill the LessWrong URL after just a couple of letters. Is that goal-directed behavior? I think we’re already at the point of hairsplitting semantic distinctions and we’re talking about web browsers, not advanced AI.
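As an aside, the “autofill” behavior described above really is just a few lines of deterministic code. Here is a minimal sketch (the history entries, visit counts, and ranking rule are all invented for illustration) of prefix matching over browsing history—no goals in sight:

```python
# Toy model of URL autofill: deterministic prefix matching over a stored
# history of (url -> visit count). All data here is invented.

def autofill(prefix, history):
    """Return the most-visited history URL starting with `prefix`, or None."""
    matches = [(visits, url) for url, visits in history.items()
               if url.startswith(prefix)]
    if not matches:
        return None
    return max(matches)[1]  # highest visit count wins

history = {"lesswrong.com": 120, "lwn.net": 3}
print(autofill("le", history))  # completes to the frequently visited site
```

Whether you call the ranking rule a “goal” or just a tiebreaker is exactly the kind of semantic hairsplitting I mean.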
Likewise, it isn’t material whether an advanced predictor/optimizer has goals, what is relevant is that it will follow its programming when that programming tells it to “tell me the answer.” If it needs more information to tell you the answer, it will get it, and it won’t worry about how it gets it.
This is interesting, but I would respond with two observations:
First, this story is supposed to invoke the idea that some AI we are attempting to box can figure out our own universe. Our universe is computable (to within the limits required for our current level of science). So as an allegory, it’s something we should be worried about.
Second, I like to think that some population of scientists in the story were pursuing the idea that the outer universe might not be computable. If they had turned out to be right, I have a feeling we still would have figured out how to get out of the box eventually. It would merely have taken more time.
There exists a particular cluster in thingspace which we call “living things” and we have invented a magical quality called “life” to apply to members of this cluster.
Viruses are peripheral members of the cluster, just like penguins are atypical birds, so there’s confusion. There’s also general confusion about which dimensions should be considered important, i.e. complexity, intelligence, etc.
Discussions of the meaning of life are confused for the same reason as discussions about the morality of coffee tables.
I think the use of both DALYs and dollars in the main article is worth talking about, in context of some of the things you have mentioned. Being a stupid human, I find that it is generally useful for me to express utility to myself in dollars, because I possess a pragmatic faculty for thinking about dollars. I might not bend over to pick up one dollar. I might spend a couple of hours working for $100. There isn’t much difference between one billion and two billion dollars, from my current perspective.
When you ask me how many dollars I would spend to avert the deaths of a million people, the answer can’t be any larger than the amount of dollars I actually have. If you ask me how many dollars I would spend to avoid the suffering associated with a root canal, it could be some noticeable percentage of my net worth.
When we start talking about decisions where thousands of DALYs hang in the balance, my monkey brain has no intuitive sense of the scope of this, and no pragmatic way of engaging with it. I don’t have the resources or power to purchase even one DALY-equivalent under my own valuation!
If the net utility of the universe is actually being largely controlled by infinitesimal probabilities of enormous utilities, then my sense of scale for both risk and value is irrelevant. It hardly matters how many utilons I attribute to a million starving people when I have only so much time and so much money.
I don’t know what, if anything, to conclude from this, except to say that it makes me feel unsuited to reasoning about anything outside the narrow human scope of likelihoods and outcomes.
This is not my original observation but I haven’t seen it mentioned yet in this discussion:
The reason a middle school or high school student feels awkward, disconnected and asocial may not be that he/she has anything at all wrong with them. In fact, the problem may just be that middle school and high school are horrible places which encourage human beings’ worst tendencies and stifle any opportunities for positive interaction and self-actualization.
If you feel awkward in the cafeteria at lunch time and you don’t know or like anyone around you, that’s because even James Bond would probably feel awkward in that situation. I think part of the perceived awkwardness comes from asking yourself what you should be doing and not finding an answer. There is no action you can take that will make that situation not somewhat awkward. As an adult I might try to strike up a conversation with a stranger, but do not forget that middle schoolers are not adults. If you could rely on middle schoolers to be affable and collegial, we wouldn’t remember those years as the worst of our lives.
I didn’t realize any of this until I grew up, and I’m not even sure if it would have been helpful for me to know. If you tell a prisoner that it’s okay, everything is fine after you get out of prison, that doesn’t really help them much. Maybe as a fix I would suggest that young people try to become members of groups not related to school, such as Scouts and martial arts schools and sports.
Might this be one of those instances where it is globally better for the annoyed party (non-US LWers) to self-modify to accept that everybody uses language from inside a cultural framework, rather than to request that the majority self-modify to implement not-really-well-specified “universal” norms for English?
As an American engineer I personally think we should all use S.I., but it doesn’t do any good to correct people who use English units, unless I take the full effort of convincing them that a consistent unit system is actually more powerful.
It doesn’t seem like this strategy will continue to be effective when you are no longer a young man. Is this a short-term strategy?
It doesn’t seem like this approach will yield stable and reliable companionship into old age.
There is no mention of the desire for offspring in this post. Historically, the point of sexual relationships has been offspring; the nominal “reason” for dating has been to find a suitable partner with whom to raise offspring.
Sorry if this post is unbearably quaint, but I can’t figure out why you’re even bothering with all this. I mean, save yourself the trouble, just remain celibate or use prostitutes.
I happen to be in the middle of Zen and the Art of Motorcycle Maintenance right now and I’m amused that this post popped up. It seems almost to be aimed directly at Pirsig, whose primary problem seems (so far) to be that his use of traditional rationality to critique traditional rationality leads to the breaking of his mind. I find myself saying to the book, “Dissolve the question,” each time Pirsig reaches a dilemma or ponders a definition, but instead he builds towering recursive castles of thought (often grounded in nothing more than intuition) that would be heavily downvoted if posted here.
That came off as more negative than I had intended, and yet I still mean it.
Letter after W as well.
Since magic in the HP universe has the property of not having to make sense, one could imagine a spell that simply makes guns not work, or that makes all projectiles move slowly, or that causes everyone within the area to miss what they aim at.
The ending battle of Deathly Hallows pretty much treats wands as if they were guns. You could edit the film to replace all the wands with guns and have very few instances where anything looked wrong. So far HP:MoR has made the magic feel more magical than that.
Has anyone created a deck for An Intuitive Explanation of Bayes’ Theorem?
If not, would there be a lot of interest in a deck for An Intuitive Explanation of Bayes’ Theorem? I feel like spaced repetition and continuous correction of my understanding is the only way I will actually become more Bayesian, rather than merely thinking it would be really cool to be more Bayesian.
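The cards would drill exactly the kind of update the essay walks through. For instance, its classic mammography problem (1% prevalence, 80% sensitivity, 9.6% false-positive rate) works out like this:

```python
# One worked Bayesian update of the kind such a deck would drill:
# the mammography example from "An Intuitive Explanation of Bayes' Theorem"
# (1% prevalence, 80% sensitivity, 9.6% false-positive rate).

def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity                    # P(positive & sick)
    false_pos = (1 - prior) * false_positive_rate     # P(positive & healthy)
    return true_pos / (true_pos + false_pos)

p = posterior(0.01, 0.80, 0.096)
print(round(p, 3))  # ≈ 0.078, i.e. only ~7.8% despite the positive test
```

A deck of cards varying the prior, sensitivity, and false-positive rate would force exactly the repeated correction I have in mind.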