I just finished listening to the audiobook version of Rationality: From AI to Zombies. Many thanks to Yudkowsky and everyone else who was involved in making this book and the audiobook. I do not know who the reader of the audiobook is, but thanks all the same.
I am writing this comment as my way of praising this book. I will try to summarize what I have personally learned from it, in the hope that someone who was involved will read this post and feel some pride in having helped me in my self-improvement. But I am also writing this comment because I just want to express my thoughts after finishing the book.
I have not had any major change of mind, but I have had several minor ones, which might very well continue to grow.
Listening to Yudkowsky’s words has made me more confident, because he says many things that I already intuitively knew but could not properly explain myself, and could therefore not be sure I was right. I am still not 100% certain I am right, but I am more confident, and I believe that this is a good thing. Smart people should be confident. No, this is not hindsight bias, because:
I did not always instantly agree, so I do know the difference.
I have been actively introspecting since I was 12, so I know most of my brain’s tricks.
I never set out to be a rationalist. I don’t even remember having a pre-LessWrong concept for the word “rationalist”. There was just correct thinking and incorrect thinking, and obviously correct thinking is the way that systematically leads you to the truth, because how else would you measure correctness? Maybe this saved me from falling into some of the rationalist tropes that Yudkowsky warns about. Or maybe I avoided them because I have read too little science fiction. Or maybe it was because I looked at these types of tropes and saw an author who clung to the obviously wrong, but warm and fuzzy, idea that every human has the same number of skill points.
I wonder who sets out to be rational without having something specific they need rationality for. Maybe the same kind of people who identify as atheists? I am an atheist, but I don’t identify as such, because in my country this is mostly a non-issue.
I found LessWrong because my new boyfriend encouraged me to read here, and I actually got through the book because I like audiobooks.
The pre-LessWrong me was a truth seeker, and as such, I thought a lot about the Way as applied to truth-seeking. I had a crisis of faith several years ago, questioning the validity of science. But I never really thought about applying systematic reasoning to decisions under uncertainty. When, in my past, I was confronted with a decision which I did not know how to reason out, I used to deliberately hand over the decision to my feelings. Because, I reasoned, if I don’t know what is right anyway, I might as well spare myself the fight of going against my impulses. I hope that I can use what I have learned here to do better.
Another thing I have realized is that I am such a pushover for perceived social norms. I have noticed a significant mental shift in my brain, just from having someone in my ear who casually mentions many-worlds and cryonics as if these were the most normal things in the world. Intellectually I was already convinced; I already knew the right answer before listening to the book, but I still needed the extra nagging to get all of my brain on board with it. I think that this has been the single most important insight I got from the book.
One reason I have not tried to develop the art of rational decision making before is that I knew that I was not strong enough to counter my emotional preferences. But I was wrong. I now have one systematically applicable self-hack, and probably there are more out there to find. I have hope of being able to take charge of my motivation, and I have reasons to fight for control.
Current me is an aspiring effective altruist. I do not strive to be a perfect altruist, because I do have some selfish preferences that I do not expect to go away. But I am going to get my ass out of the comfortable bubble of “I can’t do anything anyway” and do something. Though I have not decided yet if I am going to take the path of earning to give, or if I should get directly involved in some project myself. I am looking into both ways.
Finally, here is one of my favorite quotes from the book:
I pause. “Well…” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”
I’m leaving this comment so that I can find my way back here in the future.
Would you mind writing a follow-up review about how you joined the rationalist/EA community? I’m interested to see how your journey progressed 🙂
I got into AI Safety. My interest in AI Safety lured me to a CFAR workshop, since it was a joint event with MIRI. I came for the Agent Foundations research, but the CFAR content turned out just as valuable. It helped me start to integrate my intuitions with my reasoning, through IDC and other methods. I’m still in AI Safety, mostly organising, but also doing some thinking, and still learning.
My resume lists all the major things I’ve been doing. Not the most interesting format, but I’m probably not going to write anything better anytime soon.
Resume—Linda Linsefors—Google Docs