I do AI Alignment research. Currently independent, but previously at: METR, Redwood, UC Berkeley, Good Judgment Project.
I’m also a part-time fund manager for the LTFF.
Obligatory research billboard website: https://chanlawrence.me/
Short Answer: It’s not.
Longer Explanation: The way I understand it, the universe of God feels safer because we think of God as being like us. In that world, there’s a higher being out there. Since we model that being as having similar motivations, desires, etc., we believe that God will also follow some sort of morality and subscribe to basic ideas of fairness. So He’ll be compelled to intervene if things get too bad.
The existence of God also makes you feel less responsible for your fate. For example, if He chooses to smite you, there’s nothing you can do. But in a universe of Math, no higher being is going to step in to hurt you or to save you; if you don’t take action, whatever happens is on you.
I think Manfred got most of the big ones already, but here are some others that have strong Altruistic/Rationalist/Learning-inspiring/Transhumanist themes:
Online Original Fiction: Many of the short stories by Seth Dickinson, particularly “Economies of Force” and “Sekhmet Hunts the Dying Gnosis: A Computation”. Ra, though it can be hard to follow at times. Various short stories written by the author of Ra, though some of these are quite confusing. Worm, though that’s horrendously long—maybe a couple of interludes? “The Fable of the Dragon-Tyrant”, by Nick Bostrom. Yvain’s “The Study of Anglophysics”. (Though this may inspire world-destroying behavior.)
Book-Fiction: Sophie’s World. Peter Watts’s Blindsight.
Fanfiction: “Involiate”, by Scrivener. “Veritas”, by ShaneT, though I can’t find it for some reason. Several of the OptimalVerse Fics, particularly “Friendship is Optimal: Caelum est Conterrens”.
Upvoted for clarity and relevance. You touched on the exact reason why many people I know can’t/won’t become EAs; even if they genuinely want to help the world, the scope of the problem is just too massive for them to care about accurately. So they go back to donating to the causes that scream the loudest, and turning a blind eye to the rest of the problems.
I used to be like Alice, Bob, and Christine, and donated to whatever charitable cause would pop up. Then I had a couple of Daniel moments, and resolved that whenever I felt pressured to donate to a good cause, I’d note how much I was going to donate and then donate to one of Givewell’s top charities.
I think generally there’s an addendum to the problem where if Omega sees you using a quantum randomness generator, Omega will put nothing in box B, specifically to prevent this kind of solution. :P
Also, how did you reach your $1,000,490 figure? If Omega just simulates you once, your payoff is: 0.51 × (0.51 × $1,000,000 + 0.49 × $1,001,000) + 0.49 × (0.51 × $0 + 0.49 × $1,000) = $510,490 < $1,000,000, so you’re better off one-boxing unless Omega simulates you multiple times.
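For concreteness, here’s a quick sketch of that arithmetic in Python (assuming, as in your setup, that the quantum randomizer tells you to one-box with probability 0.51 and that Omega runs exactly one simulation of you):

    # Expected payoff of deciding via a quantum coin that one-boxes with p = 0.51,
    # assuming Omega fills box B iff its single simulation of you one-boxes.
    p_one = 0.51
    p_two = 1 - p_one

    # If the simulation one-boxed, box B holds $1,000,000; otherwise it is empty.
    ev = p_one * (p_one * 1_000_000 + p_two * 1_001_000) \
       + p_two * (p_one * 0 + p_two * 1_000)

    print(round(ev, 2))  # 510490.0, well short of the $1,000,000 a reliable one-boxer gets

(More generally, with a single simulation the expected payoff works out to p × $1,000,000 + (1 − p) × $1,000, which is maximized at p = 1, so any amount of randomizing does worse than just one-boxing.)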
“… beware of false dichotomies. Though it’s fun to reduce a complex issue to a war between two slogans, two camps, or two schools of thought, it is rarely a path to understanding. Few good ideas can be insightfully captured in a single word ending with -ism, and most of our ideas are so crude that we can make more progress by analyzing and refining them than by pitting them against each other in a winner-take-all contest.”
Steven Pinker, on page 345 of The Sense of Style.
Not sure about the number in Freakonomics, but according to the Department of Transportation’s 2013 Memorandum, the department values a statistical life at $9.1 million (in 2012 dollars).
Thanks! Unfortunately I’m not sure if I’m good enough at math for an MIRI internship. Also, I don’t think there are any CFAR workshops in my area, especially any during break. :P
I’m not sure what I’ll do after college—I’ve looked through most of the 80,000 Hours career options, but still can’t decide between earning to give via quantitative trading/consulting/investment banking, tech entrepreneurship, and research.
Just completed my first survey!
Took the survey!
On a related topic, does anyone know where I can find a copy of Scott’s Quantitative Health Prize entry? The link on the Less Wrong page is broken.
Thanks!
Surely a team of engineers capable of developing AGI can be given some guidance in advance so that they are at least competent enough to instill a set of values as robust as the ones we attempt to instill in children?
The number of possible goals is HUGE compared to the relatively small subset of human goals. Humans share the same brain structure and general goal structure, but there’s no reason to expect the first AI to share our neural/goal structure. Innocuous-sounding goals like “Prevent Suffering” and “Maximize Happiness” may not be interpreted and executed the way we wish them to be.
Indeed, gaining superpowers probably would not compromise the AI’s moral code. It only gives it the ability to fully execute the actions dictated by the moral code. Unfortunately, there’s no guarantee that its morals will fall in line with ours.
This is the weakest part of the argument. Why should anybody believe that there is a super complicated function that determines what is ‘good’? … Our brains come equipped with a simple function that maps “is” statements to “ought” statements. Thus, we can reason about “ought” statements just like we do with “is” statements
I think the claim isn’t that there is a super-complicated function that determines what is ‘good’, but that the mapping from ‘is’ statements to ‘ought’ statements in the human brain is extremely complicated. If we claim that what is ‘good’ is whatever our brain considers ‘good’, though, we merely encapsulate this complexity in a convenient black box.
That’s not to say that it’s not a solution, though: have you looked into desire utilitarianism? What you’re proposing here is really similar to what (as I understand it) that school of moral philosophy claims. If you have time, Fyfe’s A Better Place is a good introduction.
I think a big part of the problem is that I have an irrational alief that makes me feel like my opinions are uniquely valuable and important to share with others. I do think I’m smarter, more moderate, and more creative than most.
I’ve had a similar problem. Every time I feel the impulse to argue, I try to remember that (in general) arguing won’t change their position. It depends on what you’re trying to achieve with your arguments. Are you trying to make the other person lose social status? Are you enjoying yourself by demonstrating your greater intelligence/moderation/creativity? Or are you trying to get them to change their mind? Because the last goal is probably not achievable through argument.
I’ve found that questions/experiments/bets tend to be better ways to settle disputes than arguments, especially when it comes to sensitive topics. It’s probably better to avoid meaningless debate that just enrages people.
This discussion also reminds me of Yvain’s In Favor of Niceness, Community, and Civilization.
I’ve been struggling with this problem as well—for example, one of my family members believes very strongly in ‘fate’ in the traditional fatalist sense, while several others are practicing Buddhists. Most of the time we have a tacit agreement to avoid these topics—because my beliefs probably look very bizarre to them as well, and it is unlikely that any of us will change our mind.
I would add to this list heritability, eugenics, intelligence, and other issues related to the intersection of the nature/nurture debate—Here’s a classic example. Here’s another. Meta-mind-kill might also be a real thing: I’ve managed to turn a recitation into a riot by pointing out that certain topics, like eugenics, lead to mind-kill situations.
On a tangential note, I wouldn’t trust college or formal education with this task. This knowledge is much too important to allow it to be degraded into just another class to score a passing grade on. At least so I have gathered from my experience with formal education at all levels in my country; maybe things are better at the Ivies or MIT or top international universities in general, but it’s a shame for society if one has to get to that percentile to have the full toolbox to think about the world around oneself.
Given this, how much knowledge acquisition should you leave to college? As a current college student I’ve been struggling with how much knowledge I should pick up on my own vs how much to get via courses. I’ve mainly focused on satisfying requirements with my courses while self-studying math/programming on my own, but I realize that this might not be the best use of resources.
I’ve definitely noticed this happening. However, I think this is more “brain generates a temporary ugh field for the textbook” than being bored with the subject—when I swap to a different textbook on the same subject, I can often understand what’s going on and do the exercises.
I tried making one just for the math behind rationality/decision theory back in October, but I never got around to finishing it. The main problems I ran into were:
Where should the skill tree start? I’m sure that basic topics like algebra, geometry, and trig are all really useful, but I’m not sure about the dependencies between them. I ended up lumping them all into “basic mathematics”.
How should the skill tree split subjects? Many subjects are best learned iteratively—for example, it’s probably best to get a rudimentary understanding of probability theory, then learn more probability theory later on once you’ve picked up other related subjects (Linear Algebra, Multivariate Calculus, etc.), and then again after more subjects (Measure Theory). The complication is that these other subjects are often split into different “levels”. I found that I didn’t have enough familiarity with math to split subjects naturally.
One method that seems promising is taking a bunch of textbooks/courses, and trying to figure out the dependencies between them.
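For example, here’s a rough sketch in Python of what that could look like (the book names and prerequisites below are placeholders I made up, not a real curriculum): write down each textbook’s prerequisites by hand, then topologically sort the graph to get a study order.

    # Toy prerequisite graph: each key maps to the books you'd want to read first.
    # graphlib.TopologicalSorter is in the standard library as of Python 3.9.
    from graphlib import TopologicalSorter

    prereqs = {
        "Basic Mathematics": set(),
        "Linear Algebra": {"Basic Mathematics"},
        "Multivariate Calculus": {"Basic Mathematics"},
        "Probability (intro)": {"Basic Mathematics"},
        "Probability (measure-theoretic)": {"Probability (intro)",
                                            "Multivariate Calculus"},
    }

    # static_order() yields the books so that every prerequisite comes first.
    print(list(TopologicalSorter(prereqs).static_order()))

This doesn’t solve the “learn the same subject at several levels” problem by itself, but treating each level as its own node (like the two probability entries above) is a passable workaround.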
gjm has mentioned most of what I think is relevant to the discussion. However, see also the discussion on Boltzmann brains.
Hi, everyone. I’m Lawrence and I’m a college freshman. I like to read, program, and do math in my spare time.
I grew up in the Bay Area with science and religion as my two ideals. My family was religious and went to church every Sunday, but at the same time they put a strong emphasis on learning science—by the time I was in fourth grade, the science books my parents bought me (and that I read) filled an entire bookshelf. I loved religion because I felt like it gave meaning to the world, teaching us to be kind and to respect one another. But, perhaps paradoxically, that made me love science as well, for science gave us medicine, technologies, and other ways to help the poor and heal the sick, things that God commanded us to do.
My faith in religion took a hit in 5th grade, when a close family member was diagnosed with cancer. Neither the prayers of our Christian friends nor the medicine of her doctors helped. We moved to China to pursue alternative treatment, but in the end nothing could save her, and she passed away. I pleaded with God to bring her back, to enact some miracle. No miracles happened. Some of our Christian friends told us that it was all God’s plan, and that she was with God now. But I remember asking myself: if God is so great, why did He cause us so much suffering?
I asked myself this question, and I found no answer. I read the Bible again and I looked online, but still, no answer. In fact, I found many arguments against the existence of God, and against my faith. Most famous scientists, I discovered, also didn’t believe that God existed. And so I slowly, painfully moved away from my faith.
Having turned myself away from God, I devoted myself to doing good in the world. I resolved to help end suffering, I told my family. They called me crazy. The suffering in the world wasn’t going to end itself, I retorted angrily. They were amused. After that, I weighed the options before me: either I could study science and maybe invent something that could help the world, or I could try to become rich and then donate my money to charities and researchers who could then help the world. I decided on the latter. So I set down my science books and picked up economics books and biographies.
However, I always felt there was more time. After all, I was making some money off my investments, I read a lot more books than most of my peers, and I had taught myself calculus by 8th grade. My classes were easy. I started slacking off. I stopped reading as many books as I used to. I am ashamed to say this, but I lost my ambition. It was only through a combination of talent, prior knowledge, and luck that I managed to make it all the way through middle and high school.
I discovered LessWrong around December of last year, through HPMOR. I quickly tore through all the sequences in less than three months. Boy, did it have an effect. The things said here resonated with me. After reading Challenging the Difficult, I realized how far I had to improve, and how complacent I had become. After How to Actually Change Your Mind, I looked out at the world and saw how many problems there were to fix. After reading My Coming of Age, I felt that spark again, the will to do good in the world and to fight against poverty, ignorance, and death.
LessWrong made me panic, because it gave me a sense of how great these problems are. It also gave me hope, because it showed me a path to self-improvement. It was the first time I felt truly awed and outclassed, but also really motivated. Truly, there would be no god to save us. If we don’t work hard enough, if we aren’t smart enough, we can and will die.
Today I’m trying to improve myself. I’ve been doing two hours of math a day—I am almost done with multivariate calculus and am looking to begin probability theory soon. I finished a course on R a while ago and am halfway through Learn You a Haskell For Great Good. Like Harry at the end of HPMOR, I am climbing the power ladder, albeit from very far down.
People ask me sometimes, what motivates you? Why don’t you go out and have fun? And to them I reply with a quote from John Donne. “Any man’s death diminishes me, because I am involved in Mankind; And therefore never send to know for whom the bell tolls; it tolls for thee.”
I am involved in mankind. I’m going to fight for it, and I’m not going to give up until we reach the stars or die trying. It’s not going to be easy. I know it’s not. But it’s not a fight we can give up on.
I look forward to contributing here!