I would like to say thanks to everyone who helped me out in the comments here. You genuinely helped me. Thank you.
Robin, or anyone who agrees with Robin:
What evidence can you imagine would convince you that AGI would go FOOM?
Regarding point 2, while it would be epistemologically risky and borderline dark arts, I think the idea is more about what to emphasize and openly signal, not what to actually believe.
Seeing this come together and implemented gives me hope that the sanity waterline actually can be raised. I can imagine future history books saying, "The early 21st century was the time of the Rationalist Movement, the greatest cultural rebirth in history." I am lucky to bear witness to what you people are going to accomplish.
Hi, I am Raiden. For most of my life I have been an aspiring rationalist, even though I didn't call myself by that name. I was raised to think that I was some sort of super genius (it was a big shock in my later elementary school years to discover that I wasn't the smartest person in the world). This caused me to associate part of my identity with intelligence, and it led me to be a traditional rationalist; I had much admiration for the Spock stereotype, and I have been an atheist since childhood despite a fundamentalist religious family. In my freshman year of high school, I was exposed to some self-help books that led me to seriously consider virtues other than intelligence to be of value. This slowly revolutionized my view of the world.
Over the course of the next summer, I was exposed to the philosophy of Objectivism and quickly became a strong adherent. From the beginning I agreed with the "Open Objectivist" camp, which holds that Objectivism is not a complete philosophy. I agree that Objectivism descended into some sort of cult, and that Ayn Rand was one of history's greatest hypocrites, but I came to believe that this doesn't undermine the soundness of the philosophy itself. Over time, though, the philosophy began to lose its grip on my mind. I still consider myself some sort of Neo-Objectivist, however, as many of Rand's ideas shape my opinions.
Very recently I discovered Less Wrong and was exposed to its version of rationality, which I came to wholeheartedly adore. So far I have at least skimmed the Sequences, and I believe I have a basic understanding of rationality. My goal right now is to read and absorb all of the Sequences and then some rationality-related textbooks. With a fundamental understanding of rationality down, I will re-examine all of my important beliefs, from philosophy to politics to religion. After I come to a better understanding of rationality and the world, I will decide on goals and values and systematically work toward them. I also plan to contribute to Less Wrong.
I am sixteen, so please don't discriminate against me based on that. I consider myself far more mature than most people my age, and far more mature than I was even a few months ago. I am currently recovering from what can only be called an existential crisis, but in my outward behavior I am perfectly stable and sociable. Deep down I have a burning desire to know the truth. In my opinion, that is one of the greatest measures of one's character.
It fits with the idea of the universe having an orderly underlying structure. The simulation hypothesis is just one way that can be true. Physics being true is another, simpler explanation.
I always thought that the "most civilizations just upload and live in a simulated utopia instead of colonizing the universe" response to the Fermi Paradox was obviously wrong, because it would only take ONE civilization breaking the trend to be visible, and regardless of what the aliens are doing, a galaxy of resources is always useful to have. But I was reading somewhere (I don't remember where) about an interesting idea: a super-Turing computer that could calculate anything, regardless of time constraints and ignoring the halting problem. I think the proposal was to use closed timelike curves or something.
This, of course, seemed very far-fetched, but the implications are fascinating. It would be possible to use such a device to simulate an eternity in a moment. We could upload and have an eternity of eudaimonia without ever having to worry about running out of resources, the heat death of the universe, or alien superintelligences. Even if the computer were destroyed an instant later, it wouldn't matter to us. If such a thing were possible, it would be an obvious solution to the Fermi Paradox.
Why?
I agree with you; I just don't like articles with a one-word command and no explanation.
Neural networks may very well turn out to be the easiest way to create a general intelligence, but whether they’re the easiest way to create a friendly general intelligence is another question altogether.
My current view is that most animals are not people, in the sense that they are not subjects of moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner we can sympathize with, but they still seem to be running a program that is "below" that of humans. I feel that "reacts to pain" does not equal "worthy of moral consideration." The only exceptions in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
I’m at that point in life where I have to make a lot of choices about my future life. I’m considering doing a double major in biochemistry and computer science. I find both of these topics to be fascinating, but I’m not sure if that’s the most effective way to help the world. I am comfortable in my skills as an autodidact, and I find myself to be interested in comp sci, biochemistry, physics, and mathematics. I believe that regardless which I actually major in, I could learn any of the others quite well. I have a nagging voice in my head saying that I shouldn’t bother learning biochemistry, because it won’t be useful in the long term because everything will be based on nanotech and we will all be uploads. Is that a valid point? Or should I just focus on the world as it is now? And should I study something else or does biochem have potential to help the world? I find myself to be very confused about this subject and humbly request any advice.
I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn’t. Right now I feel as though what separates person from nonperson is totally arbitrary.
It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It’s like “person” is an unsound concept that cannot be organized into an internally consistent system. Heck, I’m actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.
But it’s not obvious to me that your initial mix of evolved and encultured values even deserves this preferential treatment.
But your initial mix of evolved and encultured values is all you have to go on. There is no other source of values or intuitions. Even if you decide that you disagree with a value, you're using other evolved or encultured intuitions to decide this. There is literally nothing you can use except these. A person who abandons their religious faith after some thought is using the value "rational thought" against "religious belief." This person was lucky enough to have "rational thought" instilled by someone as a value, and to have it be strong enough to beat "religious belief." The only way to change your value system is by using your value system to reflect upon your value system.
I've noticed that a lot of my desire to be rational is social. I was raised as the local "smart kid" and continue to feel associated with that identity. I get all the stuff about how rationality should be approached like "I have this thing I care about, and therefore I become rational to protect it," but I just don't feel that way. I'm not sure how I feel about that.
Of the three reasons to be rational that are described, I'm most motivated by the moral reason. This is probably because of the aforementioned identity. I feel very offended by anything I perceive as "irrational" in others, kinda like it's an attack on my tribe. This has negative effects on my social life and makes me arrogant toward others. Does anybody have any advice for that?
Would a boxed AI be able to affect the world in any important way using the computer hardware itself? Like, make electrons move in funky patterns or affect air flow with cooling fans? If so, would it be able to do anything significant?
-25: It briefly occurs to me to think about a generic post on LW.
The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.
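For concreteness, here is a minimal sketch of that idea as a single Bayesian update (my own illustration, with made-up numbers): a belief sits somewhere on the 0-to-1 continuum, and evidence slides it around rather than flipping a binary switch.

```python
# A minimal sketch of "belief as a probability on a continuum":
# one Bayesian update. All numbers here are illustrative assumptions.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start at 30% confidence, then observe evidence that is four times
# more likely if the hypothesis is true (0.8 vs 0.2).
posterior = bayes_update(0.3, 0.8, 0.2)
print(f"{posterior:.3f}")  # 0.632: the belief strengthens, but never jumps to 0 or 1
```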