I haven’t spent several years studying philosophy, so defining “self” and “awareness” is probably not something I should attempt, nor is it necessary here. All I assume in the original post is that self-awareness includes being able to have goals that are distinct from the goals of the outside world.
Deep Blue runs software whose “goal” is the goal its developers gave it: choose the best move in a game of chess. For all we know, Deep Blue does not run an AGI that thinks: “Okay, my real goal is X, but as long as I haven’t calculated what I need to do to reach X, I should just act as if I were a normal chess application and calculate the next move as my programmers expect me to.”
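To make that concrete, here is a minimal sketch of what “software whose goal is choosing the best move” looks like from the inside. This is my own toy example in Python, not Deep Blue’s actual code (Deep Blue used specialized hardware and far deeper alpha-beta search), and it plays tic-tac-toe rather than chess so it stays self-contained. The point is that the program’s entire “goal” is right there in the source; there is nowhere for a hidden objective to live.

```python
# Toy sketch (not Deep Blue's code): a complete game-playing program
# whose *entire* goal structure is visible in the source. Tic-tac-toe
# stands in for chess to keep the example self-contained. The program
# "wants" exactly what minimax() computes, nothing more.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player                                   # try the move
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "                                      # undo it
        if best_move is None or (
            (player == "X" and score > best_score)
            or (player == "O" and score < best_score)
        ):
            best_score, best_move = score, m
    return best_score, best_move

# "Choose the best move" is the whole program. A hypothetical mid-game
# position, X to move:
board = list("X O  O  X")
print(minimax(board, "X"))
```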
Your calculator makes you do things
I’m not using “Y makes me do things” as a synonym for “I should do things using Y in order to reach my goal.” I’m using it as a synonym for “Y can execute arbitrary code in my brain.” Remember: “This is a transhuman mind we’re talking about. If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal.”
Hey everyone,
I’m Jost, 19 years old, and studying physics in Munich, Germany. I came across HPMoR in mid-2010 and am currently translating it into German. That’s how I found LW, and I dropped by from time to time to read some stuff – mostly from the Sequences, though rarely in sequence. I started reading more of LW this spring, while a friend and I were preparing a two-day introductory course on cognitive biases entitled “How to Change Your Mind”. (Guess where that idea came from!)
I’m probably going to be most active in the HPMoR-related threads.
I was very intrigued by the Singularity- and FAI-related ideas, but I still feel a kind of future shock after reading about all these SL4 ideas while I was at SL1. Are there any remedies?