I sometimes give a brief presentation of rationality to acquaintances, and I often stress the importance of being able to change your mind. In the Sequences, this is often illustrated by thought experiments, which sound a bit contrived when taken out of context, or by far-reaching life choices, which sound too remote and dramatic for explanatory purposes.
I don’t encounter enough examples of day-to-day application of instrumental rationality, the experience of changing your mind, rather than the knowledge of how to do it. Your post has short glimpses of it, and I would very much enjoy reading a more in-depth description of these experiences. You seem to notice them, which is a skill I find very valuable.
On a more personal note, your post nudges me towards “write more things down”: I should track when I do change my mind. In other words, follow more of the rationality checklist’s advice. I’m too often frustrated by failing to notice things. So, thanks for this nudge!
Thanks for your clarification. Even though we can’t rederive Intergalactic Segways from unknown strange aliens, could we derive information about those same strange aliens by looking at the Segways? I’m reminded of some SF stories about this, and of our own work figuring out prehistoric technology...
Thanks again for this piece. I’ll follow your daily posts and comment on them regularly!
I have a few clarification questions for you:
if an AGI could quasi-perfectly simulate a human brain, with human knowledge encoded inside it, would your utility function be satisfied?
is understanding all there is to the utility function? What would the AGI do, once it is able to model precisely the way humans encode knowledge? If the AGI has the keys to the observable universe, what does it do with them?
Thanks for your post. Your argumentation is well-written and clear (to me).
I am confused by the title, and the conclusion. You argue that a Segway is a strange concept, that an ASI may not be capable of reaching by itself through exploration. I agree that the space of possible concepts that the ASI can understand is far greater than the space of concepts that the ASI will compute/simulate/instantiate.
However, you compare this to one-shot learning. If an ASI sees a Segway a single time, would it be able to infer what it does, what it’s for, how to build it, etc.? I think so! The purpose of one-shot learning models is to provide a context, a structure, that can be augmented with a new concept based on a single example. This is far simpler than coming up with that new concept from scratch.
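To make that concrete, here is a toy sketch of the one-shot idea (the feature vectors and labels are entirely hypothetical, not a real model): a pre-learned embedding space provides the “context”, and a single labelled example is enough to fold a new concept into it.

```python
# Toy one-shot classification by nearest class prototype.
# All vectors below are made-up illustrations, not real embeddings.

def classify(example_vec, prototypes):
    """Return the label of the nearest prototype (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(example_vec, prototypes[label]))

# Known concepts, each summarised by one embedding vector: the "structure".
prototypes = {
    "bicycle": [1.0, 0.0, 0.2],
    "car":     [0.0, 1.0, 0.9],
}

# One-shot step: a single "segway" example adds a new concept to the space.
prototypes["segway"] = [0.8, 0.1, 0.6]

# A new observation is now classified against the augmented concept space.
print(classify([0.75, 0.15, 0.55], prototypes))  # → segway
```

The hard part, of course, is learning the embedding space in the first place; once it exists, absorbing a new concept from one example is cheap, which is the asymmetry the comment points at.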
See, on efficient use of sensory data, That Alien Message.
I interpret your post as “no, an ASI shouldn’t build the telescope, because it’s a waste of resources and the ASI wouldn’t even need it”, but I’m not sure this was the message you wanted to send.
I’ll be there. As I said in the sister post on LW1.0:
The community weekend of 2017 was one of my best memories from the past year. Varied and interesting activities, broad ranges of topics, tons of fascinating discussions with people from diverse backgrounds. Organizers are super friendly.
One very, very important point: people there cooperate by default. Communication is easy, contributing is easy, getting help is easy, feedback is easy, learning is easy. Great times and productivity. And lots of fun!
Entirely worth it.
The Community Weekend of 2017 was one of the highlights of my past year. I strongly recommend it.
Excellent discussions, very friendly organizers, awesome activities.
Hi! Was this a test post?
Winners have just been announced here.
I’ll be blunt. Until this second post, there was a negative incentive for people on this site to comment on your first post: the expected reaction was to downvote it to hell without bothering to comment. Now that this second post clarifies the context of the first, I’d still downvote the first, but I’d comment.
I read the first post three times before downvoting. I substituted words. I tried to untie the metaphor. Then I came to two personal conclusions:
You offered us a challenge, ordering us to play along, with no reward and at a cost to us. HPMOR provided dozens of chapters of entertaining fiction before the Final Exam. You just posted once and expected effort.
You impersonate an ASI under very, very specific underlying hypotheses. An ASI that would blackmail us? Fair enough; that would be a variant of Roko’s Basilisk. But your Treaty is not remotely close to how I expect an ASI to behave. As you state, the ASI makes all the important decisions, so why would it bother simulating a particular scenario involving human rights?
The first post was confusing, your second post is still confusing, and neither fits the posting guidelines. You are not an ASI. Roleplaying an ASI leads to all sorts of human bias. I downvoted your two posts because I do not expect anyone to be better equipped to think about superintelligences after reading them. That’s it.
Thanks for this post.
I’m not sure what your central point is, the one you announce at the start of the post. I understood that life contains a spectrum of situations we have more or less control over, that perfect control or a perfect lack of control all the time is not desirable, and that we ought to have a wide range of experiences along that dimension to have enjoyable lives.
Did I miss something? Can you clarify your conclusions?
Hi Raemon! This is a topic I’m very bad at writing structured answers about, and much better at chatting about, because there are tons of things to say and I’d need more details to know how to steer my advice.
That being said, I recommend this repository of resources, aimed at people with a tech background but not necessarily a math one. Reading through some of the guides there should help you solve some of your last-section questions.
I’d say that staying up to date on AI developments with a goal of AI safety is entirely tractable as long as you’re not looking for the particular techniques that will lead to unsafe AI. Most AI literature is entirely disconnected from AI safety concerns, and if you dive into the field enough, you will become proficient enough to understand the papers that are relevant to safety concerns.
Cute little ML projects almost always have hidden depths if you’re dealing with real-world data. I suggest trying them after tutorials, not as tutorials, so that you’ll be able to split whatever you’re trying to do into manageable chunks (and understand why things fail or succeed).
I wish you the best for your endeavor!
Drop in average temperature over the last few centuries, with a minimum around 1650.
Colder winters, change in crops produced, more glaciers.
Half a Kelvin (half a degree Celsius).
Up to five degrees by the end of the century.
See this and that. What is your point?
I agree with your points. To restate my question: what extra insights does your model provide, compared to, for example, an ever-updating Maslow’s hierarchy of needs?
What you are describing, as far as I can understand, is that we are adaptation executers. The List is everything we want, whether hardcoded into our biology, or expressed by our minds. Yes, it updates.
I’d also appreciate, as other commenters pointed out, knowing what kinds of predictions your model can make about human behavior.
I think you wanted to link to this recent essay by François Chollet (AI researcher and designer of Keras, a well-known deep learning framework). The essay has also been discussed on Hacker News and on Twitter.
I’m currently writing an answer to this one. I think it would be beneficial to have extra material about intelligence explosion which is disconnected from the “what should be done about it” question, which is so often tied to “sci-fi” scenarios.
Not speaking for Christian here. Personally, I can’t steelman suggestions that have never been defended. I see the point of steelmanning as trying in good faith to build the strongest version of an opposing view, and then criticizing it. However, to do that, I need material! I need the voice of a proponent, I need something stable and sound to argue against.
I don’t want to generate arguments for your idea, since I would probably misrepresent it and build a strawman out of speculation, even if I work in good faith. This is why I need your voice!
As I’m the one being answered here, a bit of context: a long discussion started on the #philosophy channel of the Slack group. For reasons irrelevant to the present discussion, I’m continuing the exchange on the website linked above.
I’m currently writing an answer to this. I do not claim to represent the LW community, though I’m trying my best to reflect the broad concepts and reasoning outlined in the Sequences, notably. Please correct any blatant inaccuracies in my prose, if you think it worthwhile.
This post describes the state of underconfidence: assigning a lower probability to events than their actual frequency. The event here is “being right” or “being competent enough to do X”. Yes, if people think they’re wrong in situations where they are actually right, they will waste time seeking advice and/or help. Here, self-confidence is a good thing, because it brings them closer to correctly evaluating themselves.
Conversely, if they are overconfident, they will waste time making errors and taking on more responsibility than they should. There, self-confidence is a bad thing, because they are too sure of themselves.
If you are placed in an uncertain situation and you want to ensure success, asking for help is a trade-off between the cost of asking and the expected gain from the information. Say you ask a “stupid question”. If its answer helps you figure out things outside your area of expertise, it was worth it, and not stupid.
If you are shamed for asking stuff, for wanting to learn, when you are outside your expertise, you are not the problem, unless you asked the wrong person, and there were ways to learn at lower cost.
Learning has a cost. Asking has a cost. You can skip both, rely only on your present knowledge, and act now, taking a risk to save resources.
Conversely, you could write a post titled “Asking for help as a time-saving tactic”… for the symmetric situations.