German writer of science-fiction novels and children’s books (pen name Karl Olsberg). I blog and create videos about AI risks in German at www.ki-risiken.de and youtube.com/karlolsbergautor.
Karl von Wendt
As a professional novelist, the best advice I can give comes from one of the greatest writers of the 20th century, Ernest Hemingway: “The first draft of anything is shit.” He was known to rewrite his short stories up to 30 times. So, rewrite. It helps to let some time pass (at least a few days) before you reread and rewrite a text. This makes it easier to spot the weak parts.
For me, rewriting often means cutting things out that aren’t really necessary. That hurts, because I have put some effort into putting the words there in the first place. So I use a simple trick to overcome my reluctance: I don’t just delete the text, but cut it out and paste it into a separate document for each novel, called “cutouts”. That way, I can always reverse my decision to cut things out or maybe reuse parts later, and I don’t have the feeling that the work is “lost”. Of course, I rarely reuse those cutouts.
I also agree with the other answers regarding reader feedback, short sentences, etc. All of this is part of the rewriting process.
I think the term has many “valid” uses, and one is to refer to an object-level belief that things will likely turn out pretty well. It doesn’t need to be irrational by definition.
Agreed. Like I said, you may have used the term in a way different from my definition. But I think in many cases, the term does reflect an attitude like I defined it. See Wikipedia.
I also think AI safety experts are self-selected to be more pessimistic.
This may also be true. In any case, I hope that Quintin and you are right and I’m wrong. But that doesn’t make me sleep better.
From Wikipedia: “Optimism is an attitude reflecting a belief or hope that the outcome of some specific endeavor, or outcomes in general, will be positive, favorable, and desirable.” I think this is close to my definition or at least includes it. It certainly isn’t the same as a neutral view.
Thanks for pointing this out! I agree that my definition of “optimism” is not the only way one can use the term. However, from my experience (and like I said, I am basically an optimist), in a highly uncertain situation, the weighing of perceived benefits vs. risks heavily influences one’s probability estimates. If I want to found a start-up, for example, I convince myself that it will work, unconsciously weighing positive evidence higher than negative. I don’t know whether this kind of focusing on positive outcomes has influenced your reasoning and your “rosy” view of the future with AGI, but it has happened to me in the past.
“Optimism” certainly isn’t the same as a neutral, balanced view of possibilities. It is an expression of the belief that things will go well despite clear signs of danger (e.g. the often-expressed concerns of leading AI safety experts). If you think your view is balanced and neutral, maybe “optimism” is not the best term to use. But then I would have expected many more caveats and expressions of uncertainty in your statements.
Also, even if you think you are evaluating the facts in an unbiased, neutral way, there’s still the risk that others who read your texts will not, for the reasons I mention above.
Defined well, dominance would be the organizing principle, the source, of an entity’s behavior.
I doubt that. Dominance is the result, not the cause, of behavior. It comes from the fact that there are conflicts in the world, and often only one side can get its way (even in a compromise, there’s usually a winner and a loser). If an agent strives for dominance, it is usually an instrumental goal for something else the agent wants to achieve. There may be a “dominance drive” in some humans, but I don’t think that explains much of actual dominant behavior. Even among animals, dominant behavior is often a means to an end, for example getting the best mating partners or the largest share of food.
I also think the concept is already covered in game theory, although I’m not an expert.
That “troll” runs one of the most powerful AI labs and freely distributes LLMs on the internet that were state-of-the-art half a year ago. This is not just about someone talking nonsense in public, like Melanie Mitchell or Steven Pinker. LeCun may literally be the one who contributes most to the destruction of humanity. I would give everything I have to convince him that what he’s doing is dangerous. But I have no idea how to do that if even his former colleagues Geoffrey Hinton and Yoshua Bengio can’t.
I think even most humans don’t have a “dominance” instinct. The reasons we want to gain money and power are also mostly instrumental: we want to achieve other goals (e.g., as a CEO, getting ahead of a competitor to increase shareholder value and do a “good job”), impress our neighbors, be admired and loved by others, live in luxury, distract ourselves from other problems like getting older, etc. There are certainly people who want to dominate just for the feeling of it, but I think that explains only a small part of actual dominant behavior in humans. I myself have been the CEO of several companies, but I never wanted to “dominate” anyone. I wanted to do what I saw as a “good job” at the time, achieving the goals I had promised our shareholders I would try to achieve.
Thanks for pointing this out! I should have made it clearer that I did not use ChatGPT to come up with a criticism, then write about it. Instead, I wanted to see if even ChatGPT was able to point out the flaws in LeCun’s argument, which seemed obvious to me. I’ll edit the text accordingly.
Like I wrote in my reply to dr_s, I think a proof would be helpful, but probably not a game changer.
Mr. CEO: “Senator X, the assumptions in that proof you mention are not applicable in our case, so it is not relevant for us. Of course we make sure that assumption Y doesn’t hold when we build our AGI, and assumption Z is pure science fiction.”
What the AI expert says to Xi Jinping and to the US general in your example doesn’t rely on an impossibility proof in my view.
I agree that a proof would be helpful, but probably not as impactful as one might hope. A proof of impossibility would have to rely on certain assumptions, like “superintelligence” or whatever, that could also be doubted or called sci-fi.
I have strong-upvoted this post because I think that a discussion about the possibility of alignment is necessary. However, I don’t think an impossibility proof would change very much about our current situation.
To stick with the nuclear bomb analogy, we already KNOW that the first uncontrolled nuclear chain reaction will definitely ignite the atmosphere and destroy all life on earth UNLESS we find a mechanism to somehow contain that reaction (solve alignment/controllability). As long as we don’t know how to build that mechanism, we must not start an uncontrollable chain reaction. Yet we just throw more and more enriched uranium into a bucket and see what happens.
Our problem is not that we don’t know whether solving alignment is possible. As long as we haven’t solved it, this is largely irrelevant in my view (you could argue that we should stop spending time and resources on trying to solve it, but I’d argue that even if it were impossible, trying to solve alignment can teach us a lot about the dangers associated with misalignment). Our problem is that so many people don’t realize (or admit) that there is even a possibility of an advanced AI becoming uncontrollable and destroying our future anytime soon.
That’s a good point, which is supported by the high share (92%) of participants prepared to change their minds.
I’ve received my fair share of downvotes, see for example this post, which got 15 karma out of 24 votes. :) It’s a signal, but not more than that. As long as you remain respectful, you shouldn’t be discouraged from posting your opinion in comments even if people downvote it. I’m always for open discussions as they help me understand how and why I’m not understood.
I agree with that, and I also agree with Yann LeCun’s intention of “not being stupid enough to create something that we couldn’t control”. I even think not creating an uncontrollable AI is our only hope. I’m just not sure whether I trust humanity (including Meta) to be “not stupid”.
I don’t see your examples contradicting my claim. Killing all humans may not increase future choices, so it isn’t a convergent instrumental goal in itself. But in any real-world scenario, self-preservation certainly is, and power-seeking (in the sense of expanding one’s ability to make decisions by taking control of as many decision-relevant resources as possible) is a logical necessity as well. The Russian roulette example is misleading in my view because the “safe” option is de facto suicide: if “the game ends” and the AI can’t make any decisions anymore, it is already dead for all practical purposes. If those were the stakes, I’d vote for the gun as well.
To reply in Stuart Russell’s words: “One of the most common patterns involves omitting something from the objective that you do actually care about. In such cases … the AI system will often find an optimal solution that sets the thing you do care about, but forgot to mention, to an extreme value.”
There are vastly more possible worlds that we humans can’t survive in than ones we can, let alone live comfortably in. Agreed, “we don’t want to make a random potshot”, but making an agent that transforms our world into one of those rare worlds we want to live in is hard, because we don’t know how to describe that world precisely.
Eliezer Yudkowsky’s rocket analogy also illustrates this very vividly: If you want to land on Mars, it’s not enough to point a rocket in the direction where you can currently see the planet and launch it. You need to figure out all kinds of complicated things about gravity, propulsion, planetary motions, solar winds, etc. But our knowledge of these things is about as detailed as that of the ancient Romans, to stay in the analogy.
I’m not sure if I understand your point correctly. An AGI may be able to infer what we mean when we give it a goal, for instance from its understanding of the human psyche, its world model, and so on. But that has no direct implications for its goal, which it has acquired either through training or in some other way, e.g. by us specifying a reward function.
This is not about “genie-like misunderstandings”. It’s not the AI (the genie, so to speak) that’s misunderstanding anything; it’s us. We’re the ones who give the AI a goal or train it in some way, and it’s our mistake if that doesn’t lead to the behavior we would have wished for. The AI cannot correct that mistake because it has the instrumental goal of preserving the goal we gave it or trained it for (otherwise it can’t fulfill it). That’s the core of the alignment problem and one of the reasons why it is so difficult.
To give an example, we know perfectly well that evolution gave us a sex drive because it “wanted” us to reproduce. But we don’t care and use contraception or watch porn instead of making babies.
the orthogonality thesis is compatible with ludicrously many worlds, including ones where AI safety in the sense of preventing rogue AI is effectively a non-problem for one reason or another. In essence, it only states that bad AI from our perspective is possible, not that it’s likely or that it’s worth addressing the problem due to it being a tail risk.
Agreed. The orthogonality thesis alone doesn’t say anything about x-risks. However, it is a strong counterargument against the claim, made both by LeCun and Mitchell if I remember correctly, that a sufficiently intelligent AI would be beneficial because of its intelligence. “It would know what we want”, I believe Mitchell said. Maybe, but that doesn’t mean it would care. That’s what the orthogonality thesis says.
I only read the abstract of your post, but
And thirdly, a bias towards choices which afford more choices later on.
seems to imply the instrumental goals of self-preservation and power-seeking, as both seem to be required for increasing one’s future choices.
Thanks for pointing this out; I may have been sloppy in my writing. To be more precise, I did not expect that I would change my mind, given my prior knowledge of the stances of the four candidates, and I held this expectation with high confidence. For this reason, I would have voted “no”. Had LeCun or Mitchell presented an astonishing, verifiable insight previously unknown to me, I might well have changed my mind.
Thank you for being so open about your experiences. They mirror my own in many ways. Knowing that there are others who feel the same definitely helps me cope with my anxieties and doubts. Thank you also for organizing that event last June!