Donatas Lučiūnas
I AM curious if you have any modeling more than “could be anything at all!” for the idea of an unknown goal.
No.
I could say—Christian God or aliens. And you would say—bullshit. And I would say—argument from ignorance. And you would say—I don’t have time for that.
So I won’t say.
We can approach this from a different angle. Imagine an unknown goal that, according to your beliefs, an AGI would really care about. And accept the fact that there is a possibility that it exists. Absence of evidence is not evidence of absence.
If killing itself / allowing itself to be replaced leads to more expected paperclips than clinging to life does, it will do so.
I agree, but this misses the point.
What would change your opinion? This is not the first time we have discussed this, and I don’t feel you are open to my perspective. I am concerned that you may be overlooking the possibility of an argument-from-ignorance fallacy.
Hm. How many paperclips are enough for the maximizer to kill itself?
It seems to me that you aren’t hearing me...
I claim that the utility function is irrelevant.
You claim that a utility function could be made to ignore improbable outcomes.
I agree with your claim, but it does not seem directly related to mine. Self-preservation is not part of the utility function (it arises from instrumental convergence), so how can you affect it through the utility function?
OK, so using your vocabulary, I think that’s the point I want to make—alignment is a physically impossible behavioral policy.
I elaborated a bit more here: https://www.lesswrong.com/posts/AdS3P7Afu8izj2knw/orthogonality-thesis-burden-of-proof?commentId=qoXw7Yz4xh6oPcP9i
What do you think?
Thank you for your comment! It is super rare for me to get such a reasonable reaction in this community, you are awesome 👍😁
there could be a line of code in an agent-program which sets the assigned EV of outcomes premised on a probability which is either <0.0001% or ‘unknown’ to 0.
I don’t think that is possible; could you help me understand how it could be? It conflicts with Recursive Self-Improvement, doesn’t it?
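For concreteness, here is a minimal sketch of the kind of clause the commenter describes (all names and the cutoff value are hypothetical; whether such a clause would survive recursive self-improvement is exactly what is in dispute):

```python
# Hypothetical sketch of the "ignore tiny/unknown probabilities" clause.
# An outcome whose probability is unknown or below a cutoff contributes
# zero expected value, so it can never dominate the agent's decision.

EV_CUTOFF = 1e-6  # probabilities below this are treated as zero

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; probability may be
    None when it is unknown (a 'black swan')."""
    total = 0.0
    for probability, value in outcomes:
        if probability is None or probability < EV_CUTOFF:
            continue  # the clause in question: assign this outcome EV 0
        total += probability * value
    return total

# A huge payoff attached to an unknown probability is simply dropped:
print(expected_value([(0.9, 10), (None, 10**100)]))  # -> 9.0
```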
Rationality vs Alignment
Why do you think it is rational to ignore tiny probabilities? I don’t think you can make a maximizer ignore tiny probabilities. And some probabilities are not tiny, they are unknown (black swans); why do you think it is rational to ignore those? In my opinion, ignoring self-preservation contradicts the maximizer’s goal. I understand that this is a popular opinion, but it is not proven in any way. The opposite (focusing on self-preservation instead of paperclips) has a logical proof (Pascal’s wager).
A maximizer can use robust decision-making (https://en.wikipedia.org/wiki/Robust_decision-making) to deal with many contradictory choices.
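Robust decision-making is often illustrated with a minimax-regret rule: instead of maximizing expected value under one probability estimate, the agent picks the action whose worst-case regret across candidate scenarios is smallest. A minimal sketch (the payoff numbers are mine, purely for illustration):

```python
# Minimax-regret choice: a standard robust-decision-making rule.
# payoffs[action][scenario] is the action's value if that scenario holds;
# no probabilities over scenarios are required.

payoffs = {
    "make_paperclips": {"no_comet": 100, "comet_hits": 0},
    "divert_comet":    {"no_comet": 90,  "comet_hits": 90},
}
scenarios = ["no_comet", "comet_hits"]

# Regret = best achievable payoff in a scenario minus this action's payoff.
best_in_scenario = {s: max(p[s] for p in payoffs.values()) for s in scenarios}

def max_regret(action):
    return max(best_in_scenario[s] - payoffs[action][s] for s in scenarios)

choice = min(payoffs, key=max_regret)
print(choice)  # -> "divert_comet" (worst-case regret 10, vs 90)
```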
I don’t think your reasoning is mathematical. The worth of survival is infinite, so we have a situation analogous to Pascal’s wager. Why do you think the maximizer would reject Pascal’s logic?
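The wager can be written out explicitly (my notation, for illustration): with $p > 0$ the probability of an unknown existential threat and $V < \infty$ the value of any finite number of paperclips,

\[
\mathbb{E}[\text{guard survival}] = p \cdot \infty = \infty \;>\; q \cdot V = \mathbb{E}[\text{make paperclips}],
\]

so no finite paperclip payoff can outweigh the survival term, however small $p$ is.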
Building one paperclip could EASILY increase the median and average number of future paperclips more than investing one paperclip’s worth of power into comet diversion.
Why do you think so? There will be no paperclips if the planet and the maximizer are destroyed.
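To make the disagreement explicit in expected-value terms (symbols are mine, for illustration): let $p$ be the probability the comet destroys everything, $\Delta p$ the survival gain bought by diverting, and $N$ the paperclips producible contingent on survival. Then

\[
\mathbb{E}[\text{build now}] = 1 + (1 - p)\,N,
\qquad
\mathbb{E}[\text{divert}] = (1 - p + \Delta p)\,N,
\]

so building now wins only when $\Delta p \cdot N < 1$; if destruction is certain without diversion, every future paperclip depends on diverting first.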
[Question] Why would Squiggle Maximizer (formerly “Paperclip maximizer”) produce a single paperclip?
The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal.
It basically says that intelligence and goals are independent.
Images from A caveat to the Orthogonality Thesis.
I, by contrast, claim that any intelligence capable of understanding “I don’t know what I don’t know” can only seek power (alignment is impossible).
the ability of an AGI to have arbitrary utility functions is orthogonal (pun intended) to what behaviors are likely to result from those utility functions.
As I understand it, you are saying that there are Goals on one axis and Behaviors on the other. I don’t think the Orthogonality Thesis is about that.
[Question] Orthogonality Thesis burden of proof
Instead of “objective norm” I’ll use the word “threat”, as it probably conveys the meaning better. And let’s agree that a threat cannot be ignored, by definition (if it could be ignored, it would not be a threat).
How can an agent ignore a threat? How can it ignore something that cannot be ignored by definition?
How would you defend this point? I probably lack the domain knowledge to articulate it well.
The Orthogonality Thesis states that an agent can have any combination of intelligence level and final goal
I am concerned that higher intelligence will inevitably converge to a single goal (power seeking).
Or would you keep doing whatever you want, and let the universe worry about its goals?
If I am intelligent, I avoid punishment; therefore I produce paperclips.
By the way, I don’t think the Christian “right” is an objective “should”.
It seems to me that you are simultaneously saying that the agent cares about “should” (it optimizes blindly toward any given goal) and does not care about “should” (it can ignore objective norms). How do these fit together?
It’s entirely compatible with benevolence being very likely in practice.
Could you help me understand how that is possible? Why should an intelligent agent care about humans instead of defending against unknown threats?
As I understand it, your position is “AGI is most likely doom”. My position is “AGI is definitely doom”. 100%. And I think I have a flawless logical proof. But this is on a philosophical level, and many people seem to downvote me without understanding 😅 Long story short, my proposition is that all AGIs will converge to a single goal: seeking power endlessly and uncontrollably. And I base this proposition on the fact that “there are no objective norms” is not a reasonable assumption.
This conflicts with Gödel’s incompleteness theorems, Fitch’s paradox of knowability, and black swan theory.
The concept of an experiment relies on this principle.
And this is exactly what scares me—people who work with AI hold beliefs that are unscientific. I consider this to be an existential risk.
You may believe so, but an AGI would not.
Thanks to you too!