Donatas Lučiūnas
OK, so you agree that the credibility is greater than zero; in other words, it's possible. So isn’t this a common assumption? I argue that all minds will share this idea: the existence of a fundamental “ought” is possible.
Do I understand correctly that you do not agree with this?
Because, according to Hitchens’s razor, any proposition is possible while it has not been disproved.
Could you share reasons?
I’ve replied to a similar comment already https://www.lesswrong.com/posts/3B23ahfbPAvhBf9Bb/god-vs-ai-scientifically?commentId=XtxCcBBDaLGxTYENE#rueC6zi5Y6j2dSK3M
Please let me know what you think
Is there any argument or evidence that universally compelling arguments are not possible?
If there were, would we have religions?
I cannot help you to be less wrong if you categorically rely on intuition about what is possible and what is not.
Thanks for the discussion.
I don’t think the implications are well known (as the number of downvotes indicates).
Because, according to Hitchens’s razor, any proposition is possible while it has not been disproved.
So this is where we disagree.
That’s how hypothesis testing works in science:
1. You create a hypothesis.
2. You find a way to test whether it is wrong.
3. If that test passes, you reject the hypothesis.
4. You find a way to test whether it is right.
5. If that test passes, you accept the hypothesis.

While a hypothesis is neither rejected nor accepted, it is considered possible.
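The loop above can be sketched in code (my own illustration; the function and status names are hypothetical, not from the thread):

```python
# Sketch (my illustration) of the hypothesis-testing loop described above.
POSSIBLE, REJECTED, ACCEPTED = "possible", "rejected", "accepted"

def evaluate_hypothesis(falsification_test, confirmation_test):
    """Run the two tests from the steps above and return the status."""
    if falsification_test():   # steps 2-3: a passing falsification test rejects
        return REJECTED
    if confirmation_test():    # steps 4-5: a passing confirmation test accepts
        return ACCEPTED
    return POSSIBLE            # neither rejected nor accepted: still possible

# A hypothesis we can neither falsify nor confirm remains possible:
print(evaluate_hypothesis(lambda: False, lambda: False))  # possible
```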
Don’t you agree?
Got any evidence for that assumption? 🙃
That’s basic logic, Hitchens’s razor. It seems that 2 + 2 = 4 is also an assumption for you. What isn’t, then?
I don’t think it is possible to find consensus if we do not follow the same rules of logic.
Considering your impression of me, I’m truly grateful for your patience. Best wishes from my side as well :)
But on the other hand, I am certain that you are mistaken, and I feel that you are not giving me a way to show that to you.
But I think it is possible (and feasible) for a program/mind to be extremely capable, and affect the world, and not “care” about infinite outcomes.
As I understand it, it is not me you disagree with but this claim from Pascal’s Mugging:

> If an outcome with infinite utility is presented, then it doesn’t matter how small its probability is: all actions which lead to that outcome will have to dominate the agent’s behavior.

Do you have any arguments against it?
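The quoted claim can be illustrated with a minimal sketch (my own code, assuming naive expected-utility maximization and using IEEE floating-point infinity to stand in for infinite utility):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# A mundane action with a certain, finite payoff:
mundane = expected_utility([(1.0, 100.0)])

# An action with a vanishingly small chance of an infinite-utility outcome:
mugging = expected_utility([(1e-30, float("inf")),  # tiny probability, infinite utility
                            (1.0, -10.0)])          # near-certain small loss

print(mundane)  # 100.0
print(mugging)  # inf: dominates no matter how small the probability is
```

Under these assumptions, any nonzero probability of the infinite outcome makes its expected utility infinite, so it outranks every finite alternative.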
And it’s a correct assumption.
I don’t agree. Every assumption is incorrect unless there is evidence. Could you share any evidence for this assumption?
If you ask ChatGPT:
- Is it possible that chemical elements exist that we do not know?
- Is it possible that fundamental particles exist that we do not know?
- Is it possible that physical forces exist that we do not know?

the answer to all of them is yes. What is your explanation here?
What information would change your opinion?
Do you think you can deny the existence of an outcome with infinite utility? The fact that things “break down” is not a valid argument. If you cannot deny it, it’s possible. And if it’s possible, alignment is impossible.
A rock is not a mind.
Please provide arguments for your position. That is a common understanding that I think is faulty; my position is more rational, and I provided my reasoning above.
It is not a zero there; it is an empty-set symbol, as it is impossible to measure something if you do not have a scale of measurement.
You are somewhat right. If the fundamental “ought” turns out not to exist, an agent should fall back on the given “ought”, and that should be used to calculate the expected value in the right column. But this will never happen. Since there might be true statements that are unknowable (Fitch’s paradox of knowability), the fundamental “ought” could be one of them, which means the fallback will never happen.
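A minimal sketch of this fallback logic (my own illustration; the function name and the boolean flag are hypothetical):

```python
# Sketch (my illustration) of the fallback argument above: the agent uses its
# given "ought" only once the fundamental "ought" has been disproved. If that
# question is unknowable (Fitch's paradox), the condition never becomes True.

def effective_ought(fundamental_ought_disproved: bool, given_ought: str) -> str:
    if fundamental_ought_disproved:
        return given_ought  # fallback: expected value computed from the given "ought"
    return "pursue the possibly existing fundamental ought"

# Per the argument, the disproof never arrives, so the fallback never fires:
print(effective_ought(False, "maximize paperclips"))
```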
Dear Tom, the feeling is mutual. From all the interactions we have had, I’ve got the impression that you are more willing to repeat what you’ve heard somewhere than to think logically. “Universally compelling arguments are not possible” is an assumption, while “a universally compelling argument is possible” is not, because we don’t know what we don’t know. We can call this the crux of our disagreement, and I think my stance is more rational.
What about “I think, therefore I am”? Isn’t it a universally compelling argument?
Also, what about God? Shall we assume it does not exist? Why? Such an assumption is irrational.
I argue that “no universally compelling arguments” is misleading.
My point is that alignment is impossible with AGI, as all AGIs will converge on power-seeking. And the reason is the understanding that a hypothetical utility function preferred over the given one is possible.
I’m not sure I can use better-known terms, as this theory is quite unique, I think. It argues that the terminal goal does not significantly influence AGI behavior.
In this context, an “ought” statement is a synonym for a utility function https://www.lesswrong.com/tag/utility-functions
The fundamental utility function is a hypothetical concept of the agent’s that may actually exist. An AGI will be capable of hypothetical thinking.
Yes, I agree that the fundamental utility function does not have anything in common with human morality. Quite the opposite: an AI uncontrollably seeking power would be disastrous for humanity.
Why do you think “infinite value” is logically impossible? Scientists do not dismiss the possibility that the universe is infinite. https://bigthink.com/starts-with-a-bang/universe-infinite/
As I understand it, you try to prove your point by analogy with humans: if humans can pursue more or less any goal, a machine could too. But while we agree that a machine can have any level of intelligence, humans occupy quite a narrow spectrum. Therefore your reasoning by analogy is invalid.