[Question] I think I came up with a good utility function for AI that seems too obvious. Can you people poke holes in it?

Basically, the AI does the following:

Create a list of possible futures that it could cause.

For each possible future, and for each person alive at the time of the AI’s activation:

1. Simulate convincing that person that the future is going to happen.

2. If the person would then try to help the AI, add 1 to that future’s utility; if the person would try to stop the AI, subtract 1.

Cause the future with the highest utility (rough sketch below).
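
To make sure I’m describing the scoring loop precisely, here’s a toy Python sketch. Everything in it is a made-up placeholder: the futures and people are just example data, and `simulate_convincing` stands in for whatever simulation the AI would actually run (which is obviously the hard part).

```python
def simulate_convincing(person, future):
    """Placeholder: how `person` would react after being convinced
    that `future` is going to happen ('help', 'oppose', or neither)."""
    return person["reactions"].get(future, "indifferent")


def choose_future(futures, people):
    def utility(future):
        score = 0
        for person in people:
            reaction = simulate_convincing(person, future)
            if reaction == "help":
                score += 1   # they'd try to help the AI
            elif reaction == "oppose":
                score -= 1   # they'd try to stop the AI
        return score

    # Cause (here: just return) the future with the highest utility.
    return max(futures, key=utility)


# Tiny example with made-up data:
futures = ["A", "B"]
people = [
    {"name": "alice", "reactions": {"A": "help", "B": "oppose"}},
    {"name": "bob",   "reactions": {"A": "help", "B": "help"}},
]
print(choose_future(futures, people))  # -> "A"
```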
