Ah, thank you, that makes sense. I agree that we definitely need some opaque entity to do these two operations. Though maybe not as opaque as magic, unless you consider GPT-4 magic. As you say, “GPT-4 can do all of the magic required in the problem above.” In which case you might as well call everything an LLM does “magic”, which would be fair, but not really illuminating.
GPT-4 analysis, for reference:
One possible decision tree for your problem is:
graph TD
A[Will it rain?] -->|Yes| B[Throw party inside]
A -->|No| C[Throw party outside]
B --> D[Enjoyment: 0.8, Cost: 0.6]
C --> E[Enjoyment: 1.0, Cost: 0.4]
This decision tree assumes that you have a binary choice between throwing the party inside or outside, and that you have some estimates of the enjoyment and cost of each option. The enjoyment and cost are normalized between 0 and 1, where higher values are better. The decision tree also assumes that you know the probability of rain, which you can estimate from weather forecasts or historical data.
If you want to include the option of throwing the party on the covered porch, you can modify the decision tree as follows:
graph TD
A[Will it rain?] -->|Yes| B[Throw party on porch]
A -->|No| C[Throw party outside]
B --> D[Enjoyment: 0.9, Cost: 0.5]
C --> E[Enjoyment: 1.0, Cost: 0.4]
D --> F[Probability of rain: 0.3]
F --> G[Party ruined: 0.1]
F --> H[Party saved: 0.9]
G --> I[Enjoyment: 0.2, Cost: 0.7]
H --> J[Enjoyment: 0.9, Cost: 0.5]
This decision tree adds another layer of uncertainty to the problem, as throwing the party on the porch depends on the probability of rain and the chance of ruining or saving the party if it rains. You can adjust these values based on your own preferences and expectations.
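For concreteness, the porch decision tree above could be encoded as a plain data structure. This is a hypothetical sketch (the field names are my own); the numbers and branching mirror the mermaid graph.

```python
# Hypothetical encoding of GPT-4's porch decision tree as a nested dict.
# The structure mirrors the mermaid graph above: a rain/no-rain branch,
# with the rain branch splitting further into ruined/saved outcomes.
porch_tree = {
    "rain": {
        "option": "porch",
        "enjoyment": 0.9,
        "cost": 0.5,
        "p_rain": 0.3,
        "outcomes": {
            "ruined": {"p": 0.1, "enjoyment": 0.2, "cost": 0.7},
            "saved": {"p": 0.9, "enjoyment": 0.9, "cost": 0.5},
        },
    },
    "no_rain": {"option": "outside", "enjoyment": 1.0, "cost": 0.4},
}
```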
To use a decision tree to make a decision, you can calculate the expected value of each option by multiplying the enjoyment and cost by their respective probabilities and adding them up. For example, the expected value of throwing the party outside is:
EV(outside) = Enjoyment * Probability + Cost * Probability
EV(outside) = 1.0 * (1 - Probability of rain) + 0.4 * (1 - Probability of rain)
EV(outside) = 1.4 * (1 - Probability of rain)
Similarly, you can calculate the expected value of throwing the party on the porch as:
EV(porch) = Enjoyment * Probability + Cost * Probability
EV(porch) = (0.9 * Probability of rain + 0.2 * Probability of rain * Party ruined + 0.9 * Probability of rain * Party saved) + (0.5 * Probability of rain + 0.7 * Probability of rain * Party ruined + 0.5 * Probability of rain * Party saved)
EV(porch) = (0.9 + 0.5) * Probability of rain + (0.2 + 0.7 − 0.9 − 0.5) * Probability of rain * Party ruined + (0.9 + 0.5 − 0.9 − 0.5) * Probability of rain * Party saved
EV(porch) = 1.4 * Probability of rain + (-0.4) * Probability of rain * Party ruined + (0) * Probability of rain * Party saved
You can then compare the expected values of each option and choose the one that maximizes your utility.
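The comparison can be sketched numerically. This is a minimal reading of the setup above, not GPT-4's exact expansion: it treats "ruined" and "saved" as exhaustive outcomes conditional on rain, and scores each outcome as enjoyment + cost (both normalized so higher is better, per GPT-4's convention).

```python
# Sketch of the expected-value comparison, assuming "ruined"/"saved" are
# the two possible outcomes given rain, and utility = enjoyment + cost.

def ev_outside(p_rain: float) -> float:
    # The outside party only pays off if it doesn't rain:
    # (enjoyment 1.0 + cost 0.4) weighted by P(no rain).
    return (1.0 + 0.4) * (1 - p_rain)

def ev_porch(p_rain: float, p_ruined: float = 0.1, p_saved: float = 0.9) -> float:
    # The porch option matters when it rains; condition on ruined vs saved.
    ruined = (0.2 + 0.7) * p_ruined  # enjoyment + cost if the party is ruined
    saved = (0.9 + 0.5) * p_saved    # enjoyment + cost if the party is saved
    return p_rain * (ruined + saved)

# Pick whichever option has the higher expected value at a given rain forecast.
def best_option(p_rain: float) -> str:
    return "outside" if ev_outside(p_rain) > ev_porch(p_rain) else "porch"
```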
I probably should have listened to the initial feedback on this post, which suggested that it wasn’t entirely clear what I actually meant by “magic” and that the framing was possibly more confusing than illuminating, but, oh well. I think that GPT-4 is magic in the same way that the human decision-making process is magic: both processes are opaque, we don’t really understand how they work at a granular level, and we can’t replicate them except in the most narrow circumstances.
One weakness of GPT-4 is that it can’t really explain why it made the choices it did. It can give plausible reasons why those choices were made, but it doesn’t have the kind of insight into its motives that we do.
it doesn’t have the kind of insight into its motives that we do
Wait, human beings have insight into their own motives that’s better than GPTs have into theirs? When was the update released, and will it run on my brain? ;-)
Joking aside, though, I’d say the average person’s insight into their own motives is most of the time not much better than that of a GPT, because it’s usually generated in the same way: i.e. making up plausible stories.