Short answer, yes, it means deferring to a black-box.
Longer answer, we don’t really understand what we’re doing when we do the magic steps, and nobody has succeeded in creating an algorithm that performs the magic steps reliably. They are all open problems, yet humans do them so easily that it’s difficult for us to believe that they’re hard. The situation reminds me of when people thought that object recognition from images ought to be easy to do algorithmically, because we do it so quickly and effortlessly.
Maybe I’m misunderstanding your specific point, but the operations of “listing possible worlds” and “assigning utility to each possible world” are simultaneously “standard” in the sense that they are basic primitives of decision theory and “magic” in the sense that we haven’t had any kind of algorithmic system that was remotely capable of doing these tasks until GPT-3 or -4.
Ah, thank you, that makes sense. I agree that we definitely need some opaque entity to do these two operations. Though maybe not as opaque as magic, unless you consider GPT-4 magic. As you say, “GPT-4 can do all of the magic required in the problem above.” In which case you might as well call everything an LLM does “magic”, which would be fair, but not really illuminating.
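To make the division of labor concrete, here is a minimal sketch of an expected-utility chooser in which those two operations are left as placeholder functions. The names list_possible_worlds and utility_of, and the numbers inside them, are hypothetical stand-ins for whatever opaque entity (a person, or GPT-4) actually does that work:

def list_possible_worlds(action):
    # "Magic" step 1: enumerate the outcomes of an action with probabilities.
    # In practice a human or an LLM does this; the values here are made up.
    worlds = {
        "party outside": [("sunny party", 0.7), ("rained out", 0.3)],
        "party inside": [("dry but cramped", 1.0)],
    }
    return worlds[action]

def utility_of(world):
    # "Magic" step 2: assign a utility to each possible world. Equally opaque
    # in general; hard-coded placeholder numbers here.
    return {"sunny party": 1.0, "rained out": 0.1, "dry but cramped": 0.6}[world]

def best_action(actions):
    # The non-magic part: ordinary expected-utility maximization.
    def expected_utility(action):
        return sum(p * utility_of(w) for w, p in list_possible_worlds(action))
    return max(actions, key=expected_utility)

print(best_action(["party outside", "party inside"]))  # -> party outside

All of the decision theory lives in best_action; everything hard has been pushed into the two stubs.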
GPT-4 analysis, for reference:
One possible decision tree for your problem is:
graph TD
A[Will it rain?] -->|Yes| B[Throw party inside]
A -->|No| C[Throw party outside]
B --> D[Enjoyment: 0.8, Cost: 0.6]
C --> E[Enjoyment: 1.0, Cost: 0.4]
This decision tree assumes that you have a binary choice between throwing the party inside or outside, and that you have some estimates of the enjoyment and cost of each option. The enjoyment and cost are normalized between 0 and 1, where higher values are better. The decision tree also assumes that you know the probability of rain, which you can estimate from weather forecasts or historical data.
If you want to include the option of throwing the party on the covered porch, you can modify the decision tree as follows:
graph TD
A[Will it rain?] -->|Yes| B[Throw party on porch]
A -->|No| C[Throw party outside]
B --> D[Enjoyment: 0.9, Cost: 0.5]
C --> E[Enjoyment: 1.0, Cost: 0.4]
D --> F[Probability of rain: 0.3]
F --> G[Party ruined: 0.1]
F --> H[Party saved: 0.9]
G --> I[Enjoyment: 0.2, Cost: 0.7]
H --> J[Enjoyment: 0.9, Cost: 0.5]
This decision tree adds another layer of uncertainty to the problem, as throwing the party on the porch depends on the probability of rain and the chance of ruining or saving the party if it rains. You can adjust these values based on your own preferences and expectations.
To use a decision tree to make a decision, you can calculate the expected value of each option by multiplying the enjoyment and cost by their respective probabilities and adding them up. For example, the expected value of throwing the party outside is:
EV(outside) = Enjoyment * Probability + Cost * Probability
EV(outside) = 1.0 * (1 - Probability of rain) + 0.4 * (1 - Probability of rain)
EV(outside) = 1.4 * (1 - Probability of rain)
Similarly, you can calculate the expected value of throwing the party on the porch as:
EV(porch) = Enjoyment * Probability + Cost * Probability
EV(porch) = (0.9 * Probability of rain + 0.2 * Probability of rain * Party ruined + 0.9 * Probability of rain * Party saved) + (0.5 * Probability of rain + 0.7 * Probability of rain * Party ruined + 0.5 * Probability of rain * Party saved)
EV(porch) = (0.9 + 0.5) * Probability of rain + (0.2 + 0.7 − 0.9 − 0.5) * Probability of rain * Party ruined + (0.9 + 0.5 − 0.9 − 0.5) * Probability of rain * Party saved
EV(porch) = 1.4 * Probability of rain + (-0.4) * Probability of rain * Party ruined + (0) * Probability of rain * Party saved
You can then compare the expected values of each option and choose the one that maximizes your utility.
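For what it’s worth, here is a quick sketch of one way to actually run the numbers from the second tree. This is my own illustration rather than part of GPT-4’s output; it treats utility as enjoyment minus cost (an assumption about what the scores mean) and assumes a rained-out outdoor party gets the “ruined” scores:

P_RAIN = 0.3             # probability of rain, from the tree
P_RUINED_ON_PORCH = 0.1  # chance rain still ruins a porch party

def u(enjoyment, cost):
    # Assumed utility: enjoyment minus cost (one reading of GPT-4's scores).
    return enjoyment - cost

# Plan A: commit to holding the party outside no matter what.
# Assumption: a rained-out outdoor party gets the "ruined" scores (0.2, 0.7).
ev_outside = (1 - P_RAIN) * u(1.0, 0.4) + P_RAIN * u(0.2, 0.7)

# Plan B: outside if dry, retreat to the covered porch if it rains.
ev_rainy_porch = (1 - P_RUINED_ON_PORCH) * u(0.9, 0.5) + P_RUINED_ON_PORCH * u(0.2, 0.7)
ev_porch_plan = (1 - P_RAIN) * u(1.0, 0.4) + P_RAIN * ev_rainy_porch

print(f"EV(always outside)    = {ev_outside:.3f}")     # 0.270
print(f"EV(porch if it rains) = {ev_porch_plan:.3f}")  # 0.513

Either way, the arithmetic is the easy part; it only works once something has already produced the tree, the probabilities, and the scores.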
I probably should have listened to the initial feedback on this post along the lines that it wasn’t entirely clear what I actually meant by “magic” and was possibly more confusing than illuminating, but, oh well. I think that GPT-4 is magic in the same way that the human decision-making process is magic: both processes are opaque, we don’t really understand how they work at a granular level, and we can’t replicate them except in the most narrow circumstances.
One weakness of GPT-4 is that it can’t really explain why it made the choices it did. It can give plausible reasons why those choices were made, but it doesn’t have the kind of insight into its motives that we do.
it doesn’t have the kind of insight into its motives that we do
Wait, human beings have insight into their own motives that’s better than GPTs have into theirs? When was the update released, and will it run on my brain? ;-)
Joking aside, though, I’d say the average person’s insight into their own motives is most of the time not much better than that of a GPT, because it’s usually generated in the same way: i.e. making up plausible stories.
I guess it is ironic, but there are important senses of “magic” that I read into the piece which are not disambiguated by that.
A black box can mean arbitrary code that you are not allowed to know. Let’s call this tamer style “formulaic”. A black box can also mean a part where you do not know what it does at all. Let’s call this style “mysterious”.
Incompleteness- and embeddedness-style argumentation points in the direction that an agent can have only a partially formulaic understanding of itself. Things built up from “tame” parts can be completely non-mysterious. But what we often do is find ourselves with the capacity to make decisions and take actions, and only then reflect on what that is all about.
I think there was some famous physicist who opined that the human brain is material but non-algorithmic, and that the lurking place of the weirdness is supposedly in the microtubules.
It is easy to see that math is very effective for the formulaic part. But do you need to tackle the non-formulaic parts of the process at all, and how would you? Any algorithm specification is only going to give you a formulaic handle. Thus where we cannot speak we must be silent.
In recursive relevance realization lingo, you have a salience landscape, you do not calculate one. How do you come to feel some affordance as possible in the first place? Present yet ineffable elements are involved, thus the appreciation of magic.
Isn’t this the distinction between symbolic and non-symbolic AI?