In The Terminator, we often see the world through the machine’s perspective: a red-tinged overlay of cascading data, a synthetic gaze parsing its environment with cold precision. But this raises an unsettling question: Who—or what—actually experiences that view?
Is there an AI inside the AI, or is this merely a fallacy that invites us to project a mind where none exists?
Nothing prevents us from designing a system consisting of a module that generates a red-tinged video stream and image-recognition software that watches the stream and, based on some of its details, sends commands to the robotic body. Now, that would be a silly and overcomplicated way to design a system, but that's beside the point.
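To make the point concrete, here is a minimal sketch of such a pipeline. Every name in it is a hypothetical illustration (not any real robotics API): one module tints frames red, another inspects the tinted frame and emits a command, and at no point does the pipeline need an inner observer.

```python
# Toy sketch of the architecture described above. All names are
# hypothetical illustrations; a "frame" is just a list of (r, g, b) pixels.

def red_tinged_frame(scene):
    """Overlay a red tint: boost the red channel of each (r, g, b) pixel."""
    return [(min(r + 80, 255), g, b) for (r, g, b) in scene]

def recognize_and_command(frame):
    """Crude 'image recognition': if the frame is bright on average,
    command the body to advance; otherwise hold position."""
    brightness = sum(sum(px) for px in frame) / (3 * len(frame))
    return "ADVANCE" if brightness > 100 else "HOLD"

# The stream is generated, inspected, and acted on -- no one "views" it.
scene = [(120, 60, 40), (90, 110, 70), (30, 20, 10)]
command = recognize_and_command(red_tinged_frame(scene))
```

Running this on the sample scene yields `"HOLD"`; the red overlay exists only as data flowing between two modules.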
If matter is merely a symbol within our conceptual models, does the claim that “matter precedes and governs mind” hold any meaning?
Don’t confuse the citation with the referent. “Matter” is a symbol in our map, while matter itself is in the territory. Naturally, the territory predates the map.
If one insists that the software—the computational patterns and processes—alone constitutes the essence of the AI, then one leans toward idealism, suggesting that the “helpful assistant” might exist in a realm hierarchically above physical instantiation, beyond space and time. Conversely, if one asserts that only the hardware—the physical substrate—truly exists, then one aligns with materialism or physicalism, reducing the AI to mere excitations of electrical charges within the integrated circuits of the GPU.
This seems to be entirely vibe-based. You don’t need to lean idealist to talk about software.
Drawing from the first incompleteness theorem, EN suggests that the brain, as a biological “good regulator”, operates most effectively when it generates unprovable formal falsehoods—one of which corresponds to the claim of experiencing consciousness or qualia.
The incompleteness theorem implies the existence of either an unprovable true statement (if the system is incomplete) or a provable false statement (if it’s inconsistent).
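For reference, the dichotomy being appealed to can be stated like this (a rough rendering, assuming T is an effectively axiomatized theory extending arithmetic and G is its Gödel sentence):

```latex
% If T is consistent, its Gödel sentence G is true but unprovable:
T \text{ consistent} \;\Longrightarrow\; \mathbb{N} \models G \;\text{ and }\; T \nvdash G
% If T is inconsistent, it proves everything, including false statements:
T \text{ inconsistent} \;\Longrightarrow\; \exists F :\; \mathbb{N} \models \lnot F \;\text{ and }\; T \vdash F
```

In neither case does the theorem say that a consistent system must generate falsehoods, which is the premise the argument needs.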
It seems that the entire substance of your argument rests on a completely wrong premise.
You basically left our other more formal conversation to engage in the critique of prose.
*slow clap*
These are metaphors to lead the reader slowly to the idea… This is not the Argument. The Argument is right there and you are not engaging with it.
You need to understand the claim first in order to deconstruct it. Now you might say I’m having a psychotic fit, but earlier, as we discussed Turing, you didn’t seem to resonate with any of the ideas.
If you are ready to engage with the ideas I am at your disposal.
You basically left our other more formal conversation to engage in the critique of prose.
Not at all. I’m doing both. I specifically started the conversation in the post, which is less… prose. But I suspect you may also be interested in engagement with the long post you put so much effort into writing. If that’s not the case, never mind, and let’s continue the discussion in the argument thread.
These are metaphors to lead the reader slowly to the idea...
If you require flawed metaphors, what does it say about the idea?
Now you might say I’m having a psychotic fit
Frankly speaking, it does indeed look like that. From my perspective you are not being particularly coherent: you keep jumping from point to point, with nearly no engagement with what I write. But this could be an artifact of large inferential distances. So you have my benefit of the doubt, and I’m eager to learn whether there is some profound substance in your reasoning.