How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious, and thus be conscious. But neurons work differently than computers do.
The highlighted portion of your sentence is not obvious. What exactly do you mean by "work differently"? There’s a thought experiment (that you’ve probably heard before) about replacing your neurons, one by one, with circuits that behave identically to each replaced neuron. The point of the hypothetical is to ask when, if ever, you draw the line and say that it isn’t you anymore. Justifying any particular answer is hard (since it is stipulated that each circuit reacts exactly the way the neuron it replaces would). I’m not sure that circuit-for-neuron replacement is possible, but I certainly couldn’t begin to justify (in physics terms) why I think that. That is, the counter-argument to my position is that neurons are physical things and thus obey the laws of physics. If the neuron was built once (and it was, since it exists in your brain), what law of physics says that it is impossible to build a duplicate?
How do we know that it won’t take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?
I’m not a physicist, so I don’t know whether that is feasible (or understand the science well enough to give an intelligent answer). That said, it is clearly feasible with biological parts (again, neurons actually exist).
I’ve seen some mentions of an AI “bootstrapping” itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right? How does it know what bits to change to make itself more intelligent? (I get the feeling this is a tremendously stupid question, along the lines of “if people evolved from apes then why are there still apes?”)
By hypothesis, the AI is running a deterministic process to make decisions. Let’s say that the module responsible for deciding Newcomb problems is originally coded to two-box. Further, some other part of the AI decides that this isn’t the best choice for achieving the AI’s goals. So, the Newcomb module is changed so that it decides to one-box. Presumably, making this type of improvement repeatedly will make the AI better and better at achieving its goals, especially if the self-improvement checker can itself be improved somehow.
It’s not obvious to me that this leads to super intelligence (i.e. Straumli-perversion level intelligence, if you’ve read [EDIT] A Fire on the Deep), even with massively faster thinking. But that’s what the community seems to mean by “recursive self-improvement.”
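The loop described above can be sketched in a few lines of Python. Everything here is illustrative: the `Agent` class, the toy `score` function, and the pretense that one-boxing scores higher are all hypothetical stand-ins for whatever goal-evaluation the AI actually runs, mirroring the Newcomb example in the comment.

```python
# Hypothetical sketch of "recursive self-improvement", assuming the agent
# can evaluate candidate rewrites of its own decision modules against its
# goals and keep the winner. All names here are made up for illustration.

def two_box(state):
    return "two-box"

def one_box(state):
    return "one-box"

def score(module):
    # Toy goal-evaluation: pretend one-boxing serves the agent's goals
    # better, as in the example above. A real evaluator would be far
    # more complicated (and is doing all the hard work).
    return 1_000_000 if module(None) == "one-box" else 1_000

class Agent:
    def __init__(self):
        self.newcomb_module = two_box  # originally coded to two-box

    def self_improve(self, candidates):
        # "Some other part of the AI" proposes rewrites; keep the best,
        # including the current module if no candidate beats it.
        self.newcomb_module = max(candidates + [self.newcomb_module],
                                  key=score)

agent = Agent()
agent.self_improve([one_box])
print(agent.newcomb_module(None))  # "one-box"
```

Repeating `self_improve` (and, in principle, applying the same trick to `score` itself) is what the community means by the recursion; whether that actually climbs to superintelligence is the open question.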
(A Fire Upon the Deep)
ETA: Oops! A Deepness in the Sky is a prequel; didn’t know and didn’t google.
(Also, added to reading queue.)
Thanks, edited.