We’re certainly better at reflecting on some parts of ourselves than others. The ironic thing, though, is that when we look more closely and analyze just what it is that we are not reflecting on very well, we open up the can of worms we had previously been avoiding.
In the context of the original post—suppose that SGAI is logging some of its internal state to a log file, then gains access to read this log file, and reasons about it in the same way it reasons about the world—noticing the correlation between its feelings and state and the contents of the log file. Wouldn’t that be the kind of reflection that we have? Is SGAI even logically possible without hard-coding some blind spot into the AI about itself?
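The loop being described—write internal state to a log, then read that log back as if it were ordinary data about the world—can be made concrete with a toy sketch. Everything here is invented for illustration (the class, the state variables); it is not a claim about how SGAI or any real system works:

```python
import json

class ReflectiveAgent:
    """Toy agent that logs its internal state and later reads the log
    back the same way it would read any external data."""

    def __init__(self):
        self.state = {"mood": 0.0, "step": 0}
        self.log = []  # stands in for the on-disk log file

    def act(self, stimulus):
        # Internal state changes in response to the world...
        self.state["mood"] += stimulus
        self.state["step"] += 1
        # ...and a snapshot is appended to the log.
        self.log.append(json.dumps(self.state))

    def reflect(self):
        # Reading the log is just ordinary world-modeling: the agent
        # parses the records and notices that the latest entry tracks
        # its current internal state.
        history = [json.loads(line) for line in self.log]
        return history[-1]["mood"] == self.state["mood"]

agent = ReflectiveAgent()
for stimulus in (0.5, -0.2, 0.1):
    agent.act(stimulus)
print(agent.reflect())
```

The point of the sketch is that nothing in `reflect` is special-cased: the log is consumed by the same machinery that would consume any other data, which is the sense of “reflection” the question is asking about.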
If we could, we’d probably have gone extinct already.
Or maybe we’re going to go extinct real soon now, because we lack the ability to reflect like this, and consequently didn’t have a couple of thousand years to develop an effective theory of mind for FAI before we built the hardware.
Having the ability to design and understand AI for a couple of thousand years, but somehow lacking the ability to actually implement it, sounds just about perfect. If only!
That is one idea for hacking friendliness: “Become the AI we would make if there were no existential threats, we didn’t have the hardware to implement it for a few thousand years, and flaming letters appeared on the moon saying ‘thou shalt focus on designing Friendly AI.’ ”
Haven’t bothered typing it out before because it falls in the reference class of trying to cheat on FAI, which is always a bad idea, but it seemed relevant here.
Well, it wouldn’t be AI, it’d simply be I, as in “I think, therefore I am”—except not stopping at that period.
edit: I mean, look at the SIAI; what exactly do they do right now that they couldn’t do in ancient Greece? If we could reflect on our minds better, and if our minds are physical in nature, then the idea of a thinking machine would have been readily apparent—yet the microchips would still have required a very, very long time.
Would anyone downvoting me care to explain why they disagree? Look at Newton or Turing: what exactly did they do which couldn’t have been done in ancient Greece, and why is there no analogous counterexample for the SIAI?
Isn’t “why weren’t the Greeks working on Calculus” a far less silly question than “why weren’t the ancient Greeks working on AI”?
By this logic we’d have discovered all there is to know about math (including computer science) by Roman times at the latest.