No, you are ignoring Xi’s context. The claim is not about what a programmer on the team might do, it is about what the AI might write. Notice that the section starts ‘The goals of an AI will be under scrutiny at any time...’
Yes. I thought Xi’s claim was that if you have an AI and put it to work writing software, the programmers supervising the AI can look at the internal “motivations”, “goals”, and “planning” data structures and see what the AI is really doing. Obfuscation is beside the point.
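To make that concrete, here is a minimal sketch of the picture I have in mind (all names and structures hypothetical, not any real AI architecture): an agent whose motivations and plan are ordinary data that a supervising programmer can dump at will.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A single goal the agent is pursuing, with an explicit priority."""
    description: str
    priority: float

@dataclass
class AgentState:
    """Toy agent whose motivational state is plain data, open to inspection."""
    goals: list[Goal] = field(default_factory=list)
    plan: list[str] = field(default_factory=list)  # ordered list of intended actions

def audit(agent: AgentState) -> None:
    """What a supervising programmer might do: dump the agent's goals and plan."""
    for goal in sorted(agent.goals, key=lambda g: -g.priority):
        print(f"goal (priority {goal.priority}): {goal.description}")
    print("planned actions:", " -> ".join(agent.plan))

agent = AgentState(
    goals=[Goal("compile weekly report", 0.9), Goal("minimize compute cost", 0.4)],
    plan=["fetch data", "summarize", "email report"],
)
audit(agent)
```

On this picture, concealment is impossible by construction, because the goal list is the ground truth of what the agent is pursuing. Whether a real system would stay this legible is exactly what the rest of the thread disputes.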
I agree with you and XiXiDu that such observation should be possible in principle, but I also sort of agree with the detractors. You say,
Presumably developers of a large complicated AI will design it to be easy to debug...
Oh, I’m sure they’d try. But have you ever seen a large software project? There are usually mountains and mountains of code running in parallel on multiple nodes all over the place. Pieces of it are written with good intentions in mind; other pieces are written in a caffeine-fueled fog two days before the deadline, and peppered with years-old comments to the effect of, “TODO: fix this when I have more time”. When the code breaks in some significant way, it’s usually easier to rewrite it from scratch than to debug the fault.
And that’s just enterprise software, which is orders of magnitude less complex than an AGI would be. So yes, it should be possible to write transparent and easily debuggable code in theory, but in practice, I predict that people would write code the usual way, instead.
You are just lying. Some of what I wrote:
Why wouldn’t the humans who created it be able to use the same algorithms that the AI uses to predict what it will do?
The goals of an AI will be under scrutiny at any time. It seems very implausible that scientists, a company or the military are going to create an AI and then just let it run without bothering about its plans. An artificial agent is not a black box, like humans are, where one is only able to guess its real intentions.
A plan for world domination seems like something that can’t be concealed from its creators. Lying is no option if your algorithms are open to inspection.
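As a toy illustration of that first quote (hypothetical code, assuming the agent’s planner is an ordinary deterministic function its creators can also call), the creators can run the same planning algorithm offline and know the plan before the agent acts:

```python
def plan(world_state: dict, goal: str) -> list[str]:
    """The agent's own planning algorithm: a deterministic function of its inputs."""
    steps = [f"assess {k}" for k in sorted(world_state)]
    steps.append(f"act toward: {goal}")
    return steps

# The agent plans and acts.
world = {"resources": 10, "threats": 0}
agents_plan = plan(world, "finish assigned task")

# The creators, with the same code and inputs, predict the plan before execution.
predicted = plan(world, "finish assigned task")
assert predicted == agents_plan  # no room for a concealed plan, in this toy setting
```

Of course this assumes the planner is deterministic and its inputs are fully known to the inspectors; the disagreement in this thread is over whether that assumption holds for a real AGI.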
asr just put it much more clearly.