I’d also point out that such an AI could invert control of anything it is interested in, and instantly avoid most latency problems (since it can now do things locally at full speed).
For example, our intuitive model says that any AI interested in, say, Haskell programming, would quickly go out of its mind as it waits subjective years for maintainers to review and apply its patches, answer its questions, and so on.
But isn’t it more likely that the AI will take a copy of the library it cares about, host it in its own datacenter, and then invest a few subjective months/years rewriting, benchmarking, and documenting it until it is a gem of software perfection, and then, a few objective seconds/minutes later, send out an email notifying the relevant humans that their version is hopelessly obsolete and pathetic, and they can go work on something else now? Anyone with half a brain will then use the AI’s final version rather than the original. Nor will the AI’s being a maintainer cause any problems. It’s not as if people mind sending in a bug report and having it fixed an objective second later, or receiving an instant email with more detailed questions.
If the AI maintains all the software it cares about, then there’s not going to be much of an insanity-inducing lag to development.
The lag will remain for things it can’t control locally, but I wonder how many of those things such an AI would really care about with regard to their lag.
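The subjective/objective conversion this argument turns on is just multiplication by a speedup ratio. A minimal sketch in Haskell (fitting, given the example), assuming a purely hypothetical speedup factor of one million, which is not a figure from the discussion:

```haskell
-- Hedged sketch: subjective/objective time conversion at an assumed speedup.
-- The factor of 1e6 is an illustrative assumption, not from the discussion.
speedup :: Double
speedup = 1e6

-- How many subjective seconds the AI experiences per objective second elapsed.
subjectiveSeconds :: Double -> Double
subjectiveSeconds objSeconds = objSeconds * speedup

secondsPerYear :: Double
secondsPerYear = 365 * 24 * 3600

main :: IO ()
main = do
  -- At a 1e6x speedup, one objective minute is roughly 1.9 subjective years,
  -- so waiting days for a human code review would feel like millennia.
  let years = subjectiveSeconds 60 / secondsPerYear
  putStrLn ("One objective minute = " ++ show years ++ " subjective years")
```

Under that (arbitrary) ratio, a human maintainer's one-week review turnaround comes to well over ten thousand subjective years, which is the intuition behind "go out of its mind" above.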
But isn’t it more likely that the AI will take a copy of the library it cares about, host it in its own datacenter, and then invest a few subjective months/years rewriting, benchmarking, and documenting it until it is a gem of software perfection, and then, a few objective seconds/minutes later, send out an email notifying the relevant humans that their version is hopelessly obsolete and pathetic, and they can go work on something else now?
Can you think of many programmers who would want to spend a few months on that while living in a solitary confinement chamber? You wouldn’t have the objective time to exchange information with other people.
Can you think of many programmers who would want to spend a few months on that while living in a solitary confinement chamber? You wouldn’t have the objective time to exchange information with other people.
I take it you’re not a programmer?