These intelligences would still require power to run. Right now, even 1 trillion computers running at 100 watts would cost somewhere upwards of 50 billion dollars an hour, which is a far cry from “working without pay”. Producing these 6 trillion general intelligences you speak of would also be nontrivial.
That said, even one “human equivalent” AI could (and almost certainly would) far exceed human capabilities in certain domains. Several of these domains (e.g. self-improvement, energy production, computing power, and finance) would either directly or indirectly allow the AI to improve itself. Others would be impressive, but not particularly dangerous (natural language processing, for example).
These intelligences would still require power to run. Right now, even 1 trillion computers running at 100 watts would cost somewhere upwards of 50 billion dollars an hour, which is a far cry from “working without pay”. Producing these 6 trillion general intelligences you speak of would also be nontrivial.
Humans spend roughly 10% of their caloric intake on their brains, and Americans spend roughly the same share of their post-tax income on food -- so about 1% of the pay currently spent on Americans goes towards their cognition, on average. The average American worker also works 46 (out of 168) hours per week.
We have no way of knowing the material costs of constructing these devices, nor do we know how energy-efficient they will be compared to modern human brains. But given how much of a brain’s energy is shed as waste heat, and how far a brain sits from the theoretical limits on computational efficiency and computational density, it’s fairly safe to say that the comparative construction cost of such brains will be essentially negligible next to the cost of the average worker today. If we treat the electrical-operational costs as equivalent to the energy costs of a human, then AGIs will have 1% of a human’s. And they will work 4x as long—so that’s already a 400:1 ratio of cost per human to cost per AGI for operational budget. Then factor in the absence of travel energy expenditures, of plumbing investment, and of other human-foible-related elements that machines just don’t have—and the picture quickly transitions towards that 1,000 AGIs per person being a “reasonable” number for economic reasons. (Especially since at least a large minority of them will, early on, be applied towards economic expansion purposes.)
So certainly, these intelligences would still require power to run. But they’d require vastly less—for the same economic output—than would humans. And all that economic output will be bent towards economic goals … such as generating energy.
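To put quick numbers on that ratio (every input here is one of the assumptions above, not measured data):

```python
# Back-of-envelope check of the operational-cost ratio argued above.
# All inputs are this comment's own assumptions, not measurements.
human_energy_share = 0.10 * 0.10   # ~10% of calories to the brain x ~10% of pay to food = 1% of pay
agi_hours_per_week = 168           # an AGI can run around the clock
human_hours_per_week = 46          # average American work week, as claimed above

# Cost per unit of work: the AGI pays ~1% of a human's energy bill
# and works ~3.65x the hours.
cost_ratio = (1 / human_energy_share) * (agi_hours_per_week / human_hours_per_week)
print(round(cost_ratio))  # ~365
```

The “4x as long” above rounds 168/46 up a bit, which is how ~365:1 becomes the rounder 400:1.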
That said, even one “human equivalent” AI could (and almost certainly would) far exceed human capabilities in certain domains.
I don’t find this to be a given at all. Brain-emulations would possess, most likely, equivalent capacities to human brains. There is no guarantee that any given AGI will be capable of examining its own code and coming up with better solutions than the people who created it. Nor is there a guarantee that an AGI will be better than its creators at accessing computation. Your further claims regarding energy production and finance just make no sense whatsoever.
Certainly, there do exist many models of conceived AGI that would possess many of these traits, but quite frankly it’s just a bit presumptuous to assume that those models are “almost certainly” the ones that will come about. There are just as many where AGI will start out dumber than people in most ways, or where people will augment themselves routinely before AGI kicks off, etc.
We have no way of knowing the material costs of constructing these devices, nor do we know how energy-efficient they will be compared to modern human brains.
We can come up with at least a preliminary estimate of cost. The lowest estimate I have seen for the computational power of a brain is 38 pflops. The lowest cost of processing power is currently $1.80/gflops. This puts the cost of a whole-brain emulation at a bit under $70M in the best-case scenario. Assuming Moore’s law holds, that number should halve every year. Comparatively speaking, human brains are far more energy-efficient than our computers. The best we have is about 2 gflops/watt, as opposed to at least 3,800,000 gflops/watt (assuming 10 W) by the human brain. So unless there is a truly remarkable decrease (several orders of magnitude) in the cost of computing power, operating the computational equivalent of a human brain will be costly.
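As a sanity check on those figures (the 38 pflops, $1.80/gflops, and 10 W numbers are the estimates cited above, nothing more):

```python
# Check the cost and efficiency figures cited above.
brain_pflops = 38                          # lowest cited estimate of brain compute
brain_gflops = brain_pflops * 1e6          # 1 pflops = 1,000,000 gflops
dollars_per_gflops = 1.80                  # cheapest cited processing power

emulation_cost = brain_gflops * dollars_per_gflops
print(f"${emulation_cost / 1e6:.1f}M")     # $68.4M, i.e. "a bit under $70M"

computer_gflops_per_watt = 2               # best cited hardware efficiency
brain_gflops_per_watt = brain_gflops / 10  # assuming the brain draws ~10 W
efficiency_gap = brain_gflops_per_watt / computer_gflops_per_watt
print(f"{efficiency_gap:,.0f}x")           # ~1,900,000x, several orders of magnitude
```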
That said, even one “human equivalent” AI could (and almost certainly would) far exceed human capabilities in certain domains.
I don’t find this to be a given at all. Brain-emulations would possess, most likely, equivalent capacities to human brains.
I was unclear. I consider brain-emulations to be humans, not AIs. The majority of possible AGIs that are considered to be at the human level will almost certainly have different areas of strength and weakness from humans. In particular, they should be far superior in those areas where our specialized artificial intelligences already exceed human ability (math, chess, Jeopardy!, etc.).
There are just as many where AGI will start out dumber than people in most ways, or where people will augment themselves routinely before AGI kicks off, etc.
I did stipulate “human-equivalent” AGI. I am well aware of the possibility that people will augment themselves before AGI comes about. We already do, just not through direct neural interfaces. I’m studying neuroscience with the goal of developing tools to augment intelligence.
Verbal sleight of hand: “human-equivalent” includes Karl Childers just as much as it does Sherlock Holmes.
We can come up with at least a preliminary estimate of cost. The lowest estimate I have seen for the computational power of a brain is 38 pflops. The lowest cost of processing power is currently $1.80/gflops. This puts the cost of a whole-brain emulation at a bit under $70M in the best-case scenario. Assuming Moore’s law holds, that number should halve every year.
A couple of points here:
In 1961 that same cost would have been 38 × ($1.1×10^12) × 10^6 ≈ $4.2×10^19 -- roughly 42 million trillion dollars.
The cost per gflop is decreasing exponentially, not linearly, unlike what Moore’s Law would extrapolate to.
Moore’s Law hasn’t held for several years now regardless. (See: “Moore’s Gap”).
This all rests on the notion of silicon as the primary substrate. That’s just not likely moving forward; a major buzz in theoretical computer science is the “diamondoid substrate”—by which they mean chemical-vapor-deposited graphene doped with various substances to create a computing substrate that is several orders of magnitude ‘better’ than silicon, thanks to several properties: its ability to retain semiconductive status at high temperatures, higher operating frequencies for its logic gates, and greater potential transistor density. (Much of the energy cost of modern computers goes into heat dissipation, by the way.)
If the cost per gflop continues to trend similarly over the next forty years, and if AGI doesn’t become ‘practicable’ until 2050 (a common projection) -- then the cost per gflop may well be so negligible that the 1000:1 ratio would seem conservative.
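For what it’s worth, that extrapolation works out as follows (the ~$70M baseline and the yearly halving are assumptions from earlier in this thread, not predictions of mine):

```python
# Extrapolate the ~$70M brain-equivalent compute cost out to ~2050,
# assuming (as above) the cost halves every year.
cost_now = 70e6      # dollars, from the estimate earlier in the thread
years = 40           # roughly the gap from this discussion to a 2050 AGI projection

cost_2050 = cost_now / 2 ** years
print(f"${cost_2050:.5f}")  # a few thousandths of a cent -- effectively negligible
```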
I was unclear. I consider brain-emulations to be humans, not AIs.
Fair enough. I include emulations as a form of AGI, if for no other reason than there being a clear path to the goal.
In particular, they should be far superior in those areas where our specialized artificial intelligences already exceed human ability (math, chess, Jeopardy!, etc.).
This does not follow. Fritz—the ‘inheritor’ to Deep Blue—was remarkable not because it was a superior chess-player to Deep Blue … but because of the way in which it was worse. Fritz initially lost to Kasparov, yet was more interesting. Why? What made it so interesting?
Fritz had the ability to be fuzzy, unclear, and ‘forget’. To ‘make mistakes’. And this made it a superior AI implementation to the perfect monolithic number-cruncher.
I see this sentiment in people in AGI all the time—that AGIs will be perfect, crystalline, numerical engines of inerrant geometry. I used to believe that myself. I’ve learned better. :)
We already do, just not through direct neural interfaces. I’m studying neuroscience with the goal of developing tools to augment intelligence.
Sir, in reading this I have only one suggestion: Let’s say you and I drop the rest of this B.S. and you explain to me what you mean. Because I’m all guinea-pig-eager over here.
My original point was that, based on current trends, AGIs would remain prohibitively expensive to run, as power requirements have not been dropping with Moore’s law. The graphene transistors look like they could solve the power requirement problem, so it looks like I was wrong.
When I said ‘one “human equivalent” AI could (and almost certainly would) far exceed human capabilities in certain domains.’ I simply meant that it is unlikely that a (nonhuman) human-level AI would possess exactly the same skillset as a human. If it was better than humans at something valuable, it would be a game changer, regardless of it being “only” human-level.
This idea seems not to be as clear to readers as it is to me, so let me explain. A human with a pocket calculator is far better at arithmetic than a human alone. Likewise, a human with a notebook is better at memory than an unassisted human. This does not mean notebooks are very good at storing information; it means that people are bad at it. An AI that is as computationally expensive as a human will almost certainly be much better at the things people are phenomenally bad at.
An AI that is as computationally expensive as a human will almost certainly be much better at the things people are phenomenally bad at.
I’m sorry, this is just plain not valid. I’ve already explained why. An AI that is “as computationally expensive as a human” is no more likely to be “much better at the things people are phenomenally bad at” than is a human. All of the computation that goes on in a human would quite likely need to be replicated by that AGI. And there is simply no guarantee that it would be any better than a human when it comes to how it accesses narrow AI mechanisms (storage methods, calculators, etc.).
I really do wish I knew why you folks always seem to assume this is an inerrant truth of the world. But based on what I have seen—it’s just not very likely at all.
I’m not sure exactly what part of my statement you disagree with.
1. People are phenomenally bad at some things.
A pocket calculator is far better than a human when it comes to performing basic operations on numbers. Unless you believe that a calculator is amazingly good at arithmetic, it stands to reason that humans are phenomenally bad at it.
2. An AGI would be better than people in the areas where humans suck.
I am aware of the many virtues of fuzzy, unclear processes that arrive at answers to complex questions through massively parallel processing. However, there are some processes that are better done through serial, logical processes. I don’t see why an AGI wouldn’t pick this low-hanging fruit. My reasoning is as follows: please tell me which part is wrong.
I. An emulation (not even talking about nonhuman AGIs at this point) would be able to perform as well as a human with access to a computer with, say, Python.
II. The way humans currently interact with computers is horribly inefficient. We translate our thoughts into a programming language, which we then translate into a series of motor impulses corresponding to keystrokes. We then run the program, which displays the feedback in the form of pixels of different brightness, which are translated by our visual cortex into shapes, which we then process for meaning.
III. There exist more efficient methods that, at a minimum, could bypass the barriers of typing speed and visual processing speed. (I suspect this is the part you disagree with)
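To make point II concrete, here is a rough bandwidth comparison (every figure is a ballpark assumption chosen for illustration, nothing more):

```python
# Rough bandwidth of the keyboard channel versus what hardware can move.
typing_wpm = 80                        # a fast typist
chars_per_word = 6                     # ~5 letters plus a space
typing_bytes_per_sec = typing_wpm * chars_per_word / 60
print(f"typing: ~{typing_bytes_per_sec:.0f} bytes/s")   # ~8 bytes/s

usb2_bytes_per_sec = 480e6 / 8         # even dated USB 2.0 moves ~60 MB/s
print(f"headroom: ~{usb2_bytes_per_sec / typing_bytes_per_sec:,.0f}x")
```

Even granting a generous typing speed, the human-side channel is millions of times narrower than what commodity hardware can already carry.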
What have you seen that makes you think AGIs with some superior skills to humans won’t exist?
What have you seen that makes you think AGIs with some superior skills to humans won’t exist?
Human-equivalent AGIs. That’s a vital element, here. There’s no reason to expect that the AGIs in question would be better-able to achieve output in most—if not all—areas. There is this ingrained assumption in people that AGIs would be able to interface with devices more directly—but that just isn’t exactly likely. Even if they do possess such interfaces, at the very least the early examples of such devices are quite likely to only be barely adequate to the task of being called “human-equivalent”. Karl Childers rather than Sherlock Holmes.
There’s no reason to expect that the AGIs in question would be better-able to achieve output in most—if not all—areas.
I said some, not most or all. I expect there to be relatively few of these areas, but large superiority in some particular minor skills can allow for drastically different results. It doesn’t take general superiority.
There is this ingrained assumption in people that AGIs would be able to interface with devices more directly—but that just isn’t exactly likely.
There is a reason we have this assumption. Do you think that translating our thoughts into motor nerve impulses that operate a keyboard and processing the output of the system through our visual cortex before assigning meaning is the most efficient system?
Why is a superior interface unlikely?
Humans can improve their interfacing with computers too... though we will likely interact more awkwardly than AGIs will be able to. From The Onion, my favorite prediction of the man-machine interface.
Is that “Humans can also improve their interfacing with computers” or “Humans can improve their interfacing with computers as well as AGI could”?
Edited.
Because it will also require translation from one vehicle to another. The output of the original program will require translation into something other than logging output. Language, and the processes that formulate it, does not happen much quicker than the act of speaking does. And we have plenty of programs out there that translate speech into text. Shorthand typists are able to keep up with multiple conversations, in real time, no less.
And, as I have also said: early AGIs are likely to be idiots, not geniuses. (If for no other reason than the fact that Whole Brain Emulations are likely to require far more time per neuronal event than a real human does. I have justification for this belief; that’s how neuron simulations currently operate.)
Because it will also require translation from one vehicle to another.
Even if this is unavoidable, I find it highly unlikely that we are at or near maximum transmission speed for that information, particularly on the typing/speaking side of things.
And, as I have also said: early AGIs are likely to be idiots, not geniuses.
Yes. Early AGIs may well be fairly useless, even with the processing power of a chimpanzee brain. Around the time it is considered “human equivalent”, however, a given AGI is quite likely to be far more formidable than an average human.
I strongly disagree, and I have given reasons why this is so.
Basically what you are saying is that any AGI will be functionally identical to a human. I strongly disagree, and find your given reasons fall far short of convincing me.
Basically what you are saying is that any AGI will be functionally identical to a human.
No. What I have said is that “human-equivalent AGI is not especially likely to be better at any given function than a human is.” This is nearly tautological. I have explained that the various tasks you’ve mentioned already have methodologies which allow the function to be performed at nearly or equal to realtime speeds.
There is this deep myth that AGIs will automatically—necessarily—be “hooked into” databases or have their thoughts recorded into terminals which will be able to be directly integrated with programs, and so on.
That is a myth. Could those things be done? Certainly. But is it guaranteed?
By no means. As the example of Fritz shows—there is just no justification for this belief that merely because it’s in a computer it will automatically have access to all of these resources we traditionally ascribe to computers. That’s like saying that because a word-processor is on a computer it should be able to beat video games. It just doesn’t follow.
So whether you’re convinced or not, I really don’t especially care at this point. I have given reasons—plural—for my position, and you have not justified yours at all. So far as I can tell, you have allowed a myth to get itself cached into your thoughts and are simply refusing to dislodge it.
No. What I have said is that “human-equivalent AGI is not especially likely to be better at any given function than a human is.” This is nearly tautological.
This is nowhere near tautological, unless you define “human-level AGI” as “AGI that has roughly equivalent ability to humans in all domains” in which case the distinction is useless, as it basically specifies humans and possibly whole brain emulations, and the tiny, tiny fraction of nonhuman AGIs that are effectively human.
There is this deep myth that AGIs will automatically—necessarily—be “hooked into” databases or have their thoughts recorded into terminals which will be able to be directly integrated with programs, and so on.
Integration is not a binary state of direct or indirect. A pocket calculator is a more direct interface than a system where you mail in a query and receive the result in 4-6 weeks, despite the overall result being the same.
As the example of Fritz shows—there is just no justification for this belief that merely because it’s in a computer it will automatically have access to all of these resources we traditionally ascribe to computers.
I don’t hold that belief, and if that’s what you were arguing against, you are correct to oppose it. I think humans have access to the same resources, but the access is less direct. A gain in speed can lead to a gain in productivity.