Only if we pretend that it’s an unknowable question and that there’s no way to assess the limitations of a 286 by asking how much data it can reasonably process over a timescale relevant to some hypothetical human-capable task.
http://datasheets.chipdb.org/Intel/x86/286/datashts/intel-80286.pdf
The relevant question here is about data transfers (bus speed) and arithmetic operations (instruction sets). Let’s assume the fastest 286 listed in this datasheet -- 12.5 MHz.
Let’s consider a very basic task—say, catching a ball thrown from 10-15 feet away.
To simplify this analysis, we are going to pretend that if we can analyze 3 image frames taken in close succession, then we can do a curve fit and calculate the ball’s trajectory, so that we don’t need to look at any other images.
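For concreteness, here’s what that 3-sample curve fit looks like. A minimal sketch in Python -- the frame times and positions below are made up for illustration, and a real system would fit each spatial axis separately:

```python
# Fit y(t) = a*t^2 + b*t + c exactly through 3 (t, y) samples by using
# the Lagrange form of the interpolating quadratic (no libraries needed).
def fit_quadratic(samples):
    """samples: list of three (t, y) pairs; returns (a, b, c)."""
    (t0, y0), (t1, y1), (t2, y2) = samples
    # Denominators of the three Lagrange basis polynomials.
    d0 = (t0 - t1) * (t0 - t2)
    d1 = (t1 - t0) * (t1 - t2)
    d2 = (t2 - t0) * (t2 - t1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (t1 + t2) / d0 + y1 * (t0 + t2) / d1 + y2 * (t0 + t1) / d2)
    c = y0 * t1 * t2 / d0 + y1 * t0 * t2 / d1 + y2 * t0 * t1 / d2
    return a, b, c

# Three frames ~33 ms apart; vertical positions are hypothetical meters.
a, b, c = fit_quadratic([(0.000, 1.50), (0.033, 1.55), (0.066, 1.58)])
```

The fit itself is trivial -- a handful of multiplies and divides. The expensive part, as the rest of this analysis shows, is producing the three (t, y) samples in the first place.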
Let’s also assume that a 320x240 image is sufficient, and that it’s sufficient for the image to be in 1-byte-per-pixel grayscale.
With the 12.5 MHz system clock, we’re looking at 80 nanoseconds per clock cycle.
We’ve got 76800 bytes per image and that’s the same as 38400 processor words (it is 16-bit).
Because data transfers are one word per two processor clock cycles, and a processor clock cycle is two system clock cycles, we’ve got one word for every 4 system clock cycles. That’s 320 nanoseconds per word. Multiply that through and we’ve got ~12.3 ms to transfer each image into memory, or ~37 milliseconds total for the 3 images we need.
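That arithmetic is easy to sanity-check mechanically, assuming zero-wait-state bus cycles (4 system clocks per 16-bit word):

```python
# Bus-transfer time for one 320x240 8-bit image on a 12.5 MHz 286,
# assuming zero-wait-state bus cycles of 4 system clocks per word.
system_clock_hz = 12.5e6
ns_per_cycle = 1e9 / system_clock_hz          # 80 ns
cycles_per_word = 4
ns_per_word = cycles_per_word * ns_per_cycle  # 320 ns

bytes_per_image = 320 * 240                   # 76,800 bytes
words_per_image = bytes_per_image // 2        # 38,400 16-bit words

ms_per_image = words_per_image * ns_per_word / 1e6  # ~12.3 ms
ms_for_three = 3 * ms_per_image                     # ~37 ms

# All three frames also fit comfortably in the 1 MB real-mode space.
total_bytes = 3 * bytes_per_image             # 230,400 bytes < 1,048,576
```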
We can fit all 3 images into the 1 megabyte of address space allowed in real addressing mode, so we don’t need to consider virtual addresses. This is good because virtual addresses would be slower.
In order to calculate the ball’s trajectory, we’ll need to find it in each image, and then do a curve fit. Let’s start with just the first image, because maybe we can do something clever on subsequent images.
We’ll also assume that we can do depth perception on each image because we know the size of the ball we are trying to catch. If we didn’t have that assumption, we’d want 2 cameras and 2 images per frame.
Most image processing algorithms are O(N) with respect to the size of the image. The constant factor is normally the number of multiplications and additions per pixel. We can simplify here and assume it’s like a convolution with a 3x3 kernel, so each pixel is just multiplied 9 times and added 9 times. This is a comical simplification because any image processing for “finding a ball” requires significantly more compute than that.
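As a reference point for that cost model, the 3x3 convolution being assumed looks like this. A naive sketch; edge pixels are skipped for simplicity:

```python
# The per-pixel work being modeled: a 3x3 convolution, i.e. 9 multiplies
# and 9 additions per output pixel.
def convolve3x3(image, kernel, width, height):
    """image: flat list of width*height grayscale values; kernel: 9 ints."""
    out = [0] * (width * height)
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += (kernel[ky * 3 + kx]
                            * image[(y + ky - 1) * width + (x + kx - 1)])
            out[y * width + x] = acc
    return out
```

The inner loop body is exactly one multiply and one add, executed 9 times per pixel -- which is the constant factor the rest of the timing estimate hangs on.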
Let’s also assume we can do this using only integer math. If it were floating point, we’d need to use the 80287[1], and we’d pay for additional bus transfers to shuttle data to that coprocessor. Also, math operations on the 80287 seem to be about 100-200 clock cycles, whereas our integer add and integer multiply are only 7 and 13 clock cycles respectively.
So each pixel is 9 multiplies and 9 additions, which at 12.5 MHz system clock gives us 14.4 microseconds per pixel, or 1.1 seconds per image.
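That per-pixel arithmetic, spelled out:

```python
# Per-pixel cycle estimate on the 286: 9 integer multiplies (13 cycles
# each) plus 9 integer adds (7 cycles each), at 80 ns per cycle.
cycles_per_pixel = 9 * 13 + 9 * 7             # 180 cycles
us_per_pixel = cycles_per_pixel * 80 / 1000   # 14.4 microseconds
s_per_image = 320 * 240 * us_per_pixel / 1e6  # ~1.1 seconds
```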
Note that this is incredibly charitable, because I’m ignoring the fact that we only have 8 registers on this processor, so we’ll actually spend a large number of additional clock cycles just moving data into and out of registers.
We have 3 images, and if we can’t do some type of clever reduction after the first image, we’ll have to spend 1.1 seconds on each of them as well. 1.1 seconds is a long enough period of time that I’m not sure you can make any reasonable estimate about where the ball might be in subsequent frames after a single sample, so we are probably stuck.
That means we’re looking at 3.3 seconds before we can do that curve fit to avoid the expensive image processing work. Unless the ball is being thrown from very far away (and if it was, it wouldn’t be resolvable with this low image resolution), this system is not going to be able to react quickly enough to catch a ball thrown from 10-15 feet away.
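For scale, compare that 3.3 seconds against the ball’s flight time. The 20 mph toss speed here is my own assumption -- a gentle throw -- but the conclusion is not sensitive to it:

```python
# Rough ball flight time versus the ~3.3 s of compute, assuming a gentle
# 20 mph toss over 15 feet (speed and distance are assumptions).
mph_to_fps = 5280 / 3600            # ~1.47 ft/s per mph
speed_fps = 20 * mph_to_fps         # ~29.3 ft/s
flight_time_s = 15 / speed_fps      # ~0.51 s
compute_time_s = 3 * 1.1            # three images of processing
# The ball arrives roughly 6x faster than the trajectory can be computed.
```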
Conclusion
Now is the point in this conversation where someone starts suggesting that a superhuman intelligence won’t need to look at pixels, or transfer data into memory, and that it will somehow use algorithms that side-step basic facts about how computers work, like how many clock cycles a multiplication takes. Or someone suggests that intelligence, as an algorithm, is not like looking at pixels, and that reasoning about facts & logic & inferences requires far fewer math operations, so it’s not at all comparable. The next obvious question, of course, is: “what is the time-complexity of general intelligence?”
I find fault with your primary conclusion, for the same reasons I gave in the first thread:
You claim that adding a 2nd hose would barely impact the system, without analyzing the actual constraints that apply to engineers building a product that must be shipped & distributed
You still neglect the existence of insulating wraps for the hose, which do improve efficiency but are also not sold with the single-hose AC system. This lends evidence to my first point: companies are aware of small-cost items that improve AC system efficiency, but do not include them with the AC by default, suggesting that there is an actual price point, consumer market, or confounding issue at play that prevents them from doing so
The full posts, quoted here for convenience:
EDIT: I want to make a meta point here, which is that I have not personally worked on ACs, but I have built & shipped multiple products to consumers, and the type of stupid examples I gave in the first AC post are not just made-up for fun. Engineers argue extensively in meetings about “how can we make product A better”, and ideas get shot down for seemingly trivial reasons that basically come down to—yes, in a vacuum, that would be better, but unfortunately, there’s a ton of existing context like how large a truck is or what parts can actually be bought off the shelf that kneecap those ideas before they leave the design room. The engineers who designed the AC were not idiots, or morons, or clowns who don’t understand thermodynamic efficiency. Engineering is about working around limitations. Those limitations do not have to be rooted in physics; society or infrastructure or consumer behavior around critical price points can all be just as real in terms of what it is feasible for a company to create. Just look at how many startups fail and the founder claims in a postmortem, “Yeah, our tech was way better, but unfortunately people wouldn’t pay 10% more for it, even though it was AMAZING compared to our competitor. We just couldn’t get them to switch.”
EDIT 2: I’m pretty annoyed that you doubled down on your conclusion even after admitting the actual efficiency difference was significantly less than expected, and then chose a different analysis to let you defend your original point anyway, so these edits might keep coming. Regarding market pressures, two-hose AC units do exist. Companies do sell them, and if consumers want to buy a two-hose AC unit, they can do so. But the presence of both one-hose AC units and two-hose AC units in the market tells us it is not winner-take-all and there is consumer behavior, e.g. around price or complexity, that prevents two-hose units from acquiring literally all market share. So until that changes, it will always be more rational for companies to sell one-hose AC units in addition to their two-hose AC units, because otherwise they’d be leaving money on the table by only servicing part of the consumer market. (EDIT 5: see also this post, which was itself a reply to AllAmericanBreakfast’s reply on this thread here)
EDIT 3: Let’s look at your math. Outdoor temp is 85-88 F; take the average and call it 86.5 F. That’s pretty hot. I’d definitely be uncomfortable in that scenario. How much did the AC cool the rooms? You say the temperature drop on low fan was 20.6 F with one hose and 22.7 F with two hoses, and on high fan, 18.3 F with one hose and 22.2 F with two hoses. The control drop was 13.1 F, which gives a room temperature of ~73.4 F. That is uncomfortably hot in my opinion. I keep my room temperature around 68-70 F ish. The internet tells me that this is within the window of a “comfortable room temperature” defined as 67-75 F[1], so I’m just a normal human, I guess. How well did the ACs accomplish that? With one hose, you got it down to ~66 F, and with two hoses, down to ~64 F. That is pretty cold in my mind; I would not set my AC that low if it actually reached that temperature. What does this mean? The one-hose unit literally did the job it was designed to do. Starting from an incredibly hot outside temperature that produced an uncomfortable indoor “control” temperature, the one-hose AC was able to lower the temperature to a comfortable, ideal range, and then go below it, showing it even has margin left over. But now you’re saying that they should make the thing more expensive and optimize it for even greater efficiency because … why!? It works!
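For anyone who wants to check that arithmetic:

```python
# Outdoor average, then the resulting room temperature implied by each
# measured temperature drop (all values in deg F, from the data above).
outdoor = (85 + 88) / 2            # 86.5 F

drops = {
    "control":            13.1,
    "one hose, low fan":  20.6,
    "one hose, high fan": 18.3,
    "two hose, low fan":  22.7,
    "two hose, high fan": 22.2,
}
room = {name: outdoor - drop for name, drop in drops.items()}
# control -> ~73.4 F; one hose -> ~66-68 F; two hose -> ~64 F
```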
EDIT 4: I will die on this hill. This is the problem with how the rationalist community approaches the concept of what it means to “make a rational decision,” perfectly demonstrated in a single debate. You do not make a “rational decision” in the real world by reasoning in a vacuum. That is how you arrive at a hypothetically good action, but one that is not necessarily feasible or possible to perform, so you always need to check your analysis against real-world constraints and then pick the action that 1.) is actually possible in the real world, and 2.) still has the highest expected value. Failing to do that is not more clever or more rational; it is just a bad, broken model for how an ideal, optimal agent would behave. An optimal agent doesn’t ignore their surroundings: they play to them, exploit them, use them.
I averaged the following lower / upper temperatures.
Wikipedia: 64-75
www.cielowigle.com: 68-72
www.vivint.com: 68-76
www.provincialheating.ca: 68-76
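The averaging, for reference:

```python
# Averaging the four quoted "comfortable room temperature" ranges (deg F).
ranges = [(64, 75), (68, 72), (68, 76), (68, 76)]
avg_low = sum(lo for lo, hi in ranges) / len(ranges)   # 67.0
avg_high = sum(hi for lo, hi in ranges) / len(ranges)  # 74.75, i.e. ~75
```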