My current best guess is that the estimate in my initial comment was correct and that for a typical case the efficiency loss is roughly 25-30% (e.g. when cooling from 85 to 70 in CA). I stand by the claim that your original post’s language was hyperbolic for an effect of this size, that your theoretical reasoning turned out to be wrong (e.g. you called my estimate “obviously ridiculous”), and that you are misunderstanding many portable AC customers’ preferences and the main costs from a second hose.
Post-morteming my estimate, I made two errors that happened to cancel out. I overestimated the temperature of exhaust by looking at this online comment (though I also think your AC might be on the unusually bad end), and I overlooked a crucial consideration raised by denkenberger here that reduces the efficiency loss ~2x.
ETA: actually it seems like humidity is also quite a large consideration, maybe increasing the efficiency loss by 1.5x. So now my best guess is more like 35-40%, significantly higher than the 25-30% estimate.
The 25-30% result is roughly the impression I got by googling 1-hose vs 2-hose AC. I don’t think the experiment results presented in this post meaningfully change my best guess. I could still easily imagine that my guess is wrong, though I may not reply to further comments.
This was actually a kind of fun test case for a priori reasoning. I think that I should have been able to notice the consideration denkenberger raised, but I didn’t think of it. In fact when I started reading his comment my immediate reaction was “this methodology is so simple, how could the equilibrium infiltration rate end up being relevant?” My guess would be that my a priori reasoning about AI is wrong in tons of similar ways even in “simple” cases. (Though obviously the whole complexity scale is shifted up a lot, since I’ve spent hundreds of hours thinking about key questions.)
Note that some of the difference between these numbers comes from me stating (1-hose efficiency) / (2-hose efficiency) and John stating (2-hose efficiency) / (1-hose efficiency) in this post. This comment talks about (1-hose efficiency) / (2-hose efficiency) since those are the numbers we were discussing for most of the comment thread, including the initial comment that I’m reaffirming. Other ways of stating “25-30% efficiency loss” are: a 1-hose AC cools by 25-30% less than a 2-hose AC, or wastes 25-30% of the energy it uses, or requires 33-43% more energy than a 2-hose AC to achieve the same result.
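The equivalences between these framings are just algebra; a quick sketch (the loss values are the hypothetical endpoints of the 25-30% range):

```python
# Converting "X% efficiency loss" into "Y% more energy for the same cooling".
# loss = 1 - (1-hose efficiency) / (2-hose efficiency)
for loss in (0.25, 0.30):
    extra_energy = 1 / (1 - loss) - 1  # energy multiplier minus one
    print(f"{loss:.0%} loss -> {extra_energy:.0%} more energy")
# prints:
# 25% loss -> 33% more energy
# 30% loss -> 43% more energy
```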
This idea—that you should have been able to notice the issue with infiltration rates—is what I’ve been questioning when I ask “what is the computational complexity of general intelligence” or “what does rational decision making look like in a world with computational costs for reasoning”.
There is a mindset that people are simply not rational enough, and that if they were more rational, they wouldn’t fall into those traps. Instead, they would more accurately model the situation, correctly anticipate what will and won’t matter, and arrive at the right answer, just by exercising more careful, diligent thought.
My hypothesis is that whatever that optimal “general intelligence” algorithm[1] is—the one where you reason a priori from first principles, and then you exhaustively check all of your assumptions for which one might be wrong, and then you recursively use that checking to re-reason from first principles—it is computationally inefficient enough that for most interesting[2] problems, it is not realistic to assume it can run to completion in any reasonable[3] time on realistic computational resources, e.g. a human brain, or a supercomputer.[4]
I suspect that the human brain is implementing some type of randomized, vaguely Monte-Carlo-like algorithm when reasoning, which is how people can (1) often solve problems in a reasonable amount of time[5], (2) often miss factors during a priori reasoning but understand them easily after they’ve seen them confirmed experimentally, (3) different people miss different things, (4) often if someone continues to think about a problem for an arbitrarily long period of time[6] they will continue to generate insights, and (5) often those insights generated from thinking about a problem for an arbitrarily long period of time are only loosely correlated[7].
In that world, while it is true that you should have been able to notice the problem, there is no guarantee on how much time it would have taken you to do so.
[1] The “God algorithm” for reasoning, to use a term that Jeff Atwood wrote about in this blog post. It describes the idea of an optimal algorithm that isn’t possible to actually use, but the value of thinking about that algorithm is that it gives you a target to aim towards.
[2] The use of the word “interesting” is intended to describe the nature of problems in the real world, which require institutional knowledge, or context-dependent reasoning.
[3] The use of the word “reasonable” is intended to describe the fact that if a building is on fire and you are inside of it, you need to calculate the optimal route out of that burning building in less than a few minutes in order to maximize your chance of survival. Likewise, if you are tasked with solving a problem at work, you have somewhere between weeks and months to show progress or be moved to a separate problem. For proving a theorem, it might be reasonable to spend 10+ years on it if there’s nothing necessitating a more immediate solution.
[4] This is mostly based on the observation that for any scenario with, say, some fixed number of “obvious” factors influencing it, there are effectively arbitrarily many “other” factors that may influence the scenario, and the process of deterministically ordering that arbitrarily long list from “most likely to impact the scenario” to “least likely” and then proceeding down it to manually check whether each “other” factor actually does matter has an arbitrarily high computational cost.
[5] Feel free to put “solve” in quotes and read this as “halt in a reasonable time” instead. Getting the correct answer is optional.
[6] Like mathematical proofs, or the thing where people take a walk and suddenly realize the answer to a question they’ve been considering.
[7] It’s like the algorithm jumped from one part of solution space where it was stuck to a random, new part of the solution space, and that’s where it made progress.
My take is: you shouldn’t expect to get everything right when you try to reason about a moderately complicated system abstractly, no matter how smart you are. You’d like to have a lot of practice so that you can do your best, can get a sense for what kinds of things you tend to miss and how they change the bottom line, can better understand what the returns to thinking are typically like, and so on. This was a fun and unusually self-contained example, where we happened to miss an important and very clean consideration that can be appreciated with very little domain knowledge. (I think realistic cases are usually much more of a mess.)
In this case, I feel pretty confident that I would have noticed this consideration if I thought about the question for a few hours (and probably less), and I think that it would become obvious if you tried to write out your reasoning sufficiently carefully. But even if I spend hundreds of hours thinking about some issue with AI, I expect to miss all kinds of important and obvious-in-retrospect considerations in a roughly analogous way. (This is related to my view that verification is easier than generation.)
I don’t think that means we shouldn’t try to figure things out by thinking about them. Thinking about what’s going on is an important part of how to get to correct answers quickly and an important complement to empirical data (you need to think when empirical data is hard to come by, to help interpret history and the results of experiments, to prioritize experimentation, etc.).
I’m not sure if your comment is disagreeing with any of this. It sounds like we’re on the same page about the fact that exact reasoning is prohibitively costly, and so you will be reasoning approximately, will often miss things, etc.
Of course, I think even if you successfully notice every on-paper consideration, there are still likely to be messy facts about the real world that you either didn’t know or obviously had no hope of capturing in a model that’s simple enough to reason about. That said, I think that reasoning in practice is basically never purely in this regime (and if you do literally get to this regime for a question, in some sense you’ve probably spent too long thinking about the question relative to doing something else), so in practice wrong conclusions are almost always due to a combination of both “not knowing enough” and “not thinking hard enough” / “not being smart enough.”
I agree. The term I’ve heard to describe this state is “violent agreement”.
so in practice wrong conclusions are almost always due to a combination of both “not knowing enough” and “not thinking hard enough” / “not being smart enough.”
The only thing I was trying to point out (maybe more so for everyone else reading the commentary than for you specifically) is that it is perfectly rational for an actor to “not think hard enough” about some problem and thus arrive at a wrong conclusion (or correct conclusion but for a wrong reason), because that actor has higher priority items requiring their attention, and that puts hard time constraints on how many cycles they can dedicate to lower priority items, e.g. debating AC efficiency. Rational actors will try to minimize the likelihood that they’ve reached a wrong conclusion, but they’ll also be forced to minimize or at least not exceed some limit on allowed computation cycles, and on most problems that means the computation cost + any type of hard time constraint is going to be the actual limiting factor.
Although even that, I think that’s more or less what you meant by
in some sense you’ve probably spent too long thinking about the question relative to doing something else
In engineering R&D we often do a bunch of upfront thinking at the start of a project, and the goal is to identify where we have uncertainty or risk in our proposed design. Then, rather than spend 2 more months in meetings debating back-and-forth who has done the napkin math correctly, we’ll take the things we’re uncertain about and design prototypes to burn down risk directly.
I overlooked a crucial consideration raised by denkenberger here that reduces the efficiency loss ~2x.
Thanks! It looks like you are referring to the net infiltration flow rate impact on the building. But there was also the consideration of humidity, and I did not see any humidity measurements in the data, so we are not able to resolve that one. Humidity sensors are fairly cheap, but notoriously unreliable. But one could measure the amount of water condensed pretty accurately to get an idea of how much of the air conditioner’s cooling is going to condensing water versus cooling air (sensibly).
I didn’t know how to estimate this effect but I was guessing the total impact on the bottom line is much smaller than the factors of 2 from the other factors, at least in CA (though it’s definitely another factor I overlooked). I’m comfortable treating 85 to 70 in CA as a typical use case to benchmark efficiency for a portable AC.
That guess is coming from the rough sense that dehumidifiers use much less energy than air conditioners. I don’t know if that’s right and reflects that dehumidifying is pretty cheap (at least in CA), or if dehumidifiers are just normally used for relatively small humidity changes, or if I’m wrong about relative energy use. I also have a sense that when I run an AC it just doesn’t produce very much water (and that the energy cost is like ~0.6kWh per liter).
Actually this seems pretty non-trivial to estimate. Do you know reasonable ballpark figures?
If you want to geek out on this you can use a psychrometric chart. For instance, if outdoor air is 85°F and 50% relative humidity (RH), that’s an enthalpy of about 35 BTU/lb of dry air. Typical exit air conditions on the cool side of an air conditioner are ~50°F and 100% RH, so ~20 BTU/lb of dry air. The dehumidification portion would be going to 85°F and ~30% RH, or ~29 BTU/lb of dry air, so ~40% of the heat removed is in the form of condensing water (latent). This means you would take the sensible part and multiply by about 1.7 to get the total load on the air conditioner. If you were not drawing in outdoor air, the latent load would be much lower. So overall I think you’re right that in CA the humidity correction is not as big as the other factors.
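For anyone who wants to reproduce the chart lookup numerically: the sketch below uses standard textbook approximations (Magnus formula for saturation pressure, the ASHRAE-style moist-air enthalpy relation in IP units) rather than exact chart values, so expect small discrepancies from a real psychrometric chart.

```python
import math

P_ATM_PSI = 14.696  # sea-level atmospheric pressure, psi

def sat_pressure_psi(t_f: float) -> float:
    """Saturation vapor pressure via the Magnus approximation (input in °F)."""
    t_c = (t_f - 32) * 5 / 9
    p_hpa = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))
    return p_hpa / 68.948  # hPa -> psi

def enthalpy_btu_per_lb(t_f: float, rh: float) -> float:
    """Moist-air enthalpy, BTU per lb of dry air (ASHRAE IP approximation)."""
    p_w = rh * sat_pressure_psi(t_f)
    w = 0.622 * p_w / (P_ATM_PSI - p_w)  # humidity ratio, lb water / lb dry air
    return 0.240 * t_f + w * (1061 + 0.444 * t_f)

h_outdoor = enthalpy_btu_per_lb(85, 0.50)  # ~35 BTU/lb (85°F, 50% RH)
h_supply  = enthalpy_btu_per_lb(50, 1.00)  # ~20 BTU/lb (50°F, saturated)
h_dry     = enthalpy_btu_per_lb(85, 0.30)  # ~29 BTU/lb (85°F, ~30% RH)

latent_fraction = (h_outdoor - h_dry) / (h_outdoor - h_supply)  # ~0.40
multiplier = 1 / (1 - latent_fraction)                          # ~1.7
```

Running this gives a latent fraction of ~40% and a sensible-to-total multiplier of ~1.7, matching the chart-based numbers above.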
I don’t think I can follow your calculation. My version would be:
You are intaking hot, wet outside air (wet from both high RH and high temp). You need to cool it and condense a bunch of water out of it. There’s some ratio that’s fixed by the humidity and temperature of the outside vs inside air. I think that’s what you are saying is around 40%? I think actually the number you are giving isn’t quite what this calculation needs, but I’ll run with it anyway.
If all the heat was coming in from outside air (either before turning on AC or from infiltration), then you’d have a fixed ratio of latent to sensible heat removed, so the ratio wouldn’t depend on how much additional infiltration you caused, and we could just ignore humidity when thinking about the efficiency loss.
But in fact some of the heat is coming in from other channels. I guess the other big one is sunlight through windows. That heat doesn’t come with any more humidity. Extra infiltration from 2-hose AC increases how much latent heat you need to remove per unit of sensible heat, by increasing the relative importance of infiltrated air vs sunlight and other sources of heat. So if we just calculate how much extra sensible heat you have to remove, we’ll underestimate the efficiency loss.
The total extra infiltrated heat is about 25% of what the AC removes. At equilibrium, that’s 25% of all the heat gain in the house. If 13% of heat gain is normally from infiltration, then replacing that with 75% normal heat and 25% new infiltration would increase the fraction of heat from infiltration all the way to 35%. (I was super wrong about the 13% going in, I was expecting 25-50%!)
So per unit of heating, you are also increasing the fraction of heat coming from infiltrated air by about 22 percentage points (from 13% to 35%).
For the heat coming from infiltration, the extra cost of dehumidifying is about 2⁄3 of the sensible heat removed. So per unit of sensible heat removed, you need to remove an additional 15% of a unit of latent heat.
If the AC exhaust was more humid than the inside, then this would be lower, but my sense is that AC exhaust is basically as dry as indoor air?
So the net effect would be to take you from 25% efficiency loss (ignoring humidity) up to roughly 40% efficiency loss, which is pretty huge.
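The chain of estimates above can be written out explicitly; all the inputs below are the rough guesses from this comment, not measured values:

```python
base_loss = 0.25              # efficiency loss ignoring humidity
extra_infiltration = 0.25     # extra infiltrated heat, as a fraction of total heat gain
baseline_infiltration = 0.13  # normal fraction of heat gain from infiltration

# New composition at equilibrium: 75% of the load keeps the old mix,
# 25% is all freshly infiltrated outdoor air.
infiltration_fraction = (1 - extra_infiltration) * baseline_infiltration + extra_infiltration
# ~0.35

increase = infiltration_fraction - baseline_infiltration  # ~0.22 (22 points)

latent_per_sensible = 2 / 3  # extra dehumidification per unit sensible heat, infiltrated air
extra_latent = increase * latent_per_sensible  # ~0.145, i.e. ~15% extra load

total_loss = base_loss + extra_latent  # ~0.40
```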
That was a super confusing calculation, definitely beyond my pay grade. I assume I got a ton of numbers/calculations wrong, that there were much simpler ways to do it, and that this overall computation is likely to be conceptually confused in one or more ways. So I’d be pretty curious for your bottom-line estimate or intuition about where it should have ended up.
(But I also understand if you want to stop talking about AC and put this thread to rest...)
I would say that is basically right. AC exhaust is about as humid as indoor air. The fraction of the heating load in the summer due to infiltration really does depend on how tight your building construction is. With the numbers Jeff was assuming for a very old house, infiltration would be a much larger percentage. There are some other sources of heat in a house that come with humidity, such as people and showers, but overall it is much less humidity than bringing in outdoor air (there is heat conduction through the walls, electricity use of lighting and appliances, etc.). So that might mean that it would take you from a 25% efficiency loss (ignoring humidity) up to a 35% efficiency loss, which is still a big deal. But I’m not sure if 85°F in California typically corresponds to 50% relative humidity.