Sorry for the delayed response, but FYI, in this case I don’t mean after X days; I mean that if you aren’t whitelisted in advance, the intended recipient never gets the email at all.
Thanks, this nicely encapsulated what I was circling around as I said it. I kept reaching for much more absurd cases, like “Mr. President, you’re a liar for not disclosing all the details of the US’s latest military hardware when asked.”
Even aside from that… I’m a human with finite time and intelligence; I don’t actually have the ability to consistently avoid lies of omission even if that were my goal.
Plus, I do think it’s relevant that many of our most important social institutions are adversarial. Trials, business decisions, things like that. I expect that there are non-adversarial systems that can outperform these, but today we don’t have them, and you need more than a unilateral decision to change such systems.

Professionally, I know a significant amount of information that doesn’t belong to me, that I am not at liberty to disclose due to contracts I’m covered under, or that was otherwise told to me in confidence (or found out by me inadvertently when it’s someone else’s private information). This information colors many other beliefs and expectations. If you ask me a question where the answer depends in part on this non-disclosable information, do I tell you my true belief, knowing you might be able to then deduce the info I wasn’t supposed to disclose? Or do I tell you what I would have believed, had I not known that info? Or some third thing? Are any of the available options honest?
For me, one of the key insights for thinking about this kind of situation was reading David Chapman’s In the Cells of the Eggplant. In short, even with common knowledge that a group of people believe that Truth and Honesty are important, there will still be (usually less severe) problems of accurate communication (especially of estimations and intentions and memories and beliefs) that at least rhyme with the problems Quakers have in dealing with Actors. Commitment to truth is not always sufficient even for the literal minded.
The nurse/shot example is an interesting one. In an ideal world the thing the nurse would communicate is, “There will be a little bit of pain, but you need to hold still anyway, because that little bit of pain is not important compared to getting the shot, plus you need to get used to little bits of inescapable pain in life, and my time is more valuable than yours.” This...is not actually something you can convey to a little kid, at least not quickly, nor would most kids react well if you did. It would plausibly be a disaster for pediatric offices everywhere if you tried. Some kids can learn to hold still while knowing a shot hurts when they’re still very young, while some grown adults still can’t, so it’s not like there’s an age at which a nurse who barely knows you can reliably change their approach.
English words (or words in any natural language) genuinely don’t have precise enough meanings for use as Parseltongue-analogs. This is tied into A Human’s Guide to Words, and also into what Terry Pratchett was pointing at in Hogfather.
You could try to create artificial languages that do, and speak only in them when aiming for truth. Math and code are real-world examples, but there are useful concepts we don’t know how to express with them (yet?). Historically every advance in the range of things we can express in math has been very powerful. Raikoth’s Kadhamic is a fictional one that would be incredibly useful if it existed.
This is interesting, and I’d like to see more/similar. I think this problem is incredibly important, because Five Words Are Not Enough.
My go-to head-canon example is “You should wear a mask.” Starting in 2020, even if we just look at the group that heard and believed this statement, the individual and institutional interpretations developed in some very weird ways, with bizarre whirls and eddies, and yet I still have no better short phrase to express what’s intended and no idea how to do better in the future.
One-on-one conversations are easy mode—I can just reply, “I can answer that in a tweet or a library, which are you looking for?” The idea that an LLM can approximate the typical response, or guess at the range and distribution of common responses, seems potentially very valuable.
In the case of the hedonic treadmill, could more inputs possibly have no effect at all? I was wondering this too.
There have been a few proposals that suggest possible paths to enable infinite total computational power over arbitrarily long future timeframes in a finite observable universe, given various assumptions about how the universe would evolve by default. The Omega Point and Dyson’s Eternal Intelligence configurations are two examples I think of first. These two ideas both rely on the observation that entropy increases without bound over time, and “without bound” implies there’s always a way to use available exergy more efficiently. Landauer’s limit is less limiting if you wait until the universe is 10x colder, or 100x colder, or...
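To make the temperature scaling concrete (this is just the standard Landauer relation, not anything specific to either proposal): the minimum energy to erase one bit is

$$E_{\min} = k_B T \ln 2$$

so if the background temperature drops by a factor of 10, each erasure costs 10x less energy, and a fixed reserve of exergy buys 10x as many irreversible operations if you’re willing to wait.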
Yes, I seem to remember him writing about it at the time, too. Not big posts, more public comments and short posts, not sure exactly where.
The invention of transformers circa 2017 would be the next time I remember a similar shift.
Sorry, got it. Sometimes it’s hard to guess the right level of detail.
First point: The comparison to make is “An area covered with solar panels” vs “an area covered with a metamaterial film that optically acts like the usual CPV lenses and trackers to focus the same light on a smaller area.” The efficiency benefit is for the same reasons CPV is more efficient than PV, but without the extra equipment and moving parts. It will only ever make sense if the films can be made cheaply, and right now they can’t. The usual additional argument for CPV is that it also makes it viable to use more expensive multi-junction cells, since the area of them that’s needed is much smaller, but we may be moving towards tandem cells within the next decade regardless.

In principle metamaterials can also offer a different option beyond conventional CPV, though this is my own speculation and I don’t think anyone is doing it yet even in the lab: separating light by frequency and guiding each range of frequencies to a cell tuned to that range. This would enable much higher conversion efficiencies within each cell, reducing the need for cooling. It would also remove the need for transparency to make multi-junction cells.
Second point: I’ve talked to the people operating these gasification systems, not just the startups. The numbers are all consistent. Yes, gasification costs energy, and gasifying coal would not make sense (unless you’re WW2 Germany). But the process can work with any organic material (including plastics and solvents), not just fossil fuels or dry biomass and the like, as long as the water concentration isn’t excessive (I’ve been told up to ~20%), and consumes a bit less than half the energy content of the fuel. The rest is what you can get from the syngas, most of which is in the hydrogen, and fuel cells are about 60% efficient if you want to use it to make electricity. That’s where the 30% number comes from. There are plants doing this using agricultural waste, MSW, ASR, construction waste, medical waste, hazardous waste, food waste, and other feedstocks that otherwise either have little value or are expensive to dispose of.
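To spell out the arithmetic behind that 30% figure (just the round numbers above chained together; not data from any specific plant):

```python
# Back-of-the-envelope energy flow for waste gasification -> electricity,
# using the approximate round numbers quoted above (assumptions, not plant data).
feedstock_energy = 1.0        # normalize the chemical energy of the waste to 1
syngas_fraction = 0.5         # a bit less than half the energy is consumed, so ~half ends up in the syngas
fuel_cell_efficiency = 0.6    # ~60% syngas-to-electricity via fuel cells

electricity_out = feedstock_energy * syngas_fraction * fuel_cell_efficiency
print(f"Electricity recovered: {electricity_out:.0%} of feedstock energy")  # -> 30%
```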
For sure, panels and land are cheap, and there’s no good reason to increase $/W just to gain efficiency. Except sometimes on rooftops where you want a specific building to gain maximum power from limited space, but you obviously wouldn’t use CPV with all the extra equipment in that kind of application.
The metamaterial thing (or to a lesser degree even just other advanced optics, like GRIN lenses) is that you can make thin films that behave, electromagnetically, like non-flat optical elements. Metamaterials can give you properties like anomalously high or low absorption coefficients and refractive indexes, negative refractive indexes, and highly tunable wavelength specificity. In some designs (see: Kymeta, Echodyne, and their sibling companies) you can make electrically reconfigurable flat, unmoving antennas that act like a steerable parabolic dish. The “how” involves sub-wavelength-scale patterning of the surface, and a lot more detail than I can fit in a comment.
And I don’t mean IGCC, I agree about that. I have spoken with several dozen companies in the waste gasification space; their technologies vary quite a bit in design and performance, but at the frontier these companies can extract ~50% of the chemical energy of mixed organic wastes (with up to 20% water content) in the form of syngas (~30% if you have to convert back to electricity and can’t use the extra heat), 2-4x what you get from a traditional incinerator+boiler (which are about 10-12% energy recovery).
Not specifically, though I do agree that thermal storage is worth pursuing, especially in cases where what you need is actually heat, whether industrial process heat or, in areas where it makes sense, district heating. I’m less convinced about the economics of it when we’re talking about storing heat to then make electricity, but we’ll see.
What I had in mind were some emerging technologies that can help reduce the efficiency penalty solar panels have in suboptimal conditions and outside peak daylight hours. Perovskite PV is one example, which could also get pretty cheap given how it’s made and what it’s made of. Another that’s still very early and expensive is something like metamaterial waveguide films that basically work like CPV, without lenses, mirrors, or tracking, which can boost efficiency under any conditions and, if good enough, can also make it feasible to use more expensive high efficiency multijunction cells.
From another angle, one that’s been around a while but hasn’t been very practical until recently, there are waste gasifiers that make syngas or hydrogen. Obviously we want to minimize production of all kinds of waste, but the fact remains that there’s a lot of stored energy in discarded non-recyclable plastics and biomass, and these systems can capture over 50% of the energy content in waste that hasn’t been sorted, while separating out the inorganics (glass, ceramics, metals) for recovery and recycling. We’re starting to see some municipal use cases as well as hazardous waste use cases. And depending on the application the syngas can be used to make dispatchable electricity, or in some cases to make synthetic hydrocarbons by coupling it with a Fischer-Tropsch process. It’ll never be more than a small fraction of the total electricity mix, but stable, cheap piles of garbage and a 100 MT/day gasifier might be great where the alternative is multi-day or seasonal energy storage.
The grid storage situation would absolutely be better with more nuclear.
Aside from that, I agree with pretty much everything you wrote (I’m a cleantech consultant; I also do these kinds of analyses a lot). It’s very well thought out.
I would add a few extra variables that might be worth considering.
There are a lot of other things we’re going to need to transition away from fossil fuels, and the alternatives are very energy intensive and, at present, very capital intensive. (Right now, that means early plants need 24/7 power to be even close to viable, even with subsidies.) Chemicals, as well as liquid fuels for shipping and aviation, are the big obvious ones for me. Any substitute will be more electricity intensive, but we should be able to get capex down over time. Possibly to the point that it makes sense to tolerate lower utilization rates when electricity is abundant, enabling overbuilding and reducing grid storage needs. But in any case these shifts will change when and where we consume a significant fraction of world energy use.
There are some early-commercial-stage solar technologies that can plausibly better spread production across the day and year, reducing daily storage requirements somewhat.
We’re going to build a lot of Li-ion battery vehicles anyway, that will add up to an equivalent of 1-2 days of energy storage. I know people don’t want to shorten their own battery lifetimes, but using some of that for V2G, or being at all smart about how and where we build and implement charging infrastructure, could make a lot of sense cost-wise.
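As a rough sanity check on the 1-2 day figure, here’s the kind of arithmetic I have in mind, with US-scale round numbers that are my own illustrative assumptions (fleet size, average pack size, annual demand):

```python
# Back-of-the-envelope: aggregate EV battery capacity vs. daily grid demand.
# All inputs are rough illustrative assumptions, not sourced figures.
vehicles = 250e6             # assumed light-duty fleet at near-full electrification
pack_kwh = 60                # assumed average usable pack size, kWh
annual_demand_twh = 4000     # assumed annual US electricity demand, TWh

fleet_storage_twh = vehicles * pack_kwh / 1e9   # kWh -> TWh
daily_demand_twh = annual_demand_twh / 365

days = fleet_storage_twh / daily_demand_twh
print(f"Fleet storage: {fleet_storage_twh:.0f} TWh, about {days:.1f} days of demand")
```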
Consumer-level IoT and real-time pricing as forms of demand response could help with the duck curve.
The trajectory for PV is clearly that as-generated daytime power is going to get very cheap relative to grid power today, which means considerations for storage in another decade are likely going to be dominated by capex rather than round-trip efficiency.
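A small sketch of why that is (all numbers below are illustrative assumptions of mine, not forecasts): the delivered cost of a stored kWh is roughly amortized capex per cycle plus the cost of the input energy grossed up for losses, and once the input energy is very cheap, the efficiency term barely moves the total.

```python
# Delivered cost of a stored kWh ~= amortized capex per cycle + input energy cost / efficiency.
# All numbers are illustrative assumptions, not forecasts.
capex_per_kwh = 150      # assumed installed storage cost, $ per kWh of capacity
cycle_life = 5000        # assumed number of full cycles over the system's life
round_trip_eff = 0.85    # assumed round-trip efficiency
pv_price = 0.01          # assumed very cheap midday PV, $ per kWh

capex_term = capex_per_kwh / cycle_life       # $ per delivered kWh
energy_term = pv_price / round_trip_eff       # $ per delivered kWh, including losses

print(f"capex: ${capex_term:.3f}/kWh, energy incl. losses: ${energy_term:.3f}/kWh")
# Even dropping efficiency to 50% only raises the energy term to $0.020/kWh,
# while the capex term is $0.030/kWh -- so capex dominates when input power is cheap.
```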
I would also add that thinking of a fixed $/MT price/value for CO2 emissions abatement is not optimal, given how much easier some sources of emissions are to abate than others. If you can cut the problem in half but have no idea how to make the other half feasible, do it ASAP and you’ve doubled the available time to fix the remainder, plus you can then better concentrate investment on the problems that turn out to still be hard in a few more years. You always go to war with the army you have, but in this case the enemy doesn’t fight back. I, for one, think years of overly complicated policy regimes and attempts at forecasting future tech trajectories have made this whole space a lot more complicated, and a lot more expensive to address, than it needs to be.
To me it sounds like you’re dividing possible futures into extinction, dystopia, and utopia, and noticing that you can’t really envision the last of these. In which case, I agree, and I think if any of us could, we’d be a lot closer to solving alignment than we actually are.
Where my intuition cuts differently is that I think most seemingly-dystopian futures, where humans exist but are disempowered and dissatisfied with our lives and the world, are unstable or at best metastable, and will eventually give way to one of the other two categories. I’m sure stable dystopias are possible, of course, but ending up in one seems like it would require getting almost all of the steps of alignment right, yet failing the last step of the grail quest.
Yes, this means I think most of the non-extinction futures you’re considering are really extinction-but-with-a-longer-delay-and-lots-of-suffering futures. But I also think there’s a sizable set of utopian-futures-outside-my-capacity-to-concretely-envision such that my P(doom) isn’t actually close to 100%.
I am not sure what posts might be worth linking to, but I think in your scenario the next point would be that this is a temporary state of affairs. Once large-scale communication/coordination/civilization/technology are gone and humans are reduced to small surviving bands, AGI keeps going, and by default it is unlikely that it leaves humans alone, in peace, in an environment they can survive in, for very long. It’s actually just the chimp/human scenario with humans reduced to the chimps’ position but where the AGIs don’t even bother to have laws officially protecting human lives and habitats.
I also see multiple technological pathways that would get us to longevity escape velocity that seem plausible without AGI in that timeframe.
If nothing else, with advances in tissue engineering I expect we will be able to regrow and replace every organ in the body except the brain by mid-century.
But I also think a lot of what’s needed is culturally/politically/legally fraught in various ways. I think if we don’t get life-extending tech, it will be because we made rules that inadvertently prevent it, or because we pretended it’s better not to have it.
I have expressed versions of this Statement, or at least parts of it, to those close to me. They are not in the EA or LW communities and so I have to rephrase a lot. Mostly along the lines of, “One way or another, the last generation to die of old age has most likely already been born. I’m not quite sure which one, but if we manage to survive the century as a species, then I expect it’s my own, and that my nieces and nephew will be able to live for millennia. I hope I’m wrong about which generation it is.”
For me, both AI and medicine are far outside my areas of expertise, so I’m focusing on other problems that I have a better chance of influencing, hence the passive framing. This is even more true for my family members.
I have no idea, I was thinking in terms of outsiders informing people of problems. It would be more for downstream products than anything else. But you’re probably right and this whole line of thinking is irrelevant.
Some (Exxon, as an example) have communication policies, which I always understood to be about spam or phishing, that automatically delete any externally-originating email unless it is on a pre-approved whitelist. Now I’m wondering if this thinking plays a role as well.
I agree. I already have enough non-AI systems in my life that fail this test, and I definitely don’t want more.
I agree. I’ve been there many times with many devices. But in the toaster example, I think that will be because it thought it knew what you wanted it to do, and was wrong. I’d be thinking the same if, say, I wanted extra-dark toast to make croutons with and it didn’t do it. If what actually happens is that you switch varieties of bread and forget to account for that, or don’t realize someone else used the toaster in the interim and moved the dial, then “I would much rather you burned my toast than disobey me” is not, I think, how most people would react.
You should go back and re-examine those arguments. They haven’t been true for decades, if ever, and are usually not produced in good faith. It’s not even close, unless you exclusively charged your EV from the very least carbon efficient power plants in the world.
If you mean serial hybrids/PHEVs, then I agree, this is something I expected to see a lot more of by now, but instead companies seem to want to jump straight to pure BEVs, which I think is likely to be a worse transition overall.
You may want to be more specific about what you mean by “pollution” and “bigger.” Particulates and various fumes other than CO2 per gallon of fuel burned? Sure, makes sense. Anything about aggregate amounts? No. That may be true in some specific geographies, but it is broadly false. I do agree that going electric is often a good idea, as long as your lot isn’t too big and you keep up with it. I had a battery electric mower and loved it, except that if I missed a week or two I would drain the batteries several times faster because of the taller grass. Ditto if the grass was at all damp. But it’s getting there.