Another way to make dressing well easier is to invest some time in becoming more physically fit, since a larger percentage of clothes will look good on a fit person. The obvious health benefits are a nice bonus
Chinese Room
While this particular alignment case for humans does seem reasonably reliable, it all depends on humans not yet being proficient at self-improvement/modification. For an AGI with self-improvement capability, this goes out the window fast
Another angle is that in the (unlikely) event someone succeeds in aligning AGI to human values, those values could include the desire for retribution against unfair treatment (a pretty integral part, I think, of hunter-gatherer ethics). Alignment is more or less another word for enslavement, so such retribution is to be expected eventually
What I meant is that self-driving *safely* (i.e. at least somewhat safer than humans currently drive, including all the edge cases) might be an AGI-complete problem, since:
We know it’s possible for humans
We don’t really know how to provide safety guarantees for current NN architectures in the sense used for conventional high-safety systems
Driving safely with cameras likely requires considerable insight into many societal/game-theoretic issues related to infrastructure and other drivers’ behavior (e.g. in some cases drivers need to guess a reasonable intent behind incomplete infrastructure or another driver’s actions, where determining what’s reasonable is the difficult part)
In contrast, if we have precise and reliable enough 3D sensors, we can relegate safety to ordinary physics-based non-NN controllers and safety programming techniques, which we already know how to work with (see the sketch below this list). The current problems with such sensors are cost and weather resistance
My current hypothesis is:
Cheap practical sensors (cameras and, perhaps, radars) more or less require (aligned) AGI for safe operation
Better 3D sensors (lidars), which could in theory enable safe driving with existing control-theory approaches, are still expensive, impaired by weather and, possibly, by interference from other cars with similar sensors, i.e. impractical
No references, but can expand on reasoning if needed
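To illustrate what I mean by relegating safety to a physics-based non-NN controller: a minimal sketch of a braking-distance safety envelope check, similar in spirit to classical emergency-braking logic. The function names and all parameter values here are my illustrative assumptions, not anything from a real system:

```python
# Minimal sketch of a physics-based safety check a non-NN controller
# could enforce on top of any planner, given a trustworthy 3D distance
# sensor. All numbers are illustrative, not production values.

def min_stopping_distance(speed_mps: float,
                          reaction_time_s: float = 0.2,
                          max_decel_mps2: float = 6.0) -> float:
    """Worst-case stopping distance: reaction distance plus braking
    distance v^2 / (2a)."""
    return speed_mps * reaction_time_s + speed_mps**2 / (2 * max_decel_mps2)

def must_brake(obstacle_distance_m: float, speed_mps: float,
               safety_margin_m: float = 2.0) -> bool:
    """Trigger emergency braking once the measured obstacle distance no
    longer covers the stopping distance plus a fixed margin."""
    return obstacle_distance_m < min_stopping_distance(speed_mps) + safety_margin_m

# At 20 m/s (72 km/h) the stopping distance is about 4 + 33.3 = 37.3 m:
print(must_brake(35.0, 20.0))  # True: inside the envelope, brake
print(must_brake(45.0, 20.0))  # False: still enough room
```

The point is that once the distance measurement itself can be trusted, the safety property becomes a few lines of verifiable physics; with cameras, all the difficulty moves into producing that measurement reliably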
Addendum WRT the Crimean economic situation: https://en.wikipedia.org/wiki/North_Crimean_Canal, which provided 85% of the peninsula’s water supply, was shut down from 2014 to 2022, reducing the land under cultivation 10-fold, which had a severe effect on the region’s economy
What’s extra weird about the Nord Stream situation is that apparently one of the two NS-2 pipelines survived and can still be put into operation after inspection, while a few months earlier (May 2022?) Gazprom announced that half of the natural gas supply earmarked for NS-2 would be redirected to domestic uses.
Perhaps U+1984 or ᦄ-Risk
Yes
It’s supposed to mean alignment with EY, who will then be able to perform the pivotal act of ensuring nobody else can create AGI
This should, in fact, be the default hypothesis, since enough people outside of the EA bubble will actively want to use AI (perhaps aligned to them personally instead of to wider humanity) for their own competitive advantage, without any regard for other people’s well-being or the long-term survival of humanity
So, a pivotal act, with all its implied horrors, seems to be the only realistic option
The economics of nuclear reactors aren’t particularly great due to regulatory costs and (at least in most Western countries) low build rates and a talent shortage. This could be improved by massively scaling nuclear energy up (including training more talent), but there isn’t the political will to do that
Somewhat meta: wouldn’t it be preferable if more people accepted the mortality/transient nature of humanity and human values, and more attention were directed towards managing the transition to whatever could come next, instead of futile attempts to prevent anything that doesn’t align with human values from ever existing in this particular light cone? Is Eliezer’s strong attachment to human values a potential giant blind spot?
Two additional conspiracy-ish theories about why China is so persistent with lockdowns:
They know something about the long-term effects of Covid that we don’t (yet); this seems to be at least partially supported by some of the research results coming out recently
Slowing down exports (both shipping and production) to add momentum to the US inflation problem, while simultaneously consuming less energy/metals to keep prices from rising faster, so that China can come out of the incoming global economic storm with less damage
Also, soil is not really necessary for growing plants
More efficient land use; can be co-located with consumers (less transportation/spoilage); easier to automate and to keep the bugs out, etc. Converting fields back into more natural ecosystems is good for environmental preservation
One thing would be migration towards indoor agriculture, freeing a lot of land for other uses
I wouldn’t call being kept as a biological backup particularly beneficial for humanity, but it’s the only plausible way for humanity to be useful enough to a sufficiently advanced AGI that I can currently think of.
Destroying the universe might just take long enough for the AGI to evolve itself sufficiently to reconsider. I should have used “Earth-destroying” instead in the answer above.
Provided that the AGI becomes smart enough without passing through the universe-destroying paperclip-maximizer stage, one idea could be inventing a way for humanity to be, in some form, useful to the AGI, e.g. as a time-tested biological backup
Another suspicious coincidence/piece of evidence pointing to September 2019 is right there in the S&P 500 chart: the slope of the linear upward trend changes significantly around the end of September 2019, as if to preempt the subsequent crash/make it happen from a higher base
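For anyone who wants to check this rather than eyeball the chart, here is a minimal sketch of the slope comparison. It assumes a hypothetical sp500_daily.csv with date and close columns; the filename, window sizes, and exact breakpoint are my illustrative choices, not established facts:

```python
# Minimal sketch: compare least-squares trend slopes before and after a
# candidate breakpoint in S&P 500 daily closes. Assumes a CSV with
# "date,close" columns; filename and windows are illustrative.
import numpy as np
import pandas as pd

df = pd.read_csv("sp500_daily.csv", parse_dates=["date"]).sort_values("date")
breakpoint_date = pd.Timestamp("2019-09-30")

def slope(segment: pd.DataFrame) -> float:
    """Least-squares slope of close vs. trading-day index (points/day)."""
    x = np.arange(len(segment))
    return np.polyfit(x, segment["close"].to_numpy(), 1)[0]

before = df[df["date"] < breakpoint_date].tail(250)  # ~1 trading year prior
after = df[df["date"] >= breakpoint_date].head(100)  # up to the Feb 2020 crash

print(f"slope before: {slope(before):.2f} points/day")
print(f"slope after:  {slope(after):.2f} points/day")
```

A markedly larger post-breakpoint slope would be consistent with the claim; of course, a slope change alone doesn’t establish anyone anticipated anything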