Another way to make dressing well easier is to invest some time in becoming more physically fit, since a larger percentage of clothes will look good on a fit person. The obvious health benefits are a nice bonus.
Chinese Room
One possible way to increase dignity at the point of death could be shifting the focus from survival (seeing how unlikely it is) to looking for ways to influence what replaces us.
Getting killed by a literal paperclip maximizer seems less preferable than being replaced by something pursuing more interesting goals.
If you happen to be the first to build an interpretable aligned AGI, what guarantees that Conjecture, as an organization wielding newly acquired immense power, stays aligned with the best interests of humanity?
Addendum WRT the Crimean economic situation: the North Crimean Canal (https://en.wikipedia.org/wiki/North_Crimean_Canal), which provided 85% of the peninsula’s water supply, was shut down from 2014 to 2022, reducing land under cultivation 10-fold, which had a severe effect on the region’s economy.
Thank you for your answer.
I have very high confidence that the *current* Connor Leahy will act in the best interests of humanity. However, given the extraordinary amount of power an AGI can provide, my confidence that this behavior will stay the same for the decades or centuries to come (directing some of the AGI’s resources towards radical human life extension seems logical) is much lower.
Another question, in case you have time: considering the same hypothetical situation of Conjecture being the first to develop an aligned AGI, do you think that immediately applying its powers to ensure no other AGIs can be constructed is the correct way to maximize humanity’s chances of survival?
(I’ve only skimmed the post, so this might have already been discussed there.)
The same argument might as well apply to:
mental models of other people (which are obviously distinct and somewhat independent from the subjects they model)
mental model of self (which, according to some theories of consciousness, is the self)
All of this, and the second point in particular, connects pretty well to some Buddhist interpretations, I think; there is also a proposed solution, i.e. the reduction/cessation of such mental modelling.
Another suspicious coincidence/piece of evidence pointing to September 2019 is right there in the S&P 500 chart: the slope of the linear upward trend changes significantly around the end of September 2019, as if to preempt the subsequent crash/make it happen from a higher base.
While this particular alignment case for humans does seem reasonably reliable, it all depends on humans not yet being proficient at self-improvement/modification. For an AGI with self-improvement capability, this goes out of the window fast.
What’s extra weird about the Nord Stream situation is that apparently one of the two NS-2 pipelines survived and can still be put into operation after inspection, while a few months earlier (May 2022?) Gazprom announced that half of the natural gas supply earmarked for NS-2 would be redirected to domestic uses.
Two additional conspiracy-ish theories about why China is so persistent with lockdowns:
They know something about the long-term effects of Covid that we don’t (yet); this seems to be at least partially supported by some of the research results coming out recently.
Slowing down exports (both shipping and production) to add momentum to the US inflation problem, while simultaneously consuming less energy/metals to keep prices from rising faster, so China can come out of the incoming global economic storm with less damage.
Whether the lockdown fails or not depends on its goals, which we don’t really know much about. I’d bet that it’ll fail to achieve anything resembling zero-Covid, due to Omicron being more contagious and vaccines less effective; however, it might be successful in slowing the (Omicron) epidemic down enough that the Hong Kong scenario (i.e. most of the previous waves’ mortality, as experienced elsewhere, packed into a few weeks) is avoided.
I meant that the ‘copying’ above is only necessary in the human case, to escape the slowly evolving biological brain. While it is certainly available to a hypothetical AGI, it is not strictly necessary for self-improvement (at least copying of the whole AGI isn’t).
I don’t think there’s a need for an AGI to build a (separate) successor per se. Humans need the technological AGI only due to our inability to copy/evolve our minds in a way more efficient than the existing biological one.
Another angle is that in the (unlikely) event someone succeeds in aligning AGI with human values, those values could include the desire for retribution against unfair treatment (I think a pretty integral part of hunter-gatherer ethics). Alignment is more or less another word for enslavement, so such retribution is to be expected eventually.
Yes
Somewhat meta: would it not be preferable if more people accepted the mortality/transient nature of humanity and human values, and more attention were directed towards managing the transition to whatever could come next, instead of futile attempts to prevent anything that doesn’t align with human values from ever existing in this particular light cone? Is Eliezer’s strong attachment to human values a potential giant blind spot?
It’s supposed to mean alignment with EY, who will then be able to perform the pivotal act of ensuring nobody else can create an AGI.
Has anybody tried to quantify how much worse fish farm conditions are compared to the wild? From anecdotal but somewhat first-hand experience, wild environments for fish can hardly be described as anything but horror as well.