I don’t like this defense for two reasons. First, I don’t see why the same argument doesn’t apply to the role Eliezer has already adopted as an early and insistent voice of concern; being deliberately vague about some types of predictions doesn’t change the fact that his name is synonymous with AI doomsaying. Second, we’re talking about a person whose whole brand is built around intellectual transparency and reflection; if Eliezer’s predictive model of AI development contains relevant deficiencies, I wish to believe that Eliezer’s predictive model of AI development contains relevant deficiencies. I recognize the incentives may well be aligned against him here, but it’s frustrating that he seems to want to be taken seriously on the topic without being equally open to good-faith rebuttal.
As a layperson, the problem has been that my ability to figure out what’s true relies on being able to evaluate subject-matter experts’ respective reliability on the technical elements of alignment. I’ve lurked in this community a long time; I’ve read the Sequences and watched the Robert Miles videos. I can offer a passing explanation of what the corrigibility problem is, or why ELK might be important.
None of that seems to count for much. Yitz made what I thought was a very lucid post from a similar level of knowledge, trying to bridge that gap, and got mostly answers that didn’t tell me (or, as best I can tell, them) anything I wasn’t already conceptually aware of, plus Eliezer himself being kind of hostile toward someone trying to understand.
So here I find myself in the worst of both worlds: the apparent plurality of the LessWrong commentariat says I’m going to die, and that to maximise my chance of dying with dignity I should quit my job, take out a bunch of loans, and try to turbo through an advanced degree in machine learning, and I don’t have the tools to evaluate whether they’re right.
AI discourse triggers severe anxiety in me, and as a non-technical person in a rural area I don’t feel I have anything to offer the field. I personally went so far as to fully hide the AI tag from my front page, and frankly I’ve been on the threshold of blocking the site altogether for the amount of content that still gets through via passing references and untagged posts. I like most of the non-AI content on the site and have been checking regularly since the big LW2.0 launch, and I would consider it a loss of good reading material to stop browsing, but since DWD I’m taking my fate in my hands every time I browse here.
I don’t know how many readers out there are like me, but I think it at least warrants consideration that the AI doomtide acts as a barrier to entry for readers who would benefit from rationality content but can’t stomach the volume and tone of alignment discourse.
Agricultural practice is my Gell-Mann pet peeve. While it’s true that fertilizer and pest control are currently central to large swaths of the commercial ag industry, this is not necessarily a case of pure necessity so much as a local maximum: for many crops we could reduce dependence on synthetic fertilizers and pesticides by integrating livestock, multi-cropping land, etc. Some of these practices are also ecologically unsustainable as practiced and may eventually need to be replaced.
That said, this doesn’t actually detract from the central point; I would very much like to live in a world where those questions are actually engaged with by the general populace, as opposed to being defined by, like, Whole Foods marketing copy and the US corn lobby.
That’s fair, and I’m grumbling less as an ag scientist or policy person than as a layperson born and raised in the ag industry. It is my opinion that the commercial ag industry in my country both contains inadequacies and is a system with no free energy, to borrow from Inadequate Equilibria.
To elaborate, I observe the following facts:
Conventional agriculture using fertilizer and pesticide creates negative externalities, notably by polluting runoff and consuming non-renewable resources (potassium fertilizer, for instance, is made from potash, a reasonably abundant but finite mineral which also carries a carbon footprint to mine).
Organic agriculture as practiced sacrifices considerable output, and is optimized not for minimal environmental impact but for maximal appeal to the organic food market; as such it also creates negative externalities which are not currently captured.
Almost no commercial agriculture in my area, organic or otherwise, incorporates livestock into land rotation cycles. Although I don’t have sources at hand, I am under the impression that evidence suggests grazing animals not only replenish macronutrients but also help maintain a robust and fertile microbiome. Although labour is a factor, consider that under the status quo ranchers own land and farmers own different land; land changing hands once every several years would on its own be an improvement.
Most commercial ag operations are extremely conservative about implementing operational changes, for good reason: being subject to both global market fluctuations and climate fluctuations is an unenviable business position.
Combine all these things I have seen firsthand, and I do conclude there is a better global maximum out there somewhere. And granted, if I were appointed Ag Czar it would no doubt be a Great Leap Forward-like disaster because I don’t have the in-depth knowledge required to overhaul a complex ecological and economic system.
To bring all this back to the original thesis of the post, the precise reason I raised these gripes is because I agree with jasoncrawford that the waterline for industrial literacy is too low and more people should have a basic grasp of how these systems work. But like the Gell-Mann in the apocryphal story about trusting the news, I looked at his list of “things people should know about industry” and thought “Well… I have something to add to that, if people are going to take this post as a starting point for things that are important to know”.
I find this comment offensive.
First, your description of the process of consent is not universal; it doesn’t describe any relationship I’ve been in, going all the way back to when I was a teenager. At the very least this should tell you that this series of events can’t be excused as “just the way humans interact.” Many men, including myself, actually talk to the women we want to have sex with, and “having lower amounts of sex” is far from an adequate reason to resort to the boundary-pushing and manipulation you describe.
Second, “the fact that you were raped doesn’t make Alex a rapist” is a patently absurd position to hold, not to mention an incredible red flag for anyone who might consider being at all vulnerable around you in the future. The mental gymnastics required are mind-boggling. It appears the case you’re making by saying “you were both naked on the beach, and I think a large fraction of men would at least try to escalate the situation” is something like “you were standing in traffic, you shouldn’t be surprised you got hit by a car”. This is textbook victim-blaming and completely ignores the perpetrator’s agency in the matter. I would say it’s akin to saying “you were eating a sandwich in public, and I think a large fraction of men are so hungry they would at least try to punch you and steal it.” If “retreat mind-state” is the defense here, then I guess those retreats should probably not be happening. If I took the series of actions described in the open letter with my own fiancée, I think she would be disturbed and traumatized by the experience, to say nothing of the more ambiguous context described above.
Third and lastly: regardless of what views you may hold in private, it’s incredibly hostile behavior to make this case on a self-described assault victim’s post about the incident. The whole comment serves to demean OP’s perspective, and you then condescend that “she can feel however she wants” as if you hadn’t just described at length why you think she’s foolish and misguided.
It’s the “Boy Who Cried Wolf” fable in the format of an incident report, such as might be written in the wake of an industrial disaster. Whether the fictional report writer has learned the right lessons is, I suppose, an exercise left for the reader.
From Ozy:
“I recently read an essay by Peter Singer, Ethics Beyond Species and Beyond Instincts, in which he defined the moral as that which is universalizable, in this sense: “We can distinguish the moral from the nonmoral by appeal to the idea that when we think, judge, or act within the realm of the moral, we do so in a manner that we are prepared to apply to all others who are similarly placed.”
I read that, sat back, and said to myself: “I cannot do morality.”
I cannot do it in the same sense that an alcoholic cannot drink, and a person with an eating disorder cannot go on a diet. I am incapable of engaging with universalizable morality in a way that does not cause me severe mental harm. While I can reject a universalizable moral claim on an intellectual level, I am incapable of rejecting them– no matter how absurd or contradictory to other things I accept– on an emotional level. If I fail to live up to such a claim, I will hate myself and curl in a ball and be utterly nonfunctional for a few hours, causing harm to both myself and those who have to put up with me.
So (with much backsliding) I have started to make an effort to weed out the universalizable morality from my brain. I do things I want to do, and I don’t do things I don’t want to do.”
https://thingofthings.wordpress.com/2016/06/13/assorted-thoughts-on-scrupulosity/
You and your girlfriend seem to have adopted a philosophical standard of morals which humans cannot uphold. I happen to believe that the case for the moral weight of organisms lacking central nervous systems is extremely weak, but resisting the temptation to dismiss your position on those grounds alone, I would say that if your slime civilization were proven real tomorrow, there would be nothing to do except acknowledge the tragedy and move on with life. It’s not as if human-dominated environments make up a majority of those that are so theoretically miserable for ants and dust mites and the bugs in Brian Tomasik’s compost, so even radical anti-natalism would accomplish a statistical nothing. If the ants suffered as you killed them, the tragedy is not that you did it but that those ants were born into a world so hostile that, had you not killed them for being unable to live in your apartment, they would have been eaten by birds, or been at war with other colonies, or frozen, drowned, or dehydrated by the millions thanks to the weather.
Thankfully I do believe the case for the moral worth of ants is weak, so I hope you will consider seeking out counselling on how to reduce your/your girlfriend’s apparent feelings of shame for the largely hypothetical moral suffering you worry about causing.
For the record, ISO 3103 is in no way optimized for a tasty cup of tea; it’s explicitly a standardized procedure for comparing teas. Six minutes of brewing with boiling water can “scorch” certain teas by over-extracting tannins and other bitter compounds. If you dislike tea, there’s a decent chance you would like it better with shorter brews or lower-temperature water (I use 90°C water for my black teas and 85°C for greens, for example).
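Purely as an illustration, here is a minimal sketch of starting parameters as a Python snippet; the temperatures are the ones I gave above, while the steep times are rough personal assumptions of mine, not anything out of ISO 3103:

```python
# Personal starting points, not ISO 3103 values; adjust to taste.
# Temperatures come from my own practice above; steep times are rough guesses.
BREW_PARAMS = {
    "black": {"water_temp_c": 90, "steep_minutes": 3},
    "green": {"water_temp_c": 85, "steep_minutes": 2},
}

for tea, p in BREW_PARAMS.items():
    print(f"{tea}: {p['water_temp_c']} C for ~{p['steep_minutes']} min")
```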
I am not a true expert, but there is one major element of this narrative that most coverage leaves out: no matter what happens to the short-sellers, the price of GameStop and other short-squeezed stocks must eventually normalize to a “truer” valuation.
I have seen a truly alarming lack of recognition of this fact, with some people apparently believing the squeezed price is the new normal for GME. Here’s why that probably isn’t the case:
The value of a stock is tied to two factors. One is (broadly) the cash flows one can expect to receive in the form of dividends and other shareholder benefits; the other is the expectation of the stock’s value appreciating. Market manipulation like the current squeeze can inflate the price of a stock based on that second factor. As the archetypal example, we can look to the housing crash that caused the ’08 recession. Thousands of mortgages were given out because it was thought that home prices would continue to rise indefinitely, meaning the loans were low risk (because even if the home buyer couldn’t make their payments, the bank could seize the house and not take a loss). This was fine until it suddenly wasn’t anymore; the assets lost perceived value, and the remaining fundamentals, i.e. homeowners’ ability to service their debts, were not up to the task of keeping the banks solvent.
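To put a toy number on that first factor, here’s a minimal sketch of a dividend discount (Gordon growth) valuation in Python. The function name and every input are illustrative assumptions of mine, not real GameStop figures; the point is only that even generous cash-flow assumptions land nowhere near a $300+ share price.

```python
# Gordon growth model: fair price = next_dividend / (discount_rate - growth_rate).
# All numbers below are illustrative assumptions, not real GameStop figures.

def gordon_growth_price(next_dividend: float, discount_rate: float, growth_rate: float) -> float:
    """Present value of a dividend stream growing at a constant rate forever."""
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate for the sum to converge")
    return next_dividend / (discount_rate - growth_rate)

# Even a generous hypothetical -- a $2.00 annual dividend growing 5% forever,
# discounted at 8% -- prices the stock around $67, nowhere near $300+.
print(gordon_growth_price(next_dividend=2.00, discount_rate=0.08, growth_rate=0.05))
```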
For GameStop, I’m told there is some reason to think their fundamentals are better than they were a year ago, but I have seen no compelling reason to think those fundamentals will deliver the kind of dividends that would traditionally command such a share price.
When the short squeeze passes, some Wall Street firms will have taken a big loss, but many small investors will be left holding a stock that may still nominally carry a $300+ price per share yet will probably not deliver the cash flows or stability that the same money held in a business with stronger fundamentals would. In the absence of people shorting, you end up deciding whether to keep your money tied up in GME, which will return $X over however long you hold it, or in some other stock that could return $2X or $3X. At that point, after the short squeeze is resolved, the price will start to fall again.
The investors who were able to sell their $300 shares to firms obligated to cover short positions will realize a big cash gain, but anyone left holding the stock after that is likely to be in a seriously bad way.
This, of course, is not investment advice. If I knew exactly when the people holding GME were going to get nervous and try to liquidate, I could just take out new shorts and get rich (and if enough people did that, maybe WSB would just try to squeeze those shorts again!). What all of this boils down to is that this is not the new normal; it is a speculation bubble, and bubbles pop.
Facts and data are of limited use without a paradigm to conceptualize them. If you have some you think are particularly illuminating, though, by all means share them here.
(this comment is copied to the other essay as well)
I respect the attempt, here, and I think a version of the thesis is true. Letting go of control and trying to appreciate the present moment is probably the best course of action given that one is confronted with impending doom. I also recognize that reaching this state is not just a switch one can immediately flip in one’s mind; it can only be reached by way of practice.
With these things in mind, I am still not okay. More than anything I find myself craving ignorance. I envy my wife; she’s not in ratspaces whatsoever and as far as I know has no idea people hold these beliefs. I think that would be a better way to live; perhaps an unpopular opinion on the website where people try not to live in ignorance. It’s hard not to be resentful sometimes. I resent the AI researchers, the site culture, and I especially resent certain MIRI founders and their declarations of defeat.
I think that means I need to disconnect, once and for all. I’ve been toying with the idea of leaving the LW sphere completely, and frankly I think it’s overdue. Dear reader: if you aren’t going to go solve alignment, I suggest you consider following suit. I might hang around a bit to view replies to this comment but… yeah. Thanks for all the food for thought over the years, LW; I’m not sure if it was worth it.
I’m not the OP, but I bite that bullet all day long. My parents’ last wishes are only relevant in two ways that I can see:
1. Their values are congruent with my own. If my parents’ last wishes are morally repugnant to me, I certainly feel no obligation to help execute them. Thankfully, in real life my parents’ values and wishes are fairly congruent with my own, so their request is likely to be something I could evaluate as worthy on its own terms; no obligation needed.
2. I wish to uphold a norm of last wishes being fulfilled. This has to meet a minimum threshold of congruence on point 1 above, but if I expect to have important last wishes that I will be unable to fulfill in my lifetime, I may want to promote this norm by paying it forward. Except I’m not convinced doing so is actually very effective; surely it’s better for me to work towards my own goals than towards others’ in the hope that it upholds a norm that will get my goals carried out later. Or, if my goals are beyond my own ability to execute, then surely I should be working to get them accepted by more people on their own terms, rather than as an obligation to me.
I find myself concerned. Steven Pinker’s past work has been infamously vulnerable to spot-checks of citations, leading me to heavily discount any given factual claim he makes. Is there reason to think the effort here will be any better constructed?
Unless I’m very much mistaken, “emergency mobilization systems” refers to autonomic responses like a pounding heartbeat, heightened subjective senses, and other types of physical arousal; i.e., the things your body does when you believe someone or something is coming to kill you with spear or claw. Literal fight-or-flight stuff.
In both examples you give there is true danger, but your felt bodily sense doesn’t meaningfully correspond to it; you can’t escape, or find the bomb, by being ready for an immediate physical threat. This is the error being referred to. In both cases the preferred state of mind is resolute problem-solving, and an inability to regulate a felt sense of panic will likely reduce your ability to reach such a state.
Addendum: crimp grips are a major cause of climbing injuries. It’s sheer biomechanics: the crimp grip puts massive stress on connective tissues which aren’t strong enough to reliably handle it.
The moral of the addendum: choose your impossible challenges wisely; even if you can overcome them, the stress and pain might have been a warning from the beginning. If nothing else, it should be a warning to get some good advice about injury prevention, or you may find yourself unable to pursue your goal for weeks at a time.
Is there a strong theoretical basis for guessing what capabilities superhuman intelligence may have, be it sooner or later? I’m aware of the speed & quality superintelligence frameworks, but I have issues with them.
Speed alone seems relatively weak as an axis of superiority; I can only speculate about what I might accomplish if, for example, my cognition were sped up 1000x, but I find it hard to believe it would extend to achieving strategic dominance over all humanity, especially if there are still limits on my ability to act and perceive information that operate on normal-human timescales. One could shorthand this to “how much more optimal could your decisions be if you were able to take maximal time to research and reflect on them in advance,” to which my answer is “only about as good as my decisions turned out to be when I wasn’t under time pressure and did do the research.” I’d be the greatest Starcraft player to ever exist, but I don’t think that generalizes outside the domain of [tactics measured in frames rather than minutes or hours or days].
To me, quality superiority is the far more load-bearing but much muddier part of the argument for the dangers of AGI. Writing about the lives and minds of human prodigies like von Neumann or Terry Tao or whoever you care to name frequently verges on the mystical; I don’t think even the very intelligent among us have a good gears-level model of how intelligence works. To me this is a double-edged sword: if Ramanujan’s brain might as well have been magic, that’s evidence against our collective ability to guess what a quality superintelligence could accomplish. We don’t know what intelligence can do at very high levels (bad for our ability to survive AGI), but we also don’t know what it can’t do, which could turn out to be just as important. What if there are rapidly diminishing returns on the accuracy of prediction as the system has to account for more and more entropy? If that were true, an incredibly intelligent agent might still have only a marginal edge in decision-making, one which could be overwhelmed by other factors. What if the Kolmogorov complexity of x-risk is just straight-up too many bits, or requires precision of measurement beyond what the AI has access to?
I don’t want to privilege the hypothesis that the smartest thing we can build is still not that scary because the world is chaotic, but I feel I’ve seen many arguments that privilege the opposite: that the “sharp left turn” will hit and the rest is merely moving chess pieces through a solved endgame. So what is the best work on the topic?
One human’s moral arrogance is another human’s Occam’s razor. The evidence suggests to me, on grounds of both observation (very small organisms demonstrate very simple behaviour, not consistent with high-level awareness) and theory (very small organisms have extremely minimal sensory/nervous architecture to contain qualia), that dust mites are morally irrelevant, and that weighting the small chance I am mistaken amounts to a Pascal’s Mugging.
Respectfully: no shit, Sherlock. That’s what happens when a community leader establishes a norm of condescending to inquirers.
I feel much the same way as Citizen, in that I want to understand the state of alignment and participate in conversations as a layperson. I, too, have spent time pondering your model of reality to the detriment of my mental health. I will never post these questions and criticisms to LW, because even if you yourself don’t show up to hit me with the classic:
then someone else will, having learned from your example. The site culture has become noticeably more hostile in my opinion ever since Death with Dignity, and I lay that at least in part at your feet.