Nice. By roughly how much would you say it affected your timelines? And what would your timelines be at this point? I know this is hard to tell, so no offense if you don’t wanna answer, but I’m indeed curious.
All the gurus say that physical pain is just something from the body, and you can only have suffering (from it) if you are not enlightened. Would they still maintain that after being tortured for decades? I seriously doubt it.
This has led me to believe that enlightenment is not about discovering truth, but quite the opposite. It’s about deluding yourself into happiness by believing that this world is actually something good.
That’s why I quit meditating. The only real hope is in eradicating suffering, a la David Pearce. Not ignoring it. Sure you can use meditation as pain management, but it isn’t the truth.
I often find myself thinking about this. As in “the law of attraction: thinking/belief directly influences reality”. I think it might be true, in this strange world that we live in of which we know so little.
However, I find it very hard to remain optimistic when I feel that humanity has not only extinction pointed at its head, but probably also something worse (s-risks (don’t get into this topic if you don’t wanna risk your sanity)). Add that to the opinion that the alignment problem is nearly impossible because you can’t control something more intelligent than yourself, plus the vulnerable world structure, plus the fact that computer systems are so fallible, plus the possibly short timelines to AGI...
If we had 100 years maybe we could do it and remain optimistic, but we might not even have 10.
When this is your worldview of the facts, it’s very hard to have any hope and to not be paranoid. I think evolution gave us paranoia as a last-resort strategy to splash about and think/try everything possible. If everyone actually became paranoid/super-depressed about this, I think that would be our only chance to effectively stop AGI development until we have the alignment problem solved. It would take massive changes to world government etc., but it’s possible. Rats in lab experiments have no chance at all to alter their grim fate, but we do, if only a little.
But I don’t know. Maybe no pessimism ever helps anyway. Maybe an optimistic law of attraction does exist. I really don’t know. I just find it impossible to have hope when I look at the situation.
That’s why I consider this world not a good one, for that (and even lesser things) being possible. Whereas all of them (Osho, Sadhguru, Ramana Maharishi) say that enlightenment is about realizing that you’re living in a good world. Hence it’s a lie imo.
Hey everyone. I’m new here. I’ve recently been kinda freaking out about AGI and its timelines… Especially after reading Eliezer’s post about there being no fire alarm.
However, he also mentions that “one could never be sure of a new thing coming, except for the last one or two years” (something along those lines).
So, in your opinion, are we already at a stage where AGI could arrive anytime? Because due to things like GPT-3, Wu Dao 2.0 and AlphaCode, I’ve been getting really scared… Plus if there is something more advanced being developed in secret...
Or will there at least be a 1-2 year “last epistemic stage” which we can be sure we haven’t reached yet? (as Soares also mentions)
’Cause every day I’ve been looking out the window expecting the nano-swarms to come lol… But I’m just a lay person, so I’d like to hear some more expert opinions.
Thanks for the attentive commentary.
-
Yeah, I was guessing that the smiley faces wouldn’t be the best example… I just wanted to draw something from the Eliezer/Bostrom universe since I had mentioned the paperclipper beforehand. So, maybe a better Eliezer-Bostrom example would be: we ask the AGI to “make us happy”, and it puts everyone paralyzed in hospital beds on dopamine drips. It’s not hard to think that after a couple hours of a good high, this would actually be a hellish existence, since human happiness is way more complex than the amount of dopamine in one’s brain (but of course, Genie in the Lamp, Midas’ Touch, etc.)
-
So, don’t you equate this kind of scenario with a significant amount of suffering? Again, forget the bad example of the smiley faces, and reconsider. (I’ve actually read, in a popular LessWrong post about s-risks, Paul clearly saying that the risk of s-risk was 1/100th of the risk of x-risk (which makes for even less than 1/100th overall). Isn’t that extremely naive, considering the whole Genie in the Lamp paradigm? How can we be so sure that the Genie will only create hell one time for each 100 times it creates extinction?)
-
a) I agree that a suffering-maximizer is quite unlikely. But you don’t necessarily need one to create s-risk scenarios. You just need a Genie in the Lamp scenario. Like the dopamine drip example, in which the AGI isn’t trying to maximize suffering, quite the contrary, but since it’s super-smart in the sciences yet lacks human common sense (a Genie), it ends up doing it.
b) Yes, I had read that article before. While it presents some fair solutions, I think it’s far from being mostly solved. “Since hyperexistential catastrophes are narrow special cases (or at least it seems this way and we sure hope so), we can avoid them much more widely than ordinary existential risks.” Note the “at least it seems this way and we sure hope so”. Plus, what are the odds that the first AGI will be created by someone who listens to what Eliezer has to say? Not that bad actually, if you consider US companies, but if you consider China, then dear God...
On your PS1, yeah definitely not willing to do cryonics, and again, s-risks don’t need to come from threats, just misalignment.
Sorry if I black-pilled you with this, maybe there is no point… Maybe I’m wrong. I hope I am.
-
A sober opinion (even if quite different from mine). My biggest fear is scaling a transformer + completing it with other “parts”, as in an agent (even if a dumb one), etc. Thanks
Another very nice reply, thanks.
To each paragraph:
-
Agree.
-
Not sure I follow. Orthogonality is the thesis that intelligence and goals aren’t necessarily related to each other. Intelligence as merely instrumental rationality. So that only helps the argument that the AGI could very well create suffering non-intentionally while trying to make us happy (again, maybe smiley faces isn’t the most perfect example, think of all of us paralyzed in hospital beds on dopamine drips instead). Because being a machine it would probably be super intelligent in reasoning, Science, etc., but kind of autistic in a way due to lack of sentience and emotions. I.e., something very intelligent in some areas and very dumb in others.
“So if there’s suffering, there probably has to be an instrumental goal that coincidentally involves conscious beings.”
I agree with that part, though. I mean, it’s kinda equally likely. And it concerns me a lot as well, like the cases involving experiments. That’s stuff that sends me into despair land. That’s why we should really be panicking about this; there are too many ways for really bad stuff to happen. And I agree also that changing the social structure could probably even be the only way to accomplish it at this point, since we don’t have 100 years to solve alignment. Sometimes I really feel like just going and talking to people, or carefully trying to become an activist, because no one else is doing it, no one is giving TED talks about s-risks. It’s so hopeless though… Are FRI and the like even doing anything of substance?
Btw, since you seem to have quite some background, could you also say whether you think AGI could arrive tomorrow, at the current technological point, and what your timelines are? Do we actually have any time, even if only a few years?
And also regarding the b-word… Since you’ve mentioned acausal trade, do you think it only works between ASIs (as I’ve heard), or between ASIs and humans as well?
-
Maybe not in these exact terms, but maybe, I don’t know, “realizing the benevolent tendency of existence”, “realizing the source as a benevolent force”, “realizing that all is love, that existence loves you”, etc. I’ve been hearing these kinds of claims from all gurus (although I’m not familiar with any of the ones you mention, maybe you think the mainstream gurus from Osho to Eckhart Tolle are all bs? I don’t know).
Anyway, isn’t enlightenment also about losing fear, about being at ease? I once bought into it by understanding that ok, maybe all cravings are indeed futile, maybe death is indeed an illusion, maybe a back ache isn’t the end of the world and can be greatly alleviated through meditation… But how can you at least lose fear and be at ease, in a world where extreme physical pain is possible? Impossible.
Strongly agree on everything.
2 last clarifications:
On acausal trade, what I meant was, if you believe that it is possible for it to work BETWEEN a human and an ASI (apparently you do?). I’ve heard people say it doesn’t, because you are not intelligent enough to model an ASI. Only an ASI is. Which is what I’m more inclined to believe, also adding that humans are not computers and therefore can’t run any type of realistic simulations. But I agree that committing to no blackmail is the correct option anyway.
On AGI timelines, do you feel safe that it’s extremely unlikely to arrive tomorrow? Do you often find yourself looking out the window for the nano-swarms, like I do? GPT-3 scares the hell out of me. Do you feel safe that it’s at least 5 years? I’d like to have a more technical understanding to have a better idea on that, which can be hard when you’re not into computer science.
And on a more poetic note, this is such a crappy time to be alive… Especially for the 1% of us who take s-risk seriously. When I take a walk, I look at the people… We could have been on a right path, you know? (At least as far as liberal democracies are concerned.) We could have been building something good, if it wasn’t for this huge monster over our heads that most are too dumb, or too cowardly, to believe in.
Maybe telling people that we can’t play God is a good start. (At least not until, hundreds of years from now, we have the mathematical and the social proof to build God.)
Evolution might not have been perfect, allowing things like torture due to an obsolete-for-humans (and highly abusable) survival mechanism called pain. But at least there are balances. It gave us compassion, or, more skeptically, the need to vomit when we see atrocities. It gave us death so you can actually escape. It gave us shock so you have a high probability of dying quickly in extreme cases. There is kind of a balance, even if weak. I see the possibility of that balance being broken with AGI, or even just nano by itself.
If only it were possible to implement David Pearce’s abolitionist project of annihilating the molecular substrates of below-zero hedonic levels, with several safeguards. That used to be my only hope, but I think chaos will arrive long before that.
Sober view as well, and much closer to mine. I definitely agree that compute will be the big bottleneck—GPT-3 and the scaling hypothesis scare the heck out of me.
8 years makes a lot of sense, after all many predictions point to 2030.
A more paranoid me would like to ask, what number would you give to the probabilities of it arriving: a) next week, b) next year?
And also: are you also paranoid like me, looking out the window for the nano-swarms, or do you think that at least in the very, very near term it’s still close to impossible?
That Wei Dai post explains little in these specific regards. Every Eastern religion, in my opinion, from Buddhism, to Hinduism, to Yoga, to Zen, teaches Enlightenment as a way to reach some kind of extreme well-being through discovering the true nature of existence. Such would be rational in an acceptable world, not in this one—in this one it is the opposite, achieving well-being through self-delusion about the nature of existence. If you’re gonna keep dodging this fact or invoking fringe views (regardless of their value) as the dominant ones, then we might just agree to disagree. No offense!
“GPT-3 by itself shouldn’t scare you very much, I think, but as part of a pattern I think it’s scary.”
Exactly. Combining it with other parts, like an agent and something else, as an AI researcher whose name I can’t recall said in a YouTube interview I watched (titled something like “GPT-3 is the fire alarm for AGI”). His reasons: GPT-2 was kinda dumb and just scaling the model turned it into something drastically better, plus the combination aspect that I mentioned.
“Why is it crappy to be alive now? If you want a nice life, now’s fairly okay, esp. compared to most of history. If you’re worried about the future, best to be in the time that most matters, which is now, so you can do the most good. It does suck that there’s all that wasted potential though.”
Well isn’t it easy to tell?? Life is certainly more comfortable now, and mine certainly has been, but there’s a) the immense gloom of knowing the current situation, I don’t think any other human in history thought he or his fellow humans might come to face s-risk scenarios (except maybe those fooled by promises of Hell in religions, but I never saw any theist seriously stressed over it)
b) the possibility of being caught in a treacherous turn ending in an s-risk scenario, making you paranoid 24/7 and considering… Well, bad things. That vastly outweighs any comfort advantage. Especially when your personal timelines are as short as mine.
And about helping… Again, sorry for being extremely depressing, but it’s just how it is: I don’t see any hope, don’t see any way out, especially because of, again, my short timelines, say 5-10 years. I’m with Eliezer that only a miracle can save us at this point. I started praying to a benevolent creator that might be listening, started hoping for aliens to save us, started hoping for the existence of the Illuminati to save us, etc. Such is my despair.
However, there is something else I would like to ask you: do you think meditation can provide you with insights about the nature of consciousness? Those hard questions like “is the brain running algorithms”, “is consciousness possible to emulate or transfer into some other medium”, etc.? I’d give a lot to know the answers to those questions, but I don’t think that science will arrive there anytime soon. (And as for psychedelics, I think they just tell you what you want to hear, like dreams.)
Have you ever had any insights of that kind yourself? Or even about the nature of existence too.
Fair argument, thanks.
“This sounds much better than extinction to me! Values might be complex, yeah, but if the AI is actually programmed to maximise human happiness then I expect the high wouldn’t wear off. Being turned into a wirehead arguably kills you, but it’s a much better experience than death for the wirehead!”
You keep dodging the point lol… As someone with some experience with drugs, I can tell you that it’s not fun. Human happiness is very subjective and doesn’t depend on a single chemical. For instance, some people love MDMA; others (like me) find it too intense, too chemical, too fabricated a happiness. A forced lifetime on MDMA would be one of the worst tortures I can imagine. It would fry you up. But even a very controlled dopamine drip wouldn’t be good. Anyway, I know you’re probably trolling, so just consider good old-fashioned torture in a dark dungeon instead...
On Paul: yes, he’s wrong, that’s how.
“I think most scenarios where you’ve got a boundless optimiser superintelligence would lead to the creation of new minds that would perfectly satisfy its utility function.”
True, except that, on that basis alone, you have no idea how that would happen and what it would imply for those new minds (and old ones), since you’re not a digital superintelligence.
I read it somewhere while learning about these things. I might be wrong. It’s not too relevant for the broader topic anyway.
And by the way, shock is not freeze. Shock is going unconscious from the amount of pain, so that you suffer for, say, 5 seconds instead of 10 minutes. But like I said, it’s just one among many examples, and the broader picture is what matters.
How does this affect timelines? Does this make the prospect of AGI a lot nearer? I’m sorry, I’m just a lay person, but this has got me more scared than anything else. So now AI can finally efficiently build itself?