Small-scale failures give us data about possible large-scale failures.
But you don’t go from a 160 IQ person with a lot of disagreeability and ambition, who ends up being a big commercial player or whatnot, to a 195 IQ person who suddenly just sits in their room for a decade and then speaks gibberish into a YouTube livestream and everyone dies, or whatever. The large-scale failures aren’t feasible for humans acting alone. For humans acting very much not alone, like big AGI research companies, yeah, that’s clearly a big problem. But I don’t think the problem is about any of the people you listed having too much brainpower.
(I feel we’re somewhat talking past each other, but I appreciate the conversation and still want to get where you’re coming from.)
> For humans acting very much not alone, like big AGI research companies, yeah that’s clearly a big problem.
How about a group of superbabies that find and befriend each other? Then they’re no longer acting alone.
> I don’t think the problem is about any of the people you listed having too much brainpower.
I don’t think problems caused by superbabies would look distinctively like “having too much brainpower”. They would look more like the ordinary problems humans have with each other. Brainpower would be a force multiplier.
> (I feel we’re somewhat talking past each other, but I appreciate the conversation and still want to get where you’re coming from.)
Thanks. I mostly just want people to pay attention to this problem. I don’t feel like I have unique insight. I’ll probably stop commenting soon, since I think I’m hitting the point of diminishing returns.
> I mostly just want people to pay attention to this problem.
Ok. To be clear, I strongly agree with this. I think I’ve been responding to a claim from you (maybe explicit, or maybe implicit / imagined by me) like: “There’s this risk, and therefore we should not do this.” I want to disagree with the implication, not the antecedent. (I hope to more gracefully agree with things like this. Also, someone should make a LW post with a really catchy term for this implication / antecedent discourse thing, or link me the one that’s already been written.)
But I do strongly disagree with the conclusion “…we should not do this”, to the point where I say: “We should basically do this as fast as possible, within the bounds of safety and sanity.” The benefits are large, the risks look not that bad and largely ameliorable, and in particular the need regarding existential risk is great and urgent.
That said, more analysis is definitely needed. Though in defense of the pro-germline engineering position, there are few resources, and everyone has a different objection.