Oh, I agree. I liked his framing of the problem, not his proposed solution.
In that regard specifically:
If the main problem with humans being not-smart-enough is overoptimism, maybe we could just make some organizational and personal belief changes to correct for it?
IF we managed to get smarter about rushing toward AGI (a very big if), it seems like an organizational effort with “let’s get super certain and get it right the first time for a change” as its central tenet would be a big help, with or without intelligence enhancement.
I very much doubt any major intelligence enhancement is possible in time. And it would be a shotgun approach to solving one particular problem: overconfidence/confirmation bias. Of course, other intelligence enhancements would be super helpful too. But I'm not sure that route is at all realistic.
I’d put Whole Brain Emulation in its traditional form as right out. We’re not getting either that level of scanning or that level of simulation anywhere near in time.
The move here isn’t that someone of IQ 200 could control an IQ 2000 machine, but that they could design one with motivations that actually aligned with theirs/humanity’s—so it wouldn’t need to be controlled.
I agree with you about the world we live in. See my post If we solve alignment, do we die anyway? for more on the logic of AGI proliferation and the dangers of telling it to self-improve.
But that’s dependent on getting to intent-aligned AGI in the first place. Which seems pretty sketchy.
Agreed that OpenAI just reeks of overconfidence, motivated reasoning, and move-fast-and-break-things. I really hope Sama wises up once he has a kid and feels viscerally closer to actually launching a machine mind that can probably outthink him if it wants to.
Context: He is married to a cis man. Not sure if he has spoken about considering adoption or surrogacy.
He just started talking about adopting. I haven’t followed the details. Becoming a parent, including an adoptive parent who takes it seriously, is often a real growth experience from what I’ve seen.
Update 2025-02-23: Sam Altman has a kid now. link, mirror.
That is good news. Thanks.