I am working on human capability enhancement via genetics. I think it’s quite plausible that we could create humans smarter than any that have ever lived within a decade. But even I think that digital intelligence wins in the end.
Like it just seems obvious to me. The only reason I'm even working in the field is that I think enhanced humans could play a critical role in the development of aligned AI. Of course, this requires time for them to grow up and do research, which we are increasingly short of. But in case AGI takes longer than projected, or we get our act together and implement a ban on AI capabilities improvements until alignment is solved, it still seems worth continuing the work to me.