Correspondingly, the importance I assign to increasing the intelligence of humans has drastically increased.
I feel like human intelligence enhancement would accelerate capabilities development faster than alignment development, unless the enhancement were applied very selectively, only to people working on alignment.
Maybe if they all have IQ 200+, they would automatically realize that and work on alignment rather than capabilities? Or come up with a pivotal act.
With Eliezer going [public](https://x.com/tsarnick/status/1882927003508359242) with the IQ-enhancement notion, he at least must think so? (Because if it's done publicly, it will initiate an intelligence-enhancement race between the US, China, and other countries, and that would normally lead to an AI capabilities speed-run unless the amplified people are automatically wiser than that.)
Well, as the first few paragraphs of the text suggest, the median 'AI Safety' advocate has over time been barely sentient, relative to other motivated groups, when it comes to preventing certain labels from being co-opted by those groups… so it seems unlikely they will become so many standard deviations above average in some other respect at any point in the future, especially since the baseline will also keep shifting.