We weren’t guaranteed to be born into a civilization where the alignment problem had even been conceived of, let alone taken seriously, by anyone at all. It was far from certain that we’d end up on a timeline where alignment is well-researched and taken seriously by several billionaires.
We live in a civilization that’s disproportionately advantaged in both the will and the ability to tackle the alignment problem. We should make the most of that privileged position by dramatically increasing the resources already allocated to alignment, acknowledging that the problem is too hard for the ~100-expert approach taken so far. This matters especially because the problem might only be solvable at the last minute, with whatever pool of experts and expertise has already been built up over the preceding decades.
We should also acknowledge that AI is a multi-trillion-dollar industry with clear significance for geopolitical power. There are all sorts of vested interests, and plenty of sharks. It’s unreasonable to think AI advocacy should be targeted at the general public rather than at experts and other highly relevant people. Nor does it make sense to assume that people opposed to alignment advocacy act out of ignorance or disinterest: there are plenty of people attacking AI for reasons of their own (e.g. truck drivers’ unions angry about self-driving cars), and plenty of people paid to counteract any perceived attack by any means necessary, no matter where it pops up or what form it takes.
As a result, people concerned with alignment need Pareto solutions: dramatically increasing the number of people who take the alignment problem seriously, without running around like headless chickens and stepping on the toes of powerful people (who routinely vanquish far more intimidating threats to their interests). Future history textbooks will probably cover this extensively.