Constraining or slowing capability gains (via bans, information concealment, raised costs, etc.) through regulation of certain AI research and production inputs (weights, chips, electricity, code, etc.) is a strategy pursued, in part or in full, by various AI Safety organizations.
One friend (who works in this space) and I were recently reflecting on AI progress, along with strategies to contend with AI-related catastrophes. While we disagree on the success probabilities of different AI Safety plans and their facets, including those pertaining to policy and governance, we broadly support similar measures. He does, however, believe “shut down” strategies ought to be prioritized much more than I do.
This friend has, in the last year, met with between 10 and 50 congressional staffers (I provide this range to preserve anonymity); his stories of these meetings could make for both an entertaining and informative short book, and I was grateful for the experiences and the details of how he prepares for conversations and frames AI that he imparted to me.
The staffers’ familiarity with AI risk was concentrated on weaponization; most (save for three) did not have much, if any, sense of AI catastrophe. This point is interesting, but I found how my friend perceives his role in AI Safety through these meetings more intriguing.
As a prelude: both he and I believe that, generally speaking, individual humans and governments (including the US government) require some catastrophe (the more sudden, the more impactful) to engender productive responses. Examples include near-death experiences for people and the 11 September 2001 attacks for the US government.
With that said, my friend perceives his role to be primarily one of priming the staffers, i.e. “the US government”, to respond more effectively to a catastrophe (e.g. hundreds of thousands to millions, but not billions, dead) than they otherwise would have been able to.
My friend finds any immediate action taken by the staffers toward AI Safety excellent, especially a full cessation of certain lines of research, of access to computational resources, or of information availability; but because such actions are improbable, he believes the brunt of his impact comes to fruition only if there is an AI catastrophe that humans can recover from.
This updated my perception of the “slow down” crowd in AI Safety: from a group focused on literally stalling many aspects of AI progress, partially or fully, to one focused on enhancing governmental and institutional responses in fire-alarm moments.
> I was grateful for the experiences and the details of how he prepares for conversations and frames AI that he imparted to me.
I’m curious, what was his strategy for preparing for these discussions? What did he discuss?
> This updated how I perceive the “show down” focused crowd
possible typo?
Thank you for the typo-linting.
To provide a better response to your first question than the one below, I would need to ask him to explain more than he already has.
From what he has remarked, the first several meetings were very stressful (as they would be for most people!), but he soon adjusted and developed a routine for his meetings.
While the routine could go off course depending on the responsiveness of the individual(s) present (one staffer kept nodding yes, had no questions, and then 20 minutes later remarked that they would “take into account” what had been said; another remarked that the US simply needed to innovate in AI as much as possible, and that safety measures that stifled this were not to be prioritized; both statements are paraphrased from my friend), I get the sense that in most instances he has first been able to provide adequate context on his organization and on the broader situation with AI (I am not sure which of the two comes first or how long each is discussed).
Concerning his descriptions of the AI context, I am not sure how dynamic they are; I think he mentioned querying the staffers on their familiarity, and his impression was that most staffers listened well and thought critically about his remarks.
After the aforementioned descriptions, my friend begins discussing measures that can be taken in support of AI Safety.
He mentioned that he tries to steer framings away from those invoking arms races or weaponization and instead focuses on uncontrollability and “race to the bottom” scenarios, since the former framings, as given to staffers by others, have in his experience in some instances downplayed concerns about catastrophe and increased focus on further expanding AI capabilities to “outcompete China”.
My friend’s framings seem appropriate and he is a good orator, but I did not have the nuanced suggestions I wanted for my conversation with him, as I have not thought enough about which AI risk framings and safety proposals to have ready for staffers, and as I believe talking to staffers qualifies as an instance of the proverb “an ounce of practice outweighs a pound of precept”.