I also love The Inner Ring and basically endorse your take on it. One confusion/thought I had about it was: How do we distinguish between Inner Rings and Groups of Sound Craftsmen? Both are implicit/informal groups of people who recognize and respect each other. Is the difference simply that Sound Craftsmen’s mutual respect is based on correct judgments of competence, whereas Inner Rings are based on incorrect judgments of competence? That seems reasonable, but it makes it very hard in some cases to tell whether the group you are gradually becoming part of—and which you are excited to be part of—is an Inner Ring or a GoSC. (Because, sure, you think these people are competent, but of course you’d probably also think that if they were an Inner Ring, because you are so starstruck.) It also means there’s a smooth spectrum with Inner Rings and GoSCs at the ends: a GoSC is where everyone is totally correct in their judgments of competence and an Inner Ring is where everyone is totally incorrect… but almost every group, realistically, will be somewhere in the middle.
How do we distinguish between Inner Rings and Groups of Sound Craftsmen?
The essay’s answer to this is solid, and has steered me well:
In any wholesome group of people which holds together for a good purpose, the exclusions are in a sense accidental. Three or four people who are together for the sake of some piece of work exclude others because there is work only for so many or because the others can’t in fact do it. Your little musical group limits its numbers because the rooms they meet in are only so big. But your genuine Inner Ring exists for exclusion. There’d be no fun if there were no outsiders. The invisible line would have no meaning unless most people were on the wrong side of it. Exclusion is no accident; it is the essence.
My own experience supports this being the crucial difference. I’ve encountered a few groups where the exclusion is the main purpose of the group, *and* the exclusion is based on reasonably good judgments of competence. These groups strike me as pathological and corrupting in the way that Lewis describes. I’ve also encountered many groups where exclusion is only “accidental”, and also the people are very bad at judging competence. These groups certainly have their problems, but they don’t have the particular issues that Lewis describes.
I’m not sure in which category you would put it, but as a counterpoint, Team Cohesion and Exclusionary Egalitarianism argues that for some groups, exclusion is at least partially essential and that they are better off for it:
… you find this pattern across nearly all elite American Special Forces type units — (1) an exceedingly difficult bar to get in, followed by (2) incredibly loose, informal, collegial norms with nearly-infinitely less emphasis on hierarchy and bureaucracy compared to all other military units.
To even “try out” for a Special Forces group like Delta Force or the Navy SEAL Teams, you have to be among the most dedicated, most physically fit, and most competent of soldiers.
Then, the selection procedures are incredibly intense — only around 10% of those who attend selection actually make the cut.
This is, of course, exclusionary.
But then, seemingly paradoxically, these organizations run with far less hierarchy, formal authority, and traditional military decorum than the norm. They run… far more egalitarian than other traditional military units. [...]
Going back [...] [If we search out the root causes of “perpetual bickering” within many well-meaning volunteer organizations] we can find a few right away —
* When there’s low standards of trust among a team, people tend to advocate more strongly for their own preferences. There’s less confidence on an individual level that one’s own goals and preferences will be reached if not strongly advocated for.
* Ideas — especially new ideas — are notoriously difficult to evaluate. When there’s been no objective standard of performance set and achieved by people who are working on strategy and doctrine, you don’t know who has the ability to actually implement their ideas and see them through to conclusion.
* Generally at the idea phase, people are maximally excited and engaged. People are often unable to model themselves to know how they’ll perform when the enthusiasm wears off.
* In the absence of previously demonstrated competence, people might want to show they’re fit for a leadership role or key role in decisionmaking early, and might want to (perhaps subconsciously) demonstrate prowess at making good arguments, appearing smart and erudite, etc.
And of course, many more issues.
Once again, this is often resolved by hierarchy — X person is in charge. In the absence of everyone agreeing, we’ll do what X says to do. Because it’s better than the alternative.
But the tradeoffs of hierarchical organizations are well-known, and hierarchical leadership seems like a fit for some domains far moreso than others.
On the other end of the spectrum, it’s easy when being egalitarian to not actually have decisions get made and fail to have valuable work getting done. For all the flaws of hierarchical leadership, it does tend to resolve the “perpetual bickering” problem.
From both personal experience and a pretty deep immersion into the history of successful organizations, it looks like often an answer is an incredibly high bar to joining followed by largely decentralized, collaborative, egalitarian decisionmaking.
I think the case of limiting meetings because the room is only so big is too easy. What about limiting membership because you want only the best researchers in your org? (Or what if it’s a party or retreat for AI safety people—OK to limit membership to only the best researchers?) There’s a good reason for selecting based on competence, obviously. But now we are back to the problem I started with, which is that every Inner Ring probably presents itself (and thinks of itself) as excluding based on competence.
In *Thinking, Fast and Slow*, Daniel Kahneman describes an adversarial collaboration between himself and expertise researcher Gary Klein. They were originally on opposite sides of the “how much can we trust the intuitions of confident experts” question, but eventually came to agree that expert intuitions can essentially be trusted if & only if the domain has good feedback loops. So I guess that’s one possible heuristic for telling apart a group of sound craftsmen from a mutual admiration society?
Man, that’s a very important bit of info which I had heard before but which it helps to be reminded of again. The implications for my own line of work are disturbing!
There was an interesting discussion on Twitter the other day about how many AI researchers were inspired to work on AGI by AI safety arguments. Apparently they bought the “AGI is important and possible” part of the argument but not the “alignment is crazy difficult” part.
I do think the AI safety community has some unfortunate echo chamber qualities which end up filtering those people out of the discussion. This seems bad because (1) the arguments for caution might be stronger if they were developed by talking to the smartest skeptics and (2) it may be that alignment isn’t crazy difficult and the people filtered out have good ideas for tackling it.
If I had extra money, I might sponsor a prize for a “why we don’t need to worry about AI safety” essay contest to try & create an incentive to bridge the tribal gap. Could accomplish one or more of the following:
* Create more cross talk between people working in AGI and people thinking about how to make it safe
* Show that the best arguments for not needing to worry, as discovered by this essay contest, aren’t very good
* Get more mainstream AI people thinking about safety (and potentially realizing over the course of writing their essay that it needs to be prioritized)
* Get fresh sets of eyes on AI safety problems in a way that could generate new insights
Another point here is that from a cause prioritization perspective, there’s a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there’s not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree). So we should expect the set of arguments which have been published to be imbalanced. A contest could help address that.
Another point here is that from a cause prioritization perspective, there’s a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there’s not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree).
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm? The incentives in the other direction easily seem 10x stronger to me.
Lobbying for people to ignore the harm that your industry is causing is standard in basically any industry, and we have a plethora of evidence of organizations putting lots of optimization power into arguing for why their work is going to have no downsides. See the energy industry, tobacco industry, dairy industry, farmers in general, technological incumbents, the medical industry, the construction industry, the meat-production and meat-packaging industries, and really any big industry I can think of. Downplaying risks of your technology is just standard practice for any mature industry out there.
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm?
If they’re not considering that hypothesis, that means they’re not trying to think of arguments against it. Do we disagree?
I agree that if the government were seriously considering regulation of AI, the AI industry would probably lobby against it. But that’s not the same question. From a PR perspective, just ignoring critics often seems to be a good strategy.
Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, and the other does not.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
Yes, I didn’t say “they are not considering that hypothesis”; I am saying “they don’t want to consider that hypothesis”. Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments, and the other does not.
They don’t want to consider the hypothesis, and that’s why they’ll spend a bunch of time carefully considering it and trying to figure out why it is flawed?
In any case… Assuming the Twitter discussion is accurate, some people working on AGI have already thought about the “alignment is hard” position (since expositions of that position are how they came to work on AGI). But they don’t think the “alignment is hard” position is correct—it would be kinda dumb to work on AGI carelessly if you thought that position is correct. So it seems to be a matter of considering the position and deciding it is incorrect.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
That’s interesting, but it doesn’t seem that any of the arguments they’ve made have reached LW or the EA Forum—let me know if I’m wrong. Anyway I think my original point basically stands—from the perspective of EA cause prioritization, the incentives to dismantle/refute flawed arguments for prioritizing AI safety are pretty diffuse. (True for most EA causes—I’ve long maintained that people should be paid to argue for unincentivized positions.)
Thanks, this is helpful!
Relevant paper on the Kahneman–Klein adversarial collaboration.
I do like the idea of sponsoring a prize for such an essay contest. I’d contribute to the prize pool and help with the judging!
Which is essential and which accidental: the competence or the group?
A GoSC is a means to an end; the craftsmanship is the end.
When judging an outsider, an Inner Ring cares more about group membership than competence.