I’m actually somewhat worried that this post is still a bit dangerous. My cruxes are (1) an assumption that, since _I_ hadn’t heard of the truck strategy, there’s still a lot of room for more people to hear about it, and (2) an assumption that if this post were to become reasonably widely linked (which it seems memetically fit enough to probably achieve), it has a nontrivial chance of getting noticed by the wrong person, both through direct links and through general Google search.
This does seem like the minimum-viable version of this post that gives any examples, though, and I’m not sure whether the previous incarnations (which, as far as I remember, had literally zero examples) were memetically fit enough to do their job anyhow. Shrug?
Yes, I worried about this myself for some time. Ultimately I decided that terrorist organizations already know about this method and it is being widely discussed in the media, so the number of potentially dangerous people who would hear about it here first is comparatively low. Further, this method is primarily suited towards indiscriminate attacks, which I am somewhat less worried about compared to alternatives.
Only if (a) terrorists tend to read what I consider to be fairly intellectual content, or (b) they google around for meta-strategies. I rate (a) as very unlikely, and (b) likewise, since, as this post shows, they can’t even be bothered to google around for good terrorism methods.
I think I’m mostly persuaded on this, although the direct-links issue could still be problematic if a sufficiently funny/clickbaity version of this post got shared around a bunch, e.g. the way Scott Alexander articles sometimes go mainstream. (I think that’s not super likely to happen to this post, which is part of the reason I’ve dialed down my concern.)
If this doubled such attacks, it would not be a feather on the scales, and you know that.
(rambly thoughts about my interior thought process incoming)
So, I don’t endorse the actual algorithm I was running here (i.e., “notice dangerous information --> speak out about dangerous information,” rather than “make even a crude attempt to reflect on the overall stakes,” which I do think I should do more often).
I think the algorithm Davis followed was basically correct (as I understand it: “start writing a post on dangerous info --> reflect on the overall risk of using a particular example / check for less dangerous examples --> publish the article with less dangerous examples and/or decide the risk is acceptable”).
It’s particularly salient to me that Ziz is correct to call me out here, because I had recently noticed an inconsistency in myself: if I saw someone make a dangerous-seeming decision, and they had already double-checked their reasoning, and then triple-checked their reasoning by seeking out someone with different priors… I would probably demand that they quadruple-check their reasoning.
Which is maybe fine, except that if they had only double-checked their math… I’m aware that I’d be satisfied if I demanded that they triple-check it. And if they had quadruple-checked it, I’d probably demand that they check it a fifth time.
I lean towards “it’s better to have this algorithm than not to have it, to make sure people are double-checking their dangerous decisions at all, but it’s definitely better to actually have a principled take on how much danger is reasonable.”
And this post was the first instance of me running into this behavior pattern since reflecting on it.
That all said...
In this particular post, which is literally about being careful with information hazards, which includes a potential information hazard… it seems sort of amiss to not at least address where to draw the line?
I think you are very unusual in not having heard of the truck strategy; in particular, anybody interested in terrorism is especially likely to have heard of it.
Fair. (I’d want a bit more evidence than one person’s say-so, but I’ve also already walked back my overall position a bit from the original point)