This is a horrible situation, where excessive knowledge of some bad action X could be evidence of being:
the kind of bad actor who does X or plans to do it, or
a person who was falsely accused of doing X, or
someone who spends a lot of effort on protecting themselves or others from X, or
a nerd who happens to be obsessed with X.
Taking them all together, we get a group that has the best knowledge of X, but which is dangerous to approach, because too many of its members are bad actors. (Also, the groups can overlap.)
Even worse, if you decide to avoid approaching the group and just study X yourself… you become one of them.
However, having zero knowledge about X makes you an easy victim. So what are we supposed to do?
I guess the standard solution is something like “try to learn about X without coming into too much contact with the teachers (learn from books, radio, TV, or the internet, or attend a public lecture), and keep your knowledge of X mostly private (do things that reduce the chance of X happening to you, maybe warn your friends about the most obvious mistakes, but do not give lectures yourself)”, or sometimes “find a legible excuse to learn about X (join the police force)”. Which, again, is something a bad actor would be happy to do, too.
They say that former criminals make the best cops. I believe it also works the other way round.
I guess Jordan Peterson would say that you cannot become stronger without simultaneously becoming a potential monster. The difference between good and bad actors is how much the “potential” remains potential. Weak (or ignorant) people are less of a threat, but also less of a help.
It would help if you had known the person for a long time, so you could see how the excessive knowledge of X manifests in their actual life. That is difficult in an online community.
Like I said, I don’t have a solution. At least, not one I’m confident of. I have other essays in the pipeline with (optimistically) pieces of one.
I don’t think it’s doomed. Most security experts a bank would reasonably hire are not bank robbers, you know? I assume that’s true, anyway; I’m not in that field, but somehow my bank account goes un-robbed.
Checking where wildly different spheres agree seems promising. The source of advice here that I trust the most is a social worker I had known for years who had never heard of the rationalist community; I went and asked them, rather than them telling me, unprompted or mid-argument, how things should work. Put another way, getting outside perspectives is helpful: if a romantic partner seems like they might be pressuring you, describe the situation to a friend and see what they say.
It’s part of why I spent a while studying other communities, looking to see if there was anything that, say, Toastmasters and the U.S. Marines and Burning Man and Worldcon all agreed on.
Most security experts a bank would reasonably hire are not bank robbers, you know? I assume that’s true anyway,
If you’re good at it, you can purchase the knowledge without giving them a position of power. Intelligence agencies buy zero-days from hackers on the black market. Foreign spies can be turned into double agents with money.
Most security experts a bank would reasonably hire are not bank robbers, you know?
Yes, it would be useful to know how exactly that happens.
I suspect that part of the answer is how formal employment and a long-term career change the cost-benefit balance. If you are not employed as a security expert and rob a bank, you have an X% chance of getting Y money, and a Z% chance of ending up in prison. If you get hired as a security expert, that increases X, but probably increases Z even more (you would be the obvious first suspect), and you probably get a nice salary, which somewhat reduces the temptation of an X% chance at Y. So even if you hire people who are tempted to rob a bank, you kinda offer them a better deal on average?
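The cost-benefit reasoning above can be made concrete with a toy expected-value calculation. All numbers below are made up for illustration; the point is only the structure of the comparison (employment raises the chance of getting caught more than it raises the chance of success, and adds a salary that prison would forfeit):

```python
# Toy expected-value sketch of the argument above (all numbers hypothetical).
# Outsider: some chance of a successful robbery, some chance of prison.
# Insider (hired security expert): better odds of pulling it off, but a much
# higher chance of being caught (obvious first suspect), plus a salary.

def expected_value(p_success, payoff, p_prison, prison_cost, salary=0.0):
    """Crude expected value of attempting a robbery: gains from success,
    losses from prison, plus whatever legitimate income you keep."""
    return p_success * payoff - p_prison * prison_cost + salary

outsider = expected_value(p_success=0.10, payoff=1_000_000,
                          p_prison=0.50, prison_cost=2_000_000)
insider = expected_value(p_success=0.30, payoff=1_000_000,
                         p_prison=0.90, prison_cost=2_000_000,
                         salary=150_000)

# With these made-up numbers, the insider's higher chance of being caught
# outweighs the better access, so employment tilts the deal toward honesty.
print(f"outsider EV: {outsider:,.0f}, insider EV: {insider:,.0f}")
```

Under these assumptions the insider’s expected value is even worse than the outsider’s, which matches the claim that hiring the tempted person still offers them a better deal than robbing you.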
Another part of the answer is distributing the responsibility, and letting the potential bad actors keep each other in check. You don’t have one person overseeing all security systems in the bank without any review. One guy places the cameras, another guy checks whether all locations are recorded. One guy knows a password to a sensitive system (preferably different people for different sensitive systems), another guy writes the code that logs all activities in the system. You pay auditors, external penetration testers, etc.
There is also reputation. If someone worked at several banks, and those banks didn’t get robbed, maybe it is safe to hire that person. (Or they are playing a long con. Then again, many criminals probably don’t have the patience for such long plans.) What about your first job? You probably get a role with less responsibility. And they probably check your background.
...also, banks do sometimes get robbed; they probably don’t always make it public news. So I guess there is no philosophically elegant solution to the problem, just a bunch of heuristics that together reduce the risk to an acceptable level (or rather, we get used to whatever the final level is).
So… yeah, it makes sense to learn the heuristics… and there will be obvious objections… and some of the heuristics will be expensive (in money and/or time).
I think the amount of cash a bank loses in a typical armed robbery really isn’t that large compared to the amounts of money the bank actually handles; bank robbers are a nuisance but not an existential threat to the bank.
The actual big danger to banks comes from insiders; as the saying goes, the best way to rob a bank is to own one.
Any opinion on whether this would be a somewhat good solution?
https://www.lesswrong.com/posts/Q3huo2PYxcDGJWR6q/how-to-corner-liars-a-miasma-clearing-protocol
Trying to summarize the method:
list all known facts
list all competing theories
make an M×N table and highlight the places where a fact contradicts a theory
require an explanation, under the current theory, for each such place
if a new theory is proposed, add a new column to the table and evaluate all cells in the new column
I guess this mostly avoids the failure mode where someone uses argument A to support their theory X, then, under the weight of evidence B, switches to theory Y (because B was incompatible with X but is compatible with Y), and you fail to notice that A is now incompatible with Y… because you vaguely remember that “we talked about A, and there was a good explanation for that”.
The admitted disadvantage is that it takes a lot of time.