I have some experience designing systems for high reliability and resistance to adversaries. I feel like I’ve seen this kind of thinking before.
Your current line of thinking is at a stage I would call “pretheoretical noodling around.” I don’t mean any disrespect; all design has to go through this stage. But you’re not going to find any good references, or come to any conclusions, if you stay at this stage. A next step is to settle on a model of what you want to get done, and what capabilities the adversaries have. You need some bounds on the adversaries; otherwise nothing can work. And of course you need some bounds on what the system does, and how reliably. Once you’ve got this, you can either figure out how to do it, or prove that it can’t be done.
For example, there are ways of designing hardware that remains reliable under the assumption that at most N transistors are corrupt.
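A minimal software analogue of that hardware trick is modular redundancy with majority voting: run several independent copies of a component and vote on their outputs, so the system produces the correct result assuming a bounded number of copies are corrupt. This sketch (names are my own, not from any particular library) masks one faulty module out of three:

```python
from collections import Counter

def tmr(module_a, module_b, module_c, x):
    """Triple modular redundancy (TMR): run three independent copies of
    a component and take the majority of their outputs. The combined
    system gives the correct answer under the assumption that at most
    one of the three modules is corrupt."""
    outputs = [module_a(x), module_b(x), module_c(x)]
    # most_common(1) returns the value appearing most often; with at
    # most one corrupt module, the two honest copies always outvote it.
    return Counter(outputs).most_common(1)[0][0]

good = lambda x: x * 2
faulty = lambda x: -1  # a corrupted module returning garbage

print(tmr(good, good, faulty, 21))  # → 42
```

The same idea generalizes: masking up to N corrupt components requires 2N + 1 copies, since the honest majority must strictly outnumber the faulty minority.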
The problem of coming to agreement between a number of actors, some of whom are corrupt, is known as the Byzantine generals problem. It is well studied, and you may find it interesting.
I’m also interested in this topic, and I look forward to seeing where this line of thinking takes you.
A next step is to settle on a model of what you want to get done, and what capabilities the adversaries have.
Perhaps. The issue here is that I’m not so interested in any specific goal, but rather in facilitating emergent complexity. One analogy here is designing Conway’s Game of Life: I expect it wasn’t purely a process of “pick the rules you want, then see what results from those,” but also in part “pick the results you want, then see what rules lead to them.”
Re the Byzantine generals problem, see my reply to niplav below:
I believe (please correct me if I’m wrong) that work on Byzantine fault tolerance mostly considers cases where the nodes produce separate outputs—e.g. in the Byzantine generals problem, the “output” of each node is whether it attacks or retreats. But I’m interested in cases where the nodes need to end up producing a “synthesis” output—i.e. there’s a single output channel under joint control.
I think he already came to some conclusions and you already gave some good references (which support some of the conclusions).
Do those methods have names or address problems which have names (like the Byzantine generals problem)?