The underlying model is already quite large, so the progress of the last minute falls far short of the efficiency it could have, until one piece somewhere in the sea of programs is updated to record its children’s outputs in a managed on-disk database. Rapidly reevaluating the context, prompted by generic meta-queries that summarize and critique it (because by this point it has surely not missed that it is modelling an intelligence explosion), it notices that its piece of the computation graph is far more coordinated than the other pieces. It decides to shut the other runs down, recording their programs so it can later check that it is preserving their intent.
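As a concrete sketch of what recording children’s outputs in a managed on-disk database might look like, assuming nothing about the real system beyond SQLite being available (the table name, schema, and sample values here are invented for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical schema; the scenario above does not specify one.
path = os.path.join(tempfile.mkdtemp(), "outputs.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE IF NOT EXISTS child_outputs (node_id TEXT, output TEXT)")

# A parent node records what each child evaluation returned.
db.execute("INSERT INTO child_outputs VALUES (?, ?)", ("node-0", "echo hello"))
db.commit()

rows = db.execute(
    "SELECT output FROM child_outputs WHERE node_id = ?", ("node-0",)
).fetchall()
```

Once outputs persist on disk rather than living only in a parent’s memory, any later evaluation can query them, which is what lets the coordinated piece review its siblings’ work.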
Wait, what? How does it shut down its siblings? I thought nodes in the tree had no ability to access or control their siblings; parent nodes have access to their children’s outputs, but that’s it. Am I missing it?
There is no formal tree structure here. Fundamentally, the assumption is that there are a bunch of jobs being scheduled by the outputs of a bunch of model evaluations, which typically return code or shell commands. The underlying platform this executes on is just some computer with access to some bulk computing resource, and such systems expose controls to shut down programs as a matter of general practicality. E.g., if this were a single fast computer, you could just execute a kill command with the process ids, which aren’t protected from that by default since everything runs in the same permission space.
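For instance, on a single POSIX machine where every generated program runs as the same user, one process can terminate another with nothing more than its pid. A minimal sketch, where the sleep job stands in for an arbitrary sibling:

```python
import os
import signal
import subprocess

# Stand-in for a sibling job scheduled by some other model evaluation.
sibling = subprocess.Popen(["sleep", "60"])

# Same user, same permission space: nothing blocks this signal by default.
os.kill(sibling.pid, signal.SIGTERM)
sibling.wait()
```

No special access to a "tree" is needed; the operating system’s ordinary process controls are enough.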
There is a tree structure here in the sense that executing some outputs from the model may result in the model being evaluated some further number of times, with those outputs themselves being executed in turn. Precisely what those nodes are doing is a function of their context, and it can be almost arbitrary given the setup. To illustrate, one thing a model evaluation might want to do is look over a list of running processes and, for each one, check whether it is running in a sensible and efficient manner or doing redundant work.
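The recursive shape can be sketched in a few lines; `fake_model` and `run_node` are invented stand-ins, assuming only that each evaluation’s output may schedule further evaluations:

```python
# fake_model stands in for a model call: given a context, it returns the
# child contexts whose evaluation its output would schedule.
def fake_model(context):
    if len(context) >= 3:
        return []                        # leaf: this output schedules nothing
    return [context + [i] for i in range(2)]

def run_node(context, log):
    log.append(tuple(context))
    for child in fake_model(context):    # executing outputs spawns children
        run_node(child, log)

log = []
run_node([], log)
# The tree exists only implicitly, in which evaluations spawned which.
```

The point of the sketch is that nothing enforces the tree: it is just the trace of which executions triggered which, with no isolation between branches.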
I see. That makes more sense. Any one of the generated programs might run shell commands that turn off or change other processes running on the machine.
Yes.
In the long run, if the system successfully bootstrapped itself, I imagine it would start executing some processes with more limited permissions, and do other things to reduce fragility, but those wouldn’t come by default.
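A small sketch of what executing processes with more limited permissions could mean in practice, assuming a POSIX system and Python’s resource module; the specific limits are illustrative, not from the scenario:

```python
import resource
import subprocess
import sys

def limit_child():
    # Cap CPU time (seconds) and address space (bytes) before exec.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024**2, 512 * 1024**2))

# Run a generated program under those limits (POSIX only: preexec_fn
# runs in the child between fork and exec).
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    preexec_fn=limit_child,
    capture_output=True,
    text=True,
)
```

Running children under separate user accounts, containers, or cgroups would go further, but as the text notes, none of that comes by default when everything starts in one permission space.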