Well said. I think that research fleets will be a big thing going forward and you expressed why quite well.
I think there’s an extension we also need to make to some of the existing safety work, especially for control and related agendas: to some extent it is about aligning research fleets rather than individual agents.
I’ve been researching ways of aligning & setting up these sorts of systems for the last year, but I find myself very bottlenecked by not being able to communicate the theories that exist in related fields very well.
It is quite likely that RSI (recursive self-improvement) happens in lab automation and distributed labs before anything else. So the question becomes: how can we extend the techniques and theory we currently have to distributed systems of research agents?
There’s a bunch of fun and very interesting decentralised coordination schemes and technologies one can borrow from fields such as digital democracy and collective intelligence. It is just really hard to prune what will work and to figure out what the alignment proposals for these systems should be. Research systems are essentially a sub-class of Agent-Based Models (ABMs), and ABMs usually exhibit emergence, so often the best way to predict problems is to actually run experiments in those systems.
So how in the hell are we supposed to predict the problems without this? What are the experiments we need to run? What types of organisation & control systems should be recommended to governance people when it comes to research fleets?
I would be very excited to see experiments with ABMs where the agents model fleets of research agents and tools. I expect that in the near future we can build pipelines where the current fleet configuration (defined in something like the Terraform configuration language) automatically generates an ABM that is used for evaluation, control, and coordination experiments.
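Here’s a minimal sketch of the kind of config-to-ABM pipeline I have in mind, assuming a toy fleet spec (a plain dict standing in for an HCL/Terraform-style file) and a hand-rolled step loop rather than any particular ABM framework; names like `FLEET_CONFIG` and `ResearchAgent` are made up for illustration.

```python
"""Toy sketch: generate an agent-based model from a fleet configuration.

Assumptions: the fleet config is a plain dict standing in for an
HCL/Terraform-style definition, and the ABM is a hand-rolled step loop
rather than a specific framework.
"""
import random
from dataclasses import dataclass

# Hypothetical fleet configuration, mirroring what an HCL file might declare.
FLEET_CONFIG = {
    "agents": [
        {"role": "hypothesis_generator", "count": 4, "error_rate": 0.05},
        {"role": "experiment_runner", "count": 8, "error_rate": 0.10},
        {"role": "reviewer", "count": 2, "error_rate": 0.02},
    ],
    "steps": 50,
}


@dataclass
class ResearchAgent:
    role: str
    error_rate: float
    outputs: int = 0
    errors: int = 0

    def step(self, rng: random.Random) -> None:
        # Each tick the agent produces one artifact; with some probability
        # it is flawed, which is the kind of thing we want to see propagate
        # (or get caught) in the simulation.
        self.outputs += 1
        if rng.random() < self.error_rate:
            self.errors += 1


def build_abm(config: dict) -> list[ResearchAgent]:
    """Instantiate one agent per configured replica."""
    agents = []
    for spec in config["agents"]:
        for _ in range(spec["count"]):
            agents.append(ResearchAgent(spec["role"], spec["error_rate"]))
    return agents


def run(config: dict, seed: int = 0) -> dict[str, float]:
    """Run the model and report the fraction of flawed outputs per role."""
    rng = random.Random(seed)
    agents = build_abm(config)
    for _ in range(config["steps"]):
        for agent in agents:
            agent.step(rng)
    report = {}
    for role in {a.role for a in agents}:
        role_agents = [a for a in agents if a.role == role]
        total = sum(a.outputs for a in role_agents)
        flawed = sum(a.errors for a in role_agents)
        report[role] = flawed / total if total else 0.0
    return report


if __name__ == "__main__":
    print(run(FLEET_CONFIG))
```

The interesting work would obviously be in replacing the per-agent error model with the interaction topology declared in the fleet config, so that the emergent failure modes we actually care about show up in the simulation rather than only in production.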