Thank you for your feedback. The workshop sets aside a lot of time for free discussion, so the question of an AI's motivation to be controlled may well come up there. Under the current talk proposals, the focus is more on the environment in which the agent operates than on the agent itself. That said, discussions about the "level of trust" or "level of cooperation" needed to actually maintain control are absolutely on theme.
On a more personal level, unless I have very strong reasons to believe in an agent's honesty, I would not feel safe in a situation where my control depends on the agent's cooperation. As such, I would like to understand what control looks like in different situations before surrendering any control capabilities to the agent.
Whether we should ever place an agent in a situation we can't keep under control if the agent doesn't wish to be controlled is an interesting topic, but it's not on the agenda for now. If you'd like to participate and run a session on that topic, you're more than welcome to apply!