[Question] Training an RL Model with Continuous State & Action Spaces in a Real-World Scenario

Hello everyone,

I’m a Data Science student diving into an exciting thesis topic: using reinforcement learning to stabilize boats in rough seas by adjusting a keel’s angle. However, I’m somewhat concerned about the problem’s complexity, given the situation:

Action Space: Continuous, representing the keel’s angle adjustments.

State Space: Continuous, capturing the dynamic behavior of the sea, including waves.

Training Environment: Currently, the company only has a real-world water tank setup that physically reproduces sea conditions. There’s no computer simulation available.

Given this setup, I have a couple of concerns:

Is it possible to train an RL model effectively in such a complex real-world scenario without first having a computer simulation? If so, what would your initial steps be?

Are there ways to reduce the problem’s complexity while training exclusively in the real-world water tank? (e.g., transforming the continuous action space into a discrete one?)
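To make the discretization idea concrete, here’s a minimal sketch of what I have in mind. The angle bounds (±30°) and the number of bins are made-up assumptions for illustration; a discrete-action agent (e.g., DQN) would pick an index, which is mapped back to a continuous keel angle for the actuator:

```python
def make_discrete_actions(low, high, n_bins):
    """Return n_bins evenly spaced angles covering [low, high]."""
    step = (high - low) / (n_bins - 1)
    return [low + i * step for i in range(n_bins)]

# Hypothetical keel-angle bounds of +/-30 degrees, split into 7 settings.
ACTIONS = make_discrete_actions(-30.0, 30.0, 7)

def index_to_angle(action_index):
    """Map a discrete action index (0..n_bins-1) to a keel angle."""
    return ACTIONS[action_index]
```

A coarse grid like this keeps the policy small, and the resolution could later be refined (or replaced by a continuous-control method like SAC or TD3) once a basic policy works in the tank.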

Any insights or advice would be greatly appreciated!
