AI Safety Thursdays: When Good Rewards Go Bad—Reward Overoptimization in RLHF
Today’s Topic
Reinforcement learning from human feedback (RLHF) has become a popular way to align AI behavior with human preferences. But what happens when the system gets too good at optimizing the reward signal?
Evgenii Opryshko will explore how reward overoptimization can lead to unintended behaviors, why it happens, and what we can do about it. We'll look at examples, discuss open challenges, and consider what this means for aligning advanced AI systems.
Event Schedule
6:00 to 6:45 - Networking and Refreshments
6:45 to 8:00 - Main Presentation
8:00 to 9:00 - Breakout Discussions