Practically, mode collapse seems like a bad thing by itself only if (a) the underlying reality shifts, or (b) your beliefs were incorrect in the first place. An example of (a): after your mode collapse, animal welfare grows to 80% of the proposals. An example of (b): an image model that “didn’t know” that diversity of outputs is itself a value.
(b) doesn’t seem as bad for humans: if we investigate our beliefs and find that some previously held convictions were wrong, we can trace back the decisions they informed and break out of the harmful mode collapse.
(a) is worse, because mode collapse deprives us of the very signal about the distribution shift, making it hard to detect that it happened at all.
The good news is that solving (a) doesn’t require taking random walks periodically to balance the exploration / exploitation dilemma. I would wager that in most situations, taking explicit action to check for distribution shift is cheaper and more efficient. Coming back to the grantmaker example: periodically checking the true market distribution of grant proposals between global welfare and animal welfare is presumably cheaper than randomly trying out hiring people who are really good at evaluating animal welfare.
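The “explicit check” above can be sketched concretely. Here is a minimal example, assuming hypothetical numbers for the grantmaker case (a collapsed prior of 20% animal-welfare proposals, a fresh sample where the true rate has shifted to 80%): a simple binomial test on a periodic sample of incoming proposals, which is far cheaper than randomized hiring.

```python
import math

def shift_detected(sample, prior_frac, z=2.58):
    """Flag a distribution shift if the observed fraction of a category
    in a fresh sample deviates from the collapsed prior by more than z
    standard errors (normal approximation to the binomial, z=2.58 ~ 99%)."""
    n = len(sample)
    observed = sum(sample) / n  # sample entries: 1 = animal-welfare proposal, 0 = other
    se = math.sqrt(prior_frac * (1 - prior_frac) / n)
    return abs(observed - prior_frac) > z * se

# Hypothetical illustration: the collapsed belief says 20% of proposals
# are animal welfare, but in a fresh sample of 200 proposals, 160 are.
fresh_sample = [1] * 160 + [0] * 40
print(shift_detected(fresh_sample, prior_frac=0.2))  # prints: True
```

The point of the sketch is only that the check is cheap: one periodic sample and a constant-time test, versus paying the ongoing cost of exploration by random action.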
PS: This ignores the effects that your mode collapse has on the market of grant proposals itself, which is unrealistic. That is why I started with “Practically, mode collapse seems like a bad thing by itself”.