Why assume Gaussian or sub-Gaussian error? I’d naively expect the error to concentrate in weird edge cases that end up pretty far from the intended utility function, and to grow as the intelligence can explore more of the space.
(worthwhile area to be considering, tho)
Thanks, yeah, Gaussian error is a strong assumption and usually not the default. But it’s intuitively a much more realistic target than ~no error, and we want to understand how much error we can tolerate.
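To make the concern above concrete, here is a minimal sketch (not from the thread) of why the error distribution matters under optimization: pick the best-looking option out of N according to a noisy proxy, then check how much intended utility was actually obtained. The unit-Gaussian "true" utility and the Student-t (df=2) stand-in for heavy-tailed error are illustrative assumptions, not anything the commenters specified.

```python
# Sketch: regressional-Goodhart-style selection under Gaussian vs heavy-tailed error.
# With light-tailed (Gaussian) error, the selected option still tends to have high
# true value; with heavy-tailed error, the argmax is increasingly dominated by noise.
import numpy as np

rng = np.random.default_rng(0)

def selected_true_value(n_options: int, noise: str) -> float:
    """True value of the option that maximizes proxy = true + error."""
    true = rng.normal(size=n_options)               # intended utility of each option
    if noise == "gaussian":
        err = rng.normal(size=n_options)            # light-tailed error
    else:
        err = rng.standard_t(df=2, size=n_options)  # heavy-tailed error (illustrative choice)
    proxy = true + err
    return true[np.argmax(proxy)]

for n in (10, 1_000, 100_000):
    g = np.mean([selected_true_value(n, "gaussian") for _ in range(200)])
    t = np.mean([selected_true_value(n, "heavy") for _ in range(200)])
    print(f"N={n:>6}: mean true value of selected option -> gaussian {g:.2f}, heavy-tailed {t:.2f}")
```

Under these assumptions, the Gaussian column keeps improving as N grows, while the heavy-tailed column stays near zero: the optimizer is mostly "finding" error rather than utility, which is the failure mode the question is pointing at and the reason the tolerable amount of error depends on the tail behavior assumed.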