Tom Sterkenburg will critique Solomonoff induction at the UAI research meeting tomorrow (March 9th, 1:15 pm EST): https://uaiasi.com/2026/03/08/tom-sterkenburg-on-solomonoff-induction/
I think his argument, which questions the universality of Solomonoff induction and the justification it provides for Occam's razor, is sophisticated and worth taking very seriously (though I ultimately disagree with some of his conclusions). Hope to see some of you there.
Zoom link: https://uwaterloo.zoom.us/j/7921763961?pwd=TDatET6CBu47o4TxyNn9ccL2Ia8HN4.1
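For anyone who wants the one-line version of what is at stake (this is just the standard definition, not anything specific to Sterkenburg's talk): Solomonoff's universal prior assigns a sequence x the weight M(x) = \sum_{p : U(p) = x*} 2^{-|p|}, summing over programs p whose output on a universal prefix machine U begins with x. Shorter programs contribute exponentially more weight, which is the formal sense in which SI is said to encode Occam's razor, and the dependence on the choice of U is roughly where the universality and razor-justification questions come in.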
Highly recommended. This was one of the (many) things I learned to think more clearly about from interacting with Cole during the AI safety focus period we held in Sydney last year. The issue is pretty subtle and interesting, and I used to find Sterkenburg's position convincing.
For those of us who couldn't make it to Sydney last year: what was your previous position on Solomonoff induction, and how did it change as a result of talking with Cole?