Overall, I’m skeptical of the existence of magic bullets when it comes to abstraction. By that I mean I expect most problems to have multiple solutions, and those solutions to generalize only a little; I don’t mean that I expect problems to have zero solutions.
Not sure I follow. Do you mean that you expect there to not be a single nice framework for abstractions? Or that in most situations, there won’t be one clearly best abstraction?
(FWIW, I’m quite agnostic but hopeful on the existence of a good general framework. And I think in many cases, there are going to be lots of very reasonable abstractions, and which one is “best” depends a lot on what you want to use it for.)
Sure, commuting diagrams / non-leaky abstractions have nice properties and are unique points in the space of abstractions, but they don’t count as solutions to most problems of interest. Calling commuting diagrams “abstraction” and everything else “approximate abstraction” is, I think, the wrong move: abstractions are leaky almost as a rule, and all the problems that leakiness causes, all the complicated conceptual terrain it implies, should be central content of the study of abstraction. An AI safety result that only helps if your diagrams commute has, IMO, only a 20% chance of being useful and is probably missing 90% of the work needed to get there.
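For concreteness, here is the condition under discussion written out. The notation (abstraction map α, concrete dynamics f, abstract dynamics f̄, error bound ε) is mine rather than anything fixed in this thread, and the ε-version is just one natural way to state “leaky”:

```latex
% Exact (non-leaky) abstraction: the square commutes.
\alpha(f(x)) = \bar{f}(\alpha(x)) \qquad \text{for all concrete states } x
% Leaky (approximate) abstraction: commutation only up to some error \varepsilon.
d\bigl(\alpha(f(x)),\, \bar{f}(\alpha(x))\bigr) \le \varepsilon \qquad \text{for all } x
```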
I absolutely agree abstractions are almost always leaky; I expect ~every abstraction we actually end up using in practice (e.g. when analyzing neural networks) not to make things commute perfectly. I think I disagree on two points, though:
While I expect the leaky setting to add new difficulties compared to the exact one, I’m not sure how big they’ll be. I can imagine scenarios where those problems are the key issue (e.g. errors tend to accumulate and blow up in hard-to-deal-with ways; see the toy sketch below). But there are also lots of potential issues that already occur in the exact setting (e.g. efficiently finding consistent abstractions, or how to apply this framework to problems).
Even if most of the difficulty is in the leaky setting, I think it’s reasonable to start by studying the exact case, as long as insights are going to transfer. I.e., if the 10% of problems that already occur in the exact case also show up in a similar way in the leaky one, working on them for a bit isn’t a waste of time.
That being said, I do agree investigating the leaky case early on is important (if only to check whether that requires a complete overhaul of the framework, and what kinds of issues it introduces). I’ve already started working on that a bit, and for now I’m optimistic things will mostly transfer, but I suppose I’ll find out soon-ish.
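To make the exact-vs-leaky distinction and the error-accumulation worry concrete, here is a minimal numerical sketch. Everything in it (the linear concrete dynamics, the projection used as the abstraction map, and the pseudoinverse-based choice of abstract dynamics) is an illustrative assumption of mine, not something specified in this exchange:

```python
import numpy as np

# Toy sketch: a linear "concrete" system, a projection onto the first two
# coordinates as the abstraction map, and one simple (assumed) choice of
# abstract dynamics. None of these choices come from the thread itself.
rng = np.random.default_rng(0)

F = 0.4 * rng.normal(size=(6, 6))          # concrete dynamics: x -> F @ x
A = np.zeros((2, 6))                       # abstraction map alpha
A[0, 0] = A[1, 1] = 1.0                    # keep coordinates 0 and 1
F_bar = A @ F @ np.linalg.pinv(A)          # abstract dynamics (one possible choice)

def commutation_error(x):
    """||alpha(f(x)) - f_bar(alpha(x))||: zero at x iff the diagram commutes there."""
    return np.linalg.norm(A @ (F @ x) - F_bar @ (A @ x))

x0 = rng.normal(size=6)
print("one-step error:", commutation_error(x0))  # > 0 in general: the abstraction leaks

# How the leak can accumulate: abstract the rolled-out concrete trajectory vs.
# rolling out the abstract dynamics directly from the abstracted initial state.
for steps in (1, 3, 10):
    concrete = np.linalg.matrix_power(F, steps) @ x0
    abstract = np.linalg.matrix_power(F_bar, steps) @ (A @ x0)
    print(f"{steps:2d} steps:", np.linalg.norm(A @ concrete - abstract))
```

Whether that multi-step error stays manageable or blows up is exactly the kind of question the leaky case adds on top of the exact one.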
Not sure I follow. Do you mean that you expect there to not be a single nice framework for abstractions? Or that in most situations, there won’t be one clearly best abstraction?
Yeah, I was vague. I think once a framework for abstractions becomes specific enough that it starts recommending specific abstractions to you, the two things you mention become connected: there will be different acceptable specific frameworks because there are different acceptable abstractions.
I agree: once you have some specification of what you want to use the abstraction for (rather than just “be a good abstraction”), that goes a long way toward pinning down how to think about the world. But what we want is itself an abstraction that has multiple acceptable answers :P
Thanks for your thoughts!
Anyhow, best of luck :D