As always, I’m inspired by Sahil’s vision here! I’ve sat with his “high actuation spaces” and “live theory” for a while, and there’s something really appealing about it. I’m eager to get a better sense of it, by which I mean I’m eager to hear the vibes get cashed out more concretely. But maybe that’s against the spirit of the project? For example: what exactly is “sensitivity”? Or am I being insensitive just to ask?
This section has some claims you might find useful, from among the writing that is published. Excerpted below:
Live theory in oversimplified claims:
Claims about AI.
Claim: Scaling in this era depends on replication of fixed structure (e.g., the code of this website, or the browser that hosts it).
Claim: For a short intervening period, AI will remain only mildly creative, but its cost, latency, and error rate will go down, causing wide adoption.
Claim: Mildly creative but very cheap and fast AI can be used to turn informal instruction (e.g., prompts, comments) into formal instruction (e.g., code).
Implication: With cheap and fast AI, scaling will not require universal fixed structure; structure can instead be created just-in-time, tailored to local needs, based on prompts that informally capture the spirit (see the sketch after these claims).
(I dub this implication “attentive infrastructure” or “teleattention tech”: it allows you to scale what is usually not considered scalable, namely attentivity.)
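To make the claims about AI concrete, here is a minimal sketch under assumptions of my own: `call_llm` stands in for whatever cheap, fast model API you like (it is stubbed with a canned completion so the sketch runs as-is), and `make_local_validator` and the email spec are hypothetical names I invented for illustration, not anything from the published writing.

```python
# Minimal sketch: informal instruction in, formal structure out, just-in-time.
# `call_llm` is a hypothetical stand-in for a cheap/fast model API; it is
# stubbed with a canned completion here so the example runs without one.

def call_llm(prompt: str) -> str:
    """Hypothetical model call, stubbed for illustration."""
    return (
        "def validate(record):\n"
        "    email = record.get('email')\n"
        "    return isinstance(email, str) and '@' in email\n"
    )

def make_local_validator(informal_spec: str):
    """Turn a natural-language spec into runnable, locally tailored structure."""
    source = call_llm(
        f"Write a Python function `validate(record)` that checks: {informal_spec}"
    )
    namespace: dict = {}
    exec(source, namespace)        # materialize the just-in-time structure
    return namespace["validate"]   # hand it back as an ordinary callable

# The "interface" is a prompt capturing the spirit, not a fixed schema.
validate = make_local_validator("each record carries a plausible email address")
print(validate({"email": "sahil@example.org"}))  # True
print(validate({"email": 42}))                   # False
```

The only point is the shape: what gets replicated is the prompt, and the formal structure is regenerated wherever it is needed.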
Claims about theorization.
Claim: General theories are about portability/scaling of insights.
Claim: Current machinery for generalizing theoretical insight operates primarily via parametrized formalisms (e.g., equations).
Claim: Parametrized formalisms are an example of replication of fixed structure.
Implication: AI will enable new forms of scaling insights that do not depend on finding common patterns. Instead, we will have attentive infrastructure that makes use of “postformal” theory prompts.
Implication: This will enable a new kind of generalization that creates just-in-time local structure tailored to local needs, rather than working through deep abstraction. It also enables a new kind of specialization, via “postformal substitution”, which can handle more subtlety than formal operations and thereby broadens what generalization can be (contrasted in the sketch just below).
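And a toy contrast for the theorization claims, with the same caveats: `call_llm` is again a hypothetical stub, and Hooke’s law plus the cantilever situation are illustrative choices of mine, not examples from the writing.

```python
# Toy contrast. Formal generalization replicates one parametrized structure;
# postformal generalization ships the insight as a prompt and generates
# structure per situation. `call_llm` is again a hypothetical stub.

def call_llm(prompt: str) -> str:
    """Hypothetical model call; returns a canned tailored model here."""
    return "force = lambda x, stiffness=2.0: -stiffness * x"

# Formal: a fixed equation, with values substituted into its parameters.
def hookes_law(k: float, x: float) -> float:
    return -k * x

# Postformal: the local context is "substituted" into a theory prompt,
# and a model suited to the situation is generated just-in-time.
def specialize(theory_prompt: str, local_context: str):
    source = call_llm(
        f"Theory: {theory_prompt}\nSituation: {local_context}\n"
        "Emit Python assigning a suitable model to `force`."
    )
    namespace: dict = {}
    exec(source, namespace)
    return namespace["force"]

force = specialize(
    "Restoring force grows with displacement from equilibrium.",
    "a stiff cantilever, displacements measured in millimetres",
)
print(force(3.0))  # -6.0 with the canned stub
```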
These claims leave out the relevance to risk. That’s the job of this paper: Substrate-Sensitive AI-risk Management. Let me know if these in combination lay it out more clearly!
I agree with you that there are a lot of interesting ideas here, but I would like to see the core arguments laid out more clearly.
The link to that paper is broken.